Text Image Reconstruction and Reparation for Khmer Historical Document
1. Department of Information and Communication Engineering, Institute of Technology of Cambodia, Russian Federation Blvd., P.O. Box 86, Phnom Penh, Cambodia
Academic Editor:
Received: August 07, 2023 / Revised: / Accepted: September 05, 2023 / Available online: June 30, 2024
This research focuses on preserving Cambodia's historical Khmer palm leaf manuscripts by proposing a text-image reconstruction and reparation framework based on computer vision and deep learning. A Convolutional Neural Network (CNN) and a Generative Adversarial Network (GAN) are employed to fill in the missing character patterns in damaged images. The study uses the SleukRith Set [1], which consists of 91,600 images split into 90,600 training images and 1,000 test images, each containing a single character of the Khmer palm leaf script. The training images are intentionally degraded into three different variants, each subjected to three levels of degradation (levels 1, 2, and 3). To compare the effectiveness of the CNN and GAN models within the proposed framework, three evaluation metrics were employed: Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM). Across all three metrics, the GAN model consistently outperformed the CNN model, achieving lower MSE and higher PSNR and SSIM values, indicating superior performance in image reconstruction and better preservation of the original text.
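To make the evaluation concrete, the three metrics named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' evaluation code: `max_val=255` assumes 8-bit grayscale images, and the SSIM shown is the simplified global (single-window) form rather than the usual 11x11 Gaussian-windowed formulation.

```python
import numpy as np

def mse(a, b):
    """Mean Square Error between two images; lower is better."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    return np.mean((a - b) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher is better."""
    m = mse(a, b)
    if m == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / m)

def ssim_global(a, b, max_val=255.0):
    """Global SSIM over the whole image; 1.0 means identical.

    Simplified single-window variant of the Structural Similarity Index,
    using the standard stabilizing constants c1 and c2.
    """
    a, b = a.astype(np.float64), b.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

A reconstruction that matches the ground-truth character image exactly yields MSE 0, infinite PSNR, and SSIM 1.0; in the study's setting, the GAN's outputs would score closer to those ideals than the CNN's on the 1,000 test characters.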
