Text Image Reconstruction and Reparation for Khmer Historical Document
1. Department of Information and Communication Engineering, Institute of Technology of Cambodia, Russian Federation Blvd., P.O. Box 86, Phnom Penh, Cambodia
Received: August 07, 2023 / Revised: / Accepted: September 05, 2023 / Available online: June 30, 2024
This research focuses on preserving Cambodia's historical Khmer palm-leaf manuscripts by proposing a text-image reconstruction and reparation framework based on advanced computer vision and deep learning techniques. To address this preservation challenge, Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) are employed to fill in the missing patterns of characters in damaged images. The study uses the SleukRith Set [1], which consists of 91,600 images split into 90,600 training images and 1,000 test images, each containing a single character of the Khmer palm-leaf script. The training images are intentionally degraded into three different variants, each subjected to three levels of degradation (levels 1, 2, and 3). To assess and compare the effectiveness of the CNN and GAN models within the proposed framework, three evaluation metrics were employed: Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM). Across all three metrics, the GAN model consistently outperformed the CNN model, achieving lower MSE, higher PSNR, and higher SSIM, indicating its superior performance in reconstructing the images and preserving the original text.
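To make the comparison concrete, the three metrics above can be sketched in plain NumPy. This is a minimal illustrative implementation, not the authors' evaluation code: the SSIM shown here is a simplified single-window (global) variant, whereas the standard metric averages a sliding Gaussian window over the image; the function names and the `max_val=255` assumption for 8-bit grayscale images are our own.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean Square Error between two same-shaped images (lower is better)."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB (higher is better)."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / err))

def ssim_global(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Simplified global SSIM in [-1, 1] (higher is better).

    Computes the SSIM formula once over the whole image instead of
    averaging over local windows, so it only approximates standard SSIM.
    """
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = float(np.mean((a - mu_a) * (b - mu_b)))
    return float(
        ((2 * mu_a * mu_b + c1) * (2 * cov + c2))
        / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
    )
```

Under this setup, a reconstructed character image would be scored against its undamaged original; a model with lower MSE, higher PSNR, and SSIM closer to 1 reproduces the original glyph more faithfully.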