Published: June 30,2024Text Image Reconstruction and Reparation for Khmer Historical Document
1. Department of Information and Communication Engineering, Institute of Technology of Cambodia, Russian Federation Blvd., P.O. Box 86, Phnom Penh, Cambodia
Received: August 07, 2023 / Revised: / Accepted: September 05, 2023 / Published: June 30, 2024
This research focuses on preserving Cambodia's historical Khmer palm leaf manuscripts by proposing a text-image reconstruction and reparation framework based on computer vision and deep learning techniques. Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN) are employed to fill in missing character patterns in the damaged images. The study uses the SleukRith Set [1], which consists of 91,600 images split into 90,600 training images and 1,000 test images, each containing a single character of the Khmer palm leaf script. The training images are intentionally degraded into three variants, each subjected to three levels of degradation (level 1, level 2, and level 3). To assess and compare the performance of the CNN and GAN models within the proposed framework, three evaluation metrics were employed: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM). Across all three metrics, the GAN model consistently outperformed the CNN model, achieving lower MSE, higher PSNR, and higher SSIM, indicating superior performance in reconstructing the images and preserving the original text.
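As an illustration of how these three metrics can be computed for a reconstructed character image, the sketch below compares a model output against its clean original using scikit-image. This is a minimal example under assumed conditions (grayscale uint8 images and scikit-image's metric implementations); it is not the authors' evaluation code, and the function names and data are hypothetical.

```python
# Minimal sketch of the evaluation step described in the abstract.
# Assumptions: grayscale uint8 images; scikit-image provides the metrics.
import numpy as np
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)

def evaluate_reconstruction(original: np.ndarray, reconstructed: np.ndarray) -> dict:
    """Compare a reconstructed character image against its clean original.

    Lower MSE and higher PSNR/SSIM indicate a better reconstruction.
    """
    return {
        "mse": mean_squared_error(original, reconstructed),
        "psnr": peak_signal_noise_ratio(original, reconstructed, data_range=255),
        "ssim": structural_similarity(original, reconstructed, data_range=255),
    }

if __name__ == "__main__":
    # Synthetic stand-ins for a clean Khmer character image and a model output.
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    noisy = np.clip(
        clean.astype(int) + rng.integers(-10, 10, size=(64, 64)), 0, 255
    ).astype(np.uint8)
    print(evaluate_reconstruction(clean, noisy))
```

In a setup like this, the same function would be applied to every test image for both the CNN and GAN outputs, and the per-image scores averaged to produce the model-level comparison reported in the abstract.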