Published: December 31,2024Text Image Reconstruction and Reparation for Khmer Historical Document
1. Department of Information and Communication Engineering, Institute of Technology of Cambodia, Russian Federation Blvd., P.O. Box 86, Phnom Penh, Cambodia
Received: August 07, 2023 / Accepted: September 05, 2023 / Published: June 30, 2024
This research focuses on preserving Cambodia's historical Khmer palm leaf manuscripts by proposing a text-image reconstruction and reparation framework based on computer vision and deep learning techniques. To address this preservation challenge, Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN) are employed to fill in the missing patterns of characters in damaged images. The study uses the SleukRith Set [1], which consists of 91,600 images split into 90,600 training images and 1,000 test images, each containing a single character of the Khmer palm leaf script. The training images are intentionally degraded into three variants, each subjected to three levels of degradation (level 1, level 2, and level 3). To assess and compare the performance of the CNN and GAN models within the proposed framework, three evaluation metrics were employed: Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM). Across all three metrics, the GAN model consistently outperformed the CNN model, achieving lower MSE, higher PSNR, and higher SSIM values, indicating its superior performance in image reconstruction and preservation of the original text.
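As a concrete illustration of the evaluation step, the sketch below computes the three reported metrics for a reconstructed character image against its original. This is a minimal sketch, not the authors' code: it assumes grayscale images stored as NumPy arrays scaled to [0, 1], uses the structural_similarity implementation from scikit-image, and substitutes random arrays for real test images.

```python
# Illustrative sketch only: computing MSE, PSNR, and SSIM for a reconstructed
# character image against its original. Assumes grayscale float images in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity as ssim


def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean Square Error between two images of the same shape (lower is better)."""
    return float(np.mean((original - reconstructed) ** 2))


def psnr(original: np.ndarray, reconstructed: np.ndarray, data_range: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in decibels (higher is better)."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / err)


# Dummy data standing in for a Khmer character image and its reconstruction;
# real inputs would be the test images and the CNN/GAN outputs.
rng = np.random.default_rng(0)
original = rng.random((64, 64))
reconstructed = np.clip(original + rng.normal(0.0, 0.05, (64, 64)), 0.0, 1.0)

print("MSE :", mse(original, reconstructed))
print("PSNR:", psnr(original, reconstructed))
print("SSIM:", ssim(original, reconstructed, data_range=1.0))
```

Comparing models with these three metrics together, as the abstract describes, guards against a method that merely minimizes pixel error (low MSE) while losing the structural detail of the characters, which SSIM is designed to capture.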