Helipad Detection for UAV based on YOLOv4 Transfer Learning Model
1. Dynamics and Control Laboratory, Department of Industrial and Mechanical Engineering, Institute of Technology of Cambodia, Russian Federation Blvd., P.O. Box 86, Phnom Penh, Cambodia.
Academic Editor:
Received: July 19, 2021 / Revised: / Accepted: November 19, 2021 / Available online: December 30, 2021
Humans have a fast and accurate visual system that allows them to perform complex tasks, such as driving, with little conscious effort. Without a visual system, a UAV cannot perform such complex tasks. When UAVs are equipped with a visual system driven by a fast and accurate model, they can carry out even more complex tasks such as autonomous landing. Computer vision is a suitable technique for a UAV visual system. In this paper, we consider a computer vision technique that uses a deep learning model to recognize the landing site (Helipad). We conducted an experiment in which a deep learning model was trained to recognize the Helipad. In order to land on the desired site safely, we propose a detection method based on a YOLOv4-tiny transfer learning model to detect the Helipad in real time. Digital images were used as training data so that the model could learn a high-level representation for recognizing objects in an image. Data collection was limited to images gathered from the internet and video snapshots. An annotation tool was used to draw ground-truth boxes for 184 training samples and 57 testing samples with a single class. The YOLOv4-tiny model was trained on the darknet framework using the YOLOv4-tiny pre-trained weights and the described input data. After training was completed with GPU acceleration, the best weights were saved for use with OpenCV's Deep Neural Network (DNN) module. The model was first validated on the testing images, then tested on videos, and finally on a real-time streaming video to investigate its performance. We used Intersection over Union (IoU), precision, recall, miss rate, and mean Average Precision (mAP) as evaluation metrics, together with loss-function visualization, to analyze the model's performance. During real-time streaming, we also investigated frames per second (FPS) and inference time. Finally, the experimental results show that the detection method can accurately detect the Helipad in real-time video.
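To illustrate the deployment step described in the abstract, the sketch below loads trained YOLOv4-tiny weights into OpenCV's DNN module and times detection on a video stream. The file names (yolov4-tiny-helipad.cfg, yolov4-tiny-helipad_best.weights), the 416x416 input size, the camera source, and the confidence/NMS thresholds are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch: run a trained YOLOv4-tiny detector with OpenCV's DNN module
# and measure inference time / FPS on a video stream. File names, input size,
# and thresholds are illustrative assumptions, not the paper's settings.
import time
import cv2

CFG_PATH = "yolov4-tiny-helipad.cfg"               # hypothetical darknet config
WEIGHTS_PATH = "yolov4-tiny-helipad_best.weights"  # hypothetical best weights
CLASS_NAMES = ["helipad"]                          # single class, as in the paper

# Load the darknet network and wrap it in a high-level detection model.
net = cv2.dnn.readNetFromDarknet(CFG_PATH, WEIGHTS_PATH)
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # only if OpenCV is built with CUDA
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture(0)  # 0 = default camera; replace with a video file path if needed
while True:
    ok, frame = cap.read()
    if not ok:
        break

    start = time.time()
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    inference_time = time.time() - start
    fps = 1.0 / inference_time if inference_time > 0 else 0.0

    # Draw each detection and overlay the timing information.
    for class_id, score, box in zip(class_ids, scores, boxes):
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        label = f"{CLASS_NAMES[int(class_id)]}: {float(score):.2f}"
        cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.putText(frame, f"{fps:.1f} FPS ({inference_time * 1000:.1f} ms)",
                (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

    cv2.imshow("Helipad detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```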
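The evaluation metrics named in the abstract can be computed from predicted and ground-truth boxes. The following is a minimal sketch of IoU and of precision, recall, and miss rate obtained by greedy matching at an IoU threshold; the (x, y, w, h) box format and the 0.5 threshold are assumptions for illustration and may differ from the paper's evaluation protocol.

```python
# Sketch of the evaluation metrics: IoU between two boxes, then precision,
# recall, and miss rate from true/false positive counts at an IoU threshold.

def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    inter_w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter_area = inter_w * inter_h
    union_area = aw * ah + bw * bh - inter_area
    return inter_area / union_area if union_area > 0 else 0.0


def detection_metrics(predictions, ground_truths, iou_threshold=0.5):
    """Greedily match predicted boxes to ground-truth boxes for the single
    'helipad' class and return (precision, recall, miss_rate)."""
    matched = set()
    tp = 0
    for pred in predictions:
        best_iou, best_gt = 0.0, None
        for i, gt in enumerate(ground_truths):
            if i in matched:
                continue
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_iou, best_gt = overlap, i
        if best_iou >= iou_threshold:
            tp += 1
            matched.add(best_gt)
    fp = len(predictions) - tp     # unmatched predictions
    fn = len(ground_truths) - tp   # undetected helipads
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    miss_rate = 1.0 - recall
    return precision, recall, miss_rate
```

Averaging precision over recall levels (per the standard mAP definition) reduces to the Average Precision of the single class here, which darknet reports directly when training with the -map flag.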