Helipad Detection for UAV based on YOLOv4 Transfer Learning Model
1. Dynamics and Control Laboratory, Department of Industrial and Mechanical Engineering, Institute of Technology of Cambodia, Russian Federation Blvd., P.O. Box 86, Phnom Penh, Cambodia.
Received: July 19, 2021 / Accepted: November 19, 2021 / Published: December 30, 2021
Humans have a fast and accurate visual system that allows them to perform complex tasks, such as driving, with little conscious effort. Without a visual system, a UAV cannot perform comparably complex tasks. When UAVs are equipped with a visual system and trained with a fast and accurate model, they can carry out even more demanding tasks such as autonomous landing. Computer vision is a suitable technique for a UAV visual system. In this paper, we consider a computer vision technique that uses a deep learning model to recognize the landing site (Helipad). We conducted an experiment in which a deep learning model was trained to recognize the Helipad. To land on the desired site safely, we propose a detection method based on a YOLOv4-tiny transfer learning model that detects the Helipad in real time. Digital images were used as training data so that the model could learn a high-level representation of the object present in an image. Data collection was limited to images gathered from the internet and video snapshots. An annotation tool was used to draw ground-truth boxes for 184 training samples and 57 testing samples with one class. The YOLOv4-tiny model was trained on the Darknet framework using YOLOv4-tiny pre-trained weights and the described input data. After training was completed with GPU acceleration, the best weights were saved for use in OpenCV's Deep Neural Network (DNN) module. The model was first validated on testing images, then tested on videos, and finally on a real-time streaming video to investigate its performance. We used Intersection over Union (IoU), precision, recall, miss rate, and mean Average Precision (mAP) as evaluation metrics, together with loss-function visualization, to analyze the model's performance. During real-time streaming, we also measured frames per second (FPS) and inference time.
Finally, the experimental results show that the detection method can accurately detect the Helipad in real-time video.
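The evaluation metrics named above (IoU, precision, recall, miss rate) all derive from box overlap between ground truth and predictions. A minimal sketch of the IoU computation, assuming axis-aligned boxes in (x1, y1, x2, y2) format; the names `iou`, `gt_box`, and `pred_box` are illustrative and not from the paper:

```python
# Minimal IoU sketch for axis-aligned boxes given as (x1, y1, x2, y2).
# Names (iou, gt_box, pred_box) are illustrative, not from the paper.
def iou(box_a, box_b):
    # Coordinates of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection is commonly counted as a true positive when IoU >= 0.5;
# then precision = TP / (TP + FP), recall = TP / (TP + FN),
# and miss rate = 1 - recall.
gt_box = (100, 100, 200, 200)    # ground-truth Helipad box
pred_box = (150, 150, 250, 250)  # predicted box
print(round(iou(gt_box, pred_box), 4))  # 2500 / 17500 ≈ 0.1429
```

Precision, recall, and mAP then follow by sweeping a confidence threshold over all detections scored this way.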
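The real-time metrics (FPS and inference time) can be measured by timing the detector over a batch of frames. A hedged sketch under the assumption that inference is a plain callable; `measure_fps` and `fake_infer` are stand-in names, not the paper's implementation:

```python
import time

# Illustrative FPS / inference-time measurement; measure_fps and
# fake_infer are stand-in names, not part of the paper's code.
def measure_fps(infer, frames):
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start
    per_frame = elapsed / len(frames)  # mean inference time (seconds)
    return per_frame, 1.0 / per_frame  # (seconds/frame, frames/second)

# Stand-in detector that just sleeps 5 ms per frame.
def fake_infer(frame):
    time.sleep(0.005)
    return []

secs, fps = measure_fps(fake_infer, list(range(20)))
print(f"inference time: {secs * 1000:.1f} ms/frame, {fps:.1f} FPS")
```

In practice `infer` would wrap the OpenCV DNN forward pass on the trained YOLOv4-tiny weights, and a few warm-up frames are usually discarded before timing.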