Helipad Detection for UAV based on YOLOv4 Transfer Learning Model
1. Dynamics and Control Laboratory, Department of Industrial and Mechanical Engineering, Institute of Technology of Cambodia, Russian Federation Blvd., P.O. Box 86, Phnom Penh, Cambodia.
Received: July 19, 2021 / Accepted: November 19, 2021 / Published: December 30, 2021
Humans have a fast and accurate visual system that allows them to perform complex tasks, such as driving, with little conscious effort. Without a comparable visual system, a UAV cannot perform such complex tasks. When UAVs are equipped with a visual system driven by a fast and accurate model, they can carry out even more complex tasks, such as autonomous landing. Computer vision is a suitable technique for a UAV visual system. In this paper, we consider a computer vision technique that uses a deep learning model to recognize the landing site (Helipad). We conducted an experiment in which a deep learning model was trained to recognize the Helipad. To land on the desired site safely, we propose a detection method based on a YOLOv4-tiny transfer learning model that detects the Helipad in real time. Digital images were used as training data so that the model could learn a high-level representation of the object present in an image. Data collection for training was limited to images gathered from the internet and video snapshots. An annotation tool was used to draw ground-truth boxes for 184 training samples and 57 testing samples with a single class. The YOLOv4-tiny model was trained on the Darknet framework using the YOLOv4-tiny pre-trained weights and the input data described above. After training with GPU acceleration was complete, the best weights were saved for use with OpenCV's Deep Neural Network (DNN) module. The model was first validated on the testing images, then tested on recorded videos, and finally on a real-time streaming video to investigate its performance. We used Intersection over Union (IoU), precision, recall, miss rate, and mean Average Precision (mAP) as evaluation metrics, together with loss-function visualization, to analyze the model's performance. During real-time streaming, we also measured frames per second (FPS) and inference time.
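The evaluation metrics named above can be sketched in plain Python. This is a minimal illustration, not the paper's evaluation code: the corner-format boxes `(x1, y1, x2, y2)` and the common 0.5 IoU matching threshold mentioned in the docstring are assumptions.

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def detection_metrics(tp, fp, fn):
    """Precision, recall, and miss rate from match counts.

    A detection is typically counted as a true positive when its IoU
    with a ground-truth box exceeds a threshold (commonly 0.5).
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall, 1.0 - recall  # miss rate = 1 - recall
```

With a single class, mAP reduces to the average precision of that class, computed from the precision-recall pairs as the detection confidence threshold varies.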
Finally, the experimental results show that the detection method can accurately detect the Helipad in real-time video.
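The real-time figures (FPS and inference time) can be obtained with a simple timing harness around any per-frame detector. This is a generic sketch, not the paper's code; the `detect` callable is a hypothetical stand-in for the actual OpenCV DNN forward pass.

```python
import time

def time_detector(detect, frames):
    """Return (average FPS, mean inference time in ms) over frames.

    `detect` is any per-frame inference callable, e.g. a wrapper
    around an OpenCV DNN forward pass on one video frame.
    """
    elapsed = []
    for frame in frames:
        t0 = time.perf_counter()
        detect(frame)
        elapsed.append(time.perf_counter() - t0)
    mean_s = sum(elapsed) / len(elapsed)
    return 1.0 / mean_s, mean_s * 1000.0
```

In a streaming setup, the same measurement is usually taken over a rolling window of recent frames so the displayed FPS tracks current load rather than the whole run.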