TY - GEN
T1 - Landmark Localization for Drone Aerial Mapping Using GPS and Sparse Point Cloud for Photogrammetry Pipeline Automation
AU - Ryu, Byeong Yeon
AU - Park, Won Nyoung
AU - Jung, Donghwi
AU - Kim, Seong Woo
N1 - Funding Information:
B. Ryu is with Columbia University, New York, USA (b.ryu@columbia.edu). W. Park is with Angelswing Inc., Seoul, South Korea (peter@angelswing.io). D. Jung and S. Kim are with the Smart City major in the Department of Civil and Environmental Engineering, and the Integrated Major in Smart City Global Convergence, Seoul National University, South Korea (donghwi-jung@snu.ac.kr, snwoo@snu.ac.kr). Research conducted at Angelswing Inc. in coordination with SNU ARIL Labs, and supported in part by the Korean Ministry of Land, Infrastructure and Transport through the Innovative Talent Education Program for Smart City.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Ground Control Point (GCP) rectification plays an important role in ensuring the absolute and relative accuracy of drone-photogrammetry-generated data, yet the process of identifying and marking GCPs is still handled manually, hindering the scalability of the source-photo processing pipeline. In this paper, we propose a method to accurately detect and automatically mark GCPs from aerial images using deep learning and a photogrammetry-generated sparse point cloud to expedite the source-photo processing pipeline. Using state-of-the-art object detection and image classification models, RetinaNet and Inception-ResNet-V2, we first accurately detect Ground Control Points in the collected source photos. The detected targets are then filtered and labeled by backward-projecting the detected image x-y coordinates onto the 3D sparse point cloud and comparing the resulting 3D coordinates with the surveyed GCPs. The GPS matching process on the sparse point cloud achieves sub-centimeter accuracy errors relative to traditional human rectification while exceeding the performance of other commercially available GCP detection methods.
AB - Ground Control Point (GCP) rectification plays an important role in ensuring the absolute and relative accuracy of drone-photogrammetry-generated data, yet the process of identifying and marking GCPs is still handled manually, hindering the scalability of the source-photo processing pipeline. In this paper, we propose a method to accurately detect and automatically mark GCPs from aerial images using deep learning and a photogrammetry-generated sparse point cloud to expedite the source-photo processing pipeline. Using state-of-the-art object detection and image classification models, RetinaNet and Inception-ResNet-V2, we first accurately detect Ground Control Points in the collected source photos. The detected targets are then filtered and labeled by backward-projecting the detected image x-y coordinates onto the 3D sparse point cloud and comparing the resulting 3D coordinates with the surveyed GCPs. The GPS matching process on the sparse point cloud achieves sub-centimeter accuracy errors relative to traditional human rectification while exceeding the performance of other commercially available GCP detection methods.
UR - http://www.scopus.com/inward/record.url?scp=85128825141&partnerID=8YFLogxK
U2 - 10.1109/ICEIC54506.2022.9748847
DO - 10.1109/ICEIC54506.2022.9748847
M3 - Conference contribution
AN - SCOPUS:85128825141
T3 - 2022 International Conference on Electronics, Information, and Communication, ICEIC 2022
BT - 2022 International Conference on Electronics, Information, and Communication, ICEIC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 International Conference on Electronics, Information, and Communication, ICEIC 2022
Y2 - 6 February 2022 through 9 February 2022
ER -