TY - GEN
T1 - Pose Correction for Highly Accurate Visual Localization in Large-scale Indoor Spaces
AU - Hyeon, Janghun
AU - Kim, Joohyung
AU - Doh, Nakju
N1 - Funding Information:
Acknowledgement. This research was supported by the Technology Innovation Program (10073166) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea).
Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Indoor visual localization is essential for applications such as autonomous robots, augmented reality, and mixed reality. Recent advances in visual localization have demonstrated feasibility in large-scale indoor spaces through coarse-to-fine methods, which typically comprise three steps: image retrieval, pose estimation, and pose selection. However, further research is needed to improve the accuracy of large-scale indoor visual localization. We show that the limitations of previous methods can be attributed to the sparsity of image positions in the database, which causes view differences between a query image and a retrieved database image. In this paper, to address this problem, we propose a novel module, named pose correction, that enables re-estimation of the pose with local feature matching in a similar view by reorganizing the local features. This module enhances the accuracy of the initially estimated pose and assigns more reliable ranks. Furthermore, the proposed method achieves a new state of the art, surpassing 90% accuracy within 1.0 m on the challenging InLoc indoor benchmark for the first time.
AB - Indoor visual localization is essential for applications such as autonomous robots, augmented reality, and mixed reality. Recent advances in visual localization have demonstrated feasibility in large-scale indoor spaces through coarse-to-fine methods, which typically comprise three steps: image retrieval, pose estimation, and pose selection. However, further research is needed to improve the accuracy of large-scale indoor visual localization. We show that the limitations of previous methods can be attributed to the sparsity of image positions in the database, which causes view differences between a query image and a retrieved database image. In this paper, to address this problem, we propose a novel module, named pose correction, that enables re-estimation of the pose with local feature matching in a similar view by reorganizing the local features. This module enhances the accuracy of the initially estimated pose and assigns more reliable ranks. Furthermore, the proposed method achieves a new state of the art, surpassing 90% accuracy within 1.0 m on the challenging InLoc indoor benchmark for the first time.
UR - http://www.scopus.com/inward/record.url?scp=85127758587&partnerID=8YFLogxK
U2 - 10.1109/ICCV48922.2021.01567
DO - 10.1109/ICCV48922.2021.01567
M3 - Conference contribution
AN - SCOPUS:85127758587
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 15954
EP - 15963
BT - Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021
Y2 - 11 October 2021 through 17 October 2021
ER -