TY - JOUR
T1 - Robust and Autonomous Stereo Visual-Inertial Navigation for Non-Holonomic Mobile Robots
AU - Chae, Hee Won
AU - Choi, Ji Hoon
AU - Song, Jae Bok
N1 - Funding Information:
Manuscript received August 13, 2019; revised January 28, 2020 and March 31, 2020; accepted June 16, 2020. Date of publication June 25, 2020; date of current version October 13, 2020. This work was supported by an IITP grant funded by the Korean Government (MSIT) (No. 2018-0-00622). The review of this article was coordinated by Dr. A. Chatterjee. (Corresponding author: Jae-Bok Song.) Hee-Won Chae and Jae-Bok Song are with the School of Mechanical Engineering, Korea University, Seoul 02841, South Korea (e-mail: anakin722@korea.ac.kr; jbsong@korea.ac.kr).
Publisher Copyright:
© 2020 IEEE.
PY - 2020/9
Y1 - 2020/9
AB - Unlike micro aerial vehicles, most mobile robots have non-holonomic constraints, which make lateral movement impossible. Consequently, vision-based navigation systems that initialize visual features accurately by moving the camera sideways to ensure sufficient image parallax degrade when applied to mobile robots. To overcome this difficulty, a motion model based on wheel encoders mounted on the mobile robot is generally used to predict the robot's pose, but such a model cannot cope with errors caused by wheel slip or inaccurate wheel calibration. In this study, we propose a robust autonomous navigation system that uses only a stereo inertial sensor and does not rely on wheel-based dead reckoning. An observation model of line features modified with vanishing points is applied to the visual-inertial odometry along with point features, so that a mobile robot can perform robust pose estimation during autonomous navigation. The proposed algorithm, keyframe-based autonomous visual-inertial navigation (KAVIN), supports the entire navigation system and can run onboard without an additional graphics processing unit. A series of experiments in a real environment showed that the KAVIN system provides robust pose estimation without wheel encoders and prevents the accumulation of drift error during autonomous driving.
KW - Autonomous navigation
KW - keyframes
KW - visual-inertial systems
KW - wheeled mobile robots
UR - http://www.scopus.com/inward/record.url?scp=85094191880&partnerID=8YFLogxK
DO - 10.1109/TVT.2020.3004163
M3 - Article
AN - SCOPUS:85094191880
SN - 0018-9545
VL - 69
SP - 9613
EP - 9623
JO - IEEE Transactions on Vehicular Technology
JF - IEEE Transactions on Vehicular Technology
IS - 9
M1 - 9123559
ER -