Monocular vision-based global localization using position and orientation of ceiling features

Seo Yeon Hwang, Jae-Bok Song

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

This study presents an upward-looking camera-based global localization scheme that uses the position and orientation of ceiling features. When the robot pose is unknown, region-based ceiling features extracted from the current image are matched to a feature map pre-built by the RBPF-based SLAM process. Candidate areas for the true robot pose are then set around the matched features: a feature with both position and orientation yields two candidate spots, whereas a feature with position only yields a circle. The true robot pose is finally determined at the intersection of these candidate areas. The candidate areas are modeled realistically by incorporating the observation error, and useless candidates are significantly reduced by exploiting the feature orientation. Several experiments in real environments validated the effectiveness of the proposed global localization scheme.
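The candidate-area geometry described in the abstract can be sketched as plain 2-D circle intersection: each position-only feature constrains the robot to a circle around the feature's map position (radius = observed distance), and the true pose lies where two such circles meet. This is a minimal illustrative sketch, not the authors' implementation; the function names, the noiseless inputs, and the reduction of the oriented-feature case to a fixed angular offset are all assumptions made here for illustration.

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Candidate robot positions from two position-only ceiling features:
    the intersection points of two circles, each centered at a matched
    feature's map position with radius equal to the observed distance."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    # No intersection: observations inconsistent (or circles coincident).
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)        # distance c1 -> chord midpoint
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))          # half chord length
    mx = x1 + a * (x2 - x1) / d                        # chord midpoint
    my = y1 + a * (y2 - y1) / d
    ox = -h * (y2 - y1) / d                            # offset along the chord
    oy = h * (x2 - x1) / d
    return [(mx + ox, my + oy), (mx - ox, my - oy)]

def oriented_candidates(feat_xy, feat_theta, obs_dist, obs_bearing):
    """A feature whose orientation is also known pins the candidates down
    to two spots instead of a whole circle: the bearing from feature to
    robot is fixed by the feature's map orientation and the bearing seen
    in the image, up to a 180-degree ambiguity (hypothetical model)."""
    x, y = feat_xy
    cands = []
    for flip in (0.0, math.pi):                        # orientation ambiguity
        ang = feat_theta + flip - obs_bearing
        cands.append((x + obs_dist * math.cos(ang),
                      y + obs_dist * math.sin(ang)))
    return cands
```

With error-free observations, two position-only features already narrow the pose to at most two points, and a single oriented feature does the same on its own; in the paper the candidate areas are additionally inflated by the observation error before intersecting, which this sketch omits.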

Original language: English
Title of host publication: Proceedings - IEEE International Conference on Robotics and Automation
Pages: 3785-3790
Number of pages: 6
DOI: 10.1109/ICRA.2013.6631109
Publication status: Published - 2013 Nov 14
Event: 2013 IEEE International Conference on Robotics and Automation, ICRA 2013 - Karlsruhe, Germany
Duration: 2013 May 6 - 2013 May 10



ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Electrical and Electronic Engineering

Cite this

Hwang, S. Y., & Song, J-B. (2013). Monocular vision-based global localization using position and orientation of ceiling features. In Proceedings - IEEE International Conference on Robotics and Automation (pp. 3785-3790). [6631109] https://doi.org/10.1109/ICRA.2013.6631109

@inproceedings{f2160ef47db84a4facce832750dd8f69,
title = "Monocular vision-based global localization using position and orientation of ceiling features",
abstract = "This study presents an upward-looking camera-based global localization scheme using the position and orientation of ceiling features. If the robot pose is unknown, the region-based ceiling features from the current image are matched to a pre-built feature map from the RBPF-based SLAM process. Then, the candidate areas of the real robot pose are set around the matched features. The candidates are represented by two spots for the features having both position and orientation, while by a circle if they have only position. Finally, the real robot pose is determined at the intersection point. The candidate areas are realistically modeled by applying the observation error, and useless candidates are significantly reduced by considering the feature orientation. Several experiments in real environments validated the effectiveness of the proposed global localization scheme.",
author = "Hwang, {Seo Yeon} and Jae-Bok Song",
year = "2013",
month = "11",
day = "14",
doi = "10.1109/ICRA.2013.6631109",
language = "English",
isbn = "9781467356411",
pages = "3785--3790",
booktitle = "Proceedings - IEEE International Conference on Robotics and Automation",

}

TY - GEN

T1 - Monocular vision-based global localization using position and orientation of ceiling features

AU - Hwang, Seo Yeon

AU - Song, Jae-Bok

PY - 2013/11/14

Y1 - 2013/11/14

N2 - This study presents an upward-looking camera-based global localization scheme using the position and orientation of ceiling features. If the robot pose is unknown, the region-based ceiling features from the current image are matched to a pre-built feature map from the RBPF-based SLAM process. Then, the candidate areas of the real robot pose are set around the matched features. The candidates are represented by two spots for the features having both position and orientation, while by a circle if they have only position. Finally, the real robot pose is determined at the intersection point. The candidate areas are realistically modeled by applying the observation error, and useless candidates are significantly reduced by considering the feature orientation. Several experiments in real environments validated the effectiveness of the proposed global localization scheme.

AB - This study presents an upward-looking camera-based global localization scheme using the position and orientation of ceiling features. If the robot pose is unknown, the region-based ceiling features from the current image are matched to a pre-built feature map from the RBPF-based SLAM process. Then, the candidate areas of the real robot pose are set around the matched features. The candidates are represented by two spots for the features having both position and orientation, while by a circle if they have only position. Finally, the real robot pose is determined at the intersection point. The candidate areas are realistically modeled by applying the observation error, and useless candidates are significantly reduced by considering the feature orientation. Several experiments in real environments validated the effectiveness of the proposed global localization scheme.

UR - http://www.scopus.com/inward/record.url?scp=84887273415&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84887273415&partnerID=8YFLogxK

U2 - 10.1109/ICRA.2013.6631109

DO - 10.1109/ICRA.2013.6631109

M3 - Conference contribution

AN - SCOPUS:84887273415

SN - 9781467356411

SP - 3785

EP - 3790

BT - Proceedings - IEEE International Conference on Robotics and Automation

ER -