Real-time human-robot interaction based on continuous gesture spotting and recognition

Heung-Il Suk, Seong Sik Cho, Hee Deok Yang, Myung Cheol Roh, Seong Whan Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Recently, service robots have begun to enter human living spaces, whereas traditional robots were used for manufacturing, transportation, and similar purposes. Service robots are intelligent robots that can understand human gestures and provide services automatically. Natural interaction with a service robot requires automatic gesture recognition. In particular, vision-based gesture recognition provides an intuitive and natural interface without requiring any special wearable devices. This paper proposes methods for recognizing whole-body and hand gestures in continuous motion. There are two major issues: the first is estimating whole-body components in whole-body gestures; the second is spotting gestures in continuous motion. The proposed human pose estimation method is based on an analysis of common body components, the body parts that move and vary the least. These are designated and used flexibly in the pose-matching process. From an exemplar database, the relative variability and a tolerance model (in terms of the allowable amount of motion) for each limb or body part in a given pose are acquired, and the common body components found across the exemplar data for each pose are used to match an input target. The proposed method showed excellent results on the CMU MoBo and aerobic sequence data. We also propose a novel spotting method that builds a threshold model in a conditional random field (CRF): by augmenting the CRF with one additional label, it performs adaptive thresholding to distinguish meaningful from non-meaningful gestures. Experiments were conducted on American Sign Language (ASL), one of the most complex gesture vocabularies. They demonstrate that our system can detect signs from continuous data with an 87.5% spotting rate, versus 67.2% for a CRF without a non-sign label.
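The pose-matching idea in the abstract, selecting the least-moving "common body components" from an exemplar database and matching an input pose against them within a per-part motion tolerance, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the part names, 2-D coordinates, and the `tolerance` default are assumptions.

```python
# Hedged sketch of exemplar-based pose matching via "common body components".
# All names, coordinates, and thresholds are illustrative assumptions.
from statistics import pvariance

def common_components(exemplars, k=2):
    """Pick the k body parts whose positions vary least across exemplars of a pose.

    exemplars: list of {part_name: (x, y)} dicts for one pose class.
    Returns (list of the k most stable parts, per-part variability dict).
    """
    parts = exemplars[0].keys()
    variability = {
        p: pvariance([e[p][0] for e in exemplars])
           + pvariance([e[p][1] for e in exemplars])
        for p in parts
    }
    return sorted(parts, key=variability.get)[:k], variability

def matches(target, exemplars, tolerance=0.1):
    """Accept the target pose if each common component stays within the
    tolerance (allowable amount of motion) of the exemplar mean position."""
    stable, _ = common_components(exemplars)
    for p in stable:
        mx = sum(e[p][0] for e in exemplars) / len(exemplars)
        my = sum(e[p][1] for e in exemplars) / len(exemplars)
        x, y = target[p]
        if (x - mx) ** 2 + (y - my) ** 2 > tolerance ** 2:
            return False
    return True
```

In this toy form, variance across exemplars stands in for the paper's relative-variability measure, and a single Euclidean tolerance stands in for the per-limb tolerance model.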
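The spotting idea, a label sequence augmented with one extra non-sign label whose score acts as an adaptive threshold, can be illustrated with a toy Viterbi decoder. This is a hedged sketch under assumed toy scores, not the trained CRF from the paper: `emit`, `trans`, and the label names are invented for illustration, and a real CRF would use learned feature weights rather than hand-set potentials.

```python
# Hedged sketch: sequence labelling over the sign vocabulary plus one extra
# NON_SIGN label. Frames decoded as NON_SIGN are rejected, so the NON_SIGN
# score behaves like an adaptive threshold. Scores below are toy assumptions.

NON_SIGN = "NON_SIGN"

def viterbi(frames, labels, emit, trans):
    """Most likely label per frame under additive log-scores (CRF-style decoding)."""
    prev = {l: emit(frames[0], l) for l in labels}
    back = []
    for f in frames[1:]:
        cur, ptr = {}, {}
        for l in labels:
            best = max(labels, key=lambda p: prev[p] + trans(p, l))
            cur[l] = prev[best] + trans(best, l) + emit(f, l)
            ptr[l] = best
        back.append(ptr)
        prev = cur
    path = [max(prev, key=prev.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

def spot(frames, sign_labels, emit, trans):
    """Return (label, start_frame, end_frame) runs, dropping NON_SIGN stretches."""
    path = viterbi(frames, sign_labels + [NON_SIGN], emit, trans)
    runs, start = [], 0
    for i in range(1, len(path) + 1):
        if i == len(path) or path[i] != path[start]:
            if path[start] != NON_SIGN:
                runs.append((path[start], start, i - 1))
            start = i
    return runs
```

A sign is spotted only where its score outruns the flat NON_SIGN score, which is the thresholding role the extra label plays in the augmented model.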

Original language: English
Title of host publication: 39th International Symposium on Robotics, ISR 2008
Pages: 120-123
Number of pages: 4
Publication status: Published - 2008 Dec 1
Event: 39th International Symposium on Robotics, ISR 2008 - Seoul, Korea, Republic of
Duration: 2008 Oct 15 - 2008 Oct 17

Other

Other: 39th International Symposium on Robotics, ISR 2008
Country: Korea, Republic of
City: Seoul
Period: 08/10/15 - 08/10/17

Fingerprint

Human robot interaction
Gesture recognition
Robots
Labels
Intelligent robots
Experiments

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Software

Cite this

Suk, H-I., Cho, S. S., Yang, H. D., Roh, M. C., & Lee, S. W. (2008). Real-time human-robot interaction based on continuous gesture spotting and recognition. In 39th International Symposium on Robotics, ISR 2008 (pp. 120-123).

@inproceedings{f72fb4dc80674fdf8a9635deb04fc120,
title = "Real-time human-robot interaction based on continuous gesture spotting and recognition",
author = "Heung-Il Suk and Cho, {Seong Sik} and Yang, {Hee Deok} and Roh, {Myung Cheol} and Lee, {Seong Whan}",
year = "2008",
month = "12",
day = "1",
language = "English",
pages = "120--123",
booktitle = "39th International Symposium on Robotics, ISR 2008",

}

TY - GEN

T1 - Real-time human-robot interaction based on continuous gesture spotting and recognition

AU - Suk, Heung-Il

AU - Cho, Seong Sik

AU - Yang, Hee Deok

AU - Roh, Myung Cheol

AU - Lee, Seong Whan

PY - 2008/12/1

Y1 - 2008/12/1


UR - http://www.scopus.com/inward/record.url?scp=84876768029&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84876768029&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84876768029

SP - 120

EP - 123

BT - 39th International Symposium on Robotics, ISR 2008

ER -