An intelligent robot is expected to interact naturally with humans, and visual interpretation of gestures can help achieve such natural Human-Robot Interaction (HRI). Previous HRI research has focused on issues such as hand gesture, sign language, and command gesture recognition. For HRI to operate naturally, however, automatic recognition of whole-body gestures is required. This is a challenging problem because describing and modeling meaningful gesture patterns from whole-body motion is a complex task. This paper presents a new method for recognizing whole-body key gestures in HRI. A human subject is first described by a set of features encoding the angular relationships between a dozen body parts in 3D. Each feature vector is then mapped to a codeword of gesture HMMs. To spot key gestures accurately, a method of designing a garbage gesture model is proposed: model reduction, which merges similar states based on data-dependent statistics and relative entropy. The proposed method was tested on samples from 20 subjects and 200 synthetic data samples, achieving a reliability rate of 94.8% in the gesture-spotting task and a recognition rate of 97.4% on isolated gestures.
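The feature-extraction step described in the abstract (angular relationships between body parts in 3D, quantized to a codeword) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the skeleton joints, the limb pairs, and the codebook are all hypothetical assumptions.

```python
import numpy as np

def angle_between(v1, v2):
    """Angle in radians between two 3D vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def angular_features(joints):
    """joints: dict mapping joint name -> 3D position (np.ndarray).
    Returns a feature vector of pairwise angles between limb vectors.
    The skeleton below is an illustrative assumption, not the paper's."""
    limbs = {
        "upper_arm_r": joints["elbow_r"] - joints["shoulder_r"],
        "forearm_r":   joints["wrist_r"] - joints["elbow_r"],
        "upper_arm_l": joints["elbow_l"] - joints["shoulder_l"],
        "forearm_l":   joints["wrist_l"] - joints["elbow_l"],
    }
    names = sorted(limbs)
    return np.array([angle_between(limbs[a], limbs[b])
                     for i, a in enumerate(names) for b in names[i + 1:]])

def quantize(feature, codebook):
    """Map a feature vector to the index of the nearest codeword,
    yielding the discrete observation symbol fed to the gesture HMMs."""
    return int(np.argmin(np.linalg.norm(codebook - feature, axis=1)))
```

In a full pipeline, the resulting codeword sequence would be scored against each gesture HMM and against the garbage model, with a gesture spotted when its model's likelihood exceeds the garbage model's.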
Number of pages: 4
Journal: Proceedings - International Conference on Pattern Recognition
Publication status: Published - 2006
Event: 18th International Conference on Pattern Recognition, ICPR 2006 - Hong Kong, China
Duration: 2006 Aug 20 → 2006 Aug 24
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition