Robust spotting of key gestures from whole body motion sequence

Hee Deok Yang, A. Yeon Park, Seong Whan Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Citations (Scopus)

Abstract

Robust gesture recognition in video requires segmenting meaningful gestures from a continuous whole-body gesture sequence. This is a challenging problem because meaningless, non-gesture movement patterns are difficult to describe and model. This paper presents a new method for simultaneous spotting and recognition of whole-body key gestures. A human subject is first described by a set of features encoding the angular relations among a dozen body parts in 3D. Each feature vector is then mapped to a codeword for the gesture HMMs. To spot key gestures accurately, a method for designing a garbage gesture model is proposed: a model reduction that merges similar states based on data-dependent statistics and relative entropy. This model provides an effective mechanism for accepting or rejecting gestural motions. The proposed method was tested on samples from 20 persons and 80 synthetic data samples, achieving a reliability of 94.8% in the spotting task and a recognition rate of 97.4% on isolated gestures.
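
The abstract names two concrete mechanisms: mapping each 3D pose feature vector to an HMM codeword, and building a garbage model by merging HMM states that are close in relative entropy. The paper's implementation details are not reproduced in this record, so the sketch below is only illustrative of those two ideas under common assumptions (a learned codebook for vector quantization and discrete emission distributions per state); the function names, threshold value, and greedy grouping heuristic are hypothetical, not the authors' algorithm.

```python
import numpy as np

def to_codeword(feature_vec, codebook):
    """Vector quantization: return the index of the codebook entry nearest
    to a pose feature vector (e.g., 3D joint-angle relations)."""
    dists = np.linalg.norm(codebook - feature_vec, axis=1)
    return int(np.argmin(dists))

def relative_entropy(p, q, eps=1e-12):
    """Relative entropy (KL divergence) D(p || q) between two discrete
    emission distributions; eps avoids log(0)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def merge_similar_states(emissions, threshold=0.1):
    """Greedily group HMM states whose emission distributions lie within
    `threshold` symmetric relative entropy of a group representative.
    Each group's representative is the mean of its members' distributions;
    the pooled groups give a reduced model usable as a garbage model."""
    groups, reps = [], []
    for i, dist in enumerate(emissions):
        dist = np.asarray(dist, float)
        for members, rep in zip(groups, reps):
            d = 0.5 * (relative_entropy(dist, rep) + relative_entropy(rep, dist))
            if d < threshold:
                members.append(i)
                # Update the representative as the mean of all member distributions.
                rep[:] = np.mean([emissions[j] for j in members], axis=0)
                break
        else:
            groups.append([i])
            reps.append(dist.copy())
    return groups, reps

# Toy usage: a 4-entry codebook in 2D and three states over a 4-symbol alphabet.
if __name__ == "__main__":
    codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    print(to_codeword(np.array([0.9, 0.1]), codebook))   # -> 1
    emissions = [[0.70, 0.10, 0.10, 0.10],
                 [0.68, 0.12, 0.10, 0.10],
                 [0.10, 0.10, 0.10, 0.70]]
    groups, _ = merge_similar_states(emissions, threshold=0.05)
    print(groups)   # states 0 and 1 merge; state 2 stays separate
```

In a spotting system of this kind, the reduced (merged) states typically act as a filler model that competes with the key-gesture HMMs during decoding, so that non-gesture motion is absorbed by the garbage model rather than forced into a gesture class.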

Original language: English
Title of host publication: FGR 2006: Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition
Pages: 231-236
Number of pages: 6
Volume: 2006
ISBN (Print): 0769525032, 9780769525037
DOI: 10.1109/FGR.2006.99
Publication status: Published - 2006 Nov 14
Event: FGR 2006: 7th International Conference on Automatic Face and Gesture Recognition - Southampton, United Kingdom
Duration: 2006 Apr 10 - 2006 Apr 12

Other

Other: FGR 2006: 7th International Conference on Automatic Face and Gesture Recognition
Country: United Kingdom
City: Southampton
Period: 06/4/10 - 06/4/12

Fingerprint

Gesture recognition
Entropy
Statistics

ASJC Scopus subject areas

  • Engineering(all)

Cite this

Yang, H. D., Park, A. Y., & Lee, S. W. (2006). Robust spotting of key gestures from whole body motion sequence. In FGR 2006: Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (Vol. 2006, pp. 231-236). [1613025] https://doi.org/10.1109/FGR.2006.99

@inproceedings{ad873e6181734a95ab7a8479c6353f37,
title = "Robust spotting of key gestures from whole body motion sequence",
abstract = "Robust gesture recognition in video requires segmentation of the meaningful gestures from a whole body gesture sequence. This is a challenging problem because it is not straightforward to describe and model meaningless gesture patterns. This paper presents a new method for simultaneous spotting and recognition of whole body key gestures. A human subject is first described by a set of features encoding the angular relations between a dozen body parts in 3D. A feature vector is then mapped to a codeword of gesture HMMs. In order to spot key gestures accurately, a sophisticated method of designing a garbage gesture model is proposed; a model reduction which merges similar states based on data-dependent statistics and relative entropy. This model provides an effective mechanism for qualifying or disqualifying gestural motions. The proposed method has been tested with 20 persons' samples and 80 synthetic data. The proposed method achieved a reliability rate of 94.8{\%} in spotting task and a recognition rate of 97.4{\%} from an isolated gesture.",
author = "Yang, {Hee Deok} and Park, {A. Yeon} and Lee, {Seong Whan}",
year = "2006",
month = "11",
day = "14",
doi = "10.1109/FGR.2006.99",
language = "English",
isbn = "0769525032",
volume = "2006",
pages = "231--236",
booktitle = "FGR 2006: Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition",

}
