Combination of manual and non-manual features for sign language recognition based on conditional random field and active appearance model

Hee Deok Yang, Seong Whan Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

Sign language recognition is the task of detecting and recognizing manual signals (MSs) and non-manual signals (NMSs) in a signed utterance. In this paper, a novel method for recognizing MSs and facial expressions as an NMS is proposed. This is achieved through a framework consisting of three components: (1) Candidate segments of MSs are discriminated using a hierarchical conditional random field (CRF) and BoostMap embedding. This component can distinguish signs, fingerspellings and non-sign patterns, and is robust to variations in the size, scale and rotation of the signer's hand. (2) Facial expressions as an NMS are recognized with a support vector machine (SVM) and an active appearance model (AAM); the AAM is used to extract facial feature points, and from these points several measurements are computed to classify each facial component into the defined facial expressions with the SVM. (3) Finally, the recognition results for MSs and NMSs are fused in order to recognize signed sentences. Experiments demonstrate that the proposed method can successfully combine MS and NMS features for recognizing signed sentences from utterance data.
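
As an illustration of component (2), the sketch below shows how AAM-fitted facial feature points might be reduced to a few geometric measurements and classified into facial expressions with an SVM. This is a minimal, hypothetical example assuming scikit-learn's SVC and made-up landmark indices and measurements; it is not the authors' implementation or feature set.

```python
# Minimal sketch of component (2): AAM landmarks -> measurements -> SVM.
# Landmark indices and measurements are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC

def expression_features(landmarks):
    """Compute simple geometric measurements from AAM facial feature points.

    landmarks: (N, 2) array of (x, y) points fitted by an AAM for one frame.
    Returns a small vector of scale-normalised distances.
    """
    # Hypothetical point indices: 0-1 eye corners, 2-3 eyebrow/eye pair,
    # 4-5 upper/lower lip, 6-7 mouth corners.
    d = lambda a, b: np.linalg.norm(landmarks[a] - landmarks[b])
    inter_ocular = d(0, 1) + 1e-6          # scale normaliser
    return np.array([
        d(2, 3) / inter_ocular,            # eyebrow-to-eye distance
        d(4, 5) / inter_ocular,            # mouth opening
        d(6, 7) / inter_ocular,            # mouth width
    ])

def train_expression_svm(landmark_frames, labels):
    """Train an SVM on labelled frames of AAM landmarks."""
    X = np.stack([expression_features(lm) for lm in landmark_frames])
    clf = SVC(kernel="rbf", probability=True)  # probability outputs ease later fusion
    clf.fit(X, labels)
    return clf
```

Using probabilistic SVM outputs is one plausible way to make the NMS decision easy to fuse with the CRF-based MS scores in step (3), though the paper's actual fusion scheme may differ.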

Original language: English
Title of host publication: Proceedings - International Conference on Machine Learning and Cybernetics
Pages: 1726-1731
Number of pages: 6
Volume: 4
ISBN: 9781457703065
DOI: 10.1109/ICMLC.2011.6016973
Publication status: Published - 2011 Nov 7
Event: 2011 International Conference on Machine Learning and Cybernetics, ICMLC 2011 - Guilin, Guangxi, China
Duration: 2011 Jul 10 - 2011 Jul 13

Keywords

  • active appearance model
  • conditional random field
  • manual sign
  • non-manual sign
  • Sign language recognition
  • support vector machine

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computational Theory and Mathematics
  • Computer Networks and Communications
  • Human-Computer Interaction

Cite this

Yang, H. D., & Lee, S. W. (2011). Combination of manual and non-manual features for sign language recognition based on conditional random field and active appearance model. In Proceedings - International Conference on Machine Learning and Cybernetics (Vol. 4, pp. 1726-1731). [6016973] https://doi.org/10.1109/ICMLC.2011.6016973
