Facial expression recognition using active contour-based face detection, facial movement-based feature extraction, and non-linear feature selection

Muhammad Hameed Siddiqi, Rahman Ali, Adil Mehmood Khan, Eun Soo Kim, Jeonghyun Kim, Sungyoung Lee

Research output: Contribution to journal › Article

22 Citations (Scopus)

Abstract

Knowledge about people’s emotions can serve as an important context for automatic service delivery in context-aware systems. Hence, human facial expression recognition (FER) has emerged as an important research area over the last two decades. To recognize expressions accurately, FER systems require automatic face detection followed by the extraction of robust features from important facial parts; the process should also be insensitive to noise such as varying lighting conditions and differences in subjects’ facial characteristics. Accordingly, this work implements a robust FER system capable of high recognition accuracy even in the presence of such variations. The system uses an unsupervised technique based on an active contour model for automatic face detection and extraction. In this model, two energy functions, the Chan–Vese energy and the Bhattacharyya distance, are combined to minimize the dissimilarity within the face and maximize the distance between the face and the background. Next, noise reduction is achieved by means of wavelet decomposition, followed by the extraction of facial movement features using optical flow. These features reflect facial muscle movements and capture the static, dynamic, geometric, and appearance characteristics of facial expressions. After feature extraction, feature selection is performed using stepwise linear discriminant analysis, which is more robust than feature selection methods previously employed for FER. Finally, expressions are recognized using trained hidden Markov models (HMMs). To demonstrate robustness, and unlike most previous works that were evaluated on a single dataset, the proposed system is assessed in a large-scale experiment on five different publicly available datasets. The weighted average recognition rate across these datasets indicates the success of employing the proposed system for FER.
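
The abstract outlines a multi-stage pipeline: wavelet-based denoising, optical-flow motion features, stepwise linear discriminant analysis for feature selection, and HMM-based classification. The following is a minimal illustrative sketch of such a pipeline, not the authors' implementation: it assumes PyWavelets for denoising, OpenCV's Farneback optical flow, plain LDA from scikit-learn as a simplified stand-in for stepwise LDA, and one Gaussian HMM per expression class from hmmlearn. The db4 wavelet, universal soft threshold, histogram features, state count, and helper names (denoise, flow_features, train_and_classify) are all illustrative assumptions, and the active-contour face detection and extraction stage is omitted.

# Illustrative sketch only (not the authors' code): denoise grayscale face crops with a
# 2-D wavelet transform, extract dense optical-flow motion features, project them with
# LDA (a simplified stand-in for stepwise LDA), and score per-expression Gaussian HMMs.
import numpy as np
import cv2                                   # Farneback dense optical flow
import pywt                                  # 2-D wavelet decomposition
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from hmmlearn import hmm                     # Gaussian HMMs for sequence classification

def denoise(frame, wavelet="db4", level=2):
    # Soft-threshold the detail coefficients of a 2-D wavelet decomposition.
    coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(frame.size))           # universal threshold
    cleaned = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode="soft") for d in det)
                             for det in coeffs[1:]]
    out = pywt.waverec2(cleaned, wavelet)[:frame.shape[0], :frame.shape[1]]
    return np.clip(out, 0, 255).astype(np.uint8)

def flow_features(frames, bins=16):
    # One magnitude/orientation histogram per frame pair; a coarse proxy for the
    # paper's facial-movement features.
    feats = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        feats.append(np.concatenate([np.histogram(mag, bins=bins)[0],
                                     np.histogram(ang, bins=bins,
                                                  range=(0, 2 * np.pi))[0]]).astype(float))
    return np.vstack(feats)

def train_and_classify(train_seqs, train_labels, test_seq, n_states=4):
    # train_seqs: list of grayscale face-crop sequences; train_labels: expression names.
    feats = [flow_features([denoise(f) for f in seq]) for seq in train_seqs]
    lda = LinearDiscriminantAnalysis()
    lda.fit(np.vstack(feats),
            np.concatenate([[lab] * len(f) for lab, f in zip(train_labels, feats)]))
    models = {}
    for lab in set(train_labels):
        X = np.vstack([lda.transform(f) for f, l in zip(feats, train_labels) if l == lab])
        lengths = [len(f) for f, l in zip(feats, train_labels) if l == lab]
        models[lab] = hmm.GaussianHMM(n_components=n_states,
                                      covariance_type="diag", n_iter=50).fit(X, lengths)
    test_X = lda.transform(flow_features([denoise(f) for f in test_seq]))
    return max(models, key=lambda lab: models[lab].score(test_X))   # most likely expression

In this sketch, one HMM is trained per expression class and a test sequence is assigned to the model with the highest log-likelihood, mirroring the recognition step described in the abstract.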

Original language: English
Pages (from-to): 541-555
Number of pages: 15
Journal: Multimedia Systems
Volume: 21
Issue number: 6
DOI: 10.1007/s00530-014-0400-2
Publication status: Published - 2014 Jul 20

Fingerprint

  • Face recognition
  • Feature extraction
  • Wavelet decomposition
  • Optical flows
  • Discriminant analysis
  • Noise abatement
  • Muscle
  • Lighting

Keywords

  • Active contour
  • Face detection
  • Facial expressions
  • Hidden Markov model
  • Level set
  • Optical flow
  • Stepwise linear discriminant analysis
  • Wavelet transform

ASJC Scopus subject areas

  • Media Technology
  • Hardware and Architecture
  • Information Systems
  • Software
  • Computer Networks and Communications

Cite this

Siddiqi, M. H., Ali, R., Khan, A. M., Kim, E. S., Kim, J., & Lee, S. (2014). Facial expression recognition using active contour-based face detection, facial movement-based feature extraction, and non-linear feature selection. Multimedia Systems, 21(6), 541-555. https://doi.org/10.1007/s00530-014-0400-2
@article{330365b738e94b10aa73c0893e20b8b9,
title = "Facial expression recognition using active contour-based face detection, facial movement-based feature extraction, and non-linear feature selection",
abstract = "Knowledge about people’s emotions can serve as an important context for automatic service delivery in context-aware systems. Hence, human facial expression recognition (FER) has emerged as an important research area over the last two decades. To accurately recognize expressions, FER systems require automatic face detection followed by the extraction of robust features from important facial parts. Furthermore, the process should be less susceptible to the presence of noise, such as different lighting conditions and variations in facial characteristics of subjects. Accordingly, this work implements a robust FER system, capable of providing high recognition accuracy even in the presence of aforementioned variations. The system uses an unsupervised technique based on active contour model for automatic face detection and extraction. In this model, a combination of two energy functions: Chan–Vese energy and Bhattacharyya distance functions are employed to minimize the dissimilarities within a face and maximize the distance between the face and the background. Next, noise reduction is achieved by means of wavelet decomposition, followed by the extraction of facial movement features using optical flow. These features reflect facial muscle movements which signify static, dynamic, geometric, and appearance characteristics of facial expressions. Post-feature extraction, feature selection, is performed using Stepwise Linear Discriminant Analysis, which is more robust in contrast to previously employed feature selection methods for FER. Finally, expressions are recognized using trained HMM(s). To show the robustness of the proposed system, unlike most of the previous works, which were evaluated using a single dataset, performance of the proposed system is assessed in a large-scale experimentation using five publicly available different datasets. The weighted average recognition rate across these datasets indicates the success of employing the proposed system for FER.",
keywords = "Active contour, Face detection, Facial expressions, Hidden Markov model, Level set, Optical flow, Stepwise linear discriminant analysis, Wavelet transform",
author = "Siddiqi, {Muhammad Hameed} and Rahman Ali and Khan, {Adil Mehmood} and Kim, {Eun Soo} and Jeonghyun Kim and Sungyoung Lee",
year = "2014",
month = "7",
day = "20",
doi = "10.1007/s00530-014-0400-2",
language = "English",
volume = "21",
pages = "541--555",
journal = "Multimedia Systems",
issn = "0942-4962",
publisher = "Springer Verlag",
number = "6",

}

TY - JOUR

T1 - Facial expression recognition using active contour-based face detection, facial movement-based feature extraction, and non-linear feature selection

AU - Siddiqi, Muhammad Hameed

AU - Ali, Rahman

AU - Khan, Adil Mehmood

AU - Kim, Eun Soo

AU - Kim, Jeonghyun

AU - Lee, Sungyoung

PY - 2014/7/20

Y1 - 2014/7/20

KW - Active contour

KW - Face detection

KW - Facial expressions

KW - Hidden Markov model

KW - Level set

KW - Optical flow

KW - Stepwise linear discriminant analysis

KW - Wavelet transform

UR - http://www.scopus.com/inward/record.url?scp=84942195363&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84942195363&partnerID=8YFLogxK

U2 - 10.1007/s00530-014-0400-2

DO - 10.1007/s00530-014-0400-2

M3 - Article

AN - SCOPUS:84942195363

VL - 21

SP - 541

EP - 555

JO - Multimedia Systems

JF - Multimedia Systems

SN - 0942-4962

IS - 6

ER -