Manipulating Video Sequences to Determine the Components of Conversational Facial Expressions

Douglas W. Cunningham, Mario Kleiner, Christian Wallraven, Heinrich Bülthoff

Research output: Contribution to journal › Article

28 Citations (Scopus)

Abstract

Communication plays a central role in everyday life. During an average conversation, information is exchanged in a variety of ways, including through facial motion. Here, we employ a custom, model-based image manipulation technique to selectively “freeze” portions of a face in video recordings in order to determine the areas that are sufficient for proper recognition of nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motions is sufficient to produce expressions that are as easy to recognize as the original, unmanipulated recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. This fusion of psychophysics and computer graphics techniques provides not only fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to move in order to produce realistic, recognizable conversational facial animations.
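The paper's own technique is model-based and operates on recorded video; as a purely illustrative analogue of the "freeze" idea, the snippet below holds a masked facial region fixed at its first-frame appearance across all frames of a clip. The function name, array layout, and mask-based compositing are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def freeze_region(frames, mask):
    """Hold the masked region fixed at its first-frame appearance.

    frames: (T, H, W) or (T, H, W, C) array of video frames
    mask:   (H, W) boolean array, True where motion should be frozen
    """
    frozen = frames.copy()
    reference = frames[0]
    # Boolean indexing selects the masked pixels in every frame at once;
    # the first frame's pixels broadcast across the time axis.
    frozen[:, mask] = reference[mask]
    return frozen

# Toy 3-frame, 4x4 "video" whose pixel values change every frame.
frames = np.arange(3 * 4 * 4, dtype=float).reshape(3, 4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # freeze a central patch (e.g. the "mouth" area)

out = freeze_region(frames, mask)
```

In the frozen output, the masked patch is identical in every frame while the unmasked pixels still change over time, which is the contrast the study exploits to isolate each facial area's contribution.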

Original language: English
Pages (from-to): 251-269
Number of pages: 19
Journal: ACM Transactions on Applied Perception
Volume: 2
Issue number: 3
DOIs: 10.1145/1077399.1077404
Publication status: Published - 2005
Externally published: Yes

Keywords

  • animation
  • Applied perception
  • computer graphics
  • Experimentation
  • facial expressions
  • human-computer interface

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
  • Experimental and Cognitive Psychology

Cite this

Manipulating Video Sequences to Determine the Components of Conversational Facial Expressions. / Cunningham, Douglas W.; Kleiner, Mario; Wallraven, Christian; Bülthoff, Heinrich.

In: ACM Transactions on Applied Perception, Vol. 2, No. 3, 2005, p. 251-269.

Research output: Contribution to journal › Article

Cunningham, Douglas W. ; Kleiner, Mario ; Wallraven, Christian ; Bülthoff, Heinrich. / Manipulating Video Sequences to Determine the Components of Conversational Facial Expressions. In: ACM Transactions on Applied Perception. 2005 ; Vol. 2, No. 3. pp. 251-269.
@article{e23b4ca91a4e4a60b03b5f7f78c5a943,
title = "Manipulating Video Sequences to Determine the Components of Conversational Facial Expressions",
abstract = "Communication plays a central role in everyday life. During an average conversation, information is exchanged in a variety of ways, including through facial motion. Here, we employ a custom, model-based image manipulation technique to selectively “freeze” portions of a face in video recordings in order to determine the areas that are sufficient for proper recognition of nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motions is sufficient to produce expressions that are as easy to recognize as the original, unmanipulated recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. This fusion of psychophysics and computer graphics techniques provides not only fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to move in order to produce realistic, recognizable conversational facial animations.",
keywords = "animation, Applied perception, computer graphics, Experimentation, facial expressions, human-computer interface",
author = "Cunningham, {Douglas W.} and Mario Kleiner and Christian Wallraven and Heinrich B{\"u}lthoff",
year = "2005",
doi = "10.1145/1077399.1077404",
language = "English",
volume = "2",
pages = "251--269",
journal = "ACM Transactions on Applied Perception",
issn = "1544-3558",
publisher = "Association for Computing Machinery (ACM)",
number = "3",

}

TY - JOUR

T1 - Manipulating Video Sequences to Determine the Components of Conversational Facial Expressions

AU - Cunningham, Douglas W.

AU - Kleiner, Mario

AU - Wallraven, Christian

AU - Bülthoff, Heinrich

PY - 2005

Y1 - 2005

N2 - Communication plays a central role in everyday life. During an average conversation, information is exchanged in a variety of ways, including through facial motion. Here, we employ a custom, model-based image manipulation technique to selectively “freeze” portions of a face in video recordings in order to determine the areas that are sufficient for proper recognition of nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motions is sufficient to produce expressions that are as easy to recognize as the original, unmanipulated recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. This fusion of psychophysics and computer graphics techniques provides not only fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to move in order to produce realistic, recognizable conversational facial animations.

AB - Communication plays a central role in everyday life. During an average conversation, information is exchanged in a variety of ways, including through facial motion. Here, we employ a custom, model-based image manipulation technique to selectively “freeze” portions of a face in video recordings in order to determine the areas that are sufficient for proper recognition of nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motions is sufficient to produce expressions that are as easy to recognize as the original, unmanipulated recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. This fusion of psychophysics and computer graphics techniques provides not only fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to move in order to produce realistic, recognizable conversational facial animations.

KW - animation

KW - Applied perception

KW - computer graphics

KW - Experimentation

KW - facial expressions

KW - human-computer interface

UR - http://www.scopus.com/inward/record.url?scp=33749046446&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33749046446&partnerID=8YFLogxK

U2 - 10.1145/1077399.1077404

DO - 10.1145/1077399.1077404

M3 - Article

AN - SCOPUS:33749046446

VL - 2

SP - 251

EP - 269

JO - ACM Transactions on Applied Perception

JF - ACM Transactions on Applied Perception

SN - 1544-3558

IS - 3

ER -