We can learn a lot about someone by watching their facial expressions and body language. Harnessing these aspects of non-verbal communication can lend artificial communication agents greater depth and realism, but doing so requires a sound understanding of the relationship between cognition and expressive behaviour. Here, we extend traditional word-based methodology to use actual videos and extract the semantic/cognitive space of facial expressions. We find that, depending on the specific expressions used, either a four-dimensional or a two-dimensional space is needed to describe the variance in the stimuli. The 4D and 2D spaces are related to each other in shape and structure and are robust to methodological changes. The results also show considerable variance in how different people express the same emotion. The recovered space captures the full range of facial communication well and is well suited to semantic-driven facial animation.
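The abstract does not spell out the extraction procedure, but semantic spaces of this kind are commonly recovered by embedding pairwise dissimilarity judgements with multidimensional scaling and inspecting how much variance each dimension explains. The following is a minimal sketch of that general idea using classical MDS; the six-stimulus dissimilarity matrix is hypothetical, and nothing here is claimed to be the authors' actual method.

```python
import numpy as np

def classical_mds(D, k=4):
    """Classical MDS: embed an n-by-n dissimilarity matrix D into k dimensions
    and report the fraction of (positive-eigenvalue) variance each axis explains."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred squared dissimilarities
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]             # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    pos = np.clip(vals[:k], 0.0, None)
    X = vecs[:, :k] * np.sqrt(pos)             # k-dimensional stimulus coordinates
    explained = pos / np.clip(vals, 0.0, None).sum()
    return X, explained

# Hypothetical data: dissimilarities among 6 expression videos with latent 4D structure
rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 4))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X, explained = classical_mds(D, k=4)
print(np.round(explained, 2))  # per-dimension variance explained
```

Deciding between a 4D and a 2D description then amounts to checking whether the variance explained drops off sharply after the second dimension or stays spread across four.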
- Cognitive-based behavioural modelling
- Emotional models
- Facial expressions
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design