Using 3D computer graphics for perception: The role of local and global information in face processing

Adrian Schwaninger, Sandra Schumacher, Heinrich Bülthoff, Christian Wallraven

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

Everyday life requires us to recognize faces under transient changes in pose, expression, and lighting conditions. Despite this, humans are adept at recognizing familiar faces. In this study, we focused on determining the types of information human observers use to recognize faces across variations in viewpoint. Of specific interest was whether holistic information is used exclusively, or whether the local information contained in facial parts (featural or component information), as well as their spatial relationships (configural information), is also encoded. A rigorous study investigating this question has not previously been possible, as the generation of a suitable set of stimuli using standard image manipulation techniques was not feasible. A 3D database of faces that have been processed to extract morphable models (Blanz & Vetter, 1999) allows us to generate such stimuli efficiently and with a high degree of control over display parameters. Three experiments were conducted, modeled after the inter-extra-ortho experiments by Bülthoff and Edelman (1992). The first experiment served as a baseline for the subsequent two experiments. Ten face stimuli were presented from a frontal view and from a 45° side view. At test, they had to be recognized among ten distractor faces shown from different viewpoints. We found systematic effects of viewpoint, in that recognition performance increased as the angle between the learned view and the tested view decreased. This finding is consistent with face processing models based on 2D-view interpolation. Experiments 2 and 3 were the same as Experiment 1 except that, in the testing phase, the faces were presented scrambled or blurred. Scrambling was used to isolate featural from configural information; blurring was used to provide stimuli in which local featural information was reduced. The results demonstrate that human observers are capable of recognizing faces across different viewpoints on the sole basis of isolated featural information and of isolated configural information.
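
Illustrative note on the stimulus manipulations: scrambling and blurring are standard image-level operations. The Python sketch below (using Pillow and NumPy) is a hypothetical reconstruction only, not the authors' actual stimulus pipeline; the function names, grid size, and blur radius are assumptions chosen for demonstration. Gaussian blurring suppresses local featural detail while leaving the overall configuration of the face intact, whereas block scrambling shuffles the positions of face parts, disrupting configural information while preserving local features.

import random
import numpy as np
from PIL import Image, ImageFilter

def blur_face(img, radius=8.0):
    # Low-pass blur: attenuates local featural detail while preserving the
    # global spatial configuration. The radius is an illustrative value only.
    return img.filter(ImageFilter.GaussianBlur(radius))

def scramble_face(img, grid=4, seed=0):
    # Block scrambling: cut the image into a grid x grid array of tiles and
    # shuffle their positions, destroying configural (spatial-relation)
    # information while keeping the local parts themselves intact. Edge pixels
    # left over when the image size is not divisible by the grid are cropped.
    arr = np.asarray(img)
    h, w = arr.shape[0], arr.shape[1]
    th, tw = h // grid, w // grid
    tiles = [arr[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(grid) for c in range(grid)]
    random.Random(seed).shuffle(tiles)
    rows = [np.concatenate(tiles[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    return Image.fromarray(np.concatenate(rows, axis=0))

# Example usage with a hypothetical rendered face image:
# face = Image.open("face_frontal.png").convert("RGB")
# blur_face(face).save("face_blurred.png")
# scramble_face(face).save("face_scrambled.png")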

Original language: English
Title of host publication: ACM International Conference Proceeding Series
Pages: 19-26
Number of pages: 8
Volume: 253
ISBN (Print): 159593670X, 9781595936707
DOIs: https://doi.org/10.1145/1272582.1272586
Publication status: Published - 2007 Dec 14
Externally published: Yes
Event: APGV 2007: 4th Symposium on Applied Perception in Graphics and Visualization - Tübingen, Germany
Duration: 2007 May 25 – 2007 May 27

Other

Other: APGV 2007: 4th Symposium on Applied Perception in Graphics and Visualization
Country: Germany
City: Tübingen
Period: 07/5/25 – 07/5/27


Keywords

  • Face recognition
  • Psychophysics
  • Viewpoint generalization

ASJC Scopus subject areas

  • Human-Computer Interaction

Cite this

Schwaninger, A., Schumacher, S., Bülthoff, H., & Wallraven, C. (2007). Using 3D computer graphics for perception: The role of local and global information in face processing. In ACM International Conference Proceeding Series (Vol. 253, pp. 19-26). https://doi.org/10.1145/1272582.1272586
