Classifying faces by sex is more accurate with 3D shape information than with texture

A. J. O'Toole, T. Vetter, N. F. Troje, Heinrich Bülthoff

Research output: Contribution to journal › Article

Abstract

Purpose. We compared the quality of information available in 3D surface models versus texture maps for classifying human faces by sex. Methods. 3D surface models and texture maps from laser scans of 130 human heads (65 male, 65 female) were analyzed with separate principal components analyses (PCAs). Individual principal components (PCs) from the 3D head data characterized complex structural differences between male and female heads. Likewise, individual PCs in the texture analysis contrasted characteristically male vs. female texture patterns (e.g., presence/absence of facial hair shadowing). More formally, representing faces with only their projection coefficients onto the PCs, and varying the subspace from 1 to 50 dimensions, we trained a series of perceptrons to predict the sex of the faces using either the 3D or texture data. A "leave-one-out" technique was applied to measure the generalizability of the perceptrons' sex predictions. Results. While very good sex generalization performance was obtained for both representations, even with very low-dimensional subspaces (e.g., 76.1% correct with only one 3D projection coefficient), the 3D data supported more accurate sex classification across nearly the entire range of subspaces tested. For texture, 93.8% correct sex generalization was achieved with a minimum subspace of 20 projection coefficients. For 3D data, 96.9% correct generalization was achieved with 17 projection coefficients. Conclusions. These data highlight the importance of considering the kinds of information available in different face representations with respect to the task demands.
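The pipeline described in the abstract (project each face onto the leading principal components, train a perceptron on the projection coefficients, and estimate generalization with leave-one-out) can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the authors' implementation: the original 130 laser-scanned heads are not available, and the dimensions, signal strength, and random seed below are arbitrary assumptions.

```python
# Sketch of: PCA projection coefficients -> perceptron -> leave-one-out accuracy.
# Synthetic data stands in for the 130 head scans (65 "male", 65 "female").
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Perceptron
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n, d, k = 130, 200, 17              # 130 heads, d raw features per scan, k-dim subspace
sex = np.repeat([0, 1], n // 2)     # 65 labels per class, mirroring the study's design
# Fake scan vectors with a weak sex-dependent component so the task is learnable.
X = rng.normal(size=(n, d)) + 0.5 * sex[:, None] * rng.normal(size=d)

# Represent each face by its projection coefficients onto the first k PCs.
coeffs = PCA(n_components=k).fit_transform(X)

# Leave-one-out: train on 129 faces, test on the held-out one, average over folds.
acc = cross_val_score(Perceptron(), coeffs, sex, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy with {k} PCs: {acc:.3f}")
```

Varying `k` from 1 to 50, as the study did, would trace out accuracy as a function of subspace dimensionality; the choice of `k = 17` here echoes the reported best 3D result but carries no significance for the synthetic data.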

Original language: English
Journal: Investigative Ophthalmology and Visual Science
Volume: 37
Issue number: 3
Publication status: Published - 1996 Feb 15
Externally published: Yes

Fingerprint

Neural Networks (Computer)
Head
Principal Component Analysis
Hair
Lasers

ASJC Scopus subject areas

  • Ophthalmology

Cite this

Classifying faces by sex is more accurate with 3D shape information than with texture. / O'Toole, A. J.; Vetter, T.; Troje, N. F.; Bülthoff, Heinrich.

In: Investigative Ophthalmology and Visual Science, Vol. 37, No. 3, 15.02.1996.

Research output: Contribution to journal › Article

@article{5427d2734234444db8f583115106f5a4,
title = "Classifying faces by sex is more accurate with 3D shape information than with texture",
abstract = "Purpose. We compared the quality of information available in 3D surface models versus texture maps for classifying human faces by sex. Methods. 3D surface models and texture maps from laser scans of 130 human heads (65 male, 65 female) were analyzed with separate principal components analyses (PCAs). Individual principal components (PCs) from the 3D head data characterized complex structural differences between male and female heads. Likewise, individual PCs in the texture analysis contrasted characteristically male vs. female texture patterns (e.g., presence/absence of facial hair shadowing). More formally, representing faces with only their projection coefficients onto the PCs, and varying the subspace from 1 to 50 dimensions, we trained a series of perceptrons to predict the sex of the faces using either the 3D or texture data. A {"}leave-one-out{"} technique was applied to measure the generalizability of the perceptrons' sex predictions. Results. While very good sex generalization performance was obtained for both representations, even with very low-dimensional subspaces (e.g., 76.1{\%} correct with only one 3D projection coefficient), the 3D data supported more accurate sex classification across nearly the entire range of subspaces tested. For texture, 93.8{\%} correct sex generalization was achieved with a minimum subspace of 20 projection coefficients. For 3D data, 96.9{\%} correct generalization was achieved with 17 projection coefficients. Conclusions. These data highlight the importance of considering the kinds of information available in different face representations with respect to the task demands.",
author = "O'Toole, {A. J.} and T. Vetter and Troje, {N. F.} and Heinrich B{\"u}lthoff",
year = "1996",
month = "2",
day = "15",
language = "English",
volume = "37",
journal = "Investigative Ophthalmology and Visual Science",
issn = "0146-0404",
publisher = "Association for Research in Vision and Ophthalmology Inc.",
number = "3",

}

TY - JOUR

T1 - Classifying faces by sex is more accurate with 3D shape information than with texture

AU - O'Toole, A. J.

AU - Vetter, T.

AU - Troje, N. F.

AU - Bülthoff, Heinrich

PY - 1996/2/15

Y1 - 1996/2/15

N2 - Purpose. We compared the quality of information available in 3D surface models versus texture maps for classifying human faces by sex. Methods. 3D surface models and texture maps from laser scans of 130 human heads (65 male, 65 female) were analyzed with separate principal components analyses (PCAs). Individual principal components (PCs) from the 3D head data characterized complex structural differences between male and female heads. Likewise, individual PCs in the texture analysis contrasted characteristically male vs. female texture patterns (e.g., presence/absence of facial hair shadowing). More formally, representing faces with only their projection coefficients onto the PCs, and varying the subspace from 1 to 50 dimensions, we trained a series of perceptrons to predict the sex of the faces using either the 3D or texture data. A "leave-one-out" technique was applied to measure the generalizability of the perceptrons' sex predictions. Results. While very good sex generalization performance was obtained for both representations, even with very low-dimensional subspaces (e.g., 76.1% correct with only one 3D projection coefficient), the 3D data supported more accurate sex classification across nearly the entire range of subspaces tested. For texture, 93.8% correct sex generalization was achieved with a minimum subspace of 20 projection coefficients. For 3D data, 96.9% correct generalization was achieved with 17 projection coefficients. Conclusions. These data highlight the importance of considering the kinds of information available in different face representations with respect to the task demands.

AB - Purpose. We compared the quality of information available in 3D surface models versus texture maps for classifying human faces by sex. Methods. 3D surface models and texture maps from laser scans of 130 human heads (65 male, 65 female) were analyzed with separate principal components analyses (PCAs). Individual principal components (PCs) from the 3D head data characterized complex structural differences between male and female heads. Likewise, individual PCs in the texture analysis contrasted characteristically male vs. female texture patterns (e.g., presence/absence of facial hair shadowing). More formally, representing faces with only their projection coefficients onto the PCs, and varying the subspace from 1 to 50 dimensions, we trained a series of perceptrons to predict the sex of the faces using either the 3D or texture data. A "leave-one-out" technique was applied to measure the generalizability of the perceptrons' sex predictions. Results. While very good sex generalization performance was obtained for both representations, even with very low-dimensional subspaces (e.g., 76.1% correct with only one 3D projection coefficient), the 3D data supported more accurate sex classification across nearly the entire range of subspaces tested. For texture, 93.8% correct sex generalization was achieved with a minimum subspace of 20 projection coefficients. For 3D data, 96.9% correct generalization was achieved with 17 projection coefficients. Conclusions. These data highlight the importance of considering the kinds of information available in different face representations with respect to the task demands.

UR - http://www.scopus.com/inward/record.url?scp=33750173926&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33750173926&partnerID=8YFLogxK

M3 - Article

AN - SCOPUS:33750173926

VL - 37

JO - Investigative Ophthalmology and Visual Science

JF - Investigative Ophthalmology and Visual Science

SN - 0146-0404

IS - 3

ER -