The POETICON enacted scenario corpus: A tool for human and computational experiments on action understanding

Christian Wallraven, Michael Schultze, Betty Mohler, Argiro Vatakis, Katerina Pastra

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

11 Citations (Scopus)

Abstract

A good data corpus lies at the heart of progress in both perceptual/cognitive science and computer vision. While a few datasets deal with simple actions, creating a realistic corpus of complex, long action sequences that also contains human-human interactions has, to our knowledge, not been attempted so far. Here, we introduce such a corpus for (inter)action understanding that contains six everyday scenarios taking place in a kitchen/living-room setting. Each scenario was acted out several times by different pairs of actors and contains simple object interactions as well as spoken dialogue. In addition, each scenario was recorded with several HD cameras as well as with motion capture of the actors and several key objects. Access to the motion-capture data allows not only for kinematic analyses but also for the production of realistic animations in which all aspects of the scenario can be fully controlled. We also present results from a first series of perceptual experiments showing how humans are able to infer scenario classes, as well as individual actions and objects, from computer animations of everyday situations. These results can serve as a benchmark for future computational approaches that begin to take on complex action understanding.

Original language: English
Title of host publication: 2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011
Pages: 484-491
Number of pages: 8
DOI: 10.1109/FG.2011.5771446
Publication status: Published - 2011 Jun 17
Event: 2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011 - Santa Barbara, CA, United States
Duration: 2011 Mar 21 - 2011 Mar 25

Other

Other: 2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011
Country: United States
City: Santa Barbara, CA
Period: 11/3/21 - 11/3/25

Fingerprint

  • Animation
  • Kitchens
  • Computer vision
  • Data acquisition
  • Kinematics
  • Experiments
  • Cameras

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition

Cite this

Wallraven, C., Schultze, M., Mohler, B., Vatakis, A., & Pastra, K. (2011). The POETICON enacted scenario corpus: A tool for human and computational experiments on action understanding. In 2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011 (pp. 484-491). [5771446] https://doi.org/10.1109/FG.2011.5771446

@inproceedings{e2cc4905e48846d583abf59f805fb4d1,
title = "The POETICON enacted scenario corpus: A tool for human and computational experiments on action understanding",
author = "Christian Wallraven and Michael Schultze and Betty Mohler and Argiro Vatakis and Katerina Pastra",
year = "2011",
month = "6",
day = "17",
doi = "10.1109/FG.2011.5771446",
language = "English",
isbn = "9781424491407",
pages = "484--491",
booktitle = "2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011",

}
