MPI CyberMotion Simulator: Implementation of a novel motion simulator to investigate multisensory path integration in three dimensions

Michael Barnett-Cowan, Tobias Meilinger, Manuel Vidal, Harald Teufel, Heinrich Bülthoff

Research output: Contribution to journal › Article

18 Citations (Scopus)

Abstract

Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point [1]. Humans can perform path integration based exclusively on visual [2,3], auditory [4], or inertial cues [5]. However, with multiple cues present, inertial cues (particularly kinaesthetic ones) seem to dominate [6,7]. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones [5]. Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see [3] for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion, restricted mainly to the horizontal plane. Here we take advantage of a motion simulator [8,9] with a large range of motion to assess whether path integration is similar between the horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal, and frontal planes consisted of two segments (first: 0.4 m; second: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane than in the vertical planes. In the frontal plane, observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimation of the angle moved through in the horizontal plane and overestimation in the vertical planes suggests that the neural representation of self-motion through space is asymmetric, which may relate to the fact that humans experience movement mostly within the horizontal plane.
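
To make the geometry of the pointing task concrete, the sketch below (Python) computes the correct homing direction for the two-segment trajectories described above, plus the duration each segment would take under an assumed motion profile. The segment lengths (0.4 m, 1 m), turn angles (45°, 90°), and peak acceleration (0.24 m/s²) come from the abstract; the coordinate conventions, function names, and the single-cycle sinusoidal acceleration profile are illustrative assumptions, not the authors' implementation.

import numpy as np

def homing_angle(seg1=0.4, seg2=1.0, turn_deg=90.0):
    """Signed angle (degrees) between the final heading and the direction
    back to the start, after translating seg1 m, turning by turn_deg
    within the plane of motion, and translating seg2 m."""
    turn = np.radians(turn_deg)
    # End point: first segment along +x, second segment rotated by the turn.
    end = np.array([seg1 + seg2 * np.cos(turn), seg2 * np.sin(turn)])
    heading = np.array([np.cos(turn), np.sin(turn)])  # heading after the turn
    back = -end                                       # direction back to origin
    ang = np.degrees(np.arctan2(back[1], back[0])
                     - np.arctan2(heading[1], heading[0]))
    return (ang + 180.0) % 360.0 - 180.0              # wrap to [-180, 180)

def segment_duration(distance, peak_acc=0.24):
    """Segment duration under an ASSUMED single-cycle sinusoidal profile
    a(t) = A*sin(2*pi*t/T), which starts and ends at rest and covers
    distance = A*T**2 / (2*pi); the abstract specifies only the peak."""
    return np.sqrt(2.0 * np.pi * distance / peak_acc)

for turn in (45.0, 90.0):
    print(f"{turn:2.0f}° turn: correct homing angle = {homing_angle(turn_deg=turn):.1f}°")
print(f"segment durations: {segment_duration(0.4):.1f} s and {segment_duration(1.0):.1f} s")

Under these assumptions the correct responses lie about 168° (45° turn) and 158° (90° turn) away from the final heading, so any systematic under- or overestimation of the turn appears directly as a bias in the pointed angle.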

Original language: English
Article number: e3436
Journal: Journal of Visualized Experiments
ISSN: 1940-087X
Publisher: MYJoVE Corporation
Issue number: 63
DOI: 10.3791/3436
PubMed ID: 22617497
Publication status: Published - 10 May 2012

Keywords

  • Cybernetics
  • Issue 63
  • Motion simulator
  • Multisensory integration
  • Neuroscience
  • Path integration
  • Robotics
  • Space perception
  • Vestibular
  • Vision

ASJC Scopus subject areas

  • Biochemistry, Genetics and Molecular Biology (all)
  • Chemical Engineering (all)
  • Immunology and Microbiology (all)
  • Medicine (all)
  • Neuroscience (all)

Cite this

MPI CyberMotion Simulator: Implementation of a novel motion simulator to investigate multisensory path integration in three dimensions. / Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich.

In: Journal of Visualized Experiments, No. 63, e3436, 10.05.2012.

