Storing upright turns

How visual and vestibular cues interact during the encoding and recalling process

Manuel Vidal, Heinrich Bülthoff

Research output: Contribution to journal › Article

11 Citations (Scopus)

Abstract

Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is still unclear how the different senses that provide information about our own movements combine to yield a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (unnoticed conflict). They were then asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow those of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue it could be combined with during encoding. Therefore, turns in each modality, visual and vestibular, are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that when both visual and vestibular cues are available, they are combined to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails in this rotation displacement task when a matching problem is introduced.
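
The third and fourth findings echo the standard maximum-likelihood (MLE) account of cue combination, in which each cue is weighted by its reliability and the combined estimate is both intermediate and less variable than either cue alone. The sketch below illustrates that textbook model, together with a Gaussian angular-velocity profile of the kind used in the presentation phase; it is not the authors' analysis, and all numeric values (unimodal estimates, standard deviations, turn angle, duration) are hypothetical.

    # Illustrative sketch only: textbook MLE cue combination and a Gaussian
    # angular-velocity profile. Numbers are hypothetical, not from the paper.
    import numpy as np

    def mle_combination(est_vis, sd_vis, est_body, sd_body):
        """Reliability-weighted average of two noisy estimates of the same turn."""
        w_vis = sd_body**2 / (sd_vis**2 + sd_body**2)       # visual weight grows as body noise grows
        est = w_vis * est_vis + (1.0 - w_vis) * est_body    # lies between the unimodal estimates
        var = (sd_vis**2 * sd_body**2) / (sd_vis**2 + sd_body**2)  # smaller than either unimodal variance
        return est, np.sqrt(var)

    def gaussian_velocity_profile(target_deg, duration_s=3.0, dt=0.01):
        """Yaw velocity with a Gaussian time course whose integral equals target_deg."""
        t = np.arange(0.0, duration_s, dt)
        shape = np.exp(-0.5 * ((t - duration_s / 2) / (duration_s / 6)) ** 2)
        area = shape.sum() * dt                  # approximate integral of the shape
        velocity = shape * target_deg / area     # deg/s, scaled so displacement = target_deg
        return t, velocity

    # Hypothetical unimodal reproductions of a 90 deg turn
    est, sd = mle_combination(est_vis=80.0, sd_vis=8.0, est_body=100.0, sd_body=12.0)
    print(f"bimodal estimate ~ {est:.1f} deg, sd ~ {sd:.1f} deg")   # ~86.2 deg, sd ~6.7 deg

    t, v = gaussian_velocity_profile(90.0)
    print(f"peak velocity ~ {v.max():.1f} deg/s, total turn ~ {v.sum() * (t[1] - t[0]):.1f} deg")

In this model the bimodal estimate falls between the two unimodal ones and its standard deviation is lower than either, which is the "reduced variance" pattern described in the abstract.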

Original language: English
Pages (from-to): 37-49
Number of pages: 13
Journal: Experimental Brain Research
Volume: 200
Issue number: 1
DOI: 10.1007/s00221-009-1980-5
Publication status: Published - 2010 Jan 1
Externally published: Yes

Keywords

  • Multisensory integration
  • Self-motion
  • Spatial orientation
  • Vestibular
  • Yaw rotations

ASJC Scopus subject areas

  • Neuroscience (all)

Cite this

Storing upright turns: How visual and vestibular cues interact during the encoding and recalling process. / Vidal, Manuel; Bülthoff, Heinrich.

In: Experimental Brain Research, Vol. 200, No. 1, 01.01.2010, p. 37-49.

Research output: Contribution to journal › Article

@article{dff10ef4738f4235a89977105d403e19,
title = "Storing upright turns: How visual and vestibular cues interact during the encoding and recalling process",
abstract = "Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is not yet very clear how different senses providing information about our own movements combine in order to provide a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (unnoticed conflict). Then they were asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow that of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue that it could be combined with during the encoding. Therefore, turns in each modality, visual, and vestibular are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that with both visual and vestibular cues available, these combine in order to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced.",
keywords = "Multisensory integration, Self-motion, Spatial orientation, Vestibular, Yaw rotations",
author = "Manuel Vidal and Heinrich Bulthoff",
year = "2010",
month = "1",
day = "1",
doi = "10.1007/s00221-009-1980-5",
language = "English",
volume = "200",
pages = "37--49",
journal = "Experimental Brain Research",
issn = "0014-4819",
publisher = "Springer Verlag",
number = "1",

}
