Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

Lili Tcheang, Heinrich Bülthoff, Neil Burgess

Research output: Contribution to journal › Article

49 Citations (Scopus)

Abstract

Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.
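The abstract's contrast between a single multimodal representation and separate visual/interoceptive influences can be illustrated with a minimal sketch. This is not the authors' actual quantitative model; the linear cue weighting `w_visual` and the specific function names are illustrative assumptions only:

```python
def combined_turn_estimate(physical_turn, visual_gain, w_visual=0.5):
    """Multimodal-model sketch: the perceived rotation is a weighted
    average of the visual signal (scaled by the experimentally
    manipulated gain) and the interoceptive signal (the physical turn).
    All quantities in degrees; w_visual is a free weighting parameter."""
    visual_signal = visual_gain * physical_turn
    interoceptive_signal = physical_turn
    return w_visual * visual_signal + (1 - w_visual) * interoceptive_signal

def separate_model_estimate(physical_turn):
    """Separate-influences sketch: with no vision (darkness), path
    integration relies solely on the interoceptive signal, so the
    visual gain manipulation has no effect."""
    return physical_turn
```

Under this toy formulation, a visual rotation gain of 2.0 shifts the multimodal estimate of a 90° physical turn to 135° (with equal weights), whereas the separate-influences model predicts 90° regardless of gain, which mirrors the dissociation the experiment tests.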

Original language: English
Pages (from-to): 1152-1157
Number of pages: 6
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 108
Issue number: 3
DOIs
Publication status: Published - 2011 Jan 18

ASJC Scopus subject areas

  • General