Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

Lili Tcheang, Heinrich Bülthoff, Neil Burgess

Research output: Contribution to journal › Article

48 Citations (Scopus)

Abstract

Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.
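
The abstract pits two accounts against each other: one in which visual and interoceptive turn estimates merge into a single multimodal representation (so adaptation to a visual rotation gain carries over into walking in darkness), and one in which the representations stay separate (so homing in darkness reflects only unscaled interoception). As a rough sketch of why the two accounts predict different homing behaviour (this is not the authors' published model; the linear combination rule, the weights w_vis and w_int, the gain value, and the path geometry are assumptions made here for illustration), a minimal dead-reckoning simulation:

import numpy as np

def path_integrate(legs, turns_deg, rotation_gain=1.0):
    """Dead-reckon an outbound path in the internally registered frame
    and return the homing vector from the end point back to the start.
    rotation_gain scales each registered body turn; 1.0 is veridical."""
    heading, pos = 0.0, np.zeros(2)
    for leg, turn in zip(legs, turns_deg):
        heading += np.deg2rad(turn) * rotation_gain  # registered turn
        pos += leg * np.array([np.cos(heading), np.sin(heading)])
    return -pos

# Two-leg outbound path: walk 3 m, turn 90 degrees, walk 3 m.
legs, turns = [3.0, 3.0], [0.0, 90.0]
visual_gain = 1.5                  # visual turn = 1.5 x body turn
w_vis, w_int = 0.5, 0.5            # assumed combination weights

# (a) Single multimodal representation: after adapting to the visual
#     gain, the combined gain persists even when walking in darkness.
gain_multimodal = w_vis * visual_gain + w_int * 1.0
# (b) Separate representations: homing in darkness can draw only on
#     interoception, so the visual gain never enters.
gain_separate = 1.0

for label, g in [("multimodal", gain_multimodal), ("separate", gain_separate)]:
    home = path_integrate(legs, turns, rotation_gain=g)
    print(f"{label:10s} homing vector {home}, length {np.linalg.norm(home):.2f} m")

Under these illustrative settings the two accounts call for measurably different homing responses after gain adaptation; per the abstract, the observed behaviour matched the single-representation prediction.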

Original language: English
Pages (from-to): 1152-1157
Number of pages: 6
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 108
Issue number: 3
DOIs: 10.1073/pnas.1011843108
Publication status: Published - 2011 Jan 18

ASJC Scopus subject areas

  • General

Cite this

@article{ca2027ad653c49668902ebd598cbf3e0,
title = "Visual influence on path integration in darkness indicates a multimodal representation of large-scale space",
abstract = "Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.",
author = "Lili Tcheang and Heinrich B{\"u}lthoff and Neil Burgess",
year = "2011",
month = "1",
day = "18",
doi = "10.1073/pnas.1011843108",
language = "English",
volume = "108",
pages = "1152--1157",
journal = "Proceedings of the National Academy of Sciences of the United States of America",
issn = "0027-8424",
number = "3",
}

TY - JOUR

T1 - Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

AU - Tcheang, Lili

AU - Bülthoff, Heinrich

AU - Burgess, Neil

PY - 2011/1/18

Y1 - 2011/1/18

N2 - Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.

AB - Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.

UR - http://www.scopus.com/inward/record.url?scp=79551641197&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=79551641197&partnerID=8YFLogxK

U2 - 10.1073/pnas.1011843108

DO - 10.1073/pnas.1011843108

M3 - Article

C2 - 21199934

AN - SCOPUS:79551641197

VL - 108

SP - 1152

EP - 1157

JO - Proceedings of the National Academy of Sciences of the United States of America

JF - Proceedings of the National Academy of Sciences of the United States of America

SN - 0027-8424

IS - 3

ER -