Combining sensory information to improve visualization

Marc Ernst, Martin Banks, Felix Wichmann, Laurence Maloney, Heinrich Bulthoff

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding)

Abstract

Seemingly effortlessly, the human brain reconstructs the three-dimensional environment surrounding us from the light pattern striking the eyes. This seems to be true across almost all viewing and lighting conditions. One important factor behind this apparent ease is the redundancy of the information provided by the sensory organs. For example, perspective distortions, shading, motion parallax, and the disparity between the two eyes' images are all, at least partly, redundant signals that provide us with information about the three-dimensional layout of the visual scene. Our brain uses all these different sensory signals and combines the available information into a coherent percept. In displays visualizing data, however, the information is often highly reduced and abstracted, which may lead to an altered perception and therefore a misinterpretation of the visualized data. In this panel we will discuss mechanisms involved in the combination of sensory information and their implications for simulations using computer displays, as well as problems resulting from current display technology such as cathode-ray tubes.
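To make the idea of combining redundant sensory signals concrete, the sketch below is an illustrative assumption, not material from the paper: it follows the standard reliability-weighted (maximum-likelihood) cue-integration model, combining two noisy depth cues by weighting each in proportion to its inverse variance. The cue names and numbers are hypothetical.

    import numpy as np

    def combine_cues(estimates, sigmas):
        # Inverse-variance (reliability) weighting of independent Gaussian cue estimates.
        estimates = np.asarray(estimates, dtype=float)
        variances = np.asarray(sigmas, dtype=float) ** 2
        weights = (1.0 / variances) / (1.0 / variances).sum()
        combined = float(weights @ estimates)                           # weighted average of the cues
        combined_sigma = float(np.sqrt(1.0 / (1.0 / variances).sum()))  # lower noise than any single cue
        return combined, combined_sigma

    # Hypothetical example: disparity estimates depth at 10.0 cm (sigma 0.5 cm),
    # texture/perspective at 11.0 cm (sigma 1.0 cm). The combined estimate lies
    # closer to the more reliable cue and is less variable than either cue alone.
    print(combine_cues([10.0, 11.0], [0.5, 1.0]))                       # -> (10.2, ~0.447)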

Original language: English
Title of host publication: Proceedings of the IEEE Visualization Conference
Editors: R. Moorhead, M. Gross, K.I. Joy
Pages: 571-574
Number of pages: 4
Publication status: Published - 2002 Jan 1
Externally published: Yes
Event: VIS 2002, IEEE Visualisation 2002 - Boston, MA, United States
Duration: 2002 Oct 27 - 2002 Nov 1

Other

Other: VIS 2002, IEEE Visualisation 2002
Country: United States
City: Boston, MA
Period: 02/10/27 - 02/11/01

Fingerprint

Visualization
Display devices
Brain
Cathode ray tubes
Redundancy
Lighting
Computer simulation

ASJC Scopus subject areas

  • Computer Science(all)
  • Engineering(all)

Cite this

Ernst, M., Banks, M., Wichmann, F., Maloney, L., & Bulthoff, H. (2002). Combining sensory information to improve visualization. In R. Moorhead, M. Gross, & K. I. Joy (Eds.), Proceedings of the IEEE Visualization Conference (pp. 571-574).

@inproceedings{b6b199830f444c79bd8a806baad6547b,
title = "Combining sensory information to improve visualization",
abstract = "Seemingly effortlessly the human brain reconstructs the three-dimensional environment surrounding us from the light pattern striking the eyes. This seems to be true across almost all viewing and lighting conditions. One important factor for this apparent easiness is the redundancy of information provided by the sensory organs. For example, perspective distortions, shading, motion parallax, or the disparity between the two eyes' images are all, at least partly, redundant signals which provide us with information about the three-dimensional layout of the visual scene. Our brain uses all these different sensory signals and combines the available information into a coherent percept. In displays visualizing data, however, the information is often highly reduced and abstracted, which may lead to an altered perception and therefore a misinterpretation of the visualized data. In this panel we will discuss mechanisms involved in the combination of sensory information and their implications for simulations using computer displays, as well as problems resulting from current display technology such as cathode-ray tubes.",
author = "Marc Ernst and Martin Banks and Felix Wichmann and Laurence Maloney and Heinrich Bulthoff",
year = "2002",
month = "1",
day = "1",
language = "English",
pages = "571--574",
editor = "R. Moorhead and M. Gross and K.I. Joy",
booktitle = "Proceedings of the IEEE Visualization Conference",

}
