Real-time tracking of visually attended objects in virtual environments and its application to LOD

Sungkil Lee, Jeonghyun Kim, Seungmoon Choi

Research output: Contribution to journal › Article

34 Citations (Scopus)

Abstract

This paper presents a real-time framework for computationally tracking the objects visually attended by the user while navigating interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among the candidates in the object saliency map. The framework was implemented on the GPU, achieving computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing the objects identified as visually attended by the framework with actual human gaze data collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
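The core idea the abstract describes can be sketched in a few lines: combine a per-object bottom-up saliency score with a top-down context weight, pick the object with the highest combined score, and map that score to a level of detail. The sketch below is a hypothetical illustration, not the authors' implementation; the object names, the linear weighting scheme, and the `lod_level` mapping are all assumptions made for demonstration.

```python
from dataclasses import dataclass


@dataclass
class SceneObject:
    name: str
    bottom_up_saliency: float  # from the object saliency map, in [0, 1]
    top_down_weight: float     # inferred from spatial/temporal behavior, in [0, 1]


def attention_score(obj: SceneObject, w_bu: float = 0.5, w_td: float = 0.5) -> float:
    """Linearly combine bottom-up and top-down cues (weights are assumptions)."""
    return w_bu * obj.bottom_up_saliency + w_td * obj.top_down_weight


def most_attended(objects: list[SceneObject]) -> SceneObject:
    """Return the most plausibly attended object among the candidates."""
    return max(objects, key=attention_score)


def lod_level(score: float, max_level: int = 4) -> int:
    """Map attention to geometric detail: level 0 is finest, max_level coarsest."""
    return round((1.0 - score) * max_level)


# A visually salient lamp loses to a door that the user's behavior suggests
# is the navigation goal -- the top-down context overrides raw saliency.
scene = [
    SceneObject("lamp", bottom_up_saliency=0.9, top_down_weight=0.2),
    SceneObject("door", bottom_up_saliency=0.4, top_down_weight=0.8),
    SceneObject("chair", bottom_up_saliency=0.3, top_down_weight=0.3),
]
target = most_attended(scene)
print(target.name, lod_level(attention_score(target)))  # door 2
```

The point of the example is the interaction between the two cue types: with saliency alone, the lamp would win; adding the goal-directed weight shifts attention to the door, which is then rendered at a finer LOD than the rest of the scene.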

Original language: English
Article number: 4531740
Pages (from-to): 6-19
Number of pages: 14
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 15
Issue number: 1
DOI: 10.1109/TVCG.2008.82
Publication status: Published - 2009 Jan 1

Keywords

  • Bottom-up feature
  • Level of detail
  • Saliency map
  • Top-down context
  • Virtual environment
  • Visual attention

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing

Cite this

Real-time tracking of visually attended objects in virtual environments and its application to LOD. / Lee, Sungkil; Kim, Jeonghyun; Choi, Seungmoon.

In: IEEE Transactions on Visualization and Computer Graphics, Vol. 15, No. 1, 4531740, 01.01.2009, p. 6-19.

@article{24554c7465b4420db86dc87e08b5192c,
title = "Real-time tracking of visually attended objects in virtual environments and its application to LOD",
abstract = "This paper presents a real-time framework for computationally tracking the objects visually attended by the user while navigating interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among the candidates in the object saliency map. The framework was implemented on the GPU, achieving computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing the objects identified as visually attended by the framework with actual human gaze data collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.",
keywords = "Bottom-up feature, Level of detail, Saliency map, Top-down context, Virtual environment, Visual attention",
author = "Sungkil Lee and Jeonghyun Kim and Seungmoon Choi",
year = "2009",
month = "1",
day = "1",
doi = "10.1109/TVCG.2008.82",
language = "English",
volume = "15",
pages = "6--19",
journal = "IEEE Transactions on Visualization and Computer Graphics",
issn = "1077-2626",
publisher = "IEEE Computer Society",
number = "1",

}

TY - JOUR

T1 - Real-time tracking of visually attended objects in virtual environments and its application to LOD

AU - Lee, Sungkil

AU - Kim, Jeonghyun

AU - Choi, Seungmoon

PY - 2009/1/1

Y1 - 2009/1/1

N2 - This paper presents a real-time framework for computationally tracking the objects visually attended by the user while navigating interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among the candidates in the object saliency map. The framework was implemented on the GPU, achieving computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing the objects identified as visually attended by the framework with actual human gaze data collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.

AB - This paper presents a real-time framework for computationally tracking the objects visually attended by the user while navigating interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among the candidates in the object saliency map. The framework was implemented on the GPU, achieving computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing the objects identified as visually attended by the framework with actual human gaze data collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.

KW - Bottom-up feature

KW - Level of detail

KW - Saliency map

KW - Top-down context

KW - Virtual environment

KW - Visual attention

UR - http://www.scopus.com/inward/record.url?scp=59449087750&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=59449087750&partnerID=8YFLogxK

U2 - 10.1109/TVCG.2008.82

DO - 10.1109/TVCG.2008.82

M3 - Article

VL - 15

SP - 6

EP - 19

JO - IEEE Transactions on Visualization and Computer Graphics

JF - IEEE Transactions on Visualization and Computer Graphics

SN - 1077-2626

IS - 1

M1 - 4531740

ER -