An integrated VR platform for 3D and image based models: A step toward interactive image based virtual environments

Jayoung Yoon, Gerard Jounghyun Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for the various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance at which the user starts to perceive the object's internal depth.
Also, during interaction, a 3D representation is used regardless of the viewing distance, if one exists. Before rendering, objects are conservatively culled against the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
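The selection policy the abstract describes (3D model during interaction, image-based impostors otherwise, switching at the distance where internal depth becomes perceptible) can be sketched as a hybrid scene graph node. This is an illustrative reconstruction, not the paper's implementation: the class name, the `far_range` threshold, and the simplified depth-perception formula are all assumptions.

```python
import math

class HybridObjectNode:
    """Hypothetical scene graph node holding a 3D model plus image-based stand-ins."""

    def __init__(self, model_3d, billboard, env_map, depth_extent):
        self.model_3d = model_3d          # full geometry (may be None)
        self.billboard = billboard        # image-based impostor for mid range
        self.env_map = env_map            # far-range environment-map entry
        self.depth_extent = depth_extent  # object's internal depth, in meters

    def depth_perception_threshold(self, acuity_rad=3e-4):
        # Distance beyond which the viewer can no longer resolve the object's
        # internal depth. A simplified stand-in for the paper's derived
        # switching rule, not the actual derivation.
        return math.sqrt(self.depth_extent / acuity_rad)

    def select_representation(self, view_distance, interacting, far_range=500.0):
        # During interaction, prefer the 3D model regardless of distance.
        if interacting and self.model_3d is not None:
            return self.model_3d
        # Within the depth-perception threshold, internal depth is visible,
        # so use the 3D model if one exists.
        if view_distance < self.depth_perception_threshold():
            return self.model_3d or self.billboard
        # Beyond the threshold but within range: a flat impostor suffices.
        if view_distance < far_range:
            return self.billboard
        # Very far away: fold the object into the environment map.
        return self.env_map
```

A renderer traversing the graph would call `select_representation` per node each frame; conservative view-frustum culling, as in the abstract, would test the bounding volume of the largest representation before this selection is made.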

Original language: English
Title of host publication: Proceedings of SPIE - The International Society for Optical Engineering
Editors: Z. Pan, J. Shi
Pages: 9-16
Number of pages: 8
Volume: 4756
DOIs: 10.1117/12.497661
Publication status: Published - 2002
Externally published: Yes
Event: Third International Conference on Virtual Reality and Its Application in Industry - Hangzhou, China
Duration: 2002 Apr 9 to 2002 Apr 12



Keywords

  • Depth perception
  • Image based models
  • Interaction
  • Mixed rendering
  • Scene graph

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Condensed Matter Physics

Cite this

Yoon, J., & Kim, G. J. (2002). An integrated VR platform for 3D and image based models: A step toward interactive image based virtual environments. In Z. Pan, & J. Shi (Eds.), Proceedings of SPIE - The International Society for Optical Engineering (Vol. 4756, pp. 9-16) https://doi.org/10.1117/12.497661
