An efficient walkthrough from two images using view morphing and spidery mesh interface

Hang Shin Cho, Chang-Hun Kim, Seiichi Nishihara

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

This paper presents an efficient walkthrough animation built from two images of the same scene. View morphing produces animations easily and quickly using only 2D transitions, but it restricts the camera path to the line between the two views; Tour Into the Picture (TIP) provides the spidery mesh interface to recover a simplified 3D structure from a single image, but its foreground objects lose realism when the viewpoint moves laterally. By combining the advantages of these two image-based techniques, this paper proposes a new virtual navigation technique that yields natural scene transformations when the viewpoint moves laterally as well as in depth. In our method, view morphing is applied only to foreground objects, while the background scene, which is perceived less attentively, is mapped onto a cube-like 3D model as in TIP; this saves the cost of detailed 3D reconstruction and improves visual realism at the same time. To do this, we define a new camera transformation between the two images from the relationship between the spidery mesh transformation and its corresponding view change. The resulting animation demonstrates the efficiency of our method.
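The foreground handling the abstract describes follows the general view-morphing recipe: prewarp the two images to parallel views, linearly interpolate corresponding points and colors, then postwarp to the desired in-between view. The sketch below illustrates only the middle interpolation step under that parallel-view assumption; the function name and array layout are hypothetical and not taken from the paper.

```python
import numpy as np

def interpolate_parallel_views(points0, points1, colors0, colors1, s):
    """Minimal view-morphing step for two *prewarped* (parallel) views.

    points0, points1 : (N, 2) arrays of corresponding image points
    colors0, colors1 : (N, 3) arrays of their pixel colors
    s                : interpolation parameter in [0, 1]
                       (s = 0 gives the first view, s = 1 the second)
    """
    # For parallel views, linear interpolation of corresponding image
    # positions corresponds to a physically valid in-between perspective.
    points_s = (1.0 - s) * points0 + s * points1
    # Cross-dissolve the colors of corresponding points.
    colors_s = (1.0 - s) * colors0 + s * colors1
    return points_s, colors_s
```

In the combined method, an interpolation like this would be applied only to the segmented foreground, while the background is texture-mapped onto the cube-like model recovered via the spidery mesh and rendered directly from the moving viewpoint.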

Original language: English
Title of host publication: Proceedings - International Conference on Pattern Recognition
Pages: 139-142
Number of pages: 4
Volume: 15
Edition: 3
Publication status: Published - 2000

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Vision and Pattern Recognition
  • Hardware and Architecture


  • Cite this

Cho, H. S., Kim, C.-H., & Nishihara, S. (2000). An efficient walkthrough from two images using view morphing and spidery mesh interface. In Proceedings - International Conference on Pattern Recognition (3rd ed., Vol. 15, pp. 139-142).