Frame loss concealment for stereoscopic video based on inter-view similarity of motion and intensity difference

Tae Young Chung, Sanghoon Sull, Chang-Su Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

16 Citations (Scopus)

Abstract

An efficient frame loss concealment algorithm for stereoscopic video based on the inter-view similarity of motion vectors and intensity differences is proposed in this work. Suppose that a frame at time t in the right view is lost during the transmission. To conceal its loss, we use the information in the previous frame at time t - 1 in the right view and the frames at t - 1 and t in the left view. More specifically, we first estimate the disparity vector field of the previous frame to find the matching pixels in the left view, and determine the motion vectors and the intensity differences of those matching pixels. By projecting those motion vectors and intensity differences onto the right view, we recover the lost frame. Simulation results demonstrate that the proposed algorithm provides significantly better performance than conventional algorithms.
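The three steps described in the abstract (disparity estimation against the left view, motion and intensity-difference extraction in the left view, and projection onto the right view) can be sketched roughly as follows. This is a simplified block-based toy illustration, not the authors' implementation; the function name, block size, and search ranges are assumptions, and the search uses plain sum-of-absolute-differences matching.

```python
import numpy as np

def conceal_lost_frame(right_prev, left_prev, left_curr,
                       block=8, d_range=4, m_range=2):
    """Toy sketch of inter-view frame loss concealment (hypothetical).

    The right-view frame at time t is lost; we use the right frame at
    t-1 and the left frames at t-1 and t.  For each block of right_prev:
      1) estimate its disparity, i.e. find the matching block in left_prev;
      2) find that left block's motion vector from t-1 to t and its
         intensity difference;
      3) project the motion vector and intensity difference onto the
         right view to predict the lost block.
    """
    h, w = right_prev.shape
    recovered = right_prev.astype(np.int32).copy()   # fallback: frame copy
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = right_prev[y:y+block, x:x+block].astype(np.int32)
            # 1) disparity estimation: horizontal SAD search in left_prev
            best_d, best_cost = 0, np.inf
            for d in range(-d_range, d_range + 1):
                if 0 <= x + d and x + d + block <= w:
                    cand = left_prev[y:y+block, x+d:x+d+block].astype(np.int32)
                    cost = np.abs(ref - cand).sum()
                    if cost < best_cost:
                        best_cost, best_d = cost, d
            lx = x + best_d
            lblock = left_prev[y:y+block, lx:lx+block].astype(np.int32)
            # 2) motion of the matched left block from t-1 to t
            best_mv, best_cost = (0, 0), np.inf
            for my in range(-m_range, m_range + 1):
                for mx in range(-m_range, m_range + 1):
                    yy, xx = y + my, lx + mx
                    if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                        cand = left_curr[yy:yy+block, xx:xx+block].astype(np.int32)
                        cost = np.abs(lblock - cand).sum()
                        if cost < best_cost:
                            best_cost, best_mv = cost, (my, mx)
            my, mx = best_mv
            # intensity difference of the matched left content, t-1 -> t
            diff = (left_curr[y+my:y+my+block, lx+mx:lx+mx+block].astype(np.int32)
                    - lblock)
            # 3) project motion vector and intensity difference onto the
            #    right view to reconstruct the lost block
            ty, tx = y + my, x + mx
            if 0 <= ty and ty + block <= h and 0 <= tx and tx + block <= w:
                recovered[ty:ty+block, tx:tx+block] = ref + diff
    return np.clip(recovered, 0, 255).astype(np.uint8)
```

For a static scene (left view unchanged between t-1 and t), the estimated motion is zero and the intensity difference vanishes, so the sketch degenerates to copying the previous right frame, which is the expected fallback behavior.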

Original language: English
Title of host publication: 2010 IEEE International Conference on Image Processing, ICIP 2010 - Proceedings
Pages: 441-444
Number of pages: 4
DOIs
Publication status: Published - 2010
Event: 2010 17th IEEE International Conference on Image Processing, ICIP 2010 - Hong Kong, Hong Kong
Duration: 2010 Sep 26 - 2010 Sep 29

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Other

Other: 2010 17th IEEE International Conference on Image Processing, ICIP 2010
Country: Hong Kong
City: Hong Kong
Period: 10/9/26 - 10/9/29

Keywords

  • Inter-view similarity
  • Disparity estimation
  • Frame loss concealment
  • Stereoscopic video

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing
