Robust view synthesis under varying illumination conditions using segment-based disparity estimation

Il Lyong Jung, Chang-Su Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

An intermediate view synthesis scheme for varying illumination conditions is proposed in this work. First, we estimate the disparity map based on cumulative color histograms. Since the cumulative histogram of an image encodes the brightness ranks of pixels, the disparity estimation is robust against varying illumination conditions. More specifically, we divide each image into segments and compute the cumulative histogram of the representative values of these segments. Then, we estimate the disparity map based on the similarity of the cumulative histograms between the stereo images. Second, we transform the colors of the stereo images adaptively using the disparity map. Finally, we synthesize intermediate views from the transformed stereo images and the disparity map. Simulation results demonstrate that, under varying illumination conditions, the proposed algorithm provides better disparity maps and intermediate views than conventional techniques.
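The key property exploited by the abstract — that a pixel's position in the cumulative histogram (its brightness rank) is unchanged by a monotonic illumination change — can be sketched as follows. This is a minimal illustration of the invariance, not the authors' implementation; the function names and the toy image pair are invented for the example, and a real system would operate on segment representative values rather than raw pixels.

```python
import numpy as np

def cumulative_histogram(values, bins=256):
    """Normalized cumulative histogram: maps each intensity level to the
    fraction of pixels at or below it, i.e. the brightness rank."""
    hist, _ = np.histogram(values, bins=bins, range=(0, bins))
    cdf = np.cumsum(hist).astype(np.float64)
    return cdf / cdf[-1]

def rank_transform(image, bins=256):
    """Replace each pixel by its brightness rank (its CDF value).
    Ranks are preserved under any monotonic illumination change, which is
    what makes rank-based matching robust to illumination variation."""
    cdf = cumulative_histogram(image.ravel(), bins)
    return cdf[np.clip(image.astype(int), 0, bins - 1)]

# Toy check: a global brightness scaling (one view darker than the other)
# changes the pixel values but not their ranks.
left = np.array([[10, 50, 200], [30, 120, 250]], dtype=np.uint8)
right = np.clip(left.astype(float) * 0.8, 0, 255).astype(np.uint8)
assert np.allclose(rank_transform(left), rank_transform(right))
```

Because matching is then performed on the similarity of these rank profiles rather than on absolute colors, a brightness shift between the stereo views does not corrupt the disparity estimate.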

Original language: English
Title of host publication: 2012 Conference Handbook - Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012
Publication status: Published - 2012
Event: 2012 4th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012 - Hollywood, CA, United States
Duration: 2012 Dec 3 - 2012 Dec 6

Publication series

Name: 2012 Conference Handbook - Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012

Other

Other: 2012 4th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012
Country: United States
City: Hollywood, CA
Period: 12/12/3 - 12/12/6

ASJC Scopus subject areas

  • Information Systems
