Fast scene change detection using direct feature extraction from MPEG compressed videos

Seong Whan Lee, Young Min Kim, Sung Woo Choi

Research output: Contribution to journal › Article › peer-review

119 Citations (Scopus)


Efficient processing of video data requires a video segmentation technique based on scene change detection. This is a fundamental operation in many digital video applications such as digital libraries and video on demand (VOD). Many of these advanced applications manipulate compressed video signals, so scene change detection is performed by analyzing the video directly in the compressed domain, avoiding the overhead of decompressing the video into individual frames in the pixel domain. In this paper, we propose a fast scene change detection algorithm that extracts features directly from MPEG compressed videos, and we evaluate the technique on sample video data. First, we derive binary edge maps from the AC coefficients of discrete cosine transformed (DCT) blocks. Second, we measure edge orientation, strength, and offset using the correlation between AC coefficients in the derived binary edge maps. Finally, we match consecutive frames using two of these features, edge orientation and strength. This process is made possible by a new mathematical formulation that derives edge information directly from the DCT coefficients. We show that the proposed algorithm is faster or more accurate than previously known scene change detection algorithms.
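The core idea of DCT-domain edge estimation can be illustrated with a minimal sketch. It does not reproduce the paper's formulation (the binary edge maps, offset measure, and frame-matching step are omitted); it only shows the standard approximation that the lowest AC coefficients of an 8×8 DCT block encode edge strength and orientation, with the coefficients AC(0,1) and AC(1,0) responding to horizontal and vertical frequency respectively. The function name and thresholds are illustrative, not from the paper.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows are basis vectors)
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def block_edge_features(block):
    """Approximate edge strength and orientation of an 8x8 pixel block
    from its two lowest AC DCT coefficients (a simplified stand-in for
    the paper's edge-map derivation)."""
    D = dct_matrix(8)
    coeffs = D @ block @ D.T            # 2-D DCT of the block
    ac01 = coeffs[0, 1]                 # horizontal frequency -> vertical edge content
    ac10 = coeffs[1, 0]                 # vertical frequency   -> horizontal edge content
    strength = np.hypot(ac01, ac10)     # edge magnitude estimate
    orientation = np.arctan2(ac10, ac01)  # edge direction estimate (radians)
    return strength, orientation
```

A block containing a sharp vertical step edge yields a large AC(0,1) and near-zero AC(1,0), so the orientation lands near 0 or π, while a flat block gives strength near zero; per-block features like these can then be compared between consecutive frames without full decompression.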

Original language: English
Pages (from-to): 240-254
Number of pages: 15
Journal: IEEE Transactions on Multimedia
Issue number: 4
Publication status: Published - Dec 2000

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering

