Simple Yet Effective Way for Improving the Performance of Lossy Image Compression

Yoon Jae Yeo, Yong Goo Shin, Min Cheol Sagong, Seung Wook Kim, Sung Jea Ko

Research output: Contribution to journal › Article

Abstract

Lossy image compression methods based on deep neural networks (DNNs) include a quantization step between the encoder and decoder networks as an essential part of increasing the compression rate. However, the quantization operation impedes the flow of gradients and often disturbs the optimal learning of the encoder, which results in distortion in the reconstructed images. To alleviate this problem, this paper presents a simple yet effective way to enhance the performance of lossy image compression without imposing training overhead or modifying the original network architecture. In the proposed method, we utilize an auxiliary branch, called a shortcut, that directly connects the encoder and decoder. Since the shortcut does not include the quantization process, it supports the optimal learning of the encoder by allowing accurate gradients to flow. Furthermore, to assist the decoder, which must handle the additional feature maps obtained via the shortcut, we also propose a residual refinement unit (RRU) following the quantizer. The experimental results show that an image compression network trained with the proposed method remarkably improves performance in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and multi-scale structural similarity (MS-SSIM).
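The core observation of the abstract — that rounding has zero gradient almost everywhere, so an auxiliary branch bypassing the quantizer restores a useful gradient signal to the encoder — can be illustrated with a toy scalar model. This is a hypothetical sketch, not the paper's implementation: the one-weight "encoder", the identity "decoder", and the simple averaging fusion of the two branches are all assumptions made for illustration.

```python
import numpy as np

# Toy autoencoder: encoder y = w * x, quantizer round(y), decoder = identity.
# Reconstruction loss is squared error against the input x.

def loss_quantized_only(w, x=1.0):
    y = w * x                    # encoder output
    q = np.round(y)              # quantizer: piecewise constant, gradient 0 a.e.
    return (q - x) ** 2          # reconstruction loss

def loss_with_shortcut(w, x=1.0):
    y = w * x
    q = np.round(y)              # quantized branch
    s = y                        # shortcut branch: bypasses the quantizer
    recon = 0.5 * (q + s)        # decoder fuses both branches (assumed fusion)
    return (recon - x) ** 2

def finite_diff(f, w, h=1e-4):
    # Central finite difference as a stand-in for backpropagated gradients.
    return (f(w + h) - f(w - h)) / (2 * h)

# At w = 0.3 the quantized-only loss is locally flat (round(0.3) stays 0),
# so the encoder receives no gradient; the shortcut path restores one.
g_quantized = finite_diff(loss_quantized_only, 0.3)
g_shortcut = finite_diff(loss_with_shortcut, 0.3)
print(g_quantized)  # 0.0
print(g_shortcut)   # nonzero, so the encoder weight can be updated
```

The same effect is why, at training time, the decoder in the paper must handle extra feature maps from the shortcut: the unquantized branch exists precisely so that gradients reach the encoder unimpeded.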

Original language: English
Article number: 9044407
Pages (from-to): 530-534
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 27
DOIs
Publication status: Published - 2020

Keywords

  • Convolutional neural network
  • Deep learning
  • Image compression

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
