Estimation of distortion sensitivity for visual quality prediction using a convolutional neural network

Sebastian Bosse, Sören Becker, Klaus-Robert Müller, Wojciech Samek, Thomas Wiegand

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)


The PSNR and MSE are the computationally simplest and thus most widely used image quality measures, although they correlate only poorly with perceived visual quality. More accurate quality models, which process both the reference and the distorted image, are difficult to integrate into time-critical communication systems where computational complexity is a burden. This paper derives the concept of distortion sensitivity as a property of the reference image that compensates, for a given computationally simple quality model, its potential lack of perceptual relevance. Applied to the PSNR, this compensation leads to a local weighting scheme for the MSE. The local weights are estimated by a deep convolutional neural network and used to improve the PSNR, gracefully shifting the computationally complex processing to the reference image only. The proposed estimation approach is evaluated on the LIVE, TID2013 and CSIQ databases and shows performance comparable or superior to benchmark image quality measures.
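The weighting scheme described above can be sketched in a few lines: per-pixel squared errors are scaled by distortion-sensitivity weights before averaging, and the weighted MSE is then converted to a PSNR-style score. This is a minimal illustration only; in the paper the weights are predicted by a deep CNN from the reference image, whereas here `weights` is simply taken as a given array, and the function name `weighted_psnr` is a hypothetical label, not the paper's API.

```python
import numpy as np

def weighted_psnr(reference, distorted, weights, peak=255.0):
    """PSNR computed from a locally weighted MSE (illustrative sketch).

    `weights` plays the role of the distortion-sensitivity map; in the
    paper it is estimated by a convolutional neural network from the
    reference image alone, so the distorted image never needs the
    computationally complex processing.
    """
    err = (reference.astype(np.float64) - distorted.astype(np.float64)) ** 2
    wmse = np.sum(weights * err) / np.sum(weights)  # weighted MSE
    return 10.0 * np.log10(peak ** 2 / wmse)

# Toy usage: with uniform weights this reduces to the ordinary PSNR.
ref = np.full((4, 4), 128.0)
dist = ref + 2.0                    # constant error of 2 -> MSE = 4
score = weighted_psnr(ref, dist, np.ones_like(ref))
print(round(score, 2))              # -> 42.11
```

With a non-uniform weight map, regions the model deems perceptually sensitive contribute more to the error average, which is exactly how the scheme compensates the PSNR's lack of perceptual relevance.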

Original language: English
Pages (from-to): 54-65
Number of pages: 12
Journal: Digital Signal Processing: A Review Journal
Publication status: Published - Aug 2019


Keywords

  • Deep learning
  • Distortion sensitivity
  • Image quality assessment
  • Perceptual coding
  • Visual perception

ASJC Scopus subject areas

  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Statistics, Probability and Uncertainty
  • Computational Theory and Mathematics
  • Electrical and Electronic Engineering
  • Artificial Intelligence
  • Applied Mathematics


