Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment

Sebastian Bosse, Dominique Maniry, Klaus Müller, Thomas Wiegand, Wojciech Samek

Research output: Contribution to journal › Article

77 Citations (Scopus)

Abstract

We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end-to-end and comprises 10 convolutional layers and 5 pooling layers for feature extraction, and 2 fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that (1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting and (2) it allows for joint learning of local quality and local weights, i.e., relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other types of prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge Database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating a high robustness of the learned features.
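
The architecture outlined in the abstract (10 convolutional layers, 5 pooling layers, and 2 fully connected regression layers, with jointly learned local quality and local weights) can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch sketch of the no-reference setting only, not the authors' released code: five blocks of two 3x3 convolutions followed by max pooling, a two-layer fully connected head that regresses patch-wise quality, and a parallel two-layer head that predicts the positive local weights used for weighted-average pooling into a global score. The channel widths, the 32x32 patch size, and the softplus weight activation are illustrative assumptions rather than details taken from the record.

```python
# Hypothetical sketch only; layer widths, patch size and the weight activation
# are assumptions for illustration, not taken from the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchIQANet(nn.Module):
    """Patch-based NR-IQA network: 10 conv + 5 pool layers, two FC heads."""

    def __init__(self):
        super().__init__()

        def block(c_in, c_out):
            # two 3x3 convolutions followed by 2x2 max pooling
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )

        # five blocks -> 10 convolutional and 5 pooling layers in total
        self.features = nn.Sequential(
            block(3, 32), block(32, 64), block(64, 128),
            block(128, 256), block(256, 512),
        )
        # two fully connected layers regressing local (patch) quality ...
        self.quality_head = nn.Sequential(
            nn.Linear(512, 512), nn.ReLU(inplace=True), nn.Linear(512, 1))
        # ... and a parallel head predicting the local pooling weight
        self.weight_head = nn.Sequential(
            nn.Linear(512, 512), nn.ReLU(inplace=True), nn.Linear(512, 1))

    def forward(self, patches):
        # patches: (num_patches, 3, 32, 32) crops sampled from one image
        f = self.features(patches).flatten(1)        # (num_patches, 512)
        q = self.quality_head(f).squeeze(1)          # local quality estimates
        w = F.softplus(self.weight_head(f)).squeeze(1) + 1e-6  # positive weights
        # weighted-average pooling of local quality into one global score
        return (w * q).sum() / w.sum()


# usage: estimate quality from 32 randomly sampled 32x32 patches
net = PatchIQANet()
global_quality = net(torch.rand(32, 3, 32, 32))
```

For the full-reference setting mentioned in the abstract, one could apply the same feature extractor to reference and distorted patches and feed the regression heads a combination of the two feature vectors, e.g., their concatenation or difference; that adaptation is omitted from this sketch.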

Original language: English
Journal: IEEE Transactions on Image Processing
DOI: 10.1109/TIP.2017.2760518
Publication status: Accepted/In press - 2017 Oct 9

Keywords

  • Computational modeling
  • Databases
  • deep learning
  • Distortion
  • Feature extraction
  • feature extraction
  • Full-reference image quality assessment
  • Image quality
  • neural networks
  • no-reference image quality assessment
  • Optimization
  • quality pooling
  • regression

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment. / Bosse, Sebastian; Maniry, Dominique; Müller, Klaus; Wiegand, Thomas; Samek, Wojciech.

In: IEEE Transactions on Image Processing, 09.10.2017.

Research output: Contribution to journal › Article

Bosse, Sebastian ; Maniry, Dominique ; Müller, Klaus ; Wiegand, Thomas ; Samek, Wojciech. / Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment. In: IEEE Transactions on Image Processing. 2017.
@article{735bb65d5f3e496994e134d1ce33f4f7,
title = "Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment",
abstract = "We present a deep neural network-based approach to image quality assessment (IQA). The network is trained endto- end and comprises 10 convolutional layers and 5 pooling layers for feature extraction, and 2 fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that (1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting and (2) it allows for joint learning of local quality and local weights, i.e., relative importance of local quality to the global quality estimate, in an unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other types of prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CISQ and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge Database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating a high robustness of the learned features.",
keywords = "Computational modeling, Databases, deep learning, Distortion, Feature extraction, feature extraction, Full-reference image quality assessment, Image quality, neural networks, no-reference image quality assessment, Optimization, quality pooling, regression",
author = "Sebastian Bosse and Dominique Maniry and Klaus Muller and Thomas Wiegand and Wojciech Samek",
year = "2017",
month = "10",
day = "9",
doi = "10.1109/TIP.2017.2760518",
language = "English",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - JOUR

T1 - Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment

AU - Bosse, Sebastian

AU - Maniry, Dominique

AU - Müller, Klaus

AU - Wiegand, Thomas

AU - Samek, Wojciech

PY - 2017/10/9

Y1 - 2017/10/9

N2 - We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end-to-end and comprises 10 convolutional layers and 5 pooling layers for feature extraction, and 2 fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that (1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting and (2) it allows for joint learning of local quality and local weights, i.e., relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other types of prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge Database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating a high robustness of the learned features.

AB - We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end-to-end and comprises 10 convolutional layers and 5 pooling layers for feature extraction, and 2 fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that (1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting and (2) it allows for joint learning of local quality and local weights, i.e., relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other types of prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge Database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating a high robustness of the learned features.

KW - Computational modeling

KW - Databases

KW - deep learning

KW - Distortion

KW - Feature extraction

KW - feature extraction

KW - Full-reference image quality assessment

KW - Image quality

KW - neural networks

KW - no-reference image quality assessment

KW - Optimization

KW - quality pooling

KW - regression

UR - http://www.scopus.com/inward/record.url?scp=85031920409&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85031920409&partnerID=8YFLogxK

U2 - 10.1109/TIP.2017.2760518

DO - 10.1109/TIP.2017.2760518

M3 - Article

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

ER -