On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation

Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus Muller, Wojciech Samek

Research output: Contribution to journal › Article

300 Citations (Scopus)

Abstract

Understanding and interpreting the classification decisions of automated image classification systems is of high value in many applications, as it allows one to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods solve a plethora of tasks very successfully, in most cases they have the disadvantage of acting as a black box, providing no information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows the contributions of single pixels to predictions to be visualized, both for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert, who can intuitively not only verify the validity of the classification decision but also focus further analysis on regions of potential interest. We evaluate our method on classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digit data set, and the pre-trained ImageNet model available as part of the Caffe open source package.
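The pixel-wise decomposition described in the abstract can be illustrated on a tiny dense network. The sketch below uses an epsilon-stabilized relevance redistribution rule in the spirit of layer-wise relevance propagation; the network size, weights, and function names are invented for illustration and are not taken from the paper:

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """Propagate relevance one layer backwards with an epsilon-stabilized rule.

    weights:       (n_in, n_out) weight matrix of a dense layer
    activations:   (n_in,) inputs that fed the layer on the forward pass
    relevance_out: (n_out,) relevance already assigned to the layer outputs
    Returns the (n_in,) relevance redistributed onto the layer inputs.
    """
    z = activations[:, None] * weights          # contributions z_jk = a_j * w_jk
    denom = z.sum(axis=0)                       # pre-activations z_k
    denom = denom + eps * np.sign(denom)        # stabilizer avoids division by ~0
    return (z / denom * relevance_out[None, :]).sum(axis=1)

# A tiny fixed two-layer ReLU network on a 4-"pixel" input (made-up weights).
x = np.array([1.0, 2.0, 0.5, 1.5])
W1 = np.array([[ 1.0, -0.5,  0.2],
               [ 0.3,  0.8, -0.4],
               [-0.6,  0.1,  0.9],
               [ 0.4, -0.2,  0.5]])
W2 = np.array([[ 0.7, -0.3],
               [-0.2,  0.5],
               [ 0.4,  0.1]])
h = np.maximum(x @ W1, 0.0)                     # hidden activations
y = h @ W2                                      # class scores

# Start from the winning class score and push relevance back to the pixels.
R_out = np.zeros_like(y)
R_out[y.argmax()] = y.max()
R_hidden = lrp_epsilon(W2, h, R_out)
R_pixels = lrp_epsilon(W1, x, R_hidden)
print(R_pixels)   # per-pixel relevance; positive values support the decision
```

The key property this sketch preserves is conservation: the per-pixel relevances sum (up to the stabilizer eps) to the class score being explained, which is what makes the resulting heatmap interpretable as a decomposition of the prediction.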

Original language: English
Article number: 0130140
Journal: PLoS One
Volume: 10
Issue number: 7
DOI: 10.1371/journal.pone.0130140
ISSN: 1932-6203
PubMed ID: 26161953
Publication status: Published - 2015 Jul 10


ASJC Scopus subject areas

  • Agricultural and Biological Sciences (all)
  • Biochemistry, Genetics and Molecular Biology (all)
  • Medicine (all)

Cite this

Bach, S., Binder, A., Montavon, G., Klauschen, F., Muller, K., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS One, 10(7), [0130140]. https://doi.org/10.1371/journal.pone.0130140
