Layer-wise relevance propagation for neural networks with local renormalization layers

Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, Wojciech Samek

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

11 Citations (Scopus)

Abstract

Layer-wise relevance propagation is a framework that decomposes the prediction of a deep neural network computed over a sample, e.g. an image, into relevance scores for the individual input dimensions of the sample, such as the subpixels of an image. While this approach can be applied directly to generalized linear mappings, product-type non-linearities are not covered. This paper proposes an approach to extend layer-wise relevance propagation to neural networks with local renormalization layers, a very common product-type non-linearity in convolutional neural networks. We evaluate the proposed method for local renormalization layers on the CIFAR-10, ImageNet and MIT Places datasets.
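As an illustrative sketch only, the basic epsilon-stabilized LRP rule for a single linear layer (the case the abstract describes as directly covered, not the paper's extension to local renormalization layers) can be written as follows. The function name `lrp_linear` and the toy data are invented for this example:

```python
import numpy as np

def lrp_linear(x, W, b, R_out, eps=1e-6):
    """Epsilon-stabilized LRP rule for a linear layer z = W @ x + b:
    input dimension j receives relevance in proportion to its
    contribution W[i, j] * x[j] to each pre-activation z[i]."""
    z = W @ x + b                                    # pre-activations, shape (out,)
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer avoids division by zero
    frac = (W * x[None, :]) / z_stab[:, None]        # fraction of z[i] attributed to x[j]
    return frac.T @ R_out                            # redistributed relevance, shape (in,)

# Toy example: with zero bias, relevance is (approximately) conserved.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W = rng.standard_normal((3, 4))
R_out = np.abs(rng.standard_normal(3))               # relevance from the layer above
R_in = lrp_linear(x, W, np.zeros(3), R_out)
```

This rule alone cannot handle product-type non-linearities such as local renormalization; extending LRP to those layers is precisely the contribution the paper describes.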

Original language: English
Title of host publication: Artificial Neural Networks and Machine Learning - 25th International Conference on Artificial Neural Networks, ICANN 2016, Proceedings
Publisher: Springer Verlag
Pages: 63-71
Number of pages: 9
Volume: 9887 LNCS
ISBN (Print): 978-3-319-44780-3
DOI: 10.1007/978-3-319-44781-0_8
Publication status: Published - 2016
Event: 25th International Conference on Artificial Neural Networks and Machine Learning, ICANN 2016 - Barcelona, Spain
Duration: 6 Sep 2016 – 9 Sep 2016

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9887 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 25th International Conference on Artificial Neural Networks and Machine Learning, ICANN 2016
Country: Spain
City: Barcelona
Period: 6/9/16 – 9/9/16

Keywords

  • Image classification
  • Interpretability
  • Neural networks

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

Binder, A., Montavon, G., Lapuschkin, S., Müller, K.-R., & Samek, W. (2016). Layer-wise relevance propagation for neural networks with local renormalization layers. In Artificial Neural Networks and Machine Learning - 25th International Conference on Artificial Neural Networks, ICANN 2016, Proceedings (Vol. 9887 LNCS, pp. 63-71). (Lecture Notes in Computer Science; Vol. 9887 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-319-44781-0_8

@inproceedings{26789770627340918fabdb2325bb0572,
title = "Layer-wise relevance propagation for neural networks with local renormalization layers",
abstract = "Layer-wise relevance propagation is a framework which allows to decompose the prediction of a deep neural network computed over a sample, e.g. an image, down to relevance scores for the single input dimensions of the sample such as subpixels of an image. While this approach can be applied directly to generalized linear mappings, product type non-linearities are not covered. This paper proposes an approach to extend layer-wise relevance propagation to neural networks with local renormalization layers, which is a very common product-type non-linearity in convolutional neural networks. We evaluate the proposed method for local renormalization layers on the CIFAR-10, Imagenet and MIT Places datasets.",
keywords = "Image classification, Interpretability, Neural networks",
author = "Alexander Binder and Gr{\'e}goire Montavon and Sebastian Lapuschkin and Klaus-Robert M{\"u}ller and Wojciech Samek",
year = "2016",
doi = "10.1007/978-3-319-44781-0_8",
language = "English",
isbn = "9783319447803",
volume = "9887 LNCS",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Verlag",
pages = "63--71",
booktitle = "Artificial Neural Networks and Machine Learning - 25th International Conference on Artificial Neural Networks, ICANN 2016, Proceedings",

}
