Methods for interpreting and understanding deep neural networks

Grégoire Montavon, Wojciech Samek, Klaus Müller

Research output: Contribution to journal › Review article

118 Citations (Scopus)

Abstract

This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks, to make most efficient use of it on real data.
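The technique highlighted in the abstract, layer-wise relevance propagation, redistributes a layer's output relevance to its inputs in proportion to each input's contribution to the pre-activation. The following sketch is not taken from the paper itself; it illustrates the basic LRP-ε rule for a single dense layer, with the function name `lrp_dense` and the toy dimensions chosen for illustration:

```python
import numpy as np

def lrp_dense(a, W, b, R_out, eps=1e-9):
    """Redistribute output relevance R_out to the inputs of a dense layer
    using the LRP-epsilon rule: input j receives a share of R_out[k]
    proportional to its contribution a[j] * W[j, k] to z[k]."""
    z = a @ W + b                       # pre-activations, shape (K,)
    s = R_out / (z + eps * np.sign(z))  # relevance per unit of pre-activation
    c = W @ s                           # back-distribute to inputs, shape (J,)
    return a * c                        # input relevances R_in

# Toy example: 3 inputs, 2 outputs, zero bias.
rng = np.random.default_rng(0)
a = rng.random(3)
W = rng.standard_normal((3, 2))
b = np.zeros(2)
R_out = np.maximum(a @ W + b, 0.0)      # start from the layer's ReLU output
R_in = lrp_dense(a, W, b, R_out)
```

With zero bias, the rule conserves relevance: the entries of `R_in` sum to (approximately) the sum of `R_out`, which is the property the paper's tutorial treatment builds on.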

Original language: English
Pages (from-to): 1-15
Number of pages: 15
Journal: Digital Signal Processing: A Review Journal
Volume: 73
DOI: 10.1016/j.dsp.2017.10.011
Publication status: Published - 2018 Feb 1

Keywords

  • Activation maximization
  • Deep neural networks
  • Layer-wise relevance propagation
  • Sensitivity analysis
  • Taylor decomposition

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering

Cite this

Methods for interpreting and understanding deep neural networks. / Montavon, Grégoire; Samek, Wojciech; Müller, Klaus.

In: Digital Signal Processing: A Review Journal, Vol. 73, 01.02.2018, p. 1-15.

Montavon, Grégoire ; Samek, Wojciech ; Müller, Klaus. / Methods for interpreting and understanding deep neural networks. In: Digital Signal Processing: A Review Journal. 2018 ; Vol. 73. pp. 1-15.
@article{78390aa9f12848e88c6a19f21e10fb6e,
title = "Methods for interpreting and understanding deep neural networks",
abstract = "This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks, to make most efficient use of it on real data.",
keywords = "Activation maximization, Deep neural networks, Layer-wise relevance propagation, Sensitivity analysis, Taylor decomposition",
author = "Gr{\'e}goire Montavon and Wojciech Samek and Klaus M{\"u}ller",
year = "2018",
month = "2",
day = "1",
doi = "10.1016/j.dsp.2017.10.011",
language = "English",
volume = "73",
pages = "1--15",
journal = "Digital Signal Processing: A Review Journal",
issn = "1051-2004",
publisher = "Elsevier Inc.",

}

TY - JOUR

T1 - Methods for interpreting and understanding deep neural networks

AU - Montavon, Grégoire

AU - Samek, Wojciech

AU - Müller, Klaus

PY - 2018/2/1

Y1 - 2018/2/1

N2 - This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks, to make most efficient use of it on real data.

AB - This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks, to make most efficient use of it on real data.

KW - Activation maximization

KW - Deep neural networks

KW - Layer-wise relevance propagation

KW - Sensitivity analysis

KW - Taylor decomposition

UR - http://www.scopus.com/inward/record.url?scp=85033371689&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85033371689&partnerID=8YFLogxK

U2 - 10.1016/j.dsp.2017.10.011

DO - 10.1016/j.dsp.2017.10.011

M3 - Review article

VL - 73

SP - 1

EP - 15

JO - Digital Signal Processing: A Review Journal

JF - Digital Signal Processing: A Review Journal

SN - 1051-2004

ER -