Methods for interpreting and understanding deep neural networks

Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

Research output: Contribution to journal › Review article › peer-review

1034 Citations (Scopus)


This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but it is sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks to make the most efficient use of it on real data.
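To give a concrete flavor of the LRP technique the abstract highlights, the following is a minimal sketch of one common propagation rule (the epsilon-stabilized rule) for a single fully connected layer. The function name, the toy dimensions, and the random data are illustrative assumptions, not taken from the paper itself; the paper covers the full theory and practical recommendations.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Sketch of the LRP-epsilon rule for one linear layer.

    Redistributes the relevance R_out assigned to the layer's outputs
    back onto its inputs a, in proportion to each input's contribution
    a_j * W_jk to the pre-activation z_k. W has shape (inputs, outputs).
    """
    z = a @ W                      # forward pre-activations z_k
    z = z + eps * np.sign(z)       # small stabilizer against division by ~0
    s = R_out / z                  # relevance per unit of pre-activation
    return a * (s @ W.T)           # relevance redistributed to the inputs

# Toy example with random activations, weights, and output relevance
rng = np.random.default_rng(0)
a = rng.random(4)                  # activations of the lower layer
W = rng.standard_normal((4, 3))    # weights, inputs x outputs
R_out = rng.random(3)              # relevance of the upper layer
R_in = lrp_epsilon(a, W, R_out)
print(R_in.sum(), R_out.sum())
```

Because the rule only redistributes relevance (up to the small epsilon stabilizer), the total relevance entering the layer is approximately conserved: the two printed sums nearly coincide. This conservation property is one of the characteristics of LRP discussed in the paper.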

Original language: English
Pages (from-to): 1-15
Number of pages: 15
Journal: Digital Signal Processing: A Review Journal
Publication status: Published - Feb 2018


Keywords

  • Activation maximization
  • Deep neural networks
  • Layer-wise relevance propagation
  • Sensitivity analysis
  • Taylor decomposition

ASJC Scopus subject areas

  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Statistics, Probability and Uncertainty
  • Computational Theory and Mathematics
  • Electrical and Electronic Engineering
  • Artificial Intelligence
  • Applied Mathematics


