TY - JOUR
T1 - Methods for interpreting and understanding deep neural networks
AU - Montavon, Grégoire
AU - Samek, Wojciech
AU - Müller, Klaus-Robert
N1 - Funding Information:
We gratefully acknowledge discussions and comments on the manuscript by our colleagues Sebastian Lapuschkin and Alexander Binder. This work was supported by the Brain Korea 21 Plus Program through the National Research Foundation of Korea; the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea Government [No. 2017-0-00451]; the Deutsche Forschungsgemeinschaft (DFG) [grant MU 987/17-1]; and the German Ministry for Education and Research as Berlin Big Data Center (BBDC) [01IS14013A]. This publication only reflects the authors' views. Funding agencies are not liable for any use that may be made of the information contained herein.
Publisher Copyright:
© 2017
PY - 2018/2
Y1 - 2018/2
AB - This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but it is sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks to make the most efficient use of it on real data.
KW - Activation maximization
KW - Deep neural networks
KW - Layer-wise relevance propagation
KW - Sensitivity analysis
KW - Taylor decomposition
UR - http://www.scopus.com/inward/record.url?scp=85033371689&partnerID=8YFLogxK
U2 - 10.1016/j.dsp.2017.10.011
DO - 10.1016/j.dsp.2017.10.011
M3 - Review article
AN - SCOPUS:85033371689
VL - 73
SP - 1
EP - 15
JO - Digital Signal Processing: A Review Journal
JF - Digital Signal Processing: A Review Journal
SN - 1051-2004
ER -