The LRP toolbox for artificial neural networks

Sebastian Lapuschkin, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

Research output: Contribution to journal › Article

36 Citations (Scopus)

Abstract

The Layer-wise Relevance Propagation (LRP) algorithm explains a classifier's prediction for a given data point by attributing relevance scores to important components of the input, using the topology of the learned model itself. The LRP Toolbox provides platform-agnostic implementations for explaining the predictions of pre-trained state-of-the-art Caffe networks, as well as stand-alone implementations for fully connected neural network models. The Matlab and Python implementations are written with readability and transparency in mind and serve as a playing field for familiarizing oneself with the LRP algorithm. Models and data can be imported and exported using raw text formats, Matlab's .mat files, or NumPy's .npy format.
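As a rough illustration of the relevance redistribution the abstract describes, the sketch below implements one LRP backward step through a single fully connected layer in NumPy, using the common epsilon-stabilized rule. The function name, signature, and epsilon value are illustrative assumptions, not the toolbox's actual API:

```python
import numpy as np

def lrp_dense(W, b, x, R_out, eps=1e-6):
    """One epsilon-stabilized LRP backward step through a dense layer y = W @ x + b.

    Redistributes the output relevance R_out onto the inputs x in
    proportion to each input's contribution z_ji = W[j, i] * x[i].
    This is a didactic sketch, not the LRP Toolbox interface.
    """
    z = W @ x + b                       # forward pre-activations
    s = R_out / (z + eps * np.sign(z))  # per-output relevance, stabilized against z ≈ 0
    return x * (W.T @ s)                # redistribute back onto the inputs

# Toy example: with zero bias, the total relevance is (approximately) conserved.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
R_out = np.abs(rng.normal(size=3))
R_in = lrp_dense(W, np.zeros(3), x, R_out)
print(R_in.sum(), R_out.sum())
```

Chaining such a step backwards through every layer, from the classifier output down to the input, yields the per-component relevance scores mentioned above.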

Original language: English
Pages (from-to): 1-5
Number of pages: 5
Journal: Journal of Machine Learning Research
Volume: 17
Publication status: Published - 2016 Jun 1

Keywords

  • Artificial neural networks
  • Computer vision
  • Deep learning
  • Explaining classifiers
  • Layer-wise relevance propagation

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Statistics and Probability
  • Artificial Intelligence


Cite this

Lapuschkin, S., Binder, A., Montavon, G., Müller, K.-R., & Samek, W. (2016). The LRP toolbox for artificial neural networks. Journal of Machine Learning Research, 17, 1-5.