How to explain individual classification decisions

David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Müller

Research output: Contribution to journal › Article

235 Citations (Scopus)

Abstract

After building a classifier with modern machine-learning tools, we typically have a black box at hand that is able to predict well for unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer to why the model predicted a particular label for a single instance, or to which features were most influential for that particular instance. The only methods currently able to provide such explanations are decision trees. This paper proposes a procedure which (based on a set of assumptions) allows the decisions of any classification method to be explained.
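The paper's procedure is based on local gradients of the predicted class probability, which indicate how sensitively the prediction depends on each input feature near a given point. As a minimal, hedged sketch (not the paper's full method, which also covers non-differentiable black boxes via probabilistic approximation), one can estimate such an explanation vector for any black-box classifier by finite differences; the classifier and function names below are illustrative assumptions:

```python
import numpy as np

def explanation_vector(predict_proba, x, cls, eps=1e-4):
    """Finite-difference gradient of the predicted probability of class
    `cls` at point x. Model-agnostic: predict_proba is treated as a
    black box mapping a feature vector to a vector of class probabilities."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[i] += eps
        x_lo[i] -= eps
        # Central difference along feature i
        grad[i] = (predict_proba(x_hi)[cls] - predict_proba(x_lo)[cls]) / (2 * eps)
    return grad

# Toy black-box classifier (assumption, for illustration only):
# a logistic model p(y=1 | x) = sigmoid(w . x) with a known weight vector.
w = np.array([2.0, -1.0, 0.0])

def predict_proba(x):
    p1 = 1.0 / (1.0 + np.exp(-(w @ x)))
    return np.array([1.0 - p1, p1])

x0 = np.array([0.5, 0.5, 0.5])
ev = explanation_vector(predict_proba, x0, cls=1)
# ev points in the direction that most increases p(y=1 | x) at x0;
# features with near-zero entries had little local influence.
```

For this toy model the explanation vector is proportional to the weight vector, so the third feature (weight 0) correctly receives no influence; for a genuinely nonlinear black box the vector varies from instance to instance, which is the point of instance-level explanations.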

Original language: English
Pages (from-to): 1803-1831
Number of pages: 29
Journal: Journal of Machine Learning Research
Volume: 11
Publication status: Published - June 2010
Externally published: Yes

Keywords

  • Ames mutagenicity
  • Black box model
  • Explaining
  • Kernel methods
  • Nonlinear

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence


Cite this

    Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., & Müller, K. R. (2010). How to explain individual classification decisions. Journal of Machine Learning Research, 11, 1803-1831.