How to explain individual classification decisions

David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Müller

Research output: Contribution to journal › Article

156 Citations (Scopus)

Abstract

After building a classifier with modern tools of machine learning, we typically have a black box at hand that is able to predict well for unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods give no answer as to why the model predicted a particular label for a single instance, or which features were most influential for that particular instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which (based on a set of assumptions) allows one to explain the decisions of any classification method.
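The abstract does not spell out the procedure, but the paper is widely associated with defining local "explanation vectors" as gradients of the predicted class probability at the point of interest. The sketch below illustrates that idea under that assumption only; the function name `explanation_vector`, the finite-difference gradient, and the scikit-learn classifier are illustrative choices, not the authors' reference implementation.

```python
# Illustrative sketch (not the authors' code): a local "explanation vector"
# as the gradient of the predicted class probability with respect to the
# input features, approximated by finite differences so it works with any
# classifier exposing predict_proba.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC


def explanation_vector(model, x, eps=1e-4):
    """Approximate d P(predicted class | x) / dx at a single test point x."""
    x = np.asarray(x, dtype=float)
    cls = model.predict(x.reshape(1, -1))[0]
    cls_idx = list(model.classes_).index(cls)
    base = model.predict_proba(x.reshape(1, -1))[0, cls_idx]
    grad = np.zeros_like(x)
    for i in range(x.size):
        x_step = x.copy()
        x_step[i] += eps
        grad[i] = (model.predict_proba(x_step.reshape(1, -1))[0, cls_idx] - base) / eps
    return grad  # large |grad[i]| => feature i is locally influential


X, y = load_iris(return_X_y=True)
clf = SVC(probability=True).fit(X, y)   # any probabilistic classifier would do
print(explanation_vector(clf, X[0]))    # per-feature local influence at X[0]
```

The paper itself handles classifiers without probability outputs by first estimating a probability function that mimics the classifier (e.g., with Parzen windows) and differentiating that; the finite-difference gradient above is only a model-agnostic stand-in for an analytic gradient.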

Original language: English
Pages (from-to): 1803-1831
Number of pages: 29
Journal: Journal of Machine Learning Research
Volume: 11
Publication status: Published - 2010 Jun 1
Externally published: Yes

Keywords

  • Ames mutagenicity
  • Black box model
  • Explaining
  • Kernel methods
  • Nonlinear

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

Cite this

Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., & Müller, K. R. (2010). How to explain individual classification decisions. Journal of Machine Learning Research, 11, 1803-1831.
