Towards Explainable Artificial Intelligence

Wojciech Samek, Klaus-Robert Müller

Research output: Chapter in Book/Report/Conference proceeding › Chapter

1 Citation (Scopus)

Abstract

In recent years, machine learning (ML) has become a key enabling technology for the sciences and industry. Especially through improvements in methodology, the availability of large databases and increased computational power, today's ML algorithms are able to achieve excellent performance (at times even exceeding the human level) on an increasing number of complex tasks. Deep learning models are at the forefront of this development. However, due to their nested non-linear structure, these powerful models have been generally considered "black boxes", not providing any information about what exactly makes them arrive at their predictions. Since in many applications, e.g., in the medical domain, such a lack of transparency may not be acceptable, the development of methods for visualizing, explaining and interpreting deep learning models has recently attracted increasing attention. This introductory paper presents recent developments and applications in this field and makes a plea for a wider use of explainable learning algorithms in practice.

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 5-22
Number of pages: 18
DOI: 10.1007/978-3-030-28954-6_1
Publication status: Published - 1 Jan 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11700 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Keywords

  • Deep learning
  • Explainable artificial intelligence
  • Interpretability
  • Model transparency
  • Neural networks

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Samek, W., & Müller, K.-R. (2019). Towards explainable artificial intelligence. In Lecture Notes in Computer Science (Vol. 11700 LNCS, pp. 5-22). Springer Verlag. https://doi.org/10.1007/978-3-030-28954-6_1
