"What is relevant in a text document?": An interpretable machine learning approach

Leila Arras, Franziska Horn, Grégoire Montavon, Klaus Müller, Wojciech Samek

Research output: Contribution to journal › Article


Abstract

Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to map documents to these abstract concepts automatically, making it possible to annotate very large text collections, more than a human could process in a lifetime. Besides predicting a text's category very accurately, it is also highly desirable to understand how and why the categorization takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining the predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision, which makes it possible to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores to generate novel vector-based document representations that capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications.
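
To illustrate the core idea of tracing a classification decision back to individual words, the following is a minimal sketch of epsilon-rule LRP applied to a toy sum-pooled linear classifier over word embeddings. This is not the authors' CNN or SVM implementation; the architecture, variable names, stabilizer value, and example words are assumptions chosen only for illustration.

    import numpy as np

    def lrp_linear(x, W, b, R_out, eps=1e-2):
        """Epsilon-rule LRP for a dense layer z = x @ W + b.

        Redistributes the relevance R_out arriving at the layer's outputs
        back onto its inputs in proportion to each input's contribution.
        """
        z = x @ W + b                         # (d_out,) pre-activations
        denom = z + eps * np.sign(z)          # stabilized denominator
        contrib = x[:, None] * W              # (d_in, d_out) contributions x_i * w_ij
        return (contrib / denom) @ R_out      # R_i = sum_j (x_i * w_ij / z_j) * R_j

    rng = np.random.default_rng(0)
    words = ["stock", "market", "falls"]      # toy 3-word document
    E = rng.normal(size=(3, 4))               # word embeddings, 4 dimensions
    W = rng.normal(size=(4, 2))               # linear classifier over 2 topics
    b = np.zeros(2)

    x = E.sum(axis=0)                         # sum-pooled document representation
    scores = x @ W + b
    k = int(scores.argmax())                  # predicted topic

    # Relevance starts as the predicted class score and flows backward.
    R_out = np.zeros(2)
    R_out[k] = scores[k]
    R_pooled = lrp_linear(x, W, b, R_out)     # relevance per embedding dimension

    # Sum pooling: each word receives a share proportional to its contribution
    # to each pooled dimension; summing over dimensions gives word relevances.
    share = E / (x + 1e-2 * np.sign(x))
    R_words = (share * R_pooled).sum(axis=1)

    for w, r in zip(words, R_words):
        print(f"{w:>8s}: {r:+.3f}")

Words with large positive relevance support the predicted topic, while negative relevance speaks against it. In the paper, such word-wise scores are further aggregated into vector-based document representations, on which the proposed explanatory-power measure is computed.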

Original language: English
Article number: e0181142
Journal: PLoS One
Volume: 12
Issue number: 8
DOI: 10.1371/journal.pone.0181142
Publication status: Published - 2017 Aug 1


ASJC Scopus subject areas

  • Medicine (all)
  • Biochemistry, Genetics and Molecular Biology (all)
  • Agricultural and Biological Sciences (all)

Cite this

Arras, L., Horn, F., Montavon, G., Müller, K., & Samek, W. (2017). "What is relevant in a text document?": An interpretable machine learning approach. PLoS One, 12(8), e0181142. https://doi.org/10.1371/journal.pone.0181142

"What is relevant in a text document?" : An interpretable machine learning approach. / Arras, Leila; Horn, Franziska; Montavon, Grégoire; Muller, Klaus; Samek, Wojciech.

In: PLoS One, Vol. 12, No. 8, e0181142, 01.08.2017.

Research output: Contribution to journalArticle

Arras, L, Horn, F, Montavon, G, Muller, K & Samek, W 2017, '"What is relevant in a text document?": An interpretable machine learning approach', PLoS One, vol. 12, no. 8, e0181142. https://doi.org/10.1371/journal.pone.0181142
Arras, Leila ; Horn, Franziska ; Montavon, Grégoire ; Muller, Klaus ; Samek, Wojciech. / "What is relevant in a text document?" : An interpretable machine learning approach. In: PLoS One. 2017 ; Vol. 12, No. 8.
@article{79a8b9a3eed44d649b8a55bc95e00f3a,
title = "{"}What is relevant in a text document?{"}: An interpretable machine learning approach",
abstract = "Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, allowing to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text’s category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications.",
author = "Leila Arras and Franziska Horn and Gr{\'e}goire Montavon and Klaus Muller and Wojciech Samek",
year = "2017",
month = "8",
day = "1",
doi = "10.1371/journal.pone.0181142",
language = "English",
volume = "12",
journal = "PLoS One",
issn = "1932-6203",
publisher = "Public Library of Science",
number = "8",

}

TY - JOUR

T1 - "What is relevant in a text document?"

T2 - An interpretable machine learning approach

AU - Arras, Leila

AU - Horn, Franziska

AU - Montavon, Grégoire

AU - Muller, Klaus

AU - Samek, Wojciech

PY - 2017/8/1

Y1 - 2017/8/1

N2 - Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, allowing to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text’s category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications.

AB - Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, allowing to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text’s category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications.

UR - http://www.scopus.com/inward/record.url?scp=85027142265&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85027142265&partnerID=8YFLogxK

U2 - 10.1371/journal.pone.0181142

DO - 10.1371/journal.pone.0181142

M3 - Article

C2 - 28800619

AN - SCOPUS:85027142265

VL - 12

JO - PLoS One

JF - PLoS One

SN - 1932-6203

IS - 8

M1 - e0181142

ER -