A new approach to training more interpretable model with additional segmentation

Sunguk Shin, Youngjoon Kim, Ji Won Yoon

Research output: Contribution to journal › Article › peer-review

Abstract

It is not straightforward to understand how complicated deep learning models work, because they are essentially black boxes. To address this problem, various approaches have been developed to provide interpretability for black-box deep learning models. However, traditional interpretable machine learning only helps us understand models that have already been trained; if a model is not properly trained, interpretability methods will not work well on it. We propose a simple but effective method that trains models to improve interpretability for image classification. We also evaluate how well the models focus on the appropriate objects, rather than relying on classification accuracy alone. We use Class Activation Mapping (CAM) both to train the models and to evaluate their interpretability. As a result, on the PASCAL VOC 2012 dataset, the ResNet50 model trained with the proposed approach achieves a 0.5IOU of 29.61%, while the model trained only on images and labels achieves 13.00%. The classification accuracy of the proposed approach is 75.03%, compared with 68.38% for the existing method and 60.69% for FCN. These evaluations show that the proposed approach is effective.
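The CAM technique the abstract relies on can be sketched briefly. The sketch below follows the standard CAM formulation (a weighted sum of the last convolutional layer's feature maps, weighted by the classifier weights of the target class); it is a minimal illustration with made-up array shapes, not the paper's actual training pipeline, and the function and variable names are illustrative.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Standard CAM: weight the final conv feature maps by the
    fully connected classifier weights for the chosen class.

    feature_maps: (C, H, W) activations from the last conv layer
    fc_weights:   (num_classes, C) weights of the final linear layer
    class_idx:    index of the class to visualize
    """
    w = fc_weights[class_idx]                    # (C,) per-channel weights
    cam = np.tensordot(w, feature_maps, axes=1)  # (H, W) weighted sum
    cam = np.maximum(cam, 0)                     # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                    # normalize to [0, 1]
    return cam

# Toy example: 4 channels, a 7x7 spatial grid, 20 classes (as in PASCAL VOC)
rng = np.random.default_rng(0)
feats = rng.random((4, 7, 7))
weights = rng.random((20, 4))
cam = class_activation_map(feats, weights, class_idx=3)
```

The resulting map can be upsampled to the input resolution and thresholded, which is how a localization metric such as 0.5IOU can then be computed against ground-truth segmentation masks.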

Original language: English
Pages (from-to): 188-194
Number of pages: 7
Journal: Pattern Recognition Letters
Volume: 152
DOIs
Publication status: Published - 2021 Dec

Keywords

  • Classification model
  • Convolutional neural networks
  • Interpretable machine learning

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
