The Structure of Deep Neural Network for Interpretable Transfer Learning

Dowan Kim, Woohyun Lim, Minye Hong, Hyeoncheol Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Training a deep neural network requires a large amount of high-quality data and time. However, most real-world tasks do not have enough labeled data to train such a complex model from scratch. Transfer learning addresses this problem by reusing a pretrained model on a new task. One weakness of transfer learning, however, is that it applies a pretrained model to a new task without understanding the output of the existing model, which leads to a lack of interpretability when training deep neural networks. In this paper, we propose a technique to improve interpretability in transfer learning tasks. We define interpretable features and use them to train a model on a new task, which allows us to explain the relationship between the source and target domains in a transfer learning task. The proposed Feature Network (FN) consists of a Feature Extraction Layer and a single mapping layer that connects the features extracted from the source domain to the target domain. We examined the interpretability of transfer learning by applying a pretrained model with the defined features to Korean character classification.
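The architecture described above (a frozen feature extractor plus a single trainable mapping layer linking source-domain features to target classes) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the paper's actual Feature Extraction Layer, training procedure, and Korean-character dataset are not given here, so the extractor, the toy data, and all names below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, w_frozen):
    """Stand-in for the pretrained Feature Extraction Layer.

    In the paper's setting this would emit human-defined, interpretable
    features learned on the source domain; here it is just a fixed
    random projection with a ReLU. Its weights are never updated.
    """
    return np.maximum(0.0, x @ w_frozen)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy target-task data (standing in for, e.g., Korean character classes).
n, d_in, d_feat, n_classes = 200, 16, 8, 4
X = rng.normal(size=(n, d_in))
y = rng.integers(0, n_classes, size=n)
W_frozen = rng.normal(size=(d_in, d_feat))  # "pretrained", kept fixed

F = extract_features(X, W_frozen)  # interpretable feature activations

# Single mapping layer: the only trainable part. Each entry of M links
# one source-domain feature to one target class, so inspecting M is what
# makes the source-to-target relationship explainable.
M = np.zeros((d_feat, n_classes))
lr = 0.1
onehot = np.eye(n_classes)[y]
for _ in range(300):
    P = softmax(F @ M)
    M -= lr * F.T @ (P - onehot) / n  # gradient step on cross-entropy

acc = (np.argmax(F @ M, axis=1) == y).mean()
print(f"target-task training accuracy: {acc:.2f}")
```

Because only the single mapping matrix `M` is trained, each learned weight can be read directly as "how strongly source feature *i* supports target class *j*", which is the interpretability property the abstract claims for the Feature Network.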

Original language: English
Title of host publication: 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781538677896
DOI: 10.1109/BIGCOMP.2019.8679150
Publication status: Published - 2019 Apr 1
Event: 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Kyoto, Japan
Duration: 2019 Feb 27 - 2019 Mar 2

Publication series

Name: 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings

Conference

Conference: 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019
Country: Japan
City: Kyoto
Period: 19/2/27 - 19/3/2


Keywords

  • Interpretability
  • Machine Learning
  • Transfer Learning

ASJC Scopus subject areas

  • Information Systems and Management
  • Artificial Intelligence
  • Computer Networks and Communications
  • Information Systems

Cite this

Kim, D., Lim, W., Hong, M., & Kim, H. (2019). The Structure of Deep Neural Network for Interpretable Transfer Learning. In 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings [8679150] (2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/BIGCOMP.2019.8679150

