Learning to Balance Local Losses via Meta-Learning

Seungdong Yoa, Minkyu Jeon, Youngjin Oh, Hyunwoo J. Kim

Research output: Contribution to journal › Article › peer-review

Abstract

The standard training of deep neural networks relies on a global, fixed loss function. For more effective training, dynamic loss functions have recently been proposed. However, a dynamic global loss function is still not flexible enough to train the layers of complex deep neural networks differentially. In this paper, we propose a general framework that learns to adaptively train each layer of deep neural networks via meta-learning. Our framework leverages the local error signals from individual layers and identifies which layers need to be trained more at every iteration. In addition, the proposed method improves the local loss functions with our minibatch-wise dropout and a cross-validation loop to alleviate meta-overfitting. Experiments show that our method achieves competitive performance compared to state-of-the-art methods on popular image-classification benchmarks, CIFAR-10 and CIFAR-100. Surprisingly, our method enables training deep neural networks without skip connections by using dynamically weighted local loss functions.
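
The core idea admits a short sketch. Below is a minimal, hypothetical PyTorch illustration of meta-learned per-layer loss weights: each hidden layer gets its own local classifier head, a softmax over learnable logits weights the per-layer local losses, and those logits are updated by backpropagating a held-out minibatch loss through one differentiable inner SGD step (the cross-validation loop). All names (`LocalLossNet`, `train_step`) and hyperparameters are assumptions for illustration, not the authors' implementation; the paper's minibatch-wise dropout is omitted for brevity.

```python
# Illustrative sketch only (not the authors' code); assumes PyTorch >= 2.0
# for torch.func.functional_call.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalLossNet(nn.Module):
    """MLP in which every hidden layer has its own local classifier head."""
    def __init__(self, in_dim=3072, hidden=128, n_classes=10, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(in_dim if i == 0 else hidden, hidden)
            for i in range(n_layers))
        self.heads = nn.ModuleList(
            nn.Linear(hidden, n_classes) for _ in range(n_layers))

    def forward(self, x, y):
        """Return the per-layer local losses as one stacked tensor."""
        losses, h = [], x
        for layer, head in zip(self.layers, self.heads):
            h = F.relu(layer(h))
            losses.append(F.cross_entropy(head(h), y))
        return torch.stack(losses)

model = LocalLossNet()
loss_logits = torch.zeros(3, requires_grad=True)   # one logit per layer
opt = torch.optim.SGD(model.parameters(), lr=0.1)
meta_opt = torch.optim.SGD([loss_logits], lr=0.01)

def train_step(x_tr, y_tr, x_val, y_val, inner_lr=0.1):
    names = [n for n, _ in model.named_parameters()]
    params = [p for _, p in model.named_parameters()]

    # Meta step: one *differentiable* inner SGD step, then evaluate the
    # updated model on a held-out minibatch and backpropagate through the
    # update into the per-layer loss weights.
    w = torch.softmax(loss_logits, dim=0)
    inner_loss = (w * model(x_tr, y_tr)).sum()
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    updated = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}
    val_loss = torch.func.functional_call(model, updated, (x_val, y_val)).sum()
    meta_opt.zero_grad()
    val_loss.backward()
    meta_opt.step()

    # Ordinary step: train the model with the freshly updated weights.
    w = torch.softmax(loss_logits.detach(), dim=0)
    opt.zero_grad()
    (w * model(x_tr, y_tr)).sum().backward()
    opt.step()

# Example: one iteration on random CIFAR-10-sized minibatches.
x_tr, y_tr = torch.randn(64, 3072), torch.randint(0, 10, (64,))
x_val, y_val = torch.randn(64, 3072), torch.randint(0, 10, (64,))
train_step(x_tr, y_tr, x_val, y_val)
```

In a full training loop, `train_step` would receive a fresh training minibatch and a disjoint validation minibatch at every iteration, so the loss weights track which layers currently benefit most from training.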

Original language: English
Journal: IEEE Access
DOIs
Publication status: Accepted/In press - 2021

Keywords

  • Deep learning
  • Image Classification
  • Licenses
  • Loss measurement
  • Machine Learning
  • Meta-Learning
  • Neural networks
  • Standards
  • Task analysis
  • Training

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
