Abstract
Domain generalization aims to learn a domain-invariant representation from multiple source domains so that a model generalizes well to unseen target domains. Such models are often trained on examples drawn randomly from all source domains, which can make training unstable due to optimization along conflicting gradient directions. Here, we explore inter-domain curriculum learning (IDCL), in which source domains are presented in a meaningful order, gradually progressing to more complex ones. Experiments show significant improvements on both the PACS and Office–Home benchmarks, and our approach improves on the state-of-the-art method by 1.08%.
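The abstract does not specify how domains are ordered, so the sketch below illustrates the general idea of a stage-wise curriculum over source domains, assuming hypothetical per-domain difficulty scores (the function name `curriculum_schedule` and the scores are illustrative, not from the paper):

```python
def curriculum_schedule(domain_difficulty, num_stages=None):
    """Order source domains from easiest to hardest and build a
    stage-wise schedule that gradually adds harder domains.

    domain_difficulty: dict mapping domain name -> difficulty score
    (higher means harder). The scoring itself is a placeholder; the
    paper's actual ordering criterion may differ.
    """
    ordered = sorted(domain_difficulty, key=domain_difficulty.get)
    num_stages = num_stages or len(ordered)
    # Stage k trains on the (k+1) easiest domains seen so far, so
    # harder domains are only introduced in later stages.
    return [ordered[: k + 1] for k in range(num_stages)]

# Example with hypothetical difficulty scores for the PACS domains.
difficulty = {"photo": 0.1, "art_painting": 0.4,
              "cartoon": 0.6, "sketch": 0.9}
schedule = curriculum_schedule(difficulty)
# Stage 1 uses only the easiest domain; the final stage uses all four.
```

In contrast to sampling examples uniformly from all source domains at once, a schedule like this exposes the model to one domain distribution at a time, which is one way to reduce conflicting gradient directions early in training.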
Original language | English |
---|---|
Pages (from-to) | 225-229 |
Number of pages | 5 |
Journal | ICT Express |
Volume | 8 |
Issue number | 2 |
DOIs | |
Publication status | Published - 2022 Jun |
Keywords
- Deep neural networks
- Domain generalization
- Inter-domain curriculum learning
ASJC Scopus subject areas
- Software
- Information Systems
- Hardware and Architecture
- Computer Networks and Communications
- Artificial Intelligence