Boundary-Focused Generative Adversarial Networks for Imbalanced and Multimodal Time Series

Han Kyu Lee, Jiyoon Lee, Seoung Bum Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Class imbalance problems have been reported as a major issue in various applications. Classification becomes further complicated when the imbalance occurs in time series data sets, because addressing time series data requires accounting for their characteristics (i.e., high dimensionality, high correlations, and multimodality). Oversampling is a well-known approach to this problem; however, conventional oversampling does not appropriately consider these characteristics. This paper addresses these limitations by presenting a model-based oversampling approach, a boundary-focused generative adversarial network (BFGAN). The proposed BFGAN employs a specifically designed additional label that reflects the importance of a sample's position in the data space. Furthermore, the BFGAN generates artificial samples that take a sample's multimodality and importance into account, using a suitably modified GAN structure. We present empirical results that reveal a significant improvement in the quality of the generated data when the proposed BFGAN is used as an oversampling algorithm for an imbalanced multimodal time series data set.
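To make the boundary-importance idea in the abstract concrete, the sketch below scores each minority sample by how many of its nearest neighbours belong to the majority class (samples near the decision boundary score highest) and draws synthetic samples with probability proportional to that score. This is an illustrative stand-in, not the paper's method: it uses SMOTE-style linear interpolation in place of the BFGAN generator, and the function names, `k`, and the Euclidean metric are all assumptions for the sketch.

```python
import numpy as np

def boundary_importance(X_min, X_maj, k=5):
    """Score each minority sample by the fraction of majority points
    among its k nearest neighbours; boundary samples score near 1."""
    X_all = np.vstack([X_min, X_maj])
    labels = np.array([0] * len(X_min) + [1] * len(X_maj))  # 1 = majority
    scores = np.empty(len(X_min))
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        d[i] = np.inf  # exclude the sample itself
        nearest = np.argsort(d)[:k]
        scores[i] = labels[nearest].mean()
    return scores

def oversample(X_min, X_maj, n_new, k=5, seed=None):
    """Generate synthetic minority samples, preferring boundary regions.

    A minority anchor is drawn with probability proportional to its
    boundary score, then interpolated toward a random minority neighbour
    (a simple stand-in for a learned generator)."""
    rng = np.random.default_rng(seed)
    w = boundary_importance(X_min, X_maj, k) + 1e-6  # avoid all-zero weights
    p = w / w.sum()
    synth = []
    for _ in range(n_new):
        i = rng.choice(len(X_min), p=p)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf
        j = np.argsort(d)[rng.integers(min(k, len(X_min) - 1))]
        lam = rng.random()  # convex combination stays between the two samples
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)
```

Replacing the interpolation step with samples from a conditional generator (conditioned on the importance label) is, loosely, where a GAN-based approach such as BFGAN would differ from this neighbour-interpolation sketch.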

Original language: English
Pages (from-to): 1-19
Number of pages: 19
Journal: IEEE Transactions on Knowledge and Data Engineering
Publication status: Accepted/In press - 2022

Keywords

  • Classification algorithms
  • Correlation
  • Generative adversarial network
  • Generative adversarial networks
  • Generators
  • Time series analysis
  • Training
  • Training data
  • generative model
  • imbalanced class
  • multimodality
  • oversampling

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics

