Text Embedding Augmentation Based on Retraining With Pseudo-Labeled Adversarial Embedding

Myeongsup Kim, Pilsung Kang

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Pre-trained language models (LMs) have been shown to achieve outstanding performance on various natural language processing tasks; however, because these models require a very large number of parameters to handle large-scale text corpora during pre-training, they risk overfitting when fine-tuned on small task-oriented datasets. In this paper, we propose a text embedding augmentation method to prevent such overfitting. The proposed method augments a text embedding by generating an adversarial embedding that is not identical to the original input embedding but preserves its characteristics, using PGD-based adversarial training on the input text. A pseudo-label identical to the label of the input text is then assigned to the adversarial embedding, and the resulting embedding-label pair is used to retrain a separate LM. Experimental results on several text classification benchmark datasets demonstrated that the proposed method effectively prevented the overfitting that commonly occurs when adapting a large-scale pre-trained LM to a specific task.
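
The abstract describes the method only at a high level. Below is a minimal PyTorch sketch of the PGD step it refers to, assuming a HuggingFace-style classifier that accepts `inputs_embeds` and returns an output with `.logits`. The function name, the hyperparameters `epsilon`, `alpha`, and `steps`, and the L-infinity projection are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def pgd_adversarial_embedding(model, embeddings, labels, attention_mask,
                              epsilon=1e-2, alpha=1e-3, steps=3):
    """Generate an adversarial input embedding via PGD and pair it with a
    pseudo-label equal to the original label, as the abstract describes.
    Hyperparameter values here are illustrative assumptions."""
    embeddings = embeddings.detach()
    delta = torch.zeros_like(embeddings, requires_grad=True)
    for _ in range(steps):
        logits = model(inputs_embeds=embeddings + delta,
                       attention_mask=attention_mask).logits
        loss = F.cross_entropy(logits, labels)
        # Differentiate only w.r.t. the perturbation, leaving model grads untouched.
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()     # ascend the loss surface
            delta.clamp_(-epsilon, epsilon)  # project back into the L-inf ball
    adv_embedding = (embeddings + delta).detach()
    pseudo_label = labels.clone()            # pseudo-label = original label
    return adv_embedding, pseudo_label
```

A usage sketch of the retraining step, again under the same assumptions: the adversarial embedding and its pseudo-label serve as an ordinary training pair for a separate classifier, e.g. `F.cross_entropy(student(inputs_embeds=adv_embedding, attention_mask=attention_mask).logits, pseudo_label)`, where `student` is a hypothetical second LM distinct from the one used to craft the perturbation.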

Original language: English
Pages (from-to): 8363-8376
Number of pages: 14
Journal: IEEE Access
Volume: 10
DOIs:
Publication status: Published - 2022

Keywords

  • Data models
  • Extrapolation
  • Interpolation
  • Semantics
  • Task analysis
  • Training
  • Transformers

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
