Grounded Vocabulary for Image Retrieval Using a Modified Multi-Generator Generative Adversarial Network

Kuekyeng Kim, Chanjun Park, Jaehyung Seo, Heuiseok Lim

Research output: Contribution to journal › Article › peer-review

Abstract

With the recent growth in demand for both natural-language and visual information, research on seamless multi-modal processing for the effective retrieval of these types of information has become increasingly important. However, because of the unstructured nature of images, it is difficult to retrieve images that accurately represent an input text. In this study, we utilized an augmented version of a multi-generator generative adversarial network that takes BERT embeddings and attention maps as input to enable a grounded vocabulary for visual representations. We compared the performance of our proposed model with that of other state-of-the-art text-based image retrieval methods on the MSCOCO and Flickr30K datasets, and the results showed the potential of our proposed method. Even with a limited vocabulary, our proposed model was comparable to other state-of-the-art methods on R@10 and even exceeded them on R@1. Moreover, we revealed the unique properties of our method by demonstrating how it performs successfully even when given more descriptive text or short sentences as input.
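The abstract reports retrieval quality as R@1 and R@10 (Recall@K). As context, here is a minimal sketch of how this standard metric is typically computed for text-to-image retrieval; the function names and the ranked-list input format are illustrative assumptions, not taken from the paper:

```python
def recall_at_k(ranked_ids, relevant_id, k):
    """Return 1.0 if the relevant image appears in the top-k retrieved results, else 0.0."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def mean_recall_at_k(all_rankings, k):
    """Average Recall@K over (ranked_ids, relevant_id) pairs, one per text query."""
    scores = [recall_at_k(ranked, relevant, k) for ranked, relevant in all_rankings]
    return sum(scores) / len(scores)

# Illustrative example: two queries; the relevant image ranks 1st and 3rd respectively.
queries = [(["img3", "img7", "img1"], "img3"),
           (["img9", "img2", "img5"], "img5")]
print(mean_recall_at_k(queries, 1))   # → 0.5 (only the first query hits at rank 1)
print(mean_recall_at_k(queries, 10))  # → 1.0 (both relevant images are in the top 10)
```

R@K rewards a model for placing the correct image anywhere in its top K results, which is why a model can trail on R@10 yet lead on the stricter R@1, as the abstract describes.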

Original language: English
Journal: IEEE Access
DOIs
Publication status: Accepted/In press - 2021

Keywords

  • Artificial Intelligence
  • Artificial Neural Network
  • Bit error rate
  • Computer Vision
  • Generators
  • Image Processing
  • Image retrieval
  • Search Methods
  • Task analysis
  • Training
  • Visualization
  • Vocabulary

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)

