Visual question answering based on local-scene-aware referring expression generation

Jung Jun Kim, Dong Gyu Lee, Jialin Wu, Hong Gyu Jung, Seong Whan Lee

Research output: Contribution to journal › Article › peer-review

Abstract

Visual question answering requires a deep understanding of both images and natural language. However, most methods focus mainly on visual concepts, such as the relationships between various objects. The limited use of object categories combined with their relationships, or of simple question embeddings, is insufficient for representing complex scenes and explaining decisions. To address this limitation, we propose the use of text expressions generated for images, because such expressions have few structural constraints and can provide richer descriptions of images. The generated expressions can be incorporated with visual features and question embeddings to obtain the question-relevant answer. A joint-embedding multi-head attention network is also proposed to model the three different information modalities with co-attention. We quantitatively and qualitatively evaluated the proposed method on the VQA v2 dataset and compared it with state-of-the-art methods in terms of answer prediction. The quality of the generated expressions was also evaluated on the RefCOCO, RefCOCO+, and RefCOCOg datasets. Experimental results demonstrate the effectiveness of the proposed method and show that it outperformed all of the competing methods in terms of both quantitative and qualitative results.

Original language: English
Pages (from-to): 158-167
Number of pages: 10
Journal: Neural Networks
Volume: 139
DOIs
Publication status: Published - 2021 Jul

Keywords

  • Joint-embedding multi-head attention
  • Referring expression generation
  • Visual question answering

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence

