Multi-view attention network for visual dialog

Sungjin Park, Taesun Whang, Yeochan Yoon, Heuiseok Lim

Research output: Contribution to journal › Article › peer-review

Abstract

Visual dialog is a challenging vision-language task in which a series of questions grounded in a given image must be answered. Resolving the visual dialog task requires a high-level understanding of multiple multimodal inputs (e.g., question, dialog history, and image). Specifically, an agent must (1) determine the semantic intent of the question and (2) align question-relevant textual and visual content across heterogeneous modality inputs. In this paper, we propose the Multi-View Attention Network (MVAN), which leverages multiple views of the heterogeneous inputs through attention mechanisms. MVAN effectively captures question-relevant information from the dialog history with two complementary modules (i.e., Topic Aggregation and Context Matching), and builds multimodal representations through sequential alignment processes (i.e., Modality Alignment). Experimental results on the VisDial v1.0 dataset show the effectiveness of our proposed model, which outperforms previous state-of-the-art methods in both single-model and ensemble settings.
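The abstract describes attending over dialog history with a question representation to extract question-relevant context. As a rough illustration of that general idea (not the paper's actual architecture — module names, dimensions, and the use of plain scaled dot-product attention here are assumptions), a minimal sketch:

```python
import numpy as np

def scaled_dot_attention(query, keys, values):
    """Attend over n context items with a single query vector.

    query: (d,); keys, values: (n, d).
    Returns a query-aware weighted summary of `values`, shape (d,).
    """
    scores = keys @ query / np.sqrt(query.shape[0])
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    return weights @ values

# Toy example: a question vector attending over 4 dialog-history embeddings
# (dimensions and random features are purely illustrative).
rng = np.random.default_rng(0)
q = rng.standard_normal(8)        # question embedding
H = rng.standard_normal((4, 8))   # dialog-history embeddings
summary = scaled_dot_attention(q, H, H)  # question-aware history summary, (8,)
```

In MVAN such attention-derived summaries from the history and image views would then be fused and aligned with the question; the paper itself should be consulted for the exact formulation.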

Original language: English
Article number: 3009
Journal: Applied Sciences (Switzerland)
Volume: 11
Issue number: 7
DOIs
Publication status: Published - 2021 Apr 1

Keywords

  • Attention mechanism
  • Multimodal learning
  • Vision-language
  • Visual dialog

ASJC Scopus subject areas

  • Materials Science(all)
  • Instrumentation
  • Engineering(all)
  • Process Chemistry and Technology
  • Computer Science Applications
  • Fluid Flow and Transfer Processes

