Recurrent neural network-based semantic variational autoencoder for Sequence-to-sequence learning

Myeongjun Jang, Seungwan Seo, Pilsung Kang

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Sequence-to-sequence (Seq2seq) models have played an important role in the recent success of various natural language processing methods, such as machine translation, text summarization, and speech recognition. However, current Seq2seq models have trouble preserving the global latent information of a long sequence of words. The variational autoencoder (VAE) alleviates this problem by learning a continuous semantic space for the input sentence, but it does not solve the problem completely. In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, the RNN semantic variational autoencoder (RNN–SVAE), to better capture the global latent information of a sequence of words. To reflect the meaning of each word in a sentence regardless of its position, we use two approaches: (1) constructing a document information vector from the attention between the final state of the encoder and every prior hidden state, and (2) extracting a semantic vector with a self-attention mechanism. The mean and standard deviation of the continuous semantic space are then learned from this vector so that the variational method can be applied. By using the document information vector and the self-attention mechanism to locate the semantic space of the sentence, the model captures the global latent features of the sentence more effectively. Experimental results on three natural language tasks (language modeling, missing word imputation, and paraphrase identification) confirm that the proposed RNN–SVAE yields higher performance than two benchmark models.
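The encoder side described in the abstract can be summarized in a short sketch. The PyTorch code below is a minimal, illustrative rendering, assuming a GRU encoder and simple dot-product attention between the final hidden state and every prior hidden state; the layer sizes, class and variable names, and the exact attention scoring function are assumptions made for illustration, not the authors' configuration. It shows how a document information vector is formed as an attention-weighted sum of hidden states, and how the mean and standard deviation of the continuous semantic space are obtained before sampling with the reparameterization trick.

```python
# Minimal sketch of the RNN-SVAE encoder idea (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class RNNSVAEEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, latent_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)      # mean of the semantic space
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of the semantic space

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        states, last = self.rnn(self.embed(token_ids))   # states: (batch, seq_len, hidden)
        final_state = last[-1]                            # (batch, hidden)

        # Attention of the final encoder state over every hidden state.
        # Dot-product scoring is an assumption; the paper's scoring function may differ.
        scores = torch.bmm(states, final_state.unsqueeze(2)).squeeze(2)  # (batch, seq_len)
        weights = F.softmax(scores, dim=1)
        doc_vector = torch.bmm(weights.unsqueeze(1), states).squeeze(1)  # (batch, hidden)

        # Variational part: parameterize the continuous semantic space and sample z.
        mu = self.to_mu(doc_vector)
        logvar = self.to_logvar(doc_vector)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return z, mu, logvar


# Tiny usage example on random token ids.
encoder = RNNSVAEEncoder(vocab_size=10000)
z, mu, logvar = encoder(torch.randint(0, 10000, (4, 20)))
print(z.shape)  # torch.Size([4, 64])
```

In the full model, the sampled vector z would condition an RNN decoder, and the usual VAE objective (reconstruction loss plus a KL term) would be optimized; the second approach mentioned in the abstract would presumably derive the semantic vector from a self-attention layer over the hidden states rather than from the final-state query used here.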

Original language: English
Pages (from-to): 59-73
Number of pages: 15
Journal: Information Sciences
ISSN: 0020-0255
Publisher: Elsevier Inc.
Volume: 490
DOI: 10.1016/j.ins.2019.03.066
Publication status: Published - 2019 Jul 1
Externally published: Yes

Keywords

  • Auto-encoder
  • Document information vector
  • Natural language processing
  • Recurrent neural network
  • Self attention mechanism
  • Sequence-to-sequence learning
  • Variational method

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computer Science Applications
  • Information Systems and Management
  • Artificial Intelligence

Cite this

Jang, M., Seo, S., & Kang, P. (2019). Recurrent neural network-based semantic variational autoencoder for Sequence-to-sequence learning. Information Sciences, 490, 59-73. https://doi.org/10.1016/j.ins.2019.03.066

