TY - JOUR
T1 - Recurrent neural network-based semantic variational autoencoder for Sequence-to-sequence learning
AU - Jang, Myeongjun
AU - Seo, Seungwan
AU - Kang, Pilsung
N1 - Funding Information:
We sincerely appreciate the two anonymous reviewers’ valuable comments, especially concerning the self-attention mechanism. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A1B03930729) and Korea Electric Power Corporation (Grant number: R18XA05).
PY - 2019/7
Y1 - 2019/7
N2 - Sequence-to-sequence (Seq2seq) models have played an important role in the recent success of various natural language processing methods, such as machine translation, text summarization, and speech recognition. However, current Seq2seq models have trouble preserving global latent information from a long sequence of words. The variational autoencoder (VAE) alleviates this problem by learning a continuous semantic space for the input sentence, but it does not solve the problem completely. In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, the RNN semantic variational autoencoder (RNN–SVAE), to better capture the global latent information of a sequence of words. To reflect the meaning of each word in a sentence regardless of its position, we employ two approaches: (1) constructing a document information vector from the attention between the final encoder state and every prior hidden state, and (2) extracting a semantic vector with the self-attention mechanism. The mean and standard deviation of the continuous semantic space are then learned from this vector to take advantage of the variational method. Using the document information vector and the self-attention mechanism to locate the semantic space of the sentence makes it possible to better capture its global latent features. Experimental results on three natural language tasks (language modeling, missing word imputation, and paraphrase identification) confirm that the proposed RNN–SVAE yields higher performance than two benchmark models.
AB - Sequence-to-sequence (Seq2seq) models have played an important role in the recent success of various natural language processing methods, such as machine translation, text summarization, and speech recognition. However, current Seq2seq models have trouble preserving global latent information from a long sequence of words. The variational autoencoder (VAE) alleviates this problem by learning a continuous semantic space for the input sentence, but it does not solve the problem completely. In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, the RNN semantic variational autoencoder (RNN–SVAE), to better capture the global latent information of a sequence of words. To reflect the meaning of each word in a sentence regardless of its position, we employ two approaches: (1) constructing a document information vector from the attention between the final encoder state and every prior hidden state, and (2) extracting a semantic vector with the self-attention mechanism. The mean and standard deviation of the continuous semantic space are then learned from this vector to take advantage of the variational method. Using the document information vector and the self-attention mechanism to locate the semantic space of the sentence makes it possible to better capture its global latent features. Experimental results on three natural language tasks (language modeling, missing word imputation, and paraphrase identification) confirm that the proposed RNN–SVAE yields higher performance than two benchmark models.
KW - Auto-encoder
KW - Document information vector
KW - Natural language processing
KW - Recurrent neural network
KW - Self-attention mechanism
KW - Sequence-to-sequence learning
KW - Variational method
UR - http://www.scopus.com/inward/record.url?scp=85063423035&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063423035&partnerID=8YFLogxK
U2 - 10.1016/j.ins.2019.03.066
DO - 10.1016/j.ins.2019.03.066
M3 - Article
AN - SCOPUS:85063423035
VL - 490
SP - 59
EP - 73
JO - Information Sciences
JF - Information Sciences
SN - 0020-0255
ER -