Abstract
Sequence-to-sequence deep neural networks with attention mechanisms have shown superior performance across various domains where the sizes of the input and output sequences differ. However, when the input sequences are much longer than the output sequences and the characteristics of the input sequence change within the span of a single output token, conventional attention mechanisms are ill-suited, because they use only a single context vector for each output token. In this paper, we propose a double-attention mechanism that handles this problem by using two context vectors that cover the left and the right parts of the input focus separately. The effectiveness of the proposed method is evaluated using speech recognition experiments on the TIMIT corpus.
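The abstract does not spell out the exact formulation, but a minimal PyTorch sketch of the stated idea, two context vectors covering the encoder frames to the left and right of the attention focus, might look like the following. The additive scoring function, the choice of the alignment argmax as the split point, and the renormalisation of the split weights are all illustrative assumptions, not the paper's definitive method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleAttention(nn.Module):
    """Illustrative double-attention: split the encoder timeline at the
    attention focus and pool the left and right halves into separate
    context vectors (hypothetical formulation for this sketch)."""

    def __init__(self, enc_dim: int, dec_dim: int, attn_dim: int):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, attn_dim)
        self.dec_proj = nn.Linear(dec_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, enc_states: torch.Tensor, dec_state: torch.Tensor):
        # enc_states: (B, T, enc_dim), dec_state: (B, dec_dim)
        energy = self.score(torch.tanh(
            self.enc_proj(enc_states)
            + self.dec_proj(dec_state).unsqueeze(1)
        )).squeeze(-1)                        # (B, T) additive scores
        alpha = F.softmax(energy, dim=-1)     # global alignment weights
        focus = alpha.argmax(dim=-1)          # (B,) assumed focus position

        t = torch.arange(enc_states.size(1), device=enc_states.device)
        left = (t.unsqueeze(0) <= focus.unsqueeze(1)).float()  # frames up to focus
        right = 1.0 - left                                     # frames after focus

        def masked_context(mask: torch.Tensor) -> torch.Tensor:
            # Renormalise the masked weights so each half sums to one
            w = alpha * mask
            w = w / w.sum(dim=-1, keepdim=True).clamp_min(1e-8)
            return torch.bmm(w.unsqueeze(1), enc_states).squeeze(1)

        # Two context vectors, one per side of the focus, fed to the decoder
        return torch.cat([masked_context(left), masked_context(right)], dim=-1)
```

In this sketch the decoder would consume the concatenated (B, 2 × enc_dim) output in place of the usual single context vector, which is the change the abstract attributes to the double-attention mechanism.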
| Original language | English |
|---|---|
| Pages (from-to) | 476-482 |
| Number of pages | 7 |
| Journal | Journal of the Acoustical Society of Korea |
| Volume | 39 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 2020 |
Keywords
- Attention
- Automatic speech recognition
- Deep neural network
- Sequence-to-sequence
ASJC Scopus subject areas
- Acoustics and Ultrasonics
- Instrumentation
- Applied Mathematics
- Signal Processing
- Speech and Hearing