TY - GEN
T1 - Joint modeling of articulatory and acoustic spaces for continuous speech recognition tasks
AU - Mitra, Vikramjit
AU - Sivaraman, Ganesh
AU - Bartels, Chris
AU - Nam, Hosung
AU - Wang, Wen
AU - Espy-Wilson, Carol
AU - Vergyri, Dimitra
AU - Franco, Horacio
N1 - Funding Information:
This research was supported by NSF Grants IIS-0964556, IIS-1162046, BCS-1435831, and IIS-1161962.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/16
Y1 - 2017/6/16
AB - Articulatory information can effectively model variability in speech and can improve speech recognition performance under varying acoustic conditions. Learning speaker-independent articulatory models has always been challenging, as speaker-specific information in the articulatory and acoustic spaces increases the complexity of speech-to-articulatory inverse modeling, which is already an ill-posed problem due to its inherent nonlinearity and non-uniqueness. This paper investigates the use of deep neural networks (DNNs) and convolutional neural networks (CNNs) for mapping speech data into its corresponding articulatory space. Our results indicate that the CNN models perform better than their DNN counterparts for speech inversion. In addition, we used the inverse models to generate articulatory trajectories from speech for three standard speech recognition tasks. To effectively model the articulatory features' temporal modulations while retaining the acoustic features' spatiotemporal signatures, we explored a joint modeling strategy that simultaneously learns both the acoustic and articulatory spaces. The results from multiple speech recognition tasks indicate that articulatory features can improve recognition performance when the acoustic and articulatory spaces are jointly learned with one common objective function.
KW - articulatory trajectories
KW - automatic speech recognition
KW - convolutional neural networks
KW - hybrid convolutional neural networks
KW - time-frequency convolution
KW - vocal tract variables
UR - http://www.scopus.com/inward/record.url?scp=85023752222&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85023752222&partnerID=8YFLogxK
U2 - 10.1109/ICASSP.2017.7953149
DO - 10.1109/ICASSP.2017.7953149
M3 - Conference contribution
AN - SCOPUS:85023752222
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 5205
EP - 5209
BT - 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017
Y2 - 5 March 2017 through 9 March 2017
ER -