TY - JOUR
T1 - A novel discriminative feature extraction for acoustic scene classification using RNN based source separation
AU - Mun, Seongkyu
AU - Shon, Suwon
AU - Kim, Wooil
AU - Han, David K.
AU - Ko, Hanseok
N1 - Funding Information:
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA2386-16-1-4130, and the authors would like to thank the anonymous reviewers for their valuable comments.
Publisher Copyright:
Copyright © 2017 The Institute of Electronics, Information and Communication Engineers.
PY - 2017/12
Y1 - 2017/12
N2 - Various types of classifiers and feature extraction methods for acoustic scene classification have recently been proposed in the IEEE Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 Challenge Task 1. The results of the final evaluation, however, show that even the top 10 ranked teams achieved extremely low accuracy on particular pairs of classes with similar sounds. Because such sound classes are difficult to distinguish even by human ears, the conventional deep learning based feature extraction methods used by most DCASE participating teams are considered to face performance limitations. To address this low performance on similar class pairs, this letter proposes to employ recurrent neural network (RNN) based source separation for each class prior to the classification step. Since the RNN structure can effectively extract the sound components of a trained class, its mid-layer can be regarded as capturing discriminative information about that class. This letter therefore proposes to use this mid-layer information as a novel discriminative feature. The proposed feature yields an average classification rate improvement of 2.3% over the conventional method, which uses additional classifiers to handle similar class pairs.
AB - Various types of classifiers and feature extraction methods for acoustic scene classification have recently been proposed in the IEEE Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 Challenge Task 1. The results of the final evaluation, however, show that even the top 10 ranked teams achieved extremely low accuracy on particular pairs of classes with similar sounds. Because such sound classes are difficult to distinguish even by human ears, the conventional deep learning based feature extraction methods used by most DCASE participating teams are considered to face performance limitations. To address this low performance on similar class pairs, this letter proposes to employ recurrent neural network (RNN) based source separation for each class prior to the classification step. Since the RNN structure can effectively extract the sound components of a trained class, its mid-layer can be regarded as capturing discriminative information about that class. This letter therefore proposes to use this mid-layer information as a novel discriminative feature. The proposed feature yields an average classification rate improvement of 2.3% over the conventional method, which uses additional classifiers to handle similar class pairs.
KW - Acoustic scene classification
KW - Bottleneck feature
KW - Recurrent neural network
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85038390088&partnerID=8YFLogxK
U2 - 10.1587/transinf.2017EDL8132
DO - 10.1587/transinf.2017EDL8132
M3 - Article
AN - SCOPUS:85038390088
SN - 0916-8532
VL - E100-D
SP - 3041
EP - 3044
JO - IEICE Transactions on Information and Systems
JF - IEICE Transactions on Information and Systems
IS - 12
ER -