TY - GEN
T1 - EEG representation in deep convolutional neural networks for classification of motor imagery
AU - Robinson, Neethu
AU - Lee, Seong-Whan
AU - Guan, Cuntai
N1 - Funding Information:
This work was supported by the Agency for Science, Technology and Research, Singapore (No. IAF311022), and the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017-0-00451).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - With deep learning emerging as a powerful machine learning tool for building Brain-Computer Interface (BCI) systems, researchers are investigating different types of network architectures and representations of brain activity to attain classification accuracy superior to that of state-of-the-art machine learning approaches, which rely on processed signals and optimally extracted features. This paper presents a deep learning-driven electroencephalography (EEG) BCI system that decodes hand motor imagery using a deep convolutional neural network architecture, with a spectrally localized time-domain representation of multi-channel EEG as input. A significant increase in decoding accuracy of 6.47% is obtained compared to a wideband EEG representation. We further illustrate the movement-class-specific feature patterns for both architectures and demonstrate that a greater difference between classes is observed with the proposed architecture. We conclude that a network trained to take into account the dynamic spatial interactions in distinct frequency bands of EEG can offer better decoding performance and aid in the interpretation of learned features.
AB - With deep learning emerging as a powerful machine learning tool for building Brain-Computer Interface (BCI) systems, researchers are investigating different types of network architectures and representations of brain activity to attain classification accuracy superior to that of state-of-the-art machine learning approaches, which rely on processed signals and optimally extracted features. This paper presents a deep learning-driven electroencephalography (EEG) BCI system that decodes hand motor imagery using a deep convolutional neural network architecture, with a spectrally localized time-domain representation of multi-channel EEG as input. A significant increase in decoding accuracy of 6.47% is obtained compared to a wideband EEG representation. We further illustrate the movement-class-specific feature patterns for both architectures and demonstrate that a greater difference between classes is observed with the proposed architecture. We conclude that a network trained to take into account the dynamic spatial interactions in distinct frequency bands of EEG can offer better decoding performance and aid in the interpretation of learned features.
UR - http://www.scopus.com/inward/record.url?scp=85076757456&partnerID=8YFLogxK
U2 - 10.1109/SMC.2019.8914184
DO - 10.1109/SMC.2019.8914184
M3 - Conference contribution
AN - SCOPUS:85076757456
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 1322
EP - 1326
BT - 2019 IEEE International Conference on Systems, Man and Cybernetics, SMC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE International Conference on Systems, Man and Cybernetics, SMC 2019
Y2 - 6 October 2019 through 9 October 2019
ER -