TY - GEN
T1 - A Multi-view CNN with Novel Variance Layer for Motor Imagery Brain Computer Interface
AU - Mane, Ravikiran
AU - Robinson, Neethu
AU - Vinod, A. P.
AU - Lee, Seong-Whan
AU - Guan, Cuntai
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/7
Y1 - 2020/7
AB - Accurate and robust classification of Motor Imagery (MI) from Electroencephalography (EEG) signals is among the most challenging tasks in the Brain-Computer Interface (BCI) field. To address this challenge, this paper proposes a novel, neurophysiologically inspired convolutional neural network (CNN) named Filter-Bank Convolutional Network (FBCNet) for MI classification. To capture the neurophysiological signatures of MI, FBCNet first creates a multi-view representation of the data by bandpass-filtering the EEG into multiple frequency bands. Next, spatially discriminative patterns for each view are learned using a CNN layer. Finally, the temporal information is aggregated using a new variance layer, and a fully connected layer classifies the resulting features into MI classes. We evaluate the performance of FBCNet on a publicly available dataset from Korea University for classification of left- vs. right-hand MI in a subject-specific 10-fold cross-validation setting. Results show that FBCNet achieves more than 6.7% higher accuracy than other state-of-the-art deep learning architectures while requiring less than 1% of their learnable parameters. We explain the higher classification accuracy of FBCNet using feature visualization, which shows that FBCNet learns interpretable and highly generalizable discriminative features. We provide the source code of FBCNet for reproducibility of the results.
UR - http://www.scopus.com/inward/record.url?scp=85091028186&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091028186&partnerID=8YFLogxK
U2 - 10.1109/EMBC44109.2020.9175874
DO - 10.1109/EMBC44109.2020.9175874
M3 - Conference contribution
AN - SCOPUS:85091028186
T3 - Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
SP - 2950
EP - 2953
BT - 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2020
Y2 - 20 July 2020 through 24 July 2020
ER -
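
The abstract above describes a filter-bank, multi-view CNN whose temporal information is aggregated by a variance layer before a fully connected classifier. Below is a minimal PyTorch sketch of that pipeline for orientation only; the class names (VarianceLayer, FBCNetSketch), the band count, electrode count, and layer sizes are illustrative assumptions and not the authors' released FBCNet implementation (the paper states the original source code is provided separately).

```python
# Minimal sketch of an FBCNet-style pipeline as described in the abstract.
# All names and hyperparameters here are illustrative assumptions, not the
# authors' released code.
import torch
import torch.nn as nn


class VarianceLayer(nn.Module):
    """Aggregates temporal information by taking the variance over time."""

    def forward(self, x):
        # x: (batch, features, time) -> (batch, features)
        return x.var(dim=-1)


class FBCNetSketch(nn.Module):
    def __init__(self, n_eeg_channels=20, n_bands=9,
                 spatial_filters_per_band=4, n_classes=2):
        super().__init__()
        # Depthwise spatial convolution: learns spatial filters separately
        # for each bandpass-filtered "view" of the EEG.
        self.spatial_conv = nn.Conv2d(
            in_channels=n_bands,
            out_channels=n_bands * spatial_filters_per_band,
            kernel_size=(n_eeg_channels, 1),
            groups=n_bands,
        )
        self.bn = nn.BatchNorm2d(n_bands * spatial_filters_per_band)
        self.variance = VarianceLayer()
        self.classifier = nn.Linear(n_bands * spatial_filters_per_band, n_classes)

    def forward(self, x):
        # x: (batch, n_bands, n_eeg_channels, time), i.e. EEG already
        # bandpass-filtered into multiple frequency bands (the multi-view input).
        x = self.bn(self.spatial_conv(x))       # (batch, bands*filters, 1, time)
        x = x.squeeze(2)                         # (batch, bands*filters, time)
        x = torch.log(self.variance(x) + 1e-6)   # temporal aggregation via variance
        return self.classifier(x)                # MI class logits


# Example: one 4-second trial at 250 Hz, 20 electrodes, 9 frequency bands.
if __name__ == "__main__":
    model = FBCNetSketch()
    trial = torch.randn(1, 9, 20, 1000)
    print(model(trial).shape)  # torch.Size([1, 2])
```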