EEG Representations of Spatial and Temporal Features in Imagined Speech and Overt Speech

Seo Hyun Lee, Minji Lee, Seong Whan Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Imagined speech is an emerging paradigm for intuitive control of brain-computer interface based communication systems. Although the decoding performance for imagined speech is improving with actively proposed architectures, the fundamental question of what component these models are actually decoding remains open. Since imagined speech refers to the internal process of producing speech, it may naturally share distinctive features with overt speech. In this paper, we investigate the close relationship between the spatial and temporal features of imagined speech and overt speech using electroencephalography signals. Using common spatial pattern features, we obtained average thirteen-class classification accuracies of 16.2% and 59.9% (chance rate = 7.7%) for imagined speech and overt speech, respectively. Although overt speech showed significantly higher classification performance than imagined speech, we found potentially similar common spatial patterns for identical classes of imagined speech and overt speech. Furthermore, in the temporal domain, we observed analogous grand-averaged potentials for the most distinguishable classes in the two speech paradigms. Specifically, the correlation of the amplitude between imagined speech and overt speech was 0.71 for the class with the highest true positive rate. The similar spatial and temporal features of the two paradigms may provide a key to bottom-up decoding of imagined speech, suggesting the possibility of robust multiclass classification of imagined speech. This could be a milestone toward comprehensive decoding of speech-related paradigms based on their underlying patterns.
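The abstract references common spatial pattern (CSP) features and an amplitude correlation between grand-averaged waveforms. Below is a minimal sketch of how such a pipeline is typically built; the paper publishes no code, so the function names, data shapes, and the extension to thirteen classes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X_a, X_b, n_pairs=3):
    # X_a, X_b: (n_trials, n_channels, n_samples) EEG epochs for two classes.
    def mean_cov(X):
        # Trial-averaged, trace-normalized spatial covariance.
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)

    C_a, C_b = mean_cov(X_a), mean_cov(X_b)
    # Generalized eigenproblem C_a w = lambda (C_a + C_b) w;
    # scipy returns eigenvalues in ascending order.
    vals, vecs = eigh(C_a, C_a + C_b)
    # Filters at both ends of the spectrum are the most discriminative.
    picks = np.r_[np.arange(n_pairs), np.arange(len(vals) - n_pairs, len(vals))]
    return vecs[:, picks].T                     # (2 * n_pairs, n_channels)

def csp_features(X, W):
    # Log of normalized variance of the spatially filtered trials.
    Z = np.einsum("fc,ncs->nfs", W, X)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

def erp_correlation(erp_a, erp_b):
    # Pearson correlation between two grand-averaged waveforms, the kind of
    # measure behind the reported 0.71 amplitude correlation.
    return np.corrcoef(erp_a, erp_b)[0, 1]
```

For a thirteen-class setting like the one reported here, binary CSP filters would commonly be computed one-vs-rest per class, with the stacked log-variance features fed to a standard classifier such as LDA; the abstract does not specify that extension, so treat it as one plausible reading rather than the authors' method.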

Original language: English
Title of host publication: Pattern Recognition - 5th Asian Conference, ACPR 2019, Revised Selected Papers
Editors: Shivakumara Palaiahnakote, Gabriella Sanniti di Baja, Liang Wang, Wei Qi Yan
Publisher: Springer
Pages: 387-400
Number of pages: 14
ISBN (Print): 9783030412982
DOIs: https://doi.org/10.1007/978-3-030-41299-9_30
Publication status: Published - 2020 Jan 1
Event: 5th Asian Conference on Pattern Recognition, ACPR 2019 - Auckland, New Zealand
Duration: 2019 Nov 26 - 2019 Nov 29

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12047 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 5th Asian Conference on Pattern Recognition, ACPR 2019
Country: New Zealand
City: Auckland
Period: 19/11/26 - 19/11/29

Keywords

  • Brain-computer interface
  • Common spatial pattern
  • Electroencephalography
  • Imagined speech
  • Overt speech

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Lee, S. H., Lee, M., & Lee, S. W. (2020). EEG Representations of Spatial and Temporal Features in Imagined Speech and Overt Speech. In S. Palaiahnakote, G. Sanniti di Baja, L. Wang, & W. Q. Yan (Eds.), Pattern Recognition - 5th Asian Conference, ACPR 2019, Revised Selected Papers (pp. 387-400). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12047 LNCS). Springer. https://doi.org/10.1007/978-3-030-41299-9_30