Improving zero-training brain-computer interfaces by mixing model estimators

T. Verhoeven, D. Hübner, M. Tangermann, Klaus Müller, J. Dambre, P. J. Kindermans

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method's strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.
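To make the Approach concrete: at a high level, the proposal amounts to forming a single linear ERP decoder from the two unsupervised estimates, with the more reliable estimate receiving the larger weight. The Python sketch below is purely illustrative and assumes a convex mixture weighted by inverse variance; the names (w_em, w_llp, gamma), the feature dimensions, and the resampling-based variance estimates are assumptions made for this example, not the authors' exact formulation.

    # Illustrative sketch only: combining two unsupervised linear ERP decoders
    # by a convex mixture. Names and the variance-based weighting rule below
    # are assumptions for this example, not the published algorithm.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Assume both unsupervised methods produce a linear decoder over the same
    # spatio-temporal EEG feature space (dimensions are made up here).
    n_features = 31 * 6                    # e.g. 31 channels x 6 time intervals
    w_em = rng.normal(size=n_features)     # EM-based estimate: fast but high variance
    w_llp = rng.normal(size=n_features)    # LLP estimate: consistent but slower

    def mix_decoders(w_a, w_b, var_a, var_b):
        """Convex combination weighted by inverse (estimated) variance.

        Under the simplifying assumption that the two estimation errors are
        uncorrelated, this choice of gamma minimises the mixture's variance.
        """
        gamma = var_b / (var_a + var_b)
        return gamma * w_a + (1.0 - gamma) * w_b, gamma

    # Hypothetical variance estimates, e.g. obtained by resampling the
    # unlabeled epochs recorded so far.
    var_em, var_llp = 0.8, 0.2
    w_mix, gamma = mix_decoders(w_em, w_llp, var_em, var_llp)
    print(f"mixing coefficient gamma = {gamma:.2f}")

    # The mixed decoder is applied like any linear ERP classifier: score each
    # epoch and select the symbol whose stimuli accumulate the highest score.
    epoch = rng.normal(size=n_features)    # one simulated feature vector
    print("decoder output:", float(w_mix @ epoch))

In an online setting such as the spelling experiment described above, both estimates and the mixing coefficient would be recomputed as additional unlabeled EEG epochs arrive, so the combined decoder keeps adapting during actual use.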

Original language: English
Article number: 036021
Journal: Journal of Neural Engineering
Volume: 14
Issue number: 3
DOI: 10.1088/1741-2552/aa6639
ISSN: 1741-2560
Publisher: IOP Publishing Ltd.
Publication status: Published - 2017 Apr 6

Keywords

  • brain-computer interface
  • event-related potential
  • machine learning
  • P300 speller

ASJC Scopus subject areas

  • Biomedical Engineering
  • Cellular and Molecular Neuroscience

Cite this

Verhoeven, T., Hübner, D., Tangermann, M., Müller, K., Dambre, J., & Kindermans, P. J. (2017). Improving zero-training brain-computer interfaces by mixing model estimators. Journal of Neural Engineering, 14(3), 036021. https://doi.org/10.1088/1741-2552/aa6639
