Abstract
Brain–computer interfaces (BCIs) can help people with limited motor abilities interact with their environment without external assistance. A major challenge in electroencephalogram (EEG)-based BCI development and research is cross-subject classification of motor imagery data. Because EEG signals are highly individualized, it has been difficult to develop a cross-subject classification method that predicts a subject’s intention with sufficiently high accuracy. In this study, we propose a multi-branch 2D convolutional neural network (CNN) that uses different hyperparameter values in each branch and is therefore more flexible to data from different subjects. Our model, EEGNet Fusion, achieves 84.1% and 83.8% accuracy when tested on the 103-subject eegmmidb dataset for executed and imagined motor actions, respectively. These results are statistically significantly higher than those of three state-of-the-art CNN classifiers: EEGNet, ShallowConvNet, and DeepConvNet. However, the computational cost of the proposed model is up to four times that of the comparison model with the lowest computational cost.
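The multi-branch design described in the abstract lends itself to a compact Keras illustration. The following is a minimal sketch, not the authors' published implementation: the input dimensions (64 EEG channels, 480 time samples), the number of branches, the EEGNet-style layer ordering, and all filter counts and kernel lengths are assumed values chosen for illustration.

```python
# Hypothetical sketch of a multi-branch fusion CNN in Keras.
# All shapes and hyperparameters below are illustrative assumptions,
# not the exact configuration reported in the paper.
from tensorflow.keras import layers, models

def eeg_branch(inputs, n_temporal_filters, kernel_length, dropout=0.5):
    """One EEGNet-style branch: temporal conv, then depthwise spatial conv."""
    # Temporal convolution along the time axis.
    x = layers.Conv2D(n_temporal_filters, (1, kernel_length),
                      padding="same", use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    # Depthwise convolution across the electrode axis (spatial filtering).
    x = layers.DepthwiseConv2D((inputs.shape[1], 1), depth_multiplier=2,
                               use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 4))(x)
    x = layers.Dropout(dropout)(x)
    return layers.Flatten()(x)

def build_fusion_model(n_channels=64, n_samples=480, n_classes=2):
    inputs = layers.Input(shape=(n_channels, n_samples, 1))
    # Three branches with different hyperparameter values, fused by
    # concatenating their flattened feature vectors.
    branches = [
        eeg_branch(inputs, n_temporal_filters=8, kernel_length=64),
        eeg_branch(inputs, n_temporal_filters=16, kernel_length=96),
        eeg_branch(inputs, n_temporal_filters=32, kernel_length=128),
    ]
    fused = layers.Concatenate()(branches)
    outputs = layers.Dense(n_classes, activation="softmax")(fused)
    return models.Model(inputs, outputs)

model = build_fusion_model()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Concatenating the flattened branch features lets the final dense layer draw on whichever branch's filter scale best fits a given subject, which is one plausible reading of the flexibility argument made in the abstract.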
Original language | English |
---|---|
Article number | 72 |
Pages (from-to) | 1-9 |
Number of pages | 9 |
Journal | Computers |
Volume | 9 |
Issue number | 3 |
DOIs | |
Publication status | Published - 5 Sept 2020 |
Bibliographical note
Funding Information: Naveed Muhammad has been funded by the European Social Fund via the IT Academy program.
Publisher Copyright: © 2020 by the authors. Licensee MDPI, Basel, Switzerland.