Background
In the past few years, the motor imagery brain-computer interface (MIBCI) has become a valuable assistive technology for people with disabilities. However, effectively improving motor imagery (MI) classification performance by learning discriminative and robust features remains a challenging problem.
Methods
In this study, we propose a novel loss function, called correntropy-based center loss (CCL), as the supervision signal for training a convolutional neural network (CNN) in the MI classification task. Under the joint supervision of the softmax loss and the CCL, the CNN learns deep discriminative features with large inter-class dispersion and small intra-class variation. Moreover, the CCL also effectively reduces the negative effect of noise during training, which is essential for accurate MI classification.
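To make the joint supervision concrete, the following is a minimal sketch (not the authors' released implementation), assuming the CCL takes the common correntropy-induced form L_CCL = (1/N) * sum_i (1 - exp(-||f_i - c_{y_i}||^2 / (2*sigma^2))), where f_i is the deep feature of sample i and c_{y_i} is its learnable class center; the kernel width sigma and the weighting factor lambda_c are illustrative assumptions.

    # Sketch of joint supervision: softmax (cross-entropy) loss plus a
    # correntropy-based center loss; form and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CorrentropyCenterLoss(nn.Module):
        """Bounded penalty on the distance between each feature and its class
        center; the Gaussian kernel saturates for outliers, limiting the
        influence of noisy samples."""
        def __init__(self, num_classes: int, feat_dim: int, sigma: float = 1.0):
            super().__init__()
            self.sigma = sigma
            # Learnable class centers, updated by back-propagation.
            self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

        def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
            centers = self.centers[labels]                     # (N, feat_dim)
            sq_dist = (features - centers).pow(2).sum(dim=1)   # (N,)
            # Correntropy-induced loss: 1 minus the Gaussian kernel of the distance.
            return (1.0 - torch.exp(-sq_dist / (2.0 * self.sigma ** 2))).mean()

    def joint_loss(logits, features, labels, ccl, lambda_c: float = 0.01):
        # Total supervision signal = softmax loss + weighted CCL.
        return F.cross_entropy(logits, labels) + lambda_c * ccl(features, labels)

In such a setup, logits and features would come from the CNN's classification head and penultimate layer, respectively, and lambda_c balances discriminability against the compactness enforced by the CCL.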
Results
We perform extensive experiments on two well-known public MI datasets, BCI Competition IV-2a and IV-2b, to demonstrate the effectiveness of the proposed loss. The results show that CNNs trained with this joint supervision achieve 78.65% on IV-2a and 86.10% on IV-2b, outperforming the baseline approaches.
Conclusion
The proposed CCL helps the CNN learn deep features that are both discriminative and robust for the MI classification task in BCI rehabilitation applications.