Online BCI systems: cross-subject motor imagery classification based on weighted time-domain feature extraction methods

Motor imagery electroencephalogram (MI-EEG) is becoming increasingly important. This paper addresses online signal recognition for cross-subject motor imagery by finding features common to multiple subjects, improving the generality of the classification model. We analysed EEG data from left/right-hand motor imagery of eight subjects and proposed a weighted time-domain (WTD) feature extraction method based on a weighted channel screening method. The classification model, built by combining this feature extraction method with a support vector machine (SVM) classifier, was fast and achieved good cross-subject classification accuracy (average offline accuracy of 91.39%). We also built an online control system for an asynchronous brain-controlled wheelchair with good performance: the average online motor imagery classification accuracy was 81.67%, and the average response time was 1.36 s. This method contributes to bringing online brain-computer interface (BCI) systems out of the laboratory and into wider application.


Introduction
The Brain-Computer Interface (BCI) system enables communication between the human brain and devices (Wolpaw et al. 2002). One of the most common BCI paradigms is Electroencephalogram (EEG) Motor Imagery (MI), which has been extensively used in smart healthcare applications such as post-stroke rehabilitation and mobile assistive robots (Altaheri et al. 2023). As a non-muscular channel, BCI provides patients with severe neuromuscular disease the ability to interact with the outside world. In conventional computer interfaces, the external action that produces an effect (pressing a button, clicking an icon, issuing a command in the case of a voice interface) is one of the basic stages of interaction. The BCI omits this stage and uses only brain activity to issue commands (Kopeć et al. 2021).
Due to the specificity of individual subjects and the high sensitivity of spontaneous EEG, there is inter-subject variation in the signal. Cognitive and neurological factors such as brain function, anatomical features, and mood may have an impact when subjects perform motor imagery tasks. For instance, cognitive factors such as fatigue, memory load, attention, and reaction time modulate transient brain activity, resulting in different BCI performances (Samanta, Chatterjee, and Bose 2020). This inter-subject variability limits the use of BCI as an interaction model outside the laboratory. To develop BCI solutions for the real world, Gordon et al. (2017) argued that models robust across subjects and domains are necessary. The features extracted by a model trained on specific subjects are inherently bounded, and improving the generalisation of extracted features is a significant challenge for MI-EEG decoding (Jia, Song, and Xie 2023).
The time required to collect calibration data for each subject makes it difficult for BCI to be widely available. Our goal was to find features common to multiple subjects, construct classification methods that improve the cross-subject generalisability of MI-BCI, and apply the constructed methods to an asynchronous brain-controlled wheelchair system. First, we analysed the EEG data of left/right-hand motor imagery of eight subjects to find common features. The results showed that the subjects exhibited varying degrees of contralateral specificity in the time-domain dimension of the motor imagery signal. An important factor affecting the efficiency of an online BCI system is the number of channels. For control accuracy, MI-BCI usually requires multiple channels of EEG data, but too many channels lead to long processing times, hindering application in real-life situations (Dai et al. 2020). In this paper, we propose a weighted time-domain feature extraction method: channels are screened by extracting and analysing time-domain features, channels with different influences are assigned corresponding weights, and the constructed feature vectors are fed into a support vector machine (SVM) model for classification training. Finally, we built a wheelchair control system based on the weighted time-domain feature extraction method and carried out online wheelchair control experiments to verify its feasibility.
The main contributions of this paper are as follows: (1) A weighted time-domain feature extraction method is proposed to improve the classification accuracy and response speed of cross-subject left- and right-hand motor imagery tasks. The usability of the method is verified by applying the model to the online control of a brain-controlled wheelchair. (2) A weight-based channel screening method is proposed, which retains channel information better than the traditional quantitative dominance screening method.
This paper is organised as follows: Section 2 describes common feature extraction and classification methods and related work on cross-subject motor imagery; Section 3 describes the dataset, signal processing, and feature extraction methods; Section 4 describes the experimental details and results; Section 5 discusses and analyses the results; and Section 6 concludes the study.

Related work
A range of brain activity monitoring modalities is available for brain research, including EEG, Magnetoencephalography (MEG), and Magnetic Resonance Imaging (MRI). However, EEG is the preferred choice for BCI systems because of its portable and non-invasive nature (Maslova et al. 2023). An EEG-based BCI is a system that provides a pathway between the brain and external devices by interpreting EEG (Värbu, Muhammad, and Muhammad 2022). Since the pioneering work in the 1970s, much progress has been made in BCI techniques and in the reliability of brain activity classification using EEG signals. The EEG sensor has become a prominent sensor in the study of brain activity (Soufineyestani, Dowling, and Khan 2020). In the last few decades, BCI research has focused predominantly on clinical applications, notably to enable severely disabled people to interact with the environment. However, recent studies rely mostly on non-invasive EEG devices, indicating that BCI might be ready to be used outside laboratories (Douibi et al. 2021). Therefore, this study aims to improve the accuracy of cross-subject brain activity classification, with the use of BCI outside the laboratory as one of its objectives.

BCI paradigms
EEG-based BCI systems are mainly divided into three major paradigms: Motor Imagery (MI), Event-Related Potential (ERP), and Steady-State Visually Evoked Potential (SSVEP) (Lee, Kim et al. 2019). SSVEP and ERP are bottom-up paradigms that require less training to achieve activation and are easier for users to learn. However, Stawicki et al. (2017) concluded that users are prone to fatigue because they must maintain attention to receive stimuli for long periods; less than half of the subjects in their study found the evoked-stimulus system easy to use. In real-life scenarios, it is difficult to accurately determine the user's intention and provide evoked stimuli, and the user needs to issue commands autonomously. Spontaneous BCI is therefore more advantageous in terms of user experience and practicality. In a spontaneous system, the user performs a mental task to produce changes in brain signals that the BCI can detect, such as a motor imagery task that involves imagining limb movements. As a type of spontaneous signal, the motor imagery signal is a natural fit for BCI systems aimed at motor control. Usually, variation in the mu rhythm (8-12 Hz) and beta rhythm (18-26 Hz) is used to identify motor imagery (Wolpaw et al. 2002). The BCI system aims to translate the changes observed in the mu and beta rhythms into meaningful commands (Gaur et al. 2021). When activity in a specific frequency band increases, it is referred to as Event-Related Synchronisation (ERS); a decrease in a specific frequency band is referred to as Event-Related Desynchronisation (ERD). ERD/ERS shows contralateral specificity during motor imagery, i.e. when imagining the left hand, the ERS phenomenon occurs with right-hemisphere activation and ERD phenomena with left-hemisphere inhibition (Pfurtscheller 1992). Using the motor imagery EEG signal as an input to a BCI is based on this ERD/ERS feature.
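The ERD/ERS quantity described above can be sketched in a few lines: band-pass the signal to the mu rhythm, square it to obtain instantaneous power, and express the task-period power as a percentage change relative to a baseline interval. This is a minimal illustration on synthetic data (the trial array, sampling rate layout, and interval boundaries are assumptions for demonstration, not the paper's pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical single-channel trial: 250 Hz, 1 s baseline + 1.5 s task.
fs = 250
rng = np.random.default_rng(0)
trial = rng.standard_normal(int(2.5 * fs))

# Band-pass to the mu rhythm (8-12 Hz), then square for instantaneous power.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
mu_power = filtfilt(b, a, trial) ** 2

baseline_power = mu_power[:fs].mean()   # reference interval (-1000 ms to 0 ms)
task_power = mu_power[fs:].mean()       # imagery interval (0 ms to 1500 ms)

# Percent change relative to baseline: negative values indicate
# desynchronisation (ERD), positive values synchronisation (ERS).
erd_percent = (task_power - baseline_power) / baseline_power * 100
```

On real data, the sign of `erd_percent` over contralateral sensorimotor channels is what distinguishes left- from right-hand imagery.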

Feature extraction and classification methods
Extracting effective features from raw EEG signals to improve the classification accuracy of MI applications on BCIs remains a significant challenge (Xu et al. 2020). Traditionally, features are extracted from the time, frequency, or time-frequency domains for MI pattern recognition by classifiers (Xu et al. 2020). Various features can be derived from three distinctive domains (i.e. spatial, temporal, and spectral) (Lee, Kwon et al. 2019). The Common Spatial Patterns (CSP) algorithm is widely used for extracting spatial features and shows strengths in MI tasks such as left- and right-hand movements (Pfurtscheller 1992). Power Spectral Density (PSD) parameters can extract features hidden in the spectral domain of EEG signals, which helps discriminate among simple MI tasks (Herman et al. 2008). Time-domain feature extraction focuses on the temporal aspects of the EEG signal and has the advantage of low computational complexity while retaining acceptable performance for distinguishing simple MI tasks (Vidaurre et al. 2009). To ensure the response speed of the online BCI system, this paper selects time-domain features; CSP, PSD, and a time-domain feature extraction method that selects channels by the quantitative dominance method are used as control groups to verify the feasibility of the proposed weighted time-domain feature extraction method. The SVM and Linear Discriminant Analysis (LDA) models are frequently used in motor imagery classification tasks (Jin et al. 2019; Varsehi and Firoozabadi 2021). Some studies have shown that SVM performs better in the classification of left- and right-hand motor imagery (Antony et al. 2022; Asadur Rahman et al. 2020; dos Santos, San-Martin, and Fraga 2023), so SVM is chosen as the classifier in this paper.

Cross-subject motor imagery
Difficulties faced in online signal detection of cross-subject motor imagery include individual differences, low real-time detection performance, and external noise interference. These factors make it challenging to generalise and deploy online BCI systems. In addition, the diversity and complexity of EEG signals increase the difficulty of cross-subject online detection. Individuals' cognitive and affective states, differences in scalp electrode placement, and equipment and environmental variations may lead to signal inconsistencies, complicating the development of robust and generalisable detection algorithms. Saha et al. (2019) used wavelet-based Maximum Entropy on the Mean (wMEM) to select task-specific EEG channels for predicting right-hand and right-foot sensorimotor tasks. They conducted cross-subject training on pairs of subjects with high similarity, i.e. one for modelling and one for validation, with an average accuracy of 71.2%. Lun et al. (2023) used a combined virtual electrode-based ESA and CNN method for feature extraction and classification of MI-EEG signals, with a maximum accuracy of 82.11%. Chowdhury, Muhammad, and Adeel (2023) proposed EEGNet Fusion V2, which achieved 89.6% and 87.8% accuracy for the actual and imagined motor activity of the eegmmidb dataset, and 74.3% and 84.1% for the BCI IV-2a and IV-2b datasets, respectively. However, the model has a higher computational cost, taking around 3.5 times more computational time per sample than EEGNet Fusion. He and Wu (2020) proposed a Euclidean-space data alignment approach that aligns EEG trials from different subjects in Euclidean space to make them more similar and hence improve learning performance for a new subject; the average accuracy on the 7 test subjects was 79%. Xu, Huang, and Lan (2021) proposed a supervised Selective Cross-Subject Transfer Learning (sSCSTL) approach that simultaneously uses labelled samples from target and source subjects based on the Riemannian tangent space. The average accuracy on the BCI competition dataset was as high as 96.75%, and the classification time was 4.29 s, slightly long but acceptable. Jia, Song, and Xie (2023) proposed a novel Metric-based Spatial Filtering Transformer (MSFT) that uses additive angular margin loss to improve inter-class separability while enhancing intra-class compactness; the average accuracy was 62.34%, but the model performed well on cross-task classification. Samanta, Chatterjee, and Bose (2020) proposed a multiplex weighted visibility graph (MWVG) algorithm for the classification of MI-EEG signals. Its accuracy is as high as 99.92%; however, it takes as long as half a minute to classify the database offline, which may not be suitable for online BCI systems. In online processing, the response time of the device is as critical a factor as the classification accuracy. According to previous studies, high classification accuracy is usually accompanied by long classification time, which is unsuitable for online BCI applications. Therefore, the goal of this paper is to reduce response time by processing as fast as possible while maintaining good classification accuracy, helping online BCI systems move out of the laboratory.

Methodology
To find common features of multiple subjects' MI-EEG signals, a cross-tabulation analysis of the signals in the time, frequency, and spatial domains was performed on an internal laboratory dataset. The analysis found that frequency-domain features have large inter-subject variability, while time-domain features show more obvious commonalities. Based on these results, we constructed a weighted time-domain feature extraction method: feature vectors were built in the temporal domain from the amplitude differences between left- and right-hemisphere channels. We innovatively assigned weights to the channels to improve both classification accuracy and classification speed, amplifying commonality to reduce heterogeneity in cross-subject BCI modelling.

Database
Data acquisition for the laboratory's internal dataset was carried out using an actiCHamp amplifier from the German company Brain Products. EEG signals were recorded with BrainVision Recorder 2.1 software. A BP actiCAP standard 32-channel EEG cap was used, with electrodes located in the standard positions of the international 10/20 system. The dataset contained 8 subjects, 3 males and 5 females, aged between 23 and 25 years. All subjects were right-handed as assessed by the Edinburgh Handedness Inventory. The general flow of the experiment is shown in Figure 1, which consists successively of an experimental description, a learning test, a formal experiment, and a break. The formal experiment was divided into four phases, interspersed with three unrestricted breaks, to reduce user fatigue and increase data quality. After each rest phase, the subject pressed a key on the keyboard to start the next section of the formal experiment, with a 3 s countdown to return to the sitting position, i.e. with hands on both knees. The single-trial flow is shown in Figure 2, starting with a '+' on a white screen to direct the subject's attention for 1000 ms, followed by cue material with a left- or right-pointing arrow superimposed on the '+' to cue the subject to imagine a left- or right-hand raising movement. The imagery task, lasting 1500 ms, is the duration of the subject's motor imagery. A white screen appears at the end, giving the subject time to recuperate and eliminate visual residuals, lasting a random 2000 ms-4000 ms. When the white screen ends, the subject moves on to the next trial. The duration of a single trial was around 4.5 s-6.5 s, and each subject performed 30 randomly ordered left and 30 right motor imagery trials throughout the experiment, obtaining a total of 60 trials, for a total experiment duration of around 10-15 min. The database was collected with the approval of the Ethics Committee of Zhejiang University.
Final EEG data were obtained for the 8 subjects for a total of 480 trials, 240 per side. The quality of the experimental data was good, with no overly obvious artifacts. The acquired data were pre-processed as follows: (1) resample the original 500 Hz data to 250 Hz; (2) re-reference to the average of the bilateral mastoids (TP9, TP10); (3) filter in the range of 0.1-40 Hz to remove most artifacts, including baseline drift, electrooculography (EOG), electrocardiography (ECG), and electromyography (EMG); (4) segment the data according to the marker information, intercepting from −1000 ms to 1500 ms to obtain a 3D matrix (channels × time × trials); (5) remove large artifacts caused by body movement, frowning, clenching of teeth, etc.; (6) separate ocular components with ICA and reject them; (7) apply baseline correction with a reference baseline of −1000 ms to 0 ms. After removing the unnecessary peripheral channels (FP1, FP2, F7, F8, FT9, FT10, T7, T8, P7, P8), the dataset contained 467 samples, of which 236 were left-hand and 231 right-hand imagery, each containing 19 channels. The final data matrix was 19 channels × 625 sampling points × 467 trials. The total sampling duration was 2.5 s, including 1 s for the reference baseline and 1.5 s for the motor imagery.
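Steps (1), (3), (4), and (7) of the pipeline above can be sketched with standard signal-processing tools. The following is an illustrative sketch on synthetic data (the marker positions and array shapes are assumptions; re-referencing, artifact rejection, and ICA are omitted for brevity):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

fs_orig, fs_new = 500, 250
n_channels = 19
rng = np.random.default_rng(1)
raw = rng.standard_normal((n_channels, fs_orig * 60))  # 60 s of fake continuous EEG

# (1) Resample 500 Hz -> 250 Hz.
raw = resample_poly(raw, fs_new, fs_orig, axis=1)

# (3) Band-pass 0.1-40 Hz (second-order sections for numerical stability).
sos = butter(4, [0.1 / (fs_new / 2), 40 / (fs_new / 2)], btype="band", output="sos")
raw = sosfiltfilt(sos, raw, axis=1)

# (4) Epoch -1000 ms .. 1500 ms around event markers (fake marker positions here).
markers = np.array([500, 2000, 4000, 6000])   # sample indices of cue onsets
pre, post = fs_new, int(1.5 * fs_new)          # 250 + 375 = 625 samples per epoch
epochs = np.stack([raw[:, m - pre:m + post] for m in markers])

# (7) Baseline correction: subtract the -1000..0 ms mean per channel.
epochs -= epochs[:, :, :pre].mean(axis=2, keepdims=True)
```

With four markers, `epochs` has shape (4, 19, 625), matching the paper's 19 channels × 625 sampling points per trial.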

Data analysis
Analysis in the time-domain dimension provides an understanding of the change in energy (amplitude) over time during motor imagery. To understand the whole-brain energy distribution during motor imagery execution, the EEG data from each channel of each subject were superimposed and averaged, combining temporal and spatial information. A Brain Electrical Activity Map (BEAM) was drawn at 100 ms intervals over the 1.5 s of motor imagery execution; Figure 3 shows the BEAM of one of the 8 subjects as an example. All 8 subjects showed a common feature: contralateral specificity between 500 and 1400 ms. When left-hand imagery was performed, negative waves of greater amplitude appeared in the right frontal area (F4), the right frontal-central region (FC3, FC6), and the central region (CP1, CP2, CP5, CP6), i.e. elevated energy in the right hemisphere. During right-hand imagery, the left frontal area (F3), left frontal-central areas (FC3, FC5), and central areas (CP1, CP2, CP5, CP6) showed negative waves of greater amplitude, i.e. elevated energy in the left hemisphere. This contralateral-specific phenomenon starts to appear around 500 ms, peaks in energy around 800 ms, and plateaus around 1400 ms.
To characterise the amplitude between 500 ms and 1400 ms, the left- and right-hand trials (236 left, 231 right of the 467 total) were superimposed and averaged over each channel using 500 ms-1400 ms as the time window, and the mean values were visualised to obtain Figure 4. In Figure 4, the odd-numbered channels belong to the left hemisphere and the even-numbered channels to the right hemisphere. Positive and negative values indicate direction; the further from 0, the greater the amplitude. As can be seen, when performing left-hand imagery (blue line) the right-hemisphere channels are further from 0 than the left, i.e. the amplitude is greater. In terms of the overall averaged data in Figure 4, there is a left-right symmetry in the average amplitude: the right-channel amplitude is higher than the left during left-hand motor imagery, and the left-channel amplitude is higher than the right during right-hand motor imagery. To further determine the importance of different channels in motor imagery classification, the mean amplitude of each channel in the 500 ms-1400 ms window was calculated separately for each of the 8 subjects. SPSS was then used to perform independent-samples t-tests on the mean amplitudes of left- versus right-hand motor imagery, and p-values were recorded to determine the significance of the differences; the results are shown in Table 1. To avoid the influence of noise on the p-values, we were careful in the experimental operation and filtering process: the data were analysed not only with SPSS but also with a t-test implemented in Python, and the results were cross-checked to mitigate the interference of random noise. Table 1 presents the t-test values for the mean amplitude of the two motor imagery tasks in each channel, with the significance level set at p < 0.05. Channels with significant differences are marked with asterisks in the table (*p < 0.05, **p < 0.01, ***p < 0.001). An asterisk indicates that the data from that channel had a greater effect on the classification of the left/right-hand motor imagery signal for that subject. Asterisks were found on 12 channels: F3, FC5, C3, CP5, O2, P4, CP2, CP6, C4, FC6, FC2, and F4. The number of asterisks represents the number of points a channel receives, i.e. the importance of that channel in the subsequent weighting calculations. Of interest is that more than four of the subjects differed significantly on 8 channels: F3, FC5, CP5, P4, CP6, C4, FC6, and F4. This suggests that these 8 channels may perform better in left- and right-hand motor imagery classification and that, since multiple subjects share this feature, it may be a common feature.
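The per-channel significance test and the star-to-points conversion described above can be sketched as follows. The amplitude samples here are synthetic placeholders; on real data, each array would hold one channel's mean 500-1400 ms amplitude per trial:

```python
import numpy as np
from scipy.stats import ttest_ind

# Synthetic mean amplitudes for one channel: 236 left-hand, 231 right-hand trials.
rng = np.random.default_rng(2)
left_amps = rng.normal(0.5, 1.0, size=236)
right_amps = rng.normal(-0.5, 1.0, size=231)

# Independent-samples t-test comparing the two imagery conditions.
t_stat, p_value = ttest_ind(left_amps, right_amps)

def score(p):
    """Map a p-value to the asterisk/point score used for channel weighting."""
    if p < 0.001:
        return 3
    if p < 0.01:
        return 2
    if p < 0.05:
        return 1
    return 0

channel_points = score(p_value)
```

Summing `channel_points` over the 8 subjects gives the per-channel point totals used later as weights (e.g. the 23 points reported for FC6).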
The time window for motor imagery was locked between 500 ms and 1400 ms by the time-domain analysis, and this window was retained for the frequency-domain analysis because of the low temporal resolution of frequency-domain methods. Seven of the 8 subjects showed a clear energy spike in the 8-12 Hz band, i.e. alpha-band activation, and the remaining subject showed an energy spike in the 15-18 Hz band, i.e. activation of the low beta band.
To further determine the importance of different channels in motor imagery classification, the alpha-band energy of each channel in the 500 ms-1400 ms window was recorded separately for each of the 8 subjects, and independent-samples t-tests were performed. The t-tests showed that on every channel, at most 4 subjects exhibited significant differences. There is thus large inter-subject variability in the frequency-domain dimension, so the more obvious common features in the time-domain dimension were selected for constructing the feature extraction method.

Time domain feature extraction
Based on the contralateral specificity found to be common across subjects in the time-domain analysis, amplitude differences between the left and right brain regions were extracted as classification features. The period in which the contralateral specificity occurs was chosen as the time window, i.e. 500 ms-1400 ms. Channels with significant amplitude differences between left- and right-hand motor imagery were screened in Section 3.2 using independent-samples t-tests, and the mean amplitude was extracted for each of these channels. The average amplitude is computed as in formula (1):

Ā = (1/N) Σ_{n=1}^{N} x(n)    (1)

where x(n) is the amplitude at sampling point n and N = F_s · t is the total number of sampling points. F_s is the sampling frequency, 250 Hz in this study, and t is the window length, i.e. the 900 ms window from 500 ms to 1400 ms. The total number of sampling points is therefore N = 250 × 0.9 = 225.
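Formula (1) amounts to a windowed mean; a minimal sketch on a synthetic single-channel trial (array layout assumed: 1.5 s of imagery data at 250 Hz, with the window given relative to imagery onset):

```python
import numpy as np

fs = 250
t_start, t_end = 0.5, 1.4                      # 500-1400 ms window, in seconds
rng = np.random.default_rng(3)
channel = rng.standard_normal(int(1.5 * fs))   # one channel, 1.5 s of imagery data

n0, n1 = int(t_start * fs), int(t_end * fs)    # sample indices 125 .. 350
window = channel[n0:n1]                        # N = 225 samples, as in the text
mean_amplitude = window.mean()                 # A-bar = (1/N) * sum_n x(n)
```

The window length check (`window.size == 225`) reproduces the N = F_s · t = 250 × 0.9 calculation above.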
Channel selection first requires that the amplitude difference between the left and right brain regions be significantly different across the motor imagery conditions; this step was completed in Section 3.2, with the results shown in Table 1. Second, the channels must remain significant under the cross-subject condition. However, channels do not perform identically across subjects and need to be screened again for those that are influential in cross-subject performance. The traditional channel screening method is the quantitative dominance method, but it is poor at retaining channel information. To screen channels more accurately and comprehensively, this paper innovatively proposes a weighted channel screening method. Two time-domain feature extraction methods are constructed from the two screening methods: the (plain) time-domain feature extraction method and the weighted time-domain feature extraction method. The time-domain feature extraction method based on the quantitative dominance method is specified as follows.
According to the traditional quantitative dominance screening method, a channel is selected when the number of subjects showing significant differences is greater than or equal to half; the screening results were F3, FC5, CP5, P4, C4, CP6, FC6, and F4. The average amplitude of each channel was calculated separately, and the difference between the average amplitudes of the left and right brain regions was taken as the single-trial feature F_A1, as in formula (2):

F_A1 = (1/N_L) Σ_{i=1}^{N_L} Ā_Li − (1/N_R) Σ_{i=1}^{N_R} Ā_Ri    (2)

where Ā_Li and Ā_Ri are the average amplitudes (formula (1)) of the selected left- and right-hemisphere channels, and N_L and N_R are the numbers of selected channels in each hemisphere. However, this screening method directly discards channels of lower importance; for example, channel C3 was discarded because it appeared only 3 times. This may lose a certain amount of information, and the model may lose its advantage when adapting to new subjects. Moreover, the dominance of channels with high frequencies of significant differences, such as FC6, which appears 8 times, is not emphasised.
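The dominance-based feature can be sketched as a plain difference of hemisphere means over the screened channels. The per-channel mean amplitudes below are hypothetical placeholders (real values come from formula (1)); only the channel split into hemispheres follows the screening results above:

```python
import numpy as np

# Screened channels split by hemisphere (odd = left, even = right).
left_chs = ["F3", "FC5", "CP5"]
right_chs = ["P4", "C4", "CP6", "FC6", "F4"]

# Hypothetical per-channel mean amplitudes (500-1400 ms window) for one trial.
mean_amp = {"F3": -1.2, "FC5": -0.9, "CP5": -1.1,
            "P4": -0.3, "C4": -0.2, "CP6": -0.4, "FC6": -0.1, "F4": -0.2}

# F_A1 = mean over left-hemisphere channels minus mean over right-hemisphere channels.
f_a1 = (np.mean([mean_amp[c] for c in left_chs])
        - np.mean([mean_amp[c] for c in right_chs]))
```

Note that every screened channel enters with equal weight here, which is exactly the limitation the weighted method addresses.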

Weighted time domain feature extraction
In cross-subject BCI modelling, there is a need to amplify commonalities and weaken dissimilarities, hence the introduction of a weighted screening method. The influence of channels that show significant differences frequently is amplified, while channels that show them rarely retain their characteristics without bringing in too much redundant information. Points are assigned to each channel according to the p-values in Table 1: 3 points when p < 0.001, 2 points when p < 0.01, 1 point when p < 0.05, and 0 points when p ≥ 0.05. For example, the highest-weighted channel, FC6, received 23 points, and the lowest-weighted, O2, received 2 points. Calculating weights from assigned scores incorporates both the significance of channel differences and subject commonality: a higher score indicates a more significant channel difference and more pronounced subject commonality, and therefore a higher weight and a higher channel contribution. After calculating the channel weights, the channels in the left and right brain regions are weighted and averaged separately to obtain the single-trial feature F_A2; the weighted time-domain (WTD) feature extraction formula (3) is:

F_A2 = A_L − A_R = Σ_{i=1}^{N_L} W_Li · Ā_Li − Σ_{i=1}^{N_R} W_Ri · Ā_Ri    (3)
Here A_L is the weighted average amplitude of the significantly different channels in the left brain region, and A_R is that of the right brain region. W_Li and W_Ri are the channel weights, representing each channel's contribution to the weighted average. Ā_Li and Ā_Ri are the average amplitudes of the corresponding left- and right-hemisphere channels, calculated as in formula (1). N_L and N_R are the numbers of channels in the left and right hemispheres, respectively. The computation proceeds as follows: (1) for each left-hemisphere channel i, calculate the average amplitude Ā_Li and multiply it by the corresponding weight W_Li; (2) sum the weighted averages over all left-hemisphere channels; (3) for each right-hemisphere channel i, calculate the average amplitude Ā_Ri and multiply it by the corresponding weight W_Ri; (4) sum the weighted averages over all right-hemisphere channels; (5) compute the difference between the weighted average amplitudes of the left and right hemispheres. This calculation quantifies the amplitude differences between the hemispheres, with the flexibility to focus on the contribution of specific channels through weight adjustment. Because the weights take into account both the significance of the channel differences and the cross-subject commonalities, this may be a more effective method for cross-subject feature extraction.
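The five steps above can be sketched directly. The channel point totals and mean amplitudes below are illustrative assumptions (only FC6's 23 points comes from the text; the rest stand in for Table 1), and the point totals are normalised into weights per hemisphere:

```python
import numpy as np

# (mean amplitude, point total) per channel; values are hypothetical except FC6's 23.
left = {"F3": (-1.2, 10), "FC5": (-0.9, 12), "CP5": (-1.1, 8)}
right = {"F4": (-0.2, 11), "FC6": (-0.1, 23), "C4": (-0.2, 9)}

def weighted_mean(region):
    """Weighted average amplitude of one hemisphere's channels."""
    amps = np.array([a for a, _ in region.values()])
    pts = np.array([p for _, p in region.values()], dtype=float)
    w = pts / pts.sum()          # normalise point totals into weights W_i
    return np.sum(w * amps)      # sum_i W_i * A-bar_i

a_l = weighted_mean(left)        # steps (1)-(2): weighted left-hemisphere amplitude
a_r = weighted_mean(right)       # steps (3)-(4): weighted right-hemisphere amplitude
f_a2 = a_l - a_r                 # step (5): F_A2 = A_L - A_R
```

Unlike the dominance-based F_A1, a low-scoring channel here still contributes, just proportionally less.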

PSD feature extraction
Based on the results of the frequency-domain analysis, PSD features are extracted in the 10-12 Hz band for the 500 ms-1400 ms time window. For a discrete signal x(n) of length N, n ∈ [0, N − 1], the discrete Fourier transform yields the frequency-domain signal X(k) from the time-domain signal x(n), as given in formula (4):

X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N},  k = 0, 1, …, N − 1    (4)

Based on the discrete Fourier transform, the PSD is calculated as in formula (5):

P(k) = |X(k)|² / N    (5)
Channels F3, FC5, Oz, P4, CP6, CP2, C4, and F4 were selected based on the results of the independent-samples t-test, and the PSD was calculated separately for each channel to obtain the single-trial eigenvector F_P, as in formula (6):

F_P = [P_F3, P_FC5, P_Oz, P_P4, P_CP6, P_CP2, P_C4, P_F4]    (6)
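Formulas (4)-(5) and the band selection can be sketched for one channel with an FFT-based periodogram (synthetic data; the 225-sample window and 10-12 Hz band follow the text, everything else is illustrative):

```python
import numpy as np

fs = 250
rng = np.random.default_rng(4)
x = rng.standard_normal(225)            # one channel, 900 ms window at 250 Hz

X = np.fft.rfft(x)                      # formula (4): DFT of the real signal
psd = (np.abs(X) ** 2) / x.size         # formula (5): periodogram estimate
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

band = (freqs >= 10) & (freqs <= 12)    # 10-12 Hz band from the analysis
feature = psd[band].mean()              # one entry of the eigenvector F_P
```

Repeating this for the eight selected channels and concatenating the results yields the F_P vector of formula (6).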

CSP feature extraction
The CSP algorithm is a spatial-domain filtering algorithm for two-class tasks. Its basic principle is to use matrix diagonalisation to find an optimal set of spatial filters for projection, so that the difference in variance between the two classes of signals is maximised, yielding highly discriminative feature vectors. The classification of left- and right-hand motor imagery is a classical two-class problem, and the CSP algorithm is widely used in feature extraction for motor imagery (Geng et al. 2022; Ramoser, Muller-Gerking, and Pfurtscheller 2000; Selim et al. 2018). Therefore, this study uses the CSP algorithm as a control group for the time-domain feature extraction method.
The covariance matrix is computed for each original 19-channel EEG segment; orthogonal whitening and diagonalisation then yield a 19 × 19 spatial filter. Finally, the signal matrix is passed through the spatial filter to obtain the projection matrix, and the first and last 9 rows are selected to form the feature vector.
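The covariance/whitening/diagonalisation steps can be sketched as a textbook CSP implementation (synthetic trials; log-variance features are a common convention and an assumption here, as the paper does not specify the feature transform):

```python
import numpy as np

def csp_filters(trials_a, trials_b):
    """trials_*: (n_trials, n_channels, n_samples). Returns (n_ch, n_ch) filter matrix."""
    def mean_cov(trials):
        # Trace-normalised spatial covariance, averaged over trials.
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whitening transform from the composite covariance.
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = np.diag(evals ** -0.5) @ evecs.T
    # Diagonalise the whitened class-A covariance; sort filters by eigenvalue.
    d, B = np.linalg.eigh(P @ Ca @ P.T)
    order = np.argsort(d)[::-1]
    return B[:, order].T @ P

rng = np.random.default_rng(5)
trials_a = rng.standard_normal((20, 19, 625))   # left-hand trials (synthetic)
trials_b = rng.standard_normal((20, 19, 625))   # right-hand trials (synthetic)
W = csp_filters(trials_a, trials_b)

# Project one trial and keep the first and last 9 rows, as in the text.
proj = W @ trials_a[0]
sel = np.r_[0:9, 10:19]
features = np.log(proj[sel].var(axis=1))        # 18-dimensional feature vector
```

The first and last rows carry the largest variance ratio between the two classes, which is why the extreme filters are kept.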

Classification
SVM is a classification method built on statistical learning theory. It is based on the principle of structural risk minimisation and is well suited to small-sample, high-dimensional pattern recognition problems (Somadder and Saha 2021).
The principle of SVM is to map linearly inseparable data from a low-dimensional space to a high-dimensional space via a kernel function, and to construct an optimal separating hyperplane in the high-dimensional space so that the data become linearly separable (Subasi and Ismail Gursoy 2010). SVM is one of the most widely used classifiers in previous studies. In summary, the characteristics of SVM suit this study, so SVM is chosen as the classification method for motor imagery, with a radial basis function (RBF) kernel applied in our SVM modelling (Bhattacharyya et al. 2011; Bousseta et al. 2018). To verify the superiority of the weighted time-domain feature extraction method, we use the CSP, PSD, and (unweighted) time-domain feature extraction methods as control groups and compare their classification accuracy for left- and right-hand motor imagery against the weighted time-domain method. The samples were randomly divided into five groups, and the average classification accuracy over five runs was obtained by feeding the SVM model according to the five-fold cross-validation method, as shown in Table 2. The results show that the weighted time-domain group (90.94%) has better classification characteristics than the CSP group (52.5%), the PSD group (67.80%), and the time-domain group (88.83%). This is related to its property of eliminating specificity and retaining commonality during superimposed averaging.
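The RBF-SVM with five-fold cross-validation can be sketched as follows. The features here are synthetic one-dimensional stand-ins for the F_A2 values (the class means, spread, and hyperparameters are assumptions for illustration):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic F_A2-style features: 236 left-hand and 231 right-hand trials.
rng = np.random.default_rng(6)
left = rng.normal(-1.0, 0.5, size=(236, 1))
right = rng.normal(1.0, 0.5, size=(231, 1))
X = np.vstack([left, right])
y = np.array([0] * 236 + [1] * 231)

# RBF-kernel SVM, evaluated with stratified five-fold cross-validation.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
mean_acc = scores.mean()
```

Reporting `mean_acc` over the five folds mirrors the averaging used to produce the accuracies in Table 2.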

Experiments
In this section, offline classification experiments, online wheelchair laboratory experiments, and online field experiments are conducted. The offline experiment verifies the feasibility of the cross-subject model proposed in this paper, and the online experiments verify the practicality of the brain-controlled wheelchair online control system built in this paper. Computation time is critical for real-time BCI applications (Jia, Song, and Xie 2023), so we use the classification accuracy of the left- and right-hand motor imagery task and the response time as performance evaluation criteria. Classification accuracy is defined as the number of correct classifications as a percentage of the total number of classifications; response time is defined as the interval from the start of motor imagery to the response of the hardware device.
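The two evaluation criteria reduce to simple computations over per-trial records; the trial tuples below are hypothetical examples, not data from the paper:

```python
def evaluate(trials):
    """trials: list of (predicted, actual, response_time_s) tuples."""
    correct = sum(p == a for p, a, _ in trials)
    accuracy = correct / len(trials)                      # correct / total
    mean_rt = sum(rt for _, _, rt in trials) / len(trials)  # mean response time
    return accuracy, mean_rt

acc, rt = evaluate([("L", "L", 1.2), ("R", "L", 1.5), ("R", "R", 1.4)])
# acc ≈ 0.667, rt ≈ 1.37 s
```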

Experimental setup
Eight subjects outside the internal laboratory data set were selected for the offline experiment. The subjects were all between 23 and 25 years old, with the cognitive and learning ability to understand the experimental content; their visual acuity or corrected visual acuity was normal, and all were right-handed. Before the experiment the subjects were informed that EEG recording is harmless to humans, to reduce psychological burden, and they were briefly trained in motor imagery. The experimental paradigm and flow are the same as in Section 3.1.
To further verify the generalisation ability of the temporal features, the dataset is divided into 8 groups, each containing the data of one subject. Seven groups are used as training and test sets to train the SVM model (randomly split 7:3), and the remaining group serves as the validation set to assess the model's generalisation ability. Each group takes a turn as the validation set, for a total of 8 rounds of validation, and each round uses 5-fold cross-validation to compute the average accuracy.
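This leave-one-subject-out scheme can be sketched as follows; `data` and `loso_accuracies` are hypothetical names, and the synthetic split mirrors the 7:3 ratio described above:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def loso_accuracies(data):
    """data: dict mapping each subject to (features, labels). Hypothetical sketch."""
    accs = {}
    for held_out in data:
        # Pool the other seven subjects as training material
        X_tr = np.vstack([data[s][0] for s in data if s != held_out])
        y_tr = np.concatenate([data[s][1] for s in data if s != held_out])
        # 7:3 random train/test split within the pooled subjects
        X_fit, _, y_fit, _ = train_test_split(
            X_tr, y_tr, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf").fit(X_fit, y_fit)
        # Held-out subject serves as the validation set
        X_val, y_val = data[held_out]
        accs[held_out] = clf.score(X_val, y_val)
    return accs
```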

Results
The average accuracy over the eight validation sets was 91.39%, and offline processing took less than 1 s. When subjects B and F served as the validation set, the accuracy reached 96.61% and 98.31% respectively; we speculate that the features of subjects B and F are more pronounced and therefore match the model well. Compared with previous cross-subject BCI performance, the temporal feature extraction method has a clear advantage in model generalisation ability. The accuracy data are shown in Table 3.

Experimental setup
The brain-controlled wheelchair system comprises three modules: a signal acquisition module, a signal processing module, and a drive module, as shown in Figure 5. The hardware includes an electrode cap, an electroencephalograph, a laptop, and a wheelchair. The overall design is as follows: the EEG signal is collected and amplified by the electrode cap and actiCHamp amplifier and transmitted to the laptop, where the BP Recorder software sends the recorded EEG data to the trained classification model over TCP/IP. The classification model parses the EEG signal into a left- or right-hand motor imagery signal and maps it to a left or right steering instruction, which is transmitted to the wheelchair drive system via Bluetooth to complete the steering. The EEG sampling rate is set to 250 Hz, and the signal is transmitted in data packets at a sampling interval of 4 ms. The design of the wheelchair drive system is shown in Figure 6; it is based on an ESP32 chip. The control commands sent from the computer are received wirelessly via Bluetooth, and the control chip adjusts motor speed and direction, steering through the speed difference between the left and right wheels.
In the steering experiment, three subjects were seated in the wheelchair and performed motor imagery. After a short system delay, the wheelchair turned in place for 2000 ms and then stopped. The system collects EEG signals in real time, and E-prime presents the guiding pictures while sending marker information (without left/right labels) as the starting point for data recognition. When the system receives a marker, it starts to recognise and classify the data and then sends a signal to control the wheelchair steering. The experimental scenario is shown in Figure 7. To reduce the chance of spuriously high accuracy, each of the three subjects performed 20 consecutive motor imagery trials to control the wheelchair steering.
During the experiment, a specialised and experienced experimenter video-recorded the sessions. In the subsequent analysis, the recorded EEG data were aligned with the timestamps in the video, and the response time in ms was measured from the onset of imagery (the onset of the contralateral-specific phenomenon) to the moment the wheelchair moved; this is the response time reported in this study. The duration of motor imagery was set to 1.5 s. The wheelchair usually completed the command before the start of the next trial, and the subjects reported that they did not notice any delay between the wheelchair's turn and the motor imagery. We therefore consider a response time of around 1.5 s desirable for smooth operation: a shorter response time and higher system fluency are important for the safety and user experience of the online system.

Results
During the experiment, the steering accuracy and response speed were better in the first 10 trials than in the last 10, with the error rate increasing and the response speed decreasing as the 20th trial approached. We hypothesise that the continuous steering task increased the subjects' brain fatigue, causing this phenomenon. The results are shown in Table 4. The average accuracy was 81.67% (averaged over the three subjects' 60 motor imagery classifications), and the average response time was 1.36 s (averaged over the same 60 trials).

Experimental setup
In the laboratory experiment, a marker signal was sent to the system through E-prime as the starting point of data detection. In a real application scenario no such marker is available, so detecting the imagery state is one of the key steps towards deploying the online system. To make the brain-controlled wheelchair online control system more practical, online monitoring and recognition of the motor imagery state are added to the signal processing module. The model for motor imagery state recognition is trained on resting-state and motor-imagery-state EEG data collected from the subjects.
The system flow is as follows: the EEG data acquired online are stored in a DataPool and checked every 100 ms, with the 900 ms of data preceding the current time point forming the detection DataBlock; the data flow is shown in Figure 8. The current DataBlock is processed and analysed in real time, following the flow in Figure 9. The preprocessed data block is first judged by model A to determine whether motor imagery is being performed: if it is a resting EEG signal, the wheelchair keeps going straight; if motor imagery is detected, model B judges whether to turn left or right. This detection is performed every 100 ms, i.e. a 900 ms data block is input to the models for every 100 ms of wheelchair travel.
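The asynchronous detection loop above can be sketched as a sliding window over the sample stream. `preprocess`, `model_a`, and `model_b` are hypothetical stand-ins for the paper's preprocessing step and trained models:

```python
from collections import deque
import numpy as np

FS = 250                    # EEG sampling rate (Hz)
BLOCK = int(0.9 * FS)       # 900 ms detection window (225 samples)
STEP = int(0.1 * FS)        # 100 ms hop (25 samples)

def detection_loop(sample_stream, preprocess, model_a, model_b):
    """Every 100 ms, classify the latest 900 ms block: rest -> straight,
    otherwise model B decides left or right. Hypothetical sketch."""
    pool = deque(maxlen=BLOCK)   # DataPool holding the most recent samples
    commands = []
    for i, sample in enumerate(sample_stream):
        pool.append(sample)
        if len(pool) == BLOCK and i % STEP == 0:
            block = preprocess(np.array(pool))
            if model_a(block) == "rest":
                commands.append("straight")
            else:
                commands.append(model_b(block))  # "left" or "right"
    return commands
```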
To verify the effectiveness of the left/right-hand motor imagery classification model and the motor imagery state recognition model in an asynchronous brain-controlled wheelchair system, three subjects A, B, and C performed asynchronous online wheelchair control in a real road environment. Here the subject must complete the steering control task asynchronously, imagining the movement without guidance information, i.e. the computer screen provides no cues. In the online road experiment, subjects were required to drive the brain-controlled wheelchair through a simple path task outside the laboratory: go straight for a distance and then make a left turn. Figure 10 shows the experimental scenario, with the green arrows on the road surface marking the task path.

Results
Figure 11 shows the steering task paths (three in total) for the three subjects, with the green arrows marking the target task paths and the black dashed lines showing the paths completed by the subjects using the brain-controlled wheelchair. The average task time for the three subjects was 36 s; for comparison, the average time needed to complete the path with a manually controlled electric wheelchair was 13 s.

Experiment A: offline experiment
The weight-based time-domain feature extraction method in this paper achieves an average offline classification accuracy of 91.39%, with classification taking less than 1 s. Samanta, Chatterjee, and Bose (2020) proposed a Multiplex Weighted Visibility Graph (MWVG) algorithm and observed average classification accuracies of 99.92% and 99.96% with a Random Forest classifier; although the accuracy is higher, classification takes longer, up to half a minute. In 2023, Jia, Song, and Xie (2023) proposed the Metric-based Spatial Filtering Transformer (MSFT), a model that uses an additive angular margin loss to improve inter-class separability while enhancing intra-class compactness; its cross-subject average accuracy was 62.34%, but it performed well on cross-task classification. Xu, Huang, and Lan (2021) proposed a supervised Selective Cross-Subject Transfer Learning (sSCSTL) approach that simultaneously uses labelled samples from target and source subjects in the Riemannian tangent space; the average accuracy on the BCI competition dataset was as high as 96.75%, with a classification time of 4.29 s, slightly longer but acceptable. Compared with these strong cross-subject BCI studies of the last three years (Jia, Song, and Xie 2023; Samanta, Chatterjee, and Bose 2020; Xu, Huang, and Lan 2021), the advantage of the method proposed in this paper is that good offline cross-subject classification accuracy is maintained while processing speed is greatly improved. A likely reason is that the method retains the information of relevant channels by weighting while deleting irrelevant channels, which reduces the amount of data and speeds up processing.

Experiment B: online laboratory experiment
Benaroch et al. (2022) studied the effect of the Most Discriminant Frequency Band (MDFB) selected by an optimisation algorithm during calibration of an MI-BCI system; their constrained algorithm obtained an online average accuracy of 64.7%. Gupta et al. (2021) proposed a Time-varying Symbolic Distance (TSD)-based algorithm to control BLDC motors with motor imagery, with an average classifier accuracy of 77.7% across three subjects. In contrast, the proposed method achieves higher online accuracy (81.67%) and good online processing time (1.36 s) while handling cross-subject data. Thanks to the fast processing and high accuracy, the system can give the subjects correct feedback in time, which enhances their confidence and concentration and leads to good experimental performance.

Experiment C: online field experiment
The path comparison showed that the overlap between the real path and the planned path still needs improvement. The experimental results show that the brain-controlled wheelchair designed in this study is feasible, but the stability of the system still needs to be improved. Online field experiment performance may be affected by several factors: 1. The online processing procedure was simplified to ensure the response speed of motor imagery, and a small number of electrooculographic artifacts may have interfered with accuracy. 2. Subjects A, B, and C were interviewed after the experiment and indicated that cyclic noise, such as the rotation sound of the wheelchair, made it difficult for them to concentrate.

Conclusion
In this paper, a cross-subject motor imagery classification model based on a weighted time-domain feature extraction method is proposed. Based on the contralateral specificity presented by BEAM, features are extracted from the amplitude difference between the left and right brain regions, and a weighted channel screening method is introduced: important channels are amplified through weight assignment, while the remaining channels retain their features with reduced redundant information. The weighted time-domain feature extraction method expands the common features among subjects while preserving generalisation ability. It compresses the number of channels, reduces the amount of data, and improves the response speed of motor imagery. We applied these theoretical results to an asynchronous wheelchair online control system, which performed well, demonstrating the feasibility of the proposed method.
The findings of this research have implications for the broader context of BCI technology. First, they advance BCI technology by providing new possibilities for the real-time recognition of motor imagery as a pattern of brain activity. Second, they inform the development of smarter and more efficient neural control systems, supporting a wider range of application scenarios such as neurorehabilitation and brain-controlled driving. This research also helps address challenges faced by online BCI systems, such as achieving good classification accuracy and real-time processing, thereby improving system stability and reliability. Overall, these results provide useful insights for applying brain-computer interface technology to real-world applications, contributing to future research and development.

Limitation
The online BCI system built in this paper has several limitations: 1. The online processing procedure was simplified to ensure the response speed of motor imagery, and a small number of electrooculographic artifacts may have interfered with accuracy. 2. Subjects A, B, and C were interviewed after the experiment and indicated that cyclic noise, such as the rotation sound of the wheelchair, made it difficult for them to concentrate. 3. Long periods of use can cause brain fatigue, which can affect system performance. 4. The participants were all healthy and mostly students, so the limited diversity of subjects constrains the generality of the study.

Future work
There are still some deficiencies in this research, which can be developed and improved in the following directions. From the perspective of EEG feature extraction and construction, the feature dimension is relatively simple, and over-reliance on high-precision, low-noise EEG signals cannot resist interference in the complex scenarios of practical applications. Given the demand for more control signals and diverse application scenarios, the existing feature extraction method is only applicable to binary classification and is easily affected by complex environments; in future research we will consider feature construction in the direction of multi-signal feature fusion to adapt to more complex scenarios. In terms of EEG equipment, although a large wet-electrode EEG instrument has high accuracy and is suitable in the research stage, it is bulky, inconvenient to wear, and unattractive, and thus unsuitable for a deployed brain-controlled wheelchair online system; portable EEG headsets can be tried in future studies. In the selection of participants, we will increase the diversity of participants' characteristics, such as occupation, age, and hobbies, and include participants with disabilities. In addition, future research will focus on real-world BCI online systems (e.g. ambient-noise-resistant BCI online systems).

Disclosure statement
No potential conflict of interest was reported by the author(s).

Figure 1 .
Figure 1. General flow chart of the experiment.

Figure 2 .
Figure 2. Timeline of a trial.

Figure 3 .
Figure 3. The brain topography of subject A's average amplitude at 100-1400 ms: (a) left-hand motor imagery; (b) right-hand motor imagery.

Figure 4 .
Figure 4. The average amplitude of all trials superimposed on each channel from 500 ms to 1400 ms.

Figure 5 .
Figure 5. Design scheme of brain-controlled wheelchair system.

Figure 6 .
Figure 6. Schematic diagram of the drive system.

Figure 10 .
Figure 10. Field experiment scenario (the computer screen faces away from the subject).

Table 1 .
T-test values of the average amplitude of left/right-hand motor imagery in each channel.

Table 2 .
Classification accuracy results of different feature extraction methods.

Table 3 .
Accuracy of offline experiment.