Background: The Brain-Computer Interface (BCI) is one of the fastest-growing technological trends and finds applications in the healthcare sector. In this work, 16 Electroencephalography (EEG) electrodes placed according to the 10-20 electrode system are used to acquire the EEG signals. A BCI for EEG-based imagined word prediction using a Convolutional Neural Network (CNN) is modeled and trained to recognize words imagined through the EEG brain signal, where the CNN transfer learning models AlexNet and GoogLeNet recognize the words evoked by visual stimuli, namely up, down, right, and left, extending to ten words in total. The performance metrics are improved by applying the Morlet continuous wavelet transform at the pre-processing stage and extracting seven features: mean, standard deviation, skewness, kurtosis, band power, root mean square, and Shannon entropy.
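As a rough illustration of this pre-processing and feature-extraction stage, the sketch below computes the seven listed features for a single EEG channel epoch and a Morlet CWT scalogram suitable as image-like CNN input. The sampling rate, frequency band for band power, and CWT scales are illustrative assumptions, not values taken from the study.

```python
import numpy as np
import pywt
from scipy import stats, signal

def extract_features(epoch, fs=128, band=(8.0, 30.0)):
    """Seven features for one EEG channel epoch (fs and band are assumed values)."""
    freqs, psd = signal.welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    bandpower = np.trapz(psd[mask], freqs[mask])            # power within the chosen band
    probs = np.abs(epoch) / np.sum(np.abs(epoch))            # normalized amplitudes for entropy
    shannon_entropy = -np.sum(probs * np.log2(probs + 1e-12))
    return {
        "mean": np.mean(epoch),
        "std": np.std(epoch),
        "skewness": stats.skew(epoch),
        "kurtosis": stats.kurtosis(epoch),
        "bandpower": bandpower,
        "rms": np.sqrt(np.mean(epoch ** 2)),
        "shannon_entropy": shannon_entropy,
    }

def morlet_scalogram(epoch, fs=128, scales=np.arange(1, 64)):
    """Morlet continuous wavelet transform scalogram (magnitude of CWT coefficients)."""
    coeffs, _freqs = pywt.cwt(epoch, scales, "morl", sampling_period=1.0 / fs)
    return np.abs(coeffs)
```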
Results: Based on the testing, the AlexNet transfer learning model performed better than the GoogLeNet transfer learning model, achieving an accuracy of 90.3% and a recall, precision, and F1 score of 91.4%, 90%, and 90.7%, respectively, with the seven extracted features. However, when the number of extracted features was reduced from seven to four, these metrics decreased to 83.8%, 84.4%, 82.9%, and 83.6%, respectively.
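The reported F1 score is consistent with the harmonic mean of the stated precision and recall:

```latex
F_1 = \frac{2\,P\,R}{P + R}
    = \frac{2 \times 0.900 \times 0.914}{0.900 + 0.914}
    \approx 0.907 \;(90.7\%)
```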
Conclusions: The AlexNet transfer learning model is selected as the better model compared with GoogLeNet, as it achieved an accuracy of 90.3% with the final training configuration of 80 epochs, a batch size of 64, the scalogram pre-processing method, an 80:20 training-to-validation split, and an initial learning rate of 0.0001.
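For concreteness, a minimal sketch of an AlexNet transfer-learning setup with the stated hyperparameters (80 epochs, batch size 64, learning rate 0.0001, 80:20 split) is given below. The framework, optimizer, and data-loading details are assumptions and are not taken from the study, which may have used a different toolchain.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, random_split
from torchvision import models

NUM_CLASSES = 10        # up to ten imagined words (from the abstract)
EPOCHS = 80             # final training option: 80 epochs
BATCH_SIZE = 64         # final training option: batch size 64
LEARNING_RATE = 1e-4    # initial learning rate 0.0001
TRAIN_SPLIT = 0.8       # 80:20 training/validation split

# Load a pretrained AlexNet and replace its last layer for the word classes.
model = models.alexnet(weights="DEFAULT")
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)  # optimizer choice is an assumption
criterion = nn.CrossEntropyLoss()

def make_loaders(dataset):
    """Split a scalogram image dataset 80:20 into training and validation loaders."""
    n_train = int(TRAIN_SPLIT * len(dataset))
    train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
    return (DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True),
            DataLoader(val_set, batch_size=BATCH_SIZE))

def train(model, train_loader, epochs=EPOCHS):
    model.train()
    for _ in range(epochs):
        for scalograms, labels in train_loader:   # scalogram images and imagined-word labels
            optimizer.zero_grad()
            loss = criterion(model(scalograms), labels)
            loss.backward()
            optimizer.step()
```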