3.1 Model design ideas
Following the structural design ideas of the ResNeXt deep learning model, and taking into account the characteristics of EEG signal data, we design a network structure, Time-ResNeXt, for EEG time-series classification.
The traditional approach to improving model accuracy is to deepen or widen the network, but as the number of hyperparameters (such as the number of channels and the size of the convolution kernels) increases, the difficulty and computational overhead of network design also grow greatly. The algorithm in this paper benefits from the repeated topology of the ResNeXt sub-modules, which allows it to reach very high accuracy with only a slight increase in computation while greatly reducing the number of hyperparameters.
We first review the classic VGG and Inception networks. The design idea of VGG is to modularize the network in order to increase its depth, but such a deep network suffers from degradation caused by gradient problems. The structure of the key VGG modules is shown in Fig. 2.
The design philosophy of the Inception network is the opposite: the width of the network is increased through a split-transform-merge strategy. However, the many hyperparameters of the Inception modules are tuned for specific tasks, so applying the network to other data sets requires many modifications, and its scalability is therefore limited. The structure of the key Inception modules is shown in Fig. 3.
The ResNeXt network builds on ResNet's cross-layer (shortcut) connection and combines design ideas from the VGG and Inception networks. The cross-layer connection mitigates the degradation that affects overly deep VGG-style networks. The cross-layer connection structure is shown in Fig. 4.
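The effect of the cross-layer connection can be sketched in a few lines. This is a minimal NumPy illustration of the residual form y = x + F(x), not the paper's implementation; the toy transform stands in for a convolutional branch:

```python
import numpy as np

def residual_block(x, transform):
    """Cross-layer (shortcut) connection: the block learns a residual
    F(x) and adds it back to the input, y = x + F(x).  If F outputs
    zero, the block reduces to the identity mapping, which is why deep
    stacks of such blocks avoid the degradation seen in plain deep
    VGG-style stacks."""
    return x + transform(x)

# Toy transform standing in for a conv + batch-norm + ReLU branch.
x = np.ones(4)
y = residual_block(x, lambda v: 0.5 * v)          # y = x + 0.5x = 1.5x

# A zero branch leaves the input unchanged (identity mapping).
y_id = residual_block(x, lambda v: np.zeros_like(v))
```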
The transformation set structure is shown in Fig. 5.
All convolution modules in the transformation set are identical. ResNeXt uses this transformation set to replace the heterogeneous transformation structure of the Inception network. Because every aggregated topology is the same, the network no longer needs extensive hyperparameter modification on different data sets, which gives it better robustness.
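The aggregation of identical branches can be sketched as follows. This is a hypothetical NumPy stand-in where each branch is a linear map in place of a convolutional path; the number of branches is the cardinality hyperparameter that replaces Inception's many per-branch settings:

```python
import numpy as np

def aggregated_transform(x, branches):
    """ResNeXt-style transformation set: every branch has the SAME
    topology (here, a linear map standing in for a small conv path),
    and the branch outputs are summed.  Only the number of branches
    (the 'cardinality') needs to be chosen."""
    return sum(b(x) for b in branches)

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 4)) for _ in range(32)]  # cardinality 32
branches = [lambda v, W=W: W @ v for W in weights]

x = rng.standard_normal(4)
y = aggregated_transform(x, branches)
```

Because all branches share one topology, adapting the block to a new data set only means changing the cardinality and channel widths rather than redesigning each path.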
3.2 Model design process
The original ResNeXt-50 has five stages and a large number of parameters, as shown in Fig. 6.
During training, we found that the results were difficult to converge and tended to be completely random, which suggested that the network structure was too complicated. Starting from the network's complexity, we tailored the network in search of a suitable structure. The test results are shown in Table 2.
3.3 Time-ResNeXt network structure
The structure of Time-ResNeXt neural network is shown in Table 3.
It has two phases in total. The detailed network structure of the first phase is shown in Fig. 7.
The depth of the second phase is 2, that is, two network sub-modules. Each sub-module contains a cross-layer connection, activation layers, convolutional layers, batch-normalization layers, and a transformation-set module. The main component is the transformation-set module, which adopts a network-in-network design: 32 convolutional building blocks of the form shown in Fig. 8 are connected in parallel to form the convolution transformation set, and this serves as the main feature-extraction module.
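The parallel-branch structure above is commonly realized as a grouped 1-D convolution over the time axis: the input channels are split across the branches, each branch convolves its own slice, and the outputs are joined. The following is a minimal NumPy sketch under assumed shapes (64 input channels, cardinality 32, kernel size 3); it illustrates the branch structure, not the paper's exact layer configuration:

```python
import numpy as np

def conv1d(x, w):
    """Valid-mode 1-D convolution (cross-correlation) of a (C_in, T)
    signal with a (C_out, C_in, K) kernel, returning (C_out, T-K+1)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((c_out, t_out))
    for t in range(t_out):
        y[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1]))
    return y

def grouped_conv1d(x, ws):
    """Transformation set as a grouped convolution: the input channels
    are split into len(ws) groups (the cardinality), each branch
    convolves its own slice with an identically shaped kernel, and the
    branch outputs are concatenated along the channel axis."""
    g = len(ws)                              # cardinality, e.g. 32
    xs = np.split(x, g, axis=0)              # split channels across branches
    return np.concatenate(
        [conv1d(xi, wi) for xi, wi in zip(xs, ws)], axis=0)

rng = np.random.default_rng(1)
x = rng.standard_normal((64, 100))           # (channels, time) EEG-like input
# 32 branches, each mapping 2 input channels to 4 output channels, kernel 3.
ws = [rng.standard_normal((4, 2, 3)) for _ in range(32)]
y = grouped_conv1d(x, ws)                    # shape (32 * 4, 100 - 3 + 1)
```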