Modeling and analysis of fatigue detection with multi-channel data fusion

To address the problem of accurately assessing the training effect of flight simulators, this paper proposes an intelligent algorithm based on neural networks and reinforcement learning. Multi-dimensional data comprising facial expression features and EEG and eye-movement (EM) physiological signals are analyzed. A new evaluation model for assessing pilot training states such as spatial balance, attention distribution, and neurasthenia is studied, with facial expression experiments as the primary channel and eye-movement and EEG experiments as supplements. EEG acquisition and analysis during pilot training subjects (take-off and landing) are completed, and the emotional characteristics of pilots during training are identified. We fused the multi-dimensional channel data, constructed mathematical models of pilot maneuver reaction time and attention allocation, monitored and evaluated flight training effects, and conducted controlled experiments. The experimental results show that average recognition rates of 92.598% and 87.013% were achieved for expression recognition and neurasthenia recognition, respectively, and that the ergonomic information from facial expression, EEG, and EM was effectively fused.


Introduction
According to Federal Aviation Administration (FAA) and NASA flight accident statistics, only 12% of flight accidents are caused by the aircraft itself, while more than 73% involve human factors [1], including 67% of aircraft accidents caused by crew errors; the most important factor is pilot error, which accounts for about 51% of all air accidents [2].
Facial expressions are an important parameter in human factors engineering [3]. For example, by observing changes in pilots' facial expressions and promptly instructing them to make rapid psychological adjustments, pilots' positive mood levels were found to improve [4]. Similarly, EEG-based testing of brain neurasthenia can be used to study its effect on selective visual attention; the results show that brain fatigue negatively affects visual selective-attention ability.
With the development of high-throughput EEG technology and the intelligence of EEG data analysis [5], retrospective analysis of high-throughput EEG data is expected to perform localization analysis of cerebral neurological deficits [6]. In conclusion, EEG-based cerebral neurasthenia detection will develop in the direction of quantitative, fine, and accurate localization, and the fatigue detection ability and alertness assessment credibility of drivers will be continuously improved [7].
The main work of this study is research on pilot alertness, assessed from the pilot's facial expression, EEG, and eye-movement signals, in an attempt to reduce operational errors caused by pilot neurasthenia and thereby safeguard aviation safety. The problem is that traditional intention-inference methods do not make good use of the advantages of fusing data from multiple physiological sources. Therefore, a method is proposed that fuses EEG signals, eye-movement signals, and facial expressions for alertness assessment, reducing operational errors caused by pilot fatigue and safeguarding aviation safety from this perspective. In the process of pilot alertness research, we try to find objective parameter values and evaluation algorithms that accurately reflect human subjective alertness, so that the model is not only applicable to flight fatigue but also helps discover the main factors that cause changes in flight alertness, providing data support for pilot training and cockpit layout. For ease of understanding, Table 1 lists all abbreviations used in this article. (This article is part of the Topical Collection: New Intelligent Manufacturing Technologies through the Integration of Industry 4.0 and Advanced Manufacturing.)

Analysis of research status and development trends at home and abroad
Measurements of physiological signals include neuronal electrical activity by electroencephalogram (EEG) [8], eye movements by electrooculogram (EOG) [9], heart rate by electrocardiogram (ECG) [10], muscle activity by electromyogram (EMG) [11], and tissue oxygenation by near-infrared spectroscopy (NIRS) [12]. In the 1980s, foreign studies on the correlation between EEG and cerebral neurological decline showed that EEG is sensitive to fluctuations in alertness, changes significantly as alertness changes, and can predict the decline in brain performance caused by sustained mental effort. In the 1990s, EEG research advanced further, and attention turned to changes in the various EEG bands during cerebral neurasthenia; studies found that reduced alertness and fatigue are reflected in different EEG bands, especially the theta and alpha waves [13], whose power spectra increase when people are in a fatigued state. In the twenty-first century, the individual bands were studied in more detail, including the theta, alpha, and beta bands and combination parameters of different bands. For example, Jap [14] conducted a comprehensive experiment in which four EEG bands, δ, θ, α, and β, and four parameters, (θ + α)/β, α/β, (θ + α)/(α + β), and θ/β, were evaluated on 52 subjects. The results showed that during the transition from a non-fatigued to a fatigued state, the activity of the δ and θ waves remained relatively stable, α-wave activity decreased slightly, and β-wave activity decreased significantly.
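As a concrete illustration (not part of the original study), the four band-ratio parameters evaluated by Jap [14] can be computed directly from scalar band powers; the numeric values below are purely illustrative.

```python
def fatigue_ratios(delta, theta, alpha, beta):
    """Compute the four EEG fatigue ratio parameters evaluated by Jap [14].

    Inputs are scalar band powers for the delta, theta, alpha, and beta bands.
    """
    return {
        "(theta+alpha)/beta": (theta + alpha) / beta,
        "alpha/beta": alpha / beta,
        "(theta+alpha)/(alpha+beta)": (theta + alpha) / (alpha + beta),
        "theta/beta": theta / beta,
    }

# Illustrative band powers only; real values come from EEG spectral analysis.
ratios = fatigue_ratios(delta=1.0, theta=2.0, alpha=3.0, beta=4.0)
```

A rising (θ + α)/β value over a session is then a simple scalar indicator of the transition toward fatigue described above.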
Feature extraction of EEG signals by linear operations is mainly performed in the time domain [15]: first, the time-domain behavior of the different EEG bands is extracted with a combined filter; then the energy of each band is computed in the time domain; finally, these band energies are combined by linear operations.
Suppose there is a real signal $x(t)$; then the power of the signal is $P = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} |x(t)|^2 \, dt$, and the energy of the signal is $E = \int_{-\infty}^{+\infty} |x(t)|^2 \, dt$.
The energy features of individual brain waves can thus be obtained. Zhang et al. [16] verified that the EEG of epileptic patients can be classified using time-domain energy features, and Shekari Soleimanloo et al. [17] validated this across a large number of classification models on a public data set, obtaining classification accuracies ranging from 60.50% to 94.24%, further demonstrating that energy features of EEG signals can be treated as task features for classification.
Saeed et al. [9] used band energy features as a discriminatory basis in their vigilance assessment model combining EEG and EOG signals and obtained 84% accuracy.
In the present experiment, the same three calculation methods mentioned above were used to obtain the energy characteristics of EEG signals at different wavebands.
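The filter-then-energy pipeline described above can be sketched as follows; this is a minimal illustration assuming a Butterworth band-pass filter (the study does not specify its filter design) and a synthetic 10 Hz test signal rather than real EEG.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_energy(x, fs, low, high, order=4):
    """Band-pass filter a 1-D signal and return its time-domain energy,
    E = sum(x_filtered ** 2)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    xf = filtfilt(b, a, x)
    return float(np.sum(xf ** 2))

# A 10 Hz sine should concentrate its energy in the alpha band (8-13 Hz).
fs = 256
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 10 * t)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
energies = {name: band_energy(x, fs, lo, hi) for name, (lo, hi) in bands.items()}
```

The resulting per-band energies are exactly the kind of scalar features that can feed the linear operations and ratio parameters discussed earlier.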
Facial expression is an important visual signal in human communication and is divided into macro-expressions and micro-expressions, as shown in Fig. 1. Unlike macro-expressions, micro-expressions are facial activity with weak changes in the facial muscles and a very short duration. Related psychological studies show that micro-expressions last only 1/25 ~ 1/2 s and usually appear involuntarily when people try to hide their real emotions; they are difficult to disguise and can provide important visual clues to real human emotions. Based on these characteristics, micro-expressions have important potential applications in many fields, such as interrogation, medical treatment, and security.

Experimental design
In the experimental design, a portable attitude sensor is fixed on the target limb. Common attitude sensors can measure the acceleration, angular velocity, and rotation vector along the X, Y, and Z axes, and the motion behavior of the target is studied with these measurements. Using basic mechanics, the motion trajectory of an object in three-dimensional space can be computed. Assume an ideal ball in space whose initial velocity, angular velocity, and acceleration are all zero in every direction. Its dynamic equation combines the relative acceleration $\mathbf{a}_r$, the involved (convected) acceleration $\mathbf{a}_e = \dot{\boldsymbol{\omega}} \times \mathbf{r} + \boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{r})$, and the Coriolis acceleration $\mathbf{a}_c = 2\boldsymbol{\omega} \times \mathbf{v}_r$. Converted into a polar coordinate system, the three components of the dynamic equation can be written as $\ddot{x} = -g \sin\theta \cos\theta \cos\varphi$, $\ddot{y} = -g \sin\theta \cos\theta \sin\varphi$, and $\ddot{z} = -g \sin^2\theta$. Assuming the initial conditions $r|_{t=0} = R$, $\theta|_{t=0} = 0$, $\dot{r}|_{t=0} = 0$, the trajectory equation follows. By differentiating and integrating the trajectory equation in each direction, the acceleration and displacement along the three spatial directions can be obtained.
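The final integration step can be sketched numerically: given sampled acceleration from the attitude sensor, trapezoidal integration recovers velocity and displacement under the paper's zero-initial-state assumption. The constant-acceleration example below is a hypothetical check, not sensor data from the study.

```python
import numpy as np

def integrate_motion(acc, dt):
    """Trapezoidal double integration of a sampled acceleration series
    (shape N x 3) into velocity and displacement, assuming zero initial
    velocity and position, as in the experimental setup."""
    vel = np.zeros_like(acc)
    pos = np.zeros_like(acc)
    for k in range(1, len(acc)):
        vel[k] = vel[k - 1] + 0.5 * (acc[k - 1] + acc[k]) * dt
        pos[k] = pos[k - 1] + 0.5 * (vel[k - 1] + vel[k]) * dt
    return vel, pos

# Sanity check: constant 2 m/s^2 along z for 1 s gives v = 2 m/s, s = 1 m.
dt = 0.01
acc = np.zeros((101, 3))
acc[:, 2] = 2.0
vel, pos = integrate_motion(acc, dt)
```

Trapezoidal integration is exact for the piecewise-linear velocity produced by constant acceleration, which is why the closed-form values are recovered here.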

A multi-dimensional monitoring and evaluation model for simulating the effectiveness of flight training
Using the pilot's line of sight, facial expression, attention distribution, and reaction time, the effectiveness of flight training can be monitored, objectively reflecting the quality of the pilot's completion of flight subjects and the effectiveness of training, as shown in Fig. 3. In order to reflect training effectiveness more comprehensively and objectively, the fuzzy comprehensive evaluation method is applied to the training-time monitoring data (attention distribution, reaction time, facial expressions, EEG information, etc.) to obtain: the comprehensive evaluation of different pilots completing the same flight subject, the comprehensive evaluation of the same pilot completing the same training subject at different times, and the comprehensive evaluation of the same pilot across different flight subjects. These evaluations then yield accurate information about a pilot's ability (compared with other pilots), ability improvement (training effectiveness), and characteristics (ability to complete different subjects).
First, determine the factor domain $U = \{u_1, u_2, \cdots, u_m\}$, where the $u_i$ are attention allocation, reaction time, facial expression, and EEG information, respectively. Second, determine the evaluation grade domain $V = \{v_1, v_2, \cdots, v_n\}$, where the grades are excellent, good, moderate, pass, and fail.

Horizontal evaluation
The cross-sectional (horizontal) evaluation refers to the evaluation between different pilots. Single-factor evaluation yields the fuzzy matrix $R = (r_{ij})$, where $r_{ij}$ denotes the pilot's degree of membership in grade $v_j$ on indicator $u_i$. A pilot's performance on indicator $u_i$ can then be represented by the vector $(r_{i1}, r_{i2}, \cdots, r_{in})$.
With the horizontal evaluation, we obtain a ranking of competencies between different pilots, which provides objective indicators for pilot assessment, selection, etc.

Longitudinal evaluation
Longitudinal evaluation refers to the evaluation of the same pilot over time. The fuzzy matrix is similar to that of the cross-sectional evaluation, except that $r_{ij}$ denotes the pilot's degree of membership in grade $v_j$ on indicator $u_i$ at different periods of time.
The longitudinal evaluation, which reflects the changes in the history of the pilot, allows an objective description of how the pilot has improved (or even regressed) through training.

Feature evaluation
Feature evaluation refers to the evaluation of the same pilot across different flying disciplines. The fuzzy matrix is similar to that of the cross-sectional evaluation, except that $r_{ij}$ denotes the pilot's degree of membership in grade $v_j$ on indicator $u_i$ for the different flight training subjects.
The feature evaluation yields the ordering of a pilot's performance across the different flying disciplines, reveals the characteristics of different pilots, and provides objective data to support the division of labor in team tactics.
Next, the fuzzy weight vector $A = (a_1, a_2, \cdots, a_m)$ is determined by the Delphi method, assigning a weighting factor $a_i$ ($i = 1, 2, \cdots, m$) to each factor $u_i$.
Finally, the fuzzy comprehensive evaluation model $B = A \circ R$ is used to calculate the pilot's degree of membership in each grade $v_j$ across the multiple indicators, where $\circ$ denotes fuzzy matrix composition.
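The composition $B = A \circ R$ can be sketched as follows, assuming the common max-min composition operator (the paper does not state which operator it uses); the weights and membership matrix below are hypothetical, not the study's Delphi results.

```python
import numpy as np

def fuzzy_evaluate(A, R):
    """Fuzzy comprehensive evaluation B = A o R with max-min composition,
    normalized so that the grade memberships sum to 1."""
    A = np.asarray(A, float)
    R = np.asarray(R, float)
    B = np.max(np.minimum(A[:, None], R), axis=0)  # B_j = max_i min(a_i, r_ij)
    return B / B.sum()

# Hypothetical weights for the four factors (attention allocation, reaction
# time, facial expression, EEG information) and a hypothetical single-factor
# evaluation matrix over the five grades (excellent ... fail).
A = [0.4, 0.3, 0.2, 0.1]
R = [[0.5, 0.3, 0.2, 0.0, 0.0],
     [0.2, 0.5, 0.3, 0.0, 0.0],
     [0.1, 0.3, 0.4, 0.2, 0.0],
     [0.0, 0.2, 0.5, 0.3, 0.0]]
B = fuzzy_evaluate(A, R)
```

The largest component of $B$ identifies the grade to which the pilot most strongly belongs; here it is the first grade (excellent).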

An experimental comparison study of flight training effectiveness monitoring models and pilot assessment performance
The correlation between the comprehensive evaluation model of flight training effectiveness and the pilots' routine assessment results provides evidence of the effectiveness of the monitoring model. As shown in Fig. 4, the monitoring model is compared with the assessment results along three dimensions.
The first dimension is an analysis of the correlation between the results of the monitoring and assessment completed by the same pilot for the same training subject over different periods of time and the assessment results. This reflects the improvement in the corresponding pilot's ability to complete the assigned task through training.
Assume $s_i$ denotes a pilot's previous assessment results for the same training subject, and $\hat{s}_i$ the monitoring and evaluation result for the corresponding training task.
The second dimension is the analysis of the correlation between the results of different flight subjects completed by the same pilot and the results of the monitoring and evaluation. This reflects the characteristics of the corresponding pilot, performing different flight subject training, and provides data to support the development of individualized training programmes.
Assume $g_i$ denotes a pilot's previous assessment results for different training disciplines, and $\hat{g}_i$ the monitoring and evaluation result for the corresponding training task. The third dimension is an analysis of the correlation between different pilots completing the same training subjects, which reflects the relative performance of different pilots.
Assume $h_i$ denotes the assessment results of different pilots for the same training subject, and $\hat{h}_i$ the monitoring and evaluation result for the corresponding training task. The correlation coefficient method and the information entropy method are used to analyze the relationship between the monitoring model and assessment performance under each of the three dimensions. Information entropy describes the amount of information contained in a variable, while mutual information reflects the interrelationship between two variables.
The expression data are processed with a convolutional neural network: the expression data collection team trained the network on an existing data set using a Python implementation and then tested it on a set of samples collected from the laboratory's simulated flight personnel, achieving an accuracy above 85%. Figure 5 gives the structure of each layer and the data processing flow.
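The correlation-coefficient and information-entropy analysis described above can be sketched as follows; this is an illustrative implementation on synthetic scores (a histogram-based mutual information estimate), not the study's actual analysis code.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score series."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def mutual_information(x, y, bins=5):
    """Histogram-based mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            if pxy[i, j] > 0:
                mi += pxy[i, j] * np.log(pxy[i, j] / (px[i] * py[j]))
    return mi

# Synthetic example: assessment scores x and monitoring scores linearly
# related to them are perfectly correlated, and x carries log(5) nats of
# information about itself under a 5-bin discretization.
x = np.arange(100.0)
r = pearson_r(x, 2 * x + 1)
mi = mutual_information(x, x, bins=5)
```

A high correlation and mutual information between assessment results and monitoring results along a given dimension supports the validity of the monitoring model on that dimension.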
Feature extraction is the first step in the fatigue detection method based on facial features, so it is necessary to find features in the facial region that can identify the subject's fatigue state and extract them. The feature extractor built in this paper on the CNN concept, shown in the figure, extracts fatigue features of the eyes, mouth, head posture, and the whole face.
In order to extract effective features with the CNN, the network model must first be trained. A loss function defines the parameter update for all weights and biases in the convolutional and fully connected layers; it is the sum of the costs over all training samples. During training, the gradient descent algorithm or a variant (e.g., gradient descent with momentum) is used to optimize all network parameters so as to reduce the value of the loss function.
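The momentum update mentioned above can be sketched on a toy quadratic loss; this is a generic illustration of the optimizer, not the paper's CNN training code, and the target vector is purely hypothetical.

```python
import numpy as np

def momentum_sgd(grad, w0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with momentum:
    v <- beta * v + grad(w);  w <- w - lr * v."""
    w = np.asarray(w0, float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad(w)
        w = w - lr * v
    return w

# Toy quadratic loss L(w) = ||w - target||^2 with gradient 2 * (w - target);
# the optimizer should drive w toward the target.
target = np.array([1.0, -2.0])
w = momentum_sgd(lambda w: 2 * (w - target), w0=[0.0, 0.0])
```

The velocity term accumulates past gradients, which is what lets momentum accelerate convergence along consistent descent directions compared with plain gradient descent.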
The composition of our experimental and sample sets is as follows. Based on the strong generalization ability of SVM, we classified the pre-processed data. A total of 8 subjects participated in the experiments, and 10 sets of data were collected from each subject, with 80 trials per set. Noise reduction and classification were performed on each subject's data separately, and 70% of the samples were selected as the training set: the subjects completed 8 × 10 × 80 = 6400 trials in total, from which 6400 × 70% = 4480 were sampled as the training set and the rest as the test set.
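The classification step can be sketched with a minimal linear soft-margin SVM trained by sub-gradient descent; this is a simplified stand-in for the SVM actually used (whose kernel and solver the paper does not specify), run on synthetic two-class data with the paper's 70%/30% split.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Sub-gradient training of a linear soft-margin SVM (hinge loss).
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (X[i] @ w + b) < 1:       # margin violated: hinge gradient
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:                               # only the regularization gradient
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)

# Synthetic two-class data standing in for the pre-processed EEG samples,
# with a 70%/30% train/test split mirroring the paper's protocol.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
idx = rng.permutation(100)
train, test = idx[:70], idx[70:]
w, b = train_linear_svm(X[train], y[train])
accuracy = float(np.mean(predict(w, b, X[test]) == y[test]))
```

On well-separated clusters such as these, the held-out accuracy should be high, analogous to the per-subject classification performed on the denoised EEG samples.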
When the loss function fluctuates around a certain value and remains stable through training, the feature extraction model is generated, as shown in Fig. 6.

Experimental analysis report
Flight simulation operators must maintain a high level of concentration throughout long simulation sessions. The three types of data collected from the exercisers (eye movement, EEG, and facial expression) vary with objective factors such as the flight task and with subjective factors such as fatigue level and psychological state.

Eye-tracking data analysis
From the collected data, the eye-movement data express the line-of-sight reaction time, the type of gaze point (instrument, outside cabin, joystick), and the degree of matching to the flight mission.

Reaction time
While the subjects performed the flight mission, the research group ran a pop-up window stimulation experiment: a pop-up window appeared at a random position in the subject's field of view every minute, and by detecting the line-of-sight position, it was determined whether the subject had captured the pop-up target. As shown in Table 2, the EYE_2.csv file contains four columns; from left to right, they are "Experiment duration (ms) when the pop-up window stimulus starts," "Experiment duration (ms) when the subject captures the target," "Response duration (ms)," and "Alertness label." The research group obtained the EYE_2.csv file by processing the EYE_X_Y_Z.txt file, excluding the first minute after the start and the last minute before the end of each session because the subjects' mood fluctuates greatly during those periods. Figure 7 includes 1143 eye-tracking tasks. Based on the experimenters' observation and experience during the experiment, the response time is less than 800 ms when the subject's alertness is high, so the team used 800 ms as the alertness threshold. Of the 1143 capture tasks, 59 exceeded 800 ms and were labeled low alertness.
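The thresholding rule above can be sketched in a few lines; the response durations below are illustrative, not entries from the study's EYE_2.csv file.

```python
import numpy as np

ALERT_THRESHOLD_MS = 800  # threshold chosen by the research group

def label_alertness(response_ms):
    """Label each pop-up capture task: 1 = high alertness (response below
    800 ms), 0 = low alertness."""
    return (np.asarray(response_ms) < ALERT_THRESHOLD_MS).astype(int)

# Illustrative response durations (ms).
rt = [350, 620, 790, 805, 1200]
labels = label_alertness(rt)
low_alertness_ratio = float(1 - labels.mean())
```

Applied to the 1143 recorded capture tasks, this rule yields the 59 low-alertness tasks reported above.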

Type of gaze point
During the flight mission, the pilot's gaze points fall mainly into three categories: instrumentation, outside the cabin, and joystick. In flight, the pixel values of the different gaze points are collected as gaze-tracking memory and used to judge which area of the aircraft the pilot is looking at. Figure 8 shows the flight environment and experimental framework.
The data values and the gaze points they represent are: 100 is the central view of the cockpit, 50 the right view, 150 the left view, 200 other areas in the cockpit, 20 the instrument panel, −1 the left-side cockpit screen, −2 the right-side cockpit screen, −3 directly below the center screen, and −10 other areas, as shown in Fig. 9.
As shown in Fig. 10, area A represents the outside perspective, area B the dashboard area, area C the joystick, and area D the other areas, including the watch. In order to analyze the correlation between the EEG signal and the operation intention before performing the above tasks, the research group assumed that within the 10 s before the pop-up window appeared, the subject was concentrating on the flight mission; while the pop-up window was displayed, the subject searched for and captured the pop-up target; and within 1 s after the pop-up window closed, the subject tried to return to the previous flight mission.

EEG data analysis
As shown in Fig. 11, between the flight mission and the target-capture task, the band power under the frontal AF4 channel, the differential entropy under the temporal P7 channel, the band power under the parietal P3 channel, and the α-wave power under the occipital POZ channel show strong significance; between the flight mission and the return task, the band power under the frontal AF4 channel, the differential entropy under the temporal P7 channel, the differential entropy under the parietal PZ channel, and the α-wave power under the occipital POZ channel show strong significance. The EEG features collected by this group show no significant difference between the target-capture task and the return task. Based on the experimenters' observation and experience during the experiment, the response time is less than 800 ms when the subject's alertness is high, so the team used 800 ms as the alertness threshold: the EEG data within 10 s before and after a pop-up stimulus whose response time falls below this threshold are labeled high-alertness data; otherwise, they are labeled low-alertness data.
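The differential entropy feature referred to above is commonly computed in EEG work under a Gaussian assumption; the sketch below uses that standard closed form (the paper does not state its exact estimator) on a synthetic signal.

```python
import numpy as np

def differential_entropy(x):
    """Differential entropy of a band-filtered EEG segment under the common
    Gaussian assumption: DE = 0.5 * ln(2 * pi * e * var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

# For a unit-variance Gaussian signal, DE should approach
# 0.5 * ln(2 * pi * e) ~ 1.42 nats.
rng = np.random.default_rng(1)
de = differential_entropy(rng.normal(0.0, 1.0, 100_000))
```

Because DE grows with the log of the band variance, it behaves like a log-scaled band power, which is why the two features appear side by side in the channel comparisons above.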

Facial expression data analysis
The pilot's expression includes two states, tension and relaxation. The expression data cover three different flight phases: the take-off phase, the level-flight phase (normal flight), and the landing phase. This experiment defines p as the pilot's stress level: when 0 ≤ p < 0.5, the state is defined as relaxed; when 0.5 ≤ p ≤ 1.0, it is defined as tense.
In this experiment, 660 frames of take-off mission pictures and 660 frames of landing mission pictures were collected from multiple flight tests. As can be seen from the figure, the p value of the pilot's tension state is generally higher during the take-off and landing phases, while in level flight it never exceeds 0.5. The tension load ratio, defined as P = (number of tense frames) / (total frames), is 0.1090909 (72/660) in the take-off phase, 0.0924242 (61/660) in the landing phase, and 0 in level flight.
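The tension load ratio is a simple thresholded frame count; the sketch below reproduces the landing-phase figure from a synthetic p series with 61 tense frames out of 660 (the p values themselves are illustrative, not the study's per-frame outputs).

```python
import numpy as np

def tension_load_ratio(p_values, threshold=0.5):
    """Tension load ratio P = (number of tense frames) / (total frames),
    where a frame counts as tense when its stress value p >= 0.5."""
    return float(np.mean(np.asarray(p_values) >= threshold))

# 61 tense frames out of 660 reproduce the landing-phase ratio of ~0.0924.
p = np.concatenate([np.full(61, 0.8), np.full(599, 0.2)])
ratio = tension_load_ratio(p)
```

The same function applied to the take-off frames (72 tense of 660) and the level-flight frames (none tense) yields the other two reported ratios.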
It can be seen from Fig. 12 that the p value changes greatly during the take-off and landing phases, and the p value changes little during the level flight phase.

Conclusion
The experiment uses stochastic gradient descent (SGD). Compared with non-stochastic (full-batch) algorithms, SGD is particularly effective in the early iterations and can exploit redundancy in the data; when the number of samples is large, it retains a computational-complexity advantage. The learning rate is 0.01, and a fixed batch_size of data is read at each step during training. Figure 13 shows how the loss function and accuracy change with the number of iterations during training.
In this paper, in the course of researching pilot alertness, an attempt is made to find objective parameter values and evaluation algorithms that accurately reflect the personal attention of the human body, making the model not only applicable to flight fatigue but also attempting to discover the main factors that cause changes in flight alertness, providing data support for pilot training and cockpit layout.
Intelligent algorithms such as neural networks and reinforcement learning are used to analyze multi-dimensional data on facial features and EEG signals, and a new evaluation model for assessing pilot training states such as spatial balance and attention distribution is studied. EEG acquisition and analysis during pilot training subjects (take-off and landing) is completed, and the emotional characteristics of pilots during training are identified. The data fusion of the multi-dimensional channels is completed, mathematical models of the pilot's reaction time and attention distribution during maneuvers are constructed, the effectiveness of flight training is monitored and evaluated, and controlled experiments are conducted.
The final result is an effective intelligent mechanism for accurate training effect evaluation of multi-dimensional data, which will help to improve the pilot's operational efficiency.

Declarations
Ethics approval Not applicable.
Consent to participate All authors agreed to participate.

Consent for publication All authors agree to publish.
Competing interests Author Wenbo Huang has received research support from Xi'an Technological University. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. Authors Wenbo Huang, Changyuan Wang, Hong-bo Jia, Pengxiang Xue, and Li Wang declare they have no financial interests. The authors have no relevant financial or non-financial interests to disclose.