Towards a reverse world in automation: human as a sensor

Automation is increasingly shifting towards a technological world in which the human component is nevertheless still involved. Autonomous cars are an example where the move towards a fully automated vehicle still requires a human component, that is a human-in-the-loop, which may represent either the failure point or the added value of such systems. Stress and fatigue are the main safety concerns during workloads, above all in situations where a person's behaviour is a potential source of harm in human-computer interaction. After a brief survey of the recent literature on the human-as-a-sensor concept, this paper presents some examples of an electroencephalography (EEG)-based brain-computer interface showing the still limited but potentially promising effectiveness of such an approach in systems engineering. The aim is to explore brain activity when a subject is conditioned by external stimuli during the execution of a cognitive task and to evaluate the human capability to react to unexpected events faster than a machine. Different experimental tests have been designed to evaluate human reactions in simulated driving scenarios and workload sessions, and different methodologies to use this information are shown. The driver's performance has been evaluated by EEG power spectrum analysis when unexpected acoustic stimuli or emergency braking situations are simulated during the driving session. A Fine Gaussian Support Vector Machine (SVM) approach has been applied to classify the participant's brain activity when he/she has to drive a car along a challenging curvilinear path in a virtual simulated scenario or is exposed to visual disturbances while performing a common task. The results demonstrate an interesting potential correlation between external stimuli and the driver's brain wave activity in the virtual driving environment.
In addition, the EEG pattern recognition in the visual external stimulus test and in the driver's stress simulation on a road with different curve angles generated promising outcomes. However, despite the wide literature on the subject, the effective use of such signals still represents a difficult but promising task.


Introduction
In different kinds of context, from manufacturing to automotive, the human factor represents a key element, in addition to hardware and software, which may affect but also improve overall system safety and efficiency. Advanced technological applications and artificial intelligence are coupled in many industrial tasks to perform specific activities and to identify potential problems; recent examples are automated vehicle guidance and robotics engineering. Those technologies are quickly evolving and have reached significant results in replicating the cognitive abilities of the human brain. In general, cooperation between humans and machines is usually treated as a communication process in which machines support decision makers and decision makers take decisions to optimise the overall control of the process.
These technologies fail when human emotions also affect the decisions in the control processes; thus, implicit human-machine interaction has to be included in brain-computer interface (BCI) systems (Rani P., et al., 2003). Besides, physiological signals represent the most direct element to detect changes in the human emotional process (Xiao, G. et al., 2020). This paper mainly focuses on this aspect, related to the interaction between machines and humans when the latter, involved in the task, have to take decisions in particular mental states generated by unpredictable problems in the surrounding environment. The machine-human interaction is here analysed as a reverse problem.
The main purpose of this paper is to demonstrate that the data coming from the human brain, modelled according to the "human as a sensor" (HaaS) paradigm, represent a source of data to detect alert or dangerous situations. In the proposed approach, the objective is to demonstrate the possibility for humans to support machines in the performance of their processes, with a specific focus on safety.
In particular, the human assumes the role of a sensor because he/she may perceive possible adverse conditions in advance with respect to the machine. Humans feel emotions such as fear or stress, which machines are not able to detect, but which represent the added value in the identification and prediction of unsafe circumstances.
The HaaS paradigm is based on the concept that humans represent a data source for interpreting external events, due to their capability to filter and process personal observations from the environment (Avvenuti et al., 2016). In this paper, we extend this idea from social sensing systems to physical network systems, with special reference to BCI applied to the automotive context.
Besides, HaaS is a paradigm which considers the human as the main subject, who may detect complex hazard events or potential accidents better than physical sensor systems (Wang et al., 2014). In the recent literature, this paradigm is mostly used in the social sensing framework, due to the diffusion of online social networks. People, through the massive diffusion of ICT devices, are able to collect large amounts of data and information from the environment they live in and to share them through communication systems (Deogade S. S., 2018).
In the brain-computer interface (BCI) context, it is interesting to introduce the concept that the user, who interacts with a computer or a machine, represents a source of data that can be used to identify critical situations and predict events in different contexts of application such as cyber security, transportation, energy, or social issues (Avvenuti et al., 2016).
In fact, the traditional approaches that manage system automation often fail to correctly represent the relationship between a person's performance and his/her human-machine co-working environment.
The aim of HaaS is to improve and integrate the technical systems involved in safety and security without, however, replacing them. The HaaS paradigm is nothing new, as there are many references to such an approach, also reported in this paper. The novelty of this paper is to investigate the possibility of using the human as an additional sensor where traditional sensor technologies cannot succeed (such as detecting a "feeling" of danger), or simply as an additional source of integrating information when traditional sensor technologies do provide information. Physical sensors usually acquire more reliable data in relation to specific physical phenomena such as temperature detection, distance measurement, etc. The human, in the proposed HaaS approach, assumes the role of a sensor which provides a broader range of information about the environment, also in fields for which no single physical measure exists, such as fear. Brain activity in fact varies according to the external conditions, which may produce different cognitive reactions in the user. The possibility to complement current ICT with driver detection systems improves the quality of monitoring. Rahman et al. (2017) introduce HaaS as cyber-trustworthiness by proposing a mechanism to assign a score to individual reports based on features of the mobile device that are monitored in real time. In Heartfield et al. (2016), the authors applied the HaaS concept to detect semantic engineering attacks. Cameron (2015) uses HaaS to improve emergency situation alertness in crisis management, while Sakaki et al. (2010) applied it to the real-time detection of earthquake damage.
Intelligent automation may also involve HaaS, specifically by putting the human in the loop, allowing him/her to add information, for example to enhance the system's performance, to adjust unforeseen errors, to add missing data, or to support the decision-making process. In detail, in the specific context of BCI, HaaS may be considered a more specific human-in-the-loop feedback control which aims at investigating how the human and the machine collaborate in an integrated and intelligent way (Zareian et al., 2014). In the literature, several studies are dedicated to models representing human choice in a complex system, such as a mixed assembly line in a manufacturing system, considering techniques based on the human-in-the-loop approach (Busogi et al., 2017).
A challenging trend of HaaS research deals with the integration of human sensing data coming from electroencephalography (EEG) signals, considering them an integral component of the overall automated system and of specific decision-making tasks. EEG captures the electrical activity of the brain by monitoring voltage variations through electrodes placed on the scalp. This kind of analysis started with medical or clinical purposes but, more recently, it has been used to extract from the EEG trace information about the monitored person while performing some specific task, in order to evaluate how his/her brain reacts in the different test cases (Paszkiel, S., 2020). The studies that investigate the possibility of using biometric or biophysical signals as a data source to evaluate the interaction between the human's brain activity and an electronic machine fall within the human-machine interface (HMI) framework. The HMI, named in this context BCI, is a system which can acquire the human signals, analyse their specific embedded structures, and recognize the behaviour of the subject during his/her interaction with the machine, with a virtual interface such as a PC, or with another communication system (Ferreira, 2008). Among the paradigms used to implement BCI, EEG signal analysis represents one of the most frequently adopted approaches (Roman-Gonzalez, 2012). EEG signal analysis plays an important role in a variety of applications, above all to interpret humans' emotional states and to produce feedback identifying the behaviour related to those emotions (Mohammadpour, 2017). Currently, a large part of the scientific literature is still dedicated to selecting the optimal channels or features in the EEG trace which may markedly improve the correlation with emotion recognition. However, most of the literature shows results on different kinds of human sensations recognised by EEG spectral changes.
Commonly, the six basic emotions are fear, anger, disgust, pleasure, sadness, and surprise, described in a 2-dimensional space whose dimensions are valence and arousal (Al-Qazzaz, 2020). Thanks to the brain's capability to feel sophisticated emotions and to provide complex dynamic information reproduced by the cerebral cortex, different methods for the automatic detection of emotions through EEG signals have been studied and applied in the literature. Angrisani et al. (2020) used software techniques to mitigate noise and interference in the EEG signals from the human brain, to classify them, and to process the main desired information to improve the efficiency of human-device interaction. In Attia et al. (2018), a steady-state visual evoked potential (SSVEP)-BCI system based on a convolutional neural network (CNN) has been developed to classify the brain activities coming from a wireless EEG in order to realize a real-time control loop for a mobile robot.
The classifier processed the EEG signals of the participants subjected to visual stimuli and generated the commands for the robot as a brain-actuated vehicle in a human-in-the-loop configuration. Zhao et al. (2016) implemented an SVM classifier to manipulate the human EEG data into a format from which human intentions can be recognized, so that motion commands can then be transmitted to a teleoperated robot.

EEG signal analysis fundamentals and experiences to detect behavioural human states

In general, the electrical activity of the brain detected by the EEG is classified according to different types of waves, such as theta, delta, alpha, beta, and gamma. The frequency patterns of these waves identify different functional states of the brain. The classification consists of five sub-bands (Khosla et al., 2020):
γ Gamma (frequency range >30 Hz), used to identify neurological disorders;
β Beta (frequency range between 14 and 30 Hz), associated with the parietal, somatosensory, frontal, and motor areas. Beta activity is significant for states of alertness and attention and in case of perceived stimuli (Campisi & La Rocca, 2014).
α Alpha (frequency range between 8 and 14 Hz), associated with the parietal and occipital regions. A reduction in alpha frequencies can reveal anxiety and emotional tension in the monitored subject.
θ Theta (frequency range between 4 and 8 Hz), originating from the hippocampus region. During a test, an increase in the theta power band reflects memory demand (Campisi & La Rocca, 2014).
δ Delta (frequency range between 0.1 and 3.5 Hz), mostly generated by the thalamus region. An increase in the delta EEG rhythm during a mental activity identifies growing attention in the subject.
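Given these band definitions, the per-band power of a recorded trace can be estimated, for instance, from the Welch power spectral density. The sketch below uses the band limits listed above and assumes a 500 Hz sampling rate; the windowing parameters are illustrative choices, not those used in the experiments.

```python
import numpy as np
from scipy.signal import welch

# EEG band limits in Hz, following the ranges listed above (exact
# cut-offs vary slightly across the literature).
BANDS = {
    "delta": (0.1, 3.5),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 14.0),
    "beta": (14.0, 30.0),
    "gamma": (30.0, 45.0),
}

def band_powers(signal, fs=500):
    """Estimate absolute power per EEG band for a single-channel trace."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # 2-second windows
    df = freqs[1] - freqs[0]
    return {
        name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
        for name, (lo, hi) in BANDS.items()
    }
```

Applied to a pure 10 Hz oscillation, for example, the function attributes almost all the power to the alpha band.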
Usually, the inactive brain state leads to the slowest brain rhythms, while the fastest rhythms appear when the subject performs some task processing. Wei et al. (2020) developed an EEG-based emotion recognition system to identify positive, neutral, and negative emotions. They decomposed the original EEG by a Dual-tree Complex Wavelet Transform (DT-CWT) and then applied a Simple Recurrent Units (SRU) model to extract features using time, frequency, and nonlinear analysis. Mehmood and Lee (2016) proposed an LPP (late positive potential)-based EEG feature extraction method for four emotional stimuli related to sadness, fear, happiness, and calm according to the arousal-valence domain (Wundt, 1948). They concluded that the theta and alpha frequency bands of EEG signals may be an optimal choice for the analysed emotion recognition. Thanapattheerakul et al. (2018) and Maria et al. (2020) proposed extensive reviews on emotion recognition patterns.
Besides, a wide range of EEG signal processing methods relevant for emotion recognition exists in the literature, generally distinguished into time, frequency, and time-frequency domains. Al-Nafjan et al. (2017) and Jenke et al. (2014) report significant surveys of the current developments in this field.
Recently, topical research in the estimation of emotional EEG signals has addressed safety concerns in the working environment. Due to the growing interest in simulation tools to model the dynamics of human performance, new BCI methods have been implemented to adequately design the integrated relationship between cognition and safety (Sträter, O., 2005). In the literature, mental fatigue may distinguish awake and sleeping states. On the other hand, in order to improve safety, there is a practical need to estimate, monitor, and predict mental fatigue during specific tasks associated with workload or activities requiring high levels of attention. Recent studies demonstrated the effectiveness of EEG patterns in identifying mental fatigue even in fully awake persons (Trejo et al., 2015). In this context, the same importance is given to the identification of stress states (Mühl et al., 2014). Gharagozlou et al. (2015) analysed the EEG alpha power changes in partially sleep-deprived drivers while carrying out a simulated driving task.
Both stress and fatigue affect a person's performance, and they represent relevant factors influencing the occurrence of inadequate human behaviour, which may generate potential health risks during common daily life activities, workloads, or tasks based on continuous human-machine interactions.
To recognize stress states, the stimulus perception and the stress response in the human subjects have to be identified (Seo S.-H. and Lee J.-T., 2010). Blanco et al. (2019) analysed alpha, beta, and theta EEG signals in persons subjected to different cognitive stress states. The authors applied Stroop-type colour-word interference tests and concluded that variations in the signals appear immediately after the beginning of the stress stimuli. Besides, the analysis of signals coming from different electrodes seems to guarantee more reliable results.
In the case of threatening conditions, the human brain stimulates several neural circuits to activate its resilience and maintain physiological integrity even in the most unfavourable situations. Psychological and physical stresses, called stressors, involve different parts of the nervous system, whose changes appear distinctly in the traces of brain activity. Besides, it has been verified that stressors may be perceived in an anticipatory manner if the subject has already been exposed to similar stresses, due to increased predictability (Koolhaas et al., 2011). Furthermore, people who reveal high levels of anxiety seem to have more control over the stress reaction and greater use of processing resources than people with low anxiety (Savostyanov et al., 2009). This concept is linked to the applications of BCI technologies in safety concerns, where the user can react to the stimulus more quickly than the machine, based on his/her experience and anxiety state. However, the consequences of stress may be agitation, fatigue, and fear. The incapacity to cope with stress represents a risk factor in most activities, influencing people's mortality, for example in the transportation sector. According to Boada-Grau et al. (2012), driver stress and fatigue appear to be among the major causes of road traffic accidents.
Despite the broad consensus among researchers on the reliability of EEG acquisition for analysing event-related potentials in brain activity, the works in the literature also present some limitations. In general, they rely on relatively brief EEG tracings, which might have lower sensitivity than longer monitoring in determining participants' reactions. It has also been verified that EEG electrode placement significantly affects the sensitivity and reliability of data acquisition, and this holds both for short EEG recordings and for extended monitoring (Michel & Brunet, 2019). The experiments and the results of the test cases are subject-dependent, and the variability of trials in terms of time and participants' skills often makes it necessary for BCI researchers to collect large sets of data or to repeat many different monitoring sessions. Furthermore, the time-consuming process of training and model calibration limits the efficiency of BCI applications for many use cases (Abiri et al., 2019).
Driver's emotion detection by EEG signal

EEG signal analysis can refer to human-in-the-loop control, which combines the human operator's cognitive reaction with the surrounding physical and mechanical environment. This concept may be relevant in BCI driving systems, estimating driver intentions in simulated braking (Kim et al., 2015), traffic light recognition, or identification of the turning direction (Zhan et al., 2013) during a driving session. Recent studies investigated the EEG signal in driving assistance systems in order to identify error-related brain activities (Zhang et al., 2015). Usually, human errors affect the correct use of the interacting machine, and the analysis of brain functions may be used to regulate target-directed behaviour.
In stressful conditions, when an unexpected event happens, the subject has to react quickly to avoid threatening scenarios. In this context, the real-time detection of users' fear/stress may be crucial for improving safety in different operative work settings, such as emergency control rooms, vehicle driving, or aviation. In the literature, biometric data such as heart rate (HR) or breathing rate (BR) are used to identify the level of stress in driving situations (De Nadai, 2015). In general, a person subjected to external stimuli reflects his/her emotions in variations of the brain waves called evoked potentials (VEP) (Hashemi et al., 2014). More recently, special attention has been given to the prediction of drivers' emotional states during driving performance by EEG brain signal recognition (Hajinoroozi, et al., 2016). Kohlmorgen et al. (2007) defined a workload detector which aimed at quantifying the mental stress of drivers operating under real traffic conditions and, accordingly, at regulating the interaction with the car's systems. Haufe et al. (2011) proposed a driver test to identify EEG potentials predicting emergency braking during simulated driving. They observed a significant event-related association between EEG signals and the critical traffic situation. Other authors estimated the relationship between the EEG spectrum and the driver's alertness in a virtual-reality-based driving simulator. Li et al. (2012) analysed EEG data to determine an indicator of driver fatigue and the minimum number of electrodes to be used for the analysis. To detect fatigue in drivers, in Jap et al. (2009) the authors used EEG to analyse alpha, beta, delta, and theta activities during a monotonous driving session; the results demonstrated stable delta and theta activities over time, a minor reduction of alpha activity, and a significant decrease of beta activity. Besides, Larue et al. (2011) evaluated drivers' vigilance performance as an effect of viewing a monotonous road.
Their study demonstrated that the invariability of the environment, in terms of the characteristics of the travelled road, is a factor that may decrease drivers' alertness. Balandong et al. (2018) determined that alpha wave fluctuations may reflect the intensification of the mental effort needed to maintain vigilance, while the beta wave is associated with high alertness and arousal. Kar et al. (2010) analysed fatigue in human drivers in real and simulated scenarios by evaluating entropy values in the wavelet domain. They concluded that the parameters varied in the same manner in simulated and actual driving conditions. Throughout the related literature, several studies have been dedicated to the EEG responses to acoustic and visual external stimulation of the cortical EEG. Haak et al. (2010) detected the eye-blink frequency in brain activity using EEG in car driving simulations, filtering the frequencies between 2 Hz and 40 Hz. The tests were conducted by introducing stressful emotions to the participants, through straight and curvy roads, with and without billboards, during the driving sessions. They noticed a strong correlation between the EEG signals and the visual stimuli during the tests. In the work of Sonkusare et al. (2020), the authors monitored the stereo-electroencephalography (SEEG) of the drivers to explore the reactions of the temporal pole of the brain. They found that the stimuli generated by pictures, music, and movies affected the theta-alpha frequency range. Zero et al. (2019) confirmed the correlation between drivers' brain activities associated with alpha waves and two different situations: the participant's exposure to an unexpected acoustic alarm and visual external events during driving sessions. Kathaus et al. (2020) tested the effect of acoustic and visual stimuli on the braking response of car drivers of different ages.
The participants had to react to the brake lights of the preceding car while, simultaneously, different secondary stimuli generated by two loudspeakers or by signs on the screen were applied. They concluded that stimuli during a workload task affect the braking performance of the drivers, and that visual external events had a greater distraction effect than acoustic stimuli.
Due to the importance of emotions in human-machine interaction (Du et al., 2020) and, in particular, in manual or automated driving, the present study aims at showing how stress affects the performance of drivers and at examining the effects of emotions on their EEG signals in two simple simulation scenarios.
In this paper, the specific risky situations related to a driver who carries out the driving task in complex scenarios are presented and explored.
In this context, the paper focuses on the possibility of adopting the HaaS paradigm. The purpose is twofold. First, it is crucial to understand how the human may be represented as a sensor, in order to detect his/her emotions and observations generated by the physical world in which he/she is inserted and acting during the driving simulations. Second, the goal is to build a classifier able to identify the patterns in the data acquired from the human brain and to link them to the external events to which the human is subjected during the experiments. In order to correctly adopt the user as a sensor, a first set of experiments is carried out involving the participant in a BCI system through different driving tests in which he/she has to manage difficult tasks. Besides, another experiment, focusing on emotion identification, examines the interaction between external stimulation and ongoing brain activity in order to identify the human response to unexpected visual events. The power spectrum analyses have been carried out with a supervised machine learning technique which aims to build a reliable model that separates the data into a distinct number of classes, in order to identify the different patterns associated with the events and with the related human reactions. These approaches represent the basic steps towards integrating the human into the more complex hardware and software architecture able to support human-machine interaction in the automotive context.
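As a rough sketch of how such a supervised classifier can be trained, the snippet below fits a Gaussian (RBF) SVM, as implemented in scikit-learn, on synthetic band-power features. The feature layout (8 channels x 5 bands), the class separation, and the kernel settings are illustrative assumptions, not the exact configuration used in the experiments; a "fine" Gaussian kernel, in MATLAB terminology, corresponds to a smaller kernel scale, i.e. a larger gamma.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per EEG epoch, one column per
# (channel, band) power value; labels 0 = baseline, 1 = stimulus.
n_epochs, n_features = 200, 40  # e.g. 8 channels x 5 bands (assumed layout)
X = np.vstack([
    rng.normal(0.0, 1.0, (n_epochs // 2, n_features)),  # baseline epochs
    rng.normal(0.8, 1.0, (n_epochs // 2, n_features)),  # stimulus epochs (shifted powers)
])
y = np.repeat([0, 1], n_epochs // 2)

# Gaussian (RBF) SVM after feature standardisation; shrinking the
# kernel scale (raising gamma) yields the "fine" Gaussian variant.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

On real EEG data the class overlap is far larger than in this synthetic example, which is why feature and channel selection receive so much attention in the literature cited above.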

Methods
In the proposed work, two sets of experiments were conducted to collect and analyse the participants' EEG signals. A first set is dedicated to monitoring drivers' behaviour in a simulated virtual driving scenario to investigate the driver's reaction when external stimuli are presented. The second set of tests analyses brain wave changes in the EEG when the participant is subjected to external visual stimuli consisting of intermittent lights generated on the right and left sides of the field of view.
The main purpose is to demonstrate that the participant's brain activity may enhance the information available to the machine. The motivation is relevant to managing human-machine interactions in order to improve the overall system performance, above all in dangerous industrial situations or when critical tasks are performed, such as the automated guidance of a transport vehicle.

Data Recording by EEG-Enobio Cap
For EEG recording, an Enobio cap (Fig. 1) has been used with eight electrodes located according to the International Standard System 10/20 (Fig. 2). With our EEG cap, the brain areas are divided as follows:
Frontal lobe: channels 6, 7 and 8
Parietal lobe: channels 3, 4 and 5
Occipital lobe: channels 1 and 2
The EEG Enobio cap consists of dry electrodes which have to be applied directly to the skin of the head. The EEG cap is connected to the PC through the NECBOX, a Bluetooth device situated behind the cap. This device stores and collects the data for off-line statistical analysis.
The EEG records the brain activity, showing the areas where the electrical activities are perceived, and identifies the brain wave patterns related to different frequencies (Fig. 3). The technical specifications of the EEG Enobio cap are: bandwidth 0 to 125 Hz; sampling rate 500 SPS; resolution 24 bits (0.05 microvolts).
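For later analyses it is convenient to group channels by cortical region. A minimal sketch of the montage described above, with the channel numbers taken from the cap layout:

```python
# Channel-to-lobe mapping of the 8-electrode Enobio montage described
# above, useful for grouping band powers by cortical region.
CHANNEL_LOBES = {
    1: "occipital", 2: "occipital",
    3: "parietal", 4: "parietal", 5: "parietal",
    6: "frontal", 7: "frontal", 8: "frontal",
}

def channels_in(lobe):
    """Return the channel numbers recorded over the given lobe."""
    return sorted(ch for ch, name in CHANNEL_LOBES.items() if name == lobe)
```

For example, `channels_in("occipital")` returns channels 1 and 2, the electrodes most relevant for alpha-band and visual-stimulus analyses.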

DRIVING TEST SESSIONS
In the first set of experiments, an electroencephalogram (EEG)-based Behaviour Control System (EEG-BCS) to monitor and control human behaviour in a driving environment is presented. The proposed EEG-BCS consists of two main subsystems which interact and cooperate: the simulated driving environment subsystem and a second subsystem dedicated to EEG signal acquisition. The latter consists of the Enobio cap for EEG acquisition to monitor humans' cognitive states and emotional processes, whereas the virtual environment system provides the users with realistic driving scenarios for the test sessions.
The OKTAL driving simulator has been adopted to provide the participants with a virtual driving setting and realistic traffic conditions. Using the EEG-BCS, the following driving test sessions have been performed to detect:
DRIVER TEST 1: EEG activity variations when the driver is in situations at risk of accident generated by unexpected acoustic stimuli;
DRIVER TEST 2: driver's stress due to near-miss accidents created by emergency braking situations;
DRIVER TEST 3: correlation between the EEG signals and the presence of curves on the path during a driving session.
In the first and second driving tests, regarding unexpected events, the driver did not know what might happen during the session, as the goal of these simulations was to understand how the brain reacts to unexpected events. On the other hand, in the third test, the driver could see the road, as the goal was to understand the relationships between the curves on the path and the EEG signals.
The proposed EEG-BCS consists of different interacting components (Fig. 4):
The EEG cap with eight electrodes to record EEG signals.
The driving simulator, realized by the SCANeR Studio 1.2 software provided by OKTAL.
The data elaboration module.
The proposed EEG-BCS allows collecting the main characteristics of the EEG signals, synchronized with the specific features of the simulator module (Fig. 5). OKTAL creates realistic virtual scenarios incorporating road and setting libraries dedicated to traffic situations, weather conditions, or vehicle dynamics. The EEG-BCS can detect abnormal value trends in the brain activity while the current information about the driving session is acquired and stored to carry out the analysis.

Driving simulator scenarios
The OKTAL simulator provided the users with the virtual driving environment. Through the SCANeR Studio software, the OKTAL platform offers tools and models which create realistic virtual scenarios addressing automotive and transport simulations. The users can define the road and infrastructure characteristics, the different kinds of vehicle to be used, and the traffic, vehicle, and pedestrian behaviours. The simulator comprises a video screen, a seat, a steering wheel, and the pedals. The system can also detect the angle of the steering wheel and the pedal pressure applied by the driver. At the end of the test drive, the system collects the information about the simulation in a text file.

Experimental set up in virtual driving environment
Driving data come from three participants holding a driving licence, ranging from 29 to 50 years of age.
Driver Test 1 has been performed in a virtual driving environment, shown on the screen, while the driver was subjected to unexpected acoustic disturbances. For 15 minutes, the driver drove a car on a highway route at a speed of 80 km/h in the slowest lane while, in the adjacent lanes, other vehicles, artificially generated and managed automatically by the simulator, were travelling. Fig. 6 shows the screen available to the driver in the simulated session, during which the participant, wearing the EEG cap, drives while sitting on a common chair (Fig. 7). In each test, a high-intensity sound was randomly produced for three seconds behind the driver's head, between 5 and 10 minutes from the beginning of the test, while the driver was driving. After the external event, the driver was monitored for 5 minutes to record his performance and the possible brain activity changes in his EEG.
Driver Test 2, related to the detection of the driver's braking intention in a critical situation, is 1 minute long. The driver drove a car on a simulated urban two-lane route and the speed was held constant at 50 km/h in the slowest lane. During the simulation, a pedestrian preceded the car, walking on the right roadside. When the car was 20 m from the pedestrian, the latter unexpectedly crossed the road outside the crosswalk.
The simulation consists of evaluating the driver's behaviour when forced to immediately push the brake pedal and move the steering wheel to avoid hitting the pedestrian. The EEG signals of the driver have been monitored during the overall simulation to identify, in the frequency alterations of the waves, the reaction and the braking action in the proposed dangerous situation. The special focus of this task is detecting cerebral activity as a predictor of the driver's behavioural response.
The third simulation, Driver Test 3, the bend route test, was 6 minutes long and was performed on a figure-eight route consisting of two circular paths that intersect at a tangent point (Fig. 8). In each test, the driver drove a car around the proposed circuit at a speed of 50 km/h in the slowest lane for 5 laps. The objective of this test is to recognize, in the EEG signals, the driver's perception of the right and left curves during the driving session.
All the proposed simulations have been performed during daylight hours and with stable weather conditions.

VISUAL STIMULI TEST SESSION
The second set of experiments was carried out to detect the correlation between the EEG signal and the human reaction to visual stimuli during a common workload. The tests aimed at identifying the effects, in the frequency spectrum of the EEG, when the participant is stressed by visual disturbances produced by two intermittent lamps, located on the extreme right and left sides of the visual field, which are switched on and off randomly.
EEG recordings were then aligned to the stimulus generation during the simulation in a preliminary data elaboration phase. The light bulb system is an original system composed of (Fig. 10):
- AptoFun 2-channel 5 V relay interface board
- Raspberry Pi 3 (control unit)
- USB-TTL converter
The control unit, realized by the Raspberry Pi device, runs a Debian-based Linux operating system. The Raspberry Pi was connected via USB to a personal computer which, through a Python script, randomly managed the lamps, continuously switching them on and off.

Experimental set up in visual stress test
In the last simulation, the participant sat on a chair in front of a desk and was asked to sit still and look straight ahead. Two lamps were located at the extreme points of the desk, on the left and right sides. The simulation consisted of generating visual disturbances for the participant by switching the lamps on and off alternately and randomly. Each lighting lasted a minimum of 7 seconds, while the total experiment was 5 minutes long. The participant had to turn the head towards the lit lamp for the duration of the light and return to the baseline position when the light switched off. There was a break of about 5 seconds between consecutive lightings.
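The randomized lighting protocol above can be sketched as a simple schedule generator (a minimal illustration, not the original Raspberry Pi control script: the 7 s minimum duration, ~5 s break and 5-minute total come from the protocol, while the 12 s maximum lighting duration is an assumed value):

```python
import random

def make_lighting_schedule(total_s=300.0, min_on_s=7.0, max_on_s=12.0,
                           break_s=5.0, seed=None):
    """Generate (start_time, duration, side) lighting events.

    Each lighting lasts at least `min_on_s` seconds, consecutive
    lightings are separated by a `break_s` second break, and the
    side ('left' or 'right') is chosen at random.
    """
    rng = random.Random(seed)
    schedule, t = [], 0.0
    while True:
        duration = rng.uniform(min_on_s, max_on_s)
        if t + duration > total_s:          # would exceed the 5-minute test
            break
        schedule.append((t, duration, rng.choice(('left', 'right'))))
        t += duration + break_s
    return schedule
```

A schedule like this can then be replayed by the relay control script and logged for the later alignment of EEG recordings with stimulus onsets.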

Elaboration data module
The NIC software, provided with the Enobio system for EEG signal acquisition, visualizes the data from the EEG cap in both the time and the frequency domains for a preliminary analysis. The embedded signal-processing module makes it possible to export the EEG signals in a specific data format, so that the related analyses can be performed in Matlab.
The EEG data have been processed differently according to the test simulations: for the two driving test sessions, Driver Test 1 and Driver Test 2, an EEG power spectrum analysis in the alpha frequency band has been performed.
For the third driver test, Driver Test 3, and for the visual stimuli test, following an EEG power spectrum transformation in the alpha frequency band, a Fine Gaussian Support Vector Machine (SVM) approach has been carried out. To implement the algorithms, the overall data from the subjects' tests are used to train the model, while the set related to just one participant is used in the validation phase.
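As a rough illustration of the alpha-band power spectrum step (a pure-Python sketch using a naive DFT; the study's actual pipeline relies on Matlab and on the wavelet transform described below):

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in [f_lo, f_hi] Hz via a naive DFT.

    O(N^2), acceptable for short windows; an FFT would be used in
    practice. Only positive-frequency bins are considered.
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):              # skip DC
        f = k * fs / n
        if f_lo <= f <= f_hi:
            coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                        for i, x in enumerate(signal))
            power += abs(coeff) ** 2
    return power

def relative_alpha(signal, fs):
    """Alpha-band (8-13 Hz) power relative to total 1-30 Hz power."""
    total = band_power(signal, fs, 1.0, 30.0)
    return band_power(signal, fs, 8.0, 13.0) / total if total else 0.0
```

For example, a pure 10 Hz sinusoid yields a relative alpha power close to 1, while a 20 Hz sinusoid yields a value close to 0.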
Before applying the analysis, the EEG signal was filtered using a band-stop filter between 49 and 51 Hz to remove the power-line noise present in the laboratory. For each test, a correlation analysis between all 8 channels was performed to reduce the number of channels used in the algorithms.
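The 49-51 Hz power-line rejection can be sketched in pure Python as a second-order notch (band-stop) biquad centred at 50 Hz (the standard RBJ audio-EQ design, used here only as an illustration; the exact filter design used in the study is not specified):

```python
import math

def notch_filter(signal, fs, f0=50.0, q=25.0):
    """Apply a second-order IIR notch (band-stop) filter at f0 Hz.

    RBJ audio-EQ biquad design; q = f0 / bandwidth, so q = 25 gives
    roughly a 49-51 Hz stop band. Filtering is done in direct form I.
    """
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1.0, -2 * math.cos(w0), 1.0
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    b0, b1, b2 = b0 / a0, b1 / a0, b2 / a0   # normalize by a0
    a1, a2 = a1 / a0, a2 / a0
    out = []
    x1 = x2 = y1 = y2 = 0.0                  # filter state
    for x in signal:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out
```

After the initial transient dies out, a 50 Hz component is suppressed almost completely, while frequencies well away from the notch (e.g. the alpha band) pass with gain close to 1.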

Power spectrum analysis
In the proposed analysis, the wavelet transform is used for the spectrum analysis. The continuous wavelet transform is a powerful tool for analysing non-stationary signals. This method guarantees better performance in time-frequency analysis than the more traditional short-time Fourier transform (STFT), especially for signals whose frequency content changes rapidly. The following continuous wavelet (the bump wavelet, defined in the frequency domain) has been used:

ψ̂(sω) = e^(1 − 1/(1 − (sω − μ)²/σ²)) · 𝟙_[(μ−σ)/s, (μ+σ)/s](ω)  (1)

where 𝟙_[(μ−σ)/s, (μ+σ)/s] is the indicator function for ω, μ is a parameter defined in [3, 6], while σ lies in [0.1, 1.2]. If the value of σ is large, (1) produces a wavelet with better time localization than frequency localization; by contrast, with small values of σ, (1) produces a wavelet with better frequency localization. In our study the values of the parameters μ and σ were 5 and 0.6, respectively.
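Equation (1) is the frequency-domain definition of the bump wavelet (as used, for instance, by Matlab's cwt). A small helper evaluating it, with the study's parameter values μ = 5 and σ = 0.6 as defaults:

```python
import math

def bump_wavelet_hat(omega, s=1.0, mu=5.0, sigma=0.6):
    """Frequency-domain bump wavelet psi_hat(s*omega) of Eq. (1).

    Non-zero only on the support (mu-sigma)/s < omega < (mu+sigma)/s;
    equals 1 at the centre frequency omega = mu/s.
    """
    x = (s * omega - mu) / sigma
    if abs(x) >= 1.0:                 # outside the indicator support
        return 0.0
    return math.exp(1.0 - 1.0 / (1.0 - x * x))
```

The compact support in frequency is what makes the choice of σ a direct trade-off between time and frequency localization, as noted above.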

Support Vector Machine
In Driver Test 3 and in the visual external stimulus test, following Garcia et al. (2003), we implemented an SVM model because it represents a useful approach to classifying EEG signals corresponding to imagined motor movements.
In Driver Test 3, the class label α has been assigned to the angles, in radians, subtended by the curves of the road proposed in the simulated figure-eight scenario. The following table shows the value assignments according to the amplitude of the curve angles. To interpret the data, a classification of the possible subject's movement reactions has to be introduced also in the Visual Stimuli Test Session. Three class labels β are defined, and the values 0, 1 and 2 have been associated, respectively, with the baseline, right and left positions of the participant's head when the stimulus is applied during the simulation.
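As an illustration of the β labelling (a hypothetical helper, not the authors' code), each EEG sample can be tagged with the head position expected at its timestamp, given the recorded lighting intervals:

```python
# Label encoding for the Visual Stimuli Test Session:
# 0 = baseline, 1 = right, 2 = left head position.
BETA_LABELS = {'baseline': 0, 'right': 1, 'left': 2}

def label_samples(timestamps, events):
    """Assign a class label to each EEG sample timestamp.

    `events` is a list of (start, end, side) lighting intervals with
    side in {'right', 'left'}; samples outside every interval are
    labelled as baseline (0).
    """
    labels = []
    for t in timestamps:
        label = BETA_LABELS['baseline']
        for start, end, side in events:
            if start <= t < end:
                label = BETA_LABELS[side]
                break
        labels.append(label)
    return labels
```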

Results
The EEG signals were recorded by the Enobio cap during the overall tests. The signals have been analysed for specific electrodes according to the brain regions mainly involved in the different tasks.
For the Driver Test 1 and Driver Test 2, the alpha waves coming from the parietal lobe, with special reference to channel 5, have been examined.
For the Driver Test 3 and in the Visual Stimuli Test Session, EEG was recorded from two scalp sites, in the parietal lobe, by channel 3 and channel 4.

Driver scenario simulator
In Driver Test 1, related to the unexpected acoustic event during the driving performance, the driver was monitored for 15 minutes. At minute 9 of the simulation task, the driver was subjected to the unexpected acoustic stimulus. Fig. 13 shows the time-domain signal filtered in the alpha band for one participant.
The driver's reaction is evident in the alpha wave variation, which highlights a temporal correlation with the stimulus.
The same result is obtained by the analysis of the wavelet transform. In detail, in the relative alpha activity plotted over a 40-second time window (Fig. 14) centred at the instant of the stimulus generation, the wavelet transform highlights a significant variation in the EEG signals.
In Driver Test 2, the test on the detection of the driver's braking intention in a critical situation, the simulation is analysed over 1 minute. This time window, which represents the period in which the subject is performing the driving task, is located after the start of the simulation and contains the participant's feedback presentation. Also in this case, the results refer to the subject who reacted most strongly to the proposed test session.
The pedestrian crosses the route unexpectedly after 44 seconds from the beginning of the simulation.
The proposed event generates a variation which is evident in both the time and the frequency domains (Fig. 15), where the filtered signal reaches a peak of 55895.3 microvolts at the 45th second. In Fig. 16, the influence on the signal energy appears in the scalogram, the two-dimensional wavelet energy density function. The most intense reaction (in yellow) appears some instants after the critical event.
In Driver Test 3, the set of samples coming from all three participants has been processed together. The data set obtained from the experiments consisted of 732,503 samples. In the training phase, the SVM classifies the input samples according to the class labels. The SVM classifier, generated from the training data, is then applied to the validation data set in order to predict which class each sample belongs to.
The Gaussian Kernel Function, with scale parameter equal to 0.43, has been applied for pattern recognition.
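With Matlab's Fine Gaussian SVM, the kernel scale s enters as K(x, z) = exp(−‖x − z‖² / s²) (our reading of the fitcsvm KernelScale convention); a minimal sketch with the study's value s = 0.43:

```python
import math

def gaussian_kernel(x, z, kernel_scale=0.43):
    """Gaussian (RBF) kernel with the Matlab-style KernelScale
    convention: K(x, z) = exp(-||x - z||^2 / scale^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq_dist / kernel_scale ** 2)
```

A small scale such as 0.43 makes the kernel decay quickly with distance, which is what gives the "Fine Gaussian" preset its finely detailed decision boundaries.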
A more detailed analysis may be presented by the confusion matrix, which indicates the user's accuracy, that is, the probability that a sample assigned by the model to a class actually belongs to that class. Each column of matrix A, at the top of Fig. 17, represents the predicted values, while each row represents the real values. The element in row i and column j represents the number of cases in which the classifier has associated the true class i with class j. The samples that have been correctly classified lie on the diagonal of matrix A. Matrix B, at the bottom of Fig. 17, includes the positive and the false predictive values for the three classes. The probabilities of obtaining a positive classification for the classes (0, 1, 2) are respectively 51.1%, 39% and 33.5%.
To identify the Producer Accuracy, that is, the classification quality of the proposed model, Fig. 18 displays the confusion matrix related to the validation data: the 3x2 matrix C, in the last two columns of the figure, indicates, respectively, the true positive and false positive rates for the classifier. The acronyms PPV and FPV indicate, respectively, the Positive Predictive Value and the False Predictive Value.
The Producer Accuracy represents the detection rate of the SVM, that is, the effectiveness of the proposed method. In the presented experiments, the true positive detection rate (TPR) is the probability of identifying the correct movement of the tester when performing right/left curves by car in the simulated scenario. In particular, there is a 56.3% probability of detecting the driver's intention to turn the steering wheel when performing curves with an angle < -0.8 rad, and more than 61% of detecting a curvilinear path. Besides, the feasibility of the proposed method is supported by a weak but statistically significant correlation, R = 0.1824, between predicted and observed data (p-value = 0).
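The user's and producer's accuracies read off Figs. 17-18 can be computed from any confusion matrix as the per-class PPV and TPR (a generic sketch, with rows as true classes and columns as predicted classes, as in matrix A):

```python
def ppv_and_tpr(confusion):
    """Per-class Positive Predictive Value (user's accuracy) and
    True Positive Rate (producer's accuracy) from a square confusion
    matrix with rows = true classes and columns = predicted classes."""
    n = len(confusion)
    ppv, tpr = [], []
    for c in range(n):
        col_sum = sum(confusion[r][c] for r in range(n))   # predicted as c
        row_sum = sum(confusion[c][r] for r in range(n))   # truly c
        ppv.append(confusion[c][c] / col_sum if col_sum else 0.0)
        tpr.append(confusion[c][c] / row_sum if row_sum else 0.0)
    return ppv, tpr
```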

Visual external stimulus test
The visual external stimulus test, as in the previous case, involved three participants. Fig. 19 shows the EEG signal for participant 1 on channel 3, filtered in the alpha band, with respect to the lighting stimulus generation.
In this fourth test, the aim was to identify a correlation between the EEG signal and the detection of the participant's head movements toward the left or right, triggered by the left or right light switching on, respectively.
As expected from the literature (Piroska et al. 2018), the alpha band achieved the best discrimination in the specific movement detection. In Fig. 19, the alpha phase, which identifies the subject's head movements, seems to be more sensitive when the light was flashed on opposite sides consecutively than during continuous stimulation on the same side. Also in this case, a more detailed analysis has been performed with a Fine Gaussian SVM model. A kernel function with a scale parameter equal to 0.43 has been applied to the trials to classify the time-based features. Figures 20 and 21 present the results for the evaluation phase. From Figure 20, which displays the confusion matrix associated with the user's accuracy, it is possible to read that the probabilities of obtaining positive classifications for the classes (0, 1, 2) are respectively 72.6%, 89.3% and 79.9%. In Figure 21, the detection rates related to the producer accuracy are 95.9% for class "0", 57.0% for class "1" and 31.8% for class "2". It is evident that only the baseline position of the tester can be detected by the classifier with a relevant probability. This is due to the fact that the number of samples related to the baseline position is larger than the number of samples of the other two classes. Unfortunately, this depends on the design of the experiments, in which the participant had to return to the baseline position after the detection of each stimulus.

Conclusion
Despite the relevant amount of work on the subject, using EEG data to employ the human as a sensor is still a challenging task. From our experiments, which were simple and non-invasive, as required by a work environment, the results show that electrical brain activity can be weakly correlated with external events in a realistic HaaS context.
Specifically, we have shown that, while general alert signals can be efficiently detected by EEG and thus related to behaviour at risk, more complex tasks, such as the recognition of right/left turning commands while driving, can be only weakly related to EEG. Besides, in this paper we have applied just standard machine learning techniques, as available in common tools such as Matlab, and further investigation should be devoted to modifying and adapting such algorithms more specifically to understand human behaviour and feelings from EEG.
Moreover, several testing conditions could be enhanced, such as using sensors with gel to improve signal acquisition, removing every source of electrical noise in the environment, limiting the movements of the human, etc. At the same time, such better conditions can hardly be achieved in an industry or in a dynamic working environment such as driving. In addition, there is still a quite wide set of approaches in which several, often overfitting, variables can be observed and analysed, from the different channels to the different traditional and innovative analysis techniques. Despite such considerations, our experiments have shown some positive evidence of the possibility of using the human in the loop of a control process in a non-invasive, noisy working environment. "Fear" and probably other emotions can be detected. The hard task is now to find the right approach to enhance them.

Declarations
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.