Enhance embodiment of a virtual prosthesis through a training protocol using an EMG-based human-machine interface: a case series

Background: The embodiment of a prosthesis can bring a series of benefits during the rehabilitation of people with amputation, such as improved motor control and sense of agency, in addition to optimizing the training process with the prosthetic limb. New therapeutic strategies capable of enhancing prosthesis embodiment are therefore a key point for better adaptation to and acceptance of prosthesis use. In this study, we developed a system and a new rehabilitation protocol using an EMG-based human-machine interface (HMI) to induce and enhance the embodiment of a virtual prosthesis. Methods: This is a case series with seven people of both sexes with traumatic unilateral transfemoral amputation and no previous use of prostheses. Participants performed a training protocol with the EMG-based HMI during the preprosthetic rehabilitation phase, composed of six sessions held twice a week, each lasting thirty minutes. The system provided myoelectric control of the movements of a virtual prosthesis immersed in a 3D virtual environment. Additionally, vibrotactile stimuli corresponding to the movements performed were provided on the participant's back. The objectives were to evaluate virtual prosthesis embodiment, to investigate motor learning during training with the EMG-based HMI, and to determine whether vibrotactile stimuli could facilitate the perception of virtual limb movements. Embodiment was investigated through a set of physiological and behavioral measurements and reports collected before and after the training. Motor learning was assessed through performance analysis. To investigate the use of vibrotactile stimulation to guide virtual prosthesis movements, performance was assessed during a virtual prosthesis control test without the aid of vision. Results and conclusions: The different features evaluated throughout the training protocol consistently showed the induction and enhancement of virtual prosthesis embodiment and increased motor control.
Therefore, this protocol using an EMG-based HMI was shown to be a viable option to achieve the embodiment of a virtual prosthetic limb and to train motor control. Furthermore, the participants were able to guide the prosthesis based on vibrotactile stimuli, showing that this method can serve as an alternative sensory pathway to be implemented in new therapeutic strategies and neuroprostheses to facilitate movement perception of a prosthetic limb.


Introduction
The incidence of lower limb amputations around the world varies between 2.8 and 43.9 per 100,000 population/year [1]. The indication for prosthesis use after an amputation procedure aims to recover functionality and autonomy, providing a better quality of life [2]. However, even with multidisciplinary interventions and technological advances in the development of prostheses, a considerable portion of patients do not fully adapt to the use of a prosthetic limb [3][4][5][6]. Different factors may contribute to this nonadaptation associated with prosthesis use, including aesthetic, ergonomic, functional, psychological, and cognitive aspects [5][6][7].
Therefore, searching for therapeutic strategies and approaches that support the process of adapting to and accepting a prosthetic limb is particularly important. Recent research has pointed out that a key point during this process is prosthesis embodiment, which can bring a series of benefits, such as facilitation of motor learning and control, as well as of adaptation to and acceptance of prosthetic limb use [8][9]. The embodiment of an external object can be defined as "the ability to process properties of this object at the sensory, motor and/or affective levels in the same way as the properties of one's own body parts" [9][10]. Several studies have corroborated this concept, showing that people with amputation can have a better perception of the prosthesis when it is voluntarily controlled and/or provides somatosensory feedback [8][11][12][13].
Based on the premises of voluntary control and somatosensory feedback, EMG-based human-machine interface (HMI) training provides a real-time paradigm for the study of the embodiment of an assistive device. Additionally, it can be used as a complementary therapeutic option to conventional treatments [14]. In this study, a new training protocol was developed using an EMG-based HMI applied during the preprosthetic rehabilitation phase of people with transfemoral amputation. The EMG-based HMI was designed in a way that the participants could control the movements of a prosthesis immersed in a virtual reality (VR) environment using the myoelectric activity of the stump while receiving noninvasive vibrotactile stimuli applied on their back, which were mapped to represent the movements of the virtual prosthesis.
This system and training protocol build on prior findings in the literature on myoelectric control [15][16], VR environments [17][18] and vibrotactile stimulation [19][20][21], but these elements had not yet been applied and integrated in the clinical context during the rehabilitation of people with amputation.
Prosthetic myoelectric control has been widely explored in research and clinical environments [15][16][22]. Some studies have also used myoelectric control of virtual prostheses, mainly in the pretraining phase before the use of physical prostheses [17,23]. However, in general, these studies do not use an immersive virtual environment and focus on control conditions rather than on the closed loop between control and feedback, as we propose in this work.
Training protocols using visual feedback in immersive VR environments have shown promising effects in a variety of clinical contexts [17][18][24][25] and in the induction of the embodiment of a body, limb or virtual object [26][27][28][29]. Because the learning acquired in a virtual environment is transferable to the physical environment [17][18][25], it can also optimize adaptation to the use of a physical prosthesis.
Although vibrotactile stimulation is often used as a way of providing feedback on prosthetic limbs, in general it is used to represent tactile information [19][20][21]; here, we propose vibrotactile stimulation to represent movement [29]. One of the great challenges in the rehabilitation of people with amputation is how to restore lost sensory information [11]. Most current lower limb prostheses do not provide sensory feedback, which makes the user largely dependent on vision to determine the prosthetic limb position and its interaction with the environment. Furthermore, reestablishing proprioceptive sensory information related to movement perception is crucial for the development of embodiment [30][31] and the improvement of motor control [30,32].
The hypothesis was that training with this EMG-based HMI, combining voluntary control with immersive visual and vibrotactile feedback, could induce virtual prosthesis embodiment and train the stump muscles involved in virtual limb control. In addition, we expected that vibrotactile stimuli would facilitate the perception of virtual prosthesis movements without the aid of vision.
To evaluate these hypotheses, we analyzed a series of physiological and behavioral responses and reports associated with the embodiment of a virtual prosthesis throughout the training protocol. In addition, we assessed each individual's performance during the execution of motor tasks throughout the training with the EMG-based HMI. Finally, we systematically explored whether vibrotactile stimulation can be used as an alternative sensory pathway to map the movements of the virtual prosthesis.

Experimental design
This is a case series study that assessed a) as a primary outcome, the induction of embodiment of a virtual prosthesis through a training protocol with an EMG-based HMI and b) as secondary outcomes, the ability to control the virtual prosthesis during training with EMG-based HMI and the use of vibrotactile stimuli in the perception of virtual limb movements.

Participants
For the inclusion of participants in the research, the following criteria were adopted: people with unilateral transfemoral amputation, both sexes, age between 18 and 59 years, and no previous use of prostheses. People who had open skin lesions on the stump or back, uncorrected visual impairment or associated neurological diseases were excluded from participation in the study.
The demographic, physical, cognitive, and psychological assessments of all participants are listed in Table 1 and Additional file 1. The participants provided written consent prior to the start of the study, and all ethical recommendations and regulations were followed.

Table 1 footnotes: [34][35]. *3 Measurement made using a digital dynamometer; the point of force application was the midpoint of the stump length. Three isometric contractions were performed for each muscle group, and the mean peak strength was calculated over the last 5 s of contraction [36]. *4 The Amputee Mobility Predictor No Prosthesis (AMPnoPRO) assesses mobility aspects of amputees and predicts functional levels related to the use of prostheses [37]. *5 The International Physical Activity Questionnaire - short version (IPAQ) was used to assess the level of physical activity [38]. *6 The Montreal Cognitive Assessment (MoCA) was used to assess cognitive functions [39]. *7 The Hospital Anxiety and Depression (HAD) Scale was used to assess levels of anxiety and depression [40]. ** For participant 'D', it was not possible to assess the strength of the adductor muscles due to the small size of the stump.

EMG-based human-machine interface
The EMG-based HMI was designed to work using the electrophysiological activity of the muscles on the stump. Through real-time recording and processing of this activity, the participants were able to control the knee movements of a virtual prosthesis while receiving patterns of vibrotactile stimulation on their back representing the current position of the virtual prosthesis (Figure 1).

Recording of muscle activity. The activity of the hip flexor and extensor muscles on the stump was recorded using surface electromyography (EMG) (Figure 1-A.1). Due to the surgical procedure and muscle reinsertion, we were unable to say precisely which muscle was being recorded. Therefore, we established the criterion of using one hip flexor and one hip extensor muscle, since, under normal conditions, these muscles are also recruited during knee extension and knee flexion, respectively [41].
Electrode placement for each muscle and participant was determined by applying excitomotor electrical current stimulation and visualizing the muscle contraction response. These positions were mapped for each person and used in all training sessions. Two channels of an Intan Technologies® chip were used to amplify the electrophysiological signals, and the chip was connected to the OpenEphys® analog-digital converter board in communication with its software [42][43]. The electrophysiological signals were sampled at a rate of 10 kHz.
Real-time processing. Data were processed using MATLAB® (R2017b). For real-time control, every 200 ms the EMG signal in each channel was loaded in blocks of 5120 samples, resulting in a ~60% overlap with the previous window [44]. The samples were then filtered with a twentieth-order IIR bandpass filter in the frequency range from 10 to 500 Hz and notch-filtered at 60 (±2) Hz and its harmonics [45]. Finally, the EMG signal in each window was resampled to 2 kHz, and its root mean square (RMS) was calculated to estimate the muscle contraction level [46].
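As a rough sketch of this windowing pipeline (in Python rather than the authors' MATLAB routine; the 4th-order Butterworth and the single 60 Hz notch are simplifications of the twentieth-order IIR filter and harmonic notches described above):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, resample

FS = 10_000            # EMG sampling rate (Hz)
BLOCK = 5120           # samples per processing window
STEP = int(0.2 * FS)   # 2000 new samples every 200 ms -> ~60% overlap

def preprocess(window, fs=FS):
    """Band-pass 10-500 Hz, notch at 60 Hz, then resample to 2 kHz.
    Filter orders here are illustrative, not the paper's exact design."""
    b, a = butter(4, [10, 500], btype="bandpass", fs=fs)
    y = filtfilt(b, a, window)
    bn, an = iirnotch(60.0, Q=30.0, fs=fs)
    y = filtfilt(bn, an, y)
    return resample(y, int(len(y) * 2000 / fs))

def rms(window):
    """Root mean square as an estimate of the muscle contraction level."""
    return float(np.sqrt(np.mean(np.square(window))))
```

With these parameters, each 5120-sample window shares 3120 samples with its predecessor, matching the ~60% overlap stated in the text.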
To control feedback, two criteria needed to be satisfied: a) Agonist muscle activation threshold. The RMS of the agonist muscle signal had to exceed the baseline signal by more than 2 SD for the system to recognize the direction of movement (knee extension or flexion). b) Relationship of activation between agonist and antagonist muscles. Initially, the RMS of the antagonist muscle could not exceed 80% of that of the agonist (this variable was also used as a criterion for progression in difficulty levels during training). The RMS of each muscle was normalized by its maximum voluntary isometric contraction (MVIC) [46], which was considered the maximum activity. Therefore, the greater the contraction force exerted by the agonist muscle, the faster the movement of the virtual prosthesis.
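The two control criteria can be expressed as a small decision function. This is a hypothetical Python reading: the threshold is taken as baseline mean plus 2 SD, and the muscle-to-movement mapping follows the convention stated earlier (hip flexor drives knee extension, hip extensor drives knee flexion).

```python
def classify_command(rms_hip_flexor, rms_hip_extensor,
                     baseline_mean, baseline_sd,
                     antagonist_tolerance=0.80):
    """Decide the virtual-knee command from normalized RMS values (sketch).
    Criterion a): agonist RMS must exceed baseline mean + 2 SD.
    Criterion b): antagonist RMS must stay within the tolerance fraction
    of the agonist (initially 80%). Returns 'extend', 'flex', or 'rest'."""
    threshold = baseline_mean + 2 * baseline_sd
    if (rms_hip_flexor > threshold
            and rms_hip_extensor <= antagonist_tolerance * rms_hip_flexor):
        return "extend"   # hip flexor -> knee extension
    if (rms_hip_extensor > threshold
            and rms_hip_flexor <= antagonist_tolerance * rms_hip_extensor):
        return "flex"     # hip extensor -> knee flexion
    return "rest"
```

In the real system the agonist RMS (relative to MVIC) would additionally scale the prosthesis velocity; that scaling is omitted here for brevity.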
Virtual reality environment. The virtual environment was developed on the Unity3D® platform (2018.4). The environment was conceived to simulate a regular clinical room where the users would see themselves as a humanoid avatar using a transfemoral prosthesis in the corresponding lower limb. The subjects were able to control the knee extension and flexion movements of the prosthetic limb within a range between 0° and 90° (Figure 1-B.1). Moreover, the virtual environment was designed to enable gamification of the protocol with different stages and motivational messages to reinforce learning. The participants accessed the virtual environment using a Samsung® Odyssey Oculus Head-Mounted Display (HMD) that provided a first-person view in a fixed sitting position [26][27] and the ability to visually explore the whole 3D virtual environment.
Vibrotactile stimulation device. A total of 16 vibrotactile actuators (10 mm x 6 mm; 5 V-DC) were assembled in a 4x4 matrix and positioned on the subject's back [47], with an average distance between adjacent actuators.

Figure 1 (caption, continued). A.2) Schematic diagram of the real-time processing of electromyographic activity and RMS calculation to estimate the level of muscle contraction. The RMS was normalized by the MVIC of each muscle. Regarding recognition of the movement direction, the activity of the agonist muscle had to be twice as high as the average of the baseline signal, and the antagonist muscle could not exceed a threshold relative to the agonist, which was initially set at 80%. The recognized EMG patterns were mapped into visual and vibrotactile feedback. B) Feedback. B.1) Visual feedback. The avatar modeled with a transfemoral prosthesis and the first-person perspective are shown. The range of motion available to the prosthetic knee was set between 0° and 90°. B.2) Vibrotactile feedback scheme. The vibrotactile actuators on the participant's back were organized in a 4x4 matrix. The vibratory stimuli were associated with the movements of the virtual prosthesis: upward vibration during knee extension and downward vibration during knee flexion. The vibratory intensity peak of a given row corresponded to a specific angle of knee movement (row A, 0°; B, 30°; C, 60°; and D, 90°), with an overlap of 30° between adjacent rows.
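The row-to-angle mapping with 30° overlap can be sketched as a triangular weighting. The peak angles and the 30° overlap come from the text; the linear fade between neighbouring rows is an assumption, since the paper does not specify the intensity profile.

```python
# Peak angle (deg) of each actuator row on the back, per the text.
ROW_ANGLES = {"A": 0, "B": 30, "C": 60, "D": 90}

def row_intensities(knee_angle):
    """Map a knee angle (0-90 deg) to per-row vibration intensities in
    [0, 1]. Each row peaks at its own angle and fades linearly to zero
    over the 30 deg overlap with its neighbours (triangular weighting,
    an illustrative assumption)."""
    return {row: max(0.0, 1.0 - abs(knee_angle - a) / 30.0)
            for row, a in ROW_ANGLES.items()}
```

For example, a knee angle of 45° would activate rows B and C at half intensity, so the perceived vibration "travels" up the back during knee extension.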

Training protocol
Each session consisted of the following steps: a) the bipolar surface electrodes (Ag/AgCl) were positioned for EMG recording on the stump hip flexor and extensor muscles (the reference electrode was positioned on the tibial tuberosity of the opposite limb) after skin preparation with local trichotomy and asepsis with 70% alcohol to decrease the local impedance [49]; b) the participants were positioned in the chair, where they remained throughout the whole training protocol; c) the electrodes were placed for electroencephalographic (EEG) recording (the reference electrode was positioned on the mastoid process on the right side) [50][51]; d) the vibrotactile actuators were positioned on the subject's back; e) hearing protectors were used to minimize noise interference from the external environment, including the noise produced by the vibrotactile actuators; and f) the HMI was calibrated by recording the basal EMG activity and the MVIC of the muscles involved in controlling the knee movements of the virtual prosthesis (Figure 2-A).
Two preliminary sessions were conducted prior to the start of the training protocol to familiarize participants with the EMG-based HMI. In these sessions, the participants learned to associate the stump muscular contraction with virtual prosthesis movements (for details, see Additional file 3). After this stage, the training was based on an operant conditioning paradigm, in which there was a progressive increase in the difficulty of the tasks with contingent feedback and rewards to reinforce learning. Overall, contingent feedback itself has a positively reinforcing effect, but this was supplemented with motivational messages, such as "congratulations", at the end of each task block [48,52].
In total, six training sessions lasting 30 minutes each and consisting of tasks involving motor control were conducted twice a week. For each task, the participants moved the virtual prosthesis until they reached a specific predetermined position set at four target angles: 0°, 30°, 60° or 90° (a combination of angles with targets at 0°, 45° and 90° was also used as a preliminary stage for each new level of difficulty). To guide the movements in real time, the participants were presented with a visual cue (semicircular ruler) indicating the position to which they should move the virtual prosthesis (Figure 2-B).
The following criteria were adopted to increase the task difficulty: a) Tolerance of antagonist muscle contraction. Antagonist muscle activation of up to 80% relative to the agonist was initially allowed, and this tolerance decreased by 10% at each new difficulty level. b) Precision of movement. For a task to be considered correctly performed, a range of positions relative to the target angle was adopted; this range narrowed across difficulty levels: 15°, 10° and 5°. Therefore, initially there was no need for refined muscle control (regarding the isolation of agonist muscle contraction) or movement precision; however, both became necessary as the difficulty gradually increased (Figure 2-C).
In this manner, given a particular difficulty combination (tolerance of antagonist muscle contraction and precision of movement), the participants performed a preliminary block and then a task block composed of a set of target angles, 0°, 30°, 60° and 90° (each presented randomly four times), for a total of sixteen tasks per block. Each task ended after 20 s or once the target angle was reached, and the next task was then presented (if the participant did not reach the target within 20 s, the task was recorded as a failure, although the participant was not informed of this). Performance was assessed at the end of the task block, and the difficulty was increased if the participant had a success rate of 75% or more; otherwise, the same difficulty combination was performed again.

Figure 2 (caption). A.1) Calibration. To record the MVIC, three repetitions were performed for each muscle, with 6 s and 45 s of rest between each repetition. A.2) Test to verify the proper functioning of the system with visual feedback on the screen. The contraction level of each muscle is visualized while performing knee extension and knee flexion of the virtual prosthesis (represented by the red bar: rising bar, knee extension; descending bar, knee flexion). The blue and green horizontal bars are the thresholds for the system to recognize the signal as a voluntary contraction. B) Training protocol diagram. Feedback within the virtual environment consisted of visual cues indicating the target angles that the participants had to reproduce. The target angles used were 0°, 30°, 60° and 90°. Each angle was randomly presented four times during each task block (the participant had 20 s to reach each target angle). In addition to visual feedback, the participants received concomitant vibrotactile feedback on their back. The training sessions lasted thirty minutes, and within that time, as many task blocks as possible were performed. C) Difficulty progression. Two criteria were adopted to increase the difficulty: i) Tolerance of antagonist muscle contraction. Initially, the antagonist muscle could reach up to 80% of the activation of the agonist muscle. The tolerance decreased progressively by 10% at each new difficulty level (the lower the tolerance, the greater the need to isolate the agonist muscle contraction). ii) Precision of movement. To evaluate whether a target angle had been reached, different ranges of prosthesis position relative to the target angle were adopted (15°, 10°, and 5°: the lower the range, the greater the necessary precision of movement). For a given antagonist tolerance, the different precision difficulties were progressively combined. If the participant had a success rate ≥ 75% on a task block with a certain combination of difficulties, the next block instituted a new combination of difficulties.
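The block structure and progression rule described above can be sketched as follows. The order in which precision tightens before the antagonist tolerance drops is our reading of the text, so treat this as an illustrative sketch rather than the authors' exact scheduling code.

```python
import random

TARGET_ANGLES = [0, 30, 60, 90]
PRECISIONS = (15, 10, 5)   # allowed deviation from the target angle (deg)

def make_task_block(rng=None):
    """One task block: each target angle presented four times in random
    order, for a total of sixteen tasks."""
    rng = rng or random.Random()
    block = TARGET_ANGLES * 4
    rng.shuffle(block)
    return block

def next_difficulty(success_rate, tolerance, precision):
    """Progression rule (sketch): repeat the same combination unless the
    success rate is >= 75%; otherwise tighten precision (15 -> 10 -> 5 deg),
    and once precision is exhausted, lower the antagonist tolerance by 10
    percentage points and reset precision."""
    if success_rate < 0.75:
        return tolerance, precision          # repeat the same combination
    i = PRECISIONS.index(precision)
    if i + 1 < len(PRECISIONS):
        return tolerance, PRECISIONS[i + 1]
    return round(tolerance - 0.10, 2), PRECISIONS[0]
```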

Embodiment and behavioral assessment
We assessed a set of measurements at the beginning and end of the experimental protocol to examine the induction of virtual prosthesis embodiment (Figure 3). This test set was selected based on affective, spatial perception, and motor mechanisms, three features proposed by De Vignemont (2011) as underlying the development of object embodiment: a) Affective measurement - the skin conductance response (SCR) was used to detect inherent physiological responses when the virtual prosthesis was threatened [53][54][55]. b) Spatial perception measurement - a crossmodal congruency task (CCT) was used to identify visuotactile interference in the peripersonal space [28][29][56]. c) Motor measurement - the participants' performances during the training were used to assess their ability to control the virtual prosthesis [10]. This analysis also contemplates the secondary objective of investigating motor control ability throughout the training related to the learning process [57]. d) Quantification of the sense of ownership and agency with regard to the virtual prosthesis through a numerical scale to assess body self-perception [54]. e) Recording of resting-state EEG activity to evaluate eventual functional changes expressed through electrophysiological brain patterns [58].

Figure 3 (caption). A) Affective measurement. For the measurement of the skin conductance response, two surface electrodes were placed on the intermediate phalanges of the second and third fingers of the left hand, and the SCR was recorded when a chandelier dropped on the virtual prosthesis, representing a threatening stimulus. B) Spatial perception measurement. Crossmodal congruency task. The protocol consisted of visualizing the prosthesis movements with (VR + VT) or without (VR only) concomitant vibrotactile stimulation and subsequently performing the CCT. During the CCT, visual stimuli were applied within the VR environment close to the avatar's feet (near the hallux or heel); soon after the appearance of the visual distractor, a vibratory stimulus was applied on the participant's back (thoracic or lumbar). The CCT comprised sixteen different combinations of visual and vibrotactile stimuli, each presented four times at random, for a total of sixty-four trials. The participants were instructed to press a button corresponding to the location on their back where they received the vibratory stimulation as quickly as possible while ignoring the visual distractor. C) Performance (success rate and execution time) during the different levels of difficulty of the training with the EMG-based human-machine interface. D) Self-perception. Quantification of the sense of ownership and agency with regard to the virtual prosthesis through a numerical scale. E) Resting-state EEG activity. Thirty-two surface electrodes were placed on the participants' scalps to record resting-state activity for two minutes with eyes open at the beginning of each session.
Furthermore, at the end of the training protocol, we applied a test based on the same protocol as the training but without visual stimuli to assess the perception of virtual prosthesis movements from vibrotactile stimuli. During all the assessments described above, the participants were set up as shown in Figure 1.
Affective measurement. The SCR is a physiological measure that allows indirect assessment of the activity of the sympathetic autonomic nervous system related to changes in emotional state during specific events [53,59]. SCR acquisition was accomplished using the e-Health® (2.0) system coupled to an Arduino Uno®, with a sampling rate of 20 Hz. The SCR was recorded at the initial session and at the penultimate training session; for this, surface electrodes (Ag/AgCl) were placed on the intermediate phalanges of the second and third fingers of the left hand [19]. This recording was made for 2 minutes before and during the simulation of a threat - a chandelier falling on the virtual prosthesis [60] (Figure 3-A). At the beginning of the training sessions, all participants watched a video showing the fall of the chandelier on the virtual prosthesis, and they were informed that at some point during the sessions the same event could occur, thereby minimizing the effects of surprise on the measurements [55]. The participants did not know on which day this test would be conducted. Finally, the magnitude of the SCR was analyzed [59].

Spatial perception measurement. Each combination of visual and vibrotactile stimuli was randomly presented four times for a total of 64 repetitions in each task block. A visual distractor was presented and followed 100 ms later by a vibrating stimulation lasting 350 ms. The participants were then instructed to press a button based on the place on their back where they had received the vibratory stimulation while ignoring the visual distractor. They had two options: upper (thoracic) or lower (lumbar). If the participant did not press the button within 2 s, the next combination was presented [29] (Figure 3-B).
In summary, the CCT protocol consisted of observing the virtual prosthesis performing knee flexion and extension movements (at an angular speed of 45°/s for 1 minute) with or without concomitant vibratory stimulation related to the virtual prosthesis movements. The sequence of these observation conditions was random, and the CCT task block was performed after each paradigm. All participants previously underwent training and started this task only after reaching an accuracy of 85% in localizing the vibratory stimulus [29]. The crossmodal congruency effect (CCE) was then calculated as the difference in reaction time between incongruent conditions (for instance, a visual distractor localized on the upper part of the foot with vibratory stimulation in the lumbar region) and congruent conditions (for instance, a visual distractor localized on the upper part of the foot with vibratory stimulation in the thoracic region) [29,61].
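The CCE computation reduces to a difference of mean reaction times; a minimal sketch:

```python
import numpy as np

def crossmodal_congruency_effect(rt_incongruent, rt_congruent):
    """CCE (in the units of the reaction times, e.g. ms): mean reaction
    time on incongruent trials minus mean reaction time on congruent
    trials. A larger CCE indicates stronger visuotactile interference
    in peripersonal space."""
    return float(np.mean(rt_incongruent) - np.mean(rt_congruent))
```

Computed separately for the VR + VT and VR-only observation conditions, a larger CCE after VR + VT would indicate stronger binding between the visual events near the avatar's feet and the tactile stimuli on the back.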
Motor measurement. The execution time and success rate in tasks during training with the EMG-based HMI system were analyzed while considering the different levels of task difficulty: the tolerance for antagonist muscle contraction and the precision of movement.
Self-perception. The sense of ownership can be defined as our ability to perceive our own body and to differentiate it from other bodies or objects using sensory information [62]. The sense of agency, in contrast, is related to the perception of control of one's own body movements and distinguishing our actions from those of other people or objects [63]. The participants quantified on a scale from 0 to 10, where 0 indicated "nothing" and 10 indicated "totally", how much they felt the virtual prosthesis was part of their own body and how much they felt that they could control it [54].
Resting-state EEG activity. EEG activity was noninvasively recorded during all training sessions. An Intan Technologies® chip was used to amplify the electrophysiological recordings and was connected to an OpenEphys® analog-digital converter board in communication with the OpenEphys® software [42][43]. The sampling rate was set at 10 kHz. Thirty-two electrodes were positioned on the participants' scalps following the recommendations of the International 10-20 System.

Perception of virtual prosthesis movements using vibrotactile stimuli. A test with an arrangement similar to that of the EMG-based HMI training sessions was performed. In this test, the participants were asked to accomplish the same tasks by voluntarily controlling the virtual prosthesis and reaching the target angles, but now using the vibratory stimulation patterns as the guide, without the aid of vision.
For each task, a visual cue associated with the target angle was presented to the participant; subsequently, the screen of the HMD was turned off, and the participant had to use the vibratory stimuli on the back to guide their movements and reach the target angle. The target angles were 0°, 30°, 60° and 90°, and each angle was presented four times at random during each task block, for a total of 16 tasks per block. The same criteria for movement precision were adopted (15°, 10° and 5° variation in position relative to the target angle). Four task blocks were performed for each of the 3 levels of difficulty associated with the precision of movements (Figure 3-C). Finally, the performance variables for this test, execution time and success rate, were analyzed.

Electrophysiological signal processing
Regarding the SCR signal analysis, the following steps were performed: a) smoothing the original signal x(t) by averaging it over a 3-s sliding window with 50% overlap along the whole signal, producing a signal x'(t); b) calculating the phasic signal as the difference y(t) = x(t) − x'(t); and c) applying a logarithmic scale to the magnitude of the signals, considering the 3 s of signal before and the 3 s after the application of the visual stimulus (i.e., the moment when the chandelier enters the visual field of the participant within the VR environment) [59]. The SCR signals from participant "B" were excluded from the analysis due to noise issues during recording.
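Steps a) and b) can be sketched as follows. Interpolating the 50%-overlap window means back onto the original sampling grid is an implementation assumption; the paper only states the window length and overlap.

```python
import numpy as np

FS_SCR = 20  # skin conductance sampling rate (Hz)

def tonic_component(x, fs=FS_SCR, win_s=3.0):
    """x'(t): moving average over 3 s windows with 50% overlap,
    interpolated back to the original length (assumption)."""
    x = np.asarray(x, dtype=float)
    n = int(win_s * fs)
    hop = n // 2                                  # 50% overlap
    starts = list(range(0, max(len(x) - n + 1, 1), hop))
    centers = [s + n / 2 for s in starts]
    means = [x[s:s + n].mean() for s in starts]
    return np.interp(np.arange(len(x)), centers, means)

def phasic_component(x, fs=FS_SCR):
    """y(t) = x(t) - x'(t): raw signal minus its smoothed (tonic) part."""
    return np.asarray(x, dtype=float) - tonic_component(x, fs)
```

A transient conductance rise (the SCR proper) survives in y(t), while the slow tonic drift is removed by the subtraction.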
We developed a MATLAB® (R2017b) routine, interfaced with the EEGLab toolbox, to process the EEG signals. First, signal preprocessing was performed, which consisted of a) resampling the signal from 10 kHz to 1 kHz; b) filtering with an FIR bandpass filter covering only the range between 4 Hz and 45 Hz [64]; c) visually inspecting the signals from each channel over time and excluding all temporal periods with evident noise (i.e., body and eye movement artifacts); in this step, the FC4 channel was removed from the study due to high-amplitude noise throughout the measurements; and d) applying common average reference (CAR) spatial filtering to remove other noise common across channels. The filtered signals were evaluated in the frequency domain by calculating the power spectral density (PSD) using an adapted version of Welch's technique with 0.5-s windows and 50% overlap [65].
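The Welch PSD step might look like this in Python (scipy in place of the authors' MATLAB implementation; the band-power helper and the default alpha band are our additions for illustration):

```python
import numpy as np
from scipy.signal import welch

FS_EEG = 1000  # Hz, after resampling from 10 kHz

def band_power(signal, fs=FS_EEG, band=(8, 13)):
    """PSD via Welch's method with 0.5 s windows and 50% overlap,
    summed over the requested frequency band (alpha by default)."""
    nperseg = int(0.5 * fs)
    f, pxx = welch(signal, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    mask = (f >= band[0]) & (f <= band[1])
    return float(np.sum(pxx[mask]))
```

With 0.5-s windows the frequency resolution is 2 Hz, which is adequate to separate the theta, alpha, beta, and gamma bands examined in the Results.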

Statistical analysis
The analyses of the data and electrophysiological signals were performed in MATLAB® (R2017b) using its statistical toolbox and algorithms. Initially, the Kolmogorov-Smirnov (KS) normality test was applied to evaluate the distribution of the data, and parametric or nonparametric hypothesis tests were chosen accordingly [66]. Differences were considered significant when p < α, with α = 0.05.
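The normality-gated choice of test can be sketched as follows (a Python stand-in for the MATLAB toolbox calls; the specific paired t-test/Wilcoxon pairing is illustrative, as the paper does not enumerate which tests followed each KS result):

```python
import numpy as np
from scipy import stats

def compare_paired(pre, post, alpha=0.05):
    """Choose a paired t-test or a Wilcoxon signed-rank test based on a
    KS normality check of the paired differences (sketch). Returns the
    test name and its p-value."""
    diff = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    z = (diff - diff.mean()) / diff.std(ddof=1)   # standardize for KS
    if stats.kstest(z, "norm").pvalue >= alpha:   # looks normal
        return "paired t-test", float(stats.ttest_rel(pre, post).pvalue)
    return "Wilcoxon signed-rank", float(stats.wilcoxon(pre, post).pvalue)
```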
To compare SCR magnitudes among the 4 different periods (pre- and post-threat in the initial and final sessions), a two-way ANOVA was used with a Tukey-Kramer post hoc correction. A one-way MANOVA followed by canonical discriminant analysis was applied to determine whether the set of variables (SCR amplitude waveforms) exhibited specific clusters based on each period of threat exposure [67].
In the analysis of the EEG signals, for each channel k, all the PSDs were normalized according

Results
The training protocol induced and enhanced virtual prosthesis embodiment, in addition to providing an improvement in the motor control of the stump muscles. Furthermore, the participants were able to guide the movements of the virtual prosthesis using vibrotactile stimuli without the aid of vision.

Virtual prosthesis embodiment
Virtual prosthesis embodiment and its enhancement through the training protocol using the EMG-based HMI were evidenced by the following measures. c) Motor measurement: motor training with the EMG-based HMI improved the ability to control the virtual prosthesis, with a success rate > 75% even as the difficulty of the tasks progressively increased (Figure 6-A.3).
d) High self-perception regarding the sense of ownership and agency over the virtual prosthesis by most participants from the beginning of the training, with scores ≥ 7. In most cases, these scores increased or were maintained throughout the protocol, except for two participants, "C" and "D", who reported a decrease in the sense of agency at the end of the training (Figure 4-C). These results, together with the qualitative reports (see Additional file 1), showed that EMG-based HMI training led the participants to perceive the virtual prosthesis as part of their own body and under their voluntary control.

e) Significant changes in resting-state brain rhythms at the end of the training compared with the beginning. These changes were observed at specific electrodes: theta activity decreased at F8 and CZ; alpha activity increased at P3, P8 and PZ and decreased at CP1; beta activity increased at FZ and P7; and gamma activity decreased at PZ, P3, O1 and C1 (Figure 5).
The increase in SCR amplitude occurred from the beginning of the sessions. At the end of the training, this increase was significantly greater in magnitude than at the initial evaluation (Figure 4).

Improvement of the motor control
Training using an EMG-based HMI was critical for improving virtual prosthesis embodiment and for motor control training of the amputated limb. Motor training increased the participants' ability to control the virtual prosthesis, as shown by the high success rate (> 75%) even with progressive increases in task difficulty (Figure 6-C and Additional file 5). Although the success rate remained high, the execution time was longer in the more difficult/complex conditions.
In scenario (a), the logit model correctly identified the 90° target with 89% probability. Among the intermediate targets, the model correctly identified the 30° angle with 59% probability and the 60° angle with 31% probability. There was a high misclassification rate between these two angles: the 60° target was classified as 30° with 41% probability, and the 30° target was classified as 60° with 21% probability (Figure 8-A).
In scenario (b), the classifier clearly distinguished the extreme target angles from the intermediate ones but showed low specificity among the four individual angles. Accuracy in classifying the extreme targets was low (0° was correctly recognized with 20% probability and 90° with 24% probability), and the intermediate targets were not specifically identified (30° was correctly recognized with 48% probability and 60° with 47% probability); nevertheless, the classifier distinguished the intermediate angles from the extreme angles with high accuracy (Figure 8-B).
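The per-angle probabilities reported above are, in effect, rows of a row-normalized confusion matrix. A minimal sketch of how such probabilities are tallied from true and predicted target angles (illustrative Python, not the authors' logit-model code):

```python
import numpy as np

def confusion_probabilities(y_true, y_pred, labels):
    """Row-normalized confusion matrix: entry (i, j) is the probability that
    target labels[i] was classified as labels[j]."""
    idx = {lab: k for k, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)))
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    # Normalize each row so it sums to 1 (per-target classification probabilities)
    return cm / cm.sum(axis=1, keepdims=True)
```

With `labels = [0, 30, 60, 90]`, the diagonal gives the correct-identification probabilities (e.g., 89% for the 90° target in scenario (a)), and the off-diagonal entries give the misclassification rates between neighboring angles.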

Discussion
The experimental protocol used in this study was able to induce and enhance embodiment of a virtual prosthesis in people with transfemoral amputation. In addition, better motor control associated with the stump muscles was observed. We also identified changes in resting-state brain activity patterns, suggesting neuroplasticity throughout the training sessions. Furthermore, the participants were able to guide virtual prosthesis movements using vibrotactile stimuli without the aid of vision, showing that these stimuli can be used as an alternative sensory pathway to facilitate movement perception.

Virtual prosthesis embodiment
In summary, regarding virtual prosthesis embodiment, we observed that the affective response was immediate but was amplified by training. These findings, together with the recalibration of the peripersonal space and the correlation between these responses and the increased control capacity, showed an improvement in embodiment over time. The high self-reported sense of ownership and agency over the virtual prosthesis and the changes in resting-state EEG activity further corroborated this improvement.
The immediate affective response can be explained by the visual and proprioceptive congruence of the real and virtual body experienced through the first-person perspective, which does not depend on visuomotor or visuotactile stimulation [69][70]. The recalibration of the peripersonal space, which occurred only at the end of the training, may be linked to body perception processing being dependent on the motor skills acquired during the training sessions [56,71,72]. The increase in the ability to control the virtual prosthesis during training indicated that the participants were able to use visual and vibrotactile feedback for motor planning and execution in the control of virtual prosthesis movements [10].
Participants also reported high self-perception that the virtual prosthesis was part of their own body and that they could voluntarily control it [62][63]. This perception remained stable or increased over the course of the training in most cases. Only two participants reported a decreased sense of agency at the end of the protocol. However, for both of them, the score given in the initial evaluation for the agency sense was already the maximum value. Most likely, this result was related to the expectations created by these participants that control would be easier throughout the sessions, which did not occur due to the progressive increase in the difficulty imposed during training. Additionally, it is worth noting that this effect did not affect their sense of ownership because reports from both individuals increased at the end of the training, which reinforces this interpretation.
Regarding the patterns of resting-state EEG activity, we found significant changes at the end of the training in recordings over the prefrontal, frontal, parietal and occipital regions [73]. The attenuation of theta activity identified over the prefrontal cortex could indicate persistent neural changes associated with the cognitive effort and concentration required during the training [74][75]. In addition, changes in the theta, beta and gamma oscillations detected at frontocentral electrodes, corresponding to brain areas responsible for motor processing [73], suggested adaptations in cognitive [75][76][77] and motor processing [78].
We also observed changes in alpha, beta and gamma rhythms in recordings over the posterior parietal cortex (PPC). The wide changes in electrophysiological activity in this region pointed to neural adaptations associated with sensory integration both to guide action and to build body perception [78][79][80][81][82][83][84]. Moreover, we identified an attenuation of the gamma rhythm over the occipital cortex (OC), which may also be involved in sensory processing, considering that the OC is interconnected with the PPC and with the temporal cortex, yielding important pathways for visual orientation and object recognition [48,81,82]. Other studies have also described that the frontoparietal network, involving the PPC and premotor cortex, important regions for processing peripersonal space representation [79,85], is also related to the development of ownership and/or agency sense over an external object [86][87][88][89][90].
These findings together suggested that functional changes in the patterns of resting-state EEG activities can reflect neural adaptations linked to sensorimotor processing and body perception, thereby contributing to the development and enhancement of virtual prosthesis embodiment as well as motor learning.
To our knowledge, this is the first clinical study to investigate the embodiment of a virtual prosthesis across physiological and behavioral responses as well as the participants' own reports. The results are promising and pave the way for future studies in cognitive neuroscience and clinical application.

Improvement of the motor control
Performance analyses during the training using an EMG-based HMI indicated that there was an improvement in the ability to control the virtual prosthesis due to a motor learning process [57,91].
The time required for the participants to perform the tasks was longer for the intermediate target angles. This finding can be explained by motor control theories based on feedforward and feedback mechanisms [92][93]. In conditions with simpler movement strategies, such as the tasks with extreme target angles, motor control occurred largely through feedforward mechanisms: the sensory consequences were estimated from copies of the efferent motor commands. Execution times were therefore shorter because the predicted movements closely matched the real movements, requiring no major corrections during execution. In contrast, during tasks demanding more complex motor control strategies, those with intermediate target angles and higher precision requirements, motor control occurred mainly through sensory feedback comparing predicted and actual movements [91][92][93]. In these cases, real-time corrections and adjustments of the movement were determinant and explain the longer execution times during these tasks.
Training protocols with EMG-based HMI have already been used in other clinical contexts [15][16][94], but there are still few studies on people with amputation involving immersion in VR environments. In view of the results presented here, this system can also be used as a therapeutic resource for motor control training.

Perception of virtual prosthesis movements using vibrotactile stimuli
At the end of the training sessions, a test was performed to evaluate how and whether vibrotactile stimuli could facilitate the perception of virtual prosthesis movements without a visual aid.
All participants were able to guide the virtual prosthesis movements using vibrotactile stimulation as a reference. Analysis of participant performance during this test revealed a pattern of execution and motor control similar to that found during training (visual and vibrotactile stimulation): longer execution times for tasks involving the intermediate target angles and shorter execution times for tasks involving the extreme target angles. In this test, however, the level of movement precision was not a determinant of the execution time.
The success rate for tasks involving extreme target angles was high at all levels of movement precision. For the intermediate angles, the success rate was high in conditions with lower precision requirements, but participants achieved a lower success rate in conditions with high levels of difficulty. These results showed that for tasks requiring more complex motor control strategies (intermediate angles and greater precision), the visual aid was critical for producing appropriate movement.
Similar execution times during the tasks in this test (vibrotactile stimulation without the aid of vision) and in the training protocol (visual and vibrotactile stimulation) indicated that the motor learning process was maintained even when visual input was removed. The correlations evidenced by the logistic regression analyses corroborated this interpretation, suggesting that the participants adopted the same motor control strategies in both protocols. Thus, the high success rate and shorter execution times for the extreme angles may be related to the updating of internal models and improved movement prediction, which requires few corrections during execution [91,93]. In conditions requiring more complex motor strategies, e.g., reaching intermediate target angles, motor control was again more dependent on real-time corrections based on sensory feedback [92][93]. In particular, the tasks demanding higher precision, in which the success rate was low, showed how determinant the visual stimulus was for performing these movements and correcting errors. Therefore, we conclude that vibrotactile stimulation facilitated the perception of virtual prosthesis movements, but with a limit for motor actions with higher levels of complexity and precision; in these cases, vision played a crucial role.
The implications of these results are relevant both in the development of new therapeutic approaches and in the field of neuroprostheses, showing that the use of vibrotactile stimulation provides a viable alternative to facilitate the perception of movements.

Limitations
Although all variables evaluated in this protocol were based on the participant's interaction with the virtual prosthesis and were completely different from those used during regular rehabilitation, a possible contribution of the concurrent multidisciplinary treatment to the results cannot be excluded.
Future studies with a larger sample and control groups, in addition to randomized clinical trials, are still necessary. Studies with follow-up are also indicated for a better understanding of whether the modifications are permanent and can be extended to the use of physical prostheses.

Conclusion
This study showed that an EMG-based HMI integrating visual and vibrotactile feedback into the volitional control of a virtual prosthesis, combined with a set of directed and progressive tasks, can induce and enhance virtual prosthesis embodiment. The proposed protocol also trained the stump muscles involved in controlling virtual prosthesis movements. These results show that an EMG-based HMI associated with interactive training is a viable option for studying and inducing embodiment of a virtual prosthetic limb. In addition, it can be used as a therapeutic protocol for motor control training and to assist the process of adapting to and accepting prosthesis use. Finally, the vibrotactile stimuli facilitated the perception of virtual prosthesis movements without visual aid, indicating that this simple, noninvasive and low-cost method is a viable alternative for implementation in the development of neuroprostheses and new therapeutic strategies.

Consent for publication
Consent to use the image was obtained from all participants.