In the present study, the performance of the subjects was better in task C. Thus, the EMG-based visual feedback enhanced the motor control of the users and significantly improved accuracy during the trials. This feedback allowed the subjects to easily monitor their EMG activation levels during the tasks and compare them with the activation thresholds predefined in the prior calibration. Subjects could thereby regulate their EMG activity with respect to these threshold levels and better control the movement of the hand exoskeleton.
On the other hand, the kinesthetic feedback did not significantly improve the performance of the subjects, nor did performance improve when both feedback modalities were present. This result may be related to the fact that subjects do not need to consciously attend to kinesthetic feedback, as it is more straightforward and intuitive than the visual EMG feedback.
The main factor behind these findings is the instant at which each feedback modality is provided to the user. The EMG-driven control of the RobHand robotic platform works as follows (Fig. 5): the recorded EMG signals are rectified (rEMGED and rEMGFDS) and normalized (nEMGED and nEMGFDS). The gesture recognition module determines the hand gesture based on the values of the normalized signals and the thresholds calculated during calibration. The position controller then generates the control signal (u) as a function of the detected gesture, so that the actuators move to reach that gesture. Hence, the EMG-based visual feedback reaches the user earlier than the kinesthetic feedback. Consequently, subjects performed better with the visual feedback, as its earlier arrival leaves them a longer time window in which to react than the kinesthetic feedback does.
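The threshold-based stage of this pipeline can be sketched as follows. The function names, the normalization reference, and the exact decision rule are illustrative assumptions; the text only specifies that the normalized extensor (nEMGED) and flexor (nEMGFDS) activity is compared against thresholds obtained in calibration.

```python
import numpy as np

def normalize(rect_emg, reference):
    """Normalize a rectified EMG envelope by a calibration reference
    (e.g., the maximum activation recorded during calibration)."""
    return np.clip(rect_emg / reference, 0.0, 1.0)

def detect_gesture(n_emg_ed, n_emg_fds, th_ed, th_fds):
    """Hypothetical threshold rule: extensor activity above its threshold
    maps to hand opening, flexor activity above its threshold maps to
    closing, and anything else to rest."""
    if n_emg_ed > th_ed and n_emg_fds <= th_fds:
        return "open"
    if n_emg_fds > th_fds and n_emg_ed <= th_ed:
        return "close"
    return "rest"
```

In the real system, the detected gesture would then drive the position controller, which generates the control signal u for the actuators.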
In fact, the real-time visual EMG feedback allows the user to modulate the exerted force at that very moment and thus directly influence the position controller input. In contrast, with the kinesthetic feedback, the user modulates the exerted force only once he/she has felt the movement performed by the actuators of the exoskeleton, so the force modulation is not instantaneous.
Furthermore, the hand motion generation process is not instantaneous. Hand motion is achieved by muscle contraction, and the corresponding motor control signal is delivered from the central nervous system. For any intended motor action involving muscle contraction, it is well known that there is a time delay between the onset of the EMG signal and the onset of force production. This delay is known as the electromechanical delay (EMD) and ranges from about 10 to 300 ms.
In addition, the electromechanical characteristics of the actuators should also be considered: (1) the dynamic response (the time interval from the instant the actuator receives a position command to the instant it starts to move) is slow; (2) the speed is deliberately limited due to the type of application (rehabilitation). The time delays of the RobHand system considered relevant to the human-robot interaction were determined previously: the Motion-Selection Time (MST) is 0.55 ± 0.6 s and the Motion-Completion Time (MCT) is 1.90 ± 1.65 s, varying from 0.98 s (close-to-rest movement) to 3.42 s (open-to-close movement).
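As a rough illustration of why the kinesthetic channel lags the visual one, the delays quoted above can be summed. This additive decomposition is a simplification for illustration only: the EMD value is an assumed midpoint of the reported 10–300 ms range, and the study does not model latency this way.

```python
# Back-of-the-envelope latency comparison using the values quoted above.
EMD = 0.15   # electromechanical delay (s); assumed midpoint of 10-300 ms
MST = 0.55   # Motion-Selection Time (s)
MCT = 1.90   # mean Motion-Completion Time (s)

# Visual EMG feedback is available once the gesture is detected, whereas
# kinesthetic feedback requires the exoskeleton motion to complete.
visual_latency = EMD + MST
kinesthetic_latency = EMD + MST + MCT
print(f"visual ~ {visual_latency:.2f} s, kinesthetic ~ {kinesthetic_latency:.2f} s")
```

Even under these coarse assumptions, the kinesthetic cue arrives roughly the full MCT later than the visual cue, which is consistent with the performance gap observed between the two modalities.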
With the visual feedback, the subject modulates their force based on the output of the gesture recognition module (nEMG signals and detected gesture) and can anticipate the exoskeleton movement response. With the kinesthetic feedback, however, the user modulates their force only once the action has been performed by the exoskeleton. If the user perceives that the movement performed by the exoskeleton does not correspond to their intention, they can modulate their muscle activity to correct it, but this takes much longer than if the correction had been made based on the real-time visual EMG feedback.
Inferences in this study are based on differences in performance with and without the two proposed feedback modalities. The current study has some limitations. To obtain reliable EMG signals, the experimental tasks were very constrained and standardized. However, the placement of the surface electrodes has a direct influence on performance.
Another limitation of the present study is the possibility of muscular fatigue during the trials, which would deteriorate the user’s performance. Three-minute breaks were included between tasks to mitigate this effect.
It was also possible, although highly unlikely due to the low number of repetitions, that a learning effect on user performance could appear during the trials. No statistically significant differences were observed in the results as a function of the order in which the tasks were performed.
For the calculation of the L2 distance used for performance evaluation, the generated and target signals must first be synchronized. The time delay between these two signals was considered constant throughout each task, although it may vary between gestures.
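A minimal sketch of this synchronization step follows, assuming a single constant lag estimated by cross-correlation; the study does not specify the estimation method, so this is only one plausible implementation.

```python
import numpy as np

def sync_and_l2(generated, target):
    """Estimate one constant lag between the generated and target signals
    via cross-correlation, then compute the L2 distance over the
    overlapping samples. Assumes equal-length, uniformly sampled signals."""
    xcorr = np.correlate(generated, target, mode="full")
    lag = int(np.argmax(xcorr)) - (len(target) - 1)
    if lag > 0:   # generated trails target: drop its leading samples
        g, t = generated[lag:], target[:len(target) - lag]
    else:         # generated leads target (or no lag)
        g, t = generated[:len(generated) + lag], target[-lag:]
    return lag, float(np.linalg.norm(g - t))
```

Because a single lag is applied to the whole task, per-gesture variations in delay would show up as residual L2 distance, which is exactly the limitation noted above.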
The experimental trials were performed on healthy subjects, so the results cannot be extrapolated to patients who have suffered damage affecting their cognitive abilities. Patients with impaired cognition and perception may become confused and distracted by the EMG biofeedback, resulting in deterioration of task performance.
Furthermore, the EMG monitoring was presented in a simple way, and subjects did not need any knowledge of electromyography. No additional visual information that might distract the user was provided apart from the target hand gesture: there were two bars (one for hand opening and one for hand closing) of variable length (proportional to muscle activation) and two colors (indicating whether the predefined threshold was exceeded). Thus, evidence of the effectiveness of this type of EMG-based visual feedback has been found, but it cannot be generalized to other types. Further studies should be undertaken to confirm whether other types of EMG feedback have the same positive effect and whether their combination with other virtual objects (e.g., exergames based on virtual reality) also has this effect or instead distracts the user.