Multichannel Sensorimotor Integration with a Dexterous Artificial Hand

Abstract—
Background: People use their hands to perform sophisticated tasks, like playing a musical instrument, by integrating manifold and diverse sensations of touch with motor control strategies. In contrast, prosthetic hands lack the capacity for multichannel haptic feedback, and multitasking functionality remains rudimentary. There is a dearth of research exploring the potential of upper limb absent (ULA) people to integrate multiple channels of haptic feedback into dexterous prosthetic hand control strategies.
Methods: We designed a novel experimental paradigm for three ULA people and nine additional subjects to investigate their ability to integrate two simultaneously activated channels of context-specific haptic feedback into their dexterous artificial hand control strategies. Artificial neural networks (ANNs) were designed for pattern recognition of the array of efferent electromyogram (EMG) signals that controlled the dexterous artificial hand. ANNs were also used to classify the directions that objects slid across two tactile sensor arrays on the index (I) and little (L) fingertips of the robotic hand. The direction of sliding contact at each robotic fingertip was encoded by different stimulation frequencies of wearable vibrotactile actuators for haptic feedback. The subjects were tasked with implementing different control strategies with each finger simultaneously, depending upon the perceived directions of sliding contact. This required the 12 subjects to concurrently control individual fingers of the artificial hand by successfully interpreting two channels of simultaneously activated context-specific haptic feedback.
Results: Subjects accomplished this complex feat of multichannel sensorimotor integration with an overall accuracy of 95.53% ± 0.23%. While there was no statistically significant difference in classification accuracy between ULA people and the other subjects, the ULA people required more time to respond correctly to the simultaneous haptic feedback slip signals, suggesting a higher cognitive load for the ULA people.
Conclusion: ULA people can integrate multiple channels of simultaneously activated and nuanced haptic feedback with their control of individual fingers of an artificial hand. These findings provide a step toward empowering amputees to multitask with dexterous prosthetic hands, which remains an ongoing challenge.


I. INTRODUCTION
The capacity for dexterous control of a prosthetic hand and the sensation of touch are inherently intertwined [1].
Unfortunately, the potential for ULA people to engage in sophisticated activities like playing sports or musical instruments remains elusive because current clinical practice with prosthetic hands affords limited control capabilities, with minimal or no awareness of corresponding touch sensations. Mastery of dexterous tasks requires both the capacity for simultaneous and individual finger control and context-dependent touch sensations, such as awareness of the direction that fingers are sliding [2] along the strings of a violin [3], for example (Fig. 1). Although this level of control mastery remains largely in the future for amputees, it provides an ambitious goal. To restore the potential for multiple independent artificial hand control channels, several invasive methods have been published that enable simultaneous control of multiple degrees of freedom (DOF). Pattern recognition via surgical procedures such as targeted muscle reinnervation [4] or regenerative peripheral nerve interfaces [5] can enable simultaneous control of multiple DOFs of a prosthetic arm. Another recent approach detailed 32 intramuscular EMG electrodes that were implanted in the forearms of two amputees to decode individual digit movements with several different deep learning algorithms [6]. There have also been strong advances with neural implants in the peripheral nerves of amputees for direct neural feedback and control [7][8][9], which could open the gateway to higher control capabilities. While these invasive approaches are truly meritorious, they are not always the best solution for every patient [10].
For this reason, there have been numerous efforts to explore noninvasive methods to increase the human-prosthesis bandwidth [11]. There have been several publications that have demonstrated noninvasive methods for EMG control of a prosthetic wrist [12] while simultaneously opening and closing a prosthetic hand. These efforts detailed multi-layer perceptron [13], linear discriminant analysis [14][15][16], convolutional neural networks [17], and neuromusculoskeletal models [18] to detect hand motions while wrist pronation/supination or flexion/extension motions were simultaneously performed.
There have also been prior efforts to investigate the ability for simultaneous control of multiple grips or fingers of prosthetic hands independently via noninvasive methods [19]. One major obstacle of this goal is the substantial crosstalk that occurs between muscle recording locations on the forearm [20]. However, there has been progress to decode individual finger movements with ANNs using 32 surface EMG electrodes placed on limb-intact people [21] and 19 electrodes placed on a transradial amputee [22]. Another approach used eight electrode pairs on the forearms of five limb-absent subjects to classify seven different kinds of imagined finger movements with an average classification accuracy of 79% [23]. One paper has detailed the use of different combinations of linear discriminant analysis, principal component analysis, and support vector machines to classify twelve different imagined individual finger movements from six amputees and also combined finger movements from ten limb-intact subjects; high classification accuracies for five distinct finger movements were obtained using six EMG electrode pairs [24].

Fig. 1. Complex activities such as playing a musical instrument present a great challenge to upper limb amputees. Tasks such as these require accurate slip control of multiple fingertips simultaneously across different surfaces. In this paper, we explore the potential for three amputees and nine non-amputees to simultaneously control the state of sliding contact at two fingertips by integrating two channels of variable frequency vibrotactile haptic feedback into their motor control strategies. The rendering of the man in this image was licensed and modified for public display.
Regarding noninvasive feedback of haptic sensations, there have been myriad approaches explored in the literature including mechanotactile [25][26][27], vibrotactile [25,28,29], and electrotactile feedback methods [30][31][32], see [1,10,33] for reviews. Particularly relevant to this paper, vibrotactile haptic feedback has been explored due to the light weight and low power requirements of these actuators, which make them conducive to use in wearable prosthetic limbs [34]. Vibrotactile feedback has been of interest to convey haptic feedback for grasped object slip sensations [33] and has been shown to surpass both visual and proportional force haptic feedback in grasped object slip prevention experiments with an artificial hand [35]. The sense of object slippage is particularly important for amputees because they are often unaware of the applied grip force, leading to inadvertently dropping the grasped object [36].
While there have been numerous efforts to prevent slip of grasped objects [37], one commonly overlooked aspect of control is that it is often desirable to permit slip to occur, such as in haptic exploration tasks [38] to manually gather information about the environment [39] with tactile sensor arrays [40], similar to how people use myriad mechanoreceptors in the human fingertips [41]. Fine details of the situation at hand can dictate whether the person would desire to prevent or permit slip [3]; for example, when handing an object to another person, sliding contact is likely to occur during the handoff [2].
Grasp control challenges compound when multitasking or controlling multiple functions simultaneously.
Multichannel haptic feedback arrays show promise to help overcome this difficulty by conveying a broader picture of the hand to the amputee [42] from multiple fingertips simultaneously [43]. While there is extant research on EMG pattern recognition for simultaneous multi-DOF control of artificial hands and methods for conveying multiple channels of haptic feedback to ULA people, there is a dearth of research investigating the integration of the latter with the former. The novel contribution of this paper is an exploration of ULA people's capacity for multichannel sensorimotor integration to concurrently control individual fingers of a dexterous artificial hand based on multiple channels of simultaneously activated context-specific haptic feedback.

II. METHODS

A. Human Subjects
Twelve human subjects participated in these experiments (six females). Three male subjects (S1-S3) had an upper limb amputation or congenital limb deficiency; the first subject (S1) had a bilateral amputation, transhumeral on the right side. The other nine subjects (S4-S12) had no amputation or congenital limb deficiency. All participants gave informed written consent under a protocol approved by Florida Atlantic University's IRB, which was in accordance with the Declaration of Helsinki.

B. Robotic System Hardware
The robotic system used during this experiment includes an E3M Dexterous Shadow Hand (Shadow Robot Company, London, U.K.) fit with BioTac SP tactile sensor arrays (SynTouch, CA) on both the I and L fingertips (Fig. 2(a-e)). The Shadow Hand has 20 tendon-driven DOFs; however, for the purposes of exploring the capacity for simultaneous slip control in this paper, only the metacarpophalangeal joints of the I and L fingers were under control of the users, effectively limiting the DOFs to two. The BioTac SP fingertip tactile arrays are deformable fluid-filled sensors that have 24 internal electrodes with impedances that vary as external forces displace the fluid within the cavity (Fig. 2(a)). There is also a hydrophone sensor in each BioTac SP to measure the steady state pressure (PDC) within the entire cavity. The dynamic pressure (PAC) is gleaned from PDC with a high-pass filter.
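The text notes that PAC is derived from PDC with a high-pass filter. A minimal sketch of that relationship is shown below with a first-order high-pass filter; the sample rate, cutoff frequency, and filter order are assumptions for illustration, not BioTac SP specifications.

```python
import math

# Illustrative first-order high-pass filter: deriving a dynamic pressure
# signal (PAC) from the steady-state pressure stream (PDC). The sample
# rate fs and cutoff fc below are assumed values, not sensor parameters.
def high_pass(samples, fs=1000.0, fc=10.0):
    """First-order RC high-pass filter applied sample by sample."""
    rc = 1.0 / (2.0 * math.pi * fc)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out = [0.0]  # filter state starts at rest
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# A constant (DC) pressure offset is rejected; only changes pass through.
pdc = [100.0] * 50 + [120.0] * 50   # step change in steady-state pressure
pac = high_pass(pdc)
```

With this structure, the steady 100-unit offset produces no PAC output, while the step at the midpoint passes through and then decays, which matches the intent of separating dynamic from steady-state pressure.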
Two independently controlled stepper motors (Autonics A2K-M243, TEquipment, NJ) were mounted vertically on an aluminum plate to induce slip either up or down on the BioTacs of the I and L fingers ( Fig. 2(b-e)). The stepper motors were rigidly connected to flat rectangular surfaces that were 3D printed with PLA material using an Ultimaker S5 (Ultimaker, Zaltbommel, the Netherlands). Sliding speeds were constant at 0.5 mm/s.

C. ROS Network Configuration Overview
The system was controlled through a robot operating system (ROS) network composed of four nodes (Fig. 3). The first three ROS nodes were implemented with the Teensy 3.6 (PJRC.COM, LLC, Sherwood, OR, USA). ROS node 1 was used to control the stepper motors to initiate the slip with respect to the I and L fingers. ROS node 2 was used to control the actuation frequency of the haptic feedback from the vibrotactile stimulators [44] to correspond to slip in either the up or down direction at the I and L fingertips [2]. ROS node 3 was used to sample six EMG signals from the forearms of the human subjects for efferent control of the Shadow Hand to prevent or permit slip with each finger individually or simultaneously. ROS node 4 was deployed in MATLAB/Simulink to implement ANNs to classify the efferent EMG signals and afferent BioTac SP fingertip slip sensations.

D. Classifying Context-Dependent Slip Sensations From Two Fingertips Simultaneously
To design ANN classifiers to detect the direction of slip on two fingertips simultaneously, data were collected from the BioTac sensors attached to the I and L fingers. The ANNs were trained with scaled conjugate gradient backpropagation using the nprtool in MATLAB. To train and test the ANNs, the collected data were divided into three categories: training (70%), validation (15%), and testing (15%). The networks were adjusted based on the error generated from the training dataset. Network generalization was measured using the validation dataset, and training was halted when the generalization error stopped improving. The testing dataset did not affect the network error and provided a new performance measure that was independent from the training and validation performance measures. See our prior work for more details on the process to classify the direction of sliding contact with ANNs [2].
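The 70/15/15 partition described above can be sketched as follows. MATLAB's nprtool performs an equivalent split internally; the Python function below is an illustrative analogue, with the seed and dataset as placeholders.

```python
import random

# Sketch of the 70/15/15 train/validation/test partition used when
# training the slip-direction ANNs. The shuffle seed and the sample
# list are placeholders for illustration.
def split_dataset(samples, seed=0, fracs=(0.70, 0.15, 0.15)):
    """Shuffle and partition samples into train/validation/test sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(1000)))
```

The validation set plays the early-stopping role described above (training halts when validation error stops improving), while the held-out test set provides the independent performance measure.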

E. Haptic Encoding of Task-Relevant Slip Sensations from Two Fingertips Simultaneously
To encode a directional sense of the sliding motion, each vibrotactile stimulator was toggled on and off at either 1 Hz (Slow) to convey the upward direction or at 200 Hz (Fast) to indicate the downward direction of slip. To objectively quantify the haptic feedback encoding methods from the vibrotactile stimulators, the BioTac SP sensor was utilized as a measurement tool. To that end, the vibrotactile stimulator was connected firmly to the BioTac SP sensor on the I fingertip (Fig. 4(a)). The Slow and Fast vibrotactile activation signals for this test were created in Simulink using the ROS toolbox. The steady state pressure (PDC) and dynamic pressure (PAC) signals from the BioTac were recorded for both the Slow (Fig. 4(b, d)) and Fast vibration modes (Fig. 4(f, h)).
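The frequency encoding above (1 Hz toggling for upward slip, 200 Hz for downward slip) can be sketched as a simple mapping. The 50% duty-cycle square-wave formulation of the on/off toggling is our assumption for illustration.

```python
# Haptic encoding used in the experiments: upward slip conveyed by
# toggling the stimulator at 1 Hz (Slow), downward slip at 200 Hz
# (Fast). The square-wave duty cycle here is an assumed illustration.
SLIP_TO_FREQ_HZ = {"up": 1.0, "down": 200.0}  # Slow / Fast modes

def stimulator_state(direction, t):
    """True while the stimulator is on: 50% duty-cycle square wave."""
    freq = SLIP_TO_FREQ_HZ[direction]
    phase = (t * freq) % 1.0
    return phase < 0.5
```

For example, in the Slow (up) mode the stimulator is on for the first half of each 1 s period, whereas in the Fast (down) mode it toggles every 2.5 ms, producing the clearly distinct sensations described above.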
The time-frequency spectrogram representation for the data was calculated using a 512-point FFT with 0.08 s frame length and a Hanning window with 90% overlap. The time-frequency power distribution data show the frequency components of PDC and PAC for the Slow (slip up; Fig. 4(c, e)) and Fast (slip down; Fig. 4(g, i)) vibrotactile actuation modes. These produced very different activation signatures, which enabled people to clearly distinguish between the different sensations corresponding to upward and downward sliding contact.

F. Overview of Situationally Aware Multichannel Sensorimotor Integration Experiments for Simultaneous Slip Control
Subjects were trained over the course of approximately two hours to integrate two channels of vibrotactile haptic feedback into their simultaneous control strategies for the I and L fingers. When an object slides along a fingertip, this can be desirable or undesirable depending upon the context. For example, an object sliding upward could be caused when handing an item to another person whereas an object sliding downward could be due to inadvertently dropping an object onto the floor. In the former case, permitting slip would be desirable to hand an object to the other person.
In the latter situation, it would be desirable to prevent slip by increasing the grip force. We simulated these scenarios and encoded the vibrotactile actuation frequency from the I and L fingertips to be either Slow (Fig. 4(b-e)) or Fast (Fig. 4(f-i)), depending upon the direction of slip. This necessitated training the subjects to simultaneously interpret the two channels of context-dependent haptic feedback to make the corresponding control action with each finger concurrently. To achieve this feat of multichannel sensorimotor integration, an ANN for efferent control was first developed for EMG pattern recognition with each subject.
Next, subjects were trained to interpret the haptic feedback sensations from each fingertip individually, and then simultaneously. Finally, the multitasking experiments with the entire robotic system were performed.

1) Training for Simultaneous EMG Slip Control
Six EMG signals were recorded from the forearms of each human subject: four 13E200 AC electrodes (Otto Bock) and two electrodes from a Myolab II (Motion Control, Inc., Salt Lake City, USA). The six electrodes were arranged circumferentially around the forearm with approximately equal distance between each electrode (Fig. 5(c)). The subjects were asked to flex muscles in their forearms to ensure that all signals responded well and to fine-tune electrode locations and gains.
Next, ANN classifiers were trained for each subject with six EMG inputs and four output classes. The outputs from the ANN classifier were No Motion (NM), Index (I) finger flex, Little (L) finger flex, and Simultaneous (Sim) flexion of both the index and little fingers (Fig. 5(a, b)). For each class, 10 trials were collected in which the human subjects were asked to follow a 0.1 Hz rectangular pattern with 50% duty cycle generated in Simulink and displayed on the monitor in front of them (Fig. 5(c)). Time-domain data were amplified and then lowpass filtered before being used to train the subject-specific ANN classifiers for EMG pattern recognition. EMG data were separated into four different matrices corresponding to each class and were labeled to train the ANN classifier offline using the same methods previously described for the tactile slip sensation classification problem. After the ANN was trained for each subject, the utility of the classifier was quickly verified in real time in Simulink by asking every subject to produce each EMG class several times before the robotic experiments took place (Fig. 5(c)).
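The 0.1 Hz rectangular cue with 50% duty cycle that subjects followed can be sketched as below; the 0/1 amplitude labels for rest/contract phases are our convention, not taken from the experimental software.

```python
# Training cue: a 0.1 Hz rectangular pattern with 50% duty cycle, i.e.
# 5 s of contraction followed by 5 s of rest in every 10 s cycle.
def cue(t, freq_hz=0.1, duty=0.5):
    """Return 1 during the active (contract) phase, 0 during rest."""
    period = 1.0 / freq_hz          # 10 s per cycle at 0.1 Hz
    return 1 if (t % period) < duty * period else 0
```

Segmenting the recorded EMG against this cue yields the labeled data matrices for each class described above.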

2) EMG Control Algorithms for Simultaneous Slip Prevention
EMG signal processing to control slip with the two fingers individually or simultaneously was accomplished first by normalizing the EMG signals from each electrode. Next, the desired force (F_D,∅) for the index (I) or little (L) finger (∅ ∈ {I, L}) was specified as:

F_D,∅ = K_∅ ē_∅

where K_∅ was the desired force gain for finger ∅ and ē_∅ was the normalized EMG signal. The K_∅ term was specified by a look-up table. The two desired forces (F_D,∅) were inputs to two hybrid force-velocity controllers [45] that were implemented in MATLAB/Simulink to control the I and L fingers (Fig. 6). The two controllers, one per finger, each had an outer force feedback loop and an inner velocity feedback control loop, and were designed as described in our prior work [43]. Summarizing, the outer force feedback loop for each fingertip used the corresponding BioTac PDC value to control the applied fingertip force, and the inner loop used the measured joint angles (θ_∅) to control the velocity.
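The cascaded loop structure described above can be sketched as a single control step. The proportional control laws and gain values here are illustrative assumptions; the actual controllers were implemented in MATLAB/Simulink as described in [43,45].

```python
# Schematic hybrid force-velocity control step: an outer force loop
# compares the desired fingertip force with the measured BioTac PDC
# value and outputs a velocity set-point, which an inner loop tracks
# using the velocity estimated from joint angles. Gains kf and kv are
# assumed placeholder values, not the gains used in the experiments.
def hybrid_force_velocity_step(f_desired, pdc_measured, theta, theta_prev,
                               dt, kf=0.5, kv=2.0):
    """One control step: returns the joint velocity command."""
    force_error = f_desired - pdc_measured        # outer force loop
    v_setpoint = kf * force_error                 # velocity set-point
    v_measured = (theta - theta_prev) / dt        # from joint angles
    return v_setpoint + kv * (v_setpoint - v_measured)  # inner loop

# Example: fingertip force below the set-point yields a closing command.
v = hybrid_force_velocity_step(f_desired=2.0, pdc_measured=1.0,
                               theta=0.10, theta_prev=0.10, dt=0.01)
```

When the measured force matches the desired force and the finger is at rest, the commanded velocity is zero, which is the equilibrium behavior expected of the cascaded structure.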

3) Learning Context-Dependent Multi-Digit Haptic Feedback
Training the subjects to interpret the context-dependent haptic feedback from the vibrotactile stimulators was the next step. For the congenitally limb-absent subject S2 and the nine non-amputee subjects S4-S12, the vibrotactile stimulators were placed ipsilaterally on the upper arm over the biceps and triceps for haptic feedback corresponding to the I and L fingers, respectively. However, the amputee subjects S1 and S3 had poor sensitivity in several locations on their upper arms due to scar tissue and requested different haptic feedback locations. After a brief searching process, suitable haptic feedback sites were identified nearer the deltoid and elbow.
The haptic feedback training process started by demonstrating the Slow and Fast vibrotactile stimulator activation modes through Simulink in real time. These two frequencies of activation produced different vibratory sensations (Fig. 4). For example, the amplitude of the steady state pressure (PDC) at the Fast frequency (Fig. 4(f)) was approximately half that of the Slow activation frequency (Fig. 4(b)). The power spectral distribution of each vibration mode was quite different for both PDC (Fig. 4(c, g)) and PAC (Fig. 4(e, i)), which also helped produce distinguishable haptic signatures for each vibration mode.
Next, the subjects were informed of the correlation between the Fast/Slow vibration modes and the direction of slip at the fingertips. Specifically, the Slow vibration mode indicated upward slip, while the Fast vibration mode indicated downward slip. After describing this mapping to the human subjects, their haptic perception was systematically tested.
A pseudo-random sequence of 25 trials was generated using the MATLAB randperm command to test their ability to detect the directions of slip from both vibrotactile stimulators at the same time. There were five repetitions of each of the five possible combinations of simultaneous vibration activation modes (TABLE 2). All subjects were able to correctly interpret the simultaneous vibrotactile activation modes with 100% accuracy due to the significantly different sensations produced by the Slow and Fast frequencies of actuation (Fig. 4).
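A balanced pseudo-random sequence of this kind (five repetitions of each of five combinations, analogous to the MATLAB randperm usage) can be sketched as below; the combination labels are shorthand stand-ins for the TABLE 2 entries.

```python
import random

# Shorthand labels for the five simultaneous vibration combinations;
# these names are illustrative stand-ins for the TABLE 2 entries.
COMBOS = ["I-slow/L-slow", "I-slow/L-fast", "I-fast/L-slow",
          "I-fast/L-fast", "no-feedback"]

def trial_sequence(reps=5, seed=42):
    """Balanced pseudo-random trial order: reps of each combination."""
    trials = COMBOS * reps
    rng = random.Random(seed)      # fixed seed: same sequence per subject
    rng.shuffle(trials)            # pseudo-random presentation order
    return trials

seq = trial_sequence()
```

Fixing the seed mirrors the experimental design in which every subject experienced the same pseudo-random sequence.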

4) Training for Multichannel Sensorimotor Integration
Next, the four different combinations of simultaneous slip up/down with the I/L fingers were demonstrated several times for the subjects with the robotic system that was under EMG control. Subjects were allowed approximately five minutes to gain familiarity with multichannel sensorimotor integration using the robotic system to learn how the EMG control signals would stop slip of the two objects individually or simultaneously when desired. After this initial phase, an opaque blockade was placed between the Shadow Hand and the subjects to prevent them from seeing the robotic system, forcing them to rely purely on haptic feedback. The human subjects were also asked to wear noise cancelling headphones (Fig. 6).
During the experiment, the different combinations of simultaneous slip modes were tested with a pseudo-random sequence of 50 trials (10 repetitions for each of the five options listed in TABLE 2, including the control option where haptic feedback was disabled). All the human subjects went through the same pseudo-random sequence that was generated with the MATLAB randperm command. The subjects controlled the pace of the experiments by pressing the Enter key on the keyboard in front of them to initiate each trial. Every trial was designed to be 15 s long, during which subjects were asked to identify the vibrotactile stimulator activation modes and to maintain the correct EMG class to prevent downward slip. They were trained to permit upward slip at the finger(s) where they perceived upward slip (TABLE 2).

Fig. 6. Control system for multichannel sensorimotor integration. Six EMG signals were classified by subject-specific ANNs to specify which finger(s) the subjects wanted to control. The EMG signals were also used to specify the desired forces for the index (I) and little (L) fingers that were realized by hybrid force-velocity controllers. ANNs were used with the BioTac SPs on the I and L fingers to classify the 24 taxels at each fingertip into sensations of sliding contact, either up or down. These sensations of touch from each fingertip were encoded via the frequency of vibration and fed back to the subjects with the haptic armband. The subject (S1) gave permission for the use of his image.
The efferent ANN classification accuracy was calculated as the percentage of time that the subjects maintained the correct EMG class during each 15 s trial after they responded to the simultaneous slip haptic feedback stimuli. In this scenario, the EMG classification accuracy is also a metric describing the extent to which subjects were able to integrate the two channels of nuanced haptic feedback into their motor control strategies. Additionally, the response time (RT) was tabulated for each subject and class as the amount of time each person required to interpret the haptic feedback and respond; in other words, the time difference between the onset of haptic feedback and the activation of an EMG class (Fig. 7(h, j)). The human reaction time to haptic stimuli has often been used as a metric of the cognitive load to operate a prosthetic hand [46,47].
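The two dependent variables defined above can be sketched as simple metrics over a classified EMG stream; the sampling interval, class labels, and example stream are hypothetical placeholders for illustration.

```python
# Sketch of the two dependent variables over one trial: response time
# (RT) is the delay between haptic-feedback onset and the first sample
# of the target EMG class; accuracy is the fraction of post-response
# samples holding the correct class. Streams and times are placeholders.
def response_time(class_stream, target, t_onset, dt):
    """Time from feedback onset to first activation of the target class."""
    for i, label in enumerate(class_stream):
        if i * dt >= t_onset and label == target:
            return i * dt - t_onset
    return None  # subject never responded during the trial

def post_response_accuracy(class_stream, target, t_onset, dt):
    """Percent of samples after the response that hold the target class."""
    rt = response_time(class_stream, target, t_onset, dt)
    if rt is None:
        return 0.0
    start = int((t_onset + rt) / dt)
    held = class_stream[start:]
    return 100.0 * sum(1 for c in held if c == target) / len(held)

# Toy trial sampled at dt = 1 s: feedback onset at 2 s, response at 5 s.
stream = ["NM"] * 5 + ["Sim"] * 10
```

In this toy trial the RT is 3 s and the post-response accuracy is 100%, mirroring how the experimental metrics were tabulated per subject and class.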
For both dependent variables (EMG classification accuracy, RT), two factor ANOVA were performed in MATLAB with human subject and EMG class as the two independent variables. Interaction between the human subjects and EMG class was also tested for statistically significant impact upon the classification accuracy and RT. An unbalanced ANOVA was also performed to determine if there was a statistically significant difference between amputee and nonamputee subjects for both dependent variables.

III. RESULTS
Sample data show the four possible cases of simultaneous slip at the I and L fingers (Fig. 7). When both stepper motors drove the slip platforms downward (Fig. 7(a, b)), the afferent ANNs correctly classified the sensations and the vibrotactile actuators corresponding to the I and L fingers were both activated at the Fast frequency (Fig. 7(c)). This prompted the subject to increase his EMG signals (Fig. 7(d)) to produce the Sim class (Fig. 7(e)) using the efferent ANN, which caused both the slip (Fig. 7(b)) and the haptic feedback to cease (Fig. 7(c)). When the sliding contact at the I finger was downward while slip at the L finger was concurrently upward (Fig. 7(f, g)), the vibrotactile actuator corresponding to the I finger was actuated in the Fast mode while the corresponding haptic feedback from the L finger was actuated simultaneously in the Slow mode (Fig. 7(h)). This caused the subject to increase his EMG signals (Fig. 7(i)) to produce the I class using the EMG ANN (Fig. 7(j)), which halted slip at the I finger (Fig. 7(g)) and the corresponding haptic feedback from the I finger; the haptic feedback from the L finger remained on in the Slow mode since the user desired to permit slip at the L finger only (Fig. 7(h)). When slip at both fingertips was upward (Fig. 7(k, l)), both vibrotactile stimulators were actuated in the Slow mode (Fig. 7(m)) and the subject chose to take no action (Fig. 7(n)), which produced the NM class (Fig. 7(o)) to permit slip at both fingers in this situation (Fig. 7(k, l)). When slip at the I finger was upward while slip at the L finger was simultaneously downward (Fig. 7(p, q)), the afferent ANNs classified this correctly and triggered the corresponding vibrotactile stimulators to be actuated in the Slow and Fast modes to supply haptic feedback from the I and L fingers, respectively (Fig. 7(r)). This prompted the subject to increase his EMG signals (Fig. 7(s)) to produce the L class (Fig. 7(t)), which halted slip at the L finger only (Fig. 7(q, r)).

A. EMG Classification Accuracy During Multichannel Sensorimotor Integration
All participants achieved reasonably high EMG classification accuracy, with an overall mean classification accuracy of 95.53% ± 0.23% (Fig. 8), which demonstrated high proficiency for sensorimotor integration during the simultaneous slip control problem. The subject with the highest classification accuracy was S4 with a mean accuracy of 97.95% ± 8.15%, while the subject with the lowest classification accuracy was S1 with a mean accuracy of 91.24% ± 20.10%. Averaged across all 12 subjects, the highest classification accuracy within an EMG class was the NM class with 100% mean accuracy, while the classification accuracies for the I, L, and Sim classes were 94.93% ± 4.43%, 91.72% ± 9.27%, and 91.30% ± 5.61%, respectively.
The ANOVA showed that there was no significant difference between the subjects' EMG classification accuracy or the different classes (p>0.05). However, the interaction between subjects and the classes was significant (p<0.05).
An example of this interaction can be seen with subject S8 who had lower accuracy with the Sim class while subject S10 had lower accuracy with the L class.
The mean EMG classification accuracy for amputees was 94.90% ± 13.88%, while the mean EMG classification accuracy for non-amputees was 94.36% ± 15.88%. The unbalanced ANOVA showed that the amputees' EMG classification accuracy was not significantly different from the non-amputees (p>0.05).

B. Subjects' Response Time to Multichannel Haptic Feedback
The mean RT of the three ULA people was 3.45s ± 0.80s, while the mean RT for non-amputee subjects was 2.53s ± 0.76s (Fig. 9). The average RT across all 12 subjects was 2.75s ± 0.54s. Subject S4 had the fastest mean RT of 1.62s ± 0.53s while subject S1 had the slowest RT of 3.59s ± 0.94s. The three amputee subjects (S1-S3) had the three slowest average RTs within the cohort.
The ANOVA on the RT across all subjects showed a statistically significant difference (p<0.01). The effect of EMG class on RT was also statistically significant (p<0.01). However, the interaction between the subjects and the classes was not statistically significant (p>0.05). The unbalanced ANOVA showed that the amputee RT was significantly different from the non-amputee RT (p<0.01).

IV. DISCUSSION
It remains an open question which approaches best maximize the human-prosthesis bandwidth for bidirectional information transfer. EMG pattern recognition for efferent control signals has attracted a great deal of interest in recent years. Building off this prior research, several commercially available products have emerged. Otto Bock offers the Myo Plus pattern recognition system, which relies upon eight EMG electrodes to classify the intended grip pattern of the amputee from forearm muscle signals [48]. Complete Control is a related product offered by Coapt for EMG pattern recognition and control of multiple DOFs of a prosthetic arm [49].
In contrast, most prosthetic hands still do not have tactile sensors [50] despite the significant amount of research on sensors and signal processing [51]. Consequently, haptic feedback for ULA people in clinical practice is largely unrealized. The explanation for this dearth of haptic feedback is multifaceted. The strict size, mass and power constraints on prosthetic limbs impose a difficult barrier to overcome, where performance tradeoffs must always be made. There are also the questions of the most important modalities of haptic feedback [25,52], whether to encode tactile events spatially or temporally [53], and the number of independent haptic feedback channels required for satisfactory control [10].
Expanding upon the question of the required number of haptic feedback channels, one reasonable viewpoint is that the capacity for sensation should be commensurate with the capability for control. Clearly, this has yet to come to fruition in terms of commercial products available to amputees: efferent control advancements have outpaced afferent feedback products. However, a different viewpoint is that the richness of haptic feedback should exceed the capability for control. There is a strong physiological precedent for this notion, as the mechanoreceptors in each fingertip number in the thousands [41]. We have explored the latter viewpoint in this paper, where the sensation space was larger than the action space both at the robotic level and from the perspective of the human in the loop. At the robotic level, we used ANNs for 24 taxels per fingertip to produce two haptic feedback sensations corresponding to upward or downward sliding contact. For the human in the loop, there were five possible combinations of sensations with only four possible control classes (TABLE 2) because of the situations where the same control choice would be a desirable response to different sensations. Enhancement of artificial hand sensation and haptic feedback will be fruitful avenues of future investigation to improve prosthetic arm embodiment [54] and reduce the prevalence of artificial limb rejection [55], which remains high because of poor dexterity and lack of sensory feedback, among other reasons [56].

A. Cognitive Load and Response Time to Haptic Stimuli
The impact of haptic feedback upon the cognitive load to operate a prosthetic hand has often been quantified by the human reaction time to haptic stimuli [46,47]. For example, it has been shown that amplitude modulation from a single vibrotactile stimulator can help reduce reaction time and cognitive load compared to spatially encoding the feedback via multiple stimulators [53]. It is well known that haptic feedback from a single channel can help reduce the cognitive load to perform a single task; however, the cognitive load will increase when scaling up the number of feedback channels [10]. Nevertheless, dexterous prosthetic hands of the future will require more sensory feedback to improve their functionality. In our paper, we used two channels of haptic feedback that were simultaneously activated with frequencies dependent upon the direction of sliding contact. The ANOVA showed a statistically significant impact of the simultaneous vibration mode upon the RT (p < 0.05; Fig. 9). In particular, the Sim class had the slowest RT averaged across all subjects and this is the situation where both vibrotactile stimulators were activated with the same Fast frequency. The other situations requiring an action from the subjects (I, L classes in Fig. 9) had both the Slow and Fast frequencies of vibration simultaneously from the two stimulators, and so could have been easier for most subjects to discern more rapidly. Subjects S6 and S12 were the only two people who had the fastest mean responses with the Sim class; the other ten subjects responded more quickly on average in situations where the frequencies of vibration from the two stimulators were different (I,L classes).
Regarding real-time efferent control, EMG pattern recognition approaches can reduce the cognitive load required to operate myoelectric hands [57], but there are differing reports regarding the utility of this approach for ULA people relative to non-amputees. For example, a comparison between two amputees and 12 non-amputees showed that the amputees had a lower success rate and required more time to complete a pick-and-place task with different objects using a regularized discriminant analysis grip classification algorithm and a hybrid EMG-inertial measurement unit sensor suite [58]. In contrast, another study used a KNN algorithm for EMG pattern recognition and found no statistically significant difference between non-amputees and amputees in the time needed to select a desired motion of myoelectric prostheses [23]. As noted in both of the aforementioned studies, additional experiments should be done with a larger sample size to explore the impact of congenital or traumatic hand amputation upon the cognitive load required to operate dexterous prostheses. It is possible that with additional training time [59], any differences that exist between amputees and non-amputees could be mitigated with the use of supporting apps that enable in-home practice and virtual visitations with clinicians. The ultimate goal should be to maximize prosthetic hand dexterity and functionality while minimizing the cognitive load required for operation.
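For readers unfamiliar with the KNN approach cited above, a minimal sketch follows; the feature dimensions, class labels, and synthetic EMG statistics are hypothetical and do not reproduce the cited study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical EMG feature vectors (e.g., mean absolute value per channel,
# 8 channels) for two motion classes; real pattern-recognition pipelines
# extract windowed time- and frequency-domain features from raw EMG.
X_train = np.vstack([rng.normal(0.2, 0.05, (40, 8)),
                     rng.normal(0.6, 0.05, (40, 8))])
y_train = np.array([0] * 40 + [1] * 40)   # 0 = hand open, 1 = hand close

def knn_predict(x, X, y, k=5):
    """Classify one feature vector by majority vote of its k nearest neighbors."""
    d = np.linalg.norm(X - x, axis=1)     # Euclidean distance to each sample
    nearest = y[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

sample = rng.normal(0.6, 0.05, 8)         # an unseen "close" gesture
label = knn_predict(sample, X_train, y_train)
```

KNN is attractive for this application because it requires no iterative training, which simplifies on-the-fly recalibration as electrode placement or limb condition changes between sessions.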
The data in our paper support the idea that amputees may experience a higher cognitive load in sensorimotor integration tasks involving pattern recognition of multiple channels of efferent and afferent information in comparison to non-amputees. This is supported by the fact that the three amputees in our study had a significantly slower RT in the multitasking experiments than the nine non-amputees (Fig. 9; p < 0.01); however, the amputees' EMG classification accuracies were not significantly different from those of the non-amputees (Fig. 8; p > 0.05). A traumatic upper limb amputation drives a large-scale cortical reorganization at the network level, extending beyond the sensorimotor cortex [60], which could have affected the RT of the ULA subjects S1 and S3, who suffered amputations as adults. However, subject S2, who has a congenital limb deficiency, also had a significantly slower RT.
During development, children with congenital limb deficiencies are deprived of afferent peripheral sensations that would normally shape the function of the assumed hand territory of the brain. Instead, compensatory movements involving the feet, lips, and forearm may provide afferent inputs to this region of the brain during maturation [61]. It therefore stands to reason that providing haptic feedback to children who are born without a hand could be highly beneficial to their future potential as adults.

V. CONCLUSION
The data in our paper have shown that people can interpret context-dependent information from two different vibrotactile stimulators simultaneously to concurrently control individual fingers of a dexterous artificial hand with EMG pattern recognition using an ANN (Fig. 8). We explored the potential of three amputees and nine non-amputees to perform this complex feat of multichannel sensorimotor integration in the context of deciding whether to permit or prevent sliding contact at two fingertips of an artificial hand individually and simultaneously. This kind of situational awareness underpins the refined manual dexterity inherently required to perform sophisticated tasks such as playing sports or a musical instrument like a violin (Fig. 1) [3]. Our results showed that the subjects were able to interpret the vibrotactile feedback from both haptic feedback channels simultaneously to correctly and concurrently control two fingers of the artificial hand with 95.53% ± 0.23% accuracy averaged across all 12 subjects. There was no statistically significant difference in the classification accuracy between the amputees and non-amputees; however, the amputees did require significantly more time to respond to the haptic stimuli in comparison to the non-amputees. This suggests that the amputees experienced a higher cognitive load to perform the same sensorimotor integration task that required interpretation of two channels of haptic feedback simultaneously. These results provide a small step towards the grand challenge of enabling amputees to dexterously control prosthetic hands.

A. Ethics approval and consent to participate
The experimental protocol was in accordance with the Declaration of Helsinki.

B. Consent for publication
The subjects gave permission for the use of their images.

C. Availability of data and materials
The data that support the findings of this study are available upon request.

D. Competing interests
The authors declare that they have no competing interests.

E. Funding
The research reported in this study was supported by the National Institute of Biomedical Imaging and