Cortical Reorganization in Unilateral Deafness Revealed by Auditory Evoked Response


 Profound unilateral deafness reduces the ability to localize sounds, an ability that depends on binaural hearing. Furthermore, findings from previous studies have shown that unilateral deafness can cause a substantial change in the pattern of cortical activation, leading to central reorganization across the whole brain. In the present study, we compared N1/P2 auditory cortical activities and the pattern of hemispheric asymmetry among normal hearing, unilaterally deaf, and simulated acute unilateral hearing loss groups while they passively listened to speech sounds from different locations. The results show that P2 latencies were prolonged for left-side stimulation and for greater azimuth angles in the horizontal plane. The source analysis revealed a differential lateralization pattern: N1 source activation in the normal hearing subjects was greater in the left hemisphere, whereas activity contralateral to the stimulated side was found for the right-sided deaf and simulated acute hearing loss groups. No hemispheric lateralization was found for the left-sided deaf or simulated acute hearing loss groups. In addition, cortical N1/P2 activities were inversely related to the duration of deafness in the right auditory region. These findings indicate that the cortical reorganization induced by monaural hearing deprivation differs depending on the side and duration of deafness.


Introduction
In the United States, at least 3-8.3% of the general population are unilaterally deaf, while 12-27 per 100,000 people have acquired unilateral hearing loss 1 . Although the unilaterally deaf population is increasing and the age at onset of deafness varies substantially from childhood to old age, understanding of the neurophysiological changes caused by unilateral deafness remains limited. The perceptual issues mainly associated with unilateral hearing loss are sound localization and understanding speech in noisy environments 2 . Both sound localization and speech perception in noise require adequately functioning binaural processing. For each task, a listener must successfully encode interaural level difference (ILD) and interaural time difference (ITD) information and remain sensitive to spectral cues [3][4][5] . However, unilateral hearing loss significantly impairs spatial hearing by disrupting binaural hearing. Moreover, monaural hearing deprivation yields a maladaptive change in the brain that may not be recoverable if the deprivation occurs during a critical period of brain development 6 . The findings of a previous report based on large datasets suggest that approximately 50% of children with unilateral hearing loss face auditory, linguistic, and behavioral issues that cannot be accounted for by audiological factors such as duration of hearing loss or etiology 7 . This has led to the hypothesis that cortical/cognitive factors, perhaps including the degree of neural plasticity, auditory memory, and attention, together account for these issues 8 .
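The dependence of the ITD on source azimuth can be illustrated with the spherical-head (Woodworth) approximation; the head radius and speed of sound used below are conventional textbook values, not measurements from this study:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (in seconds) for a source at
    the given azimuth, using the spherical-head Woodworth model."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# The ITD grows with azimuth, so a source at 60 degrees produces a much
# larger interaural delay than one at 15 degrees.
for az in (15, 60):
    print(f"{az:>2} deg: {woodworth_itd(az) * 1e6:.0f} us")
```

For a lateral source the model predicts an ITD of roughly half a millisecond, on the order of the values classically reported for human listeners.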
The plasticity of central auditory processing induces functional and/or structural changes in the brain that reorganize neural networks following auditory deprivation [9][10][11][12][13] . People with unilateral hearing loss develop a distinct pattern of brain reorganization that is related to poor peripheral representation of the spatial features of sound (e.g., ITD, ILD, and interaural phase difference). In normal hearing listeners passively listening to sounds from various positions, sounds are predominantly processed in the hemisphere contralateral to the sound location, integrating spatial cues for precise estimation of the sound location 14 .
Nonetheless, in unilaterally deaf people, the common pattern of contralateral dominance for the ear of stimulation is decreased, and more bilateral activation for sound processing has been observed 15 . Previous studies have suggested that the enhanced ipsilateral activity to the hearing ear can likely be attributed to bilateral activation rather than a decrease in the contralateral response 10,16 . More importantly, it has been suggested that left-sided and right-sided deafness influence neurological changes in the brain differently. For example, researchers conducting a neuroimaging study found stronger bilateral responses to complex modulated sounds in left-sided deaf individuals and strong ipsilateral activation on the side of the stimulation in right-sided deaf individuals 15 . Increased bilateral activity in left-sided deaf persons has been repeatedly shown 16,17 and could be related to a greater degree of cortical reorganization encompassing the frontal cortical regions that are frequently recruited to process degraded speech sounds 10 . Similarly, a recent study assessing cross-modal plasticity in long-term unilateral deafness reported that left-sided deaf individuals exhibited increased connectivity between the auditory cortex and brain regions involving the visual and sensorimotor networks, indicating a larger degree of cross-modal plasticity in left-sided deaf individuals compared to right-sided deaf individuals 18 . However, it has not yet been determined whether such plastic changes in the brain occur to achieve some degree of recovery from abnormal binaural sensations or are merely a consequence of hearing loss 9,10 . Given that scalp-recorded electroencephalography (EEG) represents neural mechanisms relevant to sound processing at different levels of the auditory system, it has been applied to assess the pattern and degree of cortical reorganization induced by monaural auditory deprivation.
The general findings are that deafness in one ear results in substantial changes in neural activity and that the pattern of these changes differs between left- and right-sided deafness. However, findings from some EEG studies examining the influence of the side of deafness on cortical reorganization have been inconsistent. For example, Hanss et al. 16 found that left-sided deaf individuals exhibited greater neurophysiological changes, including bilateral activation in response to tone bursts and more ipsilateral activity to the side of stimulation for speech sounds. Contrary to this finding, a significant inverse relationship of cortical N1 responses with behavioral speech perception ability and the duration of deafness was observed only in right-sided deaf persons, indicating greater neurophysiological changes due to right-sided deafness than left-sided deafness 19 . In children, alpha and theta activities during speech-in-noise listening reveal leftward asymmetries for normal hearing, whereas more lateralized activation on the side of stimulation was found in unilaterally deaf children, irrespective of the side of deafness 12 . Meanwhile, a study applying N1 source waveform analysis led the authors to conclude that long-term unilateral deafness did not alter hemispheric asymmetry 20 . Taken together, the effect of the side of deafness on hemispheric asymmetry in unilateral deafness remains unclear.
In this study, we compared the cortical activities of long-term unilaterally deaf participants and NH participants with one ear noise-masked and occluded to simulate acute unilateral hearing loss. For the simulated acute hearing loss control group, monaural occlusion caused temporary hearing deprivation and an imbalance between the two ears. This experimental model allowed us to predict how unilateral hearing loss causes functional changes in the central nervous system at the initial stage of chronic unilateral deafness. Considering the high prevalence and wide age range of people with unilateral hearing loss, understanding the neural changes induced by acute unilateral hearing loss would provide important insights into the optimal treatment of asymmetrical hearing loss. Evidence of cortical plasticity following acute unilateral hearing loss has been offered by numerous animal studies [21][22][23][24] . It has been reported that neurophysiological changes in the central auditory system are initiated soon after the loss of hearing sensation in one ear. Unilaterally deafened animals showed a considerable threshold shift in the hemisphere ipsilateral to the hearing ear with relatively normal activation on the contralateral side 24 . Kral et al. 21 demonstrated similar cortical reorganization in cats with acute unilateral hearing loss such that ipsilateral activity to the intact ear increased, and the degree of this enhancement decreased as the onset of deafness was delayed. To date, a limited number of studies have directly examined the effect of acute unilateral hearing loss on the human brain. The findings from those studies suggest that acute unilateral hearing loss can alter the normally observed contralateral dominance for the side of stimulation 25 .
In addition to the change in hemispheric asymmetry, altered functional connectivity involving widespread areas for auditory, visual, attention, and memory have also been exhibited in acute unilateral hearing loss individuals 26 .
In the current study, our aim was to compare the patterns of cortical activation and hemispheric asymmetry among long-term unilaterally deaf, acute unilateral hearing loss, and normal hearing controls. Given the relationship between behavioral spatial processing and cortical reorganization in unilateral deafness, we chose auditory stimuli varied in azimuth to evoke brain responses. Previous studies have suggested that a lesser degree of cortical reorganization is associated with better auditory and speech perception, indicating a clinical need for early intervention in individuals with asymmetrical hearing loss 27,28 . We hypothesized that long-term unilateral deafness would reveal significant cortical reorganization induced by monaural deprivation and that the pattern of hemispheric asymmetry would differ between the long-term unilaterally deaf and acute unilateral hearing loss groups. Last, we examined how deafness-induced modulation is related to an audiological factor, namely the duration of deafness. We hypothesized that the N1/P2 cortical activities in response to auditory stimuli are related to the duration of deafness in unilaterally deaf individuals, suggesting that these cortical responses can serve as markers of neurophysiological change with unilateral hearing loss.

CORTICAL POTENTIALS
Effects of sound location and unilateral deafness on N1/P2 cortical potentials
Figure 1 shows the grand mean waveforms for stimuli at -60°, -15°, 0°, +15°, and +60° azimuth angles for the NH, RAUHL, LAUHL, RUD, and LUD groups. The overall response was characterized by an N1 evoked at around 100 ms after stimulus onset, followed by a P2 response. Modulation of N1 as a function of sound location was more apparent at -60° and +60°, and less so at -15° and +15° azimuth angles.

P2 amplitude is related to the duration of deafness in unilaterally deaf individuals
To determine whether N1/P2 cortical activities are altered by unilateral deafness and whether the change is related to audiological factors, we examined the relationship between electrophysiological measurements and the duration of deafness in the unilaterally deaf subjects. Figure 2 shows that the averaged P2 amplitudes (across all azimuth angles) were negatively correlated with the duration of deafness in the unilaterally deaf groups (r = -0.7, p = 0.025), indicating that P2 decreases as the duration of deafness becomes longer. HSD post hoc test results revealed that the N1 source at 0° had greater leftward activation compared to -60° (p = 0.03), -15° (p = 0.009), and +15° (p = 0.036). In addition, the results of an ANOVA of the LI indicate differences across the groups [F(4, 46) = 2.76; p = 0.045] such that for the +15° azimuth angle, the N1 source activation in the NH group was more lateralized to the left whereas the RAUHL (p = 0.029) and RUD (p = 0.016) groups had rightward asymmetries. No hemispheric lateralization was found for the LAUHL and LUD groups.
Differences in source space
Among many possible comparisons of the conditions, we focused on -60° vs. +60° for the following reasons: (1) the results for the cortical potentials suggest that neural modulation as a function of sound location was more robust for -60° and +60° than for the other azimuth angles (Fig. 1); (2) the findings in previous reports suggest that N1 cortical activity is larger for stimuli containing more prominent spatial cues than for less spatially distinguishable stimuli 36 ; and (3) given that the -60° and +60° azimuth angles are closer to the hearing and deafened ears of the subjects than the other angles, these conditions could better represent the effect of unilateral deafness on source activation at the cortical level. Figure 4 shows t-test comparisons of -60° with +60° for the NH, RAUHL, and RUD groups. For the NH group, comparing -60° and +60° revealed significant clusters (p = 0.001) indicating greater activation in the left temporal area (Figure 4 top). For the RAUHL group, significant clusters (p = 0.021) in the right anterior temporal and right frontal regions indicate that source activity at +60° was larger than at -60° (Figure 4 middle). In addition, for the RUD group, a significant cluster (p < 0.001) in the right frontal lobe indicates greater activation at -60° compared to +60° (Figure 4 bottom). No significant differences were found for the LAUHL and LUD groups.
N1 source activation is related to the duration of deafness in unilaterally deaf individuals
N1 source activation values were averaged across all azimuth angles to test their correlation with the duration of deafness in the unilaterally deaf participants. Figure 5 shows a significant correlation between averaged N1 source activation and the duration of deafness involving the auditory regions. Figure 5a suggests that lower N1 source activation was associated with a longer duration of deafness. In the brain source space, a negative correlation (r = -0.65, p = 0.013) was found bilaterally (but more lateralized to the right hemisphere) in the temporal lobe (Figure 5b). Figure 5c shows that the significant clusters that survived multiple comparison corrections included the right auditory cortex and right inferior temporal lobe.

Discussion
The aim of this study was to characterize the neural changes induced by monaural hearing deprivation during passive listening to consonant-vowel syllables from different locations and to relate these changes to audiological factors in unilaterally deafened people. The neural activities in chronic unilaterally deaf individuals were compared with those in people with acute unilateral hearing loss to assess differences in neural changes between acute and chronic auditory deficits. The source analysis results suggest that the N1 source activity in the NH group was lateralized to the left hemisphere, while no hemispheric asymmetry was found in the left-sided deafness and acute hearing loss groups. Meanwhile, greater activation contralateral to the side of stimulation was found in the right-sided groups.
In addition, both sensor-level P2 and source-level N1 activities were associated with the duration of deafness in unilaterally deaf subjects.
In this study, leftward asymmetry was revealed in NH participants whereas the LUD/LAUHL and RUD/RAUHL groups showed no asymmetry and right-sided lateralization, respectively. Contrary to our findings, right-sided dominance for sound localization in normal hearing individuals has been reported in a number of studies, since auditory spatial information such as interaural time and intensity cues is mainly processed in the right hemisphere [37][38][39] . The discordance in hemispheric asymmetry between our study and the literature may have arisen from the listening paradigm used to evoke neural responses. Unlike most previous studies, in which spectrally varied artificial sounds such as tones, clicks, and noise bursts were applied 37,38 , we used natural speech syllables with rapid temporal change. It is well recognized that temporal and spectral information are preferentially processed in the left and right hemispheres, respectively 30,40,41 . Thus, the greater left hemispheric activation in the NH participants shown in this study could be related to central auditory processing of temporal information. Another possible explanation for the left hemispheric asymmetry is based on the theoretical assumption that passive listening to sounds involves implicit learning of acoustic information, which enhances the efficiency of sensory processing 42 . In a lesion study 43 , the evidence suggests that implicit spatial cues conveying spatial release from masking were not processed in persons with left hemispheric damage, whereas explicit sound localization performance was related to the right parietal and opercular cortex, indicating distinct neural networks for implicit and explicit spatial processing.
In this study, N1 source activation in the RUD and RAUHL groups was greater in the right hemisphere. More specifically, in the RUD group, auditory stimulation of the intact ear elicited greater source activity in the right fronto-parietal cortex, whereas in the RAUHL group larger right hemisphere activation was found in response to stimuli near the occluded ear when comparing -60° versus +60° (see Fig. 4). It is unclear why the RUD and RAUHL groups had distinct neural representations of the N1 source activity according to the side of stimulation. Nonetheless, we speculate that acute unilateral hearing loss motivated focusing on the stimulus that was difficult to perceive due to the relatively long distance from the hearing ear, whereas long-term unilaterally deaf subjects were more sensitive to sounds close to their intact ear, which enhanced sensory processing. Under the assumption that auditory perception of distant stimuli is comparable to sound processing in degraded listening conditions, an increase in N1 response with degraded stimuli could account for the enhancement in N1 source activity to sounds contralateral to the hearing side 44,45 . This finding calls for further study to assess the mechanisms underlying the distinct neural representations of acute hearing loss and chronic deafness.
In contrast to the RUD and RAUHL groups, the LUD and LAUHL groups did not reveal significant hemispheric lateralization in the source analysis. Similar findings from neuroimaging data have been reported in that right-sided deafness enhanced the contralateral response to monaural stimulation whereas left-sided deafness did not promote hemispheric asymmetry 10,16,20 . These results indicate that right- and left-sided deafness could affect cortical reorganization differently because right-sided deafness may be more resistant to plastic changes following hearing deprivation than left-sided deafness 17,46 . This notion is possibly supported by the findings of Khosla et al. 17 , who assessed hemispheric asymmetry using dipole source modeling; they found decreased ipsilateral-contralateral amplitude differences only for left-sided deafness, and the reduced interhemispheric difference observed in left-sided deaf individuals is accounted for by increased synaptic plasticity to achieve an excitatory-inhibitory balance between the hemispheres 24 . In addition, the results of an earlier study were interpreted as indicating that the symmetrical hemispheric activity in unilaterally deaf persons is due to increasing ipsilateral activity rather than decreasing contralateral activity to the hearing ear 47 . In the current study, the absence of hemispheric asymmetry in the left-sided deaf subjects could thus be related to enhanced ipsilateral activity to monaural stimulation because of synaptic plasticity.
A change in the location of the speech sounds evoked N1/P2 responses in this study. Among these, P2 latencies differed across the subject groups and sound locations such that P2 responses were prolonged in response to left-side stimulation and azimuth angles of -60° and +60° compared to right-side stimulation and azimuth angles of -15°, +15°, and 0°. The delayed response to left monaural stimulation could be related to the main hemisphere for processing spatial cues such as the ITD or interaural phase. The spectral information required for accurate sound localization is known to be predominantly processed in the right hemisphere 37,38 . For the RUD and RAUHL group subjects, sounds were presented to the left ear, thereby requiring a longer time to arrive at the right hemisphere through contralateral projection, which could have yielded the P2 prolongation. Similar findings were reported in a previous EEG study in that N1 latencies measured over the hemisphere ipsilateral to the side of stimulation were longer for right-sided deaf participants than for left-sided deaf participants 19 . Meanwhile, the longer latency for the ±60° azimuth angles is likely related to N1/P2 modulation as a function of sound location, in that changes in the ITD and the interaural phase or coherence elicited delayed N1/P2 responses because of the longer time needed to process these spatial cues 48,49 . Using the mismatch negativity evoked by infrequent changes in sound location, Sonnadata et al. 49 found a longer latency of the positive peak in response to a large angle (90°) compared to a smaller one; they also suggested that this could be partially interpreted by applying the spatial channel theory of Boehnke and Phillips, who posited that 0° and 30° are processed in the same spatial channel whereas 0° and 90° (angles larger than 30°) do not share a channel 50 .
Moreover, according to this theory, spatial location information belonging to different channels is not processed at lower levels of the auditory system but rather in the higher-order auditory cortex. Thus, we assume that the prolonged latency for the ±60° azimuth angles could be associated with the precise spatial processing properties of the central auditory system for between-channel discrimination.
Sensor-level P2 responses and source-level N1 activities were significantly associated with the duration of deafness in the area encompassing the auditory cortex. This result suggests that the longer the duration of unilateral deafness, the more substantial the neurological changes at the cortical level, as reflected by the N1/P2 responses. These findings are in agreement with those from previous studies showing that the changes in cortical activity in unilaterally deaf individuals occur gradually over time after the onset of deafness 19,46,51 . In fact, the relationship between N1/P2 responses and deafness duration is somewhat expected because a large number of auditory-evoked potential studies examining cortical plasticity have demonstrated that the N1/P2 components are highly responsive to neural changes caused by hearing deprivation and restoration with hearing devices such as cochlear implants [52][53][54] . For example, in adult unilaterally deaf persons, a decrease in N1 response is associated with reduced speech-in-noise perception, and activity contralateral to the side of stimulation weakens as the duration of deafness becomes longer 12,19 . Taken together, the findings of studies on unilateral deafness suggest that people who have experienced a shorter duration of deafness exhibit relatively robust cortical activity due to smaller plastic changes compared to those who have experienced a longer period of hearing loss, indicating a clear need for early treatment of people with asymmetrical hearing loss.
In summary, we have provided additional evidence that unilateral hearing loss can incur a substantial change in central auditory processing and that these changes are more prominent in people with a longer duration of hearing loss. The neural changes are reflected in the N1/P2 cortical responses and in hemispheric lateralization in the brain source space, which differ for left-sided and right-sided deafness. Our data suggest that early intervention, including adequate use of hearing assistive devices, could have beneficial outcomes for people with unilateral hearing loss.

Participants
Ten adults who were right-sided unilaterally deaf (RUD; 6 female, mean age: 52.7 ± 6.2 years) and 10 who were left-sided unilaterally deaf (LUD; 6 female, mean age: 41.9 ± 16.8 years) were recruited through the Department of Otolaryngology at the Hallym University Medical Center. All unilaterally deaf participants were right-handed and had profound hearing loss in one ear (average pure-tone audiometry threshold > 90 dB HL) without hearing devices for more than one year and normal hearing (pure-tone thresholds < 20 dB HL from 0.25 to 4 kHz, and present OAEs) in the other ear. None of the unilaterally deaf participants had used a hearing aid before participating in this study. Thirty age- and gender-matched normal hearing adults were recruited for comparison with the unilaterally deaf groups. The normal controls were subdivided into three groups of 10: a normal hearing group (NH, 7 female, mean age: 52.2 ± 6.9 years), 10 with their left ear noise-masked and occluded (LAUHL: left-side acute unilateral hearing loss, 7 female, mean age: 51.2 ± 8.3 years), and 10 with their right ear noise-masked and occluded (RAUHL: right-side acute unilateral hearing loss, 7 female, mean age: 44.1 ± 16.4 years). The RAUHL, LAUHL, and NH group participants had normal pure-tone average thresholds in both ears and no neurological or cognitive issues. Informed consent was obtained from all participants prior to testing. All experimental protocols used in this study were approved by the Hallym University Medical Center Institutional Review Board (IRB no. 2019-02-019), and all methods were performed in accordance with its guidelines and regulations. A summary of the demographic data and statistical comparisons among the groups is provided in Table 1.

Stimuli and procedure
Figure 6 shows the speech stimuli and sound localization paradigm applied in this study.
Natural /ba/-/pa/ speech stimuli were used to evoke cortical responses. The speech stimuli were recorded from utterances by a standard Korean male speaker. The overall duration of each stimulus was 470 ms, and the voice onset times were 30 and 100 ms for /ba/ and /pa/, respectively (Fig. 6a). The stimuli were presented through a StimTracker (Cedrus Corporation, CA, USA) system that allowed EEG synchronization with the sound, and they were calibrated using a Brüel and Kjaer (2260 Investigator, Naerum, Denmark) sound level meter set for frequency and slow time weighting with a ½ inch free-field microphone.
Speech stimuli were presented through five loudspeakers at five different azimuth angles of -60°, -15°, 0°, +15°, and +60°, where '+' indicates the right side and '-' indicates the left side (Fig. 6b). Subjects were seated in the center of the speaker array in a sound-attenuated booth. All speakers were located 1.2 m from the subject at ear level, and sounds were presented at 70 dB sound pressure level (SPL). Note that for the acute unilateral hearing loss groups, one ear was masked with a masking noise delivered through a Bluetooth earphone (QCY 5.0 Earbuds, Beijing, China). The noise masker was speech-shaped noise derived from the speech stimuli used in this study, with an overall intensity at a root-mean-squared level of 55 dB SPL. The inter-stimulus interval from sound offset to onset was fixed at 1.5 s, and stimuli were presented in random order. A total of 1000 trials (100 trials each for the /ba/ and /pa/ sounds at the five different azimuth angles) were presented across two blocks. During recording, subjects were instructed to ignore the sounds while they watched a closed-captioned movie of their choice. Breaks were given upon request. The total recording time was approximately 40 min.
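The trial structure described above (100 trials per syllable at each of the five azimuths, presented in random order with a fixed 1.5 s offset-to-onset interval) can be sketched as a simple sequence generator; the function name and seed below are illustrative and not part of the authors' presentation software:

```python
import random

AZIMUTHS = [-60, -15, 0, 15, 60]   # degrees; '+' is right, '-' is left
SYLLABLES = ["ba", "pa"]
TRIALS_PER_CONDITION = 100         # 2 syllables x 5 angles x 100 = 1000 trials
STIM_DURATION_S = 0.470
ISI_S = 1.5                        # offset-to-onset interval

def make_trial_sequence(seed=None):
    """Return a randomized list of (syllable, azimuth) trials."""
    rng = random.Random(seed)
    trials = [(syl, az) for syl in SYLLABLES for az in AZIMUTHS
              for _ in range(TRIALS_PER_CONDITION)]
    rng.shuffle(trials)
    return trials

trials = make_trial_sequence(seed=1)
print(len(trials))                                    # 1000
print(len(trials) * (STIM_DURATION_S + ISI_S) / 60)   # ~33 min of stimulation
```

The stimulation time alone comes to roughly 33 minutes, consistent with the reported total recording time of about 40 minutes once breaks between blocks are included.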

EEG recording
Electrophysiological data were collected using a 64-channel actiCHamp recording system (Brain Products GmbH, Munich, Germany). An electrode cap was placed on the scalp with electrodes positioned at equidistant locations 29,30 . The reference channel was positioned at the vertex while the ground electrode was located on the midline at 50% of the distance to the nasion. Continuous data were digitized at 1000 Hz and stored for offline analysis.

Data processing
Electrophysiological data were preprocessed using Brain Vision Analyzer 2.0 (Brain Products GmbH, Munich, Germany). Data were band-pass filtered (1-50 Hz) and down-sampled to 500 Hz. Visual inspection of the data included the removal of artifacts related to subject movement (exceeding 500 µV).
Independent component analysis (ICA) 31 implemented in Brain Vision Analyzer was applied to remove artifacts related to eye blinking and movement, and cardiac activity.
After ICA artifact reduction, the data were low-pass filtered at 20 Hz, segmented from -200 to 1000 ms with 0 ms at stimulus onset, and re-referenced to the average reference. Averages were obtained for each of the azimuth angles. Subsequent peak detection for the N1/P2 components was performed on fronto-central electrodes. Since we used an electrode cap with equidistant locations, N1/P2 were measured from the averaged activities of three electrodes located around Cz in the international 10-20 system 30,32 .
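A minimal NumPy sketch of this segmentation, averaging, and peak-picking sequence is given below. It assumes the 500 Hz post-downsampling rate and the -200 to 1000 ms window stated above; the function names and the pre-stimulus baseline correction are illustrative additions, not the exact Brain Vision Analyzer pipeline:

```python
import numpy as np

FS = 500                # sampling rate after down-sampling (Hz)
EPOCH = (-0.2, 1.0)     # segment window relative to stimulus onset (s)

def epoch_and_average(data, onsets):
    """Cut (-200, 1000) ms segments around each stimulus onset and average.
    data: (n_channels, n_samples) continuous EEG; onsets: sample indices."""
    pre = int(round(-EPOCH[0] * FS))
    post = int(round(EPOCH[1] * FS))
    segs = np.stack([data[:, o - pre:o + post] for o in onsets])
    segs -= segs[:, :, :pre].mean(axis=2, keepdims=True)  # baseline-correct
    return segs.mean(axis=0)                              # (n_channels, n_times)

def peak_latency_amplitude(trace, window_s, sign):
    """Find the N1 (sign=-1) or P2 (sign=+1) peak of a 1-D averaged trace
    (e.g. the mean of three fronto-central electrodes) in a latency window."""
    pre = int(round(-EPOCH[0] * FS))
    i0 = pre + int(round(window_s[0] * FS))
    i1 = pre + int(round(window_s[1] * FS))
    idx = i0 + int(np.argmax(sign * trace[i0:i1]))
    return (idx - pre) / FS, trace[idx]
```

Peak latency is returned in seconds relative to stimulus onset, so an N1 picked near 0.1 s matches the ~100 ms deflection described in the results.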

Source analysis
Averaged segments were analyzed in BESA (Brain Electrical Source Analysis) for each electrode location.
swLORETA was performed as previously described 30,33 . As a first step, swLORETA analysis yielded maximal brain source activations as a function of time. For the auditory N1 responses, swLORETA modeling was conducted in a 20 ms window in which maximal peaks were revealed in the grand mean waveform. Under most conditions, the local maxima included the left and right auditory and frontal regions. Once the source maxima had been identified, the Talairach coordinates of the left and right auditory cortices were used to create grand averaged virtual source time (VST) activations for each condition. Next, two dipoles were inserted at each of the source maxima to obtain activation time courses. In this step, the source activation in the 20 ms window was averaged to obtain VST activation separately for the left and right auditory cortices. The VST was used to compute a lateralization index (LI) for each condition. Positive and negative LI values indicate left- and rightward asymmetries, respectively, and values exceeding ±0.2 were considered lateralized 34 .
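The lateralization step can be expressed compactly. The text specifies that positive LI values mean leftward asymmetry and that |LI| > 0.2 counts as lateralized; the normalized-difference formula (L - R)/(L + R) used below is a common convention assumed here, as the exact expression is not given in this excerpt:

```python
def lateralization_index(left_activity, right_activity):
    """LI from mean left/right VST activations in the 20 ms N1 window.
    Positive values indicate leftward asymmetry, negative values rightward."""
    return (left_activity - right_activity) / (left_activity + right_activity)

def classify(li, threshold=0.2):
    """Label an LI as lateralized only if it exceeds the +/-0.2 criterion."""
    if li > threshold:
        return "left-lateralized"
    if li < -threshold:
        return "right-lateralized"
    return "bilateral"

print(classify(lateralization_index(8.0, 5.0)))   # (8-5)/13 = 0.23 -> left-lateralized
print(classify(lateralization_index(6.0, 5.5)))   # 0.04 -> bilateral
```

Because the index is normalized by the total activation, it compares hemispheric balance across subjects and conditions independently of overall response strength.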

Statistical analysis
Repeated-measures ANOVA was performed on the N1/P2 potentials to examine the effects of sound location and subject group on the amplitudes and latencies of each component. Post hoc comparisons were conducted using Tukey's Honest Significant Difference (HSD) test. To examine relationships between audiological factors and brain responses in the unilaterally deaf groups, cortical measures of the N1/P2 were compared with the duration of deafness using Pearson product-moment correlations. Differences in source strength in the brain source space across the listening conditions were tested by applying paired t-tests corrected for multiple comparisons with Monte-Carlo resampling techniques implemented in BESA Statistics 2.0 35 . Clusters of voxels with p-values of less than 0.05 were considered significant. BESA Statistics was also used to compute correlations between the duration of deafness and source activity for each unilaterally deaf subject. This process yielded a correlation value for each voxel in the brain space relating source activity to the duration of deafness. Nonparametric cluster permutation tests were conducted to determine the statistical significance of the correlations between source activation and the duration of deafness.
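The permutation logic behind these nonparametric tests can be sketched for a single measure (one voxel, one group). This is a simplified single-test illustration, not BESA Statistics' cluster-level procedure, which additionally forms clusters over neighboring voxels and permutes at the cluster level; the example data are synthetic:

```python
import numpy as np

def permutation_corr_test(x, y, n_perm=5000, seed=0):
    """Nonparametric test of a Pearson correlation: shuffle one variable to
    build a null distribution of r and compare |r_obs| against it."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(x, y)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = np.corrcoef(x, rng.permutation(y))[0, 1]
    # Add-one correction keeps the p-value away from exactly zero.
    p = (np.sum(np.abs(null) >= abs(r_obs)) + 1) / (n_perm + 1)
    return r_obs, p

# Synthetic data: a measure that decreases with "duration of deafness".
rng = np.random.default_rng(1)
duration = np.sort(rng.uniform(1, 30, size=12))
activation = -0.1 * duration + rng.normal(0, 0.5, size=12)
r, p = permutation_corr_test(duration, activation)
print(f"r = {r:.2f}, p = {p:.4f}")
```

Because the p-value comes from the empirical null distribution rather than a parametric formula, the test makes no normality assumption about the voxel-wise source activations.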

Declarations
Data availability. The datasets generated during the current study are available from the corresponding author on reasonable request.