Participants
Ten adults with right-sided unilateral deafness (RUD; 6 female, mean age: 52.7 ± 6.2 years) and 10 with left-sided unilateral deafness (LUD; 6 female, mean age: 41.9 ± 16.8 years) were recruited through the Department of Otolaryngology at the Hallym University Medical Center. All unilaterally deaf participants were right-handed and had profound hearing loss in one ear (average pure-tone audiometry threshold > 90 dB HL) without hearing devices for more than one year, and normal hearing in the other ear (pure-tone thresholds < 20 dB HL from 0.25 to 4 kHz, with present otoacoustic emissions). None of the unilaterally deaf participants had used a hearing aid before participating in this study. Thirty age- and gender-matched normal-hearing adults were recruited for comparison with the unilaterally deaf groups. The normal controls were sub-divided into three groups of 10: a normal-hearing group (NH; 7 female, mean age: 52.2 ± 6.9 years), a group with the left ear noise-masked and occluded (LAUHL: left-sided acute unilateral hearing loss; 7 female, mean age: 51.2 ± 8.3 years), and a group with the right ear noise-masked and occluded (RAUHL: right-sided acute unilateral hearing loss; 7 female, mean age: 44.1 ± 16.4 years). The RAUHL, LAUHL, and NH participants had normal pure-tone average thresholds in both ears and no neurological or cognitive issues. Informed consent was obtained from all participants prior to testing. All experimental protocols were approved by the Hallym University Medical Center Institutional Review Board (IRB no. 2019-02-019), and all methods were performed in accordance with its guidelines and regulations. A summary of the demographic data and statistical comparisons among the groups is provided in Table 1.
Table 1.
Demographic data for the unilateral deafness, acute unilateral hearing loss, and normal hearing groups.
| | LUD (n=10) | RUD (n=10) | RAUHL (n=10) | LAUHL (n=10) | NH (n=10) | Statistics |
|---|---|---|---|---|---|---|
| Age (years, mean ± SD) | 41.9 ± 16.8 | 52.7 ± 6.2 | 44.1 ± 16.4 | 51.2 ± 8.3 | 52.2 ± 6.9 | F = 1.78, p = 0.14 |
| Gender (male/female) | 4/6 | 4/6 | 3/7 | 3/7 | 3/7 | χ² = 0.53, p = 0.97 |
| Duration of deafness (years, mean ± SD) | 14.0 ± 16.2 | 19.6 ± 12.1 | – | – | – | t = 0.86, p = 0.39 |
| Deafness onset (years, mean ± SD) | 33.4 ± 22.4 | 24.7 ± 22.4 | – | – | – | t = 0.27, p = 0.78 |
LUD, left-sided unilaterally deaf; RUD, right-sided unilaterally deaf; LAUHL, left-sided acute unilateral hearing loss; RAUHL, right-sided acute unilateral hearing loss; NH, normal hearing.
Stimuli and procedure
Figure 6 shows the speech stimuli and sound localization paradigm applied in this study. Natural /ba/-/pa/ speech stimuli were used to evoke cortical responses. The speech stimuli were recorded from utterances by a standard Korean male speaker. The overall duration of each stimulus was 470 ms, and the voice onset times were 30 and 100 ms for /ba/ and /pa/, respectively (Fig. 6a). The stimuli were presented through a StimTracker (Cedrus Corporation, CA, USA) system that allowed for EEG synchronization with the sound, and they were calibrated using a Brüel and Kjær (2260 Investigator, Nærum, Denmark) sound level meter set for frequency and slow time weighting with a ½ inch free-field microphone.
Speech stimuli were presented through five loudspeakers at five azimuth angles of −60°, −15°, 0°, +15°, and +60°, where '+' indicates the right side and '−' indicates the left side (Fig. 6b). Subjects were seated at the center of the speaker array in a sound-attenuated booth. All speakers were located 1.2 m from the subject at ear level, and sounds were presented at 70 dB sound pressure level (SPL). For the acute unilateral hearing loss groups (RAUHL and LAUHL), one ear was masked with noise delivered through a Bluetooth earphone (QCY 5.0 Earbuds, Beijing, China). The masker was speech-shaped noise derived from the speech stimuli used in this study, presented at a root-mean-square level of 55 dB SPL. The inter-stimulus interval from sound offset to onset was fixed at 1.5 s, and stimuli were presented in random order. A total of 1000 trials (100 trials each for the /ba/ and /pa/ sounds at each of the five azimuth angles) were presented across two blocks. During recording, subjects were instructed to ignore the sounds while watching a closed-captioned movie of their choice. Breaks were given upon request. The total recording time was approximately 40 min.
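The trial structure described above (2 stimuli × 5 azimuths × 100 repetitions, shuffled and split into two blocks) can be sketched as follows. This is an illustrative reconstruction; the variable names and seed handling are ours, not taken from the study's presentation software.

```python
import random

# Illustrative sketch of the randomized presentation order.
STIMULI = ["ba", "pa"]
AZIMUTHS = [-60, -15, 0, 15, 60]  # degrees; '+' = right, '-' = left
REPS = 100                        # repetitions per stimulus/azimuth pair
ISI_S = 1.5                       # offset-to-onset interval, seconds

def make_trial_list(seed=0):
    """Build 1000 shuffled (stimulus, azimuth) trials, split into two blocks."""
    trials = [(s, a) for s in STIMULI for a in AZIMUTHS for _ in range(REPS)]
    rng = random.Random(seed)
    rng.shuffle(trials)
    half = len(trials) // 2
    return trials[:half], trials[half:]

block1, block2 = make_trial_list()
print(len(block1), len(block2))  # 500 500
```

With 1000 stimuli of 470 ms each plus a 1.5 s offset-to-onset interval, the nominal presentation time is roughly 33 min, consistent with the reported ~40 min total recording time once breaks are included.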
EEG recording
Electrophysiological data were collected using a 64-channel actiCHamp recording system (Brain Products GmbH, Munich, Germany). An electrode cap was placed on the scalp with electrodes positioned at equidistant locations29,30. The reference channel was positioned at the vertex, and the ground electrode was located on the midline at 50% of the distance to the nasion. Continuous data were digitized at 1000 Hz and stored for offline analysis.
Data processing
Electrophysiological data were preprocessed using Brain Vision Analyzer 2.0 (Brain Products GmbH, Munich, Germany). Data were band-pass filtered (1–50 Hz) and down-sampled to 500 Hz. Segments containing movement-related artifacts exceeding 500 µV were removed by visual inspection. Independent component analysis (ICA)31, as implemented in Brain Vision Analyzer, was applied to remove artifacts related to eye blinks, eye movements, and cardiac activity.
After ICA artifact reduction, the data were low-pass filtered at 20 Hz, segmented from −200 to 1000 ms relative to stimulus onset (0 ms), and re-referenced to the average reference. Averages were obtained for each azimuth angle. Peak detection for the N1/P2 components was then performed on fronto-central electrodes. Because the electrode cap used equidistant locations, N1/P2 were measured from the averaged activity of the three electrodes closest to Cz in the international 10-20 system30,32.
Source analysis
Averaged segments were analyzed in BESA (Brain Electrical Source Analysis) for each electrode location. swLORETA was performed as previously described30,33. First, swLORETA analysis yielded maximal brain source activations as a function of time. For the auditory N1 responses, swLORETA modeling was conducted in a 20 ms window in which maximal peaks were observed in the grand mean waveform. Under most conditions, the local maxima included the left and right auditory and frontal regions. Once the source maxima had been identified, the Talairach coordinates of the left and right auditory cortices were used to create grand-averaged virtual source time (VST) activations for each condition. Next, two dipoles were inserted at the source maxima to obtain activation time courses, and the source activation was averaged over the 20 ms window to obtain VST activation separately for the left and right cortices. The VST was used to compute a lateralization index (LI) for each condition. Positive and negative LI values indicate left- and rightward asymmetries, respectively, and values exceeding ±0.2 were considered lateralized34.
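The text does not give the LI formula explicitly. A common normalized-difference form, assumed here, is LI = (L − R) / (L + R), where L and R are the mean left and right VST activations; with that definition the ±0.2 lateralization criterion can be applied as follows:

```python
def lateralization_index(left, right):
    """Normalized asymmetry of left vs. right source activation.
    Assumed form LI = (L - R) / (L + R); the paper does not state the
    exact formula. Positive values indicate leftward asymmetry."""
    return (left - right) / (left + right)

def classify(li, threshold=0.2):
    """Apply the +/-0.2 lateralization criterion from the text."""
    if li > threshold:
        return "left-lateralized"
    if li < -threshold:
        return "right-lateralized"
    return "bilateral"
```

For example, left and right activations of 3.0 and 1.0 (arbitrary units) give LI = 0.5, which exceeds +0.2 and would be counted as left-lateralized.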
Statistical analysis
Repeated-measures ANOVA was performed on the N1/P2 amplitudes and latencies to examine the effects of sound location and subject group for each component. Post hoc comparisons were conducted using Tukey's Honest Significant Difference (HSD) test. To examine relationships between audiological factors and brain responses in the unilaterally deaf groups, cortical N1/P2 measures were compared with the duration of deafness using Pearson product-moment correlations. Differences in source strength across the listening conditions were tested with paired t-tests corrected for multiple comparisons using Monte Carlo resampling, as implemented in BESA Statistics 2.035. Clusters of voxels with p-values below 0.05 were considered significant. BESA Statistics was also used to correlate the duration of deafness with source activity for each unilaterally deaf subject, yielding a correlation value for each voxel in the brain source space. Nonparametric cluster permutation tests were conducted to determine the statistical significance of these correlations between source activation and the duration of deafness.
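The cluster statistics above were computed in BESA Statistics. As an illustration of the underlying Monte Carlo logic for a single voxel (before cluster formation), a minimal permutation test of a Pearson correlation between duration of deafness and source activity might look like this; the function names and permutation count are ours, not BESA's:

```python
import random

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sx * sy)

def permutation_p(x, y, n_perm=5000, seed=0):
    """Two-tailed Monte Carlo p-value: fraction of label shuffles whose
    |r| meets or exceeds the observed |r| (with the +1 small-sample guard)."""
    rng = random.Random(seed)
    r_obs = abs(pearson_r(x, y))
    y_shuf = list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(y_shuf)
        if abs(pearson_r(x, y_shuf)) >= r_obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Cluster permutation testing extends this idea by recomputing the statistic map on each shuffle and comparing observed cluster masses against the resulting null distribution, which controls the family-wise error across voxels.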