Effect of Pleasantness and Unpleasantness of Music on the Acceptable Noise Level

DOI: https://doi.org/10.21203/rs.3.rs-1245093/v1

Abstract

Introduction

Music plays a major role in life: it can be a favorite piece, an annoyance, or even a distracting noise. Previous research has suggested that hemispheric processing in the brain is affected by the pleasantness or unpleasantness of music, which could therefore be utilized as a signal or as noise in auditory neuroscience. Using the acceptable noise level (ANL) test, which quantifies noise tolerance while listening to running speech, we investigated whether the pleasantness or unpleasantness of music affects ANL results under monotic and dichotic listening conditions.

Design

Based on subjective scale scores, pleasant and unpleasant music (10 songs across three genres: classical, pop, and rock) was selected as an alternative to the babble noise or the running speech for testing 50 subjects under seven conditions in both monotic and dichotic listening.

Results

Under monotic listening conditions, pleasant music changed the ANL significantly and a higher level of babble noise was tolerated, whereas both the pleasantness and the unpleasantness of music changed the ANL significantly under various dichotic conditions. The range of the ANL for the dichotic conditions is wider than that for the monotic conditions.

Conclusion

Music can affect ANL results in terms of pleasantness and unpleasantness under both monotic and dichotic listening conditions, with a greater effect under dichotic conditions, indicating a role for hemispheric specialization in emotional music processing.

Introduction

Music is a central feature of daily life across cultures [1, 2]; indeed, Americans spend more on music than on prescription drugs [3]. It has been reported that people listen to music for more than five hours a day [4, 5]. Music, which is composed of physically complex sounds, evokes emotion related to its harmony, rhythm, and melody; on this basis, the listener decides whether they prefer the music they hear [6]. Whether someone likes or dislikes a piece of music depends on subject-related factors such as biography, preferences, age, and duration of exposure to music [7–9]. Additionally, factors such as society, culture, experience, and personality affect music preferences [10].

Although the left and right hemispheres of the brain are dominant for language processing [11, 12] and emotional processing (especially of negative feelings) [11, 13–16], respectively, there are overlapping neural resources for music and language processing [17–19]. It has been shown that a higher rating of musical unpleasantness correlates with activation of the right parahippocampal gyrus, whereas a lower rating of unpleasantness correlates with activation of the frontopolar, orbitofrontal, and subcallosal cingulate cortices [20]. In addition, pleasant music activates limbic and paralimbic areas (e.g., the subcallosal cingulate cortex, anterior insula, posterior hippocampus, and part of the ventral striatum) [21]. Thus, while the auditory cortex is associated with perceptual analysis [20, 22], the paralimbic areas are associated with the emotional evaluation of dissonance [20, 23]. Unpleasant music activates right frontopolar and paralimbic areas, whereas pleasant music activates left primary auditory, posterior temporal, inferior parietal, and prefrontal areas [18].

Substantial information on the effects of music on humans, even as noise, has been provided by psychological studies [24]. Music selection and preference are strongly based on explicit characteristics such as age, personality, and values [25]. If music is not the auditory target for a listener, then, despite being pleasant and preferred, it acts as noise and draws the listener's attention away from the intended target. Compared with other typical noises (e.g., environmental, babble, and computer noises), music affects people markedly [26–28].

The acceptable noise level (ANL), originally introduced by Nabelek [29], quantifies an individual's tolerance of noise while listening to an auditory target (running speech). While some factors (the listener's age and sex, and the gender of the speaker) do not affect the ANL [29–33], it is strongly mediated by non-peripheral factors such as the auditory efferent system [34], higher auditory processing centers [34], and the type of noise [29, 35, 36]. Binaural processing centers beyond the level of the superior olivary complex are involved in the ANL. In addition, Nabelek showed that light music used as background noise increased the ANL [29], implying that music (with meaningful content) acts more effectively than babble noise (with less meaningful content). However, when six types of rock music and babble noise were used as background noise, subjects tolerated a higher level of music than of 12-talker babble noise [37].

An investigation of the effects of the type of background music (classical, pop, and Korean pop) showed that the type of music affects the ANL [36]. Top-down processing also influences the ANL [2]. The meaningfulness of the content affects the ANL; more noise is tolerated with meaningful speech materials than with non-meaningful materials. Various auditory stimuli have been used in previous research to measure the ANL [35, 38–40]; however, only a few studies have used music as the background noise for ANL measurement [29, 36, 37], and music has not been tested as a replacement for the running speech in the traditional ANL measurement. Meanwhile, monotic and dichotic ANL testing has been performed with running speech and babble noise as the signal and noise, respectively [34], but there has been no research on ANL measurement with dichotically presented music. Dichotic listening, which presents different stimuli simultaneously to each ear, is a behavioral approach for assessing hemispheric asymmetry in the auditory modality [41]. Studies have shown that the left hemisphere is specialized for verbal processing, while the right hemisphere dominates in the processing of musical characteristics such as the detection of timbre [42].

Therefore, this study aimed to investigate the effect of pleasant and unpleasant music on the ANL measured monotically and dichotically under headphones, with pleasant or unpleasant music used as the noise or the signal. These results were then compared with typical ANL measures obtained with running speech and babble noise as the signal and background noise, respectively.

Materials and Methods

Participants

A total of 50 subjects (34 females and 16 males) aged 18–39 years participated in this study. All subjects were right-handed students of the Rehabilitation School (Beheshti University of Medical Sciences, BUMS, Tehran, Iran) with normal hearing, confirmed by normal results on otoscopy, immittance testing (tympanometry and acoustic reflexes), and pure-tone and speech audiometry. None of the participants had any history of neurological pathology. All participants provided written informed consent. All methods were carried out in accordance with the relevant guidelines and regulations of BUMS, and all experimental protocols were approved by the Human Research Ethics Committee (BUMS, Ethical Code No. IR.SBMU.RETECH.REC.1396.571).

Music Selection

In the first phase of the study (song selection), we considered the 25 most popular songs from bestselling Persian albums (as reported by Keihan Newspaper, Tehran, Iran) and those with the highest number of downloads (https://www.radiojavan.com/) over the last two years. All songs, originally in Persian, were played for 50 people, who were asked to subjectively rate each song from 1 (low) to 10 (high). Based on these ratings, 10 songs were chosen.

In the second phase of the study (song ranking), all participants were asked to listen to the ten selected songs for as long as they wanted. They were then asked to rate each song on a visual scale from 1 (less preferred) to 10 (highly preferred). The score for each song was recorded separately on a form that addressed all song-related components: the singer's voice, the rhythm, and the subjective feeling after listening (Table 1).

In the third phase, the songs were saved as separate ".wav" files on a computer, and the root mean square (RMS) level of each song was derived using Adobe Audition software (Adobe Co, 2017, USA).

Table 1. Music tracks used as pleasant and unpleasant music, with the mean subjective scores provided by the participants

| Song & Singer | Mean Score |
| --- | --- |
| 30 Salegi, by Ehsan Khajeh Amiri | 7.84 |
| Roya-ye Bi Tekrar, by Ali Zand Vakili | 7.02 |
| Harmless Ruler, by Mohsen Chavoshi | 6.70 |
| The Road's Dancing, by Charttaar band | 6.14 |
| Full Length Mirror, by Mehdi Yarrahi | 5.76 |
| Dele Majnoon, by Mohammad Reza Shajarian | 5.18 |
| Khoda Hamin Havalie, by Hamed Homayoun | 5.02 |
| Ta Nafas Hast, by Shahram Shokoohi | 4.40 |
| Absolute Nothingness, by Hafez Nazeri | 4.00 |
| Manshour, by Kave Yaghmaei | 2.98 |

The ANL Testing

In sum, the ANL tests were administered in two ways:

  1. The typical ANL test, in which a female speaker’s running speech (the speech signal) is played against 12-talker babble noise (the noise signal), and
  2. The modified ANL tests, which combine a female speaker’s running speech, highly preferred (pleasant) music, and less preferred (unpleasant) music in various ways as the speech and noise signals.

For typical ANL testing, the Persian version of the ANL test was used (Tehran University of Medical Sciences, TUMS) [43]. For the modified ANL conditions, the RMS level of each song was matched to that of the female speaker’s speech and the 12-talker babble noise using Adobe Audition software (Adobe Co, 2017, USA).
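As a rough illustration of this level-matching step, the following is a minimal sketch, assuming Python with numpy and soundfile (the authors used Adobe Audition, and the file names here are hypothetical), of how a song can be rescaled so that its RMS level matches that of a reference recording such as the running speech or the babble noise.

```python
import numpy as np
import soundfile as sf

def rms(x: np.ndarray) -> float:
    """Root mean square of an audio signal (mixed down to mono if multichannel)."""
    if x.ndim > 1:
        x = x.mean(axis=1)
    return float(np.sqrt(np.mean(np.square(x))))

# Hypothetical file names; the reference defines the target RMS level.
reference, sr_ref = sf.read("babble_noise.wav")
song, sr_song = sf.read("song_01.wav")

gain = rms(reference) / rms(song)               # linear gain that equates the RMS levels
song_matched = np.clip(song * gain, -1.0, 1.0)  # keep samples within full scale

sf.write("song_01_rms_matched.wav", song_matched, sr_song)
print(f"Applied gain: {20 * np.log10(gain):.2f} dB")
```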

All signals were played through headphones (TDH-39) using a laptop (Dell Co, USA) connected via a 2.5 mm audio jack to a clinical audiometer (AC40, Interacoustics Co, Denmark), which was calibrated in accordance with the American National Standards Institute standard [44]. Both the laptop volume and the auxiliary input of the audiometer were set to 0 VU using the calibrated 1000 Hz tone included in the typical Persian ANL test. All of the above ANL tests were performed under monotic and dichotic listening conditions. For the monotic listening condition, the right ear was tested; both the signals and the noises were presented monaurally. For the dichotic condition, the signals and noises were presented to the right and left ears, respectively.

Each ANL test consists of three stages, as listed below:

1) The most comfortable level (MCL) measurement: A female speaker’s running speech is presented through the headphones by the calibrated audiometer at 30 dB HL, and the subject is asked to listen and provide feedback about its level. The level of the speech is then increased or decreased in steps of 5 dB, depending on the subject’s signal (thumb up or thumb down, respectively). Near the final adjustment, 2 dB steps are used to determine the MCL precisely. The measurement is repeated twice, and the average is recorded as the MCL.

2) The background noise level (BNL) measurement: While the female speaker’s running speech is presented at the measured MCL, background noise (12-talker babble) is presented at a starting level of 30 dB HL. The level of the noise is increased in steps of 5 dB, and the subject is asked to indicate the highest level they are willing to put up with while following the speech. Again, near the final noise level, the 5 dB steps are replaced by 2 dB steps. The BNL is the highest level of noise that the subject is willing to tolerate. The measurement is repeated twice, and the average is recorded as the BNL. Since a song contains various vocal and musical components, the BNL measurements for music are obtained in three different, sufficiently long parts of the running music, and the averaged measure is recorded as the BNL for music.

For the dichotic condition, while the running speech or music signals were presented to the right ear at the MCL, babble or music noise as the background noise was presented to the left ear. 

3) The ANL measure: The ANL is obtained by subtracting the BNL from the MCL (ANL = MCL - BNL).
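The following is a minimal sketch in Python (with hypothetical listener responses; this is not the authors' software) of the bookkeeping behind these three stages: coarse 5 dB and fine 2 dB level adjustments driven by listener feedback, averaging of the repeated MCL and BNL measurements (with the BNL for music averaged over three song excerpts), and the final ANL = MCL - BNL.

```python
from statistics import mean

def adjust_level(start_db, responses, coarse=5.0, fine=2.0):
    """Apply a sequence of listener responses ('up'/'down', 'coarse'/'fine') to a level."""
    level = start_db
    for direction, step_size in responses:
        step = coarse if step_size == "coarse" else fine
        level += step if direction == "up" else -step
    return level

# Hypothetical listener responses, for illustration only.
mcl_runs = [
    adjust_level(30, [("up", "coarse")] * 6 + [("down", "fine")]),  # -> 58 dB HL
    adjust_level(30, [("up", "coarse")] * 6 + [("up", "fine")]),    # -> 62 dB HL
]
mcl = mean(mcl_runs)  # average of the two MCL repetitions

# BNL for music: three song excerpts per run, two runs, all averaged (levels in dB HL).
bnl_runs = [mean([55.0, 57.0, 56.0]), mean([54.0, 58.0, 56.0])]
bnl = mean(bnl_runs)

anl = mcl - bnl  # ANL = MCL - BNL
print(f"MCL = {mcl:.1f} dB HL, BNL = {bnl:.1f} dB HL, ANL = {anl:.1f} dB")
```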

In total, ANL measures were obtained for seven conditions under both monotic and dichotic listening. The total testing time for each participant was approximately 90 min. Within each of the monotic and dichotic listening modes, the seven conditions were tested in random order for every subject, and several rest periods were provided.

Statistical Method

SPSS (v. 24.0; IBM Corp, Armonk, New York, USA) was used to analyze the study data. Descriptive statistics, including the means, standard deviations, and ranges of the MCL, BNL, and ANL results, were computed. The Shapiro-Wilk test indicated that the data were normally distributed, so a repeated-measures ANOVA was used. Because the overall results were significant, Bonferroni-corrected pairwise comparisons were used to identify statistically significant differences between ANLs under the different conditions.
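For reference, this is a minimal sketch of an equivalent analysis pipeline, assuming Python with pandas, scipy, and statsmodels (the authors used SPSS v24; the file and column names here are hypothetical): a Shapiro-Wilk check per condition, a one-way repeated-measures ANOVA with condition as the within-subject factor, and Bonferroni-corrected paired comparisons.

```python
import itertools

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

# Long-format data: one ANL value per subject and condition (hypothetical file).
df = pd.read_csv("anl_monotic.csv")  # columns: subject, condition, anl

# Shapiro-Wilk normality check within each condition.
for cond, grp in df.groupby("condition"):
    w, p = stats.shapiro(grp["anl"])
    print(f"{cond}: W = {w:.3f}, p = {p:.3f}")

# One-way repeated-measures ANOVA with condition as the within-subject factor.
anova = AnovaRM(df, depvar="anl", subject="subject", within=["condition"]).fit()
print(anova)

# Bonferroni-corrected paired t-tests between all pairs of conditions.
conditions = list(df["condition"].unique())
pairs = list(itertools.combinations(conditions, 2))
pvals = []
for a, b in pairs:
    x = df[df["condition"] == a].sort_values("subject")["anl"].to_numpy()
    y = df[df["condition"] == b].sort_values("subject")["anl"].to_numpy()
    pvals.append(stats.ttest_rel(x, y).pvalue)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for (a, b), p, sig in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: corrected p = {p:.3f}{' *' if sig else ''}")
```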

Results

The descriptive statistics (range, mean, and standard deviation) for the MCL, BNL, and ANL are presented separately for the monotic and dichotic listening conditions in Table 2. As indicated in the table, the ANL range for the typical condition (speech and babble noise) is -5 to 12 dB for monotic listening and -40 to 20 dB for dichotic listening. In general, the ANL ranges for all dichotic conditions are wider than those for the monotic conditions. In addition, the mean ANLs for the dichotic conditions are lower than their monotic counterparts, especially when the signal is running speech presented to the right ear (-12 ± 16.2 vs. 0.6 ± 3.4 dB and -9.7 ± 14.6 vs. -1.1 ± 4.7 dB under the Speech + Babble Noise and Speech + Pleasant Music conditions, respectively). The differences between monotic and dichotic listening are 12.6 and 8.6 dB under the Speech + Babble Noise and Speech + Pleasant Music conditions, respectively, as shown below. This finding indicates that subjects tolerated much more noise under dichotic listening, when the speech signal was presented to the right ear and the babble noise or pleasant/unpleasant music to the left ear. For dichotic listening, when the signal is pleasant or unpleasant music, the ANL is higher (-1.2 ± 17.2 and 2.8 ± 16.9 dB for the Unpleasant Music + Pleasant Music and Pleasant Music + Unpleasant Music conditions, respectively). The worst case for the dichotic ANL occurs when pleasant music is presented to the right ear and unpleasant music to the left ear.
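For clarity, these monotic-to-dichotic differences follow directly from the mean ANLs in Table 2:

$$
\begin{aligned}
\Delta_{\mathrm{SP\!-\!BN}} &= 0.6 - (-12.0) = 12.6\ \text{dB},\\
\Delta_{\mathrm{SP\!-\!PM}} &= -1.1 - (-9.7) = 8.6\ \text{dB}.
\end{aligned}
$$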

Table 2

The range and mean of the MCL, BNL, and ANL for monotic and dichotic listening under the seven conditions (SP – BN: Speech and Babble Noise; SP – PM: Speech and Pleasant Music; SP – UM: Speech and Unpleasant Music; PM – BN: Pleasant Music and Babble Noise; UM – BN: Unpleasant Music and Babble Noise; PM – UM: Pleasant Music and Unpleasant Music; UM – PM: Unpleasant Music and Pleasant Music).

 

| Measure | Listening | SP – BN | SP – PM | SP – UM | PM – BN | UM – BN | PM – UM | UM – PM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MCL (dB HL) | Monotic | 50 to 80 (64.3±7.9) | 45 to 70 (61.7±8.1) | 45 to 75 (61.5±7.6) | 45 to 90 (64.4±9.7) | 45 to 80 (62.9±8.8) | 45 to 85 (64.8±8.7) | 45 to 80 (62.5±9.1) |
| MCL (dB HL) | Dichotic | 45 to 80 (63.1±8.3) | 45 to 85 (63.9±8.9) | 45 to 80 (64.1±8.7) | 50 to 85 (69.4±9.4) | 45 to 85 (64.3±10.2) | 45 to 90 (68.7±9.5) | 45 to 85 (65.9±10.5) |
| BNL (dB HL) | Monotic | 46 to 80 (63.7±9.1) | 45 to 81 (62.8±8.9) | 38 to 81 (61.4±9.5) | 41 to 90 (66.4±11.0) | 40 to 88 (63.9±10.0) | 40 to 86 (64.4±10.5) | 41 to 80 (62.3±9.5) |
| BNL (dB HL) | Dichotic | 35 to 100 (75.1±17.7) | 40 to 100 (73.6±15.5) | 35 to 95 (67.5±15.7) | 30 to 95 (68.7±17.3) | 35 to 95 (67.8±16.7) | 20 to 95 (65.9±17.4) | 40 to 100 (67.1±17.8) |
| ANL (dB) | Monotic | -5 to 12 (0.6±3.4) | -7 to 10 (-1.1±4.7) | -9 to 19 (0.08±6.0) | -11 to 9 (-2±5.0) | -13 to 8 (-0.94±4.7) | -13 to 13 (0.36±6.1) | -12 to 9 (0.12±5.0) |
| ANL (dB) | Dichotic | -40 to 20 (-12±16.2) | -40 to 20 (-9.7±14.6) | -40 to 25 (-3.4±14.6) | -45 to 40 (0.7±15.9) | -50 to 25 (-3.5±14.8) | -40 to 35 (2.8±16.9) | -45 to 35 (-1.2±17.2) |

For the ANLs under monotic listening (Figure 1), the repeated-measures ANOVA showed statistically significant differences between the baseline condition (Speech + Babble Noise) and the Speech + Pleasant Music (P < 0.006) and Pleasant Music + Babble Noise (P < 0.001) conditions. When babble noise is replaced by pleasant music, the ANL tends to be lower, implying that the subjects tolerated pleasant music more easily than babble noise. In contrast, when unpleasant music is used as the noise, the results are similar to those of the babble noise condition. Furthermore, the lowest (best) ANL among the monotic listening conditions is obtained for the Pleasant Music + Babble Noise condition, which is lower than that of its speech counterpart (the Speech + Babble Noise condition).

For the dichotic conditions (Figure 1), there were statistically significant differences between the baseline condition (Speech + Babble Noise) and all other conditions (P < 0.001 for all comparisons), except for the comparison with the Speech + Pleasant Music condition (P = 1.000). Among the dichotic conditions, the lowest and highest ANLs belonged to the Speech + Babble Noise and Pleasant Music + Unpleasant Music conditions, respectively, and this difference was statistically significant (P < 0.001). This was observed only for dichotic listening, suggesting that the emotional processing of the signals affects the ANL measure when the test is applied dichotically. Most importantly, under dichotic listening, the two highest ANLs were observed when pleasant and unpleasant music were presented to the right and left ears, respectively, or vice versa.

Discussion

In summary, the current study demonstrated the following results for monotic and dichotic conditions.

Monotic condition

Replacing the babble noise with unpleasant music did not change the ANL, but replacing it with pleasant music changed the ANL significantly. In fact, pleasant music decreases the ANL (i.e., subjects tolerate higher levels of noise). This finding indicates that babble noise and unpleasant music play similar roles, while pleasant music does not act as a disliked noise in the way that 12-talker babble noise or unpleasant music does. The pleasantness of music reduces its negative conceptual effect as noise, so subjects can tolerate this type of noise at higher levels. In other words, pleasantness reduces the perceived noisiness of music, rendering it a relatively tolerable noise.

In addition, replacing the running speech with pleasant music, alongside the 12-talker babble noise, can decrease the ANL. This suggests that pleasant music is a strong signal that cannot be easily affected by babble noise; a high level of babble noise is required to disrupt it. Therefore, pleasantness makes the signal stronger, such that an obscuring noise cannot easily affect it. However, this pattern was not obtained when unpleasant music replaced the speech alongside the 12-talker babble noise. Taken together, these findings suggest that pleasant and unpleasant signals are affected differently by an obscuring noise such as 12-talker babble. Thus, music appears to be consistently involved in the neurocognitive functions of auditory processing by affecting mood and, consequently, the scope of auditory selective attention. Moreover, research on the effect of mood on auditory attention has shown that music, even if unfamiliar, can broaden the scope of attention via its effects on mood [45]. Specifically, listening to a happy musical rendition results not only in augmented event-related potentials to to-be-ignored novel sounds but also in reduced responses to target sounds, as reflected in behavioral measurements [45].

Dichotic procedure

First, based on the 95% confidence intervals, the ANL results are more widespread for dichotic listening than for monotic listening. This suggests that, under dichotic presentation, individual performance varies more, depending on the variables involved in binaural processing.

In dichotic listening, subjects tolerate much more noise than in the monotic condition. The greatest noise tolerance is exhibited in the condition in which the speech and the babble noise are presented dichotically: compared to the monotic condition, the ANL is more than 11 dB lower. This finding indicates that subjects can tolerate more noise when the speech signal and the babble noise are presented separately to each ear. Interestingly, when the babble noise was replaced by unpleasant music, this difference was reduced significantly, a finding that was not observed for the pleasant music condition. Since binaural processing centers affect the ANL, this finding indicates that unpleasantness strongly interferes with dichotic presentation; on average, it reduces the subject's noise tolerance by more than 8 dB. Thus, the unpleasantness of music used as noise can modulate brain processing during dichotic listening, and the emotional character of music, at least as noise, can interfere with binaural processing across the brain hemispheres.

When the speech signal is replaced by pleasant/unpleasant music in the dichotic conditions, the ANL increases. This means that the 12-talker babble noise effectively interferes with pleasant/unpleasant music, whereas the reverse was observed for the monotic counterpart: for the monotic condition of pleasant/unpleasant music and 12-talker babble noise, the ANL is lower than for the monotic presentation of speech and babble noise. For the dichotic conditions, although the difference is not statistically significant, babble noise affects pleasant music more strongly than unpleasant music (a difference of 4.2 dB between them). For the monotic condition with pleasant music, the ANL is reduced by 2.6 dB (i.e., the subjects can tolerate a 2.6 dB higher level of noise), while it is increased by 12.7 dB for the dichotic condition (i.e., the subjects could tolerate a 12.7 dB lower level of noise). Consistently, for the monotic condition with unpleasant music, the ANL is reduced by 4.1 dB (i.e., the subjects could tolerate a 4.1 dB higher level of noise), while it is increased by 8.5 dB for the dichotic condition (i.e., the subjects could tolerate an 8.5 dB lower level of noise). A comparison of the monotic and dichotic conditions shows that the ANL mechanism depends not only on how the signal and noise are presented to the ears, but also on the content of the signal and noise. Research has consistently shown that the ANL outcome depends on the meaningfulness of the signal materials [40]. Moreover, the findings of this study are consistent with the theory of dichotic perception. Since the pleasant/unpleasant music and the 12-talker babble noise were presented to the right and left ears, respectively, music perception deteriorated; the babble noise engages the right hemisphere, which is necessary for experiencing music as pleasant or unpleasant, whereas the music presented to the right ear is relayed via the contralateral pathway to the left hemisphere and only then to the right hemisphere. The question of why pleasant music is more affected than unpleasant music remains unanswered. Determining the difference in perception between pleasant and unpleasant music in relation to hemispheric processing and the role of the corpus callosum is an interesting subject for future neuroscientific research, especially brain imaging research. However, we did not observe any statistically significant differences in this respect in the present study.

Similar to previous research that determined the effect of the meaningfulness and semantic coherence of the signal on the ANL [40], we changed the semantic coherence and meaningfulness of both the signal and the noise (i.e., the pleasant/unpleasant music was used either together as the signal and noise or separately, alongside speech or babble noise). Consistent with their conclusion (that the meaningfulness, but not the semantic coherence, of the speech material affected the ANL), our findings indicate that meaningfulness plays the central role, but only for the monotic listening conditions. As a new finding, our results reveal a role for semantic coherence in dichotic ANL processing.

Moving from the speech and babble noise conditions to the pleasant and unpleasant music conditions consistently reduced noise tolerance (i.e., raised the ANL), indicating that emotional processing is involved when the stimuli are presented dichotically rather than monotically. A greater separation between the emotional levels of the signal and the noise leads to a greater difference between their physical intensities and, consequently, greater attention to the signal and greater noise tolerance. Conversely, a smaller emotional separation between the signal and the noise (such as when music is used as both, instead of speech and babble noise) leads to less attention being paid to the musical signal and less tolerance of the musical noise. Therefore, the emotional effects of the signal and the noise should be considered when the ANL is tested dichotically.

When white noise and music (or poetry) are presented dichotically, both the music and the poetry are judged as more pleasant [46]. Owing to its meaningful content, music acts as more than simple noise. Listening to music can affect functional cerebral asymmetries in emotional and cognitive laterality tasks [47]. In addition, some musical characteristics affect the ANL more than others; for example, the genre of the music affects the ANL, whereas its tempo does not [36]. Our study shows that pleasantness and unpleasantness can also affect the ANL, especially when it is measured dichotically. The ANL is mediated by higher binaural centers (from the lower brainstem to the higher brain centers) when presented dichotically [34], and musical emotion is encoded both at the brainstem level and in various higher brain centers [48]. Therefore, combining music with ANL measurement can provide an effective behavioral measure of cognitive auditory processing for various aspects of music.

Conclusion

Music can affect ANL results in terms of pleasantness and unpleasantness under both monotic and dichotic listening conditions, with a greater effect under dichotic conditions, indicating a role for hemispheric specialization in emotional music processing.

Declarations

Conflict of interest

There are no competing financial interests.

References

  1. DeNora, T., Music in Everyday Life. 2000. Cambridge University Press: Cambridge.
  2. Blacking, J., R. Byron, and B. Nettl, Music, Culture, and Experience: Selected Papers of John Blacking. 1995. University of Chicago Press.
  3. Huron, D., Is music an evolutionary adaptation? Ann N Y Acad Sci, 2001. 930: p. 43–61.
  4. Levitin, D.J., This is Your Brain on Music: The Science of a Human Obsession. 2006, Dutton.
  5. N, M., Is there too much music? London Telegraph, 2009, April 9.
  6. Kleć, M., The influence of listener personality on music choices. 2017. 18(2).
  7. McDermott, J.H., A.J. Lehr, and A.J. Oxenham, Individual differences reveal the basis of consonance. Curr Biol, 2010. 20(11): p. 1035–41.
  8. Nieminen, S., et al., The development of aesthetic responses to music and their underlying neural and psychological mechanisms. Cortex, 2011. 47(9): p. 1138–46.
  9. Shahin, A.J., et al., Development of auditory phase-locked activity for music sounds. J Neurophysiol, 2010. 103(1): p. 218–29.
  10. McDermott, J.H., Auditory Preferences and Aesthetics: Music, Voices, and Everyday Sounds (Chapter 10). 2012. p. 227–56.
  11. Bulman-Fleming, M.B. and M.P. Bryden, Simultaneous verbal and affective laterality effects. Neuropsychologia, 1994. 32(7): p. 787–97.
  12. Ley, R.G. and M.P. Bryden, A dissociation of right and left hemispheric effects for recognizing emotional tone and verbal content. Brain Cogn, 1982. 1(1): p. 3–9.
  13. Borod, J.C., Interhemispheric and intrahemispheric control of emotion: a focus on unilateral brain damage. J Consult Clin Psychol, 1992. 60(3): p. 339–48.
  14. Liotti, M. and D.M. Tucker, Emotion in asymmetric corticolimbic networks, in Brain Asymmetry. 1995, The MIT Press: Cambridge, MA, US. p. 389–423.
  15. Bryden, M.P. and L. MacRae, Dichotic laterality effects obtained with emotional words. Neuropsychiatry Neuropsychol Behav Neurol, 1988. 1(3): p. 171–6.
  16. Erhan, H., et al., Identification of emotion in a dichotic listening task: event-related brain potential and behavioral findings. Brain Cogn, 1998. 37(2): p. 286–307.
  17. Koelsch, S. and W.A. Siebel, Towards a neural basis of music perception. Trends Cogn Sci, 2005. 9(12): p. 578–84.
  18. Flores-Gutiérrez, E.O., et al., Metabolic and electric brain patterns during pleasant and unpleasant emotions induced by music masterpieces. Int J Psychophysiol, 2007. 65(1): p. 69–84.
  19. Koelsch, S., Toward a neural basis of music perception – a review and updated model. Front Psychol, 2011. 2: p. 110.
  20. Blood, A.J., et al., Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nat Neurosci, 1999. 2(4): p. 382–7.
  21. Brown, S., M.J. Martinez, and L.M. Parsons, Passive music listening spontaneously engages limbic and paralimbic systems. NeuroReport, 2004. 15(13): p. 2033–7.
  22. Peretz, I., et al., Cortical deafness to dissonance. Brain, 2001. 124(5): p. 928–40.
  23. Koelsch, S., et al., Investigating emotion with music: an fMRI study. Hum Brain Mapp, 2006. 27(3): p. 239–50.
  24. Eggermont, J.J., Noise and the Brain: Experience Dependent Developmental and Adult Plasticity. 2013. Academic Press.
  25. Greenberg, D.M., et al., Musical preferences are linked to cognitive styles. PLOS ONE, 2015. 10(7): p. e0131151.
  26. Stroop, J.R., Studies of interference in serial verbal reactions. J Exp Psychol, 1935. 18(6): p. 643–62.
  27. Staum, M.J. and M. Brotons, The effect of music amplitude on the relaxation response. J Music Ther, 2000. 37(1): p. 22–39.
  28. Furnham, A. and L. Strbac, Music is as distracting as noise: the differential distraction of background music and noise on the cognitive test performance of introverts and extraverts. Ergonomics, 2002. 45(3): p. 203–17.
  29. Nabelek, A.K., F.M. Tucker, and T.R. Letowski, Toleration of background noises: relationship with patterns of hearing aid use by elderly persons. J Speech Hear Res, 1991. 34(3): p. 679–85.
  30. Freyaldenhoven, M.C., et al., Acceptable noise level: reliability measures and comparison to preference for background sounds. J Am Acad Audiol, 2006. 17(9): p. 640–8.
  31. Rogers, D.S., et al., The influence of listener’s gender on the acceptance of background noise. J Am Acad Audiol, 2003. 14(7): p. 372-82; quiz 401.
  32. Plyler, P.N., et al., Effects of speech signal content and speaker gender on acceptance of noise in listeners with normal hearing. Int J Audiol, 2011. 50(4): p. 243–8.
  33. Harkrider, A.W. and J.W. Tampas, Differences in responses from the cochleae and central nervous systems of females with low versus high acceptable noise levels. J Am Acad Audiol, 2006. 17(9): p. 667–76.
  34. Harkrider, A.W. and S.B. Smith, Acceptable noise level, phoneme recognition in noise, and measures of auditory efferent activity. J Am Acad Audiol, 2005. 16(8): p. 530–45.
  35. Gordon-Hickey, S. and R.E. Moore, Acceptance of noise with intelligible, reversed, and unfamiliar primary discourse. Am J Audiol, 2008. 17(2): p. 129–35.
  36. Ahn, H.J., J. Bahng, and J.H. Lee, Measurement of acceptable noise level with background music. J Audiol Otol, 2015. 19(2): p. 79–84.
  37. Gordon-Hickey, S. and R.E. Moore, Influence of music and music preference on acceptable noise levels in listeners with normal hearing. J Am Acad Audiol, 2007. 18(5): p. 417–27.
  38. Brännström, K.J., et al., Acceptance of background noise, working memory capacity, and auditory evoked potentials in subjects with normal hearing. J Am Acad Audiol, 2012. 23(7): p. 542–52.
  39. Ho, H.C., et al., The equivalence of acceptable noise level (ANL) with English, Mandarin, and non-semantic speech: a study across the US and Taiwan, Mandarin. Int J Audiol, 2013. 52(2): p. 83–91.
  40. Koch, X., et al., Type of speech material affects acceptable noise level test outcome. Front Psychol, 2016. 7: p. 186.
  41. Hoch, L. and B. Tillmann, Laterality effects for musical structure processing: a dichotic listening study. Neuropsychology, 2010. 24(5): p. 661–6.
  42. Hugdahl, K., et al., Brain activation during dichotic presentations of consonant-vowel and musical instrument stimuli: a 15O-PET study. Neuropsychologia, 1999. 37(4): p. 431–40.
  43. Ahmadi, A., et al., Developing and evaluating the reliability of acceptable noise level test in Persian language. Sci J Rehabil Med, 2015. 4(4): p. 109–17.
  44. ANSI, American National Standard Maximum Permissible Ambient Noise Levels for Audiometric Test Rooms. 2008. ANSI: New York.
  45. Putkinen, V., T. Makkonen, and T. Eerola, Music-induced positive mood broadens the scope of auditory attention. Soc Cogn Affect Neurosci, 2017. 12(7): p. 1159–68.
  46. Beaton, A.A., Hemispheric emotional asymmetry in a dichotic listening task. Acta Psychol, 1979. 43(2): p. 103–9.
  47. Hausmann, M., S. Hodgetts, and T. Eerola, Music-induced changes in functional cerebral asymmetries. Brain Cogn, 2016. 104: p. 58–71.
  48. Koelsch, S., Investigating the neural encoding of emotion with music. Neuron, 2018. 98(6): p. 1075–9.