challenging listening condition compared with a controlled baseline listening condition (Eckert et al., 2016). This cingulo-opercular system (Figure 4b) is composed of the dorsal anterior cingulate, anterior prefrontal, and anterior insula/frontal operculum regions and generally shows increased neural activity when a perceptual or cognitive task becomes more difficult but the listener remains engaged in it. Specifically, this cingulo-opercular executive-control system is thought to reflect task monitoring over time, and increased activity here has also been associated with increased effort (Burke et al., 2013).
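For readers less familiar with functional neuroimaging, findings like these typically rest on a subtraction contrast: a region's blood-oxygen-level-dependent (BOLD) response is compared between the challenging and the baseline condition across listeners. The sketch below is a minimal, hypothetical illustration of that logic in Python; the data, condition labels, and effect sizes are invented for the example, and this is not the analysis pipeline of the studies cited.

# Minimal sketch of a condition contrast, as used conceptually in fMRI studies:
# does a region respond more strongly in a challenging listening condition
# than in a controlled baseline condition? (Hypothetical data and labels.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 20

# Per-subject mean BOLD estimates (arbitrary units) for one region of
# interest (e.g., anterior insula) in each condition. In a real study these
# would come from a general linear model fit to each subject's time series.
baseline_listening = rng.normal(loc=0.2, scale=0.3, size=n_subjects)
challenging_listening = baseline_listening + rng.normal(0.4, 0.3, n_subjects)

# The "contrast" is the within-subject difference between conditions;
# a paired t-test asks whether that difference is reliably above zero.
contrast = challenging_listening - baseline_listening
t_val, p_val = stats.ttest_rel(challenging_listening, baseline_listening)

print(f"mean contrast (challenging - baseline): {contrast.mean():.2f}")
print(f"paired t({n_subjects - 1}) = {t_val:.2f}, p = {p_val:.4f}")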
Age is another factor that influences how this executive-control system is called on to help us understand speech in complex environments. Greater recruitment of the cingulo-opercular system in older adults performing a language task, with or without hearing loss, relative to their younger counterparts (Harris et al., 2009; Erb and Obleser, 2013) is hypothesized to reflect a compensatory mechanism for age-related declines in sensory perception and cognition. However, this increased cingulo-opercular activity can come at a cost: listening effort may become so taxing that, at some point, listeners would rather give up than keep trying to understand speech in these challenging settings.
Conclusion and Outstanding Questions
In the six decades since Cherry (1953) first outlined the “cocktail party problem,” we have slowly been uncovering the neurobiological mechanisms that enable us to understand speech in everyday multitalker environments. However, many outstanding questions remain in our field. For example, looking at a talker’s face helps us understand speech better, especially in extremely noisy settings. Although there has been great work in both humans and primates exploring how auditory and visual processing are combined in the cortex, exactly how visual information aids us in segregating sounds has not been systematically investigated. Additionally, although we know that listeners with hearing loss struggle to communicate in multitalker environments, the ways in which hearing loss impacts object formation in an auditory scene (see Figure 2c), as well as how the attention and executive-control networks are recruited to compensate in these conditions, are not well understood. Obviously, much more work needs to be done to answer these vexing questions. But the next time you are at an ASA social gathering, take a moment to marvel at all the neurobiological machinery underlying your ability to “tune in” to a single conversation in such a crowded auditory environment.
Acknowledgments
I thank the National Institute on Deafness and Other Communication Disorders (NIDCD), National Institutes of Health (NIH), for the initial support combining my postdoctoral neuroimaging work with my graduate study on auditory attention through the Pathway to Independence Award Mechanism (K99/R00-DC-010196) as well as the continued support through the Research Grant Program (R01-DC-013260). I thank Tiffany Waddington and Dr. Mark Wronkiewicz for their artistic contributions to Figure 1.
Biosketch
Adrian KC Lee obtained his BEng (electrical) from The University of New South Wales, Sydney, Australia, in 2002 and his ScD from the Harvard-MIT Division of Health Sciences and Technology in 2007. He is an associate professor in the Department of Speech & Hearing Sciences and the Institute for Learning & Brain Sciences, University of Washington, Seattle. His research focuses on mapping the cortical dynamics of auditory attention and developing new neuroengineering approaches to improve hearing aid technology. He is an associate editor and coordinating editor of The Journal of the Acoustical Society of America in physiological and psychological acoustics.
References
Bharadwaj, H. M., Lee, A. K. C., and Shinn-Cunningham, B. G. (2014). Measuring auditory selective attention using frequency tagging. Frontiers in Integrative Neuroscience 8, Article 6. doi:10.3389/fnint.2014.00006.
Bregman, A. S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. MIT Press, Cambridge, MA.
Burke, C. J., Brünger, C., Kahnt, T., Park, S. Q., and Tobler, P. N. (2013). Neural integration of risk and effort costs by the frontal pole: Only upon request. The Journal of Neuroscience 33(4), 1706-1713.
Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. The Journal of the Acoustical Society of America 25(5), 975-979. doi:10.1121/1.1907229.
Cusack, R. (2005). The intraparietal sulcus and perceptual organization. Journal of Cognitive Neuroscience 17(4), 641-651.
Da Costa, S., van der Zwaag, W., Miller, L. M., Clarke, S., and Saenz, M. (2013). Tuning in to sound: Frequency-selective attentional filter in human primary auditory cortex. The Journal of Neuroscience 33(5), 1858-1863.
Desimone, R., and Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience 18, 193-222.