HEARING LOSS AND FREQUENCY ANALYSIS OF COMPLEX SOUNDS
Marjorie Leek and Michelle Molis
National Center for Rehabilitative Auditory Research, Portland VA Medical Center
Portland, Oregon 97239
The sounds that surround us in our everyday lives range from very simple and tonal, such as one might hear from a flute or a whistle, to highly complex, containing multiple tones and noises, such as listening to a single speaker in the presence of background noise or the babble of many other talkers at the same time. For the most part, the sounds that are relevant to us are complex, which, by definition, means they are made up of multiple frequencies with multiple sound levels. Some may be tonal, some may have a noisy quality, and often there will be important temporal features such as sequence or order effects. Speech is just such a signal, perhaps the most relevant and complex signal heard by humans. The auditory system is exquisitely designed to encode information from these complex sounds. Encoding, combining, and recombining information allows us to sort out our sound environment and gain information about who (or what) is near or in the distance, who is speaking, if we are in danger, or if there is something good to eat out there. The study of psychoacoustics and physiology allows us to understand what those encoding mechanisms are, how they work separately and together, and how they might fail us from time to time.
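As a concrete illustration of what "made up of multiple frequencies with multiple sound levels" means, the short Python sketch below builds a complex sound by summing a few sinusoids of different frequencies and amplitudes and then adding a little noise. It is only an illustration; the sampling rate, frequencies, and levels are arbitrary choices, not values from the article.

    import numpy as np

    # A complex sound as a sum of sinusoids (illustrative values only).
    fs = 16000                       # sampling rate, Hz
    t = np.arange(0, 0.5, 1.0 / fs)  # 500-ms time axis

    # Component frequencies (Hz) and relative amplitudes: a 200-Hz
    # fundamental with a few weaker harmonics.
    freqs = [200, 400, 600, 800]
    amps = [1.0, 0.5, 0.25, 0.125]

    # Sum the components to form a harmonic complex tone.
    tone = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))

    # Add low-level noise so the sound also has a "noisy quality."
    rng = np.random.default_rng(0)
    signal = tone + 0.05 * rng.standard_normal(t.size)
    signal /= np.abs(signal).max()   # normalize to avoid clipping

Listening to, or plotting the spectrum of, such a signal shows exactly the mixture of tonal and noisy components that everyday complex sounds contain.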
Although the sense of hearing is traditionally associated with the ears, hearing is actually accomplished in the brain, after acoustic information has been transmitted through the peripheral auditory system and coded throughout the auditory nervous system to combine information from the two ears across frequency and time. For this processing to be complete and accurate, leading to successful perception of the intended acoustic signal, auditory structure and physiological function must be intact, or nearly so.
Hearing loss is a significant barrier to everyday communication and represents a financial, psychological, economic, and vocational burden on millions of people around the world. According to the National Institute on Deafness and Other Communication Disorders (NIDCD), some 36 million adults in the United States alone experience hearing loss, and that number is growing as the population ages (http://www.nidcd.nih.gov). It is rather rare for individuals to have no hearing at all, and much more common to have reduced hearing sensitivity ranging from mild to more severe hearing loss. It is not unusual for an individual to have some frequency regions of normal or near-normal hearing accompanied by some regions where hearing is severely impaired.
Each of these forms of partial or, in some cases, nearly complete loss of hearing results in changes in the encoding mechanisms that feed acoustic information to the brain. Some of these coding deficits can be partially overcome with treatments, most commonly in the form of either a hearing aid or a cochlear implant. But these treatments can bring with them their own characteristic distortions and signal degradations that combine with impaired auditory function to make everyday listening difficult.
The most common source of hearing loss in adults occurs as the result of damage to the hair cells in the inner ear, which provide the initial stages of processing (outer hair cells) as well as stimulation to the auditory nerve (inner hair cells). The hair cells are arrayed longitudinally in four rows on the basilar membrane stretching along the length of the cochlea. In humans, the length of the basilar membrane is about 30 mm, and it runs from the cochlear base, at the interface between the middle ear and the inner ear, to the apex, which is at the tip of the cochlear spiral. A sound wave that is transmitted to the inner ear will be distributed in frequency along the length of the basilar membrane, with different frequencies providing stimulation in different regions along the length of the cochlea. The location of maximum stimulation along the basilar membrane is tonotopically organized according to frequency, with higher frequencies encoded near the base of the cochlea and lower frequencies stimulating maximally at the apex of the cochlea. The tonotopic arrangement is maintained throughout the auditory pathways up to and including the primary auditory cortex.
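One widely used summary of this frequency-to-place map is Greenwood's frequency-position function. The sketch below is offered here only as an illustration of the idea, not as a formula from the article; the constants are commonly cited values for the human cochlea, and the 30-mm length comes from the paragraph above.

    # Greenwood-style frequency-position function (illustrative sketch).
    # distance_mm is measured from the apex (0 mm) toward the base (30 mm).
    A = 165.4   # Hz; commonly cited constant for the human cochlea
    a = 2.1     # slope constant (per unit of proportional length)
    k = 0.88    # low-frequency correction term

    def greenwood_frequency(distance_mm, length_mm=30.0):
        """Approximate characteristic frequency (Hz) at a cochlear place."""
        x = distance_mm / length_mm   # proportion of the distance from the apex
        return A * (10 ** (a * x) - k)

    # Low frequencies map near the apex; high frequencies map near the base.
    for d in (0, 10, 20, 30):
        print(f"{d:2d} mm from apex -> about {greenwood_frequency(d):6.0f} Hz")

Running this reproduces the ordering described in the text: roughly 20 Hz at the apex, around 20,000 Hz at the base, and intermediate frequencies in between.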
While the inner hair cells provide stimulation to the auditory nerve, the three rows of outer hair cells do not transmit auditory information, but instead function as sound processors to enhance auditory perception. These hair cells respond in a nonlinear manner to different levels of stimulating sound, amplifying low to moderate levels, but not providing gain at high levels. The nonlinearity of level processing accounts for the wide dynamic range of sound levels that we can hear, and it also supplies the sharp frequency tuning of the auditory response in normally functioning ears.
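To make the idea of level-dependent gain concrete, the schematic sketch below assigns full gain to soft inputs and little or none to intense inputs. Every number in it (the 40-dB maximum gain, the 30-dB knee, the 90-dB ceiling) is a made-up illustration, not data from the article.

    # Schematic outer-hair-cell gain: soft and moderate sounds are amplified,
    # intense sounds are not (all values are illustrative assumptions).
    def ohc_gain_db(input_db, max_gain_db=40.0, knee_db=30.0, ceiling_db=90.0):
        """Hypothetical gain (dB) applied at a given input level (dB SPL)."""
        if input_db <= knee_db:
            return max_gain_db               # full gain for soft sounds
        if input_db >= ceiling_db:
            return 0.0                       # no gain for intense sounds
        # Gain tapers linearly between the knee and the ceiling.
        return max_gain_db * (ceiling_db - input_db) / (ceiling_db - knee_db)

    for level_db in (20, 40, 60, 80, 100):
        response_db = level_db + ohc_gain_db(level_db)
        print(f"input {level_db:3d} dB SPL -> response ~{response_db:5.1f} dB")

In this toy version, an 80-dB range of input levels is squeezed into roughly a 40-dB range of response levels, which is the compressive behavior the paragraph describes.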
The outer hair cells are particularly vulnerable to damage from excessive sound levels (i.e., noise exposure) and other sources of hearing loss such as disease states, ototoxic drugs, and the aging process. Damage or destruction of outer hair cells disrupts nonlinear processing in the cochlea, resulting in reduced sensitivity to soft sounds and a loss of the sharp frequency tuning that supports the analysis of complex sounds.
“There is considerable research showing that spectral analysis is impaired in the presence of sensorineural hearing loss, and this loss of frequency resolution is influential in the difficulty understanding speech in noisy environments.”