Page 33 - Volume 12, Issue 2 - Spring 2012

  Fig. 3. Outputs of a bank of normal auditory filters (left panel) and a bank of filters with three times the normal width (right panel) for the spoken vowel /a/. The channels centered at the same frequencies are shown for both filter banks. These channels were selected to show the first three formant frequencies, as well as some non-formant channels. Note the greater complexity of the simulated hearing-impaired outputs for all channels.
 called the Auditory Image Model (AIM). The model developed for normal-hearing individuals may be modified to change the normal auditory channel bandwidths and view the hypothesized temporal outputs of selected channels, simulating both a normal filter bank and one with abnormally broad filters. Figure 3 shows seven channels of output for the vowel /a/ (seen in spectral form in Fig. 2) for a normal-width bank of auditory filters on the left, and for filters three times the normal bandwidth on the right, simulating a moderate-to-severe hearing loss. The output of each auditory filter, provided to its associated auditory nerve fibers, may be visualized as a waveform constructed from the combination of frequencies passed by that filter. The channels shown were selected to be those with center frequencies at the first, second, and third formants of the vowel /a/, as well as four intermediate non-formant frequency channels. The narrow filter bandwidths found at lower frequencies in a normal cochlea will pass only a few separate frequencies, so the outputs of those narrow filters are rather simple, resembling pure tones. At higher frequencies, more component frequencies are passed by the progressively broader filters, and the outputs of those channels are therefore more complex as more frequencies interact.
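The effect of filter bandwidth on a single channel's output can be sketched in a few lines of code. The sketch below is not the AIM model itself: it uses an idealized Gaussian-shaped filter applied in the frequency domain to a synthetic harmonic complex (a crude stand-in for a vowel), and the center frequency and bandwidths are illustrative values, not measured auditory parameters.

```python
import numpy as np

fs = 16000                               # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)            # 100 ms of signal
f0 = 100                                 # fundamental of the harmonic complex
# Sum of harmonics up to 3 kHz: a crude stand-in for a voiced vowel
signal = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, 31))

def channel_output(x, fc, bw, fs):
    """Pass x through an idealized Gaussian-shaped bandpass centered at fc."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    H = np.exp(-0.5 * ((freqs - fc) / (bw / 2)) ** 2)  # magnitude response
    return np.fft.irfft(H * X, n=len(x))

def n_components(y, thresh=0.1):
    """Count spectral components above `thresh` of the strongest one."""
    mag = np.abs(np.fft.rfft(y))
    return int(np.sum(mag > thresh * mag.max()))

fc = 700           # near the first formant of /a/ (approximate)
bw_normal = 80     # illustrative "normal" bandwidth at this frequency
normal = n_components(channel_output(signal, fc, bw_normal, fs))
broad = n_components(channel_output(signal, fc, 3 * bw_normal, fs))
# Tripling the bandwidth lets more harmonics through the same channel,
# so its output waveform carries more interacting components
print(normal, broad)
```

Counting the surviving harmonics is a simple proxy for the waveform complexity visible in Fig. 3: the tripled-bandwidth channel passes several harmonics where the normal one passes essentially a single tone.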
These channel outputs, which combine component frequencies in the stimulus, provide complementary information to the auditory system about the input sound. The hypothesized channel outputs in a hearing-impaired cochlea, with broader auditory filters, become distorted in characteristic ways: first, there are fewer channels that output simple waveforms, because more frequency components are passed through impaired channels than through normal channels at the same frequency location; and second, the more highly complex output waveforms are produced at lower-than-normal frequency regions. In addition, the broader filters in the impaired ear overlap each other more, further distorting the processed sound. The increased complexity of the hearing-impaired model outputs indicates more interactions among frequency components, perhaps leading to greater distortion as this impaired version of the cochlear output is encoded for passage into the brain. This demonstrates not only a spectral distortion, as seen in the excitation pattern, but also a temporal waveform distortion at the outputs of the frequency channels, which likely also interferes with the accurate recognition of speech.
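The increased overlap between neighboring broadened filters can be illustrated with a short calculation. This sketch again uses idealized Gaussian-shaped magnitude responses rather than real auditory filter shapes, with hypothetical center frequencies and bandwidths; the normalized inner product serves as a simple overlap measure (1 for identical passbands, 0 for disjoint ones).

```python
import numpy as np

freqs = np.linspace(0, 4000, 4001)  # 1-Hz frequency grid

def gauss_filter(fc, bw):
    """Idealized Gaussian magnitude response centered at fc (Hz)."""
    return np.exp(-0.5 * ((freqs - fc) / (bw / 2)) ** 2)

def overlap(fc1, fc2, bw):
    """Normalized inner product of two filters' magnitude responses."""
    h1, h2 = gauss_filter(fc1, bw), gauss_filter(fc2, bw)
    return float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2)))

# Two neighboring channels, 200 Hz apart (illustrative values)
normal_overlap = overlap(700, 900, 100)      # narrow "normal" filters
impaired_overlap = overlap(700, 900, 300)    # filters tripled in width
# Tripling the bandwidth makes adjacent passbands share far more
# of the spectrum, blurring the channels together
print(normal_overlap, impaired_overlap)
```

With these illustrative numbers the narrow filters are nearly disjoint, while the tripled-width filters share most of their passbands, which is the overlap the text describes.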
The analysis of complex sounds in the cochlea is obviously not limited to the frequency analysis described here. However, there is considerable research showing that spectral analysis is impaired in the presence of sensorineural hearing loss, and this loss of frequency resolution contributes to the difficulty of understanding speech in noisy environments (ter Keurs et al., 1992). Additional impairments to auditory encoding, such as degraded temporal processing and impaired loudness perception, are likely also involved in the distortions experienced by listeners with hearing loss. In fact, there is growing evidence that the reduced fine temporal precision that accompanies hearing loss may be a significant problem underlying both the extraction and understanding of speech in noise and the poor pitch perception manifested in hearing-impaired listeners' inability to enjoy music (Lorenzi et al., 2006; Moore, 2008).
The effects of impaired auditory processing result in difficulties extracting a target speech signal from among many other talkers. Hearing loss may also make it difficult to recognize voices and to localize sounds in the environment. All of these impaired functions correspond to the auditory experiences described by people with hearing loss, who complain
32 Acoustics Today, April 2012