
ducing feedback oscillation, and by the extent of the cochlear damage. Even when audibility of high-frequency speech sounds can be restored, some patients with severe-to-profound hearing loss may not benefit from amplification and may perceive the amplified sound as distorted.12

Recent strategies for restoring audibility of these critical high-frequency speech cues have shifted the high-frequency information into lower frequency regions in which hearing loss is less severe. Such operations can be performed most easily on the frequency spectrum (which is convenient, because the multiband compressors described above operate in this domain). One strategy shifts the spectral components in the neighborhood of a high-frequency peak in the spectral envelope down to lower frequencies. This approach is described as frequency transposition, and is illustrated in Fig. 6 by means of an artificial frequency spectrum having two broad peaks near 1500 and 6000 Hz, the latter taken to be above the patient’s range of audible hearing. The black dashed line in the spectrum plots represents the unprocessed spectrum. Frequency transposition moves the higher of the two peaks to a lower frequency, presumably within the patient’s range of aidable hearing. Vertical stems represent a set of harmonic frequency components following the spectral envelope, with the transposed components depicted in red.
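To make the idea concrete, the short Python sketch below carries out a crude frequency transposition on a single analysis frame: components in an assumed high-frequency band are added back into the spectrum a fixed number of Hz lower, and the original band is then silenced. The function name, band edges, and shift amount are illustrative assumptions only; practical hearing-aid implementations track the high-frequency spectral peak and resynthesize frame by frame with overlap-add.

import numpy as np

def transpose_band(frame, fs, band_lo=4000.0, band_hi=8000.0, shift_hz=3000.0):
    """Shift spectral components in [band_lo, band_hi] down by shift_hz (one frame)."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    bin_hz = fs / len(frame)                    # FFT bin spacing in Hz
    shift_bins = int(round(shift_hz / bin_hz))

    out = spectrum.copy()
    idx = np.nonzero((freqs >= band_lo) & (freqs <= band_hi))[0]
    dest = idx - shift_bins                     # destination bins, shift_hz lower
    valid = dest >= 0
    out[dest[valid]] += spectrum[idx[valid]]    # add the transposed components
    out[idx] = 0.0                              # remove the original high band
    return np.fft.irfft(out, n=len(frame))

Because the shift is applied to FFT bins, its resolution is limited to the bin spacing fs/N of the analysis frame.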
Another strategy warps, or compresses, the high-frequency part of the spectrum into a narrower frequency range. This approach is most often called frequency compression, and is illustrated in Fig. 7. Here, the frequency spectrum above 2500 Hz is compressed by a factor of 3, moving the 6 kHz peak just below 4 kHz. As in the previous figure, vertical stems represent harmonic frequency components under the unprocessed spectral envelope, with components in the compressed region depicted in red.
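The compressive mapping in this example is simply f' = 2500 + (f - 2500)/3 for frequencies above the 2500 Hz cutoff, which places a 6000 Hz component near 3667 Hz, consistent with the figure. The Python sketch below applies such a mapping bin by bin to one spectral frame; the nearest-bin reassignment and the parameter names are assumptions made for illustration, not the method of any particular device.

import numpy as np

def compress_spectrum(frame, fs, cutoff=2500.0, ratio=3.0):
    """Compress frequencies above `cutoff` by `ratio` (one analysis frame)."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    bin_hz = fs / len(frame)

    out = np.zeros_like(spectrum)
    for i, f in enumerate(freqs):
        if f <= cutoff:
            out[i] += spectrum[i]               # low frequencies pass unchanged
        else:
            f_new = cutoff + (f - cutoff) / ratio
            j = int(round(f_new / bin_hz))      # nearest destination bin
            out[j] += spectrum[i]
    return np.fft.irfft(out, n=len(frame))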
Both frequency transposition and frequency compression risk audible distortion due to disruption of the harmonic structure of the processed sound. Harmonic components in the translated or compressed region of the spectrum do not appear at harmonic frequencies after processing. This inharmonicity is evident in the distribution of transposed and compressed harmonic components depicted in red stems in Figs. 6 and 7, respectively. These algorithms must therefore be applied with caution.
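A quick numerical check, using the assumed 2500 Hz cutoff and compression ratio of 3 from the sketch above, shows how the harmonic grid is disrupted: harmonics of a 200 Hz fundamental that sit 200 Hz apart near 6 kHz come out only about 67 Hz apart after compression, so they no longer align with the unprocessed low-frequency harmonics.

cutoff, ratio, f0 = 2500.0, 3.0, 200.0
for n in range(28, 33):                           # harmonics from 5600 to 6400 Hz
    f = n * f0
    f_mapped = cutoff + (f - cutoff) / ratio      # compressive mapping from above
    print(f"harmonic {n}: {f:6.0f} Hz -> {f_mapped:7.1f} Hz")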
An alternative technique estimates the spectral envelope of the sound, and replicates high-frequency envelope features (peaks) at lower frequencies where they can be heard. This algorithm operates like a dynamic filter that introduces a low-frequency spectral feature whenever a high-frequency feature is detected. Because spectrum components are not translated, there is no risk of disrupting the underlying harmonic structure of the sound. This approach is illustrated in Fig. 8. Here, a peak in the spectral envelope is detected at 6 kHz and replicated at a lower frequency (3.1 kHz), within the aidable hearing range of the patient. AT
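A rough sketch of this envelope-replication idea follows, assuming a crudely smoothed magnitude spectrum as the envelope estimate, a simple threshold test to detect a high-frequency feature, and a Gaussian-shaped gain bump centered at 3.1 kHz standing in for the replicated peak. The smoothing length, detection rule, bump shape, and gain are placeholders rather than the algorithm of any specific product; the essential point is that gain is applied only to components already present, so the harmonic structure is untouched.

import numpy as np

def replicate_envelope_peak(frame, fs, search_lo=5000.0, target_hz=3100.0,
                            bump_bw=400.0, gain_db=15.0):
    """Boost components near target_hz when a strong feature is found above search_lo."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)

    # Crude spectral envelope: moving-average smoothing of the magnitude spectrum
    env = np.convolve(np.abs(spectrum), np.ones(9) / 9.0, mode="same")

    # Simple detection rule for a high-frequency envelope feature (assumes fs > 2*search_lo)
    hi = freqs >= search_lo
    if hi.any() and env[hi].max() > 2.0 * np.median(env):
        # Replicate the feature as a Gaussian gain bump around target_hz;
        # existing components are boosted, none are moved in frequency.
        bump = 10 ** (gain_db / 20.0) * np.exp(-0.5 * ((freqs - target_hz) / bump_bw) ** 2)
        spectrum = spectrum * (1.0 + bump)
    return np.fft.irfft(spectrum, n=len(frame))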
References
1 S. A. Fulop, Speech Spectrum Analysis (Springer, Berlin, 2011).
2 T. A. C. M. Claasen and W. F. G. Mecklenbräuker, “The Wigner distribution—A tool for time-frequency signal analysis, Part II: Discrete-time signals,” Philips J. Res. 35(4/5), 276–300 (1980).
3 L. Cohen, “Generalized phase-space distribution functions,” J. Math. Phys. 7(5), 781–786 (1966).
4 Y. Zhao, L. E. Atlas, and R. J. Marks II, “The use of cone-shaped kernels for generalized time-frequency representations of nonstationary signals,” IEEE Trans. Acoust. Speech Signal Process. 38(7), 1084–1091 (1990).
5 D. J. Nelson, “Cross-spectral methods for processing speech,” J. Acoust. Soc. Am. 110(5), 2575–2592 (2001).
6 D. J. Nelson, “Instantaneous higher order phase derivatives,” Digital Sig. Proc. 12, 416–428 (2002).
7 J. D. Markel and A. H. Gray, Jr., Linear Prediction of Speech (Springer, Berlin, 1976).
8 L. R. Rabiner and B.-H. Juang, Fundamentals of Speech Recognition (Prentice-Hall, Upper Saddle River, NJ, 1993).
9 T. F. Quatieri, Discrete-time Speech Signal Processing (Prentice-Hall, Upper Saddle River, NJ, 2002).
10 S. B. Davis and P. Mermelstein, “Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences,” IEEE Trans. Acoust. Speech Signal Process. 28, 357–366 (1980).
11 B. Edwards, “Hearing aids and hearing impairment,” in S. Greenberg, W. A. Ainsworth, A. N. Popper, and R. R. Fay (eds.), Speech Processing in the Auditory System (Springer, Berlin, 2004), pp. 339–421.
12 B. C. J. Moore, “Dead regions in the cochlea: Diagnosis, perceptual consequences, and implications for the fitting of hearing aids,” Trends in Amplification 5(1), 1–34 (2001).