
said, especially when the background noise levels are high, such as in a restaurant. It is important to remember that although hearing aid digital technology has improved dramatically in the last few decades, amplification cannot fully compensate for neural-processing deficits.
Hearing aid algorithms attempt to provide appropriate amplification to compensate for hearing loss at each frequency and to maintain sound levels within the dynamic range of the listener, based solely on the audiogram and measures of loudness discomfort. These algorithms also attempt to improve ease of listening in noise using directional microphone and noise reduction strategies so that the listener does not exert as much effort to understand what is being said. However, it can be difficult to evaluate the real-world effectiveness of amplification, particularly in a clinical setting. Audiologists use probe-microphone measurements to verify the appropriateness of the hearing aid fitting. During probe-microphone measurement, the audiologist places a thin tube in the ear canal a few centimeters from the eardrum. This tube is attached to a microphone, and the hearing aid is then placed alongside the tubing in the ear canal. Speech and other stimuli are presented at varying levels, and the audiologist determines whether the amplified sound levels reaching the eardrum adequately compensate for hearing loss based on the pure-tone thresholds. Although verifying hearing aid fitting in this way is important, this measurement does not provide information about how speech is being processed by the inner ear, the central auditory system, and the brain.
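To make the fitting logic concrete, the Python sketch below shows how a prescriptive rule might map pure-tone thresholds to frequency-specific gain while keeping output within the listener's dynamic range. It uses the classic "half-gain" heuristic and hypothetical audiogram values purely for illustration; it is not the algorithm of any particular device, which rely on far more sophisticated prescriptive formulas (e.g., NAL-NL2) and multiband compression.

import numpy as np

# Hypothetical audiogram: pure-tone thresholds (dB HL) and loudness
# discomfort levels (LDLs) that cap the top of the dynamic range.
freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
thresholds_db = np.array([20, 30, 45, 55, 65, 70])   # example sloping loss
ldls_db = np.array([100, 100, 105, 105, 110, 110])

def prescribe_gain(input_level_db, thresholds, ldls):
    """Toy prescriptive rule: apply roughly half the hearing loss as
    gain (the classic half-gain heuristic), then reduce gain in any
    band where the output would exceed the discomfort level."""
    gain = 0.5 * thresholds                    # half-gain starting point
    output = input_level_db + gain             # estimated band output (dB)
    headroom = ldls - output
    return np.where(headroom < 0, gain + headroom, gain)

# Conversational speech at roughly 65 dB SPL per band (a simplification).
gain = prescribe_gain(65.0, thresholds_db, ldls_db)
for f, g in zip(freqs_hz, gain):
    print(f"{f:>5} Hz: {g:4.1f} dB gain")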
Because of the limitations of probe-microphone measurements, interest in the use of EEG measures to assess the benefit of hearing aids is increasing. Several studies have focused on verifying detection of speech signals using EEG recordings in infants or other individuals who may not be able to provide feedback (e.g., Easwar et al., 2015). These measures may be useful in determining if amplification is providing sufficient audibility to detect speech consonants across a range of frequencies, but the effectiveness of EEG measures for providing information about the ability of the brain to discriminate between consonants has not yet been demonstrated (Billings et al., 2012). Furthermore, a detection measure may not be as relevant for an individual who can provide feedback about the audibility of different speech sounds.
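As a rough illustration of how such EEG detection measures can work, the Python sketch below averages stimulus-locked epochs to estimate the evoked response, uses an alternating-polarity ("plus-minus") average to estimate the residual noise, and flags a detection when the response sufficiently exceeds the noise. The array shapes, criterion value, and simulated data are illustrative assumptions, not a clinical procedure.

import numpy as np

def detect_evoked_response(epochs, snr_criterion=2.0):
    """epochs: (n_trials, n_samples) array, each row time-locked to a
    stimulus onset. The conventional average estimates the evoked
    response; the alternating-sign average cancels the response and
    leaves an estimate of the residual noise."""
    n = epochs.shape[0]
    signal = epochs.mean(axis=0)                       # evoked-response estimate
    signs = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)
    noise = (signs[:, None] * epochs).mean(axis=0)     # plus-minus noise estimate
    snr = np.sqrt(np.mean(signal**2)) / np.sqrt(np.mean(noise**2))
    return snr >= snr_criterion, snr

# Illustrative use: 100 noisy trials containing a small evoked deflection.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.5, 250)
response = 1.0 * np.exp(-((t - 0.1) ** 2) / 0.001)     # toy cortical peak
epochs = response + rng.normal(0.0, 1.0, (100, 250))
detected, snr = detect_evoked_response(epochs)
print(detected, round(snr, 2))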
What may be more useful is a measure that can provide information about the processing of conversational-level speech rather than soft, threshold-level sounds. Age- and hearing-loss-related deficits in temporal and frequency processing are observed at listening levels well above the speech threshold. A few studies have assessed the effects of amplification on the neural processing of conversational-level speech stimuli (e.g., Jenkins et al., 2017), but more research is needed to determine if EEG measures can be used to assess improvements in neural processing in a clinical setting, with the goal of improved performance.
Figure 4. Example of an in-ear electroencephalographic (EEG) mount shown as a single earplug (left) and in the ear (right). Own work by Mikkelsen.kaare, Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0). commons.wikimedia.org/w/index.php?curid=51268329.
Despite advancements in digital noise reduction and directional microphone technologies, understanding speech in noise continues to be the greatest challenge experienced by individuals who use hearing aids. The main limitation is that current technology is unable to distinguish between a target talker, who should be amplified, and a background of multiple talkers, who should be attenuated. Generally, hearing aid algorithms use multiple microphones to focus amplification in the direction of whomever the listener is facing. It is reasonable to assume that the listener is usually facing the speaker. However, there may be times when the listener hears an interesting fragment from another speaker in the group and would prefer to listen in on that conversation without turning his/her head. Hearing aids generally will not be able to quickly and easily adjust to this scenario.
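The directional principle itself can be sketched very simply in Python, as below: a first-order differential microphone delays the rear microphone signal by the acoustic travel time across the microphone spacing and subtracts it from the front signal, producing a null toward the rear. Real hearing aids use fractional-delay filtering, frequency equalization, and adaptive null steering; the spacing, sampling rate, and function name here are illustrative assumptions.

import numpy as np

def differential_directional(front, back, mic_spacing_m=0.012,
                             fs=16000, c=343.0):
    """front, back: numpy arrays from two omnidirectional microphones.
    A plane wave from behind reaches the rear mic first; after the
    delay it aligns with, and cancels against, the front mic signal,
    forming a rear-facing null (an approximately cardioid pattern)."""
    delay = int(round(mic_spacing_m / c * fs))   # ~1 sample at these settings
    delayed_back = np.concatenate([np.zeros(delay),
                                   back[:len(back) - delay]])
    return front - delayed_back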
Future Directions
These limitations in current digital noise reduction technology have led to an interest in the development of cognitively driven or attention-driven hearing aids (Das et al., 2016). This research is based on evidence of the ability of EEG or MEG measurements to reveal the listener's object of attention (Ding and Simon, 2012; O'Sullivan et al., 2015). The idea behind the research is that discreet in-the-ear electrodes might be used to convey information to the hearing aid regarding the listener's focus of attention. The hearing aid processing algorithm would then selectively amplify the desired speech stream of interest to the listener. Figure 4 shows an in-ear EEG mount. Another recent innovation is the "visually guided hearing aid prototype" that uses an eye tracker
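To give a flavor of how attention decoding could work, the Python sketch below follows the stimulus-reconstruction idea reported in studies such as O'Sullivan et al. (2015): a linear decoder, assumed here to have been trained beforehand, reconstructs an estimate of the attended speech envelope from the EEG, and the talker whose envelope best correlates with that estimate is taken as the focus of attention. The variable names and the simple single-lag decoder are simplifying assumptions; published decoders typically use time-lagged EEG channels and regularized regression.

import numpy as np

def decode_attention(eeg, env_a, env_b, decoder):
    """eeg: (n_samples, n_channels) recording;
    env_a, env_b: (n_samples,) speech envelopes of the two talkers;
    decoder: (n_channels,) linear weights, assumed trained elsewhere."""
    reconstruction = eeg @ decoder                  # estimated attended envelope
    r_a = np.corrcoef(reconstruction, env_a)[0, 1]
    r_b = np.corrcoef(reconstruction, env_b)[0, 1]
    # The better-correlated talker is assumed attended and could then
    # be selectively amplified by the hearing aid.
    return ("talker A", r_a) if r_a > r_b else ("talker B", r_b)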