
said, especially when the background noise levels are high, such as in a restaurant. It is important to remember that although hearing aid digital technology has improved dramatically in the last few decades, amplification cannot fully compensate for neural-processing deficits.

Hearing aid algorithms attempt to provide appropriate amplification to compensate for hearing loss at each frequency and to maintain sound levels within the dynamic range of the listener, based solely on the audiogram and measures of loudness discomfort. These algorithms also attempt to improve ease of listening in noise using directional-microphone and noise reduction strategies so that the listener does not exert as much effort to understand what is being said. However, it can be difficult to evaluate the real-world effectiveness of amplification, particularly in a clinical setting. Audiologists use probe-microphone measurements to verify the appropriateness of the hearing aid fitting. During probe-microphone measurement, the audiologist places a thin tube in the ear canal a few centimeters from the eardrum. This tube is attached to a microphone, and the hearing aid is then placed alongside the tubing in the ear canal. Speech and other stimuli are presented at varying levels, and the audiologist determines if the amplified sound levels reaching the eardrum adequately compensate for hearing loss based on the pure-tone thresholds. Although verifying hearing aid fitting in this way is important, this measurement does not provide information about how speech is being processed by the inner ear, the central auditory system, and the brain.

Because of the limitations of these measurements, interest in the use of EEG measures to assess the benefit of hearing aids is increasing. Several studies have focused on verifying detection of speech signals using EEG recordings in infants or other individuals who may not be able to provide feedback (e.g., Easwar et al., 2015). These measures may be useful in determining if amplification is providing sufficient audibility to detect speech consonants across a range of frequencies, but the effectiveness of EEG measures for providing information about the ability of the brain to discriminate between consonants has not been demonstrated (Billings et al., 2012). Furthermore, a detection measure may not be as relevant for an individual who can provide feedback about the audibility of different speech sounds.

What may be more useful is a measure that can provide information about the processing of conversational level speech rather than soft, threshold-level sounds. Age- and hearing (loss)-related deficits in temporal and frequency processing are observed at listening levels well above the speech threshold. A few studies have assessed the effects of amplification on the neural processing of conversational level speech stimuli (e.g., Jenkins et al., 2017), but more research is needed to determine if EEG measures can be used to assess improvements in neural processing in a clinical setting, with the goal of improved performance.

Figure 4. Example of an in-ear electroencephalographic (EEG) mount shown as a single earplug (left) and in the ear (right). Own work by Mikkelsen.kaare, Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0), commons. php?curid=51268329.

Despite advancements in digital noise reduction and directional-microphone technologies, understanding speech in noise continues to be the greatest challenge experienced by individuals who use hearing aids. The main limitation is that current technology is unable to distinguish between a target talker, who should be amplified, and a background of multiple talkers, who should be attenuated. Generally, hearing aid algorithms use multiple microphones to focus amplification in the direction of whomever the listener is facing. It is reasonable to assume that the listener is usually facing the speaker. However, there may be times when the listener hears an interesting fragment from another speaker in the group and would prefer to listen in on that conversation without turning his/her head. Hearing aids generally will not be able to quickly and easily adjust to this scenario.

Future Directions
These limitations in current digital noise reduction technology have led to an interest in the development of cognitively driven or attention-driven hearing aids (Das et al., 2016). This research is based on evidence of the ability of EEG or MEG measurements to reveal the listener's object of attention (Ding and Simon, 2012; O'Sullivan et al., 2015). The idea behind the research is that discreet in-the-ear electrodes might be used to convey information to the hearing aid regarding the listener's focus of attention. The hearing aid processing algorithm would then selectively amplify the desired speech stream of interest to the listener. Figure 4 shows an in-ear EEG mount. Another recent innovation is the "visually guided hearing aid prototype" that uses an eye tracker
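The directional-microphone strategy discussed earlier is commonly built on delay-and-sum beamforming: sound from straight ahead reaches the front microphone slightly before the rear one, so delaying the front channel and summing reinforces frontal sources and partially cancels others. A minimal two-microphone sketch in Python follows; the 2-cm microphone spacing, 48-kHz sample rate, and 4-kHz test tone are illustrative assumptions, not values from the article:

```python
import numpy as np

FS = 48_000          # sample rate in Hz (assumed for illustration)
MIC_SPACING = 0.02   # front-to-rear microphone spacing in meters (assumed)
C = 343.0            # speed of sound in m/s
DELAY = int(round(MIC_SPACING / C * FS))  # inter-microphone delay in samples

def delay_and_sum(front_mic: np.ndarray, rear_mic: np.ndarray) -> np.ndarray:
    """Steer a two-microphone array toward the front.

    Frontal sound reaches the front microphone DELAY samples early;
    delaying that channel aligns the two copies, so the sum reinforces
    sound from ahead and partially cancels sound from other directions.
    """
    delayed_front = np.concatenate([np.zeros(DELAY), front_mic])[: len(front_mic)]
    return 0.5 * (delayed_front + rear_mic)

def simulate_mics(direction: str, tone_hz: float, n: int = 4800) -> tuple:
    """Idealized two-microphone recordings of a tone from 'front' or 'back'
    (the true propagation delay is rounded to the same integer DELAY)."""
    s = np.sin(2 * np.pi * tone_hz * np.arange(n + DELAY) / FS)
    if direction == "front":          # front mic hears the wavefront first
        return s[DELAY:], s[:n]
    return s[:n], s[DELAY:]           # "back": rear mic hears it first
```

With these assumed values, a 4-kHz tone from the front passes through essentially unchanged, while the same tone from behind is strongly attenuated because the two copies end up half a period out of phase. Real hearing aid arrays layer frequency-dependent filtering and adaptive null steering on top of this basic idea.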
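The attention-decoding evidence cited in Future Directions rests on the finding that cortical activity tracks the temporal envelope of attended speech more strongly than that of ignored speech. The toy sketch below decodes attention by correlating a simulated single-channel "EEG" with each talker's envelope. Everything here is synthetic and illustrative; real decoders, such as those in O'Sullivan et al. (2015), use regularized regression across many electrodes and time lags:

```python
import numpy as np

rng = np.random.default_rng(0)
FS_ENV = 64                  # envelope sample rate in Hz (assumed)
N = FS_ENV * 60              # one minute of simulated data

def toy_envelope(n: int) -> np.ndarray:
    """Stand-in for a speech envelope: smoothed rectified noise."""
    return np.convolve(np.abs(rng.standard_normal(n)), np.ones(8) / 8, mode="same")

env_attended = toy_envelope(N)   # talker the listener attends to
env_ignored = toy_envelope(N)    # competing talker

# Simulated EEG: a noisy copy of the attended talker's envelope.
eeg = env_attended + rng.standard_normal(N)

def envelope_correlation(eeg_sig: np.ndarray, env: np.ndarray) -> float:
    """Pearson correlation between the EEG and a candidate envelope."""
    e, v = eeg_sig - eeg_sig.mean(), env - env.mean()
    return float(e @ v / (np.linalg.norm(e) * np.linalg.norm(v)))

# Decode attention: the talker whose envelope best matches the EEG wins.
scores = {
    "attended": envelope_correlation(eeg, env_attended),
    "ignored": envelope_correlation(eeg, env_ignored),
}
decoded = max(scores, key=scores.get)
```

A hearing aid built on this principle would run such a decoder continuously and route the winning talker's stream to the amplification path, which is the loop the in-ear electrodes shown in Figure 4 are intended to help close.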
Winter 2018 | Acoustics Today | 15
