
Figure 2. The network associated with the consonants [b], [d], and [t] and the phonetic features defining them. Red lines signify their positions in the network: solid line, closest sounds; dotted line, more distant sounds.
(hence, it is called voiced), whereas in the production of [t], the vocal cords vibrate after the consonant is produced (hence, it is called voiceless; see Figure 2). Thus, [b] is more “similar” to [d] than it is to [t] in terms of both its articulation and its acoustic correlates.
Critically, in current computational and neural models (McClelland and Elman, 1986; Mesgarani et al., 2014), speech sounds and their features are connected to each other in a network-like architecture, the strength of their connections varying depending on how close or similar the sounds are to each other in terms of their articulatory and acoustic features. Sounds that are more similar to each other are closer to each other and hence are more likely to influence each other.
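To make the idea of a feature-based network concrete, the short Python sketch below scores how strongly pairs of consonants are connected by counting the articulatory features they share. The feature specifications and the similarity measure are illustrative assumptions of mine, not the McClelland and Elman (1986) or Mesgarani et al. (2014) models themselves.

```python
# Illustrative feature specifications; a real model would use many more
# features (manner of articulation, nasality, and so on).
FEATURES = {
    "b": {"place": "labial",   "voiced": True},
    "d": {"place": "alveolar", "voiced": True},
    "t": {"place": "alveolar", "voiced": False},
}

def connection_strength(x: str, y: str) -> float:
    """More shared articulatory features -> a stronger (closer) connection."""
    fx, fy = FEATURES[x], FEATURES[y]
    return sum(fx[k] == fy[k] for k in fx) / len(fx)

for pair in [("b", "d"), ("b", "t"), ("d", "t")]:
    print(pair, connection_strength(*pair))
# ('b', 'd') 0.5  -- differ only in place of articulation
# ('b', 't') 0.0  -- differ in both place and voicing
# ('d', 't') 0.5  -- differ only in voicing
```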
The pattern of sound substitution errors in aphasia reflects this relationship. The more similar two sounds are to one another, the more likely they are to trigger a substitution error. There are many more one-feature errors, such as [b] produced as [d], than errors involving more than one feature, such as [b] produced as [t] (see Figure 2), in both Broca’s and Wernicke’s aphasia and irrespective of the lesion locus.
A number of conclusions can be drawn from the common pattern of sound substitution errors in Broca’s and Wernicke’s aphasia. First, the sounds of language are not lost but are vulnerable to errors. Second, that the same pattern emerges across syndromes and brain areas suggests that the speech production system is neurally distributed and not narrowly localized. Additionally, a common deficit underlies the production of sounds in Broca’s and Wernicke’s aphasia. Here, I propose that the brain injury introduces noise into the system, leaving
intact the architecture (the connections) of the system as well as the speech sounds and their features. However, the distinctions among the sounds are blurred, making sounds distinguished by a single feature more likely to be mistakenly selected for one another. Interestingly, similar patterns of sound substitutions occur in analyses of “slips of the tongue” (Fromkin, 1980), indicating that the occasional lapses in production that you and I make reflect “temporary” noise introduced into an otherwise normal production system.
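A minimal simulation can illustrate this noise proposal. In the sketch below (my own illustration, not a model taken from the aphasia literature), the intended sound’s feature values are perturbed by random noise and the nearest sound in the otherwise intact network is selected; substitutions by one-feature neighbors come out far more often than substitutions by two-feature neighbors, just as in the error data.

```python
import random

# Hypothetical numeric feature coordinates: (place, voicing).
PHONEMES = {"b": (0.0, 1.0), "d": (1.0, 1.0), "t": (1.0, 0.0)}

def produce(target: str, noise_sd: float = 0.6) -> str:
    """Perturb the target's features with noise, return the closest phoneme."""
    tx, ty = PHONEMES[target]
    nx = tx + random.gauss(0.0, noise_sd)
    ny = ty + random.gauss(0.0, noise_sd)
    return min(PHONEMES, key=lambda p: (PHONEMES[p][0] - nx) ** 2
                                       + (PHONEMES[p][1] - ny) ** 2)

random.seed(0)
counts = {"b": 0, "d": 0, "t": 0}
for _ in range(10_000):
    counts[produce("b")] += 1
print(counts)
# Mostly correct [b]; [d] (one feature away) substituted far more often
# than [t] (two features away).
```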
One important difference does emerge between Broca’s and Wernicke’s aphasia: there are many more production errors in Broca’s than in Wernicke’s aphasia. This result is not surprising, given that the frontal areas are recruited for motor planning
and articulatory implementation (see Figure 1).
On to Acoustics
One challenge in analyzing speech production errors is ensuring that examiners accurately identify correct and incorrect productions. Indeed, how listeners perceive the speech of others is shaped and biased by their native language (Cutler, 2012), making it difficult, even with training, to phonetically transcribe speech accurately.
A case in point is the foreign accent syndrome. The characteristics of this syndrome are well described by its name. After brain injury, a speaker sounds like she is now talking with a foreign accent (watch youtu.be/uLxhSu3UuU4). For example, someone who speaks English may now sound like she is a French speaker talking in English. There is often disagreement among listeners about what foreign accent the patient seems to have “acquired” (Farish et al., 2020). The question is whether the patient has truly acquired a foreign accent or whether the listener is incorrectly perceiving the changes in the patient’s speech as a foreign accent.
Acoustic analyses can provide the answer by examining the different ways a language articulates its sounds. For example, a common feature used across the languages of the world is voicing (Lisker and Abramson, 1964). Voicing relates to the timing of the vibration of the vocal cords (called voice-onset time [VOT]) as a sound is being produced (Figure 3, left). Both English and French use VOT to distinguish the voiced stop consonants [b d g] from the voiceless stop consonants [p t k], but not in the same way.
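The sketch below shows how the same VOT measurement can fall into different voicing categories depending on the language. The boundary values are rough approximations for word-initial stops that I supply for illustration; they are not measurements reported in this article.

```python
def classify_stop(vot_ms: float, language: str) -> str:
    """Label a stop as voiced or voiceless from its VOT, per language."""
    if language == "English":
        # English: voiced stops have short-lag VOT, voiceless stops long-lag.
        return "voiced" if vot_ms < 25 else "voiceless"
    if language == "French":
        # French: voiced stops are typically prevoiced (negative VOT),
        # voiceless stops have short-lag (positive) VOT.
        return "voiced" if vot_ms < 0 else "voiceless"
    raise ValueError(f"no boundary defined for {language}")

for vot in (-80, 10, 60):  # example VOT values in milliseconds
    print(vot, classify_stop(vot, "English"), classify_stop(vot, "French"))
# -80 ms: voiced in both languages; 10 ms: voiced in English but voiceless
# in French; 60 ms: voiceless in both.
```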