discrimination performance again supports the view that the architecture of speech (the connections between sounds) is spared. The perception of the acoustic features of speech sounds is also spared in aphasia, but there is a deficit in identifying the category to which a sound belongs. Naming or identifying heard sounds is more difficult because it requires not only perceiving the acoustic cues but also classifying them into the appropriate sound category.
Conclusion
As an experiment of nature, the study of aphasia has provided a unique window into the intersection of mind (speech) and brain (neurology). It has not only informed the scientific study of speech but also placed this research in the context of a disorder that affects millions of people and their families.
A number of broad conclusions can be drawn from this review. In particular, both speech production and speech perception are neurally distributed, encompassing posterior and anterior brain structures, with the exception of the neural areas dedicated to speech output (motor areas) and speech input (auditory areas). Whether one examines impairments of speech production or of speech perception, the architecture of the speech system is spared. That is, the sounds of language, the features associated with them, and the network connecting them are intact. Brain injury introduces noise into the system, rendering production and perception processes less efficient and more prone to errors; those errors are nonetheless systematic.
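The claim that injury-induced noise yields systematic rather than random errors can be made concrete with a small simulation. The Python sketch below is not the article's model; the phoneme set, feature coding, and noise level are simplified assumptions. It represents six English stops as bundles of two distinctive features and lets each feature be misread independently, so errors land overwhelmingly on single-feature neighbors.

```python
import random
from collections import Counter

# Hypothetical feature bundles (voicing, place) for six English stops.
PHONEMES = {
    "p": (0, "labial"),   "b": (1, "labial"),
    "t": (0, "alveolar"), "d": (1, "alveolar"),
    "k": (0, "velar"),    "g": (1, "velar"),
}

def feature_distance(a, b):
    """Number of features on which two phonemes differ."""
    (va, pa), (vb, pb) = PHONEMES[a], PHONEMES[b]
    return (va != vb) + (pa != pb)

def perceive(target, noise=0.15):
    """Noisy recognition: each feature may independently be misread;
    the response is whichever phoneme matches the corrupted bundle."""
    voiced, place = PHONEMES[target]
    if random.random() < noise:          # voicing misread
        voiced = 1 - voiced
    if random.random() < noise:          # place misread
        place = random.choice(["labial", "alveolar", "velar"])
    for name, feats in PHONEMES.items():
        if feats == (voiced, place):
            return name

# Tally the misperceptions of /p/ over many noisy trials.
errors = Counter(r for r in (perceive("p") for _ in range(10_000)) if r != "p")
for response, n in errors.most_common():
    print(f"/p/ -> /{response}/ {n:5d} times "
          f"({feature_distance('p', response)} feature(s) away)")
```

Running this shows /p/ heard as /b/ or /t/ far more often than as /d/ or /g/: noise degrades performance, but the errors respect the feature structure, mirroring the pattern described above.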
This same pattern of results emerges even in the absence of brain injury. Single-feature errors are more common in production, as shown in analyses of slips of the tongue (Buckingham, 1992), and in perception, as shown by analyses of errors when listening to speech in noise (Miller and Nicely, 1955; Cutler et al., 2008). The similarity in the error patterns of speech production and perception in individuals with and without brain injury suggests that these two systems share a common representation for both sounds and their features. As Figure 6 shows, in speech perception, the analysis of spectrotemporal (frequency-time) properties maps onto sounds and features and, ultimately, onto the words (lexicon) of a language. In speech production, the selection of a word maps onto sounds and features and then onto articulatory commands that implement the spectrotemporal properties of sounds.
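The shared-representation architecture of Figure 6 can likewise be sketched in a few lines. In the toy model below, the lexicon, feature bundles, and matching rule are all illustrative stand-ins; the point is only that a single segment/feature representation serves both directions, with perception mapping features to a lexical candidate and production mapping a lexical entry back to features.

```python
# Hypothetical feature bundles for a handful of segments.
FEATURES = {
    "p": {"voiced": 0, "labial": 1}, "b": {"voiced": 1, "labial": 1},
    "t": {"voiced": 0, "labial": 0}, "d": {"voiced": 1, "labial": 0},
    "a": {"vowel": 1},
}
LEXICON = {"pat": "pat", "bat": "bat", "pad": "pad"}  # word -> its segments

def segment_distance(x, y):
    """Number of feature values on which two segments differ."""
    keys = FEATURES[x].keys() | FEATURES[y].keys()
    return sum(FEATURES[x].get(k, 0) != FEATURES[y].get(k, 0) for k in keys)

def perceive(segments):
    """Perception: segments/features -> lexicon (best-matching word)."""
    def cost(word):
        spelled = LEXICON[word]
        if len(spelled) != len(segments):
            return float("inf")
        return sum(segment_distance(s, t) for s, t in zip(segments, spelled))
    return min(LEXICON, key=cost)

def produce(word):
    """Production: lexicon -> segments/features (handed on to articulation)."""
    return [FEATURES[seg] for seg in LEXICON[word]]

print(perceive("bat"))  # exact match -> 'bat'
print(perceive("dat"))  # nonword; nearest lexical neighbor by features -> 'bat'
print(produce("pad"))   # feature bundles the articulatory stage would realize
```

Neither direction owns the segment/feature inventory here; both consult the same tables, which is what a shared representation amounts to in this sketch.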
Taken together, speech production and speech perception are integrated systems that are critical for language communication. They not only connect the speaker and listener to the outside world but also shape the internal structure and organization of the language system.
Acknowledgments
Many thanks to Rachel Theodore for her assistance in the preparation of the figures for publication. Special thanks to the people with aphasia who have graciously participated in much of the research reported here; to the National Institute on Deafness and Other Communication Disorders, National Institutes of Health (Bethesda, MD), for its support over the past 50-plus years; and to the many colleagues and students who were an integral part of my research program.
Figure 6. A theoretical model showing the shared representations underlying the stages of speech perception and speech production. Left (up) arrow, direction of information flow in speech perception: the acoustic input that the listener hears is analyzed and converted to the spectrotemporal properties of speech, which are then mapped onto segments (sounds) and their associated features. These sounds access the lexicon (the mental dictionary), where the intended word is selected from among the similar-sounding candidates. Right (down) arrow, direction of information flow in speech production: a word is selected from the lexicon, its sounds are represented as segments and their associated features, and these are then put together (synthesized) and converted via articulatory commands to an acoustic output heard by the listener.