Gick, B., and Derrick, D. (2009). Aero-tactile integration in speech perception. Nature 462(7272), 502-504. https://doi.org/10.1038/nature08572.
Gick, B., Johannsdottir, K. M., Gibraiel, D., and Muhlbauer, J. (2008). Tactile enhancement of auditory and visual speech perception in untrained perceivers. The Journal of the Acoustical Society of America 123(4), 72-76. https://doi.org/10.1121/1.2884349.
Giles, H., Coupland, N., and Coupland, J. (1991). Accommodation theory: Communication, context, and consequence. In H. Giles, J. Coupland, and N. Coupland (Eds.), Contexts of Accommodation: Developments in Applied Sociolinguistics. Cambridge University Press, Cambridge, UK, pp. 1-68.
Goldinger, S. (1998). Echoes of echoes? Shadowing words and nonwords in an episodic lexicon. Psychological Review 105, 251-279.
Grant, K. W., and Seitz, P. F. P. (2000). The use of visible speech cues for improving auditory detection of spoken sentences. The Journal of the Acoustical Society of America 108(3), 1197-1208. https://doi.org/10.1121/1.422512.
Hazan, V., Sennema, A., Iba, M., and Faulkner, A. (2005). Effect of audiovisual perceptual training on the perception and production of consonants by Japanese learners of English. Speech Communication 47(3), 360-378. https://doi.org/10.1016/j.specom.2005.04.007.
Hickok, G., Holt, L. L., and Lotto, A. J. (2009). Response to Wilson: What does motor cortex contribute to speech perception? Trends in Cognitive Sciences 13(8), 330-331. https://doi.org/10.1016/j.tics.2009.06.001.
Ito, T., Tiede, M., and Ostry, D. J. (2009). Somatosensory function in speech perception. Proceedings of the National Academy of Sciences of the United States of America 106(4), 1245-1248. https://doi.org/10.1073/pnas.0810063106.
Jerger, S., Damian, M. F., Tye-Murray, N., and Abdi, H. (2014). Children use visual speech to compensate for non-intact auditory speech. Journal of Experimental Child Psychology 126, 295-312. https://doi.org/10.1016/j.jecp.2014.05.003.
Lachs, L., and Pisoni, D. B. (2004). Specification of cross-modal source information in isolated kinematic displays of speech. The Journal of the Acoustical Society of America 116(1), 507-518. https://doi.org/10.1121/1.1757454.
Matchin, W., Groulx, K., and Hickok, G. (2014). Audiovisual speech integration does not rely on the motor system: Evidence from articulatory suppression, the McGurk effect, and fMRI. Journal of Cognitive Neuroscience 26(3), 606-620.
McGurk, H., and MacDonald, J. (1976). Hearing lips and seeing voices. Nature 264, 746-748.
Miller, R., Sanchez, K., and Rosenblum, L. (2010). Alignment to visual speech information. Attention, Perception, & Psychophysics 72(6), 1614-1625. https://doi.org/10.3758/APP.72.6.1614.
Montgomery, A. A., Walden, B. E., Schwartz, D. M., and Prosek, R. A. (1984). Training auditory-visual speech reception in adults with moderate sensorineural hearing loss. Ear and Hearing 5(1), 30-36. https://doi.org/10.1097/00003446-198401000-00007.
Munhall, K. G., and Vatikiotis-Bateson, E. (2004). Spatial and temporal constraints on audiovisual speech perception. In G. A. Calvert, C. Spence, and B. E. Stein (Eds.), Handbook of Multisensory Processes. MIT Press, Cambridge, MA, pp. 177-188.
Musacchia, G., Sams, M., Nicol, T., and Kraus, N. (2006). Seeing speech affects acoustic information processing in the human brainstem. Experimental Brain Research 168(1-2), 1-10. https://doi.org/10.1007/s00221-005-0071-5.
Namasivayam, A. K., Wong, W. Y. S., Sharma, D., and van Lieshout, P. (2015). Visual speech gestures modulate efferent auditory system. Journal of Integrative Neuroscience 14(1), 73-83.
Nygaard, L. C. (2005). The integration of linguistic and non-linguistic properties of speech. In D. Pisoni and R. Remez (Eds.), Handbook of Speech Perception. Blackwell, Malden, MA, pp. 390-414.
Pardo, J. S. (2006). On phonetic convergence during conversational interaction. The Journal of the Acoustical Society of America 119(4), 2382-2393. https://doi.org/10.1121/1.2178720.
Proverbio, A. M., Massetti, G., Rizzi, E., and Zani, A. (2016). Skilled musicians are not subject to the McGurk effect. Scientific Reports 6, 30423. https://doi.org/10.1038/srep30423.
Reed, C. M., Rabinowitz, W. M., Durlach, N. I., Braida, L. D., Conway-Fithian, S., and Schultz, M. C. (1985). Research on the Tadoma method of speech communication. The Journal of the Acoustical Society of America 77(1), 247-257. https://doi.org/10.1121/1.392266.
Reich, L., Maidenbaum, S., and Amedi, A. (2012). The brain as a flexible task machine: Implications for visual rehabilitation using noninvasive vs. invasive approaches. Current Opinion in Neurology 25(1), 86-95.
Remez, R. E., Fellowes, J. M., and Rubin, P. E. (1997). Talker identification based on phonetic information. Journal of Experimental Psychology: Human Perception and Performance 23(3), 651-666. https://doi.org/10.1037/0096-1523.23.3.651.
Riedel, P., Ragert, P., Schelinski, S., Kiebel, S. J., and von Kriegstein, K. (2015). Visual face-movement sensitive cortex is relevant for auditory-only speech recognition. Cortex 68, 86-99. https://doi.org/10.1016/j.cortex.2014.11.016.
Rosenblum, L. D. (2013). A confederacy of the senses. Scientific American 308, 72-75.
Rosenblum, L. D. (2019). Audiovisual speech perception and the McGurk effect. In Oxford Research Encyclopedia of Linguistics. Oxford University Press, Oxford, UK.
Rosenblum, L. D., and Saldaña, H. M. (1996). An audiovisual test of kinematic primitives for visual speech perception. Journal of Experimental Psychology: Human Perception and Performance 22(2), 318-331. https://doi.org/10.1037/0096-1523.22.2.318.
Rosenblum, L. D., Dias, J. W., and Dorsi, J. (2016). The supramodal brain: Implications for auditory perception. Journal of Cognitive Psychology 28, 1-23.
Rosenblum, L. D., Miller, R. M., and Sanchez, K. (2007). Lip-read me now, hear me better later: Cross-modal transfer of talker-familiarity effects. Psychological Science 18(5), 392-396. https://doi.org/10.1111/j.1467-9280.2007.01911.x.
Rosenblum, L. D., Yakel, D. A., Baseer, N., Panchal, A., Nodarse, B. C., and Niehus, R. P. (2002). Visual speech information for face recognition. Perception & Psychophysics 64(2), 220-229. https://doi.org/10.3758/BF03195788.
Sams, M., Manninen, P., Surakka, V., Helin, P., and Kättö, R. (1998). McGurk effect in Finnish syllables, isolated words, and words in sentences: Effects of word meaning and sentence context. Speech Communication 26(1-2), 75-87.
Sanchez, K., Dias, J. W., and Rosenblum, L. D. (2013). Experience with a talker can transfer across modalities to facilitate lipreading. Attention, Perception, & Psychophysics 75, 1359-1365. https://doi.org/10.3758/s13414-013-0534-x.
Sanchez, K., Miller, R. M., and Rosenblum, L. D. (2010). Visual influences on alignment to voice onset time. Journal of Speech, Language, and Hearing Research 53, 262-272.
Schall, S., and von Kriegstein, K. (2014). Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception. PLoS ONE 9(1), e86325. https://doi.org/10.1371/journal.pone.0086325.