…colored dots that appeared on screen. Each participant was first presented with auditory stimuli consisting of the consonant-vowel (CV) tokens /ma/ and /na/. Each participant’s verbal responses were coded for each trial from videotape by two separate coders, who were in 100% agreement. Participants reported hearing either /ma/ or /na/ for all trials. Both groups easily identified the matched /ma/ and /na/ stimuli (i.e., visual /ma/ + auditory /ma/), with only a single error across participants: one child with ASD misidentified one /ma/ as /na/.

To compare the amount of visual influence between groups on the AV speech trials, only those trials on which participants’ gaze was on the face of the speaker during consonantal closure (which gives the critical visual information for what consonant is being produced) were assessed. Two independent coders examined videos of the crosshair indicating the participant’s gaze superimposed on the face of the speaker (see the red cross in Fig. 2). For those video frames displaying consonantal closure, trials on which the coders agreed that the crosshair was on the face of the speaker (see the largest outlined area containing the speaker’s face, Fig. 2) were included in analyses (gaze on the face of a speaker is sufficient for visual influence to occur; Paré et al., 2003).

Fig. 2. “Look Zones” superimposed on a speaker’s face.
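The inclusion rule can be stated precisely in a few lines. The sketch below is a hypothetical reconstruction for illustration only, not the authors’ analysis code; the trial record layout and field names are assumptions.

```python
# Illustrative sketch of the gaze-based trial-inclusion rule described
# above: a mismatched AV trial enters the analysis only if both
# independent coders judged the gaze crosshair to be on the speaker's
# face during the video frames showing consonantal closure.
# The record layout and field names here are hypothetical.

def include_trial(trial: dict) -> bool:
    """Both coders must agree the gaze was on-face at consonantal closure."""
    return trial["coder1_on_face"] and trial["coder2_on_face"]

trials = [
    {"coder1_on_face": True, "coder2_on_face": True, "response": "na"},
    {"coder1_on_face": True, "coder2_on_face": False, "response": "ma"},  # excluded
]
analyzed = [t for t in trials if include_trial(t)]
```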
There were no differences between the groups in time spent gazing off the face of the speaker. For the children with ASD, one trial each had to be dropped from analyses because the children were not looking at the face at the time of consonantal closure. For the typically developing children, the mean number of looks off-face at the time of consonantal closure was one (range 0–2). For the AV integration stimuli, a response was considered visually influenced if the participant reported hearing /na/ for the mismatched (visual /ga/ + auditory /ma/) AV trials. A one-way analysis of variance (ANOVA) was run for the trials on which children were attending to the face of the speaker during consonantal closure. There was a significant mean difference in visual influence between the groups, F(1,3) = 17.8, p < .02. Mean visual influence was 44% (range 41–47%) for the children with ASD and 85.7% (range 75–100%) for the typically developing children, suggesting, as hypothesized, that the children with ASD were less visually influenced than their typically developing peers, even when fixated on the face of the speaker (see Fig. 3).

Fig. 3. Visually influenced responses when fixated on the speaker’s face.
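For concreteness, the group comparison reduces to a short computation: score each analyzed mismatched trial as visually influenced when the response is /na/, take each participant’s proportion of influenced trials, and compare the groups with a one-way ANOVA. The sketch below is illustrative only; the response lists are made-up placeholders (the study’s scores are reported above only as group means and ranges), and it assumes the SciPy library is available.

```python
# Illustrative reconstruction of the analysis described above; the
# response lists are hypothetical placeholders, not the study's data.
from scipy.stats import f_oneway

def visual_influence(responses):
    """Proportion of analyzed mismatched (visual /ga/ + auditory /ma/)
    trials on which the participant reported hearing /na/."""
    return responses.count("na") / len(responses)

# Hypothetical per-participant response lists on analyzed trials.
asd = [visual_influence(r) for r in (["na", "ma", "ma"],
                                     ["na", "na", "ma", "ma"])]
td = [visual_influence(r) for r in (["na", "na", "na", "ma"],
                                    ["na", "na", "na"])]

# One-way ANOVA on the per-participant proportions, as in the text.
f_stat, p_value = f_oneway(asd, td)
print(f"F = {f_stat:.1f}, p = {p_value:.3f}")
```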
Future directions
These intriguing preliminary findings suggest less visual influence on heard speech in children with autism as compared to typically developing controls. Ongoing research will closely examine ASD and control perceivers’ sensitivity to audiovisual speech processing in a range of contexts, including audiovisual integration, detection of audiovisual asynchrony, and perception of audiovisual speech in the context of auditory noise. This research will move the study of perceptual processing of audiovisual speech in children with autism forward by determining whether AV speech processing in children with ASD is fundamentally disrupted, or whether visual influence can be modulated depending on task conditions, such as the presence of auditory noise. This work will have implications for the identification and characterization of audiovisual integration deficits in children with ASD, including the potential for identifying novel subgroups within ASD, and the design of targeted interventions to improve sensitivity to visual speech information.
The continued examination of basic processes in speech perception in children with autism spectrum disorders will make it possible to characterize the nature of auditory and audiovisual speech perception in this population. Results from this area of inquiry will move the field forward significantly by providing basic data to guide focused interventions related to speech perception and processing in the population of children with ASD. Understanding the mechanisms that underlie delays and deficits in language acquisition is particularly important because language ability is a key prognostic factor for long-term outcomes among children and adults with ASD (e.g., Lord and Venter, 1992), and early intervention for individuals with language deficits is associated with improved long-term developmental and cognitive outcomes (Lord, 1995; Vostanis et al., 1994; Robins et al., 2001).