
Audio-Visual Perception and Integration in Developmental Dyslexia: An Exploratory Study Using the McGurk Effect

Mireille Bastien-Toniazzo, Aurélie Stroumza and Christian Cavé

Abstract

The McGurk effect was investigated in a group of ten-year-old dyslexic children and in two control groups of normal readers. Audio, video, and audio-visual stimuli were presented in silence or with a masking noise. In the audio-visual condition, the auditory information was presented either 160 ms before, or at the same time as, the visual information. For the incongruent audio-visual stimuli, the dyslexics exhibited a smaller McGurk effect than did the normal readers of the same chronological age, but did not differ from the controls of the same reading age. The results indicated no significant differences between the three groups for the auditory stimuli. Taken together, the results suggest a developmental lag in dyslexics.



The authors would like to thank the teachers and pupils of the Grands Cyprès Elementary School in Avignon and Jean-Pierre Becvort, school psychologist, for their participation in the study. Thanks are also extended to Cyril Deniaud from our lab's technical staff for his assistance with the recording and audio-visual montages, and to Vivian Waltz for her translation into English.

Developmental dyslexia is a disorder involving chronic difficulty in learning to read. According to the World Health Organization (WHO), the reading disability of dyslexics cannot be ascribed to neurological or anatomical abnormalities of the speech apparatus, nor to sensory impairments, mental retardation, or factors related to the child's social or emotional environment. In spite of this definition, many studies have provided supporting evidence for a neural and/or genetic substratum of dyslexia (for an excellent recent review by the French National Institute for Health and Medical Research, see INSERM, 2007).

The hypothesis of a visual-attentional deficit in dyslexia, although not widely proposed, has been examined in recent studies (Valdois, Bosse, & Tainturier, 2004; Ducrot & Grainger, 2007; Lété & Ducrot, 2008). Spinelli, De Luca, Judica, and Zoccolotti (2002), for example, found evidence of a crowding effect that was more pronounced in a group of dyslexics than in a group of normal readers, and suggested contrast-dependent visual persistence as an explanation (Stein & Walsh, 1997). However, the most widely investigated hypothesis is that the disability is rooted in a phonological impairment, although the nature of this deficit is still under debate. Several interpretations have been advanced. The difficulty dyslexic children have manipulating the sounds of speech could be due to a speech-specific perception deficit (Serniclaes, Sprenger-Charolles, Carré, & Démonet, 2001) that would cause them to build incorrect phonological representations (e.g., Messbauer & De Jong, 2003; Swan & Goswami, 1997; Werker & Tees, 1987; Ziegler, Pech-Georges, George, Alario, & Lorenzi, 2005). Alternatively, the difficulty could arise from deterioration in temporal processing (Tallal, 1980) or in the perception of speech rhythm (Goswami, Thomson, Richardson, Stainthorp, Hughes, Rosen, & Scott, 2002), leading to inaccurate perception and discrimination of sounds containing rapid changes (Rocheron, Lorenzi, Füllgrade, & Dumont, 2003).

Another hypothesis forwarded in numerous studies is that dyslexics have a categorical-perception deficit that causes speech sounds to be poorly classified into phoneme categories. A perceptual deficit along the /ba/-/da/ continuum (Reed, 1989) or the /ba/-/pa/ continuum (Manis, McBride-Chang, Seidenberg, Keating, Doi, Munson, & Petersen, 1997) suggests impaired categorical processing, although this impairment is not found in all dyslexic children (7 out of 25 in the Manis et al. study). Another possibility is that dyslexics make too many perceptual categories, as suggested by Serniclaes, Van Heghe, Mousty, Carré, and Sprenger-Charolles (2004), who demonstrated dyslexics' high degree of perceptual (but irrelevant) sensitivity to phonemic distinctions in their linguistic environment. Considering the substantial heterogeneity observed among dyslexics, both of these hypotheses can be entertained, and each one could help explain why dyslexics are phonologically impaired.

Given that visual information provided by articulatory movements (Nikov, 1992) enhances the intelligibility of speech by adding to the auditory information of the speech wave (Colin & Radeau, 2003), it is reasonable to assume that visual (e.g., labial) information could help dyslexic children integrate phonological information. This raises the question addressed in the present study: Do dyslexics integrate audio-visual information in the same way as normal readers? The integration of audio-visual information can give rise to a phenomenon known as the McGurk effect (McGurk & MacDonald, 1976), which generates an illusory percept (Figure 1). In this effect, "the presentation of a perfectly audible acoustic message simultaneously with articulatory movements corresponding to a different message often generates a percept that does not correspond to the auditory information but integrates some features of the visual message" (Colin & Radeau, 2003, p. 499).


Figure 1. An example of the McGurk effect: the mismatched information provided by the two sensory channels produces an illusory percept.

The McGurk effect has been well documented for adults in several languages, including French. It has also been studied in children, although less often among children with language impairments, and to our knowledge, it has never been examined in dyslexic children. Burnham and Dodd (2004; see also Dodd, McIntosh, Erdener, & Burnham, 2008) demonstrated the phenomenon in a group of 4-year-olds with the stimuli /ba/ and /ga/. The experimental group was placed in situations where illusory percepts could emerge, whereas the control group received auditory information only, with /ba/, /da/, and /ga/ as the stimuli. The children's familiarity with some of the stimuli was then tested. The results showed that the experimental group judged /da/ as significantly more familiar than /ba/, even though this sound had never been presented to them during the experiment. The difference was not significant for the control group. In another study, older children who were learning-disabled but had not been diagnosed as dyslexic were tested (Hayes, Tiippana, Nicol, Sams, & Kraus, 2003). That study used three /aCa/ stimuli (/apa/, /ata/, and /aka/) in which the consonants were voiceless stops that differed in place of articulation (bilabial /p/, dental /t/, and velar /k/). The stimuli were presented unimodally (auditory-only, visual-only) or bimodally (audio-visual). In the latter condition, the audio and visual components were either congruent (corresponded to the same sound) or incongruent and thus likely to produce the McGurk effect. Considering that oral language learning does not take place in a noise-free environment, the authors presented the stimuli under three background-noise conditions (quiet, 0 dB, -12 dB). The performance of the children with learning disabilities did not differ from that of the controls (normal-learning children of the same chronological age) when they were identifying unimodal stimuli or congruent bimodal stimuli, but when the auditory and visual components were incongruent, the learning-disabled children reported fewer perceptual illusions at the high background-noise level. In addition, the learning-disabled children's identification errors showed that they relied more on the visual than on the auditory modality. The authors concluded that there is an audio-visual integration pattern specific to learning-disabled children.

The purpose of the present study was to find out whether this particular pattern is also found among children diagnosed with developmental dyslexia. We used Hayes et al.'s (2003) experimental protocol, but with two modifications to permit better interpretation of the data collected. Firstly, the group of dyslexic children was compared not only to a group of same-age children, but also to a group of children with the same reading level (for a discussion of methodological considerations, see Casalis, Colé, & Sopo, 2004). We reasoned that response patterns differing from those of both control groups would argue in favor of the idea that dyslexic children have specific processing strategies linked to their reading impairment. Secondly, insofar as a potential deficiency in temporal processing has often been evoked for dyslexics, we included an audio-visual condition with a marked desynchronization between the sound and the image. More specifically, the sound was presented 160 ms before the corresponding image. As a general rule, a sound delay with respect to the visual image, even one of several tens of milliseconds, does not interfere with perception, whereas a sound presented in advance of the image is almost always detected once the lead reaches 20 ms or more (Cavé, Ragot, & Fano, 1992). For the same type of temporal lag in speech, the McGurk effect disappears beginning at a lag of 60 ms (Massaro & Cohen, 1993, 1996).

METHOD

Participants

The participants were 36 pupils from an elementary school that has a special class for dyslexics. Parental consent was obtained in advance. Three groups were set up (see Table 1): a group of 12 dyslexic children (Group Dys: 8 boys and 4 girls), a control group of children matched on chronological age (Group CA: 3 boys and 9 girls), and a control group of children matched on reading age (Group RA: 6 boys and 6 girls). The reading levels of the two control groups were assessed with the K-ABC subtest for reading and sight-reading aloud (Kaufman & Kaufman, 1993). The same test had been used by professionals as one of the selection criteria for placing the dyslexic children in the special class. All of the children were right-handed except one, all had normal or corrected-to-normal vision, and none had any hearing difficulties.

Table 1. Mean chronological age (in months) and mean reading age (in months) of the three groups (standard deviations in parentheses). Dys: dyslexic children. CA: chronological-age controls. RA: reading-age controls.

|                   | Dys            | CA             | RA            |
|-------------------|----------------|----------------|---------------|
| Chronological age | 119.83 (11.28) | 119.91 (10.61) | 83.33 (11.08) |
| Reading age       | 90.5 (11.08)   | 115.16 (12.4)  | 92.75 (10.42) |

Materials

The stimuli were produced by a native speaker of French and pre-recorded in a soundproof room. The lower portion of her face was filmed as she spoke. The images obtained were used as the visual stimuli and as the visual portion of the audio-visual montages that would become the mismatched stimuli. The same stimuli as in the Hayes et al. (2003) study were used, i.e., /apa/, /ata/, and /aka/. The stimuli for the practice sessions were /ale/, /oto/, and /aki/. The montages were produced at a computer workstation using Pinnacle® software. The mismatched stimuli were created by presenting, for example, the sound of /aka/ with the video of /apa/ (see Table 4 for the full set of stimuli used in the experiments). The stimuli were presented for a fixed duration of 350 ms in all presentation conditions; their durations were equalized by editing out the dead space at the beginning and end of the recordings.
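The dubbing step lends itself to a brief illustration. The sketch below is not the authors' Pinnacle workflow; it assumes the moviepy 1.x Python library and hypothetical file names, and simply shows how the audio track of one recording can be paired with the video of another to obtain a mismatched stimulus.

```python
# Illustrative only: the authors used Pinnacle software; this sketch assumes
# moviepy 1.x and hypothetical file names.
from moviepy.editor import VideoFileClip, AudioFileClip

def make_mismatched_stimulus(video_path: str, audio_path: str, out_path: str) -> None:
    """Pair the lip movements of one recording with the sound track of another."""
    video = VideoFileClip(video_path)      # e.g., the speaker articulating /apa/
    audio = AudioFileClip(audio_path)      # e.g., the recording of /aka/
    dubbed = video.set_audio(audio)        # replace the original sound track
    dubbed.write_videofile(out_path, codec="libx264", audio_codec="aac")

# Example: auditory /aka/ dubbed onto visual /apa/ (the stimulus A/aka/+V/apa/).
make_mismatched_stimulus("video_apa.mp4", "audio_aka.wav", "A_aka_V_apa.mp4")
```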

Procedure

Two computer-controlled experiments were run with the 36 children at a two-week interval. In both experiments, the children were tested individually in a quiet room of the school (the library). A laptop computer was placed on a table with the screen located 35 cm from the child. Two small loudspeakers were placed, one on each side of the screen. The tasks used in the experiments were presented as a game and lasted about 10 minutes each. In all cases, the tasks involved identifying auditory or visual stimuli presented alone (Experiment 1) or together (Experiment 2). The participant was asked to say out loud what he/she had perceived in the auditory-only condition, and to guess what was being said while watching the lips on the screen in the visual-only condition. The responses were noted by an experimenter (who remained behind the child) and were also recorded on tape for later review in cases of doubt.

Unimodal perception tasks (Experiment 1). The first experiment consisted of two unimodal identification tasks: auditory-only and visual-only. For all participants, the tasks were run in that order (auditory-only, then visual-only) and were separated by a short break. Each task was preceded by a practice phase that could be repeated if necessary. In the auditory-only task, the stimuli were presented in three listening conditions: with no background noise, or mixed with white noise to obtain a signal-to-noise (S/N) ratio of either 0 dB or -12 dB. Each stimulus was presented six times (twice per listening condition), so the participants had to identify 18 stimuli in all. In order to have the same number of stimuli in the two tasks, each visual-only stimulus was also presented six times. The same random order was used for all participants in both tasks.
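As an illustration of this design (our own sketch, not the authors' presentation software), the 18-trial auditory-only list can be generated by crossing the three stimuli with the three listening conditions and two repetitions, then shuffling once with a fixed seed so that every child receives the same random order:

```python
import random

STIMULI = ["apa", "ata", "aka"]
CONDITIONS = ["no noise", "S/N 0 dB", "S/N -12 dB"]   # listening conditions
REPETITIONS = 2                                       # presentations per condition

# 3 stimuli x 3 conditions x 2 repetitions = 18 trials
trials = [(stim, cond) for stim in STIMULI for cond in CONDITIONS
          for _ in range(REPETITIONS)]
random.Random(1).shuffle(trials)   # fixed seed: identical order for all children
assert len(trials) == 18
```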

Bimodal perception tasks (Experiment 2). Two weeks after the first experiment, the same children participated in Experiment 2, which also consisted of two tasks separated by a break. Each task was preceded by a practice session that was repeated until the instructions were well understood. The same images and sounds as in Experiment 1 were used, but they were now combined so that each stimulus carried both auditory and visual information and was therefore bimodal. In the congruence task (congruent bimodal stimuli), the visual (labial) and auditory information matched; in the incongruence task (incongruent bimodal stimuli), they did not. The incongruent stimuli were the same as in the Hayes et al. (2003) study: auditory /apa/ + visual /ata/, auditory /ata/ + visual /apa/, auditory /aka/ + visual /apa/, and auditory /apa/ + visual /aka/ (hereafter abbreviated A/apa/+V/ata/, A/ata/+V/apa/, etc.). In each task, the stimuli were presented in the same three noise conditions as in Experiment 1, i.e., -12 dB, 0 dB, and no noise. Since the McGurk effect is weaker or even disappears when the sound precedes the image (for a review, see Colin & Radeau, 2003), a synchrony factor was included in both tasks: the auditory information was presented either 160 ms before, or at the same time as, the visual information. Each bimodal stimulus was presented twice per condition, making a total of 36 stimuli to identify on the congruence task and 48 on the incongruence task. The stimuli were presented in the same random order for all participants.
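The trial counts follow directly from crossing the design factors; the short sketch below (hypothetical labels, not the authors' scripts) makes the arithmetic explicit:

```python
from itertools import product

NOISE = ["no noise", "0 dB", "-12 dB"]
SYNCHRONY = ["no lag", "audio 160 ms before video"]
REPETITIONS = range(2)

congruent = ["apa", "ata", "aka"]
incongruent = ["A/apa/+V/ata/", "A/ata/+V/apa/", "A/aka/+V/apa/", "A/apa/+V/aka/"]

congruence_trials = list(product(congruent, NOISE, SYNCHRONY, REPETITIONS))
incongruence_trials = list(product(incongruent, NOISE, SYNCHRONY, REPETITIONS))

assert len(congruence_trials) == 36    # 3 x 3 x 2 x 2, as stated above
assert len(incongruence_trials) == 48  # 4 x 3 x 2 x 2, as stated above
```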

RESULTS

Results of Experiment 1

The results of the two unimodal tasks are presented in Table 2.

Table 2. Mean number of correct identifications in the auditory-only (max = 2) and visual-only (max = 6) tasks (standard deviations in parentheses). Dys: dyslexic children. CA: chronological-age controls. RA: reading-age controls.

Auditory-only task (max = 2 per noise condition):

| Stimulus | Noise    | Dys         | CA          | RA          |
|----------|----------|-------------|-------------|-------------|
| /apa/    | -12 dB   | 0.33 (0.65) | 0.67 (0.78) | 0.42 (0.67) |
| /apa/    | 0 dB     | 0.33 (0.78) | 0.33 (0.78) | 0.42 (0.67) |
| /apa/    | No noise | 1.42 (0.9)  | 1.33 (0.65) | 1.17 (0.72) |
| /ata/    | -12 dB   | 0.67 (0.78) | 0.83 (0.72) | 0.42 (0.67) |
| /ata/    | 0 dB     | 1.08 (0.99) | 1.25 (0.62) | 0.75 (0.75) |
| /ata/    | No noise | 1.75 (0.52) | 1.92 (0.29) | 1.75 (0.62) |
| /aka/    | -12 dB   | 0.83 (0.94) | 0.75 (0.75) | 0.92 (0.79) |
| /aka/    | 0 dB     | 0.83 (0.83) | 0.92 (0.79) | 1.00 (0.85) |
| /aka/    | No noise | 1.92 (0.29) | 2.00 (0)    | 1.83 (0.58) |

Visual-only task (max = 6):

| Stimulus | Dys         | CA          | RA          |
|----------|-------------|-------------|-------------|
| /apa/    | 3.41 (1.62) | 5.5 (0.67)  | 4.03 (2.11) |
| /ata/    | 2.91 (2.06) | 3.58 (1.44) | 1.83 (2.08) |
| /aka/    | 1.08 (1.56) | 3.66 (2.10) | 2.50 (2.43) |

The data from the auditory-only task were analyzed in a 3*3*3 ANOVA with group (Dys, CA, RA) as a between-subject factor, and noise (-12 dB, 0 dB, no noise) and stimulus (/apa/, /ata/, /aka/) as within-subject factors. The number of correct identifications was the repeated measure. No group effect was found (F(2,33) = 1.228, p = .30), nor any interactions (all but one F < 1 at p > .80; one F > 1 at p > .20). The noise factor had a highly significant effect (F(2,66) = 125.664, p < .0001): identifications were best in the no-noise condition (mean 83%, vs. 38% for 0 dB and 32% for -12 dB). The stimulus effect was also highly significant (F(2,66) = 9.101, p < .0005): /apa/ was identified significantly less well (36%) than /ata/ (58%) and /aka/ (61%), which did not differ from each other.
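For readers who want to organize the repeated measure in the same way, the sketch below (hypothetical column names and toy values; the authors do not say which analysis software they used) shows a long-format layout with one row per child, noise level, and stimulus, together with the aggregation that yields percentages of correct identifications per noise level, analogous to the 83%, 38%, and 32% quoted above:

```python
import pandas as pd

# One row per child x noise x stimulus cell; "correct" is the number of
# correct identifications out of 2 presentations (toy values only).
scores = pd.DataFrame({
    "child":    [1, 1, 1, 2, 2, 2],
    "group":    ["Dys", "Dys", "Dys", "CA", "CA", "CA"],
    "noise":    ["no noise", "0 dB", "-12 dB", "no noise", "0 dB", "-12 dB"],
    "stimulus": ["apa"] * 6,
    "correct":  [2, 1, 0, 2, 1, 1],
})

# Percentage correct per noise level, pooled over groups and stimuli.
percent_by_noise = scores.groupby("noise")["correct"].mean() / 2 * 100
print(percent_by_noise)
```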

The data from the visual-only task were analyzed in a 3*3 ANOVA with group (Dys, CA, RA) as a between-subject factor and stimulus (/apa/, /ata/, /aka/) as a within-subject factor. The number of correct identifications was the repeated measure. Unlike in the auditory-only task, there was a significant group effect (F(2,33) = 8.286, p < .005): Group CA (71%) outperformed Groups Dys and RA, which did not differ from each other (41% and 47%, respectively). There was also a significant stimulus effect (F(2,66) = 11.577, p < .0001). The viseme /apa/ was correctly identified significantly more often (72%) than the visemes /ata/ and /aka/, which did not differ (46% and 40%, respectively). The stimulus-by-group interaction was nonsignificant (F(4,66) = 1.686, p > .16).

The results of the two tasks of Experiment 1 confirmed and supplemented the findings obtained in the Hayes et al. (2003) study comparing normal-learning children to children with learning disabilities. Although our stimulus-presentation time was shorter (350 ms instead of 600 ms in the Hayes et al. study), our three groups of children did not differ on auditory identification, whereas Group CA outperformed the other two groups on visual identification. However, contrary to their study, no group-by-stimulus interaction was found in the visual-only condition.

Whereas our three groups all performed better on /apa/ than on the other two visemes, this result was obtained only for the learning-disabled group in Hayes et al. In fact, the pooled scores of our three groups for the visemes /apa/, /ata/, and /aka/ (72%, 46%, and 40%, respectively) were similar to those of their learning-disabled group (82%, 49%, and 46%). Curiously, their group of normal-learning children identified the viseme /aka/ the best (93%). While not specifically stated by the authors, one can assume that they filmed the entire face of their speaker. In that case, the normal-learning children may have made use of several visual cues (not only lip movements, but also facial expressions, for example), which the learning-disabled children would have had trouble doing. In our study, we presented only the lower part of our speaker's face (mouth and chin). In addition to being unusual, this situation prevented the subjects from processing cues other than lip movements. The fact that the viseme /apa/ contains a bilabial consonant made it much easier to identify than the visemes with a dental or velar place of articulation.

The stimuli in our two presentation conditions gave rise to very different performance patterns. The identification patterns in the auditory-only and visual-only conditions were reversed for /apa/ (36% and 72%, respectively) and /aka/ (61% and 40%). This phenomenon (i.e., better visual identification of /p/ and better auditory identification of /k/) could be due to the perceptual salience of the stimuli in each modality (Colin, Radeau, Deltenre, Demolin, & Soquet, 2002): /p/ is a bilabial and so conveys substantial visual information, whereas /k/ is a velar or palatal with a greater burst energy (at least in French) and so conveys more auditory information than labials do. Also, and as expected, background noise impeded identification for all groups in the auditory-only condition.

Two important findings stand out from this experiment on unimodal information processing. First, the dyslexic children did not differ from the two control groups on the auditory identification of these stimuli. Second, while the dyslexics' performance was worse when they had to make use of labial information, it was worse only compared to their same-age peers, not compared to children of the same reading level. This finding reduces the explanatory power of the phonological-impairment hypothesis. Everything suggests that an improvement in reading skills is accompanied by better processing of labial information.

This raises the question of whether dyslexic children would differ from the other two groups when responding to stimuli containing both auditory and visual information. In other words, are they as prone as normal readers to the McGurk effect (Burnham & Dodd, 2004), or is their sensitivity to this effect diminished, as it was among the learning-disabled children in the Hayes et al. (2003) study? The aim of Experiment 2 was to answer this question.

Results of Experiment 2

Congruence task. The results of the congruence task (matched auditory and visual stimuli) are presented in Table 3.

Table 3. Mean number of correct identifications (max = 2) on the congruence task, by group, lag between the auditory and visual information, and noise level (standard deviations in parentheses).

| Time lag                      | Noise    | Stimulus | Dys         | CA          | RA          |
|-------------------------------|----------|----------|-------------|-------------|-------------|
| No lag                        | -12 dB   | /apa/    | 1.33 (0.78) | 1.67 (0.65) | 1.42 (0.9)  |
| No lag                        | -12 dB   | /ata/    | 0.92 (0.79) | 1.25 (0.62) | 1.00 (0.74) |
| No lag                        | -12 dB   | /aka/    | 0.5 (0.52)  | 1.00 (0.85) | 0.58 (0.79) |
| No lag                        | 0 dB     | /apa/    | 1.25 (0.87) | 1.83 (0.58) | 1.42 (0.67) |
| No lag                        | 0 dB     | /ata/    | 1.5 (0.67)  | 1.33 (0.89) | 1.25 (0.87) |
| No lag                        | 0 dB     | /aka/    | 0.58 (0.67) | 1.5 (0.79)  | 0.83 (0.83) |
| No lag                        | No noise | /apa/    | 1.83 (0.39) | 1.83 (0.58) | 1.67 (0.78) |
| No lag                        | No noise | /ata/    | 1.83 (0.57) | 1.83 (0.58) | 1.92 (0.29) |
| No lag                        | No noise | /aka/    | 1.92 (0.29) | 1.67 (0.78) | 2.00 (0)    |
| Auditory 160 ms before visual | -12 dB   | /apa/    | 1.58 (0.67) | 1.83 (0.58) | 1.00 (0.85) |
| Auditory 160 ms before visual | -12 dB   | /ata/    | 1.00 (0.73) | 1.12 (0.83) | 0.83 (0.72) |
| Auditory 160 ms before visual | -12 dB   | /aka/    | 0.67 (0.89) | 1.42 (0.79) | 0.83 (0.72) |
| Auditory 160 ms before visual | 0 dB     | /apa/    | 1.75 (0.62) | 1.67 (0.65) | 1.67 (0.65) |
| Auditory 160 ms before visual | 0 dB     | /ata/    | 1.42 (0.67) | 1.33 (0.78) | 1.08 (0.9)  |
| Auditory 160 ms before visual | 0 dB     | /aka/    | 0.25 (0.62) | 1.25 (0.87) | 0.67 (0.78) |
| Auditory 160 ms before visual | No noise | /apa/    | 1.83 (0.39) | 1.75 (0.62) | 1.92 (0.29) |
| Auditory 160 ms before visual | No noise | /ata/    | 1.83 (0.39) | 1.75 (0.62) | 1.75 (0.62) |
| Auditory 160 ms before visual | No noise | /aka/    | 1.92 (0.29) | 1.75 (0.62) | 2.00 (0)    |

The congruence-task data were analyzed in a 3*2*3*3 ANOVA with group (Dys, CA, RA) as a between-subject factor, and synchrony (lag, no lag), noise (-12 dB, 0 dB, no noise), and stimulus (/apa/, /ata/, /aka/) as within-subject factors. The number of correct identifications was the repeated measure. No group effect (F(2,33) = 1.674, p = .20) or synchrony effect (F(1,33) = .038, p = .84) was obtained. Only the noise (F(2,66) = 80.068, p < .0001) and stimulus (F(2,66) = 14.933, p < .0001) factors had main effects. The louder the noise, the more difficult it was to identify the stimulus, and /apa/ was easier to identify than /ata/ (p < .005), which in turn was easier than /aka/ (p < .05) (81%, 69%, and 59%, respectively). The only significant interactions were between noise and group (F(4,66) = 4.958, p < .005) and between noise and stimulus (F(4,132) = 10.139, p < .0001). As Figure 2 shows, the dyslexic children performed similarly to the RA children; these two groups were more sensitive to the noise than Group CA was. In addition, in this situation where both auditory and visual information were available, the performance pattern was the opposite of that found for the auditory-only task in Experiment 1: /aka/ was the stimulus most affected by noise (Figure 3). It appears as though the hindrance created by the noise led the children to rely more on visual than on auditory information. The advantage of the bilabial information carried by the consonant /p/ was found again here, as in the visual-only task of Experiment 1.

In order to assess the extent to which visual information contributed to auditory-stimulus identification, an additional analysis compared the auditory-only performance with the performance in the congruent conditions with no audio-to-visual lag. The number of correct responses was analyzed in a 3*2*3*3 ANOVA with group (Dys, CA, RA) as a between-subject factor, and task (auditory-only, congruent audio-visual), noise (-12 dB, 0 dB, no noise), and stimulus (/apa/, /ata/, /aka/) as within-subject factors. For all groups, the availability of both kinds of information improved performance, particularly in the conditions with noise (task-by-noise interaction: F(2,66) = 4.894, p < .01). For S/N ratios of -12 dB and 0 dB, the performance gains were 21% (p < .0001) and 25% (p < .0001), respectively, whereas the gain was only 8% in the no-noise condition (p < .10). However, it was the chronological-age controls who benefited the most from the visual information. This group effect (F(2,33) = 3.574, p < .05) was in fact caused by the CA group alone, since it differed from the other two groups while they did not differ from each other (p = .95). This result is consistent with the one obtained in Experiment 1 and thus supplies an additional argument supporting the facilitating effect of better reading skills.

Figure 2. Mean number of correct identifications (max = 2), by group and noise level. Dys: dyslexics. CA: same-chronological-age controls. RA: same-reading-age controls.

Figure 3. Mean number of correct identifications (max = 2), by stimulus and noise level.

Incongruence task. In the incongruence task (mismatched auditory and visual stimuli), A/apa/+V/ata/ almost never gave rise to an illusion in any of the three groups. This stimulus was therefore left out of the data analysis presented in Table 4.

The measure here was the number of perceptual illusions — whether fusions (/aCa/) or combinations (/aCCa/) — predicted by the McGurk effect. The possible fusions and combinations were /apta/ or /atpa/ for A/ata/+V/apa/; /ata/, /apka/, or /akpa/ for A/aka/+V/apa/; and /ata/, /apka/, or /akpa/ for A/apa/+V/aka/.
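This scoring rule amounts to a lookup from each incongruent stimulus to its set of accepted illusory responses; the sketch below (the naming is ours) illustrates how a child's responses could be scored:

```python
# Accepted fusions and combinations for each incongruent stimulus, as listed above.
ILLUSIONS = {
    "A/ata/+V/apa/": {"apta", "atpa"},
    "A/aka/+V/apa/": {"ata", "apka", "akpa"},
    "A/apa/+V/aka/": {"ata", "apka", "akpa"},
}

def count_illusions(stimulus: str, responses: list[str]) -> int:
    """Number of responses to `stimulus` that count as McGurk-type illusions."""
    return sum(response in ILLUSIONS[stimulus] for response in responses)

# Example: two presentations of A/apa/+V/aka/, one fusion (/ata/) reported.
count_illusions("A/apa/+V/aka/", ["ata", "apa"])   # -> 1
```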

Table 4. Mean number of illusions (max = 2) on the incongruence task for each audio-visual stimulus (standard deviations in parentheses).

| Time lag                      | Noise    | Stimulus      | Dys         | CA          | RA          |
|-------------------------------|----------|---------------|-------------|-------------|-------------|
| No lag                        | -12 dB   | A/ata/+V/apa/ | 0           | 0.17 (0.39) | 0.42 (0.79) |
| No lag                        | -12 dB   | A/aka/+V/apa/ | 0.17 (0.39) | 0.5 (0.52)  | 0.67 (0.78) |
| No lag                        | -12 dB   | A/apa/+V/aka/ | 1.33 (0.78) | 1.42 (0.67) | 1.33 (0.78) |
| No lag                        | 0 dB     | A/ata/+V/apa/ | 0           | 0.42 (0.67) | 0.5 (0.79)  |
| No lag                        | 0 dB     | A/aka/+V/apa/ | 0.17 (0.39) | 0.25 (0.45) | 0.58 (0.79) |
| No lag                        | 0 dB     | A/apa/+V/aka/ | 1.25 (0.62) | 0.67 (0.49) | 1.17 (0.83) |
| No lag                        | No noise | A/ata/+V/apa/ | 0.08 (0.29) | 0.67 (0.78) | 0.42 (0.79) |
| No lag                        | No noise | A/aka/+V/apa/ | 0           | 0.33 (0.65) | 0.17 (0.39) |
| No lag                        | No noise | A/apa/+V/aka/ | 0.5 (0.67)  | 0.83 (0.83) | 0.33 (0.65) |
| Auditory 160 ms before visual | -12 dB   | A/ata/+V/apa/ | 0           | 0.67 (0.78) | 0.58 (0.79) |
| Auditory 160 ms before visual | -12 dB   | A/aka/+V/apa/ | 0.25 (0.45) | 0.17 (0.39) | 0.5 (0.67)  |
| Auditory 160 ms before visual | -12 dB   | A/apa/+V/aka/ | 0.75 (0.75) | 0.83 (0.72) | 0.75 (0.75) |
| Auditory 160 ms before visual | 0 dB     | A/ata/+V/apa/ | 0           | 0.17 (0.39) | 0.58 (0.67) |
| Auditory 160 ms before visual | 0 dB     | A/aka/+V/apa/ | 0.58 (0.73) | 0.33 (0.85) | 0.42 (0.67) |
| Auditory 160 ms before visual | 0 dB     | A/apa/+V/aka/ | 1.17 (0.83) | 0.42 (0.67) | 1.17 (0.72) |
| Auditory 160 ms before visual | No noise | A/ata/+V/apa/ | 0           | 0.58 (0.67) | 0.5 (0.79)  |
| Auditory 160 ms before visual | No noise | A/aka/+V/apa/ | 0           | 0.5 (0.67)  | 0.42 (0.79) |
| Auditory 160 ms before visual | No noise | A/apa/+V/aka/ | 1.00 (0.74) | 0.25 (0.62) | 0.33 (0.65) |

The incongruence-task data were analyzed in a 3*2*3*3 ANOVA with group (Dys, CA, RA) as a between-subject factor, and synchrony (lag, no lag), noise (-12 dB, 0 dB, no noise), and stimulus (A/ata/+V/apa/, A/aka/+V/apa/, A/apa/+V/aka/) as within-subject factors. The number of illusions was the repeated measure. No group effect (F(2,33) = 1.836, p = .17) or synchrony effect (F(1,33) = 1.725, p = .19) was found. The noise factor had a significant effect (F(2,66) = 4.331, p < .01): illusions in noise (29% and 27% for -12 dB and 0 dB, respectively) significantly outnumbered those in the no-noise condition (19%). The noise-by-group interaction was also significant (F(4,66) = 2.799, p < .05). However, pairwise comparisons showed that this effect was due solely to the 0 dB condition, in which, curiously, Group CA perceived fewer illusions than the other two groups. The stimulus also had a main effect on the number of perceived illusions (F(2,66) = 32.489, p < .0001): A/apa/+V/aka/ triggered a significantly higher number of illusions than the other two stimuli (all p's < .0001), which did not differ from each other (p = .85). However, the stimulus-by-group interaction (F(4,66) = 4.233, p < .005) revealed an interesting phenomenon: it was only on the stimulus A/apa/+V/aka/ that Group Dys reported perceptual illusions, and this effect was stronger in that group than in the other two (Figure 4).

Figure 4. Mean number of illusions (max = 2), by group and stimulus.

The facilitatory role of A/apa/+V/aka/ in the perception of illusions was demonstrated by two interactions: stimulus-by-synchrony (F(2,66) = 4.649, p < .05) and stimulus-by-noise (F(4,132) = 6.272, p < .0001). This was the only stimulus for which the lag between the auditory and visual information produced an effect: presenting the /apa/ sound 160 ms before the viseme /aka/ decreased the McGurk effect (Figure 5). This was also the only stimulus for which a noise effect was observed: there were more illusions with any amount of noise than with no noise at all (Figure 6).

Figure 5. Mean number of illusions (max = 2), by stimulus and synchrony condition.

Figure 6. Mean number of illusions (max = 2), by stimulus and noise level.

Responses based solely on visual information or solely on auditory information were also analyzed, by calculating an index equal to the number of visual responses divided by the number of auditory responses. A value greater than 1 means that visual information was favored, whereas a value less than 1 indicates that auditory information took precedence. The mean values of this index were 1.4, 1.5, and 0.9 for Groups Dys, CA, and RA, respectively. The differences between these values were not significant, however, so no conclusion can be drawn here. At best, we can note that this result pattern did not replicate Hayes et al.'s (2003) finding that learning-disabled children gave more vision-based responses than normal-learning children in the elevated-noise condition.
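The index can be written compactly; the sketch below (our own naming, with hypothetical counts) shows the computation and its interpretation:

```python
def modality_index(n_visual_responses: int, n_auditory_responses: int) -> float:
    """> 1: visual information favored; < 1: auditory information favored."""
    return n_visual_responses / n_auditory_responses

# Hypothetical counts giving an index of 1.4, i.e., vision favored
# (the mean reported above for Group Dys).
modality_index(14, 10)   # -> 1.4
```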

DISCUSSION

The two experiments reported here explored the audio-visual integration of speech by native French-speaking dyslexic children. Dyslexic performance was compared with that of two peer groups, one of the same chronological age and the other of the same reading level. The tasks performed by the children consisted of identifying three /aCa/ stimuli in which the consonants were voiceless stops differing in place of articulation. The stimuli were presented unimodally (auditory-only or visual-only) in Experiment 1 and bimodally (auditory and visual) in Experiment 2. In the second experiment, the information provided in the two modalities was either congruent or incongruent, the latter condition being likely to give rise to illusory percepts of the McGurk type. To simulate natural listening and speaking conditions, the auditory stimuli (alone or combined with visual stimuli) were presented in three conditions of background noise. In addition, whenever both auditory and visual information was provided, the auditory information was presented either at the same time as the visual information or 160 ms in advance.

In Experiment 1, despite the fact that the stimulus presentation time was shorter than in the Hayes et al. (2003) experiment, the dyslexic children did not differ from their same-age or same-reading-level controls in the auditory-only condition, regardless of how much background noise accompanied the stimuli. The only group effect was observed in the visual-only condition, where the same-age control group outperformed the other two groups. This result is particularly interesting: it reduces the explanatory power of the phonological-impairment hypothesis and suggests that better reading skills can lead to better processing of information conveyed by the lips. In this vein, it would be worthwhile to compare the performance of dyslexic children to that of deaf children who use cued speech. Also, the viseme /apa/ was correctly identified more often than the other two. The lack of a stimulus-by-group interaction indicated that the higher /apa/ identification rates were of the same magnitude for all three groups. It seems, then, that dyslexics are not impaired in this area: they appear to process speech-related visual information as well as normal readers of the same reading age, but less well than normal readers of the same chronological age. Comparing the data obtained for each sensory modality, we can see that the results reflect the differing perceptual characteristics of each modality: the phonemes (stop consonants) were identified by way of their burst energy, and the visemes on the basis of the visual information they conveyed.

In Experiment 2, the noise-by-group interaction observed when congruent auditory and visual information was presented showed that the same-chronological-age children were the least sensitive to the noise effect. Thus, the dyslexics were just as hindered by noise as the reading-age controls in our study.

Desynchronization (by 160 ms) greatly reduced the number of fusions. This finding confirms previously published results (Massaro & Cohen, 1993) and supports the idea that this type of temporal lag destroys the configurational coherence of audio-visual stimuli (Cathiard & Tiberghien, 1994; Abry, Lallouache, & Cathiard, 1996; Vroomen, Keetels, de Gelder, & Bertelson, 2004).

As a whole, the results of this study — which showed that the performance of the dyslexic subjects differed only from that of their same-age peers — argue for a developmental lag. We interpret the greater ability of the same-age group to process visual (labial) cues as an effect of their better reading skills.

The point on which the dyslexic participants differed clearly from the normal readers, regardless of age, was that they rarely gave combination responses; the dyslexics did not differ from the other two groups on fusion responses. This finding may be related to the difficulties experienced by dyslexic children in perceiving and producing consonant clusters, as noted by Bruck and Treiman (1990).


Bibliography

Abry, C., Lallouache, M.T., & Cathiard, M.-A. (1996). How can coarticulation models account for speech sensitivity to audio-visual desynchronization? In D. Stork & M. Hennecke (Eds.), Speechreading by Humans and Machines (NATO ASI Series F: Computer and Systems Sciences, Vol. 150, pp. 247-255). Berlin: Springer-Verlag.

Burnham, D., & Dodd, B. (2004). Auditory-visual integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology, 45(4), 204-220.

Dodd, B., McIntosh, B., Erdener, D., & Burnham, D. (2008). Perception of the auditory-visual illusion in speech perception by children with phonological disorders. Clinical Linguistics & Phonetics, 22, 69-82.

Bruck, M., & Treiman, R. (1990). Phonological awareness and spelling in normal children and dyslexics: The case of initial consonant clusters. Journal of Experimental Child Psychology, 50, 156-178.

Cathiard, M.A., & Tiberghien, G. (1994). Le visage de la parole: une cohérence bimodale temporelle ou configurationnelle. Psychologie française, Special issue "La reconnaissance des visages", 39(4), 357-374.

Casalis, S., Colé, P., & Sopo, D. (2004). Morphological awareness in developmental dyslexia. Annals of Dyslexia, 54(1), 114-138.

Cavé, C., Ragot, R., & Fano, M. (1992). Perception of sound-image synchrony in cinematographic conditions. Fourth Workshop on Rhythm Perception and Production, 62.

Colin, C., & Radeau, M. (2003). Les illusions McGurk dans la parole: 25 ans de recherche. L'Année Psychologique, 104, 497-542.

Colin, C., Radeau, M., Deltenre, P., Demolin, D., & Soquet, A. (2002). The role of sound intensity and stop-consonant voicing on McGurk fusions and combinations. European Journal of Cognitive Psychology, 14(4), 475-491.

Ducrot, S., & Grainger, J. (2007). Deployment of spatial attention to words in central and peripheral vision. Perception and Psychophysics, 69(4), 578-590.

Goswami, U., Thomson, J., Richardson, U., Stainthorp, R., Hughes, D., Rosen, S., & Scott, S.K. (2002). Amplitude envelope onsets and developmental dyslexia: A new hypothesis. Proceedings of the National Academy of Sciences, 99(16), 10911-10916.

Hayes, E.A., Tiippana, K., Nicol, T.G., Sams, M., & Kraus, N. (2003). Integration of heard and seen speech: a factor in learning disabilities in children. Neuroscience Letters, 351, 46-50.

INSERM (2007). Dyslexie, dysorthographie, dyscalculie: Bilan des données scientifiques. Les éditions Inserm, Paris.

Kaufman, A.S., & Kaufman, N.L. (1993). K-ABC, batterie pour l'examen psychologique de l'enfant. Paris: Editions du Centre de Psychologie Appliquée.

Lété, B., & Ducrot, S. (2008). Visuo-attentional deficits in dyslexic readers in the Reicher-Wheeler task. Current Psychology Letters: Behaviour, Brain and Cognition, 24(1), 1-20.

McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-748.

Manis, F.R., McBride-Chang, C., Seidenberg, M.S., Keating, P., Doi, L.M., Munson, B., & Petersen, A. (1997). Are speech perception deficits associated with developmental dyslexia? Journal of Experimental Child Psychology, 66(2), 211-235.

Massaro, D.W., & Cohen, M.M. (1993). Perceiving asynchronous bimodal speech in consonant-vowel and vowel syllables. Speech Communication, 13(1-2), 127-134.

Massaro, D.W., & Cohen M.M. (1996). Perceiving speech from inverted faces. Perception and Psychophysics, 58(7), 1047-1065.

Messbauer, V.C.S, & De Jong, P.F. (2003). Word, nonword, and visual paired associate learning in Dutch dyslexic children. Journal of Experimental Child Psychology, 84(2), 77-96.

Nikov, M. (1992). Phonétique générale et française. Paris: Presses Universitaires de France.

Reed, M.A. (1989). Speech perception and the discrimination of brief auditory cues in reading disabled children. Journal of Experimental Child Psychology, 48(2), 272-292.

Rocheron, I., Lorenzi, C., Füllgrade, C., & Dumont, A. (2003). Temporal envelope perception in dyslexic children. Neuroreport, 13, 1-6.

Serniclaes, W., Sprenger-Charolles, L., Carré, R., & Démonet, J.F. (2001). Perceptual discrimination of speech sounds in dyslexics. Journal of Speech, Language, and Hearing Research, 44, 384-399.

Serniclaes, W., Van Heghe, S., Mousty, P., Carré, R., & Sprenger-Charolles, L. (2004). Allophonic mode of speech perception in dyslexia. Journal of Experimental Child Psychology, 87(4), 336-361.

Spinelli, D., De Luca, M., Judica, A., & Zoccolotti, P. (2002). Crowding effects on word identification in developmental dyslexia. Cortex, 38, 179-200.

Stein, J., & Walsh, V. (1997). To see but not to read: The magnocellular theory of developmental dyslexia. Trends in Neurosciences, 20(4), 147-152.

Swan, D., & Goswami, U. (1997). Phonological awareness deficits in developmental dyslexia and the phonological representations hypothesis. Journal of Experimental Child Psychology, 66(1), 18-41.

Tallal, P. (1980). Language disabilities in children: a perceptual or linguistic deficit? Journal of Pediatric Psychology, 5(2), 127-140.

Valdois, S., Bosse, M.L., & Tainturier, M.J. (2004). Cognitive correlates of developmental dyslexia: Review of evidence for a selective visual attentional deficit. Dyslexia, 10, 1-25.

Vroomen, J., Keetels, M., de Gelder, B., & Bertelson, P. (2004). Recalibration of temporal order perception by exposure to audio-visual asynchrony. Cognitive Brain Research, 22(1), 32-35.

Werker, J.F., & Tees, R.C. (1987). Speech perception in severely disabled and average reading children. Canadian Journal of Psychology, 41, 48-61.

Ziegler, J.C., Pech-Georges, C., George, F., Alario, F.X., & Lorenzi, C. (2005). Deficits in speech perception predict language learning impairment. PNAS, 102(39), 14110-14115.


References

Electronic reference

Mireille Bastien-Toniazzo, Aurélie Stroumza and Christian Cavé, “Audio-Visual Perception and Integration in Developmental Dyslexia: An Exploratory Study Using the McGurk Effect”, Current Psychology Letters [Online], Vol. 25, Issue 3, 2009 | 2010, online since 18 January 2010. URL: http://journals.openedition.org/cpl/4928; DOI: https://doi.org/10.4000/cpl.4928


About the authors

Mireille Bastien-Toniazzo

Laboratoire Parole et Langage, UMR 6057, Aix-Marseille Université & CNRS, 29 avenue Robert-Schuman, 13621 Aix-en-Provence Cedex 1, France. bastien@up.univ-aix.fr

Aurélie Stroumza

Université de Provence

Christian Cavé

Laboratoire Parole et Langage, UMR 6057, Aix-Marseille Université & CNRS


Copyright

The text and other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
