Abstract

We used electrophysiology to determine the time course and distribution of neural activation during an English word rhyme task in hearing and congenitally deaf adults. Behavioral performance by hearing participants was at ceiling, and their ERP data replicated two robust effects repeatedly observed in the literature. First, a sustained negativity, termed the contingent negative variation, was elicited following the first stimulus word. This negativity was asymmetric, being more negative over left than right sites. The second effect we replicated in hearing participants was an enhanced negativity (N450) to nonrhyming second stimulus words. This was largest over medial, parietal regions of the right hemisphere. Accuracy on the rhyme task by the deaf group as a whole was above chance level, yet significantly poorer than that of hearing participants. We examined ERP data only from deaf participants who performed the task above chance level (n = 9). We observed indications of subtle differences in ERP responses between deaf and hearing groups. Overall, however, the patterns in the deaf group were very similar to those in the hearing group. Deaf participants, like hearing participants, showed greater negativity to nonrhyming than rhyming words. Furthermore, the onset latency of this effect was the same as that observed in hearing participants. Overall, the neural processes supporting explicit phonological judgments are very similar in deaf and hearing people, despite differences in the modality of spoken language experience. This supports the suggestion that phonological processing is to a large degree amodal or supramodal.

INTRODUCTION

People born severely or profoundly deaf have repeatedly been shown to demonstrate at least some knowledge of the phonological (sublexical) structure of spoken words. Although awareness of phonological structure is usually poorer than that of hearing peers, deaf children and adults often perform above chance level when asked to make phonological judgments, such as whether two words rhyme (Dyer, MacSweeney, Szczerbinski, Green, & Campbell, 2003; Campbell & Wright, 1988). Orthographic overlap can be a powerful, though nondeterministic, cue to phonological structure. If available, deaf people will use orthographic information to inform phonological judgments. For example, rhyme judgment accuracy to word pairs such as “cat/mat” is typically higher than to nonoverlapping pairs such as “chair/bear” (James, Rajput, Brinton, & Goswami, 2009; Sterne & Goswami, 2000; Campbell & Wright, 1988). However, numerous studies have shown that people born profoundly deaf can have some level of phonological awareness of speech that goes beyond inferences made on the basis of orthography. It is hypothesized that they derive information about the sublexical structure of a spoken word from speechreading (lipreading; Charlier & Leybaert, 2000; Dodd, 1980) and articulatory information (MacSweeney, Brammer, Waters, & Goswami, 2009; MacSweeney, Waters, Brammer, Woll, & Goswami, 2008).

Despite the fact that deaf and hearing people use different inputs to establish phonological representations of speech, we and others have recently shown that deaf and hearing adults recruit very similar neural systems when making rhyme decisions in response to pictures (MacSweeney et al., 2008, 2009; Aparicio, Gounot, Demont, & Metz-Lutz, 2007). This network consists of the left inferior frontal gyrus, the left inferior and superior parietal lobules, and the superior frontal gyrus/anterior cingulate. This left-lateralized frontoparietal network has been engaged in numerous previous studies of phonological processing in hearing people (MacSweeney et al., 2008, 2009; Aparicio et al., 2007). Given the overwhelming similarity in the network engaged by deaf and hearing people performing a rhyme task, we argued that this network is to a large extent “amodal” or “supramodal.” These data provide neurobiological support for the interpretation of previous behavioral studies in this field (e.g., Leybaert, 1998).

In the current study, we sought support for the hypothesis that speech-based phonological processing is “amodal” using a complementary neuroimaging approach: electrophysiology. The aim was to contrast the ERP responses observed in deaf and hearing adults as they performed a rhyme judgment task on written, sequentially presented English words. One word was presented (stimulus 1 [S1]), followed after a short interval by a second word (stimulus 2 [S2], or target). The participant's task was to decide whether the target rhymed with S1. Unlike fMRI, ERPs have the temporal resolution to examine responses to S1 and S2 separately. In hearing adults performing such a task, two modulations of the ERP waveform are reliably observed in response to these two inputs.

First, a sustained negativity, termed the contingent negative variation (CNV; Walter, Cooper, Aldridge, McCallum, & Winter, 1964), is elicited following the S1 (Rugg & Barrett, 1987; Rugg, 1984a, 1984b). In hearing people performing visual rhyming tasks using words, letters, or pictures, this negativity is asymmetric, being more negative over left than right anterior electrodes. Different CNV distributions are observed during nonlinguistic tasks, such as letter case judgment, even when the stimuli rhyme (Rugg, 1984b). One interpretation of the CNV asymmetry during a rhyme task is that it reflects rehearsal of S1 in anticipation of the target, because it is greater at longer than shorter ISIs (Rugg, 1984b). However, in a developmental study, Grossi et al. (Grossi, Coch, Coffey-Corina, Holcomb, & Neville, 2001) showed that, although the asymmetry of the CNV developed with age and correlated with reading and spelling scores, it did not correlate with measures of working memory. Grossi et al. therefore suggested that it is unlikely to reflect rehearsal, but rather that it may reflect “allocation of task-specific resources to areas specific to phonological decoding of written words” (p. 621); specifically, they argue, left prefrontal cortex.

The second component reliably observed during visual rhyme tasks in hearing adults occurs in response to the second stimulus, the target word. This consists of greater negativity, at around 450 msec, to nonrhyming than rhyming written words, which is largest over right temporo-parietal sites (Weber-Fox, Spencer, Cuadrado, & Smith, 2003; Grossi et al., 2001; Perez-Abalo, Rodriguez, Bobes, Gutierrez, & Valdes-Sosa, 1994; Barrett & Rugg, 1989, 1990; Rugg & Barrett, 1987; Rugg, 1984b). Although this effect is reliably larger at right than left electrodes, Khateb et al. (2007) report that the actual generators of this modulation may be left temporal and frontal regions during rhyming trials and bilateral temporal regions during nonrhyming trials. Although this component is similar to that found in studies of semantic incongruity (Kutas & Hillyard, 1980a, 1980b, 1982), it does not appear to depend on semantic processing, because the same effect has been found for nonwords (Coch, Hart, & Mitra, 2008; Rugg, 1984a) and single-letter stimuli (Coch, George, & Berger, 2008; Coch, Hart, et al., 2008). Nevertheless, the responses to phonological and semantic incongruity are thought to belong to the same general class of negativities, peaking around 400 msec, reflecting a response to a stimulus that does not conform to expectancy (Rugg, 1984a). Given the similarity between the components reflecting the effects of semantic and phonological context, the enhanced negativity in response to nonrhymes compared with rhymes has been given a number of different names to distinguish the two effects. The component has been referred to as the N450 (Barrett & Rugg, 1990), the N350 (Weber-Fox, Spencer, Spruill, & Smith, 2004; Weber-Fox et al., 2003), and the rhyme effect, defined as the difference wave between rhyming and nonrhyming trials (Grossi et al., 2001). For consistency with the early studies conducted by Rugg and colleagues, we refer to this component as the N450 (Barrett & Rugg, 1989, 1990; Rugg & Barrett, 1987; Rugg, 1984a, 1984b; Rugg, Lines, & Milner, 1984) and to the enhanced negativity of this component in response to nonrhyming words as the “N450 modulation.”

Our previous fMRI study suggested that the same core network is recruited by deaf and hearing people during a rhyme judgment task. However, activation within this network was not identical. We reported greater activation of left inferior frontal regions in deaf than in hearing participants, even though the groups were matched on reading and on rhyme task performance (MacSweeney et al., 2008). We have also reported the same weighting within this network in a group of hearing adults with dyslexia (MacSweeney et al., 2009). Deaf people and hearing people with dyslexia both have difficulties with phonological awareness. Given the similarity of the data observed in our fMRI studies of these two groups, we turned to ERP studies of dyslexic adults to inform our predictions for this study. To our knowledge, there is only one such study in the literature. Russeler et al. (Russeler, Becker, Johannes, & Munte, 2007) report that dyslexic adults performing a written word rhyme task show a delayed and longer-lasting N450 modulation, with no reduction in amplitude, compared with typically reading controls. In contrast, studies of dyslexic children performing a written word rhyme task have reported reduced amplitude of the N450 modulation (Jednorog, Marchewka, Tacikowski, & Grabowska, 2010; McPherson, Ackerman, Holcomb, & Dykman, 1998; Ackerman, Dykman, & Oglesby, 1994). Reduced amplitude of the N450 modulation in dyslexic children is more likely to reflect reduced phonological processing proficiency than maturational factors, since the distribution, onset latency, and amplitude of the N450 modulation are similar in hearing nondyslexic children from the age of 7 years through to adulthood (Weber-Fox et al., 2003; Grossi et al., 2001; Ackerman et al., 1994).

In the current study, we examined the identity and timing of the neural processes supporting rhyming in deaf and hearing adults. On the basis of previous studies of phonological judgment in deaf adults, it was predicted that the deaf group would perform above chance level on the rhyme judgment task, but that their performance would be poorer than that of hearing adults. With regard to the ERP data, given the similarity in the neural systems recruited during a picture rhyme task in our previous fMRI studies (MacSweeney et al., 2008, 2009), we predicted very similar, but nonidentical, ERP time courses and distributions in deaf and hearing adults performing the rhyme task. With regard to the CNV, we made no specific predictions about group differences, given that there are no relevant ERP studies with deaf people and previous studies of dyslexic adults have not analyzed this component. With regard to the N450, we predicted that both groups would show enhanced negativity to nonrhyming than rhyming words and that the topography of this effect would be similar between groups. However, with regard to the timing of the N450 modulation, on the basis of the one existing ERP study of dyslexic adults performing a rhyme task (Russeler et al., 2007), we predicted that a later onset latency and longer duration of the N450 modulation might be seen in deaf than in hearing participants.

METHODS

Participants

Thirty participants were tested. All were right-handed (Edinburgh Handedness Inventory; Oldfield, 1971), had normal or corrected-to-normal vision, had no known additional disabilities or neurological damage, and none were taking any medication that might affect brain function.

Fifteen deaf participants (5 men) all reported being severely or profoundly deaf from birth. Audiometry data that were available for 7 of the 15 participants confirmed this, with all having a hearing loss of at least 85 dB in their better ear. To ensure consistency of language background, all were native signers of American Sign Language (ASL), 14 having acquired it from their deaf parents. One additional participant was included who had learned ASL from his extended family, which included 46 deaf relatives through 14 generations, and from his hearing mother, who was a native signer of ASL having acquired it from her deaf parents. For the purposes of this study, he was therefore considered to be a native signer. Fifteen hearing participants (5 men) were also tested. All were native monolingual speakers of American English and knew no ASL.

Before the ERP testing session, all participants completed a questionnaire regarding hearing status, language background, and education level (see Table 1 for participant characteristics). Participants indicated their education level on a 7-point scale: 1 = less than 7th grade (approximately age 12) to 7 = master's or PhD degree. They also performed pen-and-paper tests of reading and nonverbal IQ (NVIQ). The NVIQ test was a shortened version of Raven's Advanced Progressive Matrices (Bors & Stokes, 1998). Because this test is not fully standardized, raw scores were used to match the deaf and hearing groups. Both the vocabulary and comprehension subsections of the Gates–MacGinitie reading test (Level 4, Form S) were administered (MacGinitie, MacGinitie, Maria, & Dreyer, 2000). The Gates–MacGinitie reading test assesses reading in terms of grade equivalent up to a “post-high-school” classification (raw scores are reported in Table 1 for group comparisons). Approximate reading ages were established on the basis of these scores. A “post-high-school” classification was attributed the very conservative reading age of 18 years.

Table 1. 

Characteristics of Deaf and Hearing Participants Tested: Mean (SD) [Range] of Age, NVIQ (Shortened Version of Raven's Advanced Progressive Matrices [APM]), Reading Level (Gates–MacGinitie Reading Test), and Education Level


Group              Age* (Months)         NVIQ (APM) (Max = 12)   Reading Raw Score** (Max = 93)   Education Level (Max = 7)
Deaf (n = 15)      409 (129) [255–677]   7.1 (3.1) [3–12]        79.9 (11.8) [49–93]              5.9 (1.0) [4–7]
Hearing (n = 15)   306 (97) [224–549]    7.6 (3.0) [3–11]        90.4 (1.5) [87–92]               5.6 (0.74) [5–7]

Deaf and hearing participants did not differ on NVIQ or education level (ps > .1). However, hearing participants were younger (*p < .05) and better readers than deaf participants (**p < .005).

NB: see Table 3 for characteristics of participants included in the ERP analyses.

Deaf and hearing participants did not differ on NVIQ, p > .1, or education level, p > .1. However, hearing participants were younger, t(28) = 2.46, p < .05, and better readers, t(28) = −3.43, p < .005, than deaf participants. Fourteen of the fifteen hearing participants had a “post-high-school” reading age. The one exception had a reading age of approximately 17 years 6 months (grade 12.5). Five of the 15 deaf readers had a reading age classed as “post-high-school.” The mean reading age of the deaf group as a whole was 14 years 6 months (SD = 39 months; range = 8 years 8 months to approximately 18 years [post-high-school]).

Stimuli

One hundred rhyming pairs of English words were established. The spelling of the rhyme was always different across items (e.g., chair/bear). All words were monosyllables, had a single-consonant coda, and were judged by a deaf colleague to be appropriate for use with the target deaf adult population. These stimuli were then split into two sets of 50 rhyming pairs. The S1s and the targets were matched within each set, and also between the two sets, on number of letters, number of phonemes, frequency (Kucera & Francis, 1967), and, from the MRC database (Coltheart, 1981), on number of orthographic neighbors, familiarity, concreteness, and imageability (all ps > .1). The two sets of rhyming pairs were also matched on the number of examples they included of each vowel type.

Each target was then re-paired with a new S1 with which it did not rhyme but which shared the same degree of orthographic overlap as the rhyming word pair. This ensured that orthography could not be used as a cue to whether two words rhymed. As with rhyming S1s, all orthographic control S1s were monosyllables with no clustered codas. To ensure that all rhyming and nonrhyming pairs shared the same degree of orthographic overlap, orthographic control S1 stimuli were selected using the following criteria: (1) The letters shared by rhyming S1s and targets were identified. For example, tale and snail share the letters “a” and “l.” Where possible, the nonrhyming S1 had the same shared letters in the same “absolute” position as in the rhyming S1 (e.g., calm/snail). (2) When this was not possible, the letters were maintained in the same relative position (e.g., ham–lamb; arm–lamb). (3) Finally, the orthographic similarity metric devised by Davis (2010) was used. On this metric, there was no significant difference in the degree of orthographic overlap between rhyming (mean = .34; SD = .13) and nonrhyming pairs (mean = .36; SD = .14; p > .1).
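To make criteria (1) and (2) concrete, the following is a minimal Python sketch of the two positional checks. The helper functions are ours, written for illustration only; they are not the authors' stimulus-selection code, and the Davis (2010) spatial coding metric used in step (3) is more involved and is not reproduced here.

```python
# Illustrative checks for criteria (1) and (2); hypothetical helpers only.

def shared_letters(rhyme_s1: str, target: str) -> set:
    """Letters common to the rhyming S1 and the target (e.g., tale/snail -> {'a', 'l'})."""
    return set(rhyme_s1) & set(target)

def same_absolute_position(rhyme_s1: str, candidate_s1: str, letters: set) -> bool:
    """Criterion 1: each shared letter occupies the same index in both S1s
    (e.g., 'a' and 'l' sit at indices 1 and 2 in both 'tale' and 'calm')."""
    return all(
        any(i < len(candidate_s1) and candidate_s1[i] == ch
            for i, c in enumerate(rhyme_s1) if c == ch)
        for ch in letters
    )

def same_relative_position(rhyme_s1: str, candidate_s1: str, letters: set) -> bool:
    """Criterion 2: the shared letters appear in the same order in both S1s
    (e.g., 'a' before 'm' in both 'ham' and 'arm', matching target 'lamb')."""
    order = lambda w: [c for c in w if c in letters]
    return order(rhyme_s1) == order(candidate_s1)

letters = shared_letters("tale", "snail")                 # {'a', 'l'}
print(same_absolute_position("tale", "calm", letters))    # True
print(same_relative_position("ham", "arm", {"a", "m"}))   # True
```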

These steps meant that shared written vowels corresponded to different phonemes in rhyming and nonrhyming pairs. However, in nearly all cases the coda (the word-final consonant sound) was necessarily the same as in the rhyming trial (e.g., float–quote; sport–quote). In addition to these item-by-item criteria, rhyming and nonrhyming S1s were matched on number of letters, frequency (Kucera & Francis, 1967), and, from the MRC database (Coltheart, 1981), on orthographic neighborhood density and familiarity (all ps > .1). Nonrhyming S1s were also matched to targets on number of letters, frequency (Kucera & Francis, 1967), and, from the MRC database (Coltheart, 1981), on orthographic neighborhood density, familiarity, concreteness, and imageability (all ps > .1).

Each participant saw each target word only once. For half the participants, the target was preceded by a rhyming word (e.g., bear–fair); for the remaining participants, it was preceded by a nonrhyming word with the same degree of orthographic overlap as the rhyming pair (e.g., scar–fair). This design ensured that ERPs could be examined to the same target words in different contexts: preceded by either a rhyming or a nonrhyming word. The S1s necessarily differed between rhyming and nonrhyming conditions but, as described above, were extremely well matched. Twenty practice items were established: 10 rhyming and 10 nonrhyming pairs. Only words not seen in the main task were used. All rhyming pairs had different spellings, and orthographic overlap of nonrhyming pairs was established in the same way as in the experimental lists.

Procedure

During the ERP testing session, participants sat in a sound-attenuating, electrically shielded booth, 100 cm from the computer screen. Stimuli were presented in 24-point Courier New font. The vertical visual angle of the stimuli was therefore approximately 0.4°, and the horizontal visual angle was less than 0.7°. Participants saw a prompt (“Ready?”), indicating that they should press a button when they were ready to continue with the next trial. A fixation line then appeared and remained for the duration of the trial. After 1000 msec, the S1 appeared just above the line for 500 msec. After an ISI of 1000 msec, the target appeared on the screen for 500 msec (SOA = 1500 msec). The fixation line disappeared from the screen 1500 msec after the target disappeared. It was replaced with the prompt “rhyme?”, at which point the participant pressed a button with one hand if the words rhymed and with the other hand if they did not. See Figure 1 for the sequence of events. Response hand was counterbalanced across participants. Because the response was delayed (to avoid movement artifact in the ERPs), RT data were not collected. Participants were instructed to keep as still as possible and not to blink while the white line was on the screen. Stimuli were presented in a different randomized order to each participant.
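As a compact summary of the sequence just described, the trial can be written as an ordered list of events and durations. This is an illustrative reconstruction only, not the authors' presentation code; the event names are ours, and None marks self-paced or response-terminated events.

```python
# Trial timeline (durations in msec); illustrative reconstruction only.
TRIAL_TIMELINE = [
    ("ready_prompt",  None),  # "Ready?" -- waits for a button press to start the trial
    ("fixation_only", 1000),  # fixation line appears and stays on throughout the trial
    ("s1_word",        500),  # S1 shown just above the line
    ("isi",           1000),  # blank interval; S1-to-target SOA = 500 + 1000 = 1500 msec
    ("target_word",    500),  # S2 (target) shown
    ("post_target",   1500),  # fixation line remains, then disappears
    ("rhyme_prompt",  None),  # "rhyme?" -- delayed two-choice button-press response
]
```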

Figure 1. 

Schematic representation of sequence of events during the rhyme judgment task.

Before the actual test session, rhyme pair examples were given to deaf participants and then a number of novel examples were worked through with the experimenter. Both deaf and hearing participants then conducted 20 practice trials on the computer, in the ERP booth, before the start of the actual test session.

ERP Recording and Analysis

The EEG was recorded from all participants using an elastic cap (Electro-Cap International) containing 61 tin electrodes. Scalp electrode positions are shown in Figure 2. Impedances at all electrode sites included in the analyses were maintained below 3 kΩ for the duration of the experiment. The EEG was amplified by isolated bioelectric amplifiers (SA Instrumentation Co., Encinitas, CA) with a bandpass of 0.01–100 Hz and digitized at a sampling rate of 250 Hz. To monitor eye movements, EOG was recorded from additional electrodes at the outer canthus of each eye and below the left eye, as well as from the anterior cap electrodes above the eyes. Trials containing blinks or eye movements were rejected before averaging, following visual inspection of the responses and the subsequent use of an automated artifact rejection routine on a subject-by-subject basis. All electrodes were referenced to a right mastoid location during recording and later rereferenced to the average of the left and right mastoids. A 60-Hz notch filter was applied during the individual data averaging procedure. ERPs to S1s and S2s were averaged over an epoch of 1200 msec, with a 100-msec prestimulus baseline.
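For readers who want a concrete starting point, the referencing and epoching parameters above map onto a few lines of MNE-Python. This is a hedged illustration under stated assumptions: the file name, mastoid channel labels, event codes, and rejection thresholds are invented, and the original analysis used the authors' own averaging pipeline (visual inspection plus an in-house automated routine), not MNE.

```python
# Illustrative MNE-Python sketch of the recording/epoching parameters above;
# file name, event codes, and rejection thresholds are hypothetical.
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical file
raw.set_eeg_reference(["M1", "M2"])   # re-reference to the average of the two mastoids
raw.notch_filter(60.0)                # 60-Hz notch, applied here rather than at averaging

events = mne.find_events(raw)         # assumes a stimulus trigger channel
epochs = mne.Epochs(
    raw, events,
    event_id={"S1": 1, "rhyme_target": 2, "nonrhyme_target": 3},
    tmin=-0.1, tmax=1.2,              # 1200-msec epoch with 100-msec prestimulus baseline
    baseline=(-0.1, 0.0),
    reject=dict(eeg=100e-6, eog=80e-6),  # crude stand-in for the artifact rejection used
)
evoked_s1 = epochs["S1"].average()    # ERP to S1s, collapsed across conditions
```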

Figure 2. 

Distribution of electrodes used to collect ERP data. Unfilled electrodes were not included in the analysis. The two most lateral columns of electrodes (black), in the left and right hemispheres were combined in the analyses. Similarly, the two most medial columns of electrodes (dark gray) in the left and right hemispheres were also combined. This consolidation of the data resulted in two levels of the lateral/medial factor.

As anticipated, there were no differences between the ERPs to S1s preceding rhyming and nonrhyming words (all ps > .1). Therefore, ERPs to all S1s were averaged together. Electrodes in the most anterior row were excluded from the analyses because of increased impedances caused by skin potentials. In addition, because lateralization of neural responses was of specific interest in the current study, the midline electrodes were also excluded from the analyses (see Figure 2).

The remaining 46 electrodes were included in the analyses. These were analyzed using repeated-measures ANOVAs (Greenhouse–Geisser adjusted) with the following factors: Hemisphere (left/right), Lateral/Medial (two levels), and Anterior/Posterior (six levels: frontal, frontal-temporal, temporal, central, parietal, and occipital). The Lateral/Medial levels were established by computing mean measurements for the two most lateral columns of electrodes combined (e.g., F5, F7) and for the two most medial columns combined (e.g., F3, F1), as sketched below. This was the case at all rows except the most posterior, where the measurements for O1/O2 alone were included, because midline electrodes had been excluded from the analyses (see Figure 2). Rhyme (rhyme/nonrhyme) was included as a within-subject factor in the analyses of responses to S2s. Separate group analyses were first carried out to establish the pattern of effects in each group. Group differences were then explored by including Group (deaf/hearing) as a between-subject factor in a mixed-model ANOVA.
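The lateral/medial consolidation amounts to averaging electrode columns within each hemisphere row. A minimal sketch follows, using the electrode labels from the examples above; the amplitude values and the helper function are ours, for illustration only.

```python
# Sketch of the lateral/medial consolidation; `amps` maps electrode name ->
# mean amplitude (µV) in the analysis window. Toy values for illustration.
import numpy as np

amps = {"F7": -2.1, "F5": -1.8, "F3": -1.2, "F1": -0.9}

def cell_mean(amps, electrodes):
    """Mean amplitude across the electrodes that make up one analysis cell."""
    return float(np.mean([amps[e] for e in electrodes]))

# e.g., the frontal row of the left hemisphere:
frontal_left_lateral = cell_mean(amps, ["F7", "F5"])  # two most lateral columns
frontal_left_medial  = cell_mean(amps, ["F3", "F1"])  # two most medial columns
```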

RESULTS

Behavioral Data

A mixed-model ANOVA [Group (deaf/hearing) × Rhyme (rhyme/nonrhyme)] indicated better performance by hearing than deaf participants [main effect of Group, F(1, 28) = 194, p < .001], better performance on nonrhyming than rhyming trials [main effect of Rhyme, F(1, 28) = 12.8, p < .005], and a significant Group × Rhyme interaction, F(1, 28) = 14, p < .005. Simple main effects showed that deaf participants were significantly poorer on rhyming than nonrhyming trials, t(14) = −3.75, p < .005, whereas hearing participants showed no difference between trial types (Table 2).

Table 2. 

Mean Accuracy (SD) and [Range] on Rhyming and Nonrhyming Trials by All Deaf and Hearing Participants (n = 15 Each Group) and by Deaf Subgroup Performing Above Chance on Rhyme Task and Hearing Controls Matched on Age, NVIQ, and Gender (n = 9 Each Group)


Group                       Rhyme (Max = 50)     Nonrhyme (Max = 50)   Total Correct (Max = 100)
Deaf (n = 15)               24.7 (8.5) [9–38]    36.9 (7.1) [23–47]    61.7 (9.2) [48–78]
Hearing (n = 15)            48.5 (1.1) [47–50]   48.2 (2.7) [40–50]    96.7 (3.1) [88–100]
Deaf–above chance (n = 9)   29.8 (5.5) [22–38]   37.7 (7.2) [23–47]    67.4 (7.1) [59–78]
Hearing (n = 9)             48.6 (1.1) [47–50]   48.8 (1.6) [40–50]    97.3 (2.3) [93–100]

Performance on the rhyme task by the deaf group as a whole was significantly above chance (test value = 50; t(14) = 4.89, p < .001). However, at the individual level, only 9 of the 15 deaf participants could be considered to be performing above chance level (>58% correct; p < .05). Mean accuracy for this subgroup was 67% (SD = 7.1; range = 59–78%). Given that “correct” responses from participants performing at chance level may in many cases have been guesses, only the deaf participants who performed above chance were included in the ERP analyses. These participants were matched to nine hearing participants on sex (3 men in each group), age, NVIQ, and education level (ps > .1). However, as in the main analyses, this subgroup of hearing participants were better readers than the deaf subgroup, t(16) = −2.81, p < .02 (see Table 3 for subgroup characteristics). Audiograms were available to confirm level of deafness for six of the nine deaf participants.
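The “>58% correct” criterion is consistent with a one-tailed exact binomial test against chance (50%) on 100 trials; the quick check below is our reconstruction of that criterion, not code from the paper.

```python
# Checking the above-chance criterion with an exact binomial test
# (one-tailed, chance = 50%, 100 trials); our reconstruction.
from scipy.stats import binomtest

for k in (58, 59):
    p = binomtest(k, n=100, p=0.5, alternative="greater").pvalue
    print(f"{k}/100 correct: p = {p:.3f}")
# 58/100 correct: p = 0.067  (not significant)
# 59/100 correct: p = 0.044  (p < .05, i.e., ">58% correct")
```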

Table 3. 

Characteristics of Deaf Participants Who Performed Above Chance on the Rhyme Task and Their Hearing Control Group: Mean (SD) [Range] of Age, NVIQ (Shortened Version of Raven's Advanced Progressive Matrices [APM]), Reading Level (Gates–MacGinitie Reading Test), and Education Level


Group             Age (Months)          NVIQ (APM) (Max = 12)   Reading Raw Score* (Max = 93)   Education Level (Max = 7)
Deaf (n = 9)      403 (127) [255–677]   6.4 (3.1) [3–12]        81.1 (9.4) [63–93]              5.7 (3.1) [4–7]
Hearing (n = 9)   349 (105) [231–549]   6.7 (3.4) [3–10]        90.1 (1.8) [87–92]              5.6 (0.73) [5–7]

Deaf and hearing participants did not differ on sex (3 men in each group), age, NVIQ, or education level (ps > .1). However, hearing participants were better readers than deaf participants (*p < .02).

ANOVA of the rhyme performance data in the two subgroups showed better performance by hearing than deaf participants [main effect of Group, F(1, 16) = 147, p < .001] and better performance on nonrhyming than rhyming trials [main effect of Rhyme, F(1, 16) = 4.97, p < .05; see Table 2]. However, the Group × Rhyme interaction just failed to reach significance, F(1, 16) = 4.44, p = .051; the trend was for a greater difference between groups on rhyme trials than on nonrhyme trials. In neither the deaf nor the hearing group did rhyme performance correlate with reading level or NVIQ (all ps > .1). Such a relationship would not be expected for hearing participants because they were at ceiling on the task. With regard to the deaf adults, the lack of a relationship in the current data set may reflect the small sample size. However, the previous literature is mixed regarding the relationship between reading and speech-based phonological processing in deaf people, with null results also found in larger samples (Mayberry, del Giudice, & Lieberman, 2011).

ERP Data

Only data from the nine deaf participants who scored significantly above chance level on the rhyme judgment task were included in the ERP analyses. They were contrasted with nine hearing participants matched on sex, age, NVIQ, and education level (ps > .1). Because only data from correct trials were included, fewer trials were included for the deaf participants (rhyming trials: n = 183, mean per participant = 20.3; nonrhyming trials: n = 211, mean per participant = 23.4) than for the hearing participants (rhyming trials: n = 342, mean per participant = 37.6; nonrhyming trials: n = 349, mean per participant = 39.1). Despite this, the standard deviation of the number of trials included was similar across groups and conditions (deaf, rhyming: SD = 5.8, range = 13–29; nonrhyming: SD = 6.8, range = 14–32; hearing, rhyming: SD = 8.5, range = 23–48; nonrhyming: SD = 7.9, range = 27–48). Therefore, the deaf data do not appear to be noisier than those of the hearing participants.

As predicted, S1s elicited a slow negative deflection (CNV; mean amplitude measured 600–1200 msec), and targets elicited a negative deflection peaking around 450 msec (N450; mean amplitude measured 300–600 msec). Furthermore, following inspection of the ERP plots for deaf individuals, and in line with our predictions, the mean amplitude from 700 to 1200 msec post-target onset was also analyzed.

CNV in Response to S1s (600–1200 msec)

A long-lasting slow negative wave (the CNV) was observed between 600 and 1200 msec following S1 onset. In both groups, the CNV at lateral electrodes was more negative at posterior than anterior sites, whereas at medial electrodes, the CNV was most negative at the most anterior and posterior sites compared with central electrodes [Lateral/Medial × Anterior/Posterior: hearing, F(5, 40) = 10.55, p < .001; deaf, F(5, 40) = 3.82, p < .05]. In hearing participants, as predicted, the CNV was more negative over the left than the right hemisphere, F(1, 8) = 12.13, p < .01, especially at lateral sites [Hemisphere × Lateral/Medial, F(1, 8) = 7.18, p < .05]. There was also a trend toward this asymmetry being largest at anterior sites, as predicted on the basis of previous studies [Hemisphere × Lateral/Medial × Anterior/Posterior, F(5, 40) = 3.68, p = .062]. In the deaf group, there was also a trend toward the CNV being more negative over the left than the right hemisphere [Hemisphere, F(1, 8) = 3.92, p = .083]. No interactions involving Group as a factor approached significance (ps > .1). The similarity of the waveforms between deaf and hearing groups at frontal sites is evident in Figure 3.

Figure 3. 

Left and right hemisphere waves are plotted together over frontal electrodes. Plots show ERP responses in (A) deaf (performing above chance level) and (B) hearing subgroups (n = 9 each group) to the first stimulus (S1s) collapsed across rhyming and nonrhyming trials. There was no significant difference between subgroups in hemispheric asymmetry of the CNV.

Distribution and Amplitude of N450 Modulation in Response to Targets

Replicating previous studies, the N450 in hearing participants was more negative to nonrhyming than rhyming target words, and this effect was largest over the right hemisphere [Rhyme × Hemisphere, F(1, 8) = 20.72, p < .005], particularly at medial and posterior sites [Rhyme × Lateral/Medial, F(1, 8) = 8.94, p < .05; Rhyme × Hemisphere × Lateral/Medial, F(1, 8) = 28.34, p < .005; Rhyme × Hemisphere × Lateral/Medial × Anterior/Posterior, F(5, 40) = 4.17, p < .05; see Figure 4]. The deaf group showed the same effect of greater negativity to nonrhyming than rhyming words [Rhyme, F(1, 8) = 9.27, p < .05]. As in the hearing group, this effect was largest at posterior, medial sites [Rhyme × Lateral/Medial, F(1, 8) = 7.25, p < .05; Rhyme × Lateral/Medial × Anterior/Posterior, F(5, 40) = 4.58, p < .05; see Figure 4]. However, unlike in the hearing group, no interactions involving Hemisphere approached significance in the deaf group (all ps > .1). Despite this, the difference between deaf and hearing participants in the laterality of the N450 modulation just failed to reach significance [Rhyme × Hemisphere × Lateral/Medial × Group, F(1, 16) = 3.39, p = .08; see Figure 4].

Figure 4. 

Right, medial parietal sites are displayed. Plots show ERP responses in (A) deaf (performing above chance level) and (B) hearing subgroups (n = 9 each group) to the second stimulus (S2s). Enhanced negativity of the waveform between 300 and 600 msec in response to nonrhyming words is visible in both groups. The difference between subgroups within this time window failed to reach significance (p = .08).

To examine this pattern further, we contrasted the deaf and hearing data from all participants (n = 15 per group; correct trials only). This overall group contrast showed that the effect of rhyme was significantly greater in the hearing than in the deaf group over right, medial, parietal electrodes [Rhyme × Hemisphere × Lateral/Medial × Group, F(1, 28) = 4.6, p < .05]. Because participants with extremely variable accuracy levels were included in this analysis, we also tested for correlations between the magnitude of the N450 modulation over right, medial, posterior sites and the behavioral measures (rhyme accuracy, reading, NVIQ). No significant correlations were evident in either the deaf or hearing subgroups, in the complete groups, or when deaf and hearing participants were combined (all ps > .1).

Onset Latency of N450 Modulation in Response to Targets

To assess whether the effect of rhyme on the N450 began at different times in the deaf and hearing groups, the onset latency of the N450 modulation was established for each individual, taking into account that individual's overall N450 amplitude. The amplitude difference between rhyming and nonrhyming waves was computed in 20-msec bins at the right posterior medial electrodes, where the response was largest in hearing participants (C2, C4, CP2, CP4, P2, P4). The time window showing the largest difference between rhyming and nonrhyming responses (nonrhymes more negative than rhymes) was identified for each individual, and 40% of this value was set as that person's cutoff. We then identified 20-msec bins in which the amplitude difference between rhyming and nonrhyming waves exceeded this cutoff and defined the onset of the N450 modulation as the start of the first of three consecutive such bins. Using these criteria, there was one deaf and one hearing participant for whom the onset of the N450 modulation could not be identified. On this measure, there was no significant difference in the onset latency of the N450 modulation between deaf (mean = 425 msec; SD = 151 msec; range = 260–600 msec) and hearing participants [mean = 375 msec; SD = 79 msec; range = 240–480 msec; t(14) = .83, p > .1].
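The onset-latency procedure can be summarized in a short sketch. The code below is our reading of the description above, assuming the rhyme/nonrhyme difference wave has already been averaged over C2, C4, CP2, CP4, P2, and P4, reduced to consecutive 20-msec bin means starting at target onset, and signed so that negative values mean nonrhymes are more negative than rhymes.

```python
# Sketch of the onset-latency procedure; our reconstruction, not the authors' code.
import numpy as np

def n450_onset_ms(diff_bins, bin_ms=20):
    """diff_bins: mean (nonrhyme - rhyme) amplitude per post-target bin;
    negative values mean nonrhymes are more negative than rhymes."""
    peak = diff_bins.min()               # bin with the largest rhyme/nonrhyme difference
    cutoff = 0.4 * peak                  # 40% of that individual's peak difference
    above = diff_bins <= cutoff          # bins at least as negative as the cutoff
    for i in range(len(above) - 2):
        if above[i] and above[i + 1] and above[i + 2]:   # three consecutive bins
            return i * bin_ms            # onset = start of the first such bin
    return None                          # onset unidentifiable (as for two participants here)

diff = np.array([0.1, -0.2, -0.5, -1.4, -1.6, -1.8, -1.2, -0.4])  # toy bin means
print(n450_onset_ms(diff))  # 60, i.e., onset at the start of the fourth 20-msec bin
```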

Late, Negative Going Response to Targets (700–1200 msec)

The greater negativity to nonrhymes than rhymes appeared to persist in the waveforms of deaf participants, in the form of a negative-going response lasting to the end of the recording epoch (1200 msec; see Figure 4). There were no significant effects involving rhyme within this time window in the hearing group (all ps > .1). In contrast, the deaf group showed a significant effect of rhyme in this window [Rhyme, F(1, 8) = 5.39, p < .05], which was largest over medial, posterior sites [Rhyme × Lateral/Medial × Anterior/Posterior, F(5, 40) = 4.34, p < .05; see Figure 4]. However, the interaction involving Group just failed to reach significance [Rhyme × Lateral/Medial × Anterior/Posterior × Group, F(5, 80) = 2.90, p = .068].

To examine this pattern further, we contrasted the deaf and hearing data from all participants (n = 15 per group; correct trials only). There were no significant interactions involving Group within this time window in the whole-group analysis (all ps > .1). The correlation between the degree of modulation of the waveform at bilateral posterior, medial sites and rhyme accuracy was positive but not significant (r = .42, n = 15, p = .12).

DISCUSSION

In prior work using fMRI, we established that the neural networks activated during rhyme judgments are very similar in deaf and hearing participants. In the current study, we used the complementary approach of EEG to further determine the identity and time course of the neural systems supporting rhyme processing in deaf and hearing adults. As predicted, rhyme judgment performance by the deaf group was above chance; however, their performance as a group was significantly poorer than that of hearing controls. At the individual level, only 9 of the 15 deaf participants performed above chance. Given concern regarding the nature of ERP responses in the six deaf participants who did not perform above chance level, we chose to report only ERP data from the correct trials of the nine deaf participants who performed above chance. This extremely conservative approach ensures that the trials included in the analyses were unlikely to be guesses.

In hearing participants, we replicated the two well-established ERP effects observed during a written word rhyme task (Rugg, 1984b). First, the late slow negative-going wave in response to S1s, the CNV (600–1200 msec), was more negative over the left than the right hemisphere, especially at lateral, anterior sites. However, the involvement of the anterior/posterior factor in this interaction just failed to reach significance, probably because of our small sample size. The distribution of the CNV is influenced by task: It is greater at right temporal sites when participants are required to make letter case judgments (Rugg, 1984b) and greater over the right than the left hemisphere during face matching (Barrett, Rugg, & Perrett, 1988). In the current study, as in previous studies of rhyming, the CNV was more negative over the left than the right hemisphere. Whether this asymmetry reflects rehearsal, as originally proposed by Rugg (1984b), allocation of resources for the upcoming matching task (Grossi et al., 2001), or some other process remains unclear (Coch, Hart, et al., 2008). The second well-established ERP effect we replicated in hearing participants was the enhanced negativity of the N450 in response to nonrhyming targets. Convergent with previous studies, the effect was largest over right, medial, parietal sites. The enhanced N450 is thought to reflect the processing of a word that is phonologically incongruent with the preceding context (i.e., nonrhyming). There is growing evidence that this reflects strategic postlexical processing rather than a simple phonological priming effect (Coch, Hart, et al., 2008). Replication of these two rhyme-related ERP effects in hearing participants suggests that our rhyming paradigm and stimuli were valid.

Overall, the pattern of neural activity reflected in the ERPs of the deaf participants who performed above chance level was remarkably similar in polarity, distribution, and timing to that of the hearing group, despite the deaf participants' much poorer rhyming and reading abilities. In our analyses of the CNV in response to S1s, no interactions involving Group approached significance. With regard to the N450, the waveform was modulated by the rhyme status of the target words in the same way in deaf as in hearing participants: Both groups showed greater negativity to nonrhyming than rhyming words. In both groups, the effect was greatest over medial, posterior sites. Furthermore, the onset latency of the N450 modulation did not differ significantly between groups.

Critically, the modulation of the N450 by rhyme status in deaf participants must be a phonological effect, because phonology is the only dimension on which the rhyming and nonrhyming trials differed. Unlike our previous fMRI study (MacSweeney et al., 2008), here we used words rather than pictures. This enabled a direct comparison with previous ERP studies of written word rhyming and also allowed a particularly rigorous control of orthographic overlap, which is much more difficult when constrained to picture stimuli. An important feature of the current design was that orthographic overlap was well matched between rhyming (e.g., float–quote) and nonrhyming word pairs (e.g., sport–quote) and was therefore not informative for the rhyme decision. Thus, the enhanced negativity of the N450 to nonrhyming words in deaf people cannot be attributed to orthographic incongruity. Clearly, the deaf participants were sensitive, to some degree, to the phonological overlap between words. As discussed in the Introduction, this sensitivity to rhyme in our deaf participants is likely based on information about phonological structure derived from speechreading and the motoric components of speech (MacSweeney et al., 2008, 2009; Aparicio et al., 2007). Despite being born profoundly deaf and having vastly different experience of spoken language from that of hearing people, deaf participants showed largely similar neural systems supporting rhyme processing as measured by electrophysiology, and their ERPs were modulated in a similar way by whether or not words were presented in a rhyming context. This finding supports our previous fMRI work, in which we demonstrated that the same neural network is recruited by deaf and hearing participants during a picture rhyme judgment task (MacSweeney et al., 2008, 2009).

Although the neural systems engaged by the two groups were very similar, there were indications in the current data, just as in our previous fMRI study, of subtle differences between deaf and hearing participants in their neural responses during rhyme judgments. In both deaf and hearing groups, the modulation of the N450 in response to targets was largest at posterior, medial sites. This effect was right lateralized in the hearing group. It appeared to be less right lateralized in the deaf group (Figure 4), but the interaction involving Group just failed to reach significance. An indication that this may nevertheless be a real group difference, yet not linked to task performance, comes from the analysis of data from all participants, including those who did not perform above chance level, which showed a significant interaction. Such a difference is consistent with data from a visual hemifield study by D'Hondt and Leybaert (2003). In that study, deaf and hearing participants were required to make rhyme judgments in response to sequentially presented words. Hearing participants showed a right visual field (left hemisphere) advantage for “yes” (rhyme) responses but not for “no” responses (see also Khateb et al., 2007). In contrast, deaf participants, who were matched to hearing participants on reading level and rhyme performance, showed no laterality differences for either response type during the rhyme task. One possible interpretation of reduced hemispheric asymmetry in deaf than in hearing participants during the matching stage of a rhyme task is that it reflects the influence of late language acquisition. All deaf participants in the current study were native signers of ASL and learned English as a second language. Phonological processing is thought to be particularly vulnerable to late language acquisition (Edwards & Zampini, 2008).

It should be noted that previous fMRI studies of rhyming have not reported reduced hemispheric specialization in deaf compared with hearing participants (MacSweeney et al., 2008, 2009; Aparicio et al., 2007). It is possible that the laterality effect emerges only during the stimulus matching stage. Previous fMRI studies of rhyming in deaf participants have presented the stimulus pairs simultaneously rather than sequentially, and these stages have therefore not been studied separately (MacSweeney et al., 2008, 2009; Aparicio et al., 2007).

On the basis of previous studies of dyslexic adults (Russeler et al., 2007), we predicted a longer duration of the N450 modulation in deaf than in hearing participants. Inspection of the group averages showed that there did indeed appear to be greater negativity to nonrhymes than rhymes in deaf participants from 700 msec (the end of the N450 analysis period) to the end of the recording epoch (1200 msec). The statistical analyses confirmed a significant rhyme effect in this late window in the deaf but not the hearing group. However, the Group interaction just failed to reach significance. Interestingly, in our analyses including all participants, this effect did not approach significance. Thus, the modulation of the EEG by rhyme status in this late time window in the deaf group may not be a real effect. However, given that the effect was predicted on the basis of a previous study of other participants with phonological deficits, namely adult dyslexics (Russeler et al., 2007), it would be interesting to examine this later time window in a larger group of congenitally deaf participants who perform above chance on a rhyme task. If present, this pattern may represent the recruitment of additional processes when participants with phonological deficits (deaf and dyslexic participants) perform the rhyme task accurately, relative to normally reading hearing controls. Further studies are needed to test this hypothesis.

Conclusion

We have shown that the modulation of the ERP waveform in a rhyme judgment task is similar in deaf and hearing people. Despite differences in rhyme performance, waveforms in both groups were significantly more negative in response to nonrhyming than rhyming targets, and the onset latency of this modulation did not differ between deaf and hearing participants. The same target words were presented in the rhyme and nonrhyme conditions, and the S1s in both conditions were well matched on orthographic overlap and other psycholinguistic variables. Thus, the similarity between ERP modulations in the deaf and hearing groups must reflect sensitivity to the phonological structure of speech in the deaf group, even in the absence of auditory input. These data lend support to the suggestion that phonological processing is to a large degree amodal or supramodal. Nevertheless, trends in the data suggest that the modulation of the waveform by rhyme status was not identical in the deaf and hearing groups. We propose that the trend toward reduced hemispheric lateralization of the N450 modulation in deaf participants reflects language factors linked to deafness, for example, late age of English acquisition, rather than task proficiency. In contrast, the data suggest that the longer-lasting modulation by rhyme status requires further exploration and that this effect may relate to rhyme proficiency, specifically in those with phonological processing deficits.

The subtle patterns in our data highlight the complex interaction between the many factors involved in the linguistic profile of a person born deaf, including age of language acquisition, language proficiency, and bilingual status. Our current data do not allow us to dissociate the influence of these factors on the neural systems supporting language. They do, however, provide clear predictions regarding where differences may and may not be expected in future contrasts of deaf and hearing participants matched on phonological task performance and reading level. Such studies are clearly needed to further tease apart the tightly entwined influences of late spoken language acquisition and spoken language proficiency on the neural systems supporting reading in people born deaf.

Acknowledgments

This research was funded by a Spencer/National Academy of Education postdoctoral fellowship awarded to M. MacSweeney and by Grant R01 DC000128 from National Institutes of Health, National Institute on Deafness and other Communication Disorders to H. Neville. M. MacSweeney is currently supported by a Career Development Fellowship from the Wellcome Trust. We would like to thank Lisa Sanders, Sarah Hafer, Eric Pakulak, Annika Andersson, Brittni Lauinger, Giordana Grossi, Yoshiko Yamada, Laura Batterink, and all deaf and hearing participants for their help with this study.

Reprint requests should be sent to Mairéad MacSweeney, Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, WC1N 3AR, or via e-mail: m.macsweeney@ucl.ac.uk.

REFERENCES

Ackerman, P. T., Dykman, R. A., & Oglesby, D. M. (1994). Visual event-related potentials of dyslexic children to rhyming and nonrhyming stimuli. Journal of Clinical and Experimental Neuropsychology, 16, 138–154.
Aparicio, M., Gounot, D., Demont, E., & Metz-Lutz, M. N. (2007). Phonological processing in relation to reading: An fMRI study in deaf readers. Neuroimage, 35, 1303–1316.
Barrett, S. E., & Rugg, M. D. (1989). Asymmetries in event-related potentials during rhyme-matching: Confirmation of the null effects of handedness. Neuropsychologia, 27, 539–548.
Barrett, S. E., & Rugg, M. D. (1990). Event-related potentials and the semantic matching of pictures. Brain and Cognition, 14, 201–212.
Barrett, S. E., Rugg, M. D., & Perrett, D. I. (1988). Event-related potentials and the matching of familiar and unfamiliar faces. Neuropsychologia, 26, 105–117.
Bors, D. A., & Stokes, T. L. (1998). Raven's Advanced Progressive Matrices: Norms for first-year university students and the development of a short form. Educational and Psychological Measurement, 58, 382–398.
Campbell, R., & Wright, H. (1988). Deafness, spelling and rhyme: How spelling supports written word and picture rhyming skills in deaf subjects. Quarterly Journal of Experimental Psychology A, 40, 771–788.
Charlier, B. L., & Leybaert, J. (2000). The rhyming skills of deaf children educated with phonetically augmented speechreading. Quarterly Journal of Experimental Psychology: Section A-Human Experimental Psychology, 53, 349–375.
Coch, D., George, E., & Berger, N. (2008). The case of letter rhyming: An ERP study. Psychophysiology, 45, 949–956.
Coch, D., Hart, T., & Mitra, P. (2008). Three kinds of rhymes: An ERP study. Brain and Language, 104, 230–243.
Coltheart, M. (1981). The MRC psycholinguistic database. Quarterly Journal of Experimental Psychology, 33A, 497–505.
D'Hondt, M., & Leybaert, J. (2003). Lateralization effects during semantic and rhyme judgement tasks in deaf and hearing subjects. Brain and Language, 87, 227–240.
Davis, C. J. (2010). The spatial coding model of visual word identification. Psychological Review, 117, 713–758.
Dodd, B. (1980). The spelling abilities of profoundly pre-lingually deaf children. In U. Frith (Ed.), Cognitive processes in spelling (pp. 423–440). London: Academic Press.
Dyer, A., MacSweeney, M., Szczerbinski, M., Green, L., & Campbell, R. (2003). Predictors of reading delay in deaf adolescents: The relative contributions of rapid automatized naming speed and phonological awareness and decoding. The Journal of Deaf Studies and Deaf Education, 8, 215–229.
Edwards, J. G. H., & Zampini, M. L. (2008). Phonology and second language acquisition. Amsterdam: John Benjamins Publishing Co.
Grossi, G., Coch, D., Coffey-Corina, S., Holcomb, P. J., & Neville, H. J. (2001). Phonological processing in visual rhyming: A developmental ERP study. Journal of Cognitive Neuroscience, 13, 610–625.
James, D., Rajput, K., Brinton, J., & Goswami, U. (2009). Orthographic influences, vocabulary development, and phonological awareness in deaf children who use cochlear implants. Applied Psycholinguistics, 30, 659–684.
Jednorog, K., Marchewka, A., Tacikowski, P., & Grabowska, A. (2010). Implicit phonological and semantic processing in children with developmental dyslexia: Evidence from event-related potentials. Neuropsychologia, 48, 2447–2457.
Khateb, A., Pegna, A. J., Landis, T., Michel, C. M., Brunet, D., Seghier, M. L., et al. (2007). Rhyme processing in the brain: An ERP mapping study. International Journal of Psychophysiology, 63, 240–250.
Kucera, H., & Francis, W. (1967). Computational analysis of present-day American English. Providence, RI: Brown University Press.
Kutas, M., & Hillyard, S. A. (1980a). Event-related brain potentials to semantically inappropriate and surprisingly large words. Biological Psychology, 11, 99–116.
Kutas, M., & Hillyard, S. A. (1980b). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207, 203–205.
Kutas, M., & Hillyard, S. A. (1982). The lateral distribution of event-related potentials during sentence processing. Neuropsychologia, 20, 579–590.
Leybaert, J. (1998). Phonological representations in deaf children: The importance of early linguistic experience. Scandinavian Journal of Psychology, 39, 169–173.
MacGinitie, W. H., MacGinitie, R. K., Maria, K., & Dreyer, L. G. (2000). Gates–MacGinitie reading test (4th ed., Level 4, Form S). Itasca, IL: The Riverside Publishing Company.
MacSweeney, M., Brammer, M. J., Waters, D., & Goswami, U. (2009). Enhanced activation of the left inferior frontal gyrus in deaf and dyslexic adults during rhyming. Brain, 132, 1928–1940.
MacSweeney, M., Waters, D., Brammer, M. J., Woll, B., & Goswami, U. (2008). Phonological processing in deaf signers and the impact of age of first language acquisition. Neuroimage, 40, 1369–1379.
Mayberry, R. I., del Giudice, A. A., & Lieberman, A. M. (2011). Reading achievement in relation to phonological coding and awareness in deaf readers: A meta-analysis. Journal of Deaf Studies and Deaf Education, 16, 164–188.
McPherson, W. B., Ackerman, P. T., Holcomb, P. J., & Dykman, R. A. (1998). Event-related brain potentials elicited during phonological processing differentiate subgroups of reading disabled adolescents. Brain and Language, 62, 163–185.
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113.
Perez-Abalo, M. C., Rodriguez, R., Bobes, M. A., Gutierrez, J., & Valdes-Sosa, M. (1994). Brain potentials and the availability of semantic and phonological codes over time. Neuroreport, 5, 2173–2177.
Rugg, M. D. (1984a). Event-related potentials and the phonological processing of words and non-words. Neuropsychologia, 22, 435–443.
Rugg, M. D. (1984b). Event-related potentials in phonological matching tasks. Brain and Language, 23, 225–240.
Rugg, M. D., & Barrett, S. E. (1987). Event-related potentials and the interaction between orthographic and phonological information in a rhyme-judgment task. Brain and Language, 32, 336–361.
Rugg, M. D., Lines, C. R., & Milner, A. D. (1984). Visual evoked potentials to lateralized visual stimuli and the measurement of interhemisphere transmission time. Neuropsychologia, 22, 215–225.
Russeler, J., Becker, P., Johannes, S., & Munte, T. F. (2007). Semantic, syntactic, and phonological processing of written words in adult developmental dyslexic readers: An event-related brain potential study. BMC Neuroscience, 8, 52. doi:10.1186/1471-2202-8-52.
Sterne, A., & Goswami, U. (2000). Phonological awareness of syllables, rhymes, and phonemes in deaf children. Journal of Child Psychology and Psychiatry, 41, 609–625.
Walter, W. G., Cooper, R., Aldridge, V. J., McCallum, W. C., & Winter, A. L. (1964). Contingent negative variation: An electric sign of sensori-motor association and expectancy in the human brain. Nature, 203, 380–384.
Weber-Fox, C., Spencer, R., Cuadrado, E., & Smith, A. (2003). Development of neural processes mediating rhyme judgments: Phonological and orthographic interactions. Developmental Psychobiology, 43, 128–145.
Weber-Fox, C., Spencer, R. M., Spruill, J. E., III, & Smith, A. (2004). Phonologic processing in adults who stutter: Electrophysiological and behavioral evidence. Journal of Speech, Language, and Hearing Research, 47, 1244–1258.