Abstract

The results of this magnetoencephalography study challenge two long-standing assumptions regarding the brain mechanisms of language processing: First, that linguistic processing proper follows sensory feature processing effected by bilateral activation of the primary sensory cortices that lasts about 100 msec from stimulus onset. Second, that subsequent linguistic processing is effected by left hemisphere networks outside the primary sensory areas, including Broca's and Wernicke's association cortices. Here we present evidence that linguistic analysis begins almost synchronously with sensory, prelinguistic verbal input analysis and that the primary cortices are also engaged in these linguistic analyses and become, consequently, part of the left hemisphere language network during language tasks. These findings call for extensive revision of our conception of linguistic processing in the brain.

INTRODUCTION

Functional neuroimaging, typically fMRI, has supplied evidence, in addition to that derived from observing the effects of focal lesions, that several regions of the left hemisphere may contain the neuronal networks of phonological and semantic operations. These include the regions comprising Wernicke's area, namely, the posterior part of the superior temporal gyrus (pSTG; see, e.g., Halgren et al., 2002; Poldrack et al., 2001; Scott, Blank, Rosen, & Wise, 2000), the posterior part of the middle temporal gyrus (pMTG; e.g., Rimol, Specht, Weis, Savoy, & Hugdahl, 2005; Schulz, Varga, Jeffires, Ludlow, & Braun, 2005; Pammer et al., 2004), the supramarginal gyrus (SMG; e.g., Bowyer et al., 2005; Demonet, Price, Wise, & Frackowiak, 1994), the angular gyrus (AG; e.g., Bemis & Pylkkanen, 2013; Binder, Frost, Hammeke, Rao, & Cox, 1996), and, especially for reading, the posterior part of the basal temporal cortex including the posterior part of the inferior temporal gyrus (pITG) and the lingual and fusiform gyri (e.g., Rumsey et al., 1997; Price, Wise, & Frackowiak, 1996; Pugh et al., 1996), as well as the regions comprising Broca's area, that is, the pars triangularis and pars opercularis (e.g., Friederici, Wang, Herrmann, Maess, & Oertel, 2000; Foundas, Eure, Luevano, & Weinberger, 1998). Given that the neuronal activation captured by functional neuroimaging procedures is typically bilateral, the involvement of these left hemisphere structures in language is assessed on the basis of the relative extent of their activation as compared with that of the homotopic areas in the right hemisphere, expressed as a laterality index. This index is typically derived by subtracting from the amount of activation in a given left hemisphere region that of its homotopic region in the right hemisphere and dividing the residual by their sum. Thus, positive values indicate greater left activation, negative values indicate greater right activation, and a zero value indicates bilaterally symmetric activation (Papanicolaou et al., 2004; Springer et al., 1999; Simos, Breier, Zouridakis, & Papanicolaou, 1998; Gur et al., 1997; Binder, Swanson, et al., 1996; Henry et al., 1990).
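
As a concrete illustration, the following minimal Python sketch computes such an index from two hypothetical activation values; the function name and inputs are illustrative and not part of any specific analysis pipeline.

    def laterality_index(left_activation, right_activation):
        # Positive values: greater left activation; negative: greater right activation;
        # zero: bilaterally symmetric activation.
        return (left_activation - right_activation) / (left_activation + right_activation)

    # Example: a left hemisphere ROI twice as active as its right homotopic region
    print(laterality_index(2.0, 1.0))  # 0.33 -> left-lateralized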

In addition, it has been commonly assumed that the linguistic operations (phonological, semantic, and syntactic) mediated by these structures receive as their input the output of sensory analyzers located in the primary sensory cortices, whether the transverse temporal gyrus (TTG) or the primary visual cortex (V1), which, activated bilaterally, perform the initial analysis of the auditory and visual signals and extract their nonlinguistic features, such as the spectral composition of auditory inputs (see, e.g., Hickok, 2012; Hickok & Poeppel, 2007). The bilaterally symmetrical activation of the primary sensory cortices is thought to be reflected in the N100 component of the ERP (e.g., Mayhew, Dirckx, Niazy, Iannetti, & Wise, 2010; Spitz, Emerson, & Pedley, 1986) or the N100m component of event-related magnetic field responses to auditory and visual verbal (as well as nonverbal) stimuli (e.g., Salmelin, 2007; Kuriki, Isahai, & Ohtsuka, 2005; Verkindt, Bertrand, Perrin, Echallier, & Pernier, 1995; Baumann, Rogers, Papanicolaou, & Saydjari, 1990). It is therefore assumed that the duration of the sensory operations is the same as that of these early or “sensory” components that peak at approximately 100 msec from stimulus onset, that linguistic analysis proper is carried out outside the primary projection areas, namely, in the aforementioned language-related regions, and that it follows acoustic or visual feature extraction carried out in TTG and V1.

In fact, assessment of hemispheric dominance for language is based on lateralized activation beyond the latency of the N100m component (e.g., Passaro et al., 2011; Tavabi, Embick, & Roberts, 2011; Mohamed et al., 2008; Kamada et al., 2007; Papanicolaou et al., 2004, 2006; Bowyer et al., 2004; Hirata et al., 2004; Szymanski et al., 2001). Semantic operations in particular are thought to occur at about 400 msec poststimulus onset, as indexed by the negative ERP component, the N400 (e.g., Kutas & Federmeier, 2011), as well as its magnetoencephalography (MEG) counterpart (e.g., Simos et al., 1998). Other studies, however, mostly electrophysiological and mostly addressing the distribution of concept-specific (rather than linguistic operation-specific) neuronal circuitry, have provided indications that linguistic processing may occur as early as 150–200 msec (Hoenig, Sim, Bochev, Herrnberger, & Kiefer, 2008; Penolazzi, Hauk, & Pulvermüller, 2006; Hinojosa et al., 2001; Martin-Loeches, Hinojosa, Gómez-Jarabo, & Rubia, 2001; Pulvermüller, Lutzenberger, & Preissl, 1999; Rinne et al., 1999; Pulvermüller, Lutzenberger, & Birbaumer, 1995; Brown & Lehmann, 1980; Wood, Goff, & Day, 1971). A more recent EEG study (Egorova, Shtyrov, & Pulvermuller, 2013) has shown that semantic analyses in the context of a naming and a request speech task are detected around 120 msec in frontocentral areas, including the left inferior frontal gyrus and the right temporal pole, whereas an MEG study (Moseley, Pulvermüller, & Shtyrov, 2013) localized activation associated with different semantic word categories, peaking at about 150 msec, to the parieto-occipital and premotor cortex, bilaterally. Moreover, using MEG, MacGregor, Pulvermüller, van Casteren, and Shtyrov (2012) demonstrated bilateral engagement of perisylvian regions during lexical processing approximately 50 msec following the onset of acoustic stimuli, further lending support to the notion that linguistic operations may occur earlier than previously expected.

Yet, no study has thus far directly addressed the questions of where in the brain early linguistic processing takes place and whether there is a temporal ordering in the engagement of these language-related regions, as is generally assumed.

To address these issues, we extended previous research by studying normal adult participants engaged in nonlinguistic tasks involving acoustic and visual nonlinguistic operations, presumably nonlateralized, as well as in aural and written word processing tasks involving presumably lateralized phonological and semantic operations. The MEG responses of those participants during the two nonlinguistic and the two linguistic tasks were recorded, and their sources in regions universally accepted as language relevant (see above) as well as other regions not known to be involved in linguistic operations were estimated.

Following established practice, we used laterality indices derived in the manner described above as a means of identifying areas engaged in language operations (as opposed to areas containing semantic features of specific word categories, which are presumably bilateral, e.g., Chen, Davis, Pulvermuller, & Hauk, 2015; Moseley et al., 2013) and distinguishing them from areas that do not contain neuronal networks of such operations. Then on the basis of these laterality indices, we addressed the following specific questions: How early do phonological and semantic operations occur; is the acoustic and primary visual cortex involved in such operations in the presence of linguistic stimuli and can the relative onset of the different processing operations in the various component areas or hubs of the language network be resolved using MEG?

METHODS

Participants

The number of participants differed across the four experiments, although most individuals took part in nearly all of them. Specifically, in the tone identification experiment, 1 male and 7 female participants were tested; in the auditory word comprehension experiment, 8 female participants were tested; in the kaleidoscope figure identification experiment, 1 male and 6 female participants were tested; and in the visual word comprehension task, 2 male and 7 female participants were tested. The age of participants ranged between 19 and 35 years.

All participants were right-handed according to the Edinburgh Inventory and had normal (or corrected-to-normal) vision and normal hearing. This information was collected during the recruitment process and was a prerequisite for participating in the study.

All participants were financially compensated for their participation and provided written consent. In accordance with the Declaration of Helsinki, the protocol was approved by the institutional review board of the University of Tennessee Health Science Center.

Experimental Tasks

All experimental tasks used in the study were generated using E-Prime 2.0 (Psychology Software Tools, Inc., Pittsburgh, PA). For the tone identification and auditory word comprehension tasks, all stimuli were presented binaurally via plastic tubes (6.096 m/20 ft length, 4 mm inner diameter, medical grade silicone) terminating in ear inserts (Etymotic Research, Inc., Elk Grove Village, IL). Before both experiments, stimuli were adjusted as per the participants' feedback to ensure equal sound intensity level in both ears. Once the intensity for one participant was set, it was kept constant throughout the experimental session. In addition, we made sure that only small adjustments in the sound intensity were made between participants (around a level of intensity that enabled comfortable hearing and was established during the piloting phase of the experiment). During the visual word recognition and kaleidoscope identification tasks, all stimuli were presented through a Dukane ImagePro Projector (Model 8942) on a back-projection screen located approximately 60 cm in front of the participant, subtending 1.0–3.0° and 0.5° of horizontal and vertical visual angle, respectively. The luminance of the screen and the ambient lighting conditions were kept constant for all participants.

Linguistic and nonlinguistic stimuli were presented in different blocks of trials. This experimental design was unavoidable because of the very different nature of the tasks: alternating linguistic and nonlinguistic stimuli within the same block would have worsened participants' performance to such a degree that it would have rendered the data less reliable. However, to ensure an unbiased assessment of the outcome of the experimental tasks, task order was randomized across all participants.

Tone Identification Task

In the tone identification task, two pure tones of the same duration (360 msec) and rise and fall time (5 msec) but of different frequency (500 and 1000 Hz) were presented to the participants 120 times each. The task for the participants was to judge whether the presented tone was a high or a low pitch tone. The presentation of the tone was followed by a delay of 1000 msec, after which the participants heard a cue indicating that they had to respond, by pressing a button with the index finger of their right hand, every time they had heard the low tone. The intertrial interval was fixed at 3000 msec.

Auditory Word Comprehension Task

The auditory word comprehension task was adapted from the continuous auditory word recognition protocol (continuous recognition memory for words or CRM) previously described (Papanicolaou et al., 2006). It involved the auditory presentation of a set of five target words immediately before the recording session. The word stimuli had a duration between 395 and 920 msec, and they were produced by a native English speaker with a flat intonation. Target words (jump, please, little, drink, and good) included four monosyllabic and one disyllabic word and had a mean frequency in the Zeno et al. G6-7 corpus (Zeno, Ivens, Millard, & Duvvuri, 1995) of 158 occurrences per million (range: 32–194 occurrences). A slightly higher proportion of distractors were disyllabic (40%) and the remaining monosyllabic, with a mean frequency of occurrence of 150 words per million in the same corpus (range: 18–820). During the recording session, the five target words were repeated in a random order, mixed with a different set of 40 distractors (nonrepeating words) in three blocks (of 45 stimuli each). The intertrial interval varied between 400 and 1100 msec (beginning after the end of each presented word). The participants were instructed to lift the index finger of their right hand when they heard the target words. Their responses were monitored using a camera.

Visual Word Comprehension Task

This task was a visual adaptation of the auditory word comprehension task (VCRM) described above, with a modified stimulus duration and ISI. As such, the five target words were presented visually, immediately before the recording session, and during the recording, the five target words were repeated in a different random order, mixed with 40 distractors in each of three blocks (of 45 stimuli each). Stimuli were presented for 1500 msec, with a fixed ISI of 1500 msec. Again, participants were instructed to lift the index finger of their right hand when they saw the target words, and their performance was monitored using a camera.

Kaleidoscope Figure Identification Task

A series of 60 multicolored, nonverbalizable kaleidoscope images were presented for 1500 msec each, with an intertrial interval of 5000 msec. The participants' task was to indicate, through a button press with their right index finger, whether or not the displayed image matched the first image presented at the start of the trial. The evoked magnetic field elicited by the second (test) image was used for the purposes of this study.

Imaging Procedures

All MEG recordings were conducted in the Magnetic Source Imaging Laboratory, Le Bonheur Children's Hospital, using a whole-head neuromagnetometer containing an array of 248 sensors (WH 3600, 4D Neuroimaging, San Diego, CA) housed in a sound-damped and magnetically shielded room. Intrinsic noise in each channel was <2 fT/√Hz. MEG signals were recorded in continuous (DC) mode, with a sampling rate of 1017.5 Hz. The position of the sensors relative to the participant's head was determined using five coils, three of which were anchored to the nasion and the left and right periauricular points and two on the forehead. The coils were activated briefly by passing a small current through them, at the beginning and, again, at the end of the recording session, and their precise location was determined using a localization algorithm native to the recording system software. During the same process, the participant's head shape was digitized using a stylus for subsequent localization of the brain activation sources. Throughout the recording sessions, eye movements, blinks, and electrocardiogram were monitored via a bipolar montage using Ag/AgCl electrodes. For detecting eye movements, the electrodes were placed above and below the left eye (vertical) and at the outer canthi (horizontal).

Structural MR images were obtained on a 3-T scanner (Siemens Verio, Siemens AG, Munich, DE) with an eight-channel head coil. High-resolution anatomical images were acquired using a MPRAGE sequence (repetition time/echo time/inversion time/flip angle = 2300 msec/3.66 msec/751 msec/13°) with slice-select inversion recovery pulses (matrix size of 512 × 512 × 176 and 0.5 × 0.5 × 1 mm spatial resolution) for the purpose of identifying the activated brain regions (see details in the MEG Source Identification section).

Data Analysis

MEG Signal Preprocessing

MEG data obtained during each session were preprocessed to identify noisy channels and remove artifacts. Noisy channels were identified by examining the signal similarity between each sensor and its neighbors. Channels exhibiting poor correlation or a high variance ratio relative to neighboring channels (Winter, Nunez, Ding, & Srinivasan, 2007) were flagged and removed from further analysis. Physiological artifacts and other types of noise, such as the electrocardiogram, eye movements, power supply bursting, and 1/f-like environmental noise (Mantini et al., 2011), were identified using independent component analysis and then removed from the MEG data (Escudero, Hornero, Abasolo, Fernandez, & Lopez-Coronado, 2007). Manual inspection then identified any remaining bad epochs, which were removed (participants whose number of noisy epochs exceeded 10% of the trials were excluded from further analysis). Subsequently, the individual epochs of each participant during each task were averaged, and the averaged evoked magnetic fields were further processed to derive their intracranial sources.
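
The text does not name the software used for these steps; the sketch below reproduces the same sequence (bad-channel screening, ICA-based artifact removal, amplitude-based epoch rejection, averaging) using the open-source MNE-Python package as a stand-in. The file name, thresholds, and excluded component indices are illustrative assumptions, and the simple correlation-with-the-mean criterion is only a proxy for the neighbor-based criterion described above.

    import numpy as np
    import mne

    raw = mne.io.read_raw_fif("sub01_crm_raw.fif", preload=True)  # hypothetical file name

    # Screen for noisy channels: here, poor correlation with the mean MEG signal
    picks = mne.pick_types(raw.info, meg=True)
    data = raw.get_data(picks=picks)
    mean_signal = data.mean(axis=0)
    corr = np.array([np.corrcoef(ch, mean_signal)[0, 1] for ch in data])
    raw.info["bads"] = [raw.ch_names[p] for p, r in zip(picks, corr) if abs(r) < 0.3]

    # ICA-based removal of cardiac/ocular components (indices chosen by visual inspection)
    ica = mne.preprocessing.ICA(n_components=30, random_state=0)
    ica.fit(raw)
    ica.exclude = [0, 3]                      # illustrative ECG/EOG component indices
    raw_clean = ica.apply(raw.copy())

    # Epoch around stimulus onset, reject residual high-amplitude trials, and average
    events = mne.find_events(raw_clean)
    epochs = mne.Epochs(raw_clean, events, tmin=-0.25, tmax=0.6,
                        baseline=(-0.25, 0.0), reject=dict(mag=4e-12), preload=True)
    evoked = epochs.average()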

MEG Source Identification

To identify the intracranial origin of evoked magnetic fields, the magnetic flux distribution recorded simultaneously over the entire head surface at successive points was analyzed using a minimum norm estimate model (MNE; MacGregor et al., 2012; Hauk, 2004; Uutela, Hämäläinen, & Somersalo, 1999; Hämäläinen & Ilmoniemi, 1994) to obtain estimates of the time-varying strength of intracranial currents using Brainstorm software (neuroimage.usc.edu/brainstorm/). All measures were made with respect to a prestimulus baseline, calculated as the mean level of activity over 250 msec before stimulus onset. In particular, MNE solutions were noise-normalized to obtain dynamic statistical parametric maps (dSPM; Dale et al., 2000) for each participant and experimental condition. The dSPM output represents, for each source location, the MNE value divided by an estimate of the noise covariance matrix at that particular location. The source activity calculated using dSPM has an F-distribution under the null hypothesis. The dSPM outputs from each participant were then used to perform group analysis.
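
The source analysis itself was carried out in Brainstorm; purely as a rough analogue of the noise-normalized minimum norm step, a sketch in MNE-Python might look as follows, assuming an evoked response (evoked), a forward solution (fwd), and epochs from which the 250-msec prestimulus baseline can supply the noise covariance.

    import mne

    # Noise covariance from the prestimulus baseline (up to 0 sec)
    noise_cov = mne.compute_covariance(epochs, tmax=0.0)

    # Minimum norm inverse operator constrained by the forward model
    inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, noise_cov)

    # dSPM: the MNE estimate at each source location is divided by a local noise estimate,
    # yielding values that follow an F-distribution under the null hypothesis
    stc = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")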

The noise-normalized estimated current sources were anatomically constrained by an MRI-derived surface model of each participant's brain. This model was generated by a fully automated cortical surface reconstruction procedure using FreeSurfer software (v5.3; Dale, Fischl, & Sereno, 1999) for producing a detailed geometric description (a regular tessellation of the cortical surface into triangles whose corner points serve as vertices) of the gray–white matter boundary of the neocortical mantle and the mesial temporal lobe. Subsequently, the Brainstorm software was used to construct a single-compartment boundary element model using triangular tessellations and to model 15,000 vertices (decimated from the original surface mesh) as potential current dipoles (7,500 per hemisphere) oriented perpendicular to the cortical surface during the forward calculations (Tadel, Baillet, Mosher, Pantazis, & Leahy, 2011). Coregistration of each averaged reconstructed current time series with its corresponding MRI data set was performed using an automated coregistration routine within the Brainstorm software, which aligns digitization points in the MEG head shape file with the fiducial points demarcated on the outer skin surface reconstruction of the MRI.

Selection of Brain Areas and Calculation of Laterality Indices

Two sets of anatomical regions were selected (Figure 1): first, the auditory and visual sensory cortices (Heschl's gyri or TTG and the primary visual cortex or V1) and, second, the following regions universally accepted as hubs of the neuronal network subserving oral and written language, namely, the pSTG, the pMTG, the SMG, and the AG comprising Wernicke's area; the pars triangularis and opercularis and the premotor region comprising Broca's area (Grodzinsky & Amunts, 2006); and the basal aspect of the posterior part of the temporal lobe including the pITG and the fusiform and lingual gyri, thought to be necessary for grapheme identification (e.g., Cohen et al., 2000). These ROIs were approximated using the Desikan–Killiany automated labeling atlas (Desikan et al., 2006). Although the activation of all ROIs was recorded, for the purposes of this study we limited the analysis to the above-mentioned regions for the time period corresponding to the earliest burst of neuronal response to the stimuli.
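
A minimal sketch of how such atlas-based ROIs might be obtained and summarized in MNE-Python is given below, continuing the hypothetical pipeline sketched above; the label names follow the Desikan–Killiany ("aparc") convention, subjects_dir is assumed to point to the FreeSurfer subjects directory, and the full gyral labels only approximate the posterior gyral portions described in the text.

    import mne

    # Desikan-Killiany ("aparc") labels from the participant's FreeSurfer reconstruction
    labels = mne.read_labels_from_annot("sub01", parc="aparc", subjects_dir=subjects_dir)

    # Illustrative subset of bilateral ROI names corresponding to the regions listed above
    roi_names = {"transversetemporal", "pericalcarine", "superiortemporal",
                 "middletemporal", "inferiortemporal", "supramarginal",
                 "parsopercularis", "parstriangularis", "fusiform", "lingual"}
    rois = [lab for lab in labels if lab.name.rsplit("-", 1)[0] in roi_names]

    # Mean noise-normalized time course within each label, from the dSPM estimate stc
    roi_time_courses = mne.extract_label_time_course(stc, rois, inv["src"], mode="mean")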

Figure 1. 

Approximation of the primary sensory cortices (TTG and V1) and the language-relevant anatomical structures based on the Desikan–Killiany automated labeling atlas in a representative subject. Top and bottom rows correspond to the lateral and medial views of the left and the right hemispheres, where the regions of interest are shown. LH = left hemisphere; RH = right hemisphere; POP = pars opercularis; PTR = pars triangularis; other abbreviations are defined in the text.

To assess language lateralization, we applied the anatomically constrained noise-normalized estimation procedure to the event-related MEG recorded signals, and we computed spatiotemporal activity estimates for each of the participants. The laterality ratios were then calculated for the selected time windows using the standard formula: activation in each left hemisphere ROI minus activation of the homotopic right hemisphere ROI divided by the sum of the two and multiplied by 100 [((left − right)/(left + right)) × 100]. These time windows were selected on the basis of visual inspection of the reconstructed activation peaks from the MNE approach as shown in Figure 2 and on the basis of evidence that initial sensory processing peaks around 100 msec and that early semantic processing emerges around ∼240–280 msec, followed by the N400 responses (e.g., Kutas & Federmeier, 2011; Fujimaki et al., 2009; Bowyer et al., 2005; Simos et al., 1998). Significant laterality effects for each task were explored using one-sample t tests (two-tailed, μ = 0), with laterality indices derived at each latency and ROI as the dependent variable.
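
As a sketch of how these per-participant ratios and the one-sample tests can be computed (assuming, for each participant, left- and right-hemisphere ROI time courses such as those extracted above; the container participant_roi_tcs and the times vector are hypothetical names):

    import numpy as np
    from scipy.stats import ttest_1samp

    def laterality_ratio(left_tc, right_tc, times, t_start, t_end):
        # Integrate activation within the window, then form ((L - R) / (L + R)) * 100
        win = (times >= t_start) & (times <= t_end)
        left = np.trapz(left_tc[win], times[win])
        right = np.trapz(right_tc[win], times[win])
        return 100.0 * (left - right) / (left + right)

    # One ratio per participant for a given ROI and window (e.g., TTG, 0.050-0.110 sec, CRM),
    # then a two-tailed one-sample t test against 0 (no lateralization)
    ratios = [laterality_ratio(l, r, times, 0.050, 0.110) for l, r in participant_roi_tcs]
    t_val, p_val = ttest_1samp(ratios, popmean=0.0)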

Figure 2. 

Time courses of noise-normalized estimated current sources in cortical areas involved in processing auditory and visual linguistic (CRM and VCRM) and nonlinguistic stimuli (Tones and Kaleidoscope). Early activation peaks (shaded segments) during the auditory and visual linguistic (CRM: 50–110 msec; VCRM: 50–160 msec) and nonlinguistic (Tones: 50–125 msec; Kaleidoscope: 50–160 msec) tasks were selected on the basis of visual inspection to capture the course of the activation that typically peaks at about 100 msec. The time courses were averaged across all participants in each task (see Participants section for details). Blue lines correspond to the reconstructed signal of the left hemisphere, and red lines correspond to the reconstructed signal of the right hemisphere. Early significantly lateralized activity is indicated by asterisks.

RESULTS

Behavioral Performance

The threshold for participants' performance for the auditory and visual word comprehension tasks as well as for the tone identification task was set at a success rate of 95%. The success rate for the kaleidoscope figure identification task was set at 75% because of the higher demands of this experiment and on the basis of preliminary behavioral results during the piloting phase. All participants in this study met these criteria.

Brain Activation Profiles

Figure 2 presents the time courses of the mean noise-normalized current (dSPM) yielded by the MNE analysis in the ROIs corresponding to the primary sensory cortices during the four tasks and shows that the reconstructed activation of neuronal sources exhibits several peaks of activation, roughly corresponding in latency to the peaks in the average evoked magnetic field time series.

To address the experimental questions, the amount of activation of each ROI during each peak, derived by integrating the activation within the corresponding time window (indicated by the shaded areas in Figure 2 for the first activation burst), was calculated, and laterality indices were computed (Table 1).

Table 1. 

Mean (SD) of Laterality Ratios across All Nonverbal and Verbal Tasks during the Early Latency Window

ROI                  Tones           Kaleidoscope    CRM             VCRM
V1                   4.83 (8.6)      3.79 (10.1)     5.25 (14.6)     10.90 (10.6)
TTG                  1.63 (18.5)     5.51 (20.0)     27.60 (30.5)    6.46 (15.0)
pSTG                 7.81 (18.7)     11.90 (19.7)    35.20 (26.6)    14.80 (18.6)
pMTG                 8.33 (20.9)     4.20 (19.0)     23.10 (21.3)    10.30 (21.9)
pITG                 3.30 (19.2)     7.72 (21.8)     17.00 (18.6)    12.00 (19.5)
Supramarginal        4.83 (12.4)     3.33 (23.7)     25.10 (19.2)    10.50 (18.6)
Premotor             −0.50 (11.6)    0.88 (7.9)      19.60 (16.5)    4.66 (10.7)
Pars opercularis     −3.72 (16.7)    −6.26 (17.0)    −3.38 (10.9)    4.47 (20.1)
Pars triangularis    −3.71 (12.6)    3.46 (18.2)     18.20 (18.9)    12.10 (17.0)
Fusiform             3.16 (17.1)     −1.73 (15.2)    17.90 (16.0)    10.60 (12.9)
Angular              4.72 (13.8)     2.20 (19.6)     6.13 (26.6)     15.20 (18.3)
Lingual              −1.20 (14.0)    2.97 (14.6)     7.42 (12.7)     8.24 (12.7)

Positive and negative values denote leftward and rightward asymmetries, respectively.

During the two nonlinguistic tasks (judging the pitch of the tones and the kaleidoscope patterns), the primary sensory cortices (TTG and V1, respectively) were symmetrically activated in the left and in the right hemisphere, not only during the earliest activation peak of interest (occurring between 50 and 125 msec in TTG during the tone identification task and between 50 and 160 msec in V1 during the kaleidoscope task) but also during all subsequent activation peaks (not highlighted in Figure 2). The laterality indices in those ROIs for all activation peaks were, consequently, not statistically different from zero, indicating that nonlinguistic, sensory processing of stimuli is not lateralized but is carried out by neuronal mechanisms in the respective sensory cortices of both hemispheres. However, against implicit expectations and often explicit assumptions, there is clear evidence of lateralized linguistic processing even within the sensory cortices, almost simultaneous with sensory processing, during both the aural (CRM) and visual (VCRM) word recognition tasks. As indicated by asterisks in Figure 2, the TTG showed higher activation in the left hemisphere for the time window 50–110 msec (p = .037) during the auditory task (CRM), with six participants out of eight showing the effect (6/8 L > R; 2/8 R > L), whereas for the visual task (VCRM), during the corresponding time window (50–160 msec), it was the primary visual cortex that showed higher activation in the left hemisphere (p = .014), with seven participants out of nine showing the effect (7/9 L > R; 2/9 R > L). Although it is not clear whether the significantly greater degree of activation of the primary sensory cortices in the left hemisphere reflects task demands or is brought about by the nature of the stimuli, the data do suggest that lateralized language circuitry extends into the primary sensory cortices and is activated almost simultaneously with the circuitry processing sensory, prelinguistic features of the stimuli. The early lateralized activation of the primary auditory and visual cortices during the language tasks is further illustrated in Figure 3, which shows sources active as early as 96 msec during the CRM and 82 msec during the VCRM tasks.

Figure 3. 

Estimated early cortical activity in a single participant during the auditory (CRM) and the visual (VCRM) word comprehension tasks. Left-lateralized early (<100 msec) activation of the primary sensory cortices during the two language tasks was observed in all participants. Left-lateralized activation during CRM is displayed on the lateral and for the VCRM task on the medial surface of the left and right hemispheres, shown in that order.

The activation profiles in the language-related ROIs lead to the same conclusion: In these ROIs, significantly left-lateralized activation occurs as early as that in the sensory cortices and involves, for the CRM task, the pars triangularis (p = .030) with six participants out of eight showing the effect (6/8 L > R; 1/8 R > L; 1 L = R), pSTG (p = .007) with seven participants out of eight showing the effect (7/8 L > R; 1/8 R > L), SMG (p = .007) with all participants showing the effect (8/8 L > R), pITG (p = .036) with seven participants out of eight showing the effect (7/8 L > R; 1/8 R > L), pMTG (p = .02) with all participants showing the effect (8/8 L > R), the fusiform gyrus (p = .016) with seven participants out of eight showing the effect (7/8 L > R; 1/8 R > L), and the premotor cortex (p = .006) with all eight participants showing the effect (8/8 L > R), but not other ROIs that are not considered language related. In the VCRM task, besides V1, significantly left-lateralized activation during the first peak was also found in the fusiform gyrus (p = .038) with six out of nine participants showing the effect (6/9 L > R; 2/9 R > L; 1 L = R), the angular gyrus (p = .037) with six out of nine participants showing the effect (6/9 L > R; 3/9 R > L), pMTG (p = .01) with eight out of nine participants showing the effect (8/9 L > R; 1/9 R > L), and pSTG (p = .043) also with eight out of nine participants showing the effect (8/9 L > R; 1 L = R). These results are displayed in Figure 4.

Figure 4. 

Laterality ratios of early (< 100 msec) activation during the auditory (CRM) and visual word comprehension (VCRM) tasks in the primary sensory cortices (TTG and V1) and in all known language-related regions. CRM task is shown in gray bars and VCRM task is shown in white bars. Asterisks indicate significantly left-lateralized activation. Area V1 shows significantly left-lateralized activation during VCRM task, while TTG shows significantly left-lateralized activation during CRM task. Some language-related areas show significantly left-lateralized activation during both tasks. No significant lateralization, either left or right, was obtained in any other pair of homotopic regions during the linguistic and nonlinguistic tasks. V1 = primary visual cortex

Moreover, during the VCRM task, in the second (160–330 msec) and third (330–600 msec) activation bursts, left lateralization was noted in additional language-related cortices, including the pars triangularis (160–330 msec: p = .015, with eight out of nine participants showing the effect [8/9 L > R; 1/9 R > L]; 330–600 msec: p = .001, with all nine participants showing the effect [9/9 L > R]), pars opercularis (330–600 msec: p = .004, with seven out of nine participants showing the effect [7/9 L > R; 2/9 R > L]), premotor cortex (330–600 msec: p = .009, with eight out of nine participants showing the effect [8/9 L > R; 1/9 R > L]), pSTG (160–330 msec: p = .002; 330–600 msec: p = .001, with all nine participants showing the effect in both time windows [9/9 L > R]), pMTG (160–330 msec: p = .007, with eight out of nine participants showing the effect [8/9 L > R; 1/9 L = R]; 330–600 msec: p = .0004, with all nine participants showing the effect [9/9 L > R]), SMG (330–600 msec: p = .013, with seven out of nine participants showing the effect [7/9 L > R; 2/9 R > L]), and the fusiform (160–330 msec: p = .006, with eight out of nine participants showing the effect [8/9 L > R; 1/9 R > L]; 330–600 msec: p = .013, with eight out of nine participants showing the effect [8/9 L > R; 1/9 R > L]) and lingual (160–330 msec: p = .007, with eight out of nine participants showing the effect [8/9 L > R; 1/9 R > L]; 330–600 msec: p = .015, with seven out of nine participants showing the effect [7/9 L > R; 1/9 R > L; 1 L = R]) gyri, but not in other ROIs that are not considered language related.

DISCUSSION

The pattern of left-lateralized activation of Broca's and Wernicke's regions, as well as of regions in the posterior sector of the left basal temporal cortex (the fusiform and lingual gyri), derived in this study concurs with the findings reported consistently in the neuroimaging literature (e.g., Fridriksson et al., 2008, 2009; Turner et al., 2009; Friederici & Alter, 2004; Balsamo et al., 2002) of activation of pSTG, SMG, the AG, and Broca's region during receptive language, indicating that even in receptive language tasks, the entire left hemisphere language network is activated. But in addition, this study revealed two salient features of language-specific activation, namely, first, that such activation begins almost at the same time as sensory feature analysis of the input in all known language-related areas of the left hemisphere and, second, that it also involves the primary sensory cortices, facts that call for extensive revision of our current understanding of the way brain mechanisms handle linguistic input.

The received notions regarding the contribution of the primary sensory cortices to language processing are that early activation in the perisylvian or pericalcarine areas likely represents sensory, that is, nonlinguistic, processing of the stimuli and is modality-specific (e.g., McDonald et al., 2009; Breier, Simos, Zouridakis, & Papanicolaou, 1999). To the best of our knowledge, the only evidence that visual areas of the left occipital cortex may support language processes comes from studies of congenitally blind people, which show that the visual cortex is responsive not only to language features corresponding to tactile properties of Braille reading (Sadato et al., 1996) but also to verbal stimuli (Bedny, Richardson, & Saxe, 2015; Bedny, Pascual-Leone, Dodell-Feder, Fedorenko, & Saxe, 2011). However, in these cases, the results reflect cortical plasticity and extensive functional reorganization of these areas in the absence of their major sensory input (e.g., Bedny et al., 2011, 2015; Amedi, Raz, Pianka, Malach, & Zohary, 2003; Röder, Stock, Bien, Neville, & Rösler, 2002).

Our current findings are striking in that they show for the first time that the primary sensory cortices are sensitive to linguistic information in healthy participants. One may rightly argue that the significantly higher degree of activation of the primary sensory cortices in the left hemisphere may reflect task demands (e.g., working memory load; Scott & Mishkin, 2016; Bigelow, Rossi, & Poremba, 2014; Harrison & Tong, 2009). However, the experiments involving nonlinguistic stimuli and those involving language were of comparable difficulty, and in the case of the kaleidoscope task, the working memory demands may have been higher than those of the VCRM task. Therefore, the observed laterality effects cannot be attributed to task demand factors.

The question of whether the engagement of the primary auditory and visual cortices is brought about by the nature of the stimuli or by the fact that the entire system is primed to engage in language processing cannot be answered on the basis of these data, but we are currently examining this issue in the context of another investigation. Another related issue that cannot be resolved on the basis of these data is whether there is a temporal order in the engagement of the different language-related structures in the sensory and linguistic analysis of linguistic input or whether the entire language network is simultaneously active once linguistic input is expected.

Summary and Conclusions

In conclusion, this study revealed two entirely novel features of language-specific activation, namely, that such activation begins almost at the same time as sensory feature analysis of the input in all known language-related areas of the left hemisphere and that it also involves the primary sensory cortices. To the degree that the MNE method is adequate for the purposes for which it was used here, it appears that the sensory cortices are involved in early language processing. These results urge us to revise our current understanding of the way brain mechanisms perform linguistic processing. In view of the small sample size in each experiment, however, the generalizability of our results is limited; the finding to which they point, namely, that early linguistic processing takes place even in primary sensory areas, therefore merits replication and further exploration with larger participant samples.

Acknowledgments

The authors would like to kindly acknowledge the support of the Shainberg Foundation and the Neuroscience Institute, Le Bonheur Children's Hospital (www.lebonheur.org/our-services/neuroscience-institute/) and thank Ms. Katherine Schiller and Ms. Liliya Birg for their expert technical assistance.

Reprint requests should be sent to Andrew C. Papanicolaou, Department of Pediatrics, Division of Clinical Neurosciences, University of Tennessee Health Science Center, 49 N. Dunlap St, Memphis, TN 38105, or via e-mail: apapanic@uthsc.edu.

REFERENCES

Amedi, A., Raz, N., Pianka, P., Malach, R., & Zohary, E. (2003). Early “visual” cortex activation correlates with superior verbal memory performance in the blind. Nature Neuroscience, 6, 758–766.
Balsamo, L. M., Xu, B., Grandin, C. B., Petrella, J. R., Braniecki, S. H., Elliott, T. K., et al. (2002). A functional magnetic resonance imaging study of left hemisphere language dominance in children. Archives of Neurology, 59, 1168–1174.
Baumann, S. B., Rogers, R. L., Papanicolaou, A. C., & Saydjari, C. L. (1990). Intersession replicability of dipole parameters from three components of the auditory evoked magnetic field. Brain Topography, 3, 311–319.
Bedny, M., Pascual-Leone, A., Dodell-Feder, D., Fedorenko, E., & Saxe, R. (2011). Language processing in the occipital cortex of congenitally blind adults. Proceedings of the National Academy of Sciences, U.S.A., 108, 4429–4434.
Bedny, M., Richardson, H., & Saxe, R. (2015). “Visual” cortex responds to spoken language in blind children. Journal of Neuroscience, 35, 11674–11681.
Bemis, D. K., & Pylkkanen, L. (2013). Basic linguistic composition recruits the left anterior temporal lobe and left angular gyrus during both listening and reading. Cerebral Cortex, 23, 1859–1873.
Bigelow, J., Rossi, B., & Poremba, A. (2014). Neural correlates of short-term memory in primate auditory cortex. Frontiers in Neuroscience, 8, 250.
Binder, J. R., Frost, J. A., Hammeke, T. A., Rao, S. M., & Cox, R. W. (1996). Function of the left planum temporale in auditory and linguistic processing. Brain, 119, 1239–1247.
Binder, J. R., Swanson, S. J., Hammeke, T. A., Morris, G. L., Mueller, W. M., Fischer, M., et al. (1996). Determination of language dominance using functional MRI: A comparison with the Wada test. Neurology, 46, 978–984.
Bowyer, S. M., Moran, J. E., Mason, K. M., Constantinou, J. E., Smith, B. J., Barkley, G. L., et al. (2004). MEG localization of language-specific cortex utilizing MR-FOCUSS. Neurology, 62, 2247–2255.
Bowyer, S. M., Moran, J. E., Weiland, B. J., Mason, K. M., Greenwald, M. L., Smith, B. J., et al. (2005). Language laterality determined by MEG mapping with MR-FOCUSS. Epilepsy and Behavior, 6, 235–241.
Breier, J. I., Simos, P. G., Zouridakis, G., & Papanicolaou, A. C. (1999). Lateralization of cerebral activation in auditory verbal and non-verbal memory tasks using magnetoencephalography. Brain Topography, 12, 89–97.
Brown, W. S., & Lehmann, D. (1980). Linguistic meaning related differences in evoked potential topography: English, Swiss-German, and imagined. Brain and Language, 11, 340–353.
Chen, Y., Davis, M. H., Pulvermuller, F., & Hauk, O. (2015). Early visual word processing is flexible: Evidence from spatiotemporal brain dynamics. Journal of Cognitive Neuroscience, 27, 1738–1751.
Cohen, L., Dehaene, S., Naccache, L., Lehericy, S., Dehaene-Lambertz, G., Hénaff, M. A., et al. (2000). The visual word form area: Spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain, 123, 291–307.
Dale, A. M., Fischl, B., & Sereno, M. I. (1999). Cortical surface-based analysis: I. Segmentation and surface reconstruction. Neuroimage, 9, 179–194.
Dale, A. M., Liu, A. K., Fischl, B. R., Buckner, R. L., Belliveau, J. W., Lewine, J. D., et al. (2000). Dynamic statistical parametric mapping: Combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron, 26, 55–67.
Demonet, J. F., Price, C., Wise, R., & Frackowiak, R. S. (1994). A PET study of cognitive strategies in normal subjects during language tasks. Influence of phonetic ambiguity and sequence processing on phoneme monitoring. Brain, 117, 671–682.
Desikan, R. S., Ségonne, F., Fischl, B., Quinn, B. T., Dickerson, B. C., Blacker, D., et al. (2006). An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage, 31, 968–980.
Egorova, N., Shtyrov, Y., & Pulvermuller, F. (2013). Early and parallel processing of pragmatic and semantic information in speech acts: Neurophysiological evidence. Frontiers in Human Neuroscience, 7, 86.
Escudero, J., Hornero, R., Abasolo, D., Fernandez, A., & Lopez-Coronado, M. (2007). Artifact removal in magnetoencephalogram background activity with independent component analysis. IEEE Transactions on Biomedical Engineering, 54, 1965–1973.
Foundas, A. L., Eure, K. F., Luevano, L. F., & Weinberger, D. R. (1998). MRI asymmetries of Broca's area: The pars triangularis and pars opercularis. Brain and Language, 64, 282–296.
Fridriksson, J., Moser, D., Ryalls, J., Bonilha, L., Rorden, C., & Baylis, G. (2009). Modulation of frontal lobe speech areas associated with the production and perception of speech movements. Journal of Speech, Language, and Hearing Research, 52, 812–819.
Fridriksson, J., Moss, J., Davis, B., Baylis, G., Bonilha, L., & Rorden, C. (2008). Motor speech perception modulates the cortical language areas. Neuroimage, 41, 605–613.
Friederici, A. D., & Alter, K. (2004). Lateralization of auditory language functions: A dynamic dual pathway model. Brain and Language, 89, 267–276.
Friederici, A. D., Wang, Y., Herrmann, C. S., Maess, B., & Oertel, U. (2000). Localization of early syntactic processes in frontal and temporal cortical areas: A magnetoencephalographic study. Human Brain Mapping, 11, 1–11.
Fujimaki, N., Hayakawa, T., Ihara, A., Wei, Q., Munetsuna, S., Terazono, Y., et al. (2009). Early neural activation for lexico-semantic access in the left anterior temporal area analyzed by an fMRI-assisted MEG multidipole method. Neuroimage, 44, 1093–1102.
Grodzinsky, Y., & Amunts, K. (2006). Broca's region (illustrated ed.). Oxford; New York: Oxford University Press.
Gur, R. C., Ragland, D., Mozley, L. H., Mozley, P. D., Smith, R., Alavi, A., et al. (1997). Lateralized changes in regional cerebral blood flow during performance of verbal and facial recognition tasks: Correlations with performance and “effort”. Brain and Cognition, 33, 388–414.
Halgren, E., Dhond, R. P., Christensen, N., Van Petten, C., Marinkovic, K., Lewine, J. D., et al. (2002). N400-like magnetoencephalography responses modulated by semantic context, word frequency, and lexical class in sentences. Neuroimage, 17, 1101–1116.
Hämäläinen, M. S., & Ilmoniemi, R. J. (1994). Interpreting magnetic fields of the brain: Minimum norm estimates. Medical & Biological Engineering & Computing, 32, 35–42.
Harrison, S. A., & Tong, F. (2009). Decoding reveals the contents of visual working memory in early visual areas. Nature, 458, 632–635.
Hauk, O. (2004). Keep it simple: A case for using classical minimum norm estimation in the analysis of EEG and MEG data. Neuroimage, 21, 1612–1621.
Henry, T. R., Mazziotta, J. C., Engel, J., Jr., Christenson, P. D., Zhang, J. X., Phelps, M. E., et al. (1990). Quantifying interictal metabolic activity in human temporal lobe epilepsy. Journal of Cerebral Blood Flow and Metabolism, 10, 748–757.
Hickok, G. (2012). The cortical organization of speech processing: Feedback control and predictive coding the context of a dual-stream model. Journal of Communication Disorders, 45, 393–402.
Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8, 393–402.
Hinojosa, J. A., Martin-Loeches, M., Muñoz, F., Casado, P., Fernandez-Fias, C., & Pozo, M. A. (2001). Electrophysiological evidence of a semantic system commonly accessed by animals and tools categories. Brain Research. Cognitive Brain Research, 12, 321–328.
Hirata, M., Kato, A., Taniguchi, M., Saitoh, Y., Ninomiya, H., Ihara, A., et al. (2004). Determination of language dominance with synthetic aperture magnetometry: Comparison with the Wada test. Neuroimage, 23, 46–53.
Hoenig, K., Sim, E. J., Bochev, V., Herrnberger, B., & Kiefer, M. (2008). Conceptual flexibility in the human brain: Dynamic recruitment of semantic maps from visual, motor, and motion-related areas. Journal of Cognitive Neuroscience, 20, 1799–1814.
Kamada, K., Sawamura, Y., Takeuchi, F., Kuriki, S., Kawai, K., Morita, A., et al. (2007). Expressive and receptive language areas determined by a non-invasive reliable method using functional magnetic resonance imaging and magnetoencephalography. Neurosurgery, 60, 296–305.
Kuriki, S., Isahai, N., & Ohtsuka, A. (2005). Spatiotemporal characteristics of the neural activities processing consonant/dissonant tones in melody. Experimental Brain Research, 162, 46–55.
Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621–647.
MacGregor, L. J., Pulvermüller, F., van Casteren, M., & Shtyrov, Y. (2012). Ultra-rapid access to words in the brain. Nature Communications, 3, 711.
Mantini, D., Della Penna, S., Marzetti, L., de Pasquale, F., Pizzella, V., Corbetta, M., et al. (2011). A signal-processing pipeline for magnetoencephalography resting-state networks. Brain Connectivity, 1, 49–59.
Martin-Loeches, M., Hinojosa, J. A., Gómez-Jarabo, G., & Rubia, F. (2001). An early electrophysiological sign of semantic processing in basal extrastriate areas. Psychophysiology, 38, 114–124.
Mayhew, S. D., Dirckx, S. G., Niazy, R. K., Iannetti, G. D., & Wise, R. G. (2010). EEG signatures of auditory activity correlate with simultaneously recorded fMRI responses in humans. Neuroimage, 49, 849–864.
McDonald, C. R., Thesen, T., Hagler, D. J., Jr., Carlson, C., Devinsky, O., Kuzniecky, R., et al. (2009). Distributed source modeling of language with magnetoencephalography: Application to patients with intractable epilepsy. Epilepsia, 50, 2256–2266.
Mohamed, I. S., Cheyne, D., Gaetz, W. C., Otsubo, H., Logan, W. J., Carter Snead, O., III, et al. (2008). Spatiotemporal patterns of oscillatory brain activity during auditory word recognition in children: A synthetic aperture magnetometry study. International Journal of Psychophysiology, 68, 141–148.
Moseley, R. L., Pulvermüller, F., & Shtyrov, Y. (2013). Sensorimotor semantics on the spot: Brain activity dissociates between conceptual categories within 150 ms. Scientific Reports, 3, 1928.
Pammer, K., Hansen, P. C., Kringelbach, M. L., Holliday, I., Barnes, G., Hillebrand, A., et al. (2004). Visual word recognition: The first half second. Neuroimage, 22, 1819–1825.
Papanicolaou, A. C., Pazo-Alvarez, P., Castillo, E. M., Billingsley-Marshall, R. L., Breier, J. I., Swank, P. R., et al. (2006). Functional neuroimaging with MEG: Normative language profiles. Neuroimage, 33, 326–342.
Papanicolaou, A. C., Simos, P. G., Castillo, E. M., Breier, J. I., Sarkari, S., Pataraia, E., et al. (2004). Magnetocephalography: A noninvasive alternative to the Wada procedure. Journal of Neurosurgery, 100, 867–876.
Passaro, A. D., Rezaie, R., Moser, D. C., Li, Z., Dias, N., & Papanicolaou, A. C. (2011). Optimizing estimation of hemispheric dominance for language using magnetic source imaging. Brain Research, 1416, 44–50.
Penolazzi, B., Hauk, O., & Pulvermüller, F. (2006). Early semantic context integration and lexical access as revealed by event-related brain potentials. Biological Psychology, 74, 374–388.
Poldrack, R. A., Temple, E., Protopapas, A., Nagarajan, S., Tallal, P., Merzenich, M., et al. (2001). Relations between the neural bases of dynamic auditory processing and phonological processing: Evidence from fMRI. Journal of Cognitive Neuroscience, 13, 687–697.
Price, C. J., Wise, R. J., & Frackowiak, R. S. (1996). Demonstrating the implicit processing of visually presented words and pseudowords. Cerebral Cortex, 6, 62–70.
Pugh, K. R., Shaywitz, B. A., Shaywitz, S. E., Constable, R. T., Skudlarski, P., Fulbright, R. K., et al. (1996). Cerebral organization of component processes in reading. Brain, 119, 1221–1238.
Pulvermüller, F., Lutzenberger, W., & Birbaumer, N. (1995). Electrocortical distinction of vocabulary types. Electroencephalography and Clinical Neurophysiology, 94, 357–370.
Pulvermüller, F., Lutzenberger, W., & Preissl, H. (1999). Nouns and verbs in the intact brain: Evidence from event-related potentials and high-frequency cortical responses. Cerebral Cortex, 9, 498–508.
Rimol, L. M., Specht, K., Weis, S., Savoy, R., & Hugdahl, K. (2005). Processing of sub-syllabic speech units in the posterior temporal lobe: An fMRI study. Neuroimage, 26, 1059–1067.
Rinne, T., Alho, K., Alku, P., Holi, M., Sinkkonen, J., Virtanen, J., et al. (1999). Analysis of speech sounds is left-hemisphere predominant at 100–150 ms after sound onset. NeuroReport, 10, 1113–1117.
Röder, B., Stock, O., Bien, S., Neville, H., & Rösler, F. (2002). Speech processing activates visual cortex in congenitally blind humans. European Journal of Neuroscience, 16, 930–936.
Rumsey, J. M., Horwitz, B., Donohue, B. C., Nace, K., Maisog, J. M., & Andreason, P. (1997). Phonological and orthographic components of word recognition. A PET-rCBF study. Brain, 120, 739–759.
Sadato, N., Pascual-Leone, A., Grafman, J., Ibañez, V., Deiber, M. P., Dold, G., et al. (1996). Activation of the primary visual cortex by Braille reading in blind subjects. Nature, 380, 526–528.
Salmelin, R. (2007). Clinical neurophysiology of language: The MEG approach. Clinical Neurophysiology, 118, 237–254.
Schulz, G. M., Varga, M., Jeffires, K., Ludlow, C. L., & Braun, A. R. (2005). Functional neuroanatomy of human vocalization: An H215O PET study. Cerebral Cortex, 15, 1835–1847.
Scott, B. H., & Mishkin, M. (2016). Auditory short-term memory in the primate auditory cortex. Brain Research, 1640, 264–277.
Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400–2406.
Simos, P. G., Breier, J. I., Zouridakis, G., & Papanicolaou, A. C. (1998). Assessment of functional cerebral laterality for language using magnetoencephalography. Journal of Clinical Neurophysiology, 15, 364–372.
Spitz, M. C., Emerson, R. G., & Pedley, T. A. (1986). Dissociation of frontal N100 from occipital P100 in pattern reversal visual evoked potentials. Electroencephalography and Clinical Neurophysiology, 65, 161–168.
Springer, J. A., Binder, J. R., Hammeke, T. A., Swanson, S. J., Frost, J. A., Bellgowan, P. S., et al. (1999). Language dominance in neurologically normal and epilepsy subjects: A functional MRI study. Brain, 122, 2033–2046.
Szymanski, M. D., Perry, D. W., Gage, N. M., Rowley, H. A., Walker, J., Berger, M. S., et al. (2001). Magnetic source imaging of late evoked field responses to vowels: Toward an assessment of hemispheric dominance for language. Journal of Neurosurgery, 94, 445–453.
Tadel, F., Baillet, S., Mosher, J. C., Pantazis, D., & Leahy, R. M. (2011). Brainstorm: A user-friendly application for MEG/EEG analysis. Computational Intelligence and Neuroscience, Article 879716.
Tavabi, K., Embick, D., & Roberts, T. P. L. (2011). Word repetition priming induced oscillations in auditory cortex: A magnetoencephalography study. NeuroReport, 22, 887–891.
Turner, T. H., Fridriksson, J., Baker, J., Eoute, D., Jr., Bonilha, L., & Rorden, C. (2009). Obligatory Broca's area modulation associated with passive speech perception. NeuroReport, 20, 492–496.
Uutela, K., Hämäläinen, M., & Somersalo, E. (1999). Visualization of magnetoencephalographic data using minimum current estimates. Neuroimage, 10, 173–180.
Verkindt, C., Bertrand, O., Perrin, F., Echallier, J. F., & Pernier, J. (1995). Tonotopic organization of the human auditory cortex: N100 topography and multiple dipole model analysis. Electroencephalography and Clinical Neurophysiology, 96, 143–156.
Winter, W. R., Nunez, P. L., Ding, J., & Srinivasan, R. (2007). Comparison of the effect of volume conduction on EEG coherence with the effect of field spread on MEG coherence. Statistics in Medicine, 26, 3946–3957.
Wood, C. C., Goff, W. R., & Day, R. S. (1971). Auditory evoked potentials during speech perception. Science, 173, 1248–1251.
Zeno, S. M., Ivens, S. H., Millard, R. T., & Duvvuri, R. (1995). The educator's word frequency guide. Brewster, NY: Touchstone Applied Science Associates.