1-14 of 14
Gregory Hickok
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2022) 34 (8): 1355–1375.
Published: 01 July 2022
The neural basis of language has been studied for centuries, yet the networks critically involved in simply identifying or understanding a spoken word remain elusive. Several functional–anatomical models of critical neural substrates of receptive speech have been proposed, including (1) auditory-related regions in the left mid-posterior superior temporal lobe, (2) motor-related regions in the left frontal lobe (in normal and/or noisy conditions), (3) the left anterior superior temporal lobe, or (4) bilateral mid-posterior superior temporal areas. One difficulty in comparing these models is that they often focus on different aspects of the sound-to-meaning pathway and are supported by different types of stimuli and tasks. Two auditory tasks that are typically used in separate studies—syllable discrimination and word comprehension—often yield different conclusions. We assessed syllable discrimination (words and nonwords) and word comprehension (clear speech and with a noise masker) in 158 individuals with focal brain damage: left (n = 113) or right (n = 19) hemisphere stroke, left (n = 18) or right (n = 8) anterior temporal lobectomy, and 26 neurologically intact controls. Discrimination and comprehension tasks are doubly dissociable both behaviorally and neurologically. In support of a bilateral model, clear speech comprehension was near ceiling in 95% of left stroke cases and right temporal damage impaired syllable discrimination. Lesion-symptom mapping analyses for the syllable discrimination and noisy word comprehension tasks each implicated most of the left superior temporal gyrus. Comprehension but not discrimination tasks also implicated the left posterior middle temporal gyrus, whereas discrimination but not comprehension tasks also implicated more dorsal sensorimotor regions in posterior perisylvian cortex.
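The core of a lesion-symptom mapping analysis like the one described above can be sketched as a voxel-wise comparison of behavioral scores between patients with and without damage at each voxel. The sketch below uses entirely synthetic data and hypothetical dimensions; a real analysis would add permutation-based multiple-comparison correction and lesion-volume covariates.

```python
# Minimal voxel-based lesion-symptom mapping sketch (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_voxels = 158, 1000                      # hypothetical mask size
lesions = rng.random((n_patients, n_voxels)) < 0.2    # True = voxel lesioned
scores = rng.normal(80, 10, n_patients)               # task accuracy (%)

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    hit, spared = scores[lesions[:, v]], scores[~lesions[:, v]]
    if hit.size >= 5 and spared.size >= 5:            # skip rarely lesioned voxels
        # Positive t = lesioned patients score lower than spared patients
        t_map[v] = stats.ttest_ind(spared, hit, equal_var=False).statistic

print(f"voxels tested: {np.isfinite(t_map).sum()}, peak t = {np.nanmax(t_map):.2f}")
```

Voxels surviving a corrected threshold on `t_map` form the lesion-symptom map for that task.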
Journal of Cognitive Neuroscience (2020) 32 (2): 256–271.
Published: 01 February 2020
Left-hemisphere brain damage commonly affects patients' abilities to produce and comprehend syntactic structures, a condition typically referred to as “agrammatism.” The neural correlates of agrammatism remain disputed in the literature, and distributed areas have been implicated as important predictors of performance, for example, Broca's area, anterior temporal areas, and temporo-parietal areas. We examined the association between damage to specific language-related ROIs and impaired syntactic processing in acute aphasia. We hypothesized that damage to the posterior middle temporal gyrus, and not Broca's area, would predict syntactic processing abilities. One hundred four individuals with acute aphasia (<20 days poststroke) were included in the study. Structural MRI scans were obtained, and all participants completed a 45-item sentence–picture matching task. We performed ROI-based stepwise regression analyses to examine the relation between cortical brain damage and impaired comprehension of canonical and noncanonical sentences. Damage to the posterior middle temporal gyrus was the strongest predictor for overall task performance and performance on noncanonical sentences. Damage to the angular gyrus was the strongest predictor for performance on canonical sentences, and damage to the posterior superior temporal gyrus predicted noncanonical scores when performance on canonical sentences was included as a cofactor. Overall, our models showed that damage to temporo-parietal and posterior temporal areas was associated with impaired syntactic comprehension. Our results indicate that the temporo-parietal area is crucially implicated in complex syntactic processing, whereas the role of Broca's area may be complementary.
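An ROI-based forward stepwise regression of the kind described above can be illustrated schematically: at each step, add the ROI whose proportion-damaged measure best improves the fit to comprehension scores. ROI names and data below are hypothetical, not the study's data.

```python
# Forward stepwise ROI regression sketch (synthetic data, hypothetical ROIs).
import numpy as np

rng = np.random.default_rng(1)
rois = ["pMTG", "Broca", "AngularGyrus", "pSTG"]
X = rng.random((104, len(rois)))                  # proportion damage per ROI
y = 100 - 40 * X[:, 0] + rng.normal(0, 5, 104)    # pMTG drives performance here

def rss(cols):
    """Residual sum of squares of an OLS fit using the given ROI columns."""
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2)

selected, remaining = [], list(range(len(rois)))
for _ in range(2):                                # two forward-selection steps
    best = min(remaining, key=lambda c: rss(selected + [c]))
    selected.append(best)
    remaining.remove(best)

print([rois[c] for c in selected])                # strongest predictor enters first
```

In this synthetic example the ROI that actually generates the scores (pMTG) is selected first, mirroring the paper's "strongest predictor" logic.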
Journal of Cognitive Neuroscience (2018) 30 (10): 1549–1557.
Published: 01 October 2018
Models of speech production posit a role for the motor system, predominantly the posterior inferior frontal gyrus, in encoding complex phonological representations for speech production, at the phonemic, syllable, and word levels [Roelofs, A. A dorsal-pathway account of aphasic language production: The WEAVER++/ARC model. Cortex, 59(Suppl. C), 33–48, 2014; Hickok, G. Computational neuroanatomy of speech production. Nature Reviews Neuroscience, 13, 135–145, 2012; Guenther, F. H. Cortical interactions underlying the production of speech sounds. Journal of Communication Disorders, 39, 350–365, 2006]. However, phonological theory posits subphonemic units of representation, namely phonological features [Chomsky, N., & Halle, M. The sound pattern of English, 1968; Jakobson, R., Fant, G., & Halle, M. Preliminaries to speech analysis. The distinctive features and their correlates. Cambridge, MA: MIT Press, 1951], that specify independent articulatory parameters of speech sounds, such as place and manner of articulation. Therefore, motor brain systems may also incorporate phonological features into speech production planning units. Here, we add support for such a role with an fMRI experiment of word sequence production using a phonemic similarity manipulation. We adapted and modified the experimental paradigm of Oppenheim and Dell [Oppenheim, G. M., & Dell, G. S. Inner speech slips exhibit lexical bias, but not the phonemic similarity effect. Cognition, 106, 528–537, 2008; Oppenheim, G. M., & Dell, G. S. Motor movement matters: The flexible abstractness of inner speech. Memory & Cognition, 38, 1147–1160, 2010]. Participants silently articulated words cued by sequential visual presentation that varied in degree of phonological feature overlap in consonant onset position: high overlap (two shared phonological features, e.g., /r/ and /l/) or low overlap (one shared phonological feature, e.g., /r/ and /b/).
We found a significant repetition suppression effect in the left posterior inferior frontal gyrus, with increased activation for phonologically dissimilar words compared with similar words. These results suggest that phonemes, particularly phonological features, are part of the planning units of the motor speech system.
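The repetition suppression contrast reported above amounts to a paired comparison of ROI activation between the dissimilar and similar conditions across participants. The values below are synthetic illustrations of that test, not the study's data.

```python
# Paired-contrast sketch of a repetition suppression effect (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subj = 20
similar = rng.normal(1.0, 0.3, n_subj)                 # mean beta, high-overlap pairs
dissimilar = similar + rng.normal(0.25, 0.2, n_subj)   # release from suppression

t, p = stats.ttest_rel(dissimilar, similar)
print(f"t({n_subj - 1}) = {t:.2f}, p = {p:.4f}")       # dissimilar > similar
```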
Journal of Cognitive Neuroscience (2018) 30 (2): 234–255.
Published: 01 February 2018
Broca's area has long been implicated in sentence comprehension. Damage to this region is thought to be the central source of “agrammatic comprehension” in which performance is substantially worse (and near chance) on sentences with noncanonical word orders compared with canonical word order sentences (in English). This claim is supported by functional neuroimaging studies demonstrating greater activation in Broca's area for noncanonical versus canonical sentences. However, functional neuroimaging studies also have frequently implicated the anterior temporal lobe (ATL) in sentence processing more broadly, and recent lesion–symptom mapping studies have implicated the ATL and mid-temporal regions in agrammatic comprehension. This study investigates these seemingly conflicting findings in 66 left-hemisphere patients with chronic focal cerebral damage. Patients completed two sentence comprehension measures, sentence–picture matching and plausibility judgments. Patients with damage including Broca's area (but excluding the temporal lobe; n = 11) on average did not exhibit the expected agrammatic comprehension pattern—for example, their performance was >80% on noncanonical sentences in the sentence–picture matching task. Patients with ATL damage (n = 18) also did not exhibit an agrammatic comprehension pattern. Across our entire patient sample, the lesions of patients with agrammatic comprehension patterns in either task had maximal overlap in posterior superior temporal and inferior parietal regions. Using voxel-based lesion–symptom mapping, we find that lower performances on canonical and noncanonical sentences in each task are both associated with damage to a large left superior temporal–inferior parietal network including portions of the ATL, but not Broca's area.
Notably, however, response bias in plausibility judgments was significantly associated with damage to inferior frontal cortex, including gray and white matter in Broca's area, suggesting that the contribution of Broca's area to sentence comprehension may be related to task-related cognitive demands.
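The lesion-overlap step described above (finding where the lesions of patients showing the agrammatic pattern maximally coincide) reduces to summing binary lesion maps voxel-wise. The maps below are tiny synthetic illustrations.

```python
# Lesion-overlap map sketch: stack binary lesion maps of the patients who
# show the behavioral pattern and find the voxel of maximal overlap.
# All data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_patients, n_voxels = 12, 200
lesions = rng.random((n_patients, n_voxels)) < 0.15   # True = lesioned

overlap = lesions.sum(axis=0)                          # patients lesioned per voxel
peak = overlap.argmax()
print(f"maximal overlap: {overlap[peak]} of {n_patients} patients at voxel {peak}")
```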
Journal of Cognitive Neuroscience (2014) 26 (3): 606–620.
Published: 01 March 2014
Visual speech influences the perception of heard speech. A classic example of this is the McGurk effect, whereby an auditory /pa/ overlaid onto a visual /ka/ induces the fusion percept of /ta/. Recent behavioral and neuroimaging research has highlighted the importance of both articulatory representations and motor speech regions of the brain, particularly Broca's area, in audiovisual (AV) speech integration. Alternatively, AV speech integration may be accomplished by the sensory system through multisensory integration in the posterior STS. We assessed the claims regarding the involvement of the motor system in AV integration in two experiments: (i) examining the effect of articulatory suppression on the McGurk effect and (ii) determining if motor speech regions show an AV integration profile. The hypothesis regarding experiment (i) is that if the motor system plays a role in McGurk fusion, distracting the motor system through articulatory suppression should result in a reduction of McGurk fusion. The results of experiment (i) showed that articulatory suppression results in no such reduction, suggesting that the motor system is not responsible for the McGurk effect. The hypothesis of experiment (ii) was that if the brain activation to AV speech in motor regions (such as Broca's area) reflects AV integration, the profile of activity should reflect AV integration: AV > AO (auditory only) and AV > VO (visual only). The results of experiment (ii) demonstrate that motor speech regions do not show this integration profile, whereas the posterior STS does. Instead, activity in motor regions is task dependent. The combined results suggest that AV speech integration does not rely on the motor system.
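The integration criterion tested in experiment (ii) is a simple conjunction: a region counts as integrative only if its audiovisual response exceeds both unimodal responses. The region values below are hypothetical, chosen only to illustrate the two outcomes reported above.

```python
# Conjunction test for an audiovisual (AV) integration profile:
# AV > AO (auditory only) and AV > VO (visual only). Values are hypothetical.
def shows_av_integration(av: float, ao: float, vo: float) -> bool:
    """True if the region's AV response exceeds both unimodal responses."""
    return av > ao and av > vo

# Hypothetical mean activations (arbitrary units) for two regions:
posterior_sts = {"av": 1.8, "ao": 1.2, "vo": 1.0}
brocas_area = {"av": 1.1, "ao": 1.3, "vo": 0.6}

print(shows_av_integration(**posterior_sts))   # fits the integration profile
print(shows_av_integration(**brocas_area))     # fails: AV not > AO
```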
Journal of Cognitive Neuroscience (2012) 24 (9): 1896–1907.
Published: 01 September 2012
Frequency modulation (FM) is an acoustic feature of nearly all complex sounds. Directional FM sweeps are especially pervasive in speech, music, animal vocalizations, and other natural sounds. Although the existence of FM-selective cells in the auditory cortex of animals has been documented, evidence in humans remains equivocal. Here we used multivariate pattern analysis to identify cortical selectivity for direction of a multitone FM sweep. This method distinguishes one pattern of neural activity from another within the same ROI, even when overall level of activity is similar, allowing for direct identification of FM-specialized networks. Standard contrast analysis showed that despite robust activity in auditory cortex, no clusters of activity were associated with up versus down sweeps. Multivariate pattern analysis classification, however, identified two brain regions as selective for FM direction, the right primary auditory cortex on the supratemporal plane and the left anterior region of the superior temporal gyrus. These findings are the first to directly demonstrate the existence of FM direction selectivity in the human auditory cortex.
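Multivariate pattern classification of the kind described above trains a classifier on voxel activity patterns within an ROI and tests whether it can decode stimulus class on held-out trials. The sketch below uses a nearest-centroid classifier with leave-one-trial-out cross-validation on synthetic data; a real analysis would cross-validate across scanner runs on beta estimates.

```python
# Nearest-centroid MVPA sketch: decode sweep direction (up vs. down) from
# multivoxel patterns. Data are synthetic, with a weak signal in 5 voxels.
import numpy as np

rng = np.random.default_rng(3)
n_per, n_vox = 40, 50
up = rng.normal(0, 1, (n_per, n_vox))
up[:, :5] += 0.8                               # direction information in 5 voxels
down = rng.normal(0, 1, (n_per, n_vox))
X = np.vstack([up, down])
y = np.array([0] * n_per + [1] * n_per)        # 0 = up sweep, 1 = down sweep

correct = 0
for i in range(len(y)):                        # leave-one-trial-out
    train = np.arange(len(y)) != i
    c_up = X[train & (y == 0)].mean(axis=0)
    c_dn = X[train & (y == 1)].mean(axis=0)
    pred = 0 if np.linalg.norm(X[i] - c_up) < np.linalg.norm(X[i] - c_dn) else 1
    correct += int(pred == y[i])

acc = correct / len(y)
print(f"leave-one-out decoding accuracy: {acc:.2f}")
```

Above-chance accuracy indicates pattern-level information even when a univariate contrast (mean activity per condition) shows nothing, which is the key point of the abstract's method.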
Journal of Cognitive Neuroscience (2011) 23 (10): 2629–2631.
Published: 01 October 2011
Journal of Cognitive Neuroscience (2011) 23 (10): 2665–2674.
Published: 01 October 2011
Many models of spoken word recognition posit that the acoustic stream is parsed into phoneme level units, which in turn activate larger representations [McClelland, J. L., & Elman, J. L. The TRACE model of speech perception. Cognitive Psychology, 18, 1–86, 1986], whereas others suggest that larger units of analysis are activated without the need for segmental mediation [Greenberg, S. A multitier theoretical framework for understanding spoken language. In S. Greenberg & W. A. Ainsworth (Eds.), Listening to speech: An auditory perspective (pp. 411–433). Mahwah, NJ: Erlbaum, 2005; Klatt, D. H. Speech perception: A model of acoustic-phonetic analysis and lexical access. Journal of Phonetics, 7, 279–312, 1979; Massaro, D. W. Preperceptual images, processing time, and perceptual units in auditory perception. Psychological Review, 79, 124–145, 1972]. Identifying segmental effects in the brain's response to speech may speak to this question. For example, if such effects were localized to relatively early processing stages in auditory cortex, this would support a model of speech recognition in which segmental units are explicitly parsed out. In contrast, segmental processes that occur outside auditory cortex may indicate that alternative models should be considered. The current fMRI experiment manipulated the phonotactic frequency (PF) of words that were auditorily presented in short lists while participants performed a pseudoword detection task. PF is thought to modulate networks in which phoneme level units are represented. The present experiment identified activity in the left inferior frontal gyrus that was positively correlated with PF. No effects of PF were found in temporal lobe regions. We propose that the observed phonotactic effects during speech listening reflect the strength of the association between acoustic speech patterns and articulatory speech codes involving phoneme level units. 
On the basis of existing lesion evidence, we interpret the function of this auditory–motor association as playing a role primarily in production. These findings are consistent with the view that phoneme level units are not necessarily accessed during speech recognition.
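The phonotactic frequency analysis above amounts to correlating a per-word PF value with trial-wise activity in each region; a region "represents" phoneme-level units to the extent that its activity tracks PF. The values below are synthetic, and in a real fMRI analysis PF would enter a GLM as a parametric modulator rather than a raw correlation.

```python
# Parametric-modulation sketch: correlate phonotactic frequency (PF) with
# trial-wise ROI activity. Synthetic data; one ROI tracks PF, one does not.
import numpy as np

rng = np.random.default_rng(5)
n_trials = 60
pf = rng.random(n_trials)                         # normalized PF per word
ifg = 0.5 * pf + rng.normal(0, 0.3, n_trials)     # PF-modulated ROI (like left IFG)
stg = rng.normal(0, 0.3, n_trials)                # ROI with no PF effect

r_ifg = np.corrcoef(pf, ifg)[0, 1]
r_stg = np.corrcoef(pf, stg)[0, 1]
print(f"PF-tracking ROI r = {r_ifg:.2f}, control ROI r = {r_stg:.2f}")
```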
Journal of Cognitive Neuroscience (2011) 23 (7): 1664–1680.
Published: 01 July 2011
The role of Broca's area in sentence processing has been debated for the last 30 years. A central and still unresolved issue is whether Broca's area plays a specific role in some aspect of syntactic processing (e.g., syntactic movement, hierarchical structure building) or whether it serves a more general function on which sentence processing relies (e.g., working memory). This review examines the functional organization of Broca's area in regard to its contributions to sentence comprehension, verbal working memory, and other multimodal cognitive processes. We suggest that the data are consistent with the view that at least a portion of the contribution of Broca's area to sentence comprehension can be attributed to its role as a phonological short-term memory resource. Furthermore, our review leads us to conclude that there is no compelling evidence that there are sentence-specific processing regions within Broca's area.
Journal of Cognitive Neuroscience (2010) 22 (4): 632–639.
Published: 01 April 2010
Although it is generally acknowledged that at least two processing streams exist in the primate cortical auditory system, the function of the posterior dorsal stream is a topic of much debate. Recent studies have reported selective activation to auditory spatial change in portions of the human planum temporale (PT) relative to nonspatial stimuli such as pitch changes or complex acoustic patterns. However, previous work has suggested that the PT may be sensitive to another kind of nonspatial variable, namely, the number of auditory objects simultaneously presented in the acoustic signal. The goal of the present fMRI experiment was to assess whether any portion of the PT showed spatial selectivity relative to manipulations of the number of auditory objects presented. Spatially sensitive regions in the PT were defined by comparing activity associated with listening to an auditory object (speech from a single talker) that changed location with one that remained stationary. Activity within these regions was then examined during a nonspatial manipulation: increasing the number of objects (talkers) from one to three. The nonspatial manipulation modulated activity within the “spatial” PT regions. No region within the PT was found to be selective for spatial or object processing. We suggest that previously documented spatial sensitivity in the PT reflects auditory source separation using spatial cues rather than spatial processing per se.
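The ROI logic described above has two steps: define "spatial" voxels by a moving > stationary contrast, then test whether those same voxels are modulated by a nonspatial variable (number of talkers). The sketch below illustrates that logic with synthetic activations.

```python
# Two-step ROI selectivity test sketch (synthetic data): localize spatially
# sensitive voxels, then probe them with a nonspatial manipulation.
import numpy as np

rng = np.random.default_rng(6)
n_vox = 100
moving = rng.normal(1.2, 0.2, n_vox)                # talker changes location
stationary = rng.normal(1.0, 0.2, n_vox)            # talker stays put
spatial_mask = moving > stationary                  # "spatial" voxels

one_talker = rng.normal(1.0, 0.2, n_vox)
three_talkers = one_talker + 0.3                    # object-number effect
modulation = (three_talkers - one_talker)[spatial_mask].mean()
print(f"{spatial_mask.sum()} 'spatial' voxels; object-number modulation = {modulation:.2f}")
```

A nonzero modulation within the spatially defined voxels is the pattern the abstract reports, arguing against purely spatial selectivity in the PT.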
Journal of Cognitive Neuroscience (2009) 21 (7): 1229–1243.
Published: 01 July 2009
The discovery of mirror neurons in macaque frontal cortex has sparked a resurgence of interest in motor/embodied theories of cognition. This critical review examines the evidence in support of one of these theories, namely, that mirror neurons provide the basis of action understanding. It is argued that there is no evidence from monkey data that directly tests this theory, and evidence from humans makes a strong case against the position.
Journal of Cognitive Neuroscience (2008) 20 (12): 2198–2210.
Published: 01 December 2008
Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory–motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.
Journal of Cognitive Neuroscience (2003) 15 (5): 673–682.
Published: 01 May 2003
The concept of auditory–motor interaction pervades speech science research, yet the cortical systems supporting this interface have not been elucidated. Drawing on experimental designs used in recent work in sensory–motor integration in the cortical visual system, we used fMRI in an effort to identify human auditory regions with both sensory and motor response properties, analogous to single-unit responses in known visuomotor integration areas. The sensory phase of the task involved listening to speech (nonsense sentences) or music (novel piano melodies); the “motor” phase of the task involved covert rehearsal/humming of the auditory stimuli. A small set of areas in the superior temporal and temporal–parietal cortex responded both during the listening phase and the rehearsal/humming phase. A left lateralized region in the posterior Sylvian fissure at the parietal–temporal boundary, area Spt, showed particularly robust responses to both phases of the task. Frontal areas also showed combined auditory + rehearsal responsivity consistent with the claim that the posterior activations are part of a larger auditory–motor integration circuit. We hypothesize that this circuit plays an important role in speech development as part of the network that enables acoustic–phonetic input to guide the acquisition of language-specific articulatory-phonetic gestures; this circuit may play a role in analogous musical abilities. In the adult, this system continues to support aspects of speech production, and, we suggest, supports verbal working memory.
Journal of Cognitive Neuroscience (1997) 9 (2): 266–276.
Published: 01 March 1997
Language comprises a lexicon for storing words and a grammar for generating rule-governed forms. Evidence is presented that the lexicon is part of a temporal-parietal/medial-temporal “declarative memory” system and that grammatical rules are processed by a frontal/basal-ganglia “procedural” system. Patients produced past tenses of regular and novel verbs (looked and plagged), which require an -ed-suffixation rule, and irregular verbs (dug), which are retrieved from memory. Word-finding difficulties in posterior aphasia, and the general declarative memory impairment in Alzheimer's disease, led to more errors with irregular than regular and novel verbs. Grammatical difficulties in anterior aphasia, and the general impairment of procedures in Parkinson's disease, led to the opposite pattern. In contrast to the Parkinson's patients, who showed suppressed motor activity and rule use, Huntington's disease patients showed excess motor activity and rule use, underscoring a role for the basal ganglia in grammatical processing.