Carolyn McGettigan
1–6 of 6 results
Journal of Cognitive Neuroscience (2016) 28 (3): 483–500.
Published: 01 March 2016
Abstract
Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous fMRI while they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioral task. We demonstrate that competing speech is processed predominantly in the left hemisphere, within the same pathway as target speech, but is not treated equivalently within that stream, and that individuals who perform better on speech-in-noise tasks show greater activation of the left mid-posterior superior temporal gyrus. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; activity was found within right-lateralized frontal regions consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise.
Journal of Cognitive Neuroscience (2014) 26 (8): 1748–1763.
Published: 01 August 2014
Abstract
The melodic contour of speech forms an important perceptual aspect of tonal and nontonal languages and an important limiting factor on the intelligibility of speech heard through a cochlear implant. Previous work exploring the neural correlates of speech comprehension identified a left-dominant pathway in the temporal lobes supporting the extraction of an intelligible linguistic message, whereas the right anterior temporal lobe showed an overall preference for signals clearly conveying dynamic pitch information [Johnsrude, I. S., Penhune, V. B., & Zatorre, R. J. Functional specificity in the right human auditory cortex for perceiving pitch direction. Brain, 123, 155–163, 2000; Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400–2406, 2000]. The current study combined modulations of overall intelligibility (through vocoding and spectral inversion) with a manipulation of pitch contour (normal vs. falling) to investigate the processing of spoken sentences in functional MRI. Our overall findings replicate and extend those of Scott et al. [Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400–2406, 2000], where greater sentence intelligibility was predominantly associated with increased activity in the left STS, and the greatest response to normal sentence melody was found in right superior temporal gyrus. These data suggest a spatial distinction between brain areas associated with intelligibility and those involved in the processing of dynamic pitch information in speech. By including a set of complexity-matched unintelligible conditions created by spectral inversion, this is additionally the first study to report a fully factorial exploration of spectrotemporal complexity and spectral inversion as they relate to the neural processing of speech intelligibility. Perhaps surprisingly, there was little evidence for an interaction between the two factors; we discuss the implications for the processing of sound and speech in the dorsolateral temporal lobes.
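The vocoding and spectral-inversion manipulations mentioned in this abstract can be sketched in a few lines of signal processing. The example below is purely illustrative and is not the study's stimulus-generation code: the channel count, filter cutoffs, inversion point (fs/4), and use of numpy/scipy are all assumptions made for this sketch. Noise-vocoding replaces spectral detail with channel-wise amplitude envelopes imposed on band-limited noise; spectral inversion mirrors the spectrum, producing a signal matched in complexity but unintelligible.

```python
# Minimal sketch of noise-vocoding and spectral inversion (illustrative only;
# parameters are arbitrary and do not reproduce the study's stimuli).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=6, lo=100.0, hi=4000.0, env_cut=30.0):
    """Replace spectral detail with channel envelopes applied to band-limited noise."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
    noise = np.random.randn(len(x))
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        env = np.clip(sosfiltfilt(env_sos, np.abs(hilbert(band))), 0, None)  # amplitude envelope
        carrier = sosfiltfilt(band_sos, noise)                               # band-limited noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

def spectrally_invert(x):
    """Mirror the spectrum about fs/4 by modulating with a Nyquist-rate carrier."""
    return x * (-1.0) ** np.arange(len(x))

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))  # stand-in for speech
    vocoded = noise_vocode(x, fs)
    rotated = spectrally_invert(vocoded)
```

In practice the rotation point and filtering would be chosen to match the speech bandwidth; the sketch only shows the shape of the two manipulations.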
Journal of Cognitive Neuroscience (2013) 25 (11): 1875–1886.
Published: 01 November 2013
Abstract
Historically, the study of human identity perception has focused on faces, but the voice is also central to our expressions and experiences of identity [Belin, P., Fecteau, S., & Bedard, C. Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8, 129–135, 2004]. Our voices are highly flexible and dynamic; talkers speak differently, depending on their health, emotional state, and the social setting, as well as extrinsic factors such as background noise. However, to date, there have been no studies of the neural correlates of identity modulation in speech production. In the current fMRI experiment, we measured the neural activity supporting controlled voice change in adult participants performing spoken impressions. We reveal that deliberate modulation of vocal identity recruits the left anterior insula and inferior frontal gyrus, supporting the planning of novel articulations. Bilateral sites in posterior superior temporal/inferior parietal cortex and a region in right middle/anterior STS showed greater responses during the emulation of specific vocal identities than for impressions of generic accents. Using functional connectivity analyses, we describe roles for these three sites in their interactions with the brain regions supporting speech planning and production. Our findings mark a significant step toward understanding the neural control of vocal identity, with wider implications for the cognitive control of voluntary motor acts.
Journal of Cognitive Neuroscience (2012) 24 (3): 636–652.
Published: 01 March 2012
Abstract
The question of hemispheric lateralization of neural processes is pertinent to a range of subdisciplines of cognitive neuroscience. Language is often assumed to be left-lateralized in the human brain, but there has been a long-running debate about the underlying reasons for this. We addressed this problem with fMRI by identifying the neural responses to amplitude and spectral modulations in speech and how these interact with speech intelligibility to test previous claims for hemispheric asymmetries in acoustic and linguistic processes in speech perception. We used both univariate and multivariate analyses of the data, which enabled us both to identify the networks involved in processing these acoustic and linguistic factors and to test the significance of any apparent hemispheric asymmetries. We demonstrate bilateral activation of superior temporal cortex in response to speech-derived acoustic modulations in the absence of intelligibility. However, in a contrast of amplitude-modulated and spectrally modulated conditions that differed only in their intelligibility (where one was partially intelligible and the other unintelligible), we show a left-dominant pattern of activation in STS, inferior frontal cortex, and insula. Crucially, multivariate pattern analysis showed that there were significant differences between the left and the right hemispheres only in the processing of intelligible speech. This result shows that the left hemisphere dominance in linguistic processing does not arise because of low-level, speech-derived acoustic factors and that multivariate pattern analysis provides a method for unbiased testing of hemispheric asymmetries in processing.
Includes: Multimedia, Supplementary data
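The multivariate pattern analysis described in this abstract rests on a simple logic: if condition labels can be decoded from voxel patterns in one hemisphere but not the other, that difference speaks to lateralization. The sketch below illustrates only that logic; it uses synthetic data and scikit-learn as assumed stand-ins, not the authors' fMRI preprocessing, ROI definitions, or statistical testing.

```python
# Toy illustration of per-hemisphere MVPA decoding (synthetic data, not the study's pipeline).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def roi_decoding_accuracy(patterns, labels, folds=5):
    """Cross-validated classification of condition labels from voxel patterns."""
    clf = SVC(kernel="linear")
    return cross_val_score(clf, patterns, labels, cv=folds).mean()

# Synthetic "voxel patterns": 40 trials x 50 voxels per hemisphere ROI,
# two conditions (e.g. intelligible vs. unintelligible speech).
labels = np.repeat([0, 1], 20)
left_roi = rng.normal(size=(40, 50)) + 0.8 * labels[:, None]   # strong condition signal
right_roi = rng.normal(size=(40, 50)) + 0.1 * labels[:, None]  # weak condition signal

print("left hemisphere decoding:", roi_decoding_accuracy(left_roi, labels))
print("right hemisphere decoding:", roi_decoding_accuracy(right_roi, labels))
```

A real analysis would compare decoding accuracies across participants with an appropriate statistical test rather than on a single synthetic dataset.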
Journal of Cognitive Neuroscience (2011) 23 (12): 4038–4047.
Published: 01 December 2011
Abstract
Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However, no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds or whether motor and auditory areas respond in different ways to sounds. We used fMRI to investigate cortical responses to speech and nonspeech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: Anterior temporal lobe areas are sensitive to the acoustic or phonetic properties, whereas motor responses may show more generalized responses to the acoustic stimuli.
Journal of Cognitive Neuroscience (2011) 23 (4): 961–977.
Published: 01 April 2011
Abstract
This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural responses to these manipulations under conditions of covert rehearsal (Experiment 1). A left-dominant network of temporal and motor cortex showed increased activity for longer items, with motor cortex only showing greater activity concomitant with adding consonant clusters. An individual-differences analysis revealed a significant positive relationship between activity in the angular gyrus and the hippocampus, and accuracy on pseudoword repetition. As models of pWM stipulate that its neural correlates should be activated during both perception and production/rehearsal [Buchsbaum, B. R., & D'Esposito, M. The search for the phonological store: From loop to convolution. Journal of Cognitive Neuroscience, 20, 762–778, 2008; Jacquemot, C., & Scott, S. K. What is the relationship between phonological short-term memory and speech processing? Trends in Cognitive Sciences, 10, 480–486, 2006; Baddeley, A. D., & Hitch, G. Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 8, pp. 47–89). New York: Academic Press, 1974], we further assessed the effects of the two factors in a separate passive listening experiment (Experiment 2). In this experiment, the effect of the number of syllables was concentrated in posterior–medial regions of the supratemporal plane bilaterally, although there was no evidence of a significant response to added clusters. Taken together, the results identify the planum temporale as a key region in pWM; within this region, representations are likely to take the form of auditory or audiomotor “templates” or “chunks” at the level of the syllable [Papoutsi, M., de Zwart, J. A., Jansma, J. M., Pickering, M. J., Bednar, J. A., & Horwitz, B. From phonemes to articulatory codes: an fMRI study of the role of Broca's area in speech production. Cerebral Cortex, 19, 2156–2165, 2009; Warren, J. E., Wise, R. J. S., & Warren, J. D. Sounds do-able: auditory–motor transformations and the posterior temporal plane. Trends in Neurosciences, 28, 636–643, 2005; Griffiths, T. D., & Warren, J. D. The planum temporale as a computational hub. Trends in Neurosciences, 25, 348–353, 2002], whereas more lateral structures on the STG may deal with phonetic analysis of the auditory input [Hickok, G. The functional neuroanatomy of language. Physics of Life Reviews, 6, 121–143, 2009].