Emily B. Myers
Neurobiology of Language 1–42.
Published: 27 August 2024
Abstract
Research over the past two decades has documented the importance of sleep to language learning. Sleep has also been suggested to play a role in establishing new speech representations; however, the neural mechanisms underlying sleep-mediated effects on speech perception behavior are unknown. In this study, we trained monolingual English-speaking adults in the evening to perceive the Hindi dental vs. retroflex speech contrast. We examined the blood-oxygen-level-dependent (BOLD) signal using functional magnetic resonance imaging (fMRI) during perceptual tasks on both the trained talker and an untrained talker, shortly after training and again the next morning. We also employed diffusion tensor imaging to determine whether individual differences in white matter structure could predict variability in overnight consolidation. We found greater activity in cortical regions associated with language processing (e.g., the left insula) on the second day. Fractional anisotropy values in the anterior thalamic radiation and the uncinate fasciculus were associated with the magnitude of overnight change in perceptual behavior for the generalization (untrained) talker, after controlling for differences in sleep duration and initial learning. Our findings suggest that speech-perceptual information is subject to an overnight transfer of information to the cortex. Moreover, neural structure appears to be linked to individual differences in the efficiency of overnight consolidation.
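The tract–behavior analysis described above (fractional anisotropy predicting overnight change after controlling for sleep duration and initial learning) amounts to a partial correlation. The sketch below illustrates that statistical step in Python; it is not the authors' code, and all variable names and data are hypothetical placeholders.

```python
# Minimal sketch (not the authors' pipeline): relating a white-matter metric
# to overnight behavioral change while controlling for covariates, in the
# spirit of the partial-correlation analysis described in the abstract.
# All variables below are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # hypothetical number of participants

fa_uncinate = rng.normal(0.45, 0.03, n)     # FA in a tract of interest
overnight_change = rng.normal(0.0, 0.1, n)  # post-sleep minus pre-sleep accuracy
sleep_duration = rng.normal(7.0, 1.0, n)    # hours slept (covariate)
initial_learning = rng.normal(0.7, 0.1, n)  # end-of-training accuracy (covariate)

def residualize(y, covariates):
    """Return residuals of y after regressing out the covariates (with intercept)."""
    X = np.column_stack([np.ones(len(y)), *covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

covs = [sleep_duration, initial_learning]
r, p = stats.pearsonr(residualize(fa_uncinate, covs),
                      residualize(overnight_change, covs))
print(f"partial r = {r:.3f}, p = {p:.3f}")
```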
Neurobiology of Language (2024) 5 (3): 757–773.
Published: 15 August 2024
Abstract
Over the past few decades, research into the function of the cerebellum has expanded far beyond the motor domain. A growing number of studies are probing the role of specific cerebellar subregions, such as Crus I and Crus II, in higher-order cognitive functions, including receptive language processing. In the current fMRI study, we show evidence for the cerebellum’s sensitivity to variation in two well-studied psycholinguistic properties of words, lexical frequency and phonological neighborhood density, during passive, continuous listening to a podcast. To determine whether, and how, activity in the cerebellum correlates with these lexical properties, we modeled each word separately using an amplitude-modulated regressor time-locked to the onset of each word. At the group level, significant effects of both lexical properties fell in the expected cerebellar subregions: Crus I and Crus II. The BOLD signal correlated with variation in each lexical property, consistent with both language-specific and domain-general mechanisms. Activation patterns at the individual level likewise showed Crus I and Crus II as the most probable sites of the phonological neighborhood and lexical frequency effects, though activation was also seen in other lobules (especially for frequency). Although the exact cerebellar mechanisms engaged during speech and language processing are not yet evident, these findings highlight the cerebellum’s role in word-level processing during continuous listening.
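The amplitude-modulated regressor approach described in this abstract can be illustrated with a short sketch: each word contributes a stick function at its onset, scaled by a demeaned lexical property, and the resulting time series is convolved with a canonical HRF. This is a minimal illustration under assumed parameters (the TR, onsets, and property values are placeholders), not the authors' pipeline.

```python
# Minimal sketch (assumptions, not the authors' code): building an
# amplitude-modulated regressor in which a demeaned lexical property
# (e.g., log frequency) scales a stick at each word onset, convolved
# with a canonical double-gamma HRF.
import numpy as np
from scipy.stats import gamma

tr, n_scans = 1.5, 400                   # hypothetical acquisition parameters
word_onsets = np.array([3.2, 4.0, 5.1])  # seconds; placeholder onsets
log_freq = np.array([2.1, 4.5, 3.0])     # placeholder lexical property values

# Demean the modulator so the AM regressor captures variation around
# the mean word response rather than word presence itself.
modulator = log_freq - log_freq.mean()

# High-resolution stick function with amplitude = demeaned property value.
dt = 0.1
hires = np.zeros(int(n_scans * tr / dt))
hires[(word_onsets / dt).astype(int)] += modulator

# Canonical double-gamma HRF sampled at dt.
t = np.arange(0, 32, dt)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.sum()

# Convolve and downsample to one value per TR; this column would enter a
# first-level GLM alongside an unmodulated word-onset regressor.
am_regressor = np.convolve(hires, hrf)[: len(hires)][:: int(tr / dt)]
print(am_regressor.shape)  # (400,)
```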
Neurobiology of Language (2023) 4 (1): 145–177.
Published: 08 March 2023
Abstract
Though the right hemisphere has been implicated in talker processing, it is thought to play a minimal role in phonetic processing, at least relative to the left hemisphere. Recent evidence suggests that the right posterior temporal cortex may support learning of phonetic variation associated with a specific talker. In the current study, listeners heard a male talker and a female talker, one of whom produced an ambiguous fricative in /s/-biased lexical contexts (e.g., epi?ode) and the other in /ʃ/-biased contexts (e.g., friend?ip). Listeners in a behavioral experiment (Experiment 1) showed evidence of lexically guided perceptual learning, categorizing ambiguous fricatives in line with their previous experience. Listeners in an fMRI experiment (Experiment 2) showed differential phonetic categorization as a function of talker, allowing for an investigation of the neural basis of talker-specific phonetic processing, though they did not exhibit perceptual learning (likely due to characteristics of our in-scanner headphones). Searchlight analyses revealed that patterns of activation in the right superior temporal sulcus (STS) contained information about both who was talking and which phoneme they produced. We take this as evidence that talker information and phonetic information are integrated in the right STS. Functional connectivity analyses suggested that the process of conditioning phonetic identity on talker information depends on the coordinated activity of a left-lateralized phonetic processing system and a right-lateralized talker processing system. Overall, these results clarify the mechanisms through which the right hemisphere supports talker-specific phonetic processing.
Includes: Supplementary data
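A searchlight analysis of the kind reported in Experiment 2 scans a small sphere across the brain and asks, at each location, whether local activation patterns support above-chance decoding. The sketch below shows one common way to set this up with nilearn, here decoding talker identity (an analogous model could decode phoneme category). The data, labels, and parameters are random placeholders; this illustrates the general technique, not the authors' pipeline.

```python
# Minimal sketch (hypothetical data, not the authors' pipeline): searchlight
# decoding of talker identity from single-trial activation patterns.
import numpy as np
import nibabel as nib
from nilearn.decoding import SearchLight
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
n_trials = 40
# Placeholder 4D image of single-trial patterns, plus a whole-volume mask.
data = rng.normal(size=(10, 10, 10, n_trials)).astype("float32")
affine = np.eye(4) * 3.0
affine[3, 3] = 1.0  # 3 mm isotropic voxels
trial_imgs = nib.Nifti1Image(data, affine)
mask = nib.Nifti1Image(np.ones((10, 10, 10), dtype="uint8"), affine)
talker = np.repeat(["male", "female"], n_trials // 2)  # placeholder labels

searchlight = SearchLight(mask, radius=6.0, estimator="svc",
                          cv=StratifiedKFold(n_splits=5), n_jobs=1)
searchlight.fit(trial_imgs, talker)

# scores_ is a voxelwise map of cross-validated accuracy; spheres whose
# accuracy reliably exceeds chance (0.5 for two classes) carry talker
# information in their local activation patterns.
print(searchlight.scores_.shape)  # (10, 10, 10)
```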
Neurobiology of Language (2020) 1 (3): 339–364.
Published: 01 August 2020
Abstract
The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of debate for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the “dorsal stream”) to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but are instead sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and on acoustically matched, inarticulable nonspeech analogues. After participants reached comparable levels of proficiency with the two sets of stimuli, activation was measured with fMRI as they passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left and right hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area provided better information for decoding the articulable (speech) sounds than the inarticulable (sine wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation during novel sound learning.
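Decoding category membership from a region's activation patterns, as described above, is typically done with a cross-validated classifier applied to trial-by-voxel data. The following sketch shows that general recipe with scikit-learn on random placeholder data; it illustrates the technique, not the authors' analysis.

```python
# Minimal sketch (placeholder data, not the authors' analysis): cross-validated
# decoding of learned category membership from an ROI's voxel patterns,
# analogous to asking whether a region distinguishes the two trained categories.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200
roi_patterns = rng.normal(size=(n_trials, n_voxels))  # hypothetical trial x voxel betas
category = np.repeat([0, 1], n_trials // 2)           # two learned sound categories

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
scores = cross_val_score(clf, roi_patterns, category,
                         cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))

# Accuracy reliably above 0.5 would indicate the ROI carries category information.
print(f"mean decoding accuracy: {scores.mean():.2f}")
```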