Bharath Chandrasekaran
Journal Articles
Neurobiology of Language (2023) 1–62.
Published: 18 January 2023
Abstract
Speech processing often occurs amidst competing inputs from other modalities, e.g., listening to the radio while driving. We examined the extent to which dividing attention between auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech from acoustic to linguistic levels of representation. We recorded electroencephalographic (EEG) responses when human participants performed a challenging primary visual task, imposing low or high cognitive load while listening to audiobook stories as a secondary task. The two dual-task conditions were contrasted with an auditory single-task condition in which participants attended to stories while ignoring visual stimuli. Behaviorally, the high load dual-task condition was associated with lower speech comprehension accuracy relative to the other two conditions. We fitted multivariate temporal response function encoding models to predict EEG responses from acoustic and linguistic speech features at different representation levels, including auditory spectrograms and information-theoretic models of sublexical-, word-form-, and sentence-level representations. Neural tracking of most acoustic and linguistic features remained unchanged with increasing dual-task load, despite unambiguous behavioral and neural evidence of the high load dual-task condition being more demanding. Compared to the auditory single-task condition, dual-task conditions selectively reduced neural tracking of only some acoustic and linguistic features, mainly at latencies >200 ms, while earlier latencies were surprisingly unaffected. These findings indicate that behavioral effects of bimodal divided attention on continuous speech processing occur not due to impaired early sensory representations but likely at later cognitive processing stages. Crossmodal attention-related mechanisms may not be uniform across different speech processing levels.
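The mTRF encoding analysis described above amounts to regularized regression of the EEG on time-lagged stimulus features. Below is a minimal, illustrative sketch of such a forward model using ridge regression; the feature set, lag window, and regularization value are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch of a forward (encoding) temporal response function model:
# ridge regression of EEG on time-lagged stimulus features. All names, the
# lag window, and the ridge parameter are illustrative assumptions.
import numpy as np

def lag_matrix(stim, lags):
    """Stack one time-shifted copy of every stimulus feature per lag."""
    n_times, n_feats = stim.shape
    X = np.zeros((n_times, n_feats * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0      # zero out samples wrapped in from the end
        elif lag < 0:
            shifted[lag:] = 0
        X[:, i * n_feats:(i + 1) * n_feats] = shifted
    return X

def fit_trf(stim, eeg, fs, tmin=-0.1, tmax=0.5, alpha=1.0):
    """Predict each EEG channel from lagged stimulus features (ridge regression)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lag_matrix(stim, lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])     # regularized covariance
    W = np.linalg.solve(XtX, X.T @ eeg)            # (n_lags * n_feats) x n_channels
    return W.reshape(len(lags), stim.shape[1], eeg.shape[1]), lags / fs

# Toy example: random data standing in for a 16-band spectrogram and 64-channel EEG.
fs = 128
stim = np.random.randn(fs * 60, 16)
eeg = np.random.randn(fs * 60, 64)
trf, times = fit_trf(stim, eeg, fs)
# "Neural tracking" is then typically quantified as the correlation between
# held-out EEG and its prediction from the lagged design matrix (cross-validated).
```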
Journal Articles
Neurobiology of Language (2022) 3 (3): 441–468.
Published: 19 July 2022
Abstract
Envelope and temporal fine structure frequency-following responses (FFR_ENV and FFR_TFS) are scalp-recorded electrophysiological potentials that closely follow the periodicity of complex sounds such as speech. These signals have been established as important biomarkers in speech and learning disorders. However, despite important advances, it has remained challenging to map altered FFR_ENV and FFR_TFS to altered processing in specific brain regions. Here we explore the utility of a deconvolution approach based on the assumption that FFR_ENV and FFR_TFS reflect the linear superposition of responses that are triggered by the glottal pulse in each cycle of the fundamental frequency (F0 responses). We tested the deconvolution method by applying it to FFR_ENV and FFR_TFS recorded from rhesus monkeys in response to human speech and click trains with time-varying pitch patterns. Our analyses show that F0_ENV responses could be measured with high signal-to-noise ratio and featured several spectro-temporally and topographically distinct components that likely reflect the activation of the brainstem (<5 ms; 200–1000 Hz), midbrain (5–15 ms; 100–250 Hz), and cortex (15–35 ms; ∼90 Hz). In contrast, F0_TFS responses contained only one spectro-temporal component that likely reflected activity in the midbrain. In summary, our results support the notion that the latencies of F0 components map meaningfully onto successive processing stages. This opens the possibility that pathologically altered FFR_ENV or FFR_TFS may be linked to altered F0_ENV or F0_TFS components, and from there to specific processing stages and, ultimately, to spatially targeted interventions.
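The deconvolution logic described above can be made concrete with a short sketch: if the FFR is modeled as a pulse train convolved with a single F0 response kernel, that kernel can be recovered by least-squares deconvolution. The pulse times, kernel length, and small ridge term below are illustrative assumptions, not the study's settings.

```python
# Hedged sketch of least-squares deconvolution of an FFR into an F0 response
# kernel, assuming the FFR is a linear superposition of identical responses
# triggered at each glottal pulse.
import numpy as np

def deconvolve_f0_response(ffr, pulse_times, fs, kernel_ms=40.0, alpha=1e-3):
    """Recover kernel h such that ffr ≈ (pulse train) convolved with h."""
    n = len(ffr)
    k = int(kernel_ms / 1000 * fs)                 # kernel length in samples
    pulses = np.zeros(n)
    pulses[(np.asarray(pulse_times) * fs).astype(int)] = 1.0
    # Column j of the design matrix is the pulse train delayed by j samples.
    X = np.column_stack([np.concatenate([np.zeros(j), pulses[:n - j]]) for j in range(k)])
    XtX = X.T @ X + alpha * np.eye(k)              # ridge term for numerical stability
    h = np.linalg.solve(XtX, X.T @ ffr)
    return h, np.arange(k) / fs                    # kernel and its latency axis (s)

# Toy example: a 100 Hz pulse train convolved with a damped 300 Hz sinusoid.
fs = 10000
t = np.arange(0, 1.0, 1 / fs)
tau = np.arange(0, 0.04, 1 / fs)
true_h = np.exp(-150 * tau) * np.sin(2 * np.pi * 300 * tau)
pulse_times = np.arange(0.0, 0.95, 0.01)           # glottal pulses every 10 ms
pulses = np.zeros(len(t))
pulses[(pulse_times * fs).astype(int)] = 1.0
ffr = np.convolve(pulses, true_h)[:len(t)] + 0.01 * np.random.randn(len(t))
est_h, latencies = deconvolve_f0_response(ffr, pulse_times, fs)
```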
Journal Articles
Neurobiology of Language (2021) 2 (2): 280–307.
Published: 09 June 2021
Abstract
Learning non-native phonetic categories in adulthood is an exceptionally challenging task, characterized by large interindividual differences in learning speed and outcomes. The neurobiological mechanisms underlying these interindividual differences in learning efficacy are not fully understood. Here we examine the extent to which training-induced neural representations of non-native Mandarin tone categories in English listeners (n = 53) become increasingly similar to those of native listeners (n = 33) who acquired these categories early in infancy. We assess the extent to which the neural similarities in representational structure between non-native learners and native listeners are robust neuromarkers of interindividual differences in learning success. Using intersubject neural representational similarity (IS-NRS) analysis and predictive modeling on two functional magnetic resonance imaging datasets, we examined the neural representational mechanisms underlying speech category learning success. Following training, neural representations that were significantly similar to those of native listeners emerged in learners' brain regions mediating speech perception, and the extent of these emerging similarities significantly predicted learning speed and outcome. The predictive power of IS-NRS exceeded that of models based on other neural representational measures. Furthermore, the neural representations underlying successful learning were multidimensional but cost-efficient in nature. The degree of the emergent native-similar neural representations was closely related to the robustness of neural sensitivity to feedback in the frontostriatal network. These findings provide important insights into the experience-dependent representational neuroplasticity underlying successful speech learning in adulthood and could be leveraged in designing individualized feedback-based training paradigms that maximize learning efficacy.
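At its core, the IS-NRS analysis described above compares each learner's representational geometry with that of the native-listener group. The sketch below illustrates one generic way to compute such a score; the array shapes, correlation distance, and Spearman scoring are assumptions for illustration, not necessarily the authors' exact pipeline.

```python
# Hedged sketch of an intersubject representational similarity score: build a
# representational dissimilarity matrix (RDM) over tone categories for each
# subject, then score a learner by the similarity of their RDM to the mean
# native-listener RDM. Shapes and metrics are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condition x voxel activation patterns -> condensed RDM (1 - Pearson r)."""
    return pdist(patterns, metric="correlation")

def is_nrs(learner_patterns, native_patterns_list):
    """Spearman similarity between a learner's RDM and the mean native RDM."""
    native_rdm = np.mean([rdm(p) for p in native_patterns_list], axis=0)
    rho, _ = spearmanr(rdm(learner_patterns), native_rdm)
    return rho

# Toy example: 4 Mandarin tone categories x 200 voxels per subject.
rng = np.random.default_rng(0)
natives = [rng.standard_normal((4, 200)) for _ in range(33)]
learner = rng.standard_normal((4, 200))
print(is_nrs(learner, natives))   # higher values = more native-like representation
```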
Includes: Supplementary data