Search results: 1–7 of 7 articles by David Poeppel
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2020) 32 (10): 1975–1983.
Published: 01 October 2020
Figures: 4
Abstract
Understanding speech in noise is a fundamental challenge for speech comprehension. This perceptual demand is amplified in a second language: It is a common experience in bars, train stations, and other noisy environments that degraded signal quality severely compromises second language comprehension. Through a novel design, paired with a carefully selected participant profile, we independently assessed signal-driven and knowledge-driven contributions to the brain bases of first versus second language processing. We were able to dissociate the neural processes driven by the speech signal from the processes that come from speakers' knowledge of their first versus second languages. The neurophysiological data show that, in combination with impaired access to top–down linguistic information in the second language, bilinguals' difficulty in understanding second language speech in noisy conditions arises from a failure to successfully perform a basic, low-level process: cortical entrainment to speech signals above the syllabic level.
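Cortical entrainment to speech is commonly quantified as spectral coherence between the speech amplitude envelope and the recorded neural signal. The sketch below is not the paper's analysis pipeline; it is a minimal, self-contained illustration of that general idea, with all signals simulated and all parameter values (sampling rate, band limits) assumed for the example.

```python
import numpy as np
from scipy.signal import coherence

# Illustrative only: measure "entrainment" as coherence between a speech
# amplitude envelope and a neural signal. All data here are simulated,
# and the parameters are assumptions, not the study's actual settings.
fs = 100.0                      # Hz, assumed downsampled rate
t = np.arange(0, 60, 1 / fs)    # 60 s of data

rng = np.random.default_rng(0)
# Simulated speech envelope with ~4 Hz (syllabic-rate) modulation
envelope = 1 + np.sin(2 * np.pi * 4 * t) + 0.3 * rng.standard_normal(t.size)
# Simulated neural signal that partially tracks the envelope
neural = 0.6 * envelope + rng.standard_normal(t.size)

f, Cxy = coherence(envelope, neural, fs=fs, nperseg=512)
syllabic = Cxy[(f >= 3) & (f <= 6)].mean()   # mean coherence, 3-6 Hz band
```

Because the simulated neural signal tracks the envelope, coherence peaks near the 4 Hz modulation rate and falls off elsewhere, which is the signature an entrainment analysis looks for.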
Journal of Cognitive Neuroscience (2019) 31 (3): 401–411.
Published: 01 March 2019
Abstract
How does the human brain support real-world learning? We used wireless electroencephalography to collect neurophysiological data from a group of 12 senior high school students and their teacher during regular biology lessons. Six scheduled classes over the course of the semester were organized such that class materials were presented using different teaching styles (videos and lectures), and students completed a multiple-choice quiz after each class to measure their retention of that lesson's content. Both students' brain-to-brain synchrony and their content retention were higher for videos than lectures across the six classes. Brain-to-brain synchrony between the teacher and students varied as a function of student engagement as well as teacher likeability: Students who reported greater social closeness to the teacher showed higher brain-to-brain synchrony with the teacher, but this was only the case for lectures—that is, when the teacher was an integral part of the content presentation. Furthermore, students' retention of the class content correlated with student–teacher closeness, but not with brain-to-brain synchrony. These findings expand on existing social neuroscience research by showing that social factors such as perceived closeness are reflected in brain-to-brain synchrony in real-world group settings and can predict cognitive outcomes such as students' academic performance.
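One simple way to operationalize "brain-to-brain synchrony" is the average windowed correlation between two people's EEG time series. The study's actual measure differs; the sketch below is only an illustration of the concept, with simulated signals and assumed parameter values throughout.

```python
import numpy as np

# Hypothetical sketch: quantify "brain-to-brain synchrony" as the mean
# Pearson correlation between two EEG time series over short windows.
# This is NOT the study's actual measure; all names and parameters here
# are assumptions for illustration.
def pairwise_synchrony(sig_a, sig_b, fs, win_s=1.0):
    """Mean Pearson correlation over non-overlapping windows."""
    win = int(fs * win_s)
    n = min(len(sig_a), len(sig_b)) // win
    rs = []
    for i in range(n):
        a = sig_a[i * win:(i + 1) * win]
        b = sig_b[i * win:(i + 1) * win]
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))

fs = 128                        # Hz, an assumed wireless-EEG sampling rate
rng = np.random.default_rng(1)
shared = rng.standard_normal(fs * 30)        # 30 s of shared stimulus drive
student = shared + 0.5 * rng.standard_normal(shared.size)
teacher = shared + 0.5 * rng.standard_normal(shared.size)
sync = pairwise_synchrony(student, teacher, fs)
```

The stronger the common stimulus-driven component in the two signals, the higher this synchrony score; two independent noise signals would hover near zero.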
Journal of Cognitive Neuroscience (2015) 27 (2): 352–364.
Published: 01 February 2015
Figures: 7
Abstract
A critical subroutine of self-monitoring during speech production is to detect any deviance between expected and actual auditory feedback. Here we investigated the associated neural dynamics using MEG recording in mental-imagery-of-speech paradigms. Participants covertly articulated the vowel /a/; their own (individually recorded) speech was played back, with parametric manipulation using four levels of pitch shift, crossed with four levels of onset delay. A nonmonotonic function was observed in early auditory responses when the onset delay was shorter than 100 msec: Suppression was observed for normal playback, but enhancement for pitch-shifted playback; however, the magnitude of enhancement decreased at the largest level of pitch shift that was out of pitch range for normal conversation, as suggested in two behavioral experiments. No difference was observed among different types of playback when the onset delay was longer than 100 msec. These results suggest that the prediction suppresses the response to normal feedback, which mediates source monitoring. When auditory feedback does not match the prediction, an “error term” is generated, which underlies deviance detection. We argue that, based on the observed nonmonotonic function, a frequency window (addressing spectral difference) and a time window (constraining temporal difference) jointly regulate the comparison between prediction and feedback in speech.
Journal of Cognitive Neuroscience (2013) 25 (7): 1020–1036.
Published: 01 July 2013
Figures: 7
Abstract
The computational role of efference copies is widely appreciated in action and perception research, but their properties for speech processing remain murky. We tested the functional specificity of auditory efference copies using magnetoencephalography recordings in an unconventional pairing: We used a classical cognitive manipulation (mental imagery—to elicit internal simulation and estimation) with a well-established experimental paradigm (one-shot repetition—to assess neuronal specificity). Participants performed tasks that differentially implicated internal prediction of sensory consequences (overt speaking, imagined speaking, and imagined hearing) and their modulatory effects on the perception of an auditory (syllable) probe were assessed. Remarkably, the neural responses to overt syllable probes varied systematically, both in terms of directionality (suppression, enhancement) and temporal dynamics (early, late), as a function of the preceding covert mental imagery adaptor. We show, in the context of a dual-pathway model, that internal simulation shapes perception in a context-dependent manner.
Evidence for Early Morphological Decomposition: Combining Masked Priming with Magnetoencephalography
Journal of Cognitive Neuroscience (2011) 23 (11): 3366–3379.
Published: 01 November 2011
Figures: 6
Abstract
Are words stored as morphologically structured representations? If so, when during word recognition are morphological pieces accessed? Recent masked priming studies support models that assume early decomposition of (potentially) morphologically complex words. The electrophysiological evidence, however, is inconsistent. We combined masked morphological priming with magnetoencephalography (MEG), a technique particularly adept at indexing processes involved in lexical access. The latency of an MEG component peaking, on average, 220 msec post-onset of the target in left occipito-temporal brain regions was found to be sensitive to the morphological prime–target relationship under masked priming conditions in a visual lexical decision task. Shorter latencies for related than unrelated conditions were observed both for semantically transparent (cleaner–CLEAN) and opaque (corner–CORN) prime–target pairs, but not for prime–target pairs with only an orthographic relationship (brothel–BROTH). These effects are likely to reflect a prelexical level of processing where form-based representations of stems and affixes are represented and are in contrast to models positing no morphological structure in lexical representations. Moreover, we present data regarding the transitional probability from stem to affix in a post hoc comparison, which suggests that this factor may modulate early morphological decomposition, particularly for opaque words. The timing of a robust MEG component sensitive to the morphological relatedness of prime–target pairs can be used to further understand the neural substrates and the time course of lexical processing.
Journal of Cognitive Neuroscience (2011) 23 (3): 552–569.
Published: 01 March 2011
Figures: 10
Abstract
Innate auditory sensitivities and familiarity with the sounds of language give rise to clear influences of phonemic categories on adult perception of speech. With few exceptions, current models endorse highly left-hemisphere-lateralized mechanisms responsible for the influence of phonemic category on speech perception, based primarily on results from functional imaging and brain-lesion studies. Here we directly test the hypothesis that the right hemisphere does not engage in phonemic analysis. By using fMRI to identify cortical sites sensitive to phonemes in both word and pronounceable nonword contexts, we find evidence that right-hemisphere phonemic sensitivity is limited to a lexical context. We extend the interpretation of these fMRI results through the study of an individual with a left-hemisphere lesion who is right-hemisphere reliant for initial acoustic and phonetic analysis of speech. This individual's performance revealed that the right hemisphere alone was insufficient to allow for typical phonemic category effects but did support the processing of gradient phonetic information in lexical contexts. Taken together, these findings confirm previous claims that the right temporal cortex does not play a primary role in phoneme processing, but they also indicate that lexical context may modulate the involvement of a right hemisphere largely tuned for less abstract dimensions of the speech signal.
Journal of Cognitive Neuroscience (2000) 12 (6): 1038–1055.
Published: 01 November 2000
Abstract
The studies presented here use an adapted oddball paradigm to show evidence that representations of discrete phonological categories are available to the human auditory cortex. Brain activity was recorded using a 37-channel biomagnetometer while eight subjects listened passively to synthetic speech sounds. In the phonological condition, which contrasted stimuli from an acoustic /dæ/-/tæ/ continuum, a magnetic mismatch field (MMF) was elicited in a sequence of stimuli in which phonological categories occurred in a many-to-one ratio, but no acoustic many-to-one ratio was present. In order to isolate the contribution of phonological categories to the MMF responses, the acoustic parameter of voice onset time, which distinguished standard and deviant stimuli, was also varied within the standard and deviant categories. No MMF was elicited in the acoustic condition, in which the acoustic distribution of stimuli was identical to the first experiment, but the many-to-one distribution of phonological categories was removed. The design of these studies makes it possible to demonstrate the all-or-nothing property of phonological category membership. This approach contrasts with a number of previous studies of phonetic perception using the mismatch paradigm, which have demonstrated the graded property of enhanced acoustic discrimination at or near phonetic category boundaries.
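The key feature of the oddball design described above is that the phonological categories stand in a many-to-one (standard-to-deviant) ratio while the acoustic parameter (voice onset time) also varies within each category, so no single acoustic token serves as the standard. The sketch below illustrates how such a stimulus sequence could be generated; the specific VOT values and probabilities are invented for the example, not taken from the study.

```python
import random

# Hypothetical sketch of the oddball sequence described above: the
# standard category (/dae/) outnumbers the deviant (/tae/) many-to-one,
# while voice onset time (VOT) varies WITHIN each category so that no
# single acoustic token is itself the standard. All token values and
# probabilities here are invented for illustration.
random.seed(0)

standard_vots = [10, 20, 30]    # ms, all perceived as /dae/ (assumed)
deviant_vots = [50, 60, 70]     # ms, all perceived as /tae/ (assumed)

def make_sequence(n_trials, p_deviant=0.15):
    """Build a trial list of (category, VOT) pairs."""
    seq = []
    for _ in range(n_trials):
        if random.random() < p_deviant:
            seq.append(("t", random.choice(deviant_vots)))
        else:
            seq.append(("d", random.choice(standard_vots)))
    return seq

seq = make_sequence(400)
n_dev = sum(1 for cat, _ in seq if cat == "t")
# The category ratio is many-to-one even though each individual VOT
# token recurs relatively rarely.
```

Removing the many-to-one category ratio while keeping the same acoustic token distribution yields the control ("acoustic") condition, in which the abstract reports that no mismatch field was elicited.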