Jonas Obleser
Journal of Cognitive Neuroscience (2022) 34 (8): 1447–1466.
Published: 01 July 2022
Abstract
Time implicitly shapes cognition, but time is also explicitly represented, for instance in the form of durations. Parsimoniously, the brain could use the same mechanisms for implicit and explicit timing. Yet the evidence has been equivocal, revealing both joint and separate signatures of timing. Here, we directly compared implicit and explicit timing using magnetoencephalography, whose temporal resolution allows the different stages of the timing process to be investigated. Implicit temporal predictability was induced in an auditory paradigm by a manipulation of the foreperiod. Participants received two consecutive task instructions: discriminate pitch (an indirect measure of implicit timing) or duration (a direct measure of explicit timing). The results show that the human brain efficiently extracts the implicit temporal statistics of sensory environments to enhance behavioral and neural responses to auditory stimuli, but these temporal predictions did not improve explicit timing. In both tasks, attentional orienting in time during predictive foreperiods was indexed by an increase in alpha power over visual and parietal areas. Furthermore, pretarget induced beta power in sensorimotor and parietal areas increased during implicit compared with explicit timing, in line with the suggested role of beta oscillations in temporal prediction. Interestingly, no distinct neural dynamics emerged when participants explicitly paid attention to time, compared with implicit timing. Our work thus indicates that implicit timing shapes behavioral and sensory responses automatically and is reflected in oscillatory neural dynamics, whereas the translation of implicit temporal statistics into explicit durations remains somewhat inconclusive, possibly because of the more abstract nature of that task.
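The pretarget "induced" power mentioned above is, by convention, the band-limited power that remains after the phase-locked (evoked) response has been removed. Below is a minimal Python sketch of that convention, not the authors' analysis code; the epoch layout, the 15–25 Hz beta band edges, and the function name induced_band_power are illustrative assumptions:

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def induced_band_power(trials, fs, band=(15.0, 25.0)):
    # trials: (n_trials, n_samples) single-channel epochs (assumed layout);
    # band: (low, high) in Hz; 15-25 Hz is one conventional beta range.
    # Subtract the trial average to remove the phase-locked evoked part.
    residual = trials - trials.mean(axis=0, keepdims=True)
    # Zero-phase band-pass filter each trial.
    sos = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)],
                 btype="band", output="sos")
    filtered = sosfiltfilt(sos, residual, axis=1)
    # Instantaneous power from the analytic signal, averaged over trials.
    return (np.abs(hilbert(filtered, axis=1)) ** 2).mean(axis=0)

# Hypothetical usage: 100 one-second epochs sampled at 1000 Hz.
rng = np.random.default_rng(0)
beta_power = induced_band_power(rng.standard_normal((100, 1000)), fs=1000.0)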
Journal of Cognitive Neuroscience (2020) 32 (8): 1562–1576.
Published: 01 August 2020
Abstract
Anticipation of an impending stimulus shapes the state of the sensory systems, optimizing neural and behavioral responses. Here, we studied the role of brain oscillations in mediating spatial and temporal anticipations. Because spatial attention and temporal expectation are often associated with visual and auditory processing, respectively, we directly contrasted the visual and auditory modalities and asked whether these anticipatory mechanisms are similar in both domains. We recorded the magnetoencephalogram in healthy human participants performing an auditory and visual target discrimination task, in which cross-modal cues provided both temporal and spatial information with regard to upcoming stimulus presentation. Motivated by prior findings, we were specifically interested in the delta (1–3 Hz) and alpha (8–13 Hz) band oscillatory states in anticipation of target presentation and their impact on task performance. Our findings support the view that spatial attention has a stronger effect in the visual domain, whereas temporal expectation effects are more prominent in the auditory domain. For the spatial attention manipulation, we found a typical pattern of alpha lateralization in the visual system, which correlated with response speed. Providing a rhythmic temporal cue led to increased postcue synchronization of low-frequency rhythms, although this effect was more broadband in nature, suggesting a general phase reset rather than frequency-specific neural entrainment. In addition, we observed delta-band synchronization with a frontal topography, which correlated with performance, especially in the auditory task. Combined, these findings suggest that spatial and temporal anticipations operate via a top–down modulation of the power and phase of low-frequency oscillations, respectively.
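The "postcue synchronization of low-frequency rhythms" reported here is commonly quantified as inter-trial phase coherence (ITC): the length of the mean resultant phase vector across trials, ITC(t) = |mean over trials of exp(i * phase(t))|. A minimal sketch of that measure, not the authors' pipeline; the delta band edges follow the abstract (1–3 Hz), while the epoch layout and the function name inter_trial_coherence are illustrative assumptions:

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def inter_trial_coherence(trials, fs, band=(1.0, 3.0)):
    # trials: (n_trials, n_samples) single-channel epochs (assumed layout);
    # epochs must be long enough to resolve the low band edge.
    # Band-pass in the delta range, then take the instantaneous phase.
    sos = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)],
                 btype="band", output="sos")
    phase = np.angle(hilbert(sosfiltfilt(sos, trials, axis=1), axis=1))
    # Length of the mean phase vector: 0 = random, 1 = perfectly aligned.
    return np.abs(np.exp(1j * phase).mean(axis=0))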
Journal of Cognitive Neuroscience (2020) 32 (2): 212–225.
Published: 01 February 2020
Abstract
In challenging listening conditions, closing the eyes is a strategy with intuitive appeal to improve auditory attention and perception. On the neural level, closing the eyes increases the power of alpha oscillations (∼10 Hz), which are a prime signature of auditory attention. Here, we test whether eye closure benefits neural and behavioral signatures of auditory attention and perception. Participants (n = 22) attended to one of two alternating streams of spoken numbers with open or closed eyes in a darkened chamber. After each trial, participants indicated whether probes had been among the to-be-attended or to-be-ignored numbers. In the EEG, states of relatively high versus low alpha power accompanied the presentation of attended versus ignored numbers. Importantly, eye closure increased not only the overall level of absolute alpha power but also its attentional modulation. Behaviorally, however, neither perceptual sensitivity nor response criterion was affected by eye closure. To further examine whether this behavioral null result would conceptually replicate in a simple auditory detection task, a follow-up experiment required participants (n = 19) to detect a near-threshold target tone in noise. As in the main experiment, our results provide evidence for the absence of any difference in perceptual sensitivity and criterion for open versus closed eyes. In summary, we demonstrate that the modulation of the human alpha rhythm by auditory attention increases when participants close their eyes. However, our results speak against the widely held belief that eye closure per se improves listening behavior.
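Perceptual sensitivity and response criterion here refer to the standard signal detection indices d' and c, computed from hit and false-alarm rates. A minimal sketch under that standard definition, not the study's analysis code; the function name, the counts, and the log-linear correction are illustrative assumptions:

from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    # d' = z(hit rate) - z(false-alarm rate): perceptual sensitivity.
    # c = -(z(hit rate) + z(false-alarm rate)) / 2: response criterion.
    # Add 0.5 to each cell (log-linear rule) to avoid infinite z-scores
    # when a rate would otherwise be exactly 0 or 1.
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hr, z_far = norm.ppf(hr), norm.ppf(far)
    return z_hr - z_far, -(z_hr + z_far) / 2.0

# Hypothetical counts for one participant and condition.
d_prime, criterion = sdt_indices(38, 12, 9, 41)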
Journal of Cognitive Neuroscience (2015) 27 (5): 988–1000.
Published: 01 May 2015
Abstract
The flexible allocation of attention enables us to perceive and behave successfully despite irrelevant distractors. How do acoustic challenges influence this allocation of attention, and to what extent is this ability preserved in normally aging listeners? Younger and healthy older participants performed a masked auditory number comparison while EEG was recorded. To vary selective attention demands, we manipulated the perceptual separability of spoken digits from a masking talker by varying acoustic detail (temporal fine structure). Listening conditions were adjusted individually to equalize stimulus audibility as well as the overall level of performance across participants. Accuracy increased and response times decreased with more acoustic detail; this response-time benefit was stronger in the older group. The onset of the distracting speech masker triggered a prominent contingent negative variation (CNV) in the EEG. Notably, CNV magnitude decreased parametrically with increasing acoustic detail in both age groups. Within identical levels of acoustic detail, larger CNV magnitude was associated with improved accuracy. Across age groups, neuropsychological markers further linked early CNV magnitude directly to individual attentional capacity. These results demonstrate for the first time that, in a demanding listening task, instantaneous acoustic conditions guide the allocation of attention. Moreover, such basic neural mechanisms of preparatory attention allocation seem preserved in healthy aging, despite impending sensory decline.
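CNV magnitude is conventionally measured as the mean amplitude of the averaged, baseline-corrected EEG in a window preceding the anticipated target, with more negative values indicating a larger CNV. A minimal sketch of that convention, not the study's pipeline; the window, epoch layout, and function name are illustrative assumptions:

import numpy as np

def cnv_magnitude(epochs, fs, window=(-0.5, 0.0), epoch_start=-1.0):
    # epochs: (n_trials, n_samples) baseline-corrected epochs at one channel,
    # with times expressed relative to target onset (assumed layout).
    erp = epochs.mean(axis=0)  # average over trials -> event-related potential
    i0 = int(round((window[0] - epoch_start) * fs))
    i1 = int(round((window[1] - epoch_start) * fs))
    # Mean amplitude in the pre-target window; more negative = larger CNV.
    return erp[i0:i1].mean()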
Journal of Cognitive Neuroscience (2013) 25 (8): 1383–1395.
Published: 01 August 2013
Abstract
Under adverse listening conditions, speech comprehension profits from the expectancies that listeners derive from the semantic context. However, the neurocognitive mechanisms of this semantic benefit are unclear: How are expectancies formed from context and adjusted as a sentence unfolds over time under various degrees of acoustic degradation? In an EEG study, we modified auditory signal degradation by applying noise-vocoding (severely degraded: four-band; moderately degraded: eight-band; and clear speech). Orthogonal to that, we manipulated the extent of expectancy: strong or weak semantic context (±con) and context-based typicality of the sentence-final word (high or low: ±typ). This allowed calculation of two distinct effects of expectancy on the N400 component of the evoked potential. The sentence-final N400 effect was taken as an index of the neural effort of automatic word-into-context integration; it varied in peak amplitude and latency with signal degradation and was not reliably observed in response to severely degraded speech. Under clear speech conditions in a strong context, typical and untypical sentence completions seemed to fulfill the neural prediction, as indicated by N400 reductions. In response to moderately degraded signal quality, however, the formed expectancies appeared more specific: Only typical (+con +typ), but not less typical (+con −typ), context–word combinations led to a decrease in the N400 amplitude. The results show that adverse listening "narrows," rather than broadens, the expectancies about the perceived speech signal: limiting the perceptual evidence forces the neural system to rely on signal-driven expectancies, rather than more abstract expectancies, while a sentence unfolds over time.
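Noise-vocoding, as used here, preserves the slow amplitude envelope of speech within a small number of frequency bands while replacing the temporal fine structure with noise; fewer bands mean more severe degradation (four vs. eight bands above). A minimal sketch of a generic noise vocoder, not the authors' stimulus code; the logarithmic band spacing, the 30 Hz envelope cutoff, and the 100–8000 Hz range are illustrative assumptions:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=8000.0):
    speech = np.asarray(speech, dtype=float)
    # Band edges spaced logarithmically across the assumed frequency range.
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    # Low-pass filter that smooths the rectified band signal into an envelope.
    env_sos = butter(2, 30.0 / (fs / 2), btype="low", output="sos")
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo / (fs / 2), hi / (fs / 2)],
                     btype="band", output="sos")
        band = sosfiltfilt(sos, speech)            # analysis band of the speech
        env = sosfiltfilt(env_sos, np.abs(band))   # rectify + low-pass = envelope
        # Modulate band-limited noise with the speech envelope and sum the bands.
        out += sosfiltfilt(sos, noise) * np.clip(env, 0.0, None)
    return out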
Journal of Cognitive Neuroscience (2004) 16 (1): 31–39.
Published: 01 January 2004
Abstract
This study further elucidates the determinants of vowel perception in the human auditory cortex. The vowel inventory of a given language can be classified on the basis of phonological features, which are closely linked to acoustic properties. A cortical representation of speech sounds based on these phonological features might explain how speech recognition achieves high accuracy despite the immense variance in the acoustic signal. We investigated the timing and mapping of the N100m elicited by 42 tokens of seven natural German vowels varying along the phonological features tongue height (corresponding to the frequency of the first formant) and place of articulation (corresponding to the frequencies of the second and third formants). Auditory evoked fields were recorded using a 148-channel whole-head magnetometer while subjects performed target vowel detection tasks. Source location differences appeared to be driven by place of articulation: Vowels with mutually exclusive place of articulation features, namely coronal and dorsal, elicited separate centers of activation along the posterior–anterior axis. Additionally, the time course of activation, as reflected in the N100m peak latency, distinguished between vowel categories, especially when the spatial distinctiveness of cortical activation was low. In sum, the results suggest that N100m latency and source location, as well as their interaction, reflect properties of speech stimuli that correspond to abstract phonological features.
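The N100m peak latency analyzed here is simply the time of the largest deflection of the averaged evoked field in a window around 100 ms after stimulus onset. A minimal sketch of that measurement, not the authors' source analysis; the 80–140 ms search window, the epoch layout, and the function name are illustrative assumptions:

import numpy as np

def n100m_peak_latency(evoked, fs, epoch_start=-0.1, search=(0.08, 0.14)):
    # evoked: 1-D trial-averaged field at one channel or source (assumed
    # layout); epoch_start is the time of the first sample relative to
    # vowel onset, in seconds.
    i0 = int(round((search[0] - epoch_start) * fs))
    i1 = int(round((search[1] - epoch_start) * fs))
    peak = i0 + np.argmax(np.abs(evoked[i0:i1]))  # largest deflection in window
    return epoch_start + peak / fs                # latency in seconds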