Andreas Widmann
1–5 of 5
Journal Articles
Journal of Cognitive Neuroscience (2025) 1–17.
Published: 10 May 2025
Salient, Unexpected Omissions of Sounds Can Involuntarily Distract Attention
Abstract
Salient, unexpected, and task-irrelevant sounds can act as distractors by capturing attention away from a task. Consequently, a performance impairment (e.g., prolonged RTs) is typically observed along with a pupil dilation response (PDR) and the P3a ERP component. Previous results showed that RTs to task-relevant visual stimuli were also prolonged following unexpected sound omissions. However, it was unclear whether this was due to the absence of the sound's warning effect or to distraction caused by the violation of a sensory prediction. In our paradigm, participants initiated a trial through a button press that elicited either a regular sound (80%), a deviant sound (10%), or no sound (10%). Thereafter, a digit was presented visually, and the participant had to classify it as even or odd. To dissociate warning and distraction effects, we additionally included a control condition in which a button press never generated a sound, and therefore no sound was expected. Results show that, compared with expected events, unexpected deviants and omissions led to prolonged RTs (distraction effect), an enlarged PDR, and a P3a-like ERP effect. Moreover, sound events, compared with no-sound events, yielded faster RTs (warning effect), a larger PDR, and an increased P3a. Overall, we observed a co-occurrence of warning and distraction effects. This suggests that not only unexpected sounds but also unexpected sound omissions can act as salient distractors. This finding supports theories claiming that involuntary attention is based on prediction violation.
Journal Articles
Journal of Cognitive Neuroscience (2025) 37 (2): 443–463.
Published: 01 February 2025
Semantic Context Effects in Picture and Sound Naming: Evidence from Event-related Potentials and Pupillometric Data
Abstract
When a picture is repeatedly named in the context of semantically related pictures (homogeneous context), responses are slower than when the picture is repeatedly named in the context of unrelated pictures (heterogeneous context). This semantic interference effect in blocked-cyclic naming plays an important role in devising theories of word production. Wöhner, Mädebach, and Jescheniak [Wöhner, S., Mädebach, A., & Jescheniak, J. D. Naming pictures and sounds: Stimulus type affects semantic context effects. Journal of Experimental Psychology: Human Perception and Performance, 47, 716–730, 2021] have shown that the effect is substantially larger when participants name environmental sounds than when they name pictures. We investigated possible reasons for this difference, using EEG and pupillometry. The behavioral data replicated the findings of Wöhner and colleagues. ERPs were more positive in the homogeneous compared with the heterogeneous context over central electrode locations between 140 and 180 msec and between 250 and 350 msec for picture naming, and between 250 and 350 msec for sound naming, presumably reflecting semantic interference during semantic and lexical processing. The later component was of similar size for pictures and sounds. ERPs were more negative in the homogeneous compared with the heterogeneous context over frontal electrode locations between 400 and 600 msec only for sounds. The pupillometric data showed a stronger pupil dilation in the homogeneous compared with the heterogeneous context only for sounds. The amplitudes of the late ERP negativity and of the pupil dilation predicted naming latencies for sounds in the homogeneous context. The latency of these effects indicates that the difference in semantic interference between picture and sound naming arises at later, presumably postlexical processing stages closer to articulation. We suggest that the processing of the auditory stimuli interferes with phonological response preparation and self-monitoring, leading to enhanced semantic interference.
Journal Articles
Journal of Cognitive Neuroscience (2022) 34 (8): 1397–1415.
Published: 01 July 2022
Hearing “Birch” Hampers Saying “Duck”—An Event-Related Potential Study on Phonological Interference in Immediate and Delayed Word Production
Abstract
When speakers name a picture (e.g., “duck”), a distractor word phonologically related to an alternative name (e.g., “birch” related to “bird”) slows down naming responses compared with an unrelated distractor word. This interference effect, obtained with the picture–word interference task, is assumed to reflect the phonological coactivation of close semantic competitors and is critical for evaluating contemporary models of word production. In this study, we determined the ERP signature of this effect in immediate and delayed versions of the picture–word interference task. ERPs revealed a differential processing of related and unrelated distractors: an early (305–436 msec) and a late (537–713 msec) negativity for related as compared with unrelated distractors. In the behavioral data, the interference effect was found only in immediate naming, whereas its ERP signature was also present in delayed naming. The time window of the earlier ERP effect suggests that the behavioral interference effect indeed emerges at a phonological processing level, whereas the functional significance of the later ERP effect remains unclear. The finding of a robust ERP correlate of phonological coactivation might facilitate future research on lexical processing in word production.
Journal Articles
Journal of Cognitive Neuroscience (2019) 31 (12): 1917–1932.
Published: 01 December 2019
Action Intention-based and Stimulus Regularity-based Predictions: Same or Different?
Abstract
We act on the environment to produce desired effects, but we also adapt to environmental demands by learning what to expect next, based on experience: How do action-based predictions and sensory predictions relate to each other? We explore this by implementing a self-generation oddball paradigm, in which participants performed random sequences of left and right button presses to produce frequent standard and rare deviant tones. By manipulating the action–tone association as well as the likelihood of one button press over the other, we compare ERP effects evoked by the intention to produce a specific tone, by tone regularity, and by both intention and regularity. We show that the N1b and Tb components of the N1 response are modulated by violations of tone regularity only. However, violations of action intention as well as of regularity elicit MMN responses, which occur similarly in all three conditions. Regardless of whether the predictions at sensory levels were based on intention, regularity, or both, the tone deviance was further and equally well detected at a hierarchically higher processing level, as reflected in similar P3a effects across conditions. We did not observe additive prediction errors when intention and regularity were violated concurrently, suggesting the two integrate despite presumably having independent generators. Even though they are often discussed as individual prediction sources in the literature, this study is, to our knowledge, the first to compare them directly. Finally, these results show how, in the context of action, our brain can easily switch between top–down intention-based expectations and bottom–up regularity cues to efficiently predict future events.
Journal Articles
Journal of Cognitive Neuroscience (2019) 31 (8): 1110–1125.
Published: 01 August 2019
Presentation Probability of Visual–Auditory Pairs Modulates Visually Induced Auditory Predictions
Abstract
Predictions about forthcoming auditory events can be established on the basis of preceding visual information. Sounds incongruent with predictive visual information have been found to elicit an enhanced negative ERP in the latency range of the auditory N1 compared with physically identical sounds preceded by congruent visual information. This so-called incongruency response (IR) is interpreted as reflecting a reduced prediction error for predicted sounds at a sensory level. The main purpose of this study was to examine the impact of probability manipulations on the IR. We manipulated the probability with which particular congruent visual–auditory pairs were presented (83/17 vs. 50/50 condition). This manipulation led to two conditions with different strengths of association between visual and auditory information. A visual cue was presented either above or below a fixation cross and was followed by either a high- or low-pitched sound. In 90% of trials, the visual cue correctly predicted the subsequent sound. In one condition, one of the sounds was presented more frequently (83% of trials), whereas in the other condition both sounds were presented with equal probability (50% of trials). Therefore, in the 83/17 condition, one congruent combination of visual cue and corresponding sound was presented more frequently than the other combinations, presumably leading to a stronger visual–auditory association. A significant IR for unpredicted compared with predicted but otherwise identical sounds was observed only in the 83/17 condition, not in the 50/50 condition, where both congruent visual cue–sound combinations were presented with equal probability. We also tested whether the processing of the prediction violation depends on the task relevance of the visual information. To this end, we contrasted a visual–auditory matching task with a pitch discrimination task. The task affected only behavioral performance, not the prediction error signals. Results suggest that the generation of visual-to-auditory sensory predictions is facilitated by a strong association between the visual cue and the predicted sound (83/17 condition) but is not influenced by the task relevance of the visual information.