Erich Schröger: 1–18 of 18 journal articles
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2022) 34 (8): 1397–1415.
Published: 01 July 2022
When speakers name a picture (e.g., “duck”), a distractor word phonologically related to an alternative name (e.g., “birch,” related to “bird”) slows down naming responses compared with an unrelated distractor word. This interference effect, obtained with the picture–word interference task, is assumed to reflect the phonological coactivation of close semantic competitors and is critical for evaluating contemporary models of word production. In this study, we determined the ERP signature of this effect in immediate and delayed versions of the picture–word interference task. ERPs revealed differential processing of related and unrelated distractors: an early (305–436 msec) and a late (537–713 msec) negativity for related as compared with unrelated distractors. In the behavioral data, the interference effect was found only in immediate naming, whereas its ERP signature was also present in delayed naming. The time window of the earlier ERP effect suggests that the behavioral interference effect indeed emerges at a phonological processing level, whereas the functional significance of the later ERP effect is not yet clear. The finding of a robust ERP correlate of phonological coactivation might facilitate future research on lexical processing in word production.
Journal of Cognitive Neuroscience (2019) 31 (12): 1917–1932.
Published: 01 December 2019
We act on the environment to produce desired effects, but we also adapt to environmental demands by learning what to expect next, based on experience: How do action-based predictions and sensory predictions relate to each other? We explore this by implementing a self-generation oddball paradigm, where participants performed random sequences of left and right button presses to produce frequent standard and rare deviant tones. By manipulating the action–tone association as well as the likelihood of one button press over the other, we compare ERP effects evoked by the intention to produce a specific tone, by tone regularity, and by both intention and regularity. We show that the N1b and Tb components of the N1 response are modulated by violations of tone regularity only. However, violations of action intention as well as of regularity elicit MMN responses, which occur similarly in all three conditions. Regardless of whether the predictions at sensory levels were based on intention, regularity, or both, the tone deviance was further, and equally well, detected at a hierarchically higher processing level, as reflected in similar P3a effects between conditions. We did not observe additive prediction errors when intention and regularity were violated concurrently, suggesting that the two integrate despite presumably having independent generators. Even though they are often discussed as separate prediction sources in the literature, this study is, to our knowledge, the first to compare them directly. Finally, these results show how, in the context of action, our brain can easily switch between top–down intention-based expectations and bottom–up regularity cues to efficiently predict future events.
Journal of Cognitive Neuroscience (2019) 31 (8): 1110–1125.
Published: 01 August 2019
Predictions about forthcoming auditory events can be established on the basis of preceding visual information. Sounds incongruent with predictive visual information have been found to elicit an enhanced negative ERP in the latency range of the auditory N1 compared with physically identical sounds preceded by congruent visual information. This so-called incongruency response (IR) is interpreted as a reduced prediction error for predicted sounds at a sensory level. The main purpose of this study was to examine the impact of probability manipulations on the IR. We manipulated the probability with which particular congruent visual–auditory pairs were presented (83/17 vs. 50/50 condition), yielding two conditions with different strengths of association between visual and auditory information. A visual cue was presented either above or below a fixation cross and was followed by either a high- or low-pitched sound. In 90% of trials, the visual cue correctly predicted the subsequent sound. In one condition, one of the sounds was presented more frequently (83% of trials), whereas in the other condition both sounds were presented with equal probability (50% of trials). Therefore, in the 83/17 condition, one congruent combination of visual cue and corresponding sound was presented more frequently than the other combinations, presumably leading to a stronger visual–auditory association. A significant IR for unpredicted compared with predicted but otherwise identical sounds was observed only in the 83/17 condition, not in the 50/50 condition, where both congruent cue–sound combinations were presented with equal probability. We also tested whether the processing of the prediction violation depends on the task relevance of the visual information by contrasting a visual–auditory matching task with a pitch discrimination task. The task affected only behavioral performance, not the prediction error signals. These results suggest that the generation of visual-to-auditory sensory predictions is facilitated by a strong association between the visual cue and the predicted sound (83/17 condition) but is not influenced by the task relevance of the visual information.
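The probability structure of this design can be made concrete with a small simulation. The sketch below is illustrative only; the function name and the cue/sound coding are assumptions, not taken from the study. It draws trials in which the visual cue predicts the sound on 90% of trials, while `p_high_sound` switches between the 83/17 and 50/50 sound-probability conditions.

```python
import random

def make_trials(n_trials, p_high_sound, p_congruent=0.9, seed=1):
    # Illustrative trial generator (hypothetical names, not the authors' code):
    # the cue position predicts the upcoming sound on p_congruent of trials;
    # p_high_sound = 0.83 or 0.5 selects the 83/17 vs. 50/50 condition.
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        sound = 'high' if rng.random() < p_high_sound else 'low'
        congruent = rng.random() < p_congruent
        predicted_cue = 'above' if sound == 'high' else 'below'
        cue = predicted_cue if congruent else ('below' if predicted_cue == 'above' else 'above')
        trials.append((cue, sound, congruent))
    return trials

trials = make_trials(1000, p_high_sound=0.83)
```

In the 83/17 condition, one congruent cue–sound pairing dominates the sequence, which is the asymmetry the study links to a stronger visual–auditory association.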
Journal of Cognitive Neuroscience (2016) 28 (8): 1127–1138.
Published: 01 August 2016
Behavioral control is influenced not only by learning from the choices made and the rewards obtained but also by “what might have happened,” that is, inference about unchosen options and their fictive outcomes. Substantial progress has been made in understanding the neural signatures of direct learning from choices that are actually made and their associated rewards via reward prediction errors (RPEs). However, electrophysiological correlates of abstract inference in decision-making are less clear. One seminal theory suggests that the so-called feedback-related negativity (FRN), an ERP peaking 200–300 msec after a feedback stimulus at frontocentral sites of the scalp, codes RPEs. Hitherto, the FRN has been predominantly related to a so-called “model-free” RPE: The difference between the observed outcome and what had been expected. Here, by means of computational modeling of choice behavior, we show that individuals employ abstract, “double-update” inference on the task structure by concurrently tracking values of chosen stimuli (associated with observed outcomes) and unchosen stimuli (linked to fictive outcomes). In a parametric analysis, model-free RPEs as well as their modification because of abstract inference were regressed against single-trial FRN amplitudes. We demonstrate that components related to abstract inference uniquely explain variance in the FRN beyond model-free RPEs. These findings advance our understanding of the FRN and its role in behavioral adaptation. This might further the investigation of disturbed abstract inference, as proposed, for example, for psychiatric disorders, and its underlying neural correlates.
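The contrast between a model-free RPE and its "double-update" modification can be sketched in a few lines. This is a generic illustration under assumed settings (two options, greedy choice, anticorrelated fictive outcomes), not the authors' fitted model:

```python
import random

def simulate_double_update(n_trials=200, alpha=0.3, p_reward=0.8, seed=0):
    # Two-option task: update the chosen value with the observed outcome
    # and the unchosen value with the fictive (counterfactual) outcome.
    rng = random.Random(seed)
    q = [0.5, 0.5]          # value estimates for the two stimuli
    rpes = []
    for _ in range(n_trials):
        choice = 0 if q[0] >= q[1] else 1             # greedy choice, for simplicity
        p = p_reward if choice == 0 else 1 - p_reward
        reward = 1.0 if rng.random() < p else 0.0
        fictive = 1.0 - reward                        # assumed anticorrelated outcomes
        rpe = reward - q[choice]                      # model-free RPE (candidate FRN regressor)
        q[choice] += alpha * rpe                      # chosen-value update
        q[1 - choice] += alpha * (fictive - q[1 - choice])  # "double update" of the unchosen value
        rpes.append(rpe)
    return q, rpes

values, rpes = simulate_double_update()
```

Regressing single-trial FRN amplitudes on both the model-free RPE and the double-update terms is what lets the study separate their contributions.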
Journal of Cognitive Neuroscience (2015) 27 (5): 988–1000.
Published: 01 May 2015
The flexible allocation of attention enables us to perceive and behave successfully despite irrelevant distractors. How do acoustic challenges influence this allocation of attention, and to what extent is this ability preserved in normally aging listeners? Younger and healthy older participants performed a masked auditory number comparison while EEG was recorded. To vary selective attention demands, we manipulated perceptual separability of spoken digits from a masking talker by varying acoustic detail (temporal fine structure). Listening conditions were adjusted individually to equalize stimulus audibility as well as the overall level of performance across participants. Accuracy increased, and response times decreased with more acoustic detail. The decrease in response times with more acoustic detail was stronger in the group of older participants. The onset of the distracting speech masker triggered a prominent contingent negative variation (CNV) in the EEG. Notably, CNV magnitude decreased parametrically with increasing acoustic detail in both age groups. Within identical levels of acoustic detail, larger CNV magnitude was associated with improved accuracy. Across age groups, neuropsychological markers further linked early CNV magnitude directly to individual attentional capacity. Results demonstrate for the first time that, in a demanding listening task, instantaneous acoustic conditions guide the allocation of attention. Second, such basic neural mechanisms of preparatory attention allocation seem preserved in healthy aging, despite impending sensory decline.
Journal of Cognitive Neuroscience (2015) 27 (4): 798–818.
Published: 01 April 2015
Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face–voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face–voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 time window showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face–voice processing and discuss evidence supporting the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective, one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.
Journal of Cognitive Neuroscience (2014) 26 (7): 1481–1489.
Published: 01 July 2014
One of the functions of the brain is to predict the sensory consequences of our own actions. In auditory processing, self-initiated sounds evoke a smaller brain response than passive exposure to the same sound sequence. Previous work suggests that this response attenuation reflects a predictive mechanism that differentiates the sensory consequences of one's own actions from other sensory input, which seems to form the basis for the sense of agency (recognizing oneself as the agent of the movement). This study addresses the question of whether the attenuation of brain responses to self-initiated sounds can be explained by brain activity involved in movement planning rather than movement execution. We recorded ERPs in response to sounds initiated by button presses. In one condition, participants moved a finger to press the button voluntarily, whereas in another condition, we initiated a similar, but involuntary, finger movement by stimulating the corresponding region of the primary motor cortex with TMS. For involuntary movements, no movement intention (and no feeling of agency) could be formed; thus, no motor plans were available to the forward model. A portion of the brain response evoked by the sounds, the N1–P2 complex, was reduced in amplitude following voluntary, self-initiated movements, but not following movements initiated by motor cortex stimulation. Our findings demonstrate that movement intention and the corresponding feeling of agency determine the sensory attenuation of brain responses to self-initiated sounds. The present results support the assumptions of a predictive internal forward model operating before primary motor cortex activation.
Journal of Cognitive Neuroscience (2012) 24 (3): 698–706.
Published: 01 March 2012
Forward predictions are crucial in motor action (e.g., catching a ball or being tickled) but may also apply to sensory or cognitive processes (e.g., listening to distorted speech or to a foreign accent). According to the “internal forward model,” the cerebellum generates predictions about the somatosensory consequences of movements. These predictions simulate motor processes and prepare the respective cortical areas for anticipated sensory input. Currently, there is very little evidence that a cerebellar forward model also applies to other sensory domains. In the current study, we address this question by examining the role of the cerebellum when auditory stimuli are anticipated as a consequence of a motor act. We applied an N100 suppression paradigm and compared the ERP response to self-initiated sounds with the ERP response to externally produced sounds. We hypothesized that the sensory consequences of self-initiated sounds are precisely predicted and should lead to an N100 suppression compared with externally produced sounds. Moreover, if the cerebellum is involved in the generation of a motor-to-auditory forward model, patients with focal cerebellar lesions should not display an N100 suppression effect. Compared with healthy controls, patients showed a largely attenuated N100 suppression effect. The current results suggest that the cerebellum forms not only motor-to-somatosensory predictions but also motor-to-auditory predictions. This extends the cerebellar forward model to other sensory domains such as audition.
Journal of Cognitive Neuroscience (2010) 22 (6): 1124–1139.
Published: 01 June 2010
For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). 
These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities. In combination with a wide range of auditory MMN studies, the present study highlights the critical role of sensory systems in automatically encoding sequential regularities when modeling the world.
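The sequential analysis described above reduces to labeling each trial by what immediately precedes it. A minimal sketch, simplified in that labels are assigned from the immediately preceding stimulus only (the function name and label strings are assumptions for illustration):

```python
def label_trials(sequence):
    # 'S' = standard, 'D' = deviant; label each trial by its predecessor,
    # following the first/second deviant and first/second standard scheme.
    labels = []
    for i, stim in enumerate(sequence):
        if i == 0:
            labels.append('sequence_start')   # no predecessor to classify against
            continue
        prev = sequence[i - 1]
        if stim == 'D':
            labels.append('first_deviant' if prev == 'S' else 'second_deviant')
        else:
            labels.append('first_standard' if prev == 'D' else 'second_standard')
    return labels
```

Under the regularity-violation hypothesis, trials labeled `first_standard` (a standard right after a deviant) should also elicit a visual MMN, whereas `second_standard` trials should not, which is the pattern the study reports.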
Journal of Cognitive Neuroscience (2010) 22 (6): 1179–1188.
Published: 01 June 2010
There is an ongoing debate about whether visual object representations can be formed outside the focus of voluntary attention. Recently, implicit behavioral measures have suggested that grouping processes can occur for task-irrelevant visual stimuli, thus supporting theories of preattentive object formation (e.g., Lamy, D., Segal, H., & Ruderman, L. Grouping does not require attention. Perception and Psychophysics, 68, 17–31, 2006; Russell, C., & Driver, J. New indirect measures of “inattentive” visual grouping in a change-detection task. Perception and Psychophysics, 67, 606–623, 2005). We developed an ERP paradigm that allows testing for visual grouping when neither the objects nor their constituents are related to the participant's task. Our paradigm is based on the visual mismatch negativity ERP component, which is elicited by stimuli deviating from a regular stimulus sequence even when the stimuli are ignored. Our stimuli consisted of four pairs of colored discs that served as objects. These objects were presented isochronously while participants were engaged in a task related to the continuously presented fixation cross. Occasionally, two color deviances occurred simultaneously, either within the same object or across two different objects. We found significant ERP differences for same- versus different-object deviances, supporting the notion that forming visual object representations by grouping can occur outside the focus of voluntary attention. Our behavioral experiment, in which participants responded to color deviances (this time the discs, but again not the objects, were task relevant), also showed that object status matters. Our results stress the importance of early grouping processes for structuring the perceptual world.
Journal of Cognitive Neuroscience (2009) 21 (1): 155–168.
Published: 01 January 2009
Setting perceptual expectations can be based on different sources of information that determine which functional networks will be involved in implementing preparatory top–down influences and dealing with situations in which expectations are violated. The goal of the present study was to investigate and directly compare brain activations triggered by violating expectations within two different task contexts. In the serial prediction task, participants monitored ordered perceptual sequences for predefined sequential deviants. In contrast, the target detection task entailed a presentation of stimuli which had to be monitored for predefined nonsequential deviants. Detection of sequential deviants triggered an increase of activity in premotor and cerebellar components of the “standard” sequencing network and activations in additional frontal areas initially not involved in sequencing. This pattern of activity reflects the detection of a mismatch between the expected and presented stimuli, updating of the underlying sequence representation (i.e., forward model), and elaboration of the violation. In contrast, target detection elicited activations in posterior temporal and parietal areas, reflecting an increase in perceptual processing evoked by the nonsequential deviant. The obtained results suggest that distinct functional networks involved in detecting deviants in different contexts reflect the origin and the nature of expectations being violated.
Journal of Cognitive Neuroscience (2007) 19 (10): 1664–1677.
Published: 01 October 2007
Traditional auditory oddball paradigms imply the brain's ability to encode regularities but are not optimal for investigating the process of regularity establishment. In the present study, a dynamic experimental protocol was developed that simulates a more realistic auditory environment with changing regularities. The dynamic sequences were included in a distraction paradigm in order to study regularity extraction and application. Subjects discriminated the duration of sequentially presented tones. Irrelevant to this task, the tones repeated or changed in frequency according to a pattern unknown to the subject. When frequency repetitions were broken by a deviating tone, behavioral distraction (a prolonged reaction time in the duration discrimination task) was elicited. Moreover, event-related brain potential components indicated deviance detection (mismatch negativity), involuntary attention switches (P3a), and attentional reorientation. These results suggest that regularities were extracted from the dynamic stimulation and used to predict forthcoming stimuli. The effects were already observed with deviants occurring after as few as two presentations of a standard frequency, that is, deviants violating a just-emerging rule. Effects of regularity violation strengthened with the number of standard repetitions. Control stimuli comprising no regularity revealed that the observed effects were due to both improvements in standard processing (benefits of regularity establishment) and deteriorations in deviant processing (costs of regularity violation). Thus, regularities are exploited in two different ways: for the efficient processing of regularity-conforming events as well as for the detection of nonconforming, presumably important events. The present results underline the brain's flexibility in adapting to environmental demands.
Journal of Cognitive Neuroscience (2005) 17 (11): 1704–1713.
Published: 01 November 2005
The effects of familiarity on auditory change detection, based on auditory sensory memory representations, were investigated by presenting oddball sequences of sounds while participants ignored the auditory stimuli. Stimulus sequences were composed of sounds that were familiar and sounds that were made unfamiliar by playing the same sounds backward. The roles of frequently presented stimuli (standards) and infrequently presented ones (deviants) were fully crossed. Deviants elicited the mismatch negativity component of the event-related brain potential. We found an enhancement in detecting changes when deviant sounds appeared among familiar standard sounds compared with when they were delivered among unfamiliar standards. Familiarity with the deviant sounds also enhanced the change-detection process. We suggest that tuning to familiar items sets up preparatory processes that affect change detection in familiar sound sequences.
Journal of Cognitive Neuroscience (2003) 15 (8): 1149–1159.
Published: 15 November 2003
A common stylistic element of Western tonal music is the change of key within a musical sequence (known as modulation in musical terms). The aim of the present study was to investigate neural correlates of the cognitive processing of modulations with event-related brain potentials. Participants listened to sequences of chords that were infrequently modulating. Modulating chords elicited distinct effects in the event-related brain potentials: an early right anterior negativity reflecting the processing of a violation of musical regularities and a late frontal negativity taken to reflect processes of harmonic integration. Additionally, modulations elicited a tonic negative potential suggested to reflect cognitive processes characteristic for the processing of tonal modulations, namely, the restructuring of the “hierarchy of harmonic stability” (which specifies musical expectations), presumably entailing working memory operations. Participants were “nonmusicians”; results thus support the hypothesis that nonmusicians have a sophisticated (implicit) knowledge about musical regularities.
Children Processing Music: Electric Brain Responses Reveal Musical Competence and Gender Differences
Journal of Cognitive Neuroscience (2003) 15 (5): 683–693.
Published: 01 May 2003
Numerous studies have investigated physiological correlates of the processing of musical information in adults. How these correlates develop during childhood is poorly understood. In the present study, we measured event-related electric brain potentials elicited in 5- and 9-year-old children while they listened to (major–minor tonal) music. Stimuli were chord sequences, infrequently containing harmonically inappropriate chords. Our results demonstrate that the degree of (in)appropriateness of the chords modified the brain responses in both groups according to music-theoretical principles. This suggests that 5-year-old children already process music according to a well-established cognitive representation of the major–minor tonal system and according to music-syntactic regularities. Moreover, we show that, in contrast to adults, an early negative brain response was left-predominant in boys, whereas it was bilateral in girls, indicating a gender difference in children's music processing and revealing that children process music with a hemispheric weighting different from that of adults. Because children, in contrast to adults, process music in the same hemispheres as they process language, the results indicate that children process music and language more similarly than adults do. This finding might support the notion of a common origin of music and language in the human brain, and it concurs with findings demonstrating the importance of the musical features of speech for the acquisition of language.
The Role of Large-Scale Memory Organization in the Mismatch Negativity Event-Related Brain Potential
Journal of Cognitive Neuroscience (2001) 13 (1): 59–71.
Published: 01 January 2001
The mismatch negativity (MMN) component of event-related brain potentials is elicited by infrequent changes in regular acoustic sequences even if the participant is not actively listening to the sound sequence. Therefore, the MMN is assumed to result from a preattentive process in which an incoming sound is checked against the automatically detected regularities of the auditory sequence and is found to violate them. For example, presenting a discriminably different (deviant) sound within the sequence of a repetitive (standard) sound elicits the MMN. In the present article, we tested whether the memory organization of the auditory sequence can affect the preattentive change detection indexed by the MMN. In Experiment 1, trains of six standard tones were presented with a short, 0.5-sec stimulus onset asynchrony (SOA) between tones in the train, followed by a variable SOA between the last standard and the deviant tone (the “irregular presentation” condition). Of the 12 participants displaying an MMN at the 0.5-sec predeviant SOA, the MMN was elicited in 11 at the 2-sec predeviant SOA, in 5 at the 7-sec SOA, and in none at the 10-sec SOA. In Experiment 2, we repeated the 7-sec irregular predeviant SOA condition, along with a “regular presentation” condition in which the SOA between any two tones was 7 sec. MMN was elicited in about half of the participants (9 out of 16) in the irregular presentation condition, whereas in the regular presentation condition, MMN was elicited in all participants. These results cannot be explained on the basis of memory-strength decay but can be interpreted in terms of automatic, auditory preperceptual grouping principles. In the irregular presentation condition, the close grouping of standards may cause them to become irrelevant to the mismatch process when the deviant tone is presented after a long silent break.
Because the MMN indexes preattentive auditory processing, the present results provide evidence that large-scale preperceptual organization of auditory events occurs despite attention being directed away from the auditory stimuli.
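As a reminder of how the MMN itself is quantified in analyses like these: it is the deviant-minus-standard difference wave computed from averaged ERPs. A generic sketch using plain lists rather than a real EEG toolbox (function names are illustrative assumptions):

```python
def average_epochs(epochs):
    # Average equal-length single-trial epochs into an ERP (one value per sample).
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

def mmn_difference_wave(deviant_epochs, standard_epochs):
    # MMN = deviant ERP minus standard ERP; the component appears as a
    # negative deflection in this difference wave.
    deviant_erp = average_epochs(deviant_epochs)
    standard_erp = average_epochs(standard_epochs)
    return [d - s for d, s in zip(deviant_erp, standard_erp)]
```

In practice, the per-participant presence or absence of an MMN (as reported for the different SOA conditions above) is assessed on exactly such difference waves.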
Journal of Cognitive Neuroscience (2000) 12 (3): 520–541.
Published: 01 May 2000
Little systematic research has examined event-related brain potentials (ERPs) elicited by the cognitive processing of music. The present study investigated how music processing is influenced by a preceding musical context, affected by the task relevance of unexpected chords, and influenced by the degree and probability of violation. Four experiments were conducted in which “nonmusicians” listened to chord sequences that infrequently contained a chord violating listeners' sound expectancy. Integration of in-key chords into the musical context was reflected as a late frontal negative deflection in the ERPs. This negative deflection declined toward the end of a chord sequence, reflecting the normal buildup of musical context. Brain waves elicited by chords with unexpected notes revealed two ERP effects: an early anterior negativity with right-hemispheric preponderance, taken to reflect the violation of sound expectancy, and a late bilateral-frontal negativity. The late negativity was larger than that for in-key chords and was taken to reflect the higher degree of integration needed for unexpected chords. The early right-anterior negativity (ERAN) was unaffected by the task relevance of unexpected chords. The amplitudes of both the early and late negativities were sensitive to the degree of musical expectancy induced by the preceding harmonic context and to the probability of deviant acoustic events. The employed experimental design opens a new field for the investigation of music processing. The results strengthen the hypothesis of an implicit musical ability of the human brain.
Journal of Cognitive Neuroscience (1996) 8 (6): 527–539.
Published: 01 November 1996
Involuntary switching to task-irrelevant sound changes was studied by measuring event-related brain potentials (ERPs) and behavioral performance in a dichotic listening paradigm. Pairs of tones (S1 and S2) were presented, and subjects were instructed to ignore S1 (delivered to the left ear) and to make a go/no-go response to the subsequent S2 (delivered to the right ear). On most trials, the task-irrelevant S1 was of standard frequency, but occasionally it deviated from the standard frequency by either a small or a large amount. It was predicted that deviant stimuli would be automatically detected and could involuntarily capture attention. If they led to attention switching, less capacity should be available for the processing of target tones, resulting in impaired processing of S2. As in many previous studies, deviant tones elicited the mismatch negativity (MMN), a component of the ERP indicating automatic change detection. Furthermore, targets preceded by a deviant tone elicited a smaller N1 wave and were detected less effectively than targets preceded by a standard tone. This impaired processing of targets following task-irrelevant changes occurred only with short S1–S2 intervals (Experiment I), not with long ones (Experiment II). The results support a model claiming that the auditory system possesses a change detection system that monitors the acoustic input and may produce an attentional “interrupt” signal when a deviant occurs. The involuntary attentional capture caused by this signal leads to impoverished processing of closely succeeding stimuli.