Martin Eimer
1–20 of 26 results
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2023) 35 (12): 1919–1935.
Published: 01 December 2023
Abstract
Visual search is guided by representations of target-defining features (attentional templates) that are activated in a preparatory fashion. Here, we investigated whether these template activation processes are modulated by probabilistic expectations about upcoming search targets. We tracked template activation while observers prepared to search for one or two possible color-defined targets by measuring N2pc components (markers of attentional capture) to task-irrelevant color probes flashed every 200 msec during the interval between search displays. These probes elicit N2pcs only if the corresponding color template is active at the time when the probe appears. Probe N2pcs emerged from about 600 msec before search display onset. They did not differ between one-color and two-color search, indicating that two color templates can be activated concurrently. Critically, probe N2pcs measured during two-color search were identical for probes matching an expected or unexpected color (target color probability: 80% vs. 20%), or one of two equally likely colors. This strongly suggests that probabilistic target color expectations had no impact on search preparation. In marked contrast, subsequent target selection processes were strongly affected by these expectations. We discuss possible explanations for this clear dissociation in the effects of expectations on preparatory search template activation and search target selection, respectively.
Journal of Cognitive Neuroscience (2020) 32 (8): 1525–1535.
Published: 01 August 2020
Abstract
Visual search is guided by representations of target-defining features (attentional templates). We tracked the time course of template activation processes during the preparation for search in a task where the identity of color-defined search targets switched across successive trials (ABAB). Task-irrelevant color probes that matched either the upcoming relevant target color or the previous now-irrelevant target color were presented every 200 msec during the interval between search displays. N2pc components (markers of attentional capture) were measured for both types of probes at each time point. A reliable probe N2pc indicates that the corresponding color template is active at the time when the probe appears. N2pcs of equal size emerged from 1000 msec before search display onset for both relevant-color and irrelevant-color probes, demonstrating that both color templates were activated concurrently. Evidence for color-selective attentional control was found only immediately before the arrival of the search display, where N2pcs were larger for relevant-color probes. These results reveal important limitations in the executive control of search preparation in tasks where two targets alternate across trials. Although the identity of the upcoming target is fully predictable, both task-relevant and task-irrelevant target templates are coactivated. Knowledge about target identity selectively biases these template activation processes in a temporally discrete fashion, guided by temporal expectations about when the target template will become relevant.
Journal of Cognitive Neuroscience (2020) 32 (3): 546–557.
Published: 01 March 2020
Abstract
Selective attention regulates the activation of working memory (WM) representations. Retro-cues, presented after memory sample stimuli have been stored, modulate these activation states by triggering shifts of attention to task-relevant samples. Here, we investigated whether the control of such attention shifts is modality-specific or shared across sensory modalities. Participants memorized bilateral tactile and visual sample stimuli before an auditory retro-cue indicated which visual and tactile stimuli had to be retained. Critically, these cued samples were located on the same side or opposite sides, thus requiring spatially congruent or incongruent attention shifts in tactile and visual WM. To track the attentional selection of retro-cued samples, tactile and visual contralateral delay activities (tCDA and CDA components) were measured. Clear evidence for spatial synergy effects from attention shifts in visual WM on concurrent shifts in tactile WM was observed: Tactile WM performance was impaired, and tCDA components triggered by retro-cues were strongly attenuated on opposite-sides relative to same-side trials. These spatial congruency effects were eliminated when cued attention shifts in tactile WM occurred in the absence of simultaneous shifts within visual WM. Results show that, in contrast to other modality-specific aspects of WM control, concurrent attentional selection processes within tactile and visual WM are mediated by shared supramodal control processes.
Journal of Cognitive Neuroscience (2020) 32 (2): 283–300.
Published: 01 February 2020
Abstract
Most investigations of visual search have focused on the discrimination between a search target and other task-irrelevant distractor objects (selection). The attentional limitations that arise when multiple target objects in the same display have to be processed simultaneously (access) remain poorly understood. Here, we employed behavioral and electrophysiological measures to investigate the factors that determine whether multiple target objects can be accessed in parallel. Performance and N2pc components were measured for search displays that contained either a single target or two target objects. When two target objects were present, they either had the same or different target-defining features. Participants reported whether search displays contained a single target, two targets with shared features, or two targets with different features. There were performance costs as well as reduced N2pc amplitudes for two-target/different relative to two-target/same displays, suggesting that access to multiple target objects defined by different features was impaired. These behavioral and electrophysiological costs were also observed in a task where all search display objects were physically different, but not during color or shape singleton search, confirming that they do not reflect a low-level perceptual grouping of physically identical targets. These results demonstrate strong feature-specific limitations of visual access, as proposed by the Boolean map theory of visual attention. They suggest that multiple target objects can be accessed in parallel only when they share task-relevant features and demonstrate that mechanisms of visual access can be studied with electrophysiological markers.
Journal of Cognitive Neuroscience (2019) 31 (2): 175–185.
Published: 01 February 2019
Abstract
We investigated the sources of dual-task costs arising in multisensory working memory (WM) tasks, where stimuli from different modalities have to be simultaneously maintained. Performance decrements relative to unimodal single-task baselines have been attributed to a modality-unspecific central WM store, but such costs could also reflect increased demands on central executive processes involved in dual-task coordination. To compare these hypotheses, we asked participants to maintain two, three, or four visual items. Unimodal trials, where only this visual task was performed, and bimodal trials, where a concurrent tactile WM task required the additional maintenance of two tactile items, were randomly intermixed. We measured the visual and tactile contralateral delay activity (CDA/tCDA components) as markers of WM maintenance in visual and somatosensory areas. There were reliable dual-task costs, as visual CDA components were reduced in size and visual WM accuracy was impaired on bimodal relative to unimodal trials. However, these costs did not depend on visual load, which caused identical CDA modulations in unimodal and bimodal trials, suggesting that memorizing tactile items did not reduce the number of visual items that could be maintained. Visual load also did not affect tCDA amplitudes. These findings indicate that bimodal dual-task costs do not result from a competition between multisensory items for shared storage capacity. Instead, these costs reflect generic limitations of executive control mechanisms that coordinate multiple cognitive processes in dual tasks. Our results support hierarchical models of WM, where distributed maintenance processes with modality-specific capacity limitations are controlled by a central executive mechanism.
Journal of Cognitive Neuroscience (2018) 30 (12): 1902–1915.
Published: 01 December 2018
Abstract
Mental representations of target features (attentional templates) control the selection of candidate target objects in visual search. The question where templates are maintained remains controversial. We employed the N2pc component as an electrophysiological marker of template-guided target selection to investigate whether and under which conditions templates are held in visual working memory (vWM). In two experiments, participants memorized one or four shapes (low vs. high vWM load) before either being tested on their memory or performing a visual search task. When targets were defined by one of two possible colors (e.g., red or green), target N2pcs were delayed with high vWM load. This suggests that the maintenance of multiple shapes in vWM interfered with the activation of color-specific search templates, supporting the hypothesis that these templates are held in vWM. This was the case despite participants always searching for the same two target colors. In contrast, the speed of target selection in a task where a single target color remained relevant throughout was unaffected by concurrent load, indicating that a constant search template for a single feature may be maintained outside vWM in a different store. In addition, early visual N1 components to search and memory test displays were attenuated under high load, suggesting a competition between external and internal attention. The size of this attenuation predicted individual vWM performance. These results provide new electrophysiological evidence for impairment of top–down attentional control mechanisms by high vWM load, demonstrating that vWM is involved in the guidance of attentional target selection during search.
Journal of Cognitive Neuroscience (2018) 30 (5): 644–655.
Published: 01 May 2018
Abstract
Working memory (WM) is limited in capacity, but it is controversial whether these capacity limitations are domain-general or are generated independently within separate modality-specific memory systems. These alternative accounts were tested in bimodal visual/tactile WM tasks. In Experiment 1, participants memorized the locations of simultaneously presented task-relevant visual and tactile stimuli. Visual and tactile WM load was manipulated independently (one, two, or three items per modality), and one modality was unpredictably tested after each trial. To track the activation of visual and tactile WM representations during the retention interval, the visual contralateral delay activity (CDA) and tactile CDA (tCDA) were measured over visual and somatosensory cortex, respectively. CDA and tCDA amplitudes were selectively affected by WM load in the corresponding (tactile or visual) modality. The CDA parametrically increased when visual load increased from one to two and to three items. The tCDA was enhanced when tactile load increased from one to two items and showed no further enhancement for three tactile items. Critically, these load effects were strictly modality-specific, as substantiated by Bayesian statistics. Increasing tactile load did not affect the visual CDA, and increasing visual load did not modulate the tCDA. Task performance at memory test was also unaffected by WM load in the other (untested) modality. This was confirmed in a second behavioral experiment where tactile and visual loads were either two or four items, unimodal baseline conditions were included, and participants performed a color change detection task in the visual modality. These results show that WM capacity is not limited by a domain-general mechanism that operates across sensory modalities. They suggest instead that WM storage is mediated by distributed modality-specific control mechanisms that are activated independently and in parallel during multisensory WM.
Journal of Cognitive Neuroscience (2017) 29 (4): 628–636.
Published: 01 April 2017
Abstract
Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top–down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top–down influences that optimize multimodal WM representations for behavioral goals.
Journal of Cognitive Neuroscience (2016) 28 (12): 2003–2020.
Published: 01 December 2016
Abstract
During the retention of visual information in working memory, event-related brain potentials show a sustained negativity over posterior visual regions contralateral to the side where memorized stimuli were presented. This contralateral delay activity (CDA) is generally believed to be a neural marker of working memory storage. In two experiments, we contrasted this storage account of the CDA with the alternative hypothesis that the CDA reflects the current focus of spatial attention on a subset of memorized items set up during the most recent encoding episode. We employed a sequential loading procedure where participants memorized four task-relevant items that were presented in two successive memory displays (M1 and M2). In both experiments, CDA components were initially elicited contralateral to task-relevant items in M1. Critically, the CDA switched polarity when M2 displays appeared on the opposite side. In line with the attentional activation account, these reversed CDA components exclusively reflected the number of items that were encoded from M2 displays, irrespective of how many M1 items were already held in working memory. On trials where M1 and M2 displays were presented on the same side and on trials where M2 displays appeared nonlaterally, CDA components elicited in the interval after M2 remained sensitive to a residual trace of M1 items, indicating that some activation of previously stored items was maintained across encoding episodes. These results challenge the hypothesis that CDA amplitudes directly reflect the total number of stored objects and suggest that the CDA is primarily sensitive to the activation of a subset of working memory representations within the current focus of spatial attention.
Journal of Cognitive Neuroscience (2016) 28 (12): 1947–1963.
Published: 01 December 2016
Abstract
The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-color attentional guidance is possible when target colors remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colors that were specified by cue displays at the start of each trial. In constant-color blocks, the same colors remained task-relevant throughout. In variable-color blocks, target colors changed between trials. The contralateral delay activity (CDA) to cue displays increased in amplitude as a function of color memory load in variable-color blocks, which indicates that cued target colors were held in working memory. In constant-color blocks, the CDA was much smaller, suggesting that color representations were primarily stored in long-term memory. N2pc components to targets were measured as a marker of attentional target selection. Target N2pcs were attenuated and delayed during multiple-color search, demonstrating less efficient attentional deployment to color-defined target objects relative to single-color search. Importantly, these costs were the same in constant-color and variable-color blocks. These results demonstrate that attentional guidance by multiple-feature as compared with single-feature templates is less efficient both when target features remain constant and can be represented in long-term memory and when they change across trials and therefore have to be maintained in working memory.
Journal of Cognitive Neuroscience (2016) 28 (11): 1672–1687.
Published: 01 November 2016
Abstract
Previous research has shown that when two color-defined target objects appear in rapid succession at different locations, attention is deployed independently and in parallel to both targets. This study investigated whether this rapid simultaneous attentional target selection mechanism can also be employed in tasks where targets are defined by a different visual feature (shape) or when alphanumerical category is the target selection attribute. Two displays that both contained a target and a nontarget object on opposite sides were presented successively, and the SOA between the two displays was 100, 50, 20, or 10 msec in different blocks. N2pc components were recorded to both targets as a temporal marker of their attentional selection. When observers searched for shape-defined targets (Experiment 1), N2pc components to the two targets were equal in size and overlapped in time when the SOA between the two displays was short, reflecting two parallel shape-guided target selection processes with their own independent time course. Essentially the same temporal pattern of N2pc components was observed when alphanumerical category was the target-defining attribute (Experiment 2), demonstrating that the rapid parallel attentional selection of multiple target objects is not restricted to situations where the deployment of attention can be guided by elementary visual features but that these processes can even be employed in category-based attentional selection tasks. These findings have important implications for our understanding of the cognitive and neural basis of top–down attentional control.
Journal of Cognitive Neuroscience (2016) 28 (11): 1714–1727.
Published: 01 November 2016
Abstract
During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., “apple”) was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. 
Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.
Journal of Cognitive Neuroscience (2016) 28 (2): 319–332.
Published: 01 February 2016
Abstract
Finding target objects among distractors in visual search displays is often assumed to be based on sequential movements of attention between different objects. However, the speed of such serial attention shifts is still under dispute. We employed a search task that encouraged the successive allocation of attention to two target objects in the same search display and measured N2pc components to determine how fast attention moved between these objects. Each display contained one digit in a known color (fixed-color target) and another digit whose color changed unpredictably across trials (variable-color target) together with two gray distractor digits. Participants' task was to find the fixed-color digit and compare its numerical value with that of the variable-color digit. N2pc components to fixed-color targets preceded N2pc components to variable-color digits, demonstrating that these two targets were indeed selected in a fixed serial order. The N2pc to variable-color digits emerged approximately 60 msec after the N2pc to fixed-color digits, which shows that attention can be reallocated very rapidly between different target objects in the visual field. When search display durations were increased, thereby relaxing the temporal demands on serial selection, the two N2pc components to fixed-color and variable-color targets were elicited within 90 msec of each other. Results demonstrate that sequential shifts of attention between different target locations can operate very rapidly at speeds that are in line with the assumptions of serial selection models of visual search.
Journal of Cognitive Neuroscience (2015) 27 (5): 902–912.
Published: 01 May 2015
Abstract
Visual search is controlled by representations of target objects (attentional templates). Such templates are often activated in response to verbal descriptions of search targets, but it is unclear whether search can be guided effectively by such verbal cues. We measured ERPs to track the activation of attentional templates for new target objects defined by word cues. On each trial run, a word cue was followed by three search displays that contained the cued target object among three distractors. Targets were detected more slowly in the first display of each trial run, and the N2pc component (an ERP marker of attentional target selection) was attenuated and delayed for the first relative to the two successive presentations of a particular target object, demonstrating limitations in the ability of word cues to activate effective attentional templates. N2pc components to target objects in the first display were strongly affected by differences in object imageability (i.e., the ability of word cues to activate a target-matching visual representation). These differences were no longer present for the second presentation of the same target objects, indicating that a single perceptual encounter is sufficient to activate a precise attentional template. Our results demonstrate the superiority of visual over verbal target specifications in the control of visual search, highlight the fact that verbal descriptions are more effective for some objects than others, and suggest that the attentional templates that guide search for particular real-world target objects are analog visual representations.
Journal of Cognitive Neuroscience (2013) 25 (5): 719–729.
Published: 01 May 2013
Abstract
Visual search is often guided by top–down attentional templates that specify target-defining features. But search can also occur at the level of object categories. We measured the N2pc component, a marker of attentional target selection, in two visual search experiments where targets were defined either categorically (e.g., any letter) or at the item level (e.g., the letter C) by a prime stimulus. In both experiments, an N2pc was elicited during category search, in both familiar and novel contexts (Experiment 1) and with symbolic primes (Experiment 2), indicating that, even when targets are only defined at the category level, they are selected at early sensory-perceptual stages. However, the N2pc emerged earlier and was larger during item-based search compared with category-based search, demonstrating the superiority of attentional guidance by item-specific templates. We discuss the implications of these findings for attentional control and category learning.
Journal of Cognitive Neuroscience (2012) 24 (3): 749–759.
Published: 01 March 2012
Abstract
The question whether attentional capture by salient but task-irrelevant visual stimuli is triggered in a bottom–up fashion or depends on top–down task settings is still unresolved. Strong support for bottom–up capture was obtained in the additional singleton task, in which search arrays were visible until response onset. Equally strong evidence for top–down control of attentional capture was obtained in spatial cueing experiments in which display durations were very brief. To demonstrate the critical role of temporal task demands on salience-driven attentional capture, we measured ERP indicators of capture by task-irrelevant color singletons in search arrays that could also contain a shape target. In Experiment 1, all displays were visible until response onset. In Experiment 2, display duration was limited to 200 msec. With long display durations, color singleton distractors elicited an N2pc component that was followed by a late Pd component, suggesting that they triggered attentional capture, which was later replaced by location-specific inhibition. When search arrays were visible for only 200 msec, the distractor-elicited N2pc was eliminated and was replaced by a Pd component in the same time range, indicative of rapid suppression of capture. Results show that attentional capture by salient distractors can be inhibited for short-duration search displays, in which it would interfere with target processing. They demonstrate that salience-driven capture is not a purely bottom–up phenomenon but is subject to top–down control.
Journal of Cognitive Neuroscience (2011) 23 (4): 832–844.
Published: 01 April 2011
Abstract
The ability to drive safely is disrupted by cell phone conversations, and this has been attributed to a diversion of attention from the visual environment. We employed behavioral and ERP measures to study whether the attentive processing of spoken messages is, in itself, sufficient to produce visual–attentional deficits. Participants searched for visual targets defined by a unique feature (Experiment 1) or feature conjunction (Experiment 2), and simultaneously listened to narrated text passages that had to be recalled later (encoding condition), or heard backward-played speech sounds that could be ignored (control condition). Responses to targets were slower in the encoding condition, and ERPs revealed that the visual processing of search arrays and the attentional selection of target stimuli were less efficient in the encoding relative to the control condition. Results demonstrate that the attentional processing of visual information is impaired when concurrent spoken messages are encoded and maintained, in line with cross-modal links in selective attention, but inconsistent with the view that attentional resources are modality-specific. The distraction of visual attention by active listening could contribute to the adverse effects of cell phone use on driving performance.
Journal of Cognitive Neuroscience (2010) 22 (10): 2198–2211.
Published: 01 October 2010
Abstract
Several theories of the mechanisms linking perception and action require that the links are bidirectional, but there is a lack of consensus on the effects that action has on perception. We investigated this by measuring visual event-related brain potentials to observed hand actions while participants prepared responses that were spatially compatible (e.g., both were on the left side of the body) or incompatible and action type compatible (e.g., both were finger taps) or incompatible, with observed actions. An early enhanced processing of spatially compatible stimuli was observed, which is likely due to spatial attention. This was followed by an attenuation of processing for both spatially and action type compatible stimuli, likely to be driven by efference copy signals that attenuate processing of predicted sensory consequences of actions. Attenuation was not response-modality specific; it was found for manual stimuli when participants prepared manual and vocal responses, in line with the hypothesis that action control is hierarchically organized. These results indicate that spatial attention and forward model prediction mechanisms have opposite, but temporally distinct, effects on perception. This hypothesis can explain the inconsistency of recent findings on action–perception links and thereby supports the view that sensorimotor links are bidirectional. Such effects of action on perception are likely to be crucial, not only for the control of our own actions but also in sociocultural interaction, allowing us to predict the reactions of others to our own actions.
Journal of Cognitive Neuroscience (2010) 22 (3): 474–481.
Published: 01 March 2010
Abstract
The rapid detection of affective signals from conspecifics is crucial for the survival of humans and other animals; if those around you are scared, there is reason for you to be alert and to prepare for impending danger. Previous research has shown that the human brain detects emotional faces within 150 msec of exposure, indicating a rapid differentiation of visual social signals based on emotional content. Here we use event-related brain potential (ERP) measures to show for the first time that this mechanism extends to the auditory domain, using human nonverbal vocalizations, such as screams. An early fronto-central positivity to fearful vocalizations compared with spectrally rotated and thus acoustically matched versions of the same sounds started 150 msec after stimulus onset. This effect was also observed for other vocalized emotions (achievement and disgust), but not for affectively neutral vocalizations, and was linked to the perceived arousal of an emotion category. That the timing, polarity, and scalp distribution of this new ERP correlate are similar to ERP markers of emotional face processing suggests that common supramodal brain mechanisms may be involved in the rapid detection of affectively relevant visual and auditory signals.
Journal of Cognitive Neuroscience (2009) 21 (9): 1653–1669.
Published: 01 September 2009
Abstract
Processing of a given target is facilitated when it is defined within the same (e.g., visual–visual), compared to a different (e.g., tactile–visual), perceptual modality as on the previous trial [Spence, C., Nicholls, M., & Driver, J. The cost of expecting events in the wrong sensory modality. Perception & Psychophysics, 63, 330–336, 2001]. The present study was designed to identify electrocortical (EEG) correlates underlying this “modality shift effect.” Participants had to discriminate (via foot pedal responses) the modality of the target stimulus, visual versus tactile (Experiment 1), or respond based on the target-defining features (Experiment 2). Thus, modality changes were associated with response changes in Experiment 1, but dissociated in Experiment 2. Both experiments confirmed previous behavioral findings with slower discrimination times for modality change, relative to repetition, trials. Independently of the target-defining modality, spatial stimulus characteristics, and the motor response, this effect was mirrored by enhanced amplitudes of the anterior N1 component. These findings are explained in terms of a generalized “modality-weighting” account, which extends the “dimension-weighting” account proposed by Found and Müller [Searching for unknown feature targets on more than one dimension: Investigating a “dimension-weighting” account. Perception & Psychophysics, 58, 88–101, 1996] for the visual modality. On this account, the anterior N1 enhancement is assumed to reflect the detection of a modality change and initiation of the readjustment of attentional weight-setting from the old to the new target-defining modality in order to optimize target detection.