Redundant combination of target features from separable dimensions can expedite visual search. The dimension-weighting account explains these “redundancy gains” by assuming that the attention-guiding priority map integrates the feature-contrast signals generated by targets within the respective dimensions. The present study investigated whether this hierarchical architecture is sufficient to explain the gains accruing from redundant targets defined by features in different modalities, or whether an additional level of modality-specific priority coding is necessary, as postulated by the modality-weighting account (MWA). To address this, we had observers perform a visuo-tactile search task in which targets popped out by a visual feature (color or shape), a tactile feature (vibro-tactile frequency), or any combination of these features. The reaction-time (RT) gains turned out to be larger for visuo-tactile than for purely visual redundant targets, as predicted by the MWA. In addition, we analyzed two lateralized event-related EEG components: the posterior (PCN) and central (CCN) contralateral negativities, which are associated with visual and tactile attentional selection, respectively. The CCN proved to be a stable somatosensory component, unaffected by cross-modal redundancies. In contrast, the PCN was sensitive to cross-modal redundancies, as evidenced by earlier onsets and higher amplitudes, which could not be explained by linear superposition of the earlier CCN onto the later PCN. Moreover, linear mixed-effects modeling showed that PCN amplitude and timing parameters accounted for approximately 25% of the behavioral RT variance. Together, these behavioral and PCN effects support the hierarchy of priority-signal computation assumed by the MWA.