Two mechanisms are thought to guide focal attention in visual selection: bottom–up, saliency-driven capture and top–down control. These mechanisms were examined with a paradigm that combined a visual search task with postdisplay probe detection. Two SOAs between the onsets of the search display and the probe were introduced to investigate how attention was allocated to particular items at different points in time. The dynamic interplay between bottom–up and top–down mechanisms was investigated with ERP methodology. ERPs locked to the search displays showed that top–down control needed time to develop: the N2pc indicated allocation of attention to the target item but not to the irrelevant singleton. ERPs locked to the probes revealed modulations of the P1 component reflecting top–down control of focal attention at the long SOA. Early bottom–up effects were observed in the error rates at the short SOA. Taken together, the present results show that the top–down mechanism takes time to guide focal attention to the relevant target item and that it is potent enough to limit bottom–up attentional capture.