Rebecca Nako
1–4 of 4 results
Journal of Cognitive Neuroscience (2016) 28 (11): 1714–1727.
Published: 01 November 2016
Abstract
During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., “apple”) was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.
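The N2pc referred to in these abstracts is conventionally quantified as the difference between ERP waveforms at lateral posterior electrodes contralateral versus ipsilateral to the target's visual hemifield. The sketch below is a minimal Python illustration of that difference-wave computation, assuming epoched single-trial voltages at PO7/PO8 and an arbitrary 250 Hz sampling rate; the array layout, electrode pair, and measurement window are illustrative assumptions, not analysis details taken from these papers.

```python
import numpy as np

# Minimal sketch (not the authors' pipeline): single-trial epochs at the
# lateral posterior electrodes PO7 and PO8 are assumed to be arrays of
# shape (n_trials, n_times) in microvolts, sampled at an assumed 250 Hz,
# with each epoch starting 100 ms before search-display onset.

SFREQ = 250.0   # assumed sampling rate (Hz)
TMIN = -0.100   # assumed epoch start (s) relative to display onset

def n2pc_difference_wave(po7_epochs, po8_epochs, target_side):
    """Contralateral-minus-ipsilateral average voltage over time.

    po7_epochs, po8_epochs : (n_trials, n_times) arrays
    target_side            : (n_trials,) array of 'left' / 'right' labels
    """
    left = target_side == 'left'
    right = ~left
    # The contralateral electrode is the one opposite the target hemifield:
    # PO8 for left-hemifield targets, PO7 for right-hemifield targets.
    contra = np.concatenate([po8_epochs[left], po7_epochs[right]])
    ipsi = np.concatenate([po7_epochs[left], po8_epochs[right]])
    return contra.mean(axis=0) - ipsi.mean(axis=0)

def mean_n2pc_amplitude(diff_wave, t_start=0.200, t_end=0.300):
    """Mean amplitude of the difference wave in a 200-300 ms window."""
    times = TMIN + np.arange(diff_wave.size) / SFREQ
    in_window = (times >= t_start) & (times < t_end)
    return diff_wave[in_window].mean()
```

Contrasts such as word-cued versus picture-cued targets, or colored versus monochromatic displays, amount to comparing the onset and amplitude of difference waves computed in this way for each condition.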
Journal of Cognitive Neuroscience (2015) 27 (11): 2299–2307.
Published: 01 November 2015
Abstract
Visual experiences increase our ability to discriminate environmentally relevant stimuli (native stimuli, e.g., human faces) at the cost of a reduced sensitivity to irrelevant or infrequent stimuli (non-native stimuli, e.g., monkey/ape faces)—a developmental progression known as perceptual narrowing. One possible source of the reduced sensitivity in distinguishing non-native stimuli (e.g., one ape face vs. another ape face) could be underspecified attentional search templates (i.e., working memory representations). To determine whether perceptual narrowing stems from underspecified attentional templates for non-native exemplars, this study used ERP (the N2pc component) and behavioral measures in a visual search task, where the target was either an exemplar (e.g., a specific ape face) or a category (e.g., any ape face). The N2pc component, an ERP marker of early attentional selection emerging at 200 msec poststimulus, is typically modulated by the specificity of the target and, therefore, by the attentional template—the N2pc is larger for specific items versus categories. In two experiments using both human and ape faces (i.e., native and non-native stimuli), we found that perceptual narrowing affects later response selection (i.e., manual RT and accuracy), but not early attentional selection relying on attentional templates (i.e., the N2pc component). Our ERP results show that adults deploy exemplar level attentional templates for non-native stimuli (as well as native stimuli), despite poor downstream behavioral performance. Our findings suggest that long-term previous experience with reduced exemplar level judgments (i.e., perceptual narrowing) does not appear to eliminate early attentional selection of non-native exemplars.
Journal of Cognitive Neuroscience (2015) 27 (5): 902–912.
Published: 01 May 2015
Abstract
Visual search is controlled by representations of target objects (attentional templates). Such templates are often activated in response to verbal descriptions of search targets, but it is unclear whether search can be guided effectively by such verbal cues. We measured ERPs to track the activation of attentional templates for new target objects defined by word cues. On each trial run, a word cue was followed by three search displays that contained the cued target object among three distractors. Targets were detected more slowly in the first display of each trial run, and the N2pc component (an ERP marker of attentional target selection) was attenuated and delayed for the first relative to the two successive presentations of a particular target object, demonstrating limitations in the ability of word cues to activate effective attentional templates. N2pc components to target objects in the first display were strongly affected by differences in object imageability (i.e., the ability of word cues to activate a target-matching visual representation). These differences were no longer present for the second presentation of the same target objects, indicating that a single perceptual encounter is sufficient to activate a precise attentional template. Our results demonstrate the superiority of visual over verbal target specifications in the control of visual search, highlight the fact that verbal descriptions are more effective for some objects than others, and suggest that the attentional templates that guide search for particular real-world target objects are analog visual representations.
Journal of Cognitive Neuroscience (2013) 25 (5): 719–729.
Published: 01 May 2013
Abstract
Visual search is often guided by top–down attentional templates that specify target-defining features. But search can also occur at the level of object categories. We measured the N2pc component, a marker of attentional target selection, in two visual search experiments where targets were defined either categorically (e.g., any letter) or at the item level (e.g., the letter C) by a prime stimulus. In both experiments, an N2pc was elicited during category search, in both familiar and novel contexts (Experiment 1) and with symbolic primes (Experiment 2), indicating that, even when targets are only defined at the category level, they are selected at early sensory-perceptual stages. However, the N2pc emerged earlier and was larger during item-based search compared with category-based search, demonstrating the superiority of attentional guidance by item-specific templates. We discuss the implications of these findings for attentional control and category learning.
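The claim that the N2pc "emerged earlier" during item-based than category-based search concerns onset latency. As a rough illustration of how such an onset can be measured on a difference wave (like the one sketched after the first abstract), the snippet below estimates the time at which the wave first reaches a fraction of its peak negativity; the 150–350 ms window, the 50% criterion, and the function name are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def n2pc_onset_latency(diff_wave, times, t_start=0.150, t_end=0.350,
                       fraction=0.5):
    """Time (s) at which a contra-minus-ipsi difference wave first reaches
    `fraction` of its peak negativity within the measurement window.
    Window and criterion are illustrative choices, not the paper's."""
    in_window = (times >= t_start) & (times <= t_end)
    wave = diff_wave[in_window]
    t_win = times[in_window]
    peak = wave.min()                  # N2pc is a negative deflection
    crossed = np.nonzero(wave <= fraction * peak)[0]
    return float(t_win[crossed[0]]) if crossed.size else float('nan')

# An earlier-onset N2pc for item-based search would then appear as
# n2pc_onset_latency(diff_item, times) < n2pc_onset_latency(diff_cat, times),
# where diff_item and diff_cat are hypothetical per-condition difference waves.
```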