1-4 of 4
A Direct Comparison of Spatial Attention and Stimulus–Response Compatibility between Mice and Humans
Journal of Cognitive Neuroscience (2021) 33 (5): 771–783.
Published: 01 April 2021
FIGURES (5)
Abstract
Mice are becoming an increasingly popular model for investigating the neural substrates of visual processing and higher cognitive functions. To validate the translation of mouse visual attention and sensorimotor processing to humans, we compared their performance in the same visual task. Mice and human participants judged the orientation of a grating presented on either the right or left side of the visual field. To induce shifts of spatial attention, we varied the stimulus probability on each side. As expected, human participants showed faster RTs and higher accuracy for the side with the higher probability, a well-established effect of visual attention. In mice, the attentional effect was present only when their responses were slow. Although the task demanded a judgment of grating orientation, the accuracy of the mice was strongly affected by whether the side of the stimulus corresponded to the side of the behavioral response. This stimulus–response compatibility (Simon) effect was much weaker in humans and significant only for their fastest responses. Both species exhibited a speed–accuracy trade-off: slower responses were more accurate than faster ones. Mice typically responded very fast, which contributed to their stronger stimulus–response compatibility effect and weaker attentional effect, the latter apparent only in trials with the slowest responses. Humans responded more slowly and showed stronger attentional effects, combined with a weak influence of stimulus–response compatibility that was apparent only in trials with fast responses. We conclude that spatial attention and stimulus–response compatibility influence the responses of both humans and mice, but that strategy differences between the species determine which effect dominates.
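The RT-dependence of the attentional and compatibility effects described above is typically assessed by binning trials by response time and comparing accuracy across bins. Below is a toy sketch of such an analysis on simulated data; all numbers and the generative model are invented for illustration and are not the authors' data or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated trials: RT (s), whether stimulus side and response side match,
# and whether the response was correct. In this toy model, compatibility
# boosts accuracy mainly at short RTs, mimicking a Simon effect.
n = 10_000
rt = rng.gamma(shape=3.0, scale=0.1, size=n)          # right-skewed RTs
compatible = rng.random(n) < 0.5
p_correct = np.where(compatible, 0.9, 0.9 - 0.3 * np.exp(-rt / 0.2))
correct = rng.random(n) < p_correct

def binned_accuracy(rt, correct, mask, n_bins=4):
    """Mean accuracy within RT quantile bins of the masked trials."""
    edges = np.quantile(rt[mask], np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(rt[mask], edges[1:-1]), 0, n_bins - 1)
    return np.array([correct[mask][bins == b].mean() for b in range(n_bins)])

acc_compat = binned_accuracy(rt, correct, compatible)
acc_incompat = binned_accuracy(rt, correct, ~compatible)

# The compatibility effect (accuracy difference) shrinks as RT grows.
effect = acc_compat - acc_incompat
print(effect)
```

The same binning, applied to high- versus low-probability stimulus sides instead of compatible versus incompatible trials, would quantify how the attentional effect varies with response speed.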
Journal of Cognitive Neuroscience (2009) 21 (6): 1081–1091.
Published: 01 June 2009
Abstract
If we search for an item, a representation of this item in our working memory guides attention to matching items in the visual scene. We can hold multiple items in working memory. Do all these items guide attention in parallel? We asked participants to detect a target object in a stream of objects while they maintained a second item in memory for a subsequent task. On some trials, we presented this memory item as a distractor in the stream. Participants did not confuse these memory items with the search target, as the false alarm rate on trials where the memory item was presented in the stream was comparable to that on trials with only regular distractors. However, comparable performance does not rule out that the memory items are processed differently from regular distractors. We therefore recorded event-related potentials (ERPs) evoked by search targets, memory items, and regular distractors. As expected, ERPs evoked by search targets differed from those evoked by distractors. Search targets elicited an occipital selection negativity and a frontal selection positivity indexing selective attention, and the P3b component, which reflects the matching of sensory events to memory representations, was enhanced for targets compared to distractors. Remarkably, the ERPs evoked by memory items were indistinguishable from those evoked by regular distractors. This implies that the search target has a special status in working memory that is not shared by the other items. These other, “accessory” items do not guide attention and are excluded from the matching process.
Journal of Cognitive Neuroscience (2002) 14 (4): 525–537.
Published: 15 May 2002
Abstract
Here we propose a model of how the visual brain segregates textured scenes into figures and background. During texture segregation, locations where the properties of texture elements change abruptly are assigned to boundaries, whereas image regions that are relatively homogeneous are grouped together. Boundary detection and grouping of image regions require different connection schemes, which are accommodated in a single network architecture by implementing them in different layers. As a result, all units carry signals related to boundary detection as well as grouping of image regions, in accordance with cortical physiology. Boundaries yield an early enhancement of network responses, but at a later point, an entire figural region is grouped together, because units that respond to it are labeled with enhanced activity. The model predicts which image regions are preferentially perceived as figure or as background and reproduces the spatio-temporal profile of neuronal activity in the visual cortex during texture segregation in intact animals, as well as in animals with cortical lesions.
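The two-stage scheme the abstract describes, an early enhancement at texture boundaries followed by later grouping of homogeneous regions, can be caricatured in a few lines on a 1-D texture. This is a toy illustration of the general idea, not the authors' network model:

```python
import numpy as np

# A 1-D 'texture' of element orientations (degrees): a figure of 45-degree
# elements embedded in a 135-degree background.
texture = np.array([135] * 8 + [45] * 6 + [135] * 8, dtype=float)

# Stage 1 (early): boundary detection. Responses are enhanced where
# texture properties change abruptly between neighboring elements.
boundary = np.abs(np.diff(texture)) > 0        # True at the two figure edges

# Stage 2 (later): region grouping. Homogeneous stretches between
# boundaries are labeled as one region; the figure receives one label.
labels = np.concatenate([[0], np.cumsum(boundary)])

print(boundary.nonzero()[0])   # positions of the two boundaries
print(labels)                  # region labels: background, figure, background
```

In the actual model, both operations are carried out by a single network whose layers implement the different connection schemes, so every unit carries boundary-related as well as grouping-related signals.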
The Role of Neuronal Synchronization in Response Selection: A Biologically Plausible Theory of Structured Representations in the Visual Cortex
Journal of Cognitive Neuroscience (1996) 8 (6): 603–625.
Published: 01 November 1996
Abstract
Recent experimental results in the visual cortex of cats and monkeys have suggested an important role for synchronization of neuronal activity on a millisecond time scale. Synchronization has been found to occur selectively between neuronal responses to related image components. This suggests that not only the firing rates of neurons but also the relative timing of their action potentials is used as a coding dimension. Thus, a powerful relational code would be available, in addition to the rate code, for the representation of perceptual objects. This could alleviate difficulties in the simultaneous representation of multiple objects. In this article we present a set of theoretical arguments and predictions concerning the mechanisms that could group neurons responding to related image components into coherently active aggregates. Synchrony is likely to be mediated by synchronizing connections; we introduce the concept of an interaction skeleton to refer to the subset of synchronizing connections that are rendered effective by a particular stimulus configuration. If the image is segmented into objects, these objects can typically be segmented further into their constituent parts. The synchronization behavior of neurons that represent the various image components may accurately reflect this hierarchical clustering. We propose that the range of synchronizing interactions is a dynamic parameter of the cortical network, so that the grain of the resultant grouping process may be adapted to the actual behavioral requirements. It can be argued that different aspects of purposeful behavior rely on separable processes by which sensory input is transformed into adjustments of motor activity. Indeed, neurophysiological evidence has suggested separate processing streams originating in the primary visual cortex for object identification and sensorimotor coordination. However, such a separation calls for a mechanism that avoids interference effects in the presence of multiple objects, or when multiple motor programs are simultaneously prepared. In this article we suggest that synchronization between responses of neurons in both the visual cortex and in areas that are involved in response selection and execution might allow for a selective routing of sensory information to the appropriate motor program.
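Millisecond-scale synchrony of the kind discussed here is commonly quantified with a cross-correlogram: a sharp peak at zero lag indicates synchronized firing. A minimal sketch on simulated binned spike counts follows; the firing rates and the common-drive model are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Binned spike counts (e.g., 1 ms bins) for three model neurons. Neurons A
# and B share a common drive (as if responding to the same object), so
# their spikes synchronize; neuron C fires independently at a similar rate.
n_bins = 20_000
common = rng.random(n_bins) < 0.02
a = (common | (rng.random(n_bins) < 0.02)).astype(float)
b = (common | (rng.random(n_bins) < 0.02)).astype(float)
c = (rng.random(n_bins) < 0.04).astype(float)

def cross_correlogram(x, y, max_lag=10):
    """Spike-count correlation of y relative to x at each lag (in bins)."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([
        np.corrcoef(x[max_lag:-max_lag],
                    np.roll(y, -lag)[max_lag:-max_lag])[0, 1]
        for lag in lags
    ])
    return lags, cc

lags, cc_ab = cross_correlogram(a, b)
_, cc_ac = cross_correlogram(a, c)

# The synchronized pair shows a sharp peak at zero lag;
# the unrelated pair stays near zero at every lag.
print(cc_ab[lags == 0], cc_ac[lags == 0])
```

In the framework proposed above, such zero-lag peaks between sensory and motor-related neurons would mark the interaction skeleton routing a stimulus to its appropriate motor program.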