Search results for Moshe Bar (1-7 of 7)
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2016) 28 (7): 948–958.
Published: 01 July 2016
Abstract
Recognizing objects in the environment and understanding our surroundings often depends on context: the presence of other objects and knowledge about their relations with each other. Such contextual information activates a set of medial brain regions, the parahippocampal cortex and the retrosplenial complex. Both regions are more activated by single objects with a unique contextual association than by objects not associated with any specific context. Similarly, they are more activated by spatially coherent arrangements of objects when those arrangements are consistent with the objects' known spatial relations. The current study tested how context in multiple-object displays is represented in these regions in the absence of relevant spatial information. Using an fMRI slow event-related design, we show that the precuneus (a subpart of the retrosplenial complex) is more activated by simultaneously presented contextually related objects than by unrelated objects. This suggests that the representation of context in this region is cumulative, integrating information across the objects in the display. We discuss these findings in relation to the processing of visual information and relate them to previous findings of contextual effects in perception.
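For readers unfamiliar with how such a contrast is estimated, the comparison at the heart of a slow event-related design can be sketched as a simple GLM with one regressor per condition. The sketch below is purely illustrative, assuming made-up onsets, TR, HRF, and voxel time course; it is not the authors' analysis pipeline.

```python
# Minimal GLM sketch for a slow event-related contrast of
# "contextually related" vs. "unrelated" displays. All numbers are invented.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 200                        # assumed repetition time (s) and run length

def hrf(t):
    # Crude double-gamma haemodynamic response function.
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

def regressor(onsets_sec):
    # Stick function at stimulus onsets, convolved with the HRF.
    sticks = np.zeros(n_scans)
    sticks[(np.asarray(onsets_sec) / TR).astype(int)] = 1.0
    return np.convolve(sticks, hrf(np.arange(0, 32, TR)))[:n_scans]

# Slow event-related design: widely spaced, alternating trial onsets (s).
related_onsets   = np.arange(10, 390, 40)     # contextually related displays
unrelated_onsets = np.arange(30, 390, 40)     # unrelated displays

X = np.column_stack([regressor(related_onsets),
                     regressor(unrelated_onsets),
                     np.ones(n_scans)])       # design matrix with intercept
y = np.random.randn(n_scans)                  # placeholder voxel time course

beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"related - unrelated contrast: {beta[0] - beta[1]:.3f}")
```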
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2008) 20 (12): 2226–2237.
Published: 01 December 2008
Abstract
Everyday contextual settings create associations that later afford generating predictions about what objects to expect in our environment. The cortical network that takes advantage of such contextual information is proposed to connect the representation of associated objects such that seeing one object (bed) will activate the visual representations of other objects sharing the same context (pillow). Given this proposal, we hypothesized that the cortical activity elicited by seeing a strong contextual object would predict the occurrence of false memories whereby one erroneously “remembers” having seen a new object that is related to a previously presented object. To test this hypothesis, we used functional magnetic resonance imaging during encoding of contextually related objects, and later tested recognition memory. New objects that were contextually related to previously presented objects were more often falsely judged as “old” compared with new objects that were contextually unrelated to old objects. This phenomenon was reflected by activity in the cortical network mediating contextual processing, which provides a better understanding of how the brain represents and processes context.
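The behavioral prediction reduces to a paired comparison of false-alarm rates for the two lure types (contextually related vs. unrelated new objects). A minimal scoring sketch, with simulated responses rather than the study's data:

```python
# Hypothetical scoring of false alarms: "old" responses to new items,
# split by whether the lure shares a context with studied objects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_lures = 16, 40

# Simulated responses (1 = new item falsely judged "old") per lure type.
related_fa   = rng.binomial(1, 0.35, (n_subjects, n_lures)).mean(axis=1)
unrelated_fa = rng.binomial(1, 0.20, (n_subjects, n_lures)).mean(axis=1)

t, p = stats.ttest_rel(related_fa, unrelated_fa)   # paired, within subjects
print(f"false alarms, related lures  : {related_fa.mean():.2f}")
print(f"false alarms, unrelated lures: {unrelated_fa.mean():.2f}")
print(f"paired t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```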
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2008) 20 (12): 2167–2174.
Published: 01 December 2008
Abstract
The human amygdala robustly activates to fear faces. Heightened response to fear faces is thought to reflect the amygdala's adaptive function as an early warning mechanism. Although culture shapes several facets of emotional and social experience, including how fear is perceived and expressed to others, very little is known about how culture influences neural responses to fear stimuli. Here we show that the bilateral amygdala response to fear faces is modulated by culture. We used functional magnetic resonance imaging to measure amygdala response to fear and nonfear faces in two distinct cultures. Native Japanese in Japan and Caucasians in the United States showed greater amygdala activation to fear expressed by members of their own cultural group. This finding provides novel and surprising evidence of cultural tuning in an automatic neural response.
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2008) 20 (3): 371–388.
Published: 01 March 2008
Abstract
Visual context plays a prominent role in everyday perception. Contextual information can facilitate recognition of objects within scenes by providing predictions about objects that are most likely to appear in a specific setting, along with the locations that are most likely to contain objects in the scene. Is such identity-related ("semantic") and location-related ("spatial") contextual knowledge represented separately or jointly as a bound representation? We conducted a functional magnetic resonance imaging (fMRI) priming experiment in which semantic and spatial contextual relations between prime and target object pictures were independently manipulated. This method allowed us to determine whether the two contextual factors affect object recognition with or without interacting, supporting unified versus independent representations, respectively. Results revealed a Semantic × Spatial interaction in reaction times for target object recognition. Namely, significant semantic priming was obtained when targets were positioned in expected (congruent), but not in unexpected (incongruent), locations. fMRI results showed corresponding interactive effects in brain regions associated with semantic processing (inferior prefrontal cortex), visual contextual processing (parahippocampal cortex), and object-related processing (lateral occipital complex). In addition, activation in fronto-parietal areas suggests that attention and memory-related processes might also contribute to the contextual effects observed. These findings indicate that object recognition benefits from associative representations that integrate information about objects' identities and their locations, and directly modulate activation in object-processing cortical regions. Such context frames are useful in maintaining a coherent and meaningful representation of the visual world, and in providing a platform from which predictions can be generated to facilitate perception and action.
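The design's logic is that a bound representation predicts semantic priming only when the target appears in a spatially congruent location, which amounts to testing the 2 x 2 interaction on reaction times. The sketch below uses hypothetical RTs, not the reported data, to show how that interaction contrast could be computed per subject:

```python
# Toy 2 x 2 interaction test on reaction times (all values invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20  # subjects

# Mean RT (ms) per subject in each prime-target condition.
rt = {
    ("related",   "congruent"):   rng.normal(560, 30, n),
    ("unrelated", "congruent"):   rng.normal(600, 30, n),
    ("related",   "incongruent"): rng.normal(605, 30, n),
    ("unrelated", "incongruent"): rng.normal(610, 30, n),
}

# Semantic priming within each spatial condition (unrelated minus related).
priming_congruent   = rt[("unrelated", "congruent")]   - rt[("related", "congruent")]
priming_incongruent = rt[("unrelated", "incongruent")] - rt[("related", "incongruent")]

# Interaction contrast: does priming depend on spatial congruency?
interaction = priming_congruent - priming_incongruent
t, p = stats.ttest_1samp(interaction, 0.0)
print(f"priming, congruent locations  : {priming_congruent.mean():.1f} ms")
print(f"priming, incongruent locations: {priming_incongruent.mean():.1f} ms")
print(f"Semantic x Spatial interaction: t({n - 1}) = {t:.2f}, p = {p:.4f}")
```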
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2003) 15 (4): 600–609.
Published: 15 May 2003
Abstract
The majority of the research related to visual recognition has so far focused on bottom-up analysis, where the input is processed in a cascade of cortical regions that analyze increasingly complex information. Gradually more studies emphasize the role of top-down facilitation in cortical analysis, but it remains something of a mystery how such processing would be initiated. After all, top-down facilitation implies that high-level information is activated earlier than some relevant lower-level information. Building on previous studies, I propose a specific mechanism for the activation of top-down facilitation during visual object recognition. The gist of this hypothesis is that a partially analyzed version of the input image (i.e., a blurred image) is projected rapidly from early visual areas directly to the prefrontal cortex (PFC). This coarse representation activates in the PFC expectations about the most likely interpretations of the input image, which are then back-projected as an “initial guess” to the temporal cortex to be integrated with the bottom-up analysis. The top-down process facilitates recognition by substantially limiting the number of object representations that need to be considered. Furthermore, such a rapid mechanism may provide critical information when a quick response is necessary.
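Because the abstract lays out a concrete processing scheme, a toy coarse-to-fine sketch can make the "initial guess" idea tangible: blur the input so only low spatial frequencies survive, shortlist the stored templates that best match this coarse version, and run detailed matching only on the shortlist. Everything below (the images, template set, and blur width) is a made-up placeholder, not the paper's model.

```python
# Toy coarse-to-fine recognition: a blurred "initial guess" prunes the
# candidate set before detailed matching. Images are random placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
templates = {f"object_{i}": rng.random((64, 64)) for i in range(50)}
image = templates["object_7"] + 0.3 * rng.random((64, 64))   # noisy view of a known object

def coarse(img, sigma=6.0):
    # Keep only low spatial frequencies: the rapid, partially analyzed signal.
    return gaussian_filter(img, sigma).ravel()

def similarity(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Fast coarse pass: shortlist the most likely interpretations (the "PFC guess").
coarse_image = coarse(image)
shortlist = sorted(templates,
                   key=lambda k: similarity(coarse_image, coarse(templates[k])),
                   reverse=True)[:5]

# Slower detailed pass runs only over the shortlisted candidates.
best = max(shortlist,
           key=lambda k: similarity(image.ravel(), templates[k].ravel()))
print("shortlist:", shortlist)
print("recognized as:", best)
```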
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2001) 13 (6): 793–799.
Published: 15 August 2001
Abstract
The nature of visual object representation in the brain is the subject of a prolonged debate. One set of theories asserts that objects are represented by their structural description and the representation is “object-centered.” Theories from the other side of the debate suggest that humans store multiple “snapshots” for each object, depicting it as seen under various conditions, and the representation is therefore “viewer-centered.” The principal tool that has been used to support and criticize each of these hypotheses is subjects' performance in recognizing objects under novel viewing conditions. For example, if subjects take more time in recognizing an object from an unfamiliar viewpoint, it is common to claim that the representation of that object is viewpoint-dependent and therefore viewer-centered. It is suggested here, however, that performance cost in recognition of objects under novel conditions may be misleading when studying the nature of object representation. Specifically, it is argued that viewpoint-dependent performance is not necessarily an indication of viewer-centered representation. An account for the neural basis of perceptual priming is first provided. In light of this account, it is conceivable that viewpoint dependency reflects the utilization of neural paths with different levels of sensitivity en route to the same representation, rather than the existence of viewpoint-specific representations. New experimental paradigms are required to study the validity of the viewer-centered approach.
Journal Articles
Inferior Temporal Neurons Show Greater Sensitivity to Nonaccidental than to Metric Shape Differences
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2001) 13 (4): 444–453.
Published: 15 May 2001
Abstract
It has long been known that macaque inferior temporal (IT) neurons tend to fire more strongly to some shapes than to others, and that different IT neurons can show markedly different shape preferences. Beyond the discovery that these preferences can be elicited by features of moderate complexity, no general principle of (nonface) object recognition had emerged by which this enormous variation in selectivity could be understood. Psychophysical as well as computational work suggests that one such principle is the difference between viewpoint-invariant, nonaccidental shape properties (NAPs) and view-dependent, metric shape properties (MPs). We measured the responses of single IT neurons to objects differing in either a NAP (namely, a change in a geon) or an MP of a single part, shown at two orientations in depth. The cells were more sensitive to changes in NAPs than in MPs, even though the image variation (as assessed by wavelet-like measures) produced by the former was smaller than that produced by the latter. The magnitude of the response modulation from the rotation itself was, on average, similar to that produced by the NAP differences, although the image changes from the rotation were much greater than those produced by the NAP differences. Multidimensional scaling of the neural responses indicated a NAP/MP dimension, independent of an orientation dimension. The present results thus demonstrate that a significant portion of the neural code of IT cells represents differences in NAPs rather than MPs. This code may enable immediate recognition of novel objects at new views.
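The multidimensional-scaling step mentioned at the end can be illustrated with a small, entirely synthetic sketch: each stimulus is described by a vector of firing rates across cells, the pairwise dissimilarities are embedded in two dimensions, and a NAP/MP axis can then be inspected separately from an orientation axis. The stimulus labels and responses below are placeholders, not the recorded data.

```python
# Synthetic illustration of MDS on population responses (not the real data).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
stimuli = ["orig_0deg", "orig_rot", "NAP_0deg", "NAP_rot", "MP_0deg", "MP_rot"]
n_cells = 40

# Rows = stimuli, columns = hypothetical IT cells (firing rates).
responses = rng.poisson(10, (len(stimuli), n_cells)).astype(float)

# Pairwise dissimilarity between stimuli; 1 - correlation is one common choice.
dissim = squareform(pdist(responses, metric="correlation"))

embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dissim)
for name, (x, y) in zip(stimuli, embedding):
    print(f"{name:>9s}: ({x:+.2f}, {y:+.2f})")
```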