Galit Yovel
Journal of Cognitive Neuroscience (2018) 30 (7): 951–962.
Published: 01 July 2018
Abstract
We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted, needle-like facial images, challenging the well-entrenched notion that a veridical spatial configuration is necessary for extracting facial identity. In face identification tasks with parametrically compressed internal and external features, we found that the sum of performances on each cue alone falls significantly short of performance on full faces, even though the two cues jointly carry the same visual information (a full face is essentially a superposition of its internal and external features). We hypothesize that this large deficit reflects the use of information about how the internal features are positioned relative to the external features. To test this, we systematically varied the relations between internal and external features and found preferential encoding of vertical, but not horizontal, spatial relations in facial representations (n = 20). Finally, we used magnetoencephalography (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity component, but not the M170 face-sensitive component, of the evoked response field, providing evidence that the M250 is modulated by faces that are perceptually identifiable, irrespective of extreme distortions of the face's veridical configuration. We theorize that this tolerance to compressive distortion evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important but poorly defined concept of facial configuration and link behavioral performance to previously reported neural correlates of face perception.
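To make the reported brain–behavior mapping concrete, here is a minimal sketch (not the authors' analysis code) of correlating a behavioral psychometric curve with MEG component amplitudes across compression levels; all variable names and values are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical compression levels (fraction of original face width retained)
# and hypothetical group means for identification accuracy and component
# amplitudes; real values would come from the behavioral and MEG data.
compression = np.array([1.0, 0.5, 0.25, 0.12, 0.06, 0.03])
accuracy = np.array([0.95, 0.93, 0.88, 0.72, 0.55, 0.31])
m250_amp = np.array([41.0, 40.2, 38.5, 31.0, 24.8, 15.6])
m170_amp = np.array([52.0, 50.8, 51.5, 49.9, 50.3, 51.1])  # roughly flat

# If the M250 tracks perceptual identifiability, its amplitude should follow
# the psychometric curve, while the M170 should not.
for name, amp in [("M250", m250_amp), ("M170", m170_amp)]:
    r, p = pearsonr(accuracy, amp)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```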
Journal of Cognitive Neuroscience (2017) 29 (2): 322–336.
Published: 01 February 2017
Abstract
The quantity and nature of the processes underlying recognition memory remain an open question. A majority of behavioral, neuropsychological, and brain studies suggest that recognition memory is supported by two dissociable processes: recollection and familiarity. It has been argued, conversely, that recollection and familiarity map onto a single continuum of mnemonic strength and hence that recognition memory is mediated by a single process. Previous electrophysiological studies found marked dissociations between recollection and familiarity, which have been widely held to corroborate the dual-process account. It remains unknown, however, whether a strength interpretation can likewise account for those findings. Here we describe an ERP study, using a modified remember–know (RK) procedure, that allowed us to control for mnemonic strength. We find that ERPs for high and low mnemonic strength mimicked the electrophysiological distinction between R and K responses in a late positive component (LPC), 500–1000 msec poststimulus onset. Critically, when strength was contrasted with RK experience by comparing weak R to strong K responses, the electrophysiological signal tracked strength, not subjective RK experience. Invoking the LPC as support for dual-process accounts may, therefore, be amiss.
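As an illustration of the critical contrast, the sketch below computes mean amplitude in a 500–1000 msec LPC window and compares weak R with strong K trials; the data, sampling rate, and labels are simulated stand-ins, not the study's recordings.

```python
import numpy as np

sfreq, tmin = 250.0, -0.2              # sampling rate (Hz), epoch start (s)
rng = np.random.default_rng(0)
epochs = rng.normal(size=(120, 32, int(1.5 * sfreq)))  # (trials, channels, samples)
labels = rng.choice(["weak_R", "strong_K"], size=120)

def lpc_mean(epochs, t0=0.5, t1=1.0):
    """Mean amplitude in the LPC window, averaged over channels."""
    i0 = int((t0 - tmin) * sfreq)
    i1 = int((t1 - tmin) * sfreq)
    return epochs[:, :, i0:i1].mean(axis=(1, 2))

amp = lpc_mean(epochs)
# If the LPC indexes mnemonic strength rather than RK experience,
# strong K trials should show a larger LPC than weak R trials.
print("weak R  :", amp[labels == "weak_R"].mean())
print("strong K:", amp[labels == "strong_K"].mean())
```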
Journal of Cognitive Neuroscience (2014) 26 (11): 2469–2478.
Published: 01 November 2014
Abstract
Faces and bodies are processed by distinct category-selective brain areas. Neuroimaging studies have so far presented isolated faces and headless bodies, so little is known about whether and where faces and headless bodies are grouped into a single object, as they appear in the real world. The current study examined whether a face presented above a body is represented as two separate images or as an integrated face–body representation in face- and body-selective brain areas, using an fMRI competition paradigm. This paradigm has been shown to yield a higher fMRI response to sequential than to simultaneous presentation of multiple stimuli (the competition effect), indicating competitive interactions among simultaneously presented stimuli. We therefore hypothesized that if a face above a body is integrated into an image of a person, whereas a body above a face is represented as two separate objects, the competition effect will be larger for the latter than for the former. Consistent with this hypothesis, our findings reveal a competition effect when a body is presented above a face, but not when a face is presented above a body, suggesting that a body above a face is represented as two separate objects whereas a face above a body is represented as an integrated image of a person. Interestingly, this integration of a face and a body into an image of a person was found in the fusiform face and body areas, but not in the lateral-occipital face and body areas. We conclude that faces and bodies are processed separately at early stages and are integrated into a unified image of a person at mid-level stages of object processing.
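A minimal sketch of the competition index implied by this paradigm (sequential minus simultaneous response) is shown below; the condition names, subject counts, and signal values are hypothetical.

```python
import numpy as np

def competition_effect(sequential, simultaneous):
    """Positive values indicate competition among simultaneously shown stimuli."""
    return np.asarray(sequential) - np.asarray(simultaneous)

rng = np.random.default_rng(1)
# Percent signal change per subject: (sequential, simultaneous) presentations.
conditions = {
    "face above body": (rng.normal(1.00, 0.10, 20), rng.normal(0.98, 0.10, 20)),
    "body above face": (rng.normal(1.00, 0.10, 20), rng.normal(0.80, 0.10, 20)),
}
for cond, (seq, sim) in conditions.items():
    eff = competition_effect(seq, sim)
    print(f"{cond}: mean competition effect = {eff.mean():.2f}")
```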
Journal of Cognitive Neuroscience (2014) 26 (3): 490–500.
Published: 01 March 2014
Abstract
Target objects required for goal-directed behavior are typically embedded among multiple irrelevant objects that may interfere with their encoding. Most neuroimaging studies of high-level visual cortex have examined the representation of isolated objects; therefore, little is known about how surrounding objects influence the neural representation of target objects. To investigate the effect of clutter on the distributed responses to target objects in high-level visual areas, we used fMRI and manipulated the type of clutter. Specifically, target objects (a face and a house) were presented either in isolation, in the presence of homogeneous clutter (identical objects from another category; a "pop-out" display), or in the presence of heterogeneous clutter (different objects), while participants performed a target identification task. Using multivoxel pattern analysis (MVPA), we found that in the posterior fusiform object area heterogeneous but not homogeneous clutter interfered with decoding of the target objects. Furthermore, multivoxel patterns evoked by isolated objects were more similar to multivoxel patterns evoked by objects in homogeneous than in heterogeneous clutter in the lateral occipital and posterior fusiform object areas. Interestingly, clutter had no effect on the neural representation of the target objects in their category-selective areas, the fusiform face area and the parahippocampal place area. Our findings show that variation among irrelevant surrounding objects influences the neural representation of target objects in object-general areas, but not in object category-selective cortex, where the representation of target objects is invariant to their surroundings.
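The sketch below illustrates the kind of MVPA decoding analysis described, training a linear classifier to discriminate face- from house-evoked voxel patterns per clutter condition; the patterns are simulated, and the signal strengths merely mimic the reported pattern of results.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 200

def simulate_patterns(signal):
    """Simulated ROI patterns: category signal injected into 20 voxels."""
    X = rng.normal(size=(n_trials, n_voxels))
    y = np.repeat([0, 1], n_trials // 2)   # 0 = face, 1 = house
    X[y == 1, :20] += signal
    return X, y

for condition, signal in [("isolated", 0.8), ("homogeneous clutter", 0.7),
                          ("heterogeneous clutter", 0.1)]:
    X, y = simulate_patterns(signal)
    acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
    print(f"{condition}: decoding accuracy = {acc:.2f}")
```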
Journal of Cognitive Neuroscience (2011) 23 (3): 746–756.
Published: 01 March 2011
Abstract
The ventral visual cortex has a modular organization in which discrete, well-defined regions respond much more strongly to certain object categories (e.g., faces, bodies) than to others. The majority of previous studies have examined the response of these category-selective regions to isolated images of preferred or nonpreferred categories. Thus, little is known about how these regions represent more complex visual stimuli that include both preferred and nonpreferred items. Here we examined whether glasses (nonpreferred) modify the representation of simultaneously presented faces (preferred) in the fusiform face area. We used an event-related fMR-adaptation paradigm in which faces were presented with glasses either on or above the face while subjects performed a face or a glasses discrimination task. Our findings show that the sensitivity of the fusiform face area to glasses was maximal when the glasses were presented on the face rather than above it, and during a face discrimination task rather than a glasses discrimination task. These findings suggest that nonpreferred stimuli may significantly modify the representation of preferred stimuli, even when they are task irrelevant. Future studies will determine whether this interaction is specific to faces or also holds for other object categories in category-selective areas.
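For clarity, here is a minimal sketch of the adaptation contrast underlying such a design: sensitivity to a glasses change is indexed by the release from adaptation (response to different minus same glasses) per condition. Condition names and values are hypothetical.

```python
import numpy as np

def adaptation_release(resp_different, resp_same):
    """Larger values = greater sensitivity to the changed attribute."""
    return np.mean(resp_different) - np.mean(resp_same)

rng = np.random.default_rng(3)
# Per-subject FFA responses: (different glasses, same glasses) trials.
conditions = {
    "glasses on face, face task":    (rng.normal(1.10, 0.05, 18), rng.normal(0.90, 0.05, 18)),
    "glasses above face, face task": (rng.normal(0.98, 0.05, 18), rng.normal(0.95, 0.05, 18)),
}
for cond, (diff, same) in conditions.items():
    print(f"{cond}: release from adaptation = {adaptation_release(diff, same):.2f}")
```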
Journal of Cognitive Neuroscience (2006) 18 (4): 580–593.
Published: 01 April 2006
Abstract
It is well established that faces are processed by mechanisms that are not used with other objects. Two prominent hypotheses have been proposed to characterize how information is represented by these special mechanisms. The spacing hypothesis suggests that face-specific mechanisms primarily extract information about the spacing among parts rather than about the shape of the parts. In contrast, the holistic hypothesis suggests that faces are processed as nondecomposable wholes, so that both the parts and the spacing among them are integral aspects of face representation. Here we examined these hypotheses by testing a group of developmental prosopagnosics (DPs), who suffer from deficits in face recognition. Subjects performed a face discrimination task with faces that differed either in the spacing of the parts but not the parts themselves (spacing task), or in the parts but not their spacing (part task). Consistent with the holistic hypothesis, DPs performed worse than controls on both the spacing and the part tasks, as long as salient contrast differences between the parts were minimized. Furthermore, by presenting similar spacing and part tasks with houses, we tested whether face-processing mechanisms are specific to faces or are used to extract spacing information from any stimulus. DPs' normal performance on the two house tasks indicates that their deficit does not result from impairment in a general-purpose spacing mechanism. In summary, our data clearly support the holistic hypothesis by showing that face-specific perception mechanisms extract both part and spacing information.
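A minimal sketch of the group comparison implied by this design follows; the accuracies are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
# Simulated discrimination accuracies for DPs and controls on each task.
tasks = {
    "face spacing task": (rng.normal(0.70, 0.06, 14), rng.normal(0.85, 0.05, 14)),
    "face part task":    (rng.normal(0.72, 0.06, 14), rng.normal(0.88, 0.05, 14)),
    "house tasks":       (rng.normal(0.86, 0.05, 14), rng.normal(0.87, 0.05, 14)),
}
for task, (dp, ctrl) in tasks.items():
    t, p = ttest_ind(dp, ctrl)
    print(f"{task}: DPs {dp.mean():.2f} vs controls {ctrl.mean():.2f} "
          f"(t = {t:.2f}, p = {p:.4f})")
```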
Journal of Cognitive Neuroscience (2003) 15 (3): 462–474.
Published: 01 April 2003
Abstract
Studies in healthy individuals and split-brain patients have shown that facial information from the left visual field (LVF) is represented better than facial information from the right visual field (RVF). To investigate the neurophysiological basis of this LVF superiority in face perception, we recorded event-related potentials (ERPs) to centrally presented face stimuli in which relevant facial information was present bilaterally (B faces) or only in the left (L faces) or the right (R faces) visual field. Behavioral findings showed the best performance for B faces and, in line with the LVF superiority, better performance for L than for R faces. Evoked potentials to B, L, and R faces at 100–150 msec poststimulus showed no evidence of asymmetric transfer of information between the hemispheres at early stages of visual processing, suggesting that this factor is not responsible for the LVF superiority. Neural correlates of the LVF superiority were manifested, however, in a shorter latency of the face-specific N170 component to L than to R faces and in a larger amplitude to L than to R faces at 220–280 and 400–600 msec over both hemispheres. These ERP amplitude differences between L and R faces covaried across subjects with the extent to which the N170 was larger over the right than the left hemisphere. We conclude that the two hemispheres exchange information symmetrically at early stages of face processing and together generate a shared facial representation, which is better when facial information is presented directly to the right hemisphere (RH; L faces) than to the left hemisphere (LH; R faces), and best when both hemispheres receive facial information (B faces).
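To illustrate the latency measure behind the N170 comparison, the sketch below extracts peak latency and amplitude from a simulated waveform with a negative deflection near 170 msec; the window bounds and data are illustrative only.

```python
import numpy as np

sfreq, tmin = 500.0, -0.1
times = np.arange(int(0.6 * sfreq)) / sfreq + tmin
rng = np.random.default_rng(5)
# Simulated ERP: noise plus a negative deflection near 170 msec.
erp = rng.normal(0, 0.3, times.size) - 4.0 * np.exp(-((times - 0.17) / 0.02) ** 2)

def n170_peak(erp, times, window=(0.13, 0.20)):
    """Return (latency, amplitude) of the most negative point in the window."""
    mask = (times >= window[0]) & (times <= window[1])
    i = np.argmin(erp[mask])
    return times[mask][i], erp[mask][i]

lat, amp = n170_peak(erp, times)
print(f"N170 latency = {lat * 1000:.0f} msec, amplitude = {amp:.1f} uV")
```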