Abstract

Face processing in the human brain recruits a widespread cortical network based mainly in the ventral and lateral temporal and occipital lobes. However, the extent to which activity within this network is driven by different face properties versus being determined by the manner in which faces are processed (as determined by task requirements) remains unclear. We combined a functional magnetic resonance adaptation paradigm with three target detection tasks, in which participants had to detect a specific identity, emotional expression, or direction of gaze, while the task-irrelevant face properties varied independently. Our analysis focused on differentiating the influence of task demands and the processing of stimulus changes within the neural network underlying face processing. Results indicated that the fusiform and inferior occipital gyri do not respond simply as a function of stimulus change (such as a change in identity); rather, their activity depends on the task demands. Specifically, we hypothesize that activation is determined by whether the task encourages a configural or a featural processing strategy. Our results for the superior temporal sulcus were even more specific, in that we found greater responses only to stimulus changes that may engage featural processing. These results contribute to our understanding of the functional anatomy of face processing and provide insights into possible compensatory mechanisms in face processing.

INTRODUCTION

Faces are complex objects that can convey a variety of important social information to the viewer. Even newborn infants prefer to view face stimuli (Farroni et al., 2005; Johnson, Dziurawiec, Ellis, & Morton, 1991) and, by adolescence, we become experts at recognizing faces (Mondloch, Maurer, & Ahola, 2006). Although there is a considerable literature on fMRI studies of face processing, only a few studies have systematically analyzed the effects of varying task demands on the neural processing of different face properties, such as identity, emotional expression, and eye gaze. In the present fMRI adaptation study, we examine the effects of different stimulus changes (identity, expression, gaze) and different task demands (detecting a change in identity, expression, or gaze) on the neural processing of faces. Specifically, we hypothesized that the cortical processing of face stimuli will be influenced by task demands. That is, although it has often been reported in the literature that different cortical areas respond to changes in different aspects of faces (Haxby, Hoffman, & Gobbini, 2000), we propose that such differential responses are equally dependent on task-dependent processing strategies.

The processing of faces is thought to vary in at least two different respects: (1) the processing of invariant versus changeable aspects of faces, and (2) featural versus configural processing. The former distinction derives from a seminal cognitive model of face processing by Bruce and Young (1986), who proposed distinct pathways for processing facial identity versus emotional expression and eye gaze. This model is supported by behavioral (e.g., Calder, Young, Keane, & Dean, 2000) and neuropsychological studies (Young, Newcombe, de Haan, Small, & Hay, 1993, but see Calder & Young, 2005), and served as the basis for an important distinction introduced by Haxby et al. (2000), between the processing of invariant versus changeable face properties. Changeable properties refer to properties such as eye gaze and expression that can vary on different encounters with a person, whereas invariant properties include those that are necessary to identify the person. Haxby et al. mapped this distinction onto the brain, hypothesizing that the processing of invariant face properties is supported by the fusiform face area (FFA) (Kanwisher, McDermott, & Chun, 1997), whereas the processing of changeable face properties is supported by the superior temporal sulcus (STS). The occipital face area was assumed to support earlier stages of face processing that provide the common input to the FFA and the STS. This model, because of its explicit mappings of face properties to brain areas, has generally become the current standard view in the field.

The second processing distinction was introduced by Carey and Diamond (1977). Configural information describes the interrelationship between different face parts (e.g., eyes, nose, and mouth). This focus on the interrelationship of face parts stands in contrast to the feature-based strategy normally thought to be employed for unfamiliar object processing (Maurer, Le Grand, & Mondloch, 2002). The configural–featural distinction has been supported by various behavioral experiments (e.g., Thompson, 1980; Yin, 1969). It has been suggested that recognizing the identity of a face may depend more heavily on configural processing (Calder & Young, 2005; Calder et al., 2000), whereas gaze detection tasks may bias toward featural processing strategies because detection of eye gaze can be achieved through the analysis of local features (Mondloch, Le Grand, & Maurer, 2002). In contrast, emotional expressions may differ both in face part changes, such as raised eyebrows or a wide-open mouth, and in configural relations, and thus engage both types of processing (Durand, Gallay, Seigneuric, Robichon, & Baudouin, 2007; Mondloch, Geldart, Maurer, & Le Grand, 2003).

The current study examines the effect of task demands on face-processing strategies by combining three types of face change (identity, expression, and gaze) with three types of detection tasks (identity, expression, or gaze), within the context of an fMRI adaptation paradigm. fMRI adaptation is based on a property of neuronal population responses, namely, a reduced response to repeated presentation of the same stimulus characteristic (to which the neurons are tuned) relative to presentation of a different stimulus characteristic. These neural adaptation effects are believed to cause a corresponding decrease in the BOLD signal recorded by fMRI (Grill-Spector, Henson, & Martin, 2006; Sawamura, Orban, & Vogels, 2006; Grill-Spector & Malach, 2001; Kourtzi & Kanwisher, 2000). We presented blocks of face stimuli that shared one or more face properties, for instance, the same identity and/or same expression and/or same gaze direction (Figure 1). Using an adaptation paradigm, we hoped to further differentiate the response of neural populations within the face-processing network with regard to different stimulus changes and task demands. If regional activation is determined by stimulus properties rather than task demands, as a reading of the current literature would suggest, then we should see similar stimulus-dependent patterns of activation and adaptation across all three tasks. More precisely, as each task consists of systematic variations of the three face properties, this should result in comparable response patterns in the corresponding specialized regions.
However, if this is not observed [and task demands are the main determinants for neural activation patterns (overall hypothesis)], then there are at least two possibilities to further differentiate the task-dependent activation: (1) According to the invariant versus changeable face-processing account, brain regions such as the FFA that are sensitive to the invariant face properties (i.e., identity) should show greater activity when those properties vary than when they are constant. Namely, in the FFA, identity changes should show an overall greater activation than expression changes. Conversely, brain regions sensitive to changeable properties of faces (e.g., expression or gaze), such as the STS, should show greater activity when changeable properties vary than when they are constant. That is, in the STS, expression changes should show an overall greater activation than identity changes. (2) On the other hand, according to the configural versus featural face-processing account, tasks that rely more on configural processing (e.g., identity and, to some extent also, expression tasks) should result in greater responses to varying expressions or varying identities in brain regions that are configural-sensitive (e.g., FFA). In contrast, tasks that bias toward featural processing (e.g., gaze task) should not. For example, the FFA should show greater activity for identity changes in the expression task and expression changes in the identity task, as compared to identity and expression changes in the gaze task. Conversely, brain regions, such as the STS, that are involved in featural processing should only show greater activity for featural changes (such as expression) in tasks that encourage featural processing (e.g., gaze task) than for those that do not (e.g., identity task). 
Therefore, because emotional expressions can also be processed using a featural strategy (when looking predominantly at the eye region in top-heavy emotions; Leppaenen & Hietanen, 2007), we would expect to find increased neural responses to other featural face properties, such as gaze changes in the expression task.

Figure 1. 

Example adaptation trial for the three tasks. The target stimulus is depicted on the right for illustrative purposes; note, however, that it could occur at any random point in the block.


METHODS

Participants

Fourteen participants (average age = 25.5 years, range = 19 to 37 years, 9 women, 1 left-handed) were recruited from an academic environment. All participants had normal or corrected-to-normal vision and reported no history of neurological illness. Participants received monetary compensation (£10 per hour) for participating in the experiment. The study was approved by the local ethics committee (Department of Psychology, Birkbeck College and University College London) and each participant gave informed consent before participating in the study.

Stimuli

A stimulus set was created from 27 color photographs taken under standard lighting conditions [3 women × 3 emotional expressions (happy, angry, neutral) × 3 directions of eye gaze (right, left, direct) (Figure 1)]. We chose to use only female faces in order to keep any task-irrelevant variation to a minimum, as it has been shown that sex changes influence identity processing (Ganel & Goshen-Gottstein, 2002).1 All pictures were cropped to show the face in frontal view and to exclude the neck and hair of the person; low-level differences between the face stimuli were minimized by matching the means and standard deviations of the histograms for each RGB channel using Adobe Photoshop 7 [Mean (SD): R = 52.2 (3.3); G = 36.2 (1.6); B = 24.4 (1.7)]. The stimulus size of 6.3 × 7 cm corresponded to a visual angle of 9.5° × 11° when presented to the participants in the scanner. The stimuli were presented on a dark gray background of a computer screen. Experimental procedure and stimulus presentation were controlled using Matlab (MathWorks, Natick, MA).
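
The color-matching check can be illustrated with a short sketch. The following Python code is our own illustration (the original adjustment was done manually in Photoshop, and `channel_stats` is a hypothetical helper); it computes the per-channel mean and standard deviation that were matched across stimuli:

```python
import statistics

def channel_stats(pixels):
    """Per-channel mean and population SD for a list of (R, G, B) pixels --
    the statistics that were matched across stimuli (hypothetical helper)."""
    channels = list(zip(*pixels))  # -> (all R values, all G values, all B values)
    return [(statistics.mean(c), statistics.pstdev(c)) for c in channels]

# Toy "image" of three pixels, roughly in the range of the reported red mean.
stats = channel_stats([(50, 36, 24), (52, 36, 24), (54, 36, 24)])
assert stats[0][0] == 52    # red-channel mean
assert stats[1] == (36, 0)  # green channel is constant here
```

In the actual study this check would be run over every pixel of each cropped photograph, and images adjusted until the channel statistics agreed.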

Experimental Procedure

The participants were required to detect a specified target in a stream of consecutively presented standard stimuli (in the identity task, participants had to detect a specific identity; in the expression task, a happy face; in the gaze task, a face with direct gaze). Each task was performed in a separate session; the experiment therefore consisted of three sessions, all of which took place on the same day. The order of the tasks was counterbalanced across participants. At the beginning of each task, a short message (10 sec) informed the participants of the relevant dimension to attend to (e.g., “identity task,” “expression task,” “gaze task”). The same set of stimuli was used for all three tasks, but the stimulus presentation order varied, as the choice of target depended on the particular task. That is, one of the three identities served as the target stimulus in the identity task, whereas the two remaining identities served as standard stimuli. The same was true for the other two tasks. In order to achieve maximum experimental control, the same target stimuli were used for each participant. We chose the target identity at random, as we did not have any a priori expectations regarding their differentiability.

Each stimulus was presented for 500 msec, with an interstimulus interval of 1 sec. The standard (nontarget) stimuli were arranged in mini-blocks of about 15 sec (Wenger, Visscher, Miezin, Petersen, & Schlaggar, 2004), containing, on average, nine standard stimuli (SD ± 2 standard stimuli) and one target stimulus (note that each session contained a small number of mini-blocks with zero or two targets). The number of standard stimuli in the mini-blocks was varied to disguise the different adaptation conditions and to minimize explicit strategies. In addition, the mini-blocks were presented successively without temporal gaps to give the impression of a constant flow of stimuli; this further disguised the mini-block structure. Target stimuli occurred in a pseudorandomized frequency in the mini-blocks, but targets never appeared before the presentation of at least five standard stimuli, thus ensuring sufficient time for adaptation to occur (note that the number and position of the targets was balanced over all conditions). Another reason for keeping the targets apart was that we tried to include as many adaptation periods as possible, thus ensuring maximum design efficiency. Each session consisted of about 30 mini-blocks. Finally, six periods of 10 sec of blank screen were inserted into each session, at randomly selected breaks between mini-blocks.
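
The mini-block construction described above can be sketched as follows. This Python snippet is a simplified illustration under the stated constraints (about nine standards per block, one target, at least five standards before the target); `make_mini_block` and its parameters are our own names, not from the original Matlab code:

```python
import random

def make_mini_block(standards, target, n_standard_mean=9, jitter=2,
                    min_lead=5, rng=None):
    """Assemble one mini-block: roughly `n_standard_mean` (+/- `jitter`)
    standard stimuli with a single target inserted no earlier than
    position `min_lead`, so adaptation can build up before the target."""
    rng = rng or random.Random()
    n_standards = n_standard_mean + rng.randint(-jitter, jitter)
    block = [rng.choice(standards) for _ in range(n_standards)]
    block.insert(rng.randint(min_lead, n_standards), target)
    return block

block = make_mini_block(standards=["id1", "id2"], target="id3",
                        rng=random.Random(0))
assert block.index("id3") >= 5   # at least five standards precede the target
assert block.count("id3") == 1
```

Concatenating such blocks without gaps, as in the experiment, yields the impression of one continuous stimulus stream while disguising the block boundaries.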

In order to make use of the fMRI adaptation technique (Grill-Spector et al., 2006), different properties of the standard stimuli either varied or were constant within a mini-block, to create 12 conditions in total (see Figure 1). For example, in the identity task, identity remained constant over trials within each mini-block, but emotional expression, eye gaze, both, or neither varied across trials. This resulted in a 3 (experimental task) × 4 (adaptation condition) semifactorial design with the following conditions:

  • (a) Identity task: zero change (−), expression change (E), gaze change (G), expression/gaze change (E/G);

  • (b) Expression task: zero change (−), identity change (I), gaze change (G), identity/gaze change (I/G);

  • (c) Gaze task: zero change (−), identity change (I), expression change (E), identity/expression change (I/E).

We note that the task-relevant face property itself was not varied throughout a mini-block (such a condition would have automatically yielded a stronger neural response, due to the task relevance of the varying face property, thus rendering a comparison with the other conditions meaningless) but only the task-irrelevant face properties (such as emotional expression and/or eye gaze in the identity task). This design was chosen because we were primarily interested in the influence of cognitive processing strategies on task-irrelevant face properties, rather than the comparison of task-irrelevant versus task-relevant face properties. Nevertheless, it is important to note that we included an identical condition in all three tasks [zero change (−)], which would serve as a baseline condition for comparisons within tasks and allow for a comparison of changes in neural activation across tasks.
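
The resulting 3 × 4 semifactorial design can be made explicit programmatically. The sketch below (Python, with our own variable names) enumerates, for each task, the zero-change baseline plus the three task-irrelevant change conditions:

```python
# The three tasks and, for each, the two task-irrelevant face properties
# that may vary within a mini-block (names are ours, not from the
# original Matlab code).
TASKS = {
    "identity":   ("expression", "gaze"),
    "expression": ("identity", "gaze"),
    "gaze":       ("identity", "expression"),
}

def conditions_for(task):
    a, b = TASKS[task]
    # zero change, change in one property, the other, or both
    return ["-", a, b, f"{a}/{b}"]

design = {task: conditions_for(task) for task in TASKS}
assert design["identity"] == ["-", "expression", "gaze", "expression/gaze"]
assert sum(len(v) for v in design.values()) == 12  # 3 tasks x 4 conditions
```

Note that the task-relevant property never appears among a task's change conditions, mirroring the design decision explained above.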

fMRI Experiment

We used a Siemens 1.5-T Avanto MRI scanner (Siemens, Erlangen, Germany) to acquire gradient echo-planar images (29 oblique slices covering the occipital, temporal, and most of the parietal lobes; TR = 2500 msec; TE = 50 msec; flip angle = 90°; field of view = 192 mm × 192 mm; voxel size: 3 × 3 × 4.5 mm). Following the functional scans, a T1-weighted structural image (1 mm3 resolution) was acquired for coregistration and display of the functional data.

Data Analysis

Data were analyzed using SPM5 (Wellcome Department of Imaging Neuroscience, London; www.fil.ion.ucl.ac.uk/spm5). Echo-planar imaging volumes were spatially realigned to correct for movement artifacts, normalized to the Montreal Neurological Institute (MNI) standard space (Ashburner & Friston, 2003a, 2003b), and smoothed using an 8-mm Gaussian kernel. Statistical analysis was performed in two stages. In the first stage, we computed a General Linear Model (GLM) with 15 regressors, one for each condition in the semifactorial design (3 tasks × 4 adaptation conditions) plus one for target trials in each of the three tasks. (Note that in the current study, we were interested in the activation to the standard trials and did not analyze the target trials, because the response to these targets involves confounding attentional and motor-related effects.) Each mini-block was modeled as an epoch of 12 sec and convolved with a canonical hemodynamic response function. Because of the short SOA, this means that the regressors for the conditions of interest effectively model the mean response during a mini-block (bar target trials). This is likely to be a sufficient measure of any fMRI adaptation that occurred during mini-blocks because the inclusion of additional regressors to capture linear or quadratic changes across trials within each mini-block did not add significant findings (in other words, adaptation appeared to saturate quickly, most likely within the first few trials). To account for (linear) residual movement artifacts, the model also included six further regressors representing the rigid-body parameters estimated during realignment. Voxelwise parameter estimates for these regressors were obtained by restricted maximum-likelihood estimation using a temporal high-pass filter (cutoff = 128 sec) to remove low-frequency drifts, and modeling temporal autocorrelation across scans with an AR(1) process.
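
The epoch regressors can be illustrated as follows: a 12-sec boxcar per mini-block, sampled at the TR and convolved with a canonical HRF. The Python sketch below uses a simplified double-gamma HRF rather than SPM5's exact implementation, and all function and parameter names are ours:

```python
import math

TR = 2.5  # sec, as in the scanning protocol

def hrf(t, peak=6.0, under=16.0, ratio=6.0):
    """Simplified double-gamma approximation to a canonical HRF
    (illustrative only; not SPM5's exact implementation)."""
    if t <= 0:
        return 0.0
    gamma_pdf = lambda t, a: t ** (a - 1) * math.exp(-t) / math.gamma(a)
    return gamma_pdf(t, peak) - gamma_pdf(t, under) / ratio

def epoch_regressor(onsets, duration, n_scans, tr=TR):
    """Boxcar epochs convolved with the HRF, sampled once per scan."""
    times = [i * tr for i in range(n_scans)]
    box = [1.0 if any(o <= t < o + duration for o in onsets) else 0.0
           for t in times]
    kernel = [hrf(i * tr) for i in range(int(32 / tr))]  # ~32-sec kernel
    return [sum(box[i - k] * kernel[k]
                for k in range(min(i + 1, len(kernel))))
            for i in range(n_scans)]

# One 12-sec mini-block starting at t = 0, modeled over 20 scans (50 sec).
reg = epoch_regressor(onsets=[0.0], duration=12.0, n_scans=20)
assert reg[0] == 0.0   # hemodynamic response lags the epoch onset
assert max(reg) > 0.0
```

One such regressor per condition, plus the six realignment-parameter regressors, would form the columns of the first-level design matrix.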

Images of these parameter estimates comprised the data for a second GLM that treated participants as the only random effect. This GLM included only the 12 conditions of interest, using a single pooled error estimate, whose nonsphericity was estimated using restricted maximum-likelihood estimation as described in Friston et al. (2002).

ROI Analysis

The ROI analysis was performed using MarsBaR (http://marsbar.sourceforge.net/). First, we used the statistical parametric map (SPM) for the average response to the 12 adaptation conditions (vs. the implicit blank screen periods), thresholded at p < .05 clusterwise corrected, to localize the functional ROIs. Note that this contrast, used to identify the ROIs, is orthogonal to the subsequent contrasts between conditions. We chose to include the localizer condition in the experimental design in order to obtain increased statistical efficiency (Friston, Rotshtein, Geng, Sterzer, & Henson, 2006). We then selected three clusters corresponding to regions of theoretical interest in the right hemisphere (Pitcher, Walsh, Yovel, & Duchaine, 2007): the anterior fusiform gyrus (antFG) (MNI coordinates: +39, −45, −18), the inferior occipital gyrus (infOG) (+42, −78, −9), and the STS (+60, −54, +9) (for additional analyses in corresponding ROIs in the left hemisphere, as well as further ROIs such as the bilateral amygdalae, see Appendix). It should be noted that although our MNI coordinates for the antFG and the infOG are in close proximity to the often-reported FFA and occipital face area regions (Rossion et al., 2003; Kanwisher et al., 1997), we will continue to refer to them as antFG and infOG to avoid possible confusion. The parameter estimates were averaged across voxels within a 6-mm spherical volume centered on these maxima, and entered into a two-factor repeated measures, Huynh–Feldt-corrected analysis of variance (ANOVA), with the factors task (3 levels) and adaptation condition (4 levels). In the case of a significant interaction between task and adaptation condition, we conducted simple effects analyses (Keppel, 1991), which, in the case of significant simple main effects of task, were followed by t tests in order to identify the source of the effect.
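
The spherical-ROI averaging step can be sketched as follows. This toy Python example is our illustration (not MarsBaR code; `sphere_average` is a hypothetical name): it averages parameter estimates over voxel centers within 6 mm of a peak coordinate.

```python
import math

def sphere_average(betas, peak, radius=6.0):
    """Average parameter estimates over voxels whose centers (in mm) lie
    within `radius` mm of the peak -- a stand-in for MarsBaR's
    spherical-ROI extraction."""
    inside = [b for xyz, b in betas.items()
              if math.dist(xyz, peak) <= radius]
    return sum(inside) / len(inside)

# Toy voxel grid: a 3-mm lattice of betas around the antFG peak (39, -45, -18).
betas = {(39 + dx, -45 + dy, -18 + dz): 1.0
         for dx in (-3, 0, 3) for dy in (-3, 0, 3) for dz in (-3, 0, 3)}
assert sphere_average(betas, peak=(39, -45, -18)) == 1.0
```

In the actual analysis this average would be computed per participant and condition, yielding the values entered into the 3 × 4 repeated measures ANOVA.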

RESULTS

Behavioral Data

Median RTs were calculated for correct trials only.2 These were subjected to a two-way repeated measures ANOVA with the factors task (3 levels) and adaptation condition (4 levels). Neither the main effects nor the interaction reached significance (all Fs < 1.41, all ps > .23). The same pattern was obtained for the accuracy rates (all Fs < 1, all ps > .3). These results suggest that the three tasks were of comparable difficulty, as RTs and accuracy rates did not differ across adaptation conditions or tasks.

fMRI Data

Our fMRI analysis consisted of two steps: a whole-brain analysis that looked for overall task-related differences, and an ROI analysis that looked for specific adaptation effects in selected regions. These adaptation effects were investigated by looking for differences between adaptation conditions within each task (stimulus-dependent changes) and for differences in the same adaptation conditions across tasks (task-dependent changes).

Whole-brain Analysis

The whole-brain analysis yielded widespread activation in both occipital lobes and the ventral temporal stream, as well as the STS in the right hemisphere, relative to baseline periods within each task (Figure 2; identity task = red, expression task = blue, gaze task = green). Activation for the three tasks was largely overlapping, but additional regions (parts of the right precentral gyrus, right insula, right superior and inferior temporal gyri, right fusiform gyrus, and left medial occipital gyrus; Figure 3) were activated in the emotional expression task only. This was confirmed by planned contrasts of each task versus the average of the other two (Table 1).

Figure 2. 

Task-related activation found in the whole-brain analysis. Activation maps indicate regions where the average response was higher for the identity task (red), the expression task (blue), or the gaze task (green) in comparison to baseline (refer to the legend for the color coding of the overlapping activation). The maps are thresholded at uncorrected height threshold of p < .001, and an extent threshold of p < .05, corrected. Note that the right hemisphere is depicted on the right side.


Figure 3. 

Task-related activation in the whole-brain analysis (expression task only) in the right superior temporal gyrus (45, −57, 12), right insula (36, 24, 6), right fusiform gyrus (42, −45, −15), and the left medial occipital gyrus (−48, −75, 9). The maps are thresholded at uncorrected height threshold of p < .001, and an extent threshold of p < .05, corrected. Note that the right hemisphere is depicted on the right side.


Table 1. 

Significantly Activated Clusters in the Whole-brain Analysis for Each Task

Region of Activation   L/R   MNI (x y z)    Cluster Size   Z

Identity Task vs. Mean of Expression and Gaze Tasks
No significant activation

Expression Task vs. Mean of Identity and Gaze Tasks
Precent. G             R     54 6 15        187            4.55
Insula                 R     36 24 6                       4.16
STG                    R     57 −42 9       84             4.38
STG                    R     45 −57 12                     3.76
MOG                    L     −48 −75 9      282            4.16
FG                     R     42 −45 −15     67             3.97
ITG                    R     51 −57 −15                    3.47

Gaze Task vs. Mean of Identity and Expression Tasks
No significant activation

Random effect analysis, height threshold p < .001 (uncorrected), extent threshold p < .05 (corrected). L/R = left/right hemisphere; MNI = Montreal Neurological Institute; Precent. G = precentral gyrus; STG = superior temporal gyrus; MOG = medial occipital gyrus; FG = fusiform gyrus; ITG = inferior temporal gyrus.

ROI Analysis

In all three functionally defined ROIs, the main effects of task and adaptation condition, as well as the interaction between task and adaptation condition, reached significance, with the exception of the interaction in the STS, which only approached significance (Table 2). Moreover, we did not find significant differences between tasks for the zero change (−) baseline condition in any ROI (all Fs < 1, all ps > .05). Below, we consider each ROI in turn. Here, p values associated with all planned comparisons are two-tailed.

Table 2. 

ROI Analysis: Main Effects of Task, Adaptation Condition, and Their Interaction

ROI     MNI (x y z)    Main Effect: Task           Main Effect: Adaptation Condition   Interaction: Task × Adaptation Condition
STS     60, −54, 9     F(2, 26) = 4.04, p = .031   F(3, 39) = 4.55, p = .019           F(6, 78) = 2.02, p = .079
antFG   39, −45, −18   F(2, 26) = 5.00, p = .015   F(3, 39) = 6.06, p = .002           F(6, 78) = 2.47, p = .044
infOG   42, −78, −9    F(2, 26) = 4.83, p = .021   F(3, 39) = 6.21, p = .001           F(6, 78) = 2.49, p = .050

MNI = Montreal Neurological Institute; STS = superior temporal sulcus; antFG = anterior fusiform gyrus; infOG = inferior occipital gyrus. All regions depicted are in the right hemisphere.

All p values are Huynh–Feldt corrected.

Anterior Fusiform Gyrus

The simple main effect for adaptation condition was significant in all three tasks (Table 3, Figure 4).

Table 3. 

ROI Analysis: Simple Main Effects of Adaptation Condition within Each Task

Task              ROI     MNI (x y z)    Simple Main Effect
Identity task     STS     60, −54, 9     No significant effects
                  antFG   39, −45, −18   F(3, 39) = 3.62, p = .021
                  infOG   42, −78, −9    F(3, 39) = 2.79, p = .061
Expression task   STS     60, −54, 9     F(3, 39) = 7.31, p = .001
                  antFG   39, −45, −18   F(3, 39) = 5.41, p = .003
                  infOG   42, −78, −9    F(3, 39) = 7.66, p < .001
Gaze task         STS     60, −54, 9     No significant effects
                  antFG   39, −45, −18   F(3, 39) = 3.11, p = .040
                  infOG   42, −78, −9    F(3, 39) = 2.87, p = .049

MNI = Montreal Neurological Institute; STS = superior temporal sulcus; antFG = anterior fusiform gyrus; infOG = inferior occipital gyrus. All regions depicted are in the right hemisphere.

All p values are Huynh–Feldt corrected.

Figure 4. 

Top: Location of the ROI in the right antFG projected onto an average brain template (MNI coordinates x, y, z: 39, −45, −18). Bottom: Two-way interaction between Task × Adaptation condition. Average activation is shown for each adaptation condition in each task separately; significant differences between conditions are indicated with a star. The letters in a particular adaptation condition (“−” = zero change, “I” = identity, “E” = emotional expression, “G” = gaze) stand for the face property (or properties) that changed while the others were repeated. Solid parentheses indicate significant activation differences for adaptation conditions within tasks; dotted parentheses indicate significant activation differences for adaptation conditions across tasks.


Identity task

In the identity task, the antFG showed greater responses to changes in emotional expression [t(13) = 3.57, p = .003] relative to the no-change condition.

Expression task

In the expression task, the antFG responded to changes in identity [t(13) = 3.19, p = .007], gaze [t(13) = 4.04, p = .001], and their combined change [t(13) = 3.58, p = .003].

Gaze task

In the gaze task, the antFG responded to changes in identity [t(13) = 2.26, p = .042] and to the combined change in identity and emotional expression [t(13) = 2.77, p = .02].

Across-task comparisons

Additional comparisons across tasks for each adaptation condition revealed that gaze changes yielded increased activation in the expression task in comparison to the identity task [t(13) = 3.68, p = .003].

Inferior Occipital Gyrus

The simple main effect of adaptation condition approached significance in the identity task and was significant in the expression and gaze tasks (Table 3, Figure 5).

Figure 5. 

Top: Location of the ROI in the right infOG projected onto an average brain template (MNI coordinates x, y, z: 42, −78, −9). Bottom: Two-way interaction between Task × Adaptation condition. Average activation is shown for each adaptation condition in each task separately; significant differences between conditions are indicated with a star. The letters in a particular adaptation condition (“−” = zero change, “I” = identity, “E” = emotional expression, “G” = gaze) stand for the face property (or properties) that changed while the others were repeated. Solid parentheses indicate significant activation differences for adaptation conditions within tasks; dotted parentheses indicate significant activation differences for adaptation conditions across tasks.


Identity task

In the identity task, the infOG responded to changes in emotional expression only [t(13) = 2.89, p = .013].

Expression task

In the expression task, the infOG responded to changes in identity [t(13) = 3.18, p = .007] and gaze [t(13) = 5.67, p < .001], as well as to their simultaneous change [t(13) = 3.82, p = .002].

Gaze task

In the gaze task, the infOG showed a trend for increased responses to changes in identity [t(13) = 2.00, p = .066].

Across-task comparisons

Across-task comparisons showed a significant increase in activation for emotional expression changes in the identity task in comparison to the gaze task [t(13) = 3.11, p = .008], as well as increased activation for gaze changes in the expression task in comparison to the identity task [t(13) = 3.56, p = .003].

Superior Temporal Sulcus

In the STS, only the simple main effect for adaptation condition for the expression task was significant (Table 3, Figure 6).

Figure 6. 

Top: Location of the ROI in the right STS projected onto an average brain template (MNI coordinates x, y, z: 60, −54, 9). Bottom: Two-way Task × Adaptation condition interaction. Average activation is shown for each adaptation condition in each task separately; significant differences between conditions are indicated with a star. The letters in a particular adaptation condition (“−” = no changes, “I” = identity, “E” = emotional expression, “G” = gaze) denote the face property(ies) that changed while the others were repeated.

Expression task

Further comparisons revealed that this simple main effect was due to significant responses to gaze changes [t(13) = 4.48, p = .001].

DISCUSSION

The current study investigated the extent to which activity in the neural face-processing network is driven automatically by different face stimulus properties versus being determined by the manner in which faces are processed as reflected by the participants' task demands. Although the prevailing view (as reviewed in the Introduction) suggests that brain responses would vary as a function of the varying face property (thus predicting that the three different tasks would yield comparable stimulus-dependent activation patterns, as all three face properties varied systematically in each task), our overall hypothesis postulated that brain responses would vary between the different tasks. Moreover, in a second step, we examined two sets of specific hypotheses, related to the strategies and processes engaged during the neural processing of faces within the face-processing network.

Whole-brain Analysis

The whole-brain analysis found overlapping activation to faces relative to baseline in the occipital lobes, the bilateral ventral temporal streams, and the right superior temporal lobe in all three tasks. The expression task yielded additional areas of activation, such as the right precentral gyrus, right insula, and right superior temporal gyrus, as well as the inferior temporal gyrus, the fusiform gyrus, and the medial occipital gyrus. Most of these areas were also activated in the other two tasks, although to a lesser extent; the insula activation, however, appears specific to emotional expression (Adolphs, 2002). Moreover, the precentral gyrus activation is in line with previous evidence demonstrating the involvement of premotor cortex in the perception and generation of emotionally expressive faces (Hennenlocher et al., 2005; Leslie, Johnson-Frey, & Grafton, 2003). Although our whole-brain analysis found similar activation patterns for all three tasks, we also found additional and more widespread activation for the expression task (see also Ishai, Pessoa, Bikle, & Ungerleider, 2004). This difference cannot easily be explained by task difficulty, as our behavioral data suggested that participants found all three tasks equally challenging. Rather, it is consistent with our overall hypothesis (although based on differential effects for the expression task only), which postulated that neural activity is largely driven by task rather than stimulus demands, thus predicting partially distinct activation patterns for each task. In a second step, therefore, we conducted a further, more detailed ROI analysis to elucidate the precise nature of this task-dependent activation, making use of the increased spatial resolution of neural adaptation methods.
This allowed us to determine whether the overall stronger neural responses in the expression task were due to less adaptation to repeated stimuli, a plausible interpretation. Another possible explanation is that the expression task engaged both configural and featural processing strategies, whereas the other two tasks engaged primarily one processing strategy.

ROI Analysis

Perhaps the most striking observation from our ROI analysis is the similarity in the response properties of the antFG and the infOG (note that this was, to some extent, also true for the left infOG, which exhibited activation patterns similar to those of the right infOG; see Appendix for a detailed analysis). For both regions, changes in expression in the identity task and changes in identity in the expression task elicited increased activity. Given prior evidence that the processing of identity and of expression may rely more on configural processing, this would be consistent with a role for the antFG and the infOG in configural processing. This would also explain why gaze changes did not elicit increased activity in these regions during the identity task: such featural changes would be ignored when adopting a largely configural processing strategy. However, in the expression task, both brain regions did show increased activity to gaze changes, independent of concurrent identity changes, and to a greater extent than in the identity task. As the expression task may engage both featural and configural processing strategies (Mondloch et al., 2003), this would suggest that the antFG and the infOG can be involved not only in configural processing but also in featural processing, when task demands require it. This conclusion is also supported by activation patterns observed in the across-task comparisons in both the antFG and the infOG. Namely, when participants' attention was directed to emotional expression in the expression task, neural activity increased for concurrent gaze changes in comparison to the same condition in the identity task. In addition, in the infOG, stronger activation was observed for expression changes in the identity task in comparison to the gaze task. The across-task comparisons therefore support our hypothesis that neural response patterns in a particular brain region are driven by the cognitive strategy afforded by the task.

These results for the antFG are not consistent with the prevailing theoretical view that the antFG responds primarily to invariant face properties such as identity, rather than to changeable properties such as expression. The antFG also showed activity in response to changes in identity in the gaze task, whereas the infOG showed a trend toward the same effect. This particular result is less clearly explained by the featural versus configural processing account, but we suggest that it may be due to the high saliency of identity changes, perhaps as a result of the lower attentional demands imposed by the gaze task. The STS exhibited a pattern of activity dramatically different from that of the infOG and the antFG, showing increased activity only to gaze changes in the expression task. This result is consistent with a role for this region in featural processing (Calder & Young, 2005). However, it is less clear why this increase in activity was not found when both gaze and identity changed in the expression task. A possible explanation might be that participants focused on changes in identity, rather than expression, in this specific condition. The lack of any differential activity between the different change conditions in the gaze task would also be consistent with the idea that identity and expression changes can be detected by configural processing. However, it is less clear how these STS results would be explained by the prevailing view that the STS responds to changeable face properties, in which case we should have observed increased activity to expression and/or gaze changes across all our tasks. One possible explanation might lie in the use of static face stimuli in the current study. Finally, the sensitivity of the STS to facial emotional expressions (and therefore to changeable face properties in general) has been shown less reliably in the literature (see Calder & Young, 2005 for a review).

The similarity between the antFG and the infOG in their pattern of significant activity changes suggests that, at least with our stimuli and task demands, these ROIs are part of a common processing system that responds flexibly in the context of different processing strategies or stimulus changes. This differs from the standard views as elaborated by Haxby et al. (2000) and Bruce and Young (1986), in which the infOG supports an initial stage of “structural encoding” of faces that is necessary for processing both invariant (in the antFG) and changeable (in the STS) face properties. One possibility is that their similar response profiles result from a feedback loop from the antFG to the infOG (Rotshtein, Vuilleumier, Winston, Driver, & Dolan, 2007; Rossion et al., 2003). Thus, the infOG may show less selective responses than the antFG at initial activation, but display more refined response properties at a later stage of processing (Rotshtein, Vuilleumier, et al., 2007), a dynamic tuning response that would be lost in the coarse temporal resolution of the fMRI response. It has been shown that the infOG receives reentrant input from several areas in the ventral-temporal network (inferior temporal gyrus, fusiform gyrus) (Rotshtein, Vuilleumier, et al., 2007). Therefore, we speculate that the infOG is involved in the integration of different face properties (e.g., identity and emotional expression) at a later processing stage. This idea receives support from the prosopagnosia literature (it should be noted that the patients reported in these studies included both developmental and acquired prosopagnosic patients, thus covering the entire spectrum of prosopagnosia patients tested so far), which shows that a lack of coherence between activity in the infOG and the antFG is a strong marker for difficulties in recognizing faces (DeGutis, Bentin, Robertson, & D'Esposito, 2007; Rossion et al., 2003).
This hypothesis has also recently been supported by a transcranial magnetic stimulation (TMS) study, which showed that TMS to the right infOG interfered with the integration of facial identity and emotional expression at a late processing stage (after 170 msec post stimulus onset), reflecting re-entrant feedback processing (Cohen Kadosh, Cohen Kadosh, Pitcher, Walsh, & Johnson, submitted).

Finally, in our study, the right amygdala showed significant activation in the identity task when expression, or expression and gaze, changed (see Appendix for a detailed analysis). This finding is in line with previous studies showing that the bilateral amygdalae modulate implicit emotional expression processing in visual areas, including the fusiform gyri (Vuilleumier, Richardson, Armony, Driver, & Dolan, 2004; Vuilleumier, Armony, Driver, & Dolan, 2003). This further supports our interpretation of task-dependent activation within these areas, which is modulated by feedforward and feedback connections within the face network. The present findings are also consistent with a previous fMRI adaptation study by Ganel, Valyear, Goshen-Gottstein, and Goodale (2005), in which the authors examined changes in facial identity or expression as a function of whether or not those changes were task-relevant. As in the present study, they found increased activity in the FFA when expression changed, regardless of task, suggesting that emotional expressions are processed by the FFA (in addition to identity) and are processed automatically (independent of task requirements). This pattern was consistent with their hypothesis that, in order to process the emotional expression of a face, it is important to first process its identity, because identity (i.e., facial structure) determines how emotion is interpreted. Although this hypothesis may be correct, our results suggest that activity in the antFG is more broadly determined by those changes in a face that are relevant to the specific processing strategy (e.g., featural vs. configural) adopted by participants for a given task.
Our finding of similar response patterns in the antFG and the infOG is also in line with a recent fMRI study (Rotshtein, Geng, Driver, & Dolan, 2007), which showed that individual face-processing skills (both configural and featural, as assessed in several separate tasks) correlated with neural activation in the right antFG, whereas activity in the right infOG correlated with configural processing only. The difference in the infOG might be due to differences in experimental design (our fMRI task did not test both processing strategies directly, but only indirectly via the adaptation patterns for task-irrelevant face properties), but it is important to note that we did not find differential adaptation effects in the infOG for the gaze task, which may rely more on featural processing.

The activity in response to gaze changes in the STS observed in the present study, at least in the expression task, is consistent with previous studies showing a role for the STS in detection of eye gaze (Calder et al., 2007; Mosconi, Macka, McCarthy, & Pelphrey, 2005; Winston, Henson, Fine-Goulden, & Dolan, 2004; Hoffman & Haxby, 2000). However, we note that there is considerable anterior–posterior anatomical variability along the STS across these different studies.

Implications for Future Studies

An important implication of the present study is that brain activity associated with face processing depends on the specific face-processing strategy adopted. This should be considered when comparing brain activity in people with normal face-processing skills with that in people showing evidence of atypical face processing due, for example, to developmental prosopagnosia or acquired brain damage. Differences (or similarities) in brain activity may relate to different preferred processing strategies, rather than to fundamentally different functional properties of different brain regions. The same caveat applies to studies of the normal development of face processing. Indeed, future application of the present paradigm to participants who have not yet acquired, or have failed to acquire, the full range of cognitive face-processing strategies (e.g., young children; Mondloch et al., 2003), or who suffer from developmental prosopagnosia (DeGutis et al., 2007), could provide further insights regarding different and perhaps compensatory strategies in such groups. This would help neuroimaging studies to trace the developmental trajectory of the functional anatomy of face processing, such as the emergence of functionally specialized brain regions (Cohen Kadosh & Johnson, 2007).

Conclusion

The current study provides data that offer an additional interpretation of the prevailing view of the functional anatomy of human face processing, in which different brain regions respond to different properties of the visual image of a face (such as invariant vs. changeable properties; e.g., Haxby et al., 2000). Instead, we would like to suggest a more complex situation in which the effect of face properties interacts with the task demands. Our findings are consistent with a recent approach suggestive of a more dynamic interpretation of neural activity in cortical regions (Gilbert & Sigman, 2007; Friston & Price, 2001), less dependent on the classical hierarchical organization view of the brain and its receptive fields. This approach suggests that a region's neural response properties should be interpreted with regard to (1) interactions of a particular region with other brain regions and (2) possible top–down influences, such as attentional and/or cognitive task demands. The results from our study provide support for top–down influences on the neural activation patterns in the face-processing network.

APPENDIX

Additional ROI Analysis

Four additional regions of theoretical interest were selected. As for the three ROIs reported in the main paper, we used the SPM for the average response to the 12 adaptation conditions (vs. the implicit blank screen periods), thresholded at p < .05, clusterwise corrected (using a height threshold of p < .001, uncorrected), to localize the functional ROIs (note that this contrast is orthogonal to subsequent contrasts between conditions; Friston et al., 2006). We then selected four clusters corresponding to regions of theoretical interest: the bilateral amygdalae (MNI coordinates: right amygdala, +21, −3, −12; left amygdala, −21, −3, −12), the left antFG (MNI coordinates: −39, −48, −18), and the left infOG (MNI coordinates: −39, −84, −9). In contrast to the right STS (see main paper), no activation survived the p < .05, clusterwise-corrected threshold in the left STS. The parameter estimates were averaged across voxels within a 6-mm spherical volume centered on these maxima and entered into a two-factor repeated measures ANOVA, with factors task (3 levels) and adaptation condition (4 levels). In the case of a significant interaction between task and adaptation condition, we conducted simple effects analyses to further explore the effects of adaptation condition within each task. The right amygdala showed a significant interaction between task and adaptation condition. In the left amygdala and the left antFG, none of the interactions between task and adaptation condition were significant, whereas the interaction effect in the left infOG approached significance. The main effects of task and adaptation condition were both significant in the left antFG and infOG (Table A1), but not in the left amygdala.
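The ROI extraction step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the voxel grid, beta values, and function names are hypothetical, and the selection simply keeps voxels within 6 mm of the ROI peak before averaging.

```python
import numpy as np

def sphere_mask(coords_mm, center_mm, radius_mm=6.0):
    """Boolean mask of voxels lying within a sphere around an ROI peak."""
    dist = np.linalg.norm(coords_mm - np.asarray(center_mm, float), axis=1)
    return dist <= radius_mm

def roi_means(betas, coords_mm, center_mm, radius_mm=6.0):
    """Average parameter estimates across in-sphere voxels.

    betas: (n_subjects, n_tasks, n_conditions, n_voxels) parameter estimates
    returns: (n_subjects, n_tasks, n_conditions) cell means for the ANOVA
    """
    mask = sphere_mask(coords_mm, center_mm, radius_mm)
    return betas[..., mask].mean(axis=-1)

# Toy 3-mm grid around the right amygdala peak (MNI 21, -3, -12)
grid = np.mgrid[12:33:3, -12:7:3, -21:0:3].reshape(3, -1).T.astype(float)
rng = np.random.default_rng(0)
betas = rng.normal(size=(14, 3, 4, grid.shape[0]))  # 14 subjects, 3 tasks, 4 conditions
cell_means = roi_means(betas, grid, center_mm=(21, -3, -12))
print(cell_means.shape)  # (14, 3, 4): one cell mean per subject, task, and condition
```

The resulting subject × task × condition cell means have exactly the layout that a two-factor repeated-measures ANOVA with factors task (3 levels) and adaptation condition (4 levels) expects.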

Table A1. 

ROI Analysis: Main Effects of Task, Adaptation Condition, and Their Interaction

ROI | MNI (x y z) | Main Effect: Task | Main Effect: Adaptation Condition | Interaction: Task × Adaptation Condition
R Amygdala 21 −3 −12 [F(2, 26) = 4.26, p = .051] [F(3, 39) = 2.65, p = .064] [F(6, 78) = 3.31, p = .006] 
L Amygdala −21 −3 −12 [F(2, 26) = 2.63, p = .105] [F(3, 39) = 1.05, p = .381] [F(6, 78) = 1.4, p = .227] 
L antFG −39 −48 −18 [F(2, 26) = 3.37, p = .050] [F(3, 39) = 3.9, p = .016] [F(6, 78) = 1.11, p = .366] 
L infOG −39 −84 −9 [F(2, 26) = 6.71, p = .004] [F(3, 39) = 6.3, p = .001] [F(6, 78) = 2.14, p = .069] 

MNI = Montreal Neurological Institute; R = right hemisphere, L = left hemisphere; antFG = anterior fusiform gyrus; infOG = inferior occipital gyrus.

All p values are Huynh–Feldt corrected.

Right Amygdala

Only the simple main effect for adaptation condition for the identity task was significant (Table A2, Figure A1).

Table A2. 

ROI Analysis: Simple Main Effect of Adaptation Condition within Each Task

Task | ROI | MNI (x y z) | Simple Main Effect
Identity task R Amygdala 21 −3 −12 [F(3, 39) = 6.8, p = .001] 
L infOG −39 −84 −9 [F(3, 39) = 2.9, p = .049] 
Expression task R Amygdala 21 −3 −12 No significant effects 
L infOG −39 −84 −9 [F(3, 39) = 3.08, p = .039] 
Gaze task R Amygdala 21 −3 −12 No significant effects 
L infOG −39 −84 −9 [F(3, 39) = 4.67, p = .012] 

MNI = Montreal Neurological Institute; R = right hemisphere, L = left hemisphere; infOG = inferior occipital gyrus.

All p values are Huynh–Feldt corrected.

Figure A1. 

Top: Location of the ROI in the right amygdala projected onto an average brain template (MNI coordinates x, y, z: 21, −3, −12). Bottom: Two-way Task × Adaptation condition interaction. Average activation is shown for each adaptation condition in each task separately; significant differences between conditions are indicated with a star. The letters in a particular adaptation condition (“−” = no changes, “I” = identity, “E” = emotional expression, “G” = gaze) denote the face property(ies) that changed while the others were repeated.

Identity task

Further comparisons revealed that this simple main effect was due to significant activation whenever the emotional expression changed [changes in emotional expression (t(13) = 4.3, p = .001), and combined changes in emotional expression and gaze (t(13) = 2.63, p = .021); all reported tests are two-tailed].
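The paired contrasts reported here and in the main text all take the same form: each change condition is compared against its baseline across the 14 participants, giving 13 degrees of freedom. A minimal sketch of such a two-tailed paired t statistic, using simulated per-subject betas (the values are hypothetical, not the study's data):

```python
import numpy as np

def paired_t(cond, baseline):
    """Paired t statistic and degrees of freedom for matched samples."""
    d = np.asarray(cond, float) - np.asarray(baseline, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical per-subject betas: change condition vs. zero-change baseline
rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 14)                  # 14 participants, as in the study
condition = baseline + 0.5 + rng.normal(0.0, 0.5, 14)
t, df = paired_t(condition, baseline)
print(df)  # 13, matching the t(13) values reported above
```

The two-tailed p value then follows from comparing t against a Student's t distribution with df = 13.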

Left Inferior Occipital Gyrus

The simple main effect for adaptation condition was significant in all three tasks (Table A2, Figure A2).

Figure A2. 

Top: Location of the ROI in the left infOG projected onto an average brain template (MNI coordinates x, y, z: −39, −84, −9). Bottom: Two-way Task × Adaptation condition interaction. Average activation is shown for each adaptation condition in each task separately; significant differences between conditions are indicated with a star. The letters in a particular adaptation condition (“−” = no changes, “I” = identity, “E” = emotional expression, “G” = gaze) denote the face property(ies) that changed while the others were repeated.

Identity task

Further analysis showed that, in the identity task, the infOG responded to changes in emotional expression [t(13) = 3.26, p = .006].

Expression task

Planned comparisons showed that changes in gaze [t(13) = 3.04, p = .009], as well as simultaneous changes in gaze and identity [t(13) = 2.85, p = .01], elicited increased activity.

Gaze task

The infOG showed increased activity for changes in identity [t(13) = 2.25, p = .043].

Across-task comparisons

Further analysis showed that emotional expression changes were significantly increased in the identity versus the gaze task [t(13) = 3.56, p = .003]. Moreover, gaze changes exhibited stronger activation in the expression task in comparison to the identity task [t(13) = 4.16, p = .001].

Acknowledgments

We thank Dr. Fani Deligianni for help with the programming of the experiment. K. C. K. is supported by a Marie Curie Fellowship (MEST-CT-2005-020725), R. N. A. H., F. D., and M. H. J. are supported by the Medical Research Council (WBSE U.1055.05.012.00001.01; MRC NIA G0400341; G9715587), and R. C. K. is supported by a Marie Curie Intra European Fellowship.

The authors declare no potential conflict of interest.

Reprint requests should be sent to Kathrin Cohen Kadosh, Centre for Brain and Cognitive Development, Birkbeck College, London WC1E 7JL, UK, or via e-mail: k.cohen_kadosh@bbk.ac.uk.

Notes

1. 

We note that, in order to avoid confounding memory demands in the identity task, we introduced only three different facial identities, and participants had sufficient time to familiarize themselves with the identity stimuli, as well as with the expression and gaze stimuli. It is important to point out that we found no significant performance differences between the tasks in either RT or accuracy, suggesting that they presented an equal level of difficulty to the participants. Moreover, the lack of any activation in memory-related areas is consistent with a lack of differential memory demands in the identity task.

2. 

Median RTs were analyzed because the number of trials per adaptation condition was fewer than 10 in some participants.
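A toy example (hypothetical RTs) illustrates this rationale: with few trials, a single slow response drags the mean far from the typical RT while barely moving the median.

```python
import numpy as np

# Hypothetical RTs (msec) for one adaptation condition: 8 trials, one slow outlier
rts = np.array([420, 450, 435, 460, 440, 455, 430, 1800])
print(np.mean(rts))    # 611.25 -- pulled upward by the single outlier
print(np.median(rts))  # 445.0 -- stays with the bulk of the distribution
```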

REFERENCES

REFERENCES
Adolphs
,
R.
(
2002
).
Recognizing emotion from facial expressions: Psychological and neurological mechanisms.
Behavioural and Cognitive Neuroscience Reviews
,
1
,
21
62
.
Ashburner
,
J.
, &
Friston
,
K. J.
(
2003a
).
Rigid body transformation.
In R. S. Frakoviak, K. J. Friston, C. Frith, R. J. Dolan, C. Price, S. Zeki, et al. (Eds.),
Human brain function
(2nd ed., pp.
635
654
).
Oxford
:
Academic Press
.
Ashburner
,
J.
, &
Friston
,
K. J.
(
2003b
).
Spatial normalization using basis functions.
In R. S. Frakoviak, K. J. Friston, C. Frith, R. J. Dolan, C. Price, S. Zeki, et al. (Eds.),
Human brain function
(2nd ed., pp.
655
672
).
Oxford
:
Academic Press
.
Bruce
,
V.
, &
Young
,
A.
(
1986
).
Understanding face recognition.
British Journal of Psychology
,
77
,
305
327
.
Calder
,
A. J.
,
Beaver
,
J. D.
,
Winston
,
J. S.
,
Dolan
,
R. J.
,
Jenkins
,
R.
,
Eger
,
E.
,
et al
(
2007
).
Separate coding of different gaze directions in the superior temporal sulcus and inferior parietal lobule.
Current Biology
,
17
,
20
25
.
Calder
,
A. J.
, &
Young
,
A. W.
(
2005
).
Understanding the recognition of facial identity and facial expression.
Nature Reviews Neuroscience
,
6
,
641
651
.
Calder
,
A. J.
,
Young
,
A. W.
,
Keane
,
J.
, &
Dean
,
M.
(
2000
).
Configural information in facial expression perception.
Journal of Experimental Psychology: Human Perception and Performance
,
26
,
527
551
.
Carey
,
S.
, &
Diamond
,
R.
(
1977
).
From piecemeal to configurational representation of faces.
Science
,
195
,
312
314
.
Cohen Kadosh
,
K.
,
Cohen Kadosh
,
R.
,
Pitcher
,
D.
,
Walsh
,
V.
, &
Johnson
,
M. H.
(
submitted
).
TMS to the right OFA impairs the integration of different facial features.
Cohen Kadosh
,
K.
, &
Johnson
,
M. H.
(
2007
).
Developing a cortex specialized for face perception.
Trends in Cognitive Sciences
,
11
,
267
269
.
DeGutis
,
J. M.
,
Bentin
,
S.
,
Robertson
,
L. C.
, &
D'Esposito
,
M.
(
2007
).
Functional plasticity in ventral temporal cortex following cognitive rehabilitation of a congenital prosopagnosic.
Journal of Cognitive Neuroscience
,
19
,
1790
1802
.
Durand
,
K.
,
Gallay
,
M.
,
Seigneuric
,
A.
,
Robichon
,
F.
, &
Baudouin
,
J.-Y.
(
2007
).
The development of facial emotion recognition: The role of configural information.
Journal of Experimental Child Psychology
,
97
,
14
27
.
Farroni
,
T.
,
Johnson
,
M. H.
,
Menon
,
E.
,
Zulian
,
L.
,
Faraguna
,
D.
, &
Csibra
,
G.
(
2005
).
Newborn's preference for face-relevant stimuli: Effects of contrast polarity.
Proceedings of the National Academy of Sciences, U.S.A.
,
102
,
17245
17250
.
Friston
,
K. J.
,
Penny
,
W.
,
Philips
,
C.
,
Kiebel
,
S.
,
Hinton
,
G.
, &
Ashburner
,
J.
(
2002
).
Classical and Bayesian inference in neuroimaging: Theory.
Neuroimage
,
16
,
465
483
.
Friston
,
K. J.
, &
Price
,
C. J.
(
2001
).
Dynamic representations and generative models of brain function.
Brain Research Bulletin
,
54
,
275
285
.
Friston
,
K. J.
,
Rotshtein
,
P.
,
Geng
,
J. J.
,
Sterzer
,
P.
, &
Henson
,
R. N. A.
(
2006
).
A critique of functional localizers.
Neuroimage
,
30
,
1077
1087
.
Ganel
,
T.
, &
Goshen-Gottstein
,
Y.
(
2002
).
Perceptual integrality of sex and identity of faces: Further evidence for the single route hypothesis.
Journal of Experimental Psychology: Human Perception and Performance
,
28
,
854
867
.
Ganel
,
T.
,
Valyear
,
K. F.
,
Goshen-Gottstein
,
Y.
, &
Goodale
,
M. A.
(
2005
).
The involvement of the “fusiform face area” in processing facial expression.
Neuropsychologia
,
43
,
1645
1654
.
Gilbert
,
C. D.
, &
Sigman
,
M.
(
2007
).
Brain states: Top–down influences in sensory processing.
Neuron
,
54
,
677
696
.
Grill-Spector
,
K.
,
Henson
,
R.
, &
Martin
,
A.
(
2006
).
Repetition and the brain: Neural models of stimulus-specific effects.
Trends in Cognitive Sciences
,
10
,
14
23
.
Grill-Spector
,
K.
, &
Malach
,
R.
(
2001
).
fMR-adaptation: A tool for studying the functional properties of human cortical neurons.
Acta Psychologica
,
107
,
293
321
.
Haxby
,
J. V.
,
Hoffman
,
E. A.
, &
Gobbini
,
M. I.
(
2000
).
The distributed human neural system for face perception.
Trends in Cognitive Sciences
,
4
,
223
233
.
Hennenlocher
,
A.
,
Schroeder
,
U.
,
Erhard
,
P.
,
Castrop
,
F.
,
Haslinger
,
B.
,
Stoecker
,
D.
,
et al
(
2005
).
A common neural basis for receptive and expressive communication of pleasant facial affect.
Neuroimage
,
26
,
581
591
.
Hoffman
,
E. A.
, &
Haxby
,
J. V.
(
2000
).
Distinct representations of eye gaze and identity in the distributed human neural system for face perception.
Nature Neuroscience
,
3
,
80
84
.
Ishai
,
A.
,
Pessoa
,
L.
,
Bikle
,
P. C.
, &
Ungerleider
,
L. G.
(
2004
).
Repetition suppression of faces is modulated by emotion.
Proceedings of the National Academy of Sciences, U.S.A.
,
101
,
9827
9832
.
Johnson
,
M. H.
,
Dziurawiec
,
S.
,
Ellis
,
H.
, &
Morton
,
J.
(
1991
).
Newborn's preferential tracking of face-like stimuli and its subsequent decline.
Cognition
,
40
,
1
19
.
Kanwisher
,
N.
,
McDermott
,
J.
, &
Chun
,
M. M.
(
1997
).
The fusiform face area: A module in human extrastriate cortex specialized for face perception.
Journal of Neuroscience
,
17
,
4302
4311
.
Keppel
,
G.
(
1991
).
Design and analysis: A researchers handbook
(3rd ed.).
Upper Saddle River, NJ
:
Prentice Hall
.
Kourtzi
,
Z.
, &
Kanwisher
,
N.
(
2000
).
Cortical regions involved in perceiving object shape.
Journal of Neuroscience
,
20
,
3310
3318
.
Leppaenen
,
J. M.
, &
Hietanen
,
J. K.
(
2007
).
Is there more in a happy face than just a big smile?
Visual Cognition
,
15
,
468
490
.
Leslie
,
K. R.
,
Johnson-Frey
,
S. H.
, &
Grafton
,
S. T.
(
2003
).
Functional imaging of face and hand imitation: Towards a motor theory of empathy.
Neuroimage
,
21
,
601
607
.
Maurer
,
D.
,
Le Grand
,
R.
, &
Mondloch
,
C. J.
(
2002
).
The many faces of configural processing.
Trends in Cognitive Sciences
,
6
,
255
260
.
Mondloch, C. J., Geldart, S., Maurer, D., & Le Grand, R. (2003). Developmental changes in face processing skills. Journal of Experimental Child Psychology, 86, 67–84.
Mondloch, C. J., Le Grand, R., & Maurer, D. (2002). Configural face processing develops more slowly than featural face processing. Perception, 31, 553–566.
Mondloch, C. J., Maurer, D., & Ahola, S. (2006). Becoming a face expert. Psychological Science, 17, 930–934.
Mosconi, M. W., Mack, P. B., McCarthy, G., & Pelphrey, K. A. (2005). Taking an “intentional stance” on eye-gaze shifts: A functional neuroimaging study of social perception in children. Neuroimage, 27, 247–252.
Pitcher, D., Walsh, V., Yovel, G., & Duchaine, B. C. (2007). TMS evidence for the involvement of the right occipital face area in early face processing. Current Biology, 17, 1568–1573.
Rossion, B., Caldara, R., Seghier, M., Schuller, A.-M., Lazeyras, F., & Mayer, E. (2003). A network of occipito-temporal face sensitive areas besides the right middle fusiform gyrus is necessary for normal face processing. Brain, 126, 1–15.
Rotshtein, P., Geng, J. J., Driver, J., & Dolan, R. J. (2007). Role of features and second-order spatial relations in face discrimination, face recognition, and individual face skills: Behavioural and functional magnetic resonance imaging data. Journal of Cognitive Neuroscience, 19, 1435–1452.
Rotshtein, P., Vuilleumier, P., Winston, J., Driver, J., & Dolan, R. (2007). Distinct and convergent visual processing of high and low spatial frequency information in faces. Cerebral Cortex, 17, 2713–2724.
Sawamura, H., Orban, G. A., & Vogels, R. (2006). Selectivity of neuronal adaptation does not match response selectivity: A single-cell study of the fMRI adaptation paradigm. Neuron, 49, 307–318.
Thompson, P. (1980). Margaret Thatcher: A new illusion. Perception, 9, 483–484.
Vuilleumier, P., Armony, J. L., Driver, J., & Dolan, R. J. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nature Neuroscience, 6, 624–631.
Vuilleumier, P., Richardson, M. P., Armony, J. L., Driver, J., & Dolan, R. J. (2004). Distant influences of amygdala lesion on visual cortical activation during emotional face processing. Nature Neuroscience, 7, 1271–1278.
Wenger, K. K., Visscher, K. M., Miezin, F. M., Petersen, S. E., & Schlaggar, B. L. (2004). Comparison of sustained and transient activity in children and adults using a mixed blocked/event-related fMRI design. Neuroimage, 22, 975–985.
Winston, J. S., Henson, R. N. A., Fine-Goulden, M. R., & Dolan, R. J. (2004). fMRI-adaptation reveals dissociable neural representations of identity and expression in face perception. Journal of Neurophysiology, 92, 1830–1839.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Young, A. W., Newcombe, F., de Haan, E. H. F., Small, M., & Hay, D. C. (1993). Face perception after brain injury: Selective impairments affecting identity and expression. Brain, 116, 941–959.