Abstract

Although people do not normally try to remember associations between faces and physical contexts, these associations are established automatically, as indicated by the difficulty of recognizing familiar faces in different contexts (“butcher-on-the-bus” phenomenon). The present fMRI study investigated the automatic binding of faces and scenes. In the face–face (F–F) condition, faces were presented alone during both encoding and retrieval, whereas in the face/scene–face (FS–F) condition, they were presented overlaid on scenes during encoding but alone during retrieval (context change). Although participants were instructed to focus only on the faces during both encoding and retrieval, recognition performance was worse in the FS–F than in the F–F condition (“context shift decrement” [CSD]), confirming automatic face–scene binding during encoding. This binding was mediated by the hippocampus as indicated by greater subsequent memory effects (remembered > forgotten) in this region for the FS–F than the F–F condition. Scene memory was mediated by right parahippocampal cortex, which was reactivated during successful retrieval when the faces were associated with a scene during encoding (FS–F condition). Analyses using the CSD as a regressor yielded a clear hemispheric asymmetry in medial temporal lobe activity during encoding: Left hippocampal and parahippocampal activity was associated with a smaller CSD, indicating more flexible memory representations immune to context changes, whereas right hippocampal/rhinal activity was associated with a larger CSD, indicating less flexible representations sensitive to context change. Taken together, the results clarify the neural mechanisms of context effects on face recognition.

INTRODUCTION

Memory for faces is an essential aspect of social cognition. In fact, the failure to recognize a familiar person can lead to awkward social interactions. A major factor accounting for face recognition failure is a change in context. George Mandler (1980) described how he failed to recognize his butcher when he saw him on a bus. Because of this famous anecdote, the common difficulty in recognizing a familiar person encountered in a different context than the one typically associated with that person is known as the "butcher-on-the-bus" phenomenon (Mandler, 1980). The present study uses fMRI to investigate the neural bases of this phenomenon. fMRI allows one to identify regions associated with the automatic binding of face and scene information, to disentangle encoding and retrieval processes, and to distinguish neural mechanisms that increase or reduce context effects, yielding insights into the butcher-on-the-bus phenomenon that are not possible with behavioral investigations alone.

Despite the importance of memory for faces in social cognition and the difficulty associated with recognizing faces without their associated contexts, only a handful of behavioral studies have investigated context effects in face memory (Gruppuso, Lindsay, & Masson, 2007; Park, Puglisi, Smith, & Dudley, 1987; Smith & Vela, 1986; Park, Puglisi, & Sovacool, 1984; Winograd & Rivers-Bulkeley, 1977). Moreover, we are not aware of any fMRI studies on this issue. This apparent gap in the literature could reflect the difficulty of finding robust context effects in some paradigms (Humphreys, Pike, Bain, & Tehan, 1988). However, there is now clear evidence that when the context is a rich visual scene, a context shift decrement (CSD), that is, a decrease in recognition performance when the context changes between study and test, is readily observable. Indeed, study–test context shifts using naturalistic scenes as stimuli can reduce recognition performance by as much as 15% (Hayes, Nadel, & Ryan, 2007; Murnane, Phelps, & Malmberg, 1999).

Previous functional neuroimaging studies of face memory have focused on intentional encoding and retrieval of faces and associated context information, including faces and scenes (Dennis et al., 2008), faces and names (Tsukiura & Cabeza, 2008; Chua, Schacter, Rand-Giovannetti, & Sperling, 2007), and faces and occupations (Yovel & Paller, 2004). These studies are typically classified under the rubric of relational or source memory (Johnson, Hashtroudi, & Lindsay, 1993), paradigms that require the participant to intentionally retrieve an item and its associated context. However, most memory associations in everyday life, including associations between faces and their contexts, are unintentional or incidental. The butcher-on-the-bus phenomenon is a good example of an incidental face–context association: Although one did not intentionally try to link the butcher's face to the shop, the difficulty in recognizing the face outside the shop clearly shows that the face and the shop were spontaneously bound during encoding. As for retrieval, although the butcher-on-the-bus phenomenon involves retrieval failure, most of the time, the context is spontaneously reactivated, and we recognize known faces in new contexts without any problem.

In the present study, we scanned both during encoding and during retrieval (see Figure 1). In the face/scene–face (FS–F) condition, faces were presented overlaid on a scene during encoding and by themselves during retrieval. In the face–face (F–F) condition, in contrast, faces were presented by themselves both during encoding and retrieval. Critically, participants were not required to actively associate face and scene information during encoding, and were not required to retrieve face and scene associations at test. During encoding, participants rated the friendliness of the face. During retrieval, they made an old/new recognition judgment (with confidence ratings) about the faces, without any reference to the scenes.

Figure 1. 

Experimental stimuli and design. Participants rated the friendliness of the face during encoding. At test, participants made an old/new face recognition judgment on a 4-point scale (definitely old, probably old, probably new, definitely new). Stimuli were presented for 3 sec at both encoding and retrieval with a jittered ISI. For the experiment, stimuli were presented in color.


There were three main goals of the current study. First, we sought to identify brain regions associated with encoding of faces when contextual (scene) information was present or absent at encoding. Based on the idea that the hippocampus automatically binds disparate pieces of information (Moscovitch, 1992, 2008), we hypothesized that greater hippocampal activation would be observed during successful encoding when faces were presented on a scene (FS–F condition) than when they were presented by themselves (F–F condition). To ensure that the predicted hippocampal activation would not merely reflect perception of the scene, we used the subsequent memory paradigm, which isolates encoding success activity (ESA) by comparing subsequently remembered versus forgotten items. This contrast subtracts out perceptual differences in stimulus materials between the FS–F and F–F conditions. It is important to note that, unlike previous fMRI studies of ESA for memory associations (Dennis et al., 2008; Staresina & Davachi, 2008; Uncapher, Otten, & Rugg, 2006), encoding and retrieval of associations here were incidental and the remembered–forgotten distinction was based on memory for items (faces), not associations. Thus, ESA in our study reflects face–context binding, which is incidental but flexible because it predicts later successful face recognition even when context changes.

Second, we investigated whether brain regions associated with processing the context of faces during encoding are “reactivated” during retrieval and contribute to successful recognition of the face. An advantage of the current design is that specific brain regions have been associated with processing faces and scenes. A region within the fusiform gyrus, typically referred to as the fusiform face area, is preferentially activated during perception of faces (Kanwisher, McDermott, & Chun, 1997; McCarthy, Puce, Gore, & Allison, 1997). A region known as the parahippocampal place area (Epstein & Kanwisher, 1998) has been associated with perceptual processing of scenes, and is typically localized to the posterior portion of the parahippocampal gyrus. We hypothesized that during retrieval, parahippocampal cortex would show greater activity in the FS–F than in the F–F condition, reflecting context reactivation. Importantly, the reactivation of parahippocampal cortex during retrieval would be incidental, because the retrieval task is only about faces, and could not be attributed to scene perception, because only faces are presented at test (see Figure 1). We additionally investigated the interaction between parahippocampal cortex and other brain regions during retrieval using connectivity analyses.

Finally, we aimed to elucidate the neural correlates of the CSD—which refers to reductions in memory when context shifts between study and test. A series of studies by Hayes et al. (2007) demonstrated significant CSDs during an episodic object recognition task under incidental and intentional encoding conditions. In the current study, we aim to further elucidate the neural correlates of the CSD by examining correlations between the CSD and neural activity during successful episodic face encoding, in addition to connectivity of parahippocampal cortex during retrieval. We assumed that encoding activity associated with greater CSDs would reflect the formation of perceptual representations that are less flexible and more sensitive to context change. In contrast, encoding activity associated with smaller CSDs is likely to reflect the formation of abstract representations that are more flexible and immune to context change. Although direct cognitive neuroscience evidence regarding scene context change during face recognition is not available, the implicit memory literature on study–test perceptual changes suggest a hemispheric asymmetry with the right hemisphere playing a greater role in perceptual representations and the left hemisphere mediating abstract representations (Vuilleumier, Henson, Driver, & Dolan, 2002; Koutstaal et al., 2001; Marsolek, 1999). Thus, within the medial temporal lobes (MTL), we predicted that right MTL activity would be associated with greater CSDs, and left MTL activity with smaller CSDs.

METHODS

Participants

Nineteen healthy young adults (9 women, 10 men; mean age = 24.2 years), recruited from the Duke community and screened for contraindications to MRI, participated in the study. All participants gave written informed consent and received financial compensation. All experimental procedures were approved by the Duke University institutional review board.

Materials

Figure 1 presents examples of stimulus materials, which were presented in color during the experiment. Face stimuli consisted of 425 faces gathered from the following databases: the Color FERET database (Phillips, Moon, Rizvi, & Rauss, 2000), the adult face database from Dr. Denise Park's lab (Park et al., 2004), the AR face database (Martinez & Benavente, 1998), and the FRI CVL Face Database (Solina, Peer, Batageli, Juvan, & Kovac, 2003). Scene stimuli consisted of 195 indoor and outdoor scenes gathered from the Internet. Using Adobe Photoshop CS2 version 9.0.2 and Irfanview 4.0 (www.irfanview.com/), face stimuli were edited to a uniform size (320 × 240 pixels) and a uniform (black) background, and scene stimuli were standardized to 576 × 432 pixels. Face–scene combination stimuli were created using a custom MATLAB (version 6.5.1) script that randomly overlaid faces on scenes, and images were standardized to 576 × 432 pixels.
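The compositing step can be illustrated with a short script. The following is a minimal sketch in Python/Pillow rather than the authors' MATLAB code; the file names and the random-placement rule are assumptions used only for illustration.

# Minimal sketch (Python/Pillow, not the authors' MATLAB script): overlay a face
# at a random position on a scene and standardize the output size, following the
# image dimensions described above. File names are hypothetical.
import random
from PIL import Image

def make_face_scene_composite(face_path, scene_path, out_path,
                              face_size=(320, 240), out_size=(576, 432)):
    face = Image.open(face_path).convert("RGB").resize(face_size)
    scene = Image.open(scene_path).convert("RGB").resize(out_size)
    # Random top-left corner that keeps the face fully inside the scene.
    max_x = out_size[0] - face_size[0]
    max_y = out_size[1] - face_size[1]
    scene.paste(face, (random.randint(0, max_x), random.randint(0, max_y)))
    scene.save(out_path)

# Example usage (hypothetical files):
# make_face_scene_composite("face_001.jpg", "scene_017.jpg", "fs_001.jpg")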

Procedure

After completing informed consent and metal screening, participants were placed supine on the MRI table, fitted with earplugs and earphones, and had their heads stabilized with cushions. The participants were moved into the bore of the scanner, and three-plane localizer scans were collected. Prior to functional scanning, participants were informed that they would see a set of faces, some of which would appear on a black background and others on a rich, naturalistic scene. During encoding scans, participants rated the friendliness of the face on a 4-point scale. During retrieval scans, participants made old/new responses on a 4-point confidence scale: definitely old, probably old, probably new, and definitely new. There were 10 functional runs, alternating between encoding (5 runs, 6 min per run) and retrieval (5 runs, 7 min 20 sec per run). Encoding and retrieval runs consisted of 16 trials from each target condition, with retrieval runs containing an additional seven lures (novel face on a black background) as catch trials. Two additional target and lure conditions were also presented, but are beyond the scope of the current article. These conditions included trials in which the face and the scene were the same at study and test (FS-Intact) and trials in which the face and the scene were previously viewed during encoding, but not presented together (FS-Recombined).

All experimental stimuli were presented for 3 sec at encoding and retrieval, with a white crosshair presented for fixation during the intertrial interval. Stimulus order and intertrial jitter (range: 1 to 7 sec) were determined by optseq2, a software program designed to maximize statistical efficiency and facilitate deconvolution of the hemodynamic response for rapid, event-related designs (http://surfer.nmr.mgh.harvard.edu/optseq/; Dale, 1999). Stimuli were presented via a mirror in the head coil and a rear projection system using a PC computer with Cogent, a stimulus presentation toolbox within MATLAB 6.5.1. Button responses and response time (RT) were recorded using a magnetically shielded four-button box held in the participant's right hand. After completion of scanning sessions, participants were debriefed.
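As a rough illustration of the trial timing described above (3-sec stimuli separated by 1 to 7 sec of jittered fixation), the sketch below builds a list of trial onsets. It only shows the timing arithmetic and is not a substitute for optseq2, which additionally optimizes the jitter sequence for design efficiency.

# Sketch of the trial timing only (an illustration, not the authors' procedure);
# the actual jitter sequence was generated by optseq2.
import random

def build_onsets(n_trials, stim_dur=3.0, jitter_range=(1, 7), seed=0):
    rng = random.Random(seed)
    onsets, t = [], 0.0
    for _ in range(n_trials):
        onsets.append(t)
        t += stim_dur + rng.randint(*jitter_range)  # stimulus + jittered ITI
    return onsets

# e.g., build_onsets(16) gives onsets (in sec) for the 16 trials of one condition in a run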

Image Acquisition

Images were collected on a General Electric 3.0-T Signa Excite HD short bore scanner (Milwaukee, WI), equipped with an eight-channel head coil. Total scan time was approximately 120 min. A three-plane localizer was collected in order to align a high-resolution SPGR series (1-mm sections covering whole brain, interscan spacing = 0, matrix = 256 × 256, flip angle = 30°, TR = 22 msec, TE = min full, FOV = 19.2 cm). Following acquisition of the high-resolution anatomical images, whole-brain functional images were acquired parallel to the anterior commissure–posterior commissure plane using a dual-echo spiral-in/out SENSE gradient-echo sequence (Truong & Song, 2008; Pruessmann, Weiger, Bornert, & Boesiger, 2001): slice order = interleaved, matrix = 64 × 64, FOV = 24 cm, TR = 2000 msec, TE = 27 and 28 msec for the spiral-in and spiral-out images, respectively, sections = 28, thickness = 3.8 mm, interscan spacing = 0, flip angle = 60°, SENSE reduction factor = 2. Functional scanning lasted approximately 90 min and occurred in 10 runs. After completion of functional scanning, diffusion-weighted images, which we do not report here, were collected.

Image Processing and Analysis

Functional data were processed using SPM5 (Statistical Parametric Mapping; Wellcome Department of Cognitive Neurology, www.fil.ion.ucl.ac.uk/spm). The first four images were discarded to allow for scanner equilibrium. Images were corrected for asynchronous slice acquisition (slice timing: reference slice = 17, TA = 1.97), realigned, coregistered, normalized (MNI space; SPM defaults), and smoothed (8 mm isotropic kernel). The hemodynamic response for each trial was modeled using the canonical hemodynamic response function. Serial correlations were estimated using an autoregressive AR(1) model. Data were high-pass filtered using a cutoff of 128 sec, and global effects were removed (session-specific grand-mean scaling).
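For readers who want to assemble a comparable pipeline, the sketch below strings the same preprocessing steps together using Nipype's SPM interfaces. This is an assumption made for illustration (the original analysis was run in SPM5 itself), and input files and run-specific parameters are omitted.

# Illustrative preprocessing sketch using Nipype's SPM interfaces (an assumption;
# the original pipeline was run directly in SPM5). Mirrors the steps listed above.
from nipype.interfaces import spm

slice_timing = spm.SliceTiming()   # correct for asynchronous slice acquisition
realign = spm.Realign()            # motion correction
coregister = spm.Coregister()      # align functional and anatomical images
normalize = spm.Normalize()        # warp to MNI space (SPM defaults)
smooth = spm.Smooth()
smooth.inputs.fwhm = [8, 8, 8]     # 8 mm isotropic Gaussian kernel

# Each interface would be given its input images (e.g., slice_timing.inputs.in_files)
# and run in sequence, after discarding the first four volumes of each run.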

For the image analysis, low confidence hits ("probably old" responses) were modeled but not included in memory success contrasts because of poor memory discrimination for low confidence hits. That is, there was no difference among the overall false alarm rate (0.29), the low confidence F–F hit rate (0.27), and the low confidence FS–F hit rate (0.28) [F(2, 36) < 1, p = ns]. Miss trials were collapsed across low and high confidence responses to ensure sufficient power for modeling the hemodynamic response (average number of trials for F–F miss = 25, FS–F miss = 29).

For all analyses reported, regions of interest (ROIs) included the bilateral hippocampus (relational memory), parahippocampal gyrus (rhinal and parahippocampal cortex; scene processing), as well as the amygdala and fusiform gyrus (face processing). A binary mask of these regions was created using the automated anatomical labeling atlas (Tzourio-Mazoyer et al., 2002) included in the Wake Forest University PickAtlas (Maldjian, Laurienti, & Burdette, 2004; Maldjian, Laurienti, Kraft, & Burdette, 2003). Within these ROIs, significant Memory condition (F–F, FS–F) × Memory success (high confidence hit, miss) interactions were identified, p < .05, extent threshold = 5 voxels [e.g., (FS–F hits > misses) > (F–F hits > misses)]. Results were then inclusively masked with main effects of within-condition memory success, p < .05, extent threshold = 5 voxels (e.g., FS–F hits > misses) to confirm the direction of the interaction effect. The conjoint probability following inclusive masking approaches p = .0025, although this estimate should be taken with caution given that the contrasts were not completely independent. For reporting purposes, we have also included results of whole-brain analyses at the same threshold. Brain figures were created using MRIcron, version Beta 17 (www.mricro.com; Rorden & Brett, 2000).
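For concreteness, the interaction contrast and the conjoint-probability estimate can be written out as below; the regressor ordering is an assumption made only for illustration.

# Illustrative contrast weights for the Memory condition x Memory success
# interaction described above: (FS-F hits > misses) > (F-F hits > misses).
# The regressor ordering [FF_hit, FF_miss, FSF_hit, FSF_miss] is an assumption.
import numpy as np

interaction = np.array([-1, 1, 1, -1])   # (FSF_hit - FSF_miss) - (FF_hit - FF_miss)
fsf_success = np.array([0, 0, 1, -1])    # inclusive mask: FS-F hits > misses
print(0.05 * 0.05)                       # conjoint probability of the two thresholds: 0.0025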

To identify scene reactivation effects, we followed the same approach as above, although we included an additional inclusive mask of brain regions involved in scene processing by contrasting encoding of FS–F (scene + face stimulus) trials > F–F trials (face alone), p < .005, cluster extent = 5 voxels. This mask was based on all FS–F and F–F trials (both subsequently remembered and forgotten trials) that occurred during the encoding runs.

For the connectivity analyses, a model was generated in which each trial was uniquely coded as a separate event. This single-trial analysis approach allows for the correlation of time-series activity in a seed region (right parahippocampal cortex in the current analysis) with the rest of the brain on a trial-by-trial basis (Dennis et al., 2008; Daselaar, Fleck, & Cabeza, 2006; Rissman, Gazzaley, & D'Esposito, 2004). The resulting individual-subject correlation maps were then submitted to a paired-samples t test within our ROIs (p < .05, voxel extent = 5) to contrast connectivity on FS–F relative to F–F trials.
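A minimal sketch of this kind of single-trial (beta-series) connectivity analysis is given below, assuming trial-wise parameter estimates have already been extracted; array shapes and names are hypothetical.

# Minimal sketch of a single-trial (beta-series) connectivity analysis of the kind
# described above: correlate trial-by-trial estimates in the seed (right
# parahippocampal cortex) with every other voxel, per condition, then contrast.
import numpy as np

def beta_series_connectivity(seed_betas, voxel_betas):
    """seed_betas: (n_trials,); voxel_betas: (n_trials, n_voxels)."""
    seed = (seed_betas - seed_betas.mean()) / seed_betas.std()
    vox = (voxel_betas - voxel_betas.mean(axis=0)) / voxel_betas.std(axis=0)
    r = (seed[:, None] * vox).mean(axis=0)   # Pearson r per voxel
    return np.arctanh(r)                     # Fisher z, for the group-level t test

# Per subject: z_fsf = beta_series_connectivity(seed_fsf, brain_fsf)
#              z_ff  = beta_series_connectivity(seed_ff,  brain_ff)
# The difference maps (z_fsf - z_ff) are then compared across subjects.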

Finally, to assess the neural correlates of binding that may be contributing to the CSD, the CSD was computed for each participant (F–F high confidence hits − FS–F high confidence hits) and entered as a regressor into the ESA analysis (as previously reported), thresholded at p < .05, voxel extent = 5 within our predetermined ROIs. The goal of this analysis was to identify which brain regions were positively or negatively correlated with the CSD.
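In code, the CSD and its use as a between-subjects regressor amount to the following; the helper names are hypothetical, and the worked number uses the group means reported later in Table 1.

# Sketch of the CSD computation and its correlation with encoding success
# activity (ESA) across participants. Function names are hypothetical.
import numpy as np
from scipy import stats

def csd(ff_high_conf_hits, fsf_high_conf_hits):
    """Context shift decrement: F-F minus FS-F high confidence hit rate."""
    return ff_high_conf_hits - fsf_high_conf_hits

def csd_esa_correlation(csd_per_subject, esa_per_subject):
    """Correlate per-subject CSD with per-subject ESA (hits - misses) in an ROI."""
    return stats.pearsonr(csd_per_subject, esa_per_subject)

# At the group level (Table 1 means): csd(0.42, 0.35) = 0.07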

RESULTS

Behavioral Results

The alpha level for all behavioral results was set at p < .05. Significant interactions were followed up with pairwise comparisons using least significant differences.

Encoding

Results of a 2 (condition: F–F vs. FS–F) × 2 (subsequent memory: definitely old hits vs. misses) repeated measures analysis of variance (ANOVA) of friendliness ratings during encoding revealed no difference in friendliness ratings based on condition or subsequent memory (all Fs < 1.8, p = ns). The grand mean was 2.34, with all cell means falling between 2.29 and 2.40, which was expected given the rating scale and the use of faces with neutral expressions. There was no difference in encoding RTs for F–F (M = 1653 msec; SD = 225) and FS–F (M = 1656 msec; SD = 233) trials [t(18) < 1], indicating that the presence of the scene during encoding did not influence response latency.

Retrieval Accuracy

Table 1 shows the proportion of "old" responses for F–F and FS–F trials as a function of confidence. Results of a 2 (condition: F–F vs. FS–F) × 2 (confidence: low vs. high) repeated measures ANOVA of hit rate revealed main effects of condition [F(1, 18) = 13.85, p < .005] and confidence [F(1, 18) = 5.10, p < .05] (see Table 1). The interaction was significant [F(1, 18) = 5.10, p < .05]. Follow-up pairwise comparisons revealed that the interaction was driven by a greater proportion of high confidence responses in the F–F relative to the FS–F condition (p < .01), whereas there was no difference in the proportion of low confidence responses. Thus, participants made fewer high confidence responses in the FS–F condition, in which the visual context shifted between study and test. The mean proportion of false alarms (incorrectly endorsing a lure as "old") was 0.29 (SD = .13). These data extend the results of recent studies by demonstrating substantial context effects in face recognition and by showing that the change in context primarily reduces the proportion of high confidence responses. The reduction in high confidence responses is consistent with two studies that have reported a greater CSD for estimates of recollection than familiarity (Gruppuso et al., 2007; Macken, 2002; but see Hockley, 2008), although it is noted that one may be highly confident in responses based on familiarity. A potential account of the current behavioral results is that only high-performing participants demonstrate the CSD; however, results of a Pearson correlation analysis of the overall hit rate and the CSD are not consistent with this interpretation, as there was no relationship between overall performance and the CSD (r = .08, p = ns).
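A sketch of the hit-rate ANOVA reported above is given below using statsmodels; this is an assumption about tooling (the analysis software is not specified in the text), and the data frame layout is hypothetical.

# Sketch of the 2 (condition) x 2 (confidence) repeated measures ANOVA on hit
# rate, using statsmodels' AnovaRM (an assumption; the original software is not
# specified). `df` is a hypothetical long-format table with one row per
# subject x condition x confidence cell.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def hit_rate_anova(df: pd.DataFrame):
    # expected columns: 'subject', 'condition' (F-F / FS-F),
    # 'confidence' (low / high), 'hit_rate'
    return AnovaRM(df, depvar="hit_rate", subject="subject",
                   within=["condition", "confidence"]).fit()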

Table 1. 

Mean Behavioral Performance and Response Time (SD) by Confidence Level^a


                      High Confidence Hits        Low Confidence Hits         Misses
                      Proportion    RT            Proportion    RT            Proportion    RT
Face–Face             .42 (.13)     1415 (186)    .27 (.12)     1848 (306)    .31 (.12)     1796 (250)
Face/Scene–Face       .35 (.14)     1478 (177)    .28 (.11)     1820 (283)    .37 (.13)     1824 (270)

                      High Confidence             Low Confidence
                      Proportion    RT            Proportion    RT
False Alarms          .08 (.10)     1650 (656)    .21 (.09)     1754 (285)


^a In another condition in which the face–scene pair was identical at study and test (FS-Intact), hit rates were higher than in the F–F condition (0.75 vs. 0.69). Therefore, because the current calculation of the context effect uses the F–F condition, which had the lower performance, it is a conservative estimate.

Retrieval Response Times

Table 1 shows RT for high and low confidence hits in F–F and FS–F trials. Mean RT was 1796 msec (SD = 250) for F–F misses and 1824 msec (SD = 270) for FS–F misses. Results of a 2 (condition: F–F vs. FS–F) × 3 (response type: high confidence hit vs. low confidence hit vs. miss) repeated measures ANOVA of RT revealed a main effect of response type [F(2, 36) = 39.99, p < .001]. Follow-up pairwise comparisons revealed faster RT for high confidence hits relative to low confidence hits and misses (mean difference > 364 msec, ps < .001). The effect of condition [F(1, 18) = 3.91, p = .064] was marginally significant. The interaction was not significant [F(2, 36) = 1.23, p = ns].

fMRI Results

Encoding Success Activity

The first goal of the study was to identify brain regions associated with encoding of faces when contextual (scene) information was present (FS–F condition) or absent (F–F condition) during encoding. Although these two conditions differ at the perceptual level (see Figure 1), perceptual differences in stimulus materials are subtracted out in ESA analyses, which compare subsequently remembered versus forgotten trials in each condition. In the FS–F condition, both remembered and forgotten trials included scenes, and in the F–F condition, neither remembered nor forgotten trials included scenes. Moreover, in both conditions, “remembered” and “forgotten” were defined based on the memory for the faces, regardless of the memory for the scenes or the face–scene associations. Thus, in both conditions, the analyses identified encoding activity predicting memory for the faces, and the contrast between the two conditions revealed how face encoding activity is modulated by the presence (FS–F > F–F) or absence (F–F > FS–F) of context.

FS–F > F–F

Consistent with our hypothesis that the MTL automatically binds disparate pieces of information, such as face and scene information, bilateral anterior hippocampal regions showed greater ESA in the FS–F than in the F–F condition (see Figure 2A). Importantly, the differential involvement of the hippocampus in successful encoding of face–scene pairs compared to single faces was found even though participants focused only on the face both during encoding and retrieval. Furthermore, subsequent memory analyses were based on whether the recognition of the face was successful, independently of memory for the scene. Beyond our predefined ROIs, additional ESA was observed in brain regions implicated in social cognition, including anterior cingulate cortex/medial frontal gyrus and the middle temporal gyrus/superior temporal sulcus (Table 2).

Figure 2. 

(A) Incidental face/scene binding. ESA (subsequent hits − subsequent misses) for FS–F > F–F was associated with activation in bilateral hippocampi. The bar graphs represent the ROI mean difference in parameter estimates (SEM) for subsequent hits and misses for each condition [left hippocampus = −30 −19 −11; right hippocampus = 25 −15 −11]. (B) Reactivation of a scene processing region, right parahippocampal cortex [30 −30 −19], during FS–F retrieval. The bar graphs represent retrieval success activity: the ROI mean difference in parameter estimates (SEM) for retrieval hits and misses for each condition. (C) Results of a connectivity analysis using right parahippocampal cortex as a seed voxel. Note increased connectivity with bilateral hippocampus and left-lateralized visual regions including fusiform and lingual gyrus and lateral occipital regions. PHC = parahippocampal cortex; HC = hippocampus; L = left; R = right.


Table 2. 

Results of Whole-brain Analysis of Regions Showing a Condition by Encoding Success Interaction

Contrast/Brain Region    L/R    BA    MNI (x, y, z)    k    t
Face/Scene–Face > Face–Face 
Hippocampus  37 −26 −11 12 2.81 
Hippocampus  −30 −19 −11 18 2.80 
Parahippocampal/retrosplenial cortex 27/29 −11 −38 11 39 3.42 
Superior temporal sulcus/MTG 21 53 −8 −19 3.06 
Superior temporal sulcus/MTG 21/22 −56 −41 −7 61 3.54 
Medial frontal gyrus/anterior cingulate 32 49 27 14 3.13 
Thalamus  −8 3.00 
Middle frontal gyrus −38 15 46 13 2.79 
 
Face–Face > Face/Scene–Face 
Fusiform 20 45 −30 −19 3.06 
Middle frontal gyrus 10 30 56 11 201 5.23 
Posterior cingulate 31 −38 46 4.57 
Medial frontal/cingulate gyrus 32/6 −8 57 3.61 
Inferior parietal 40 34 −38 38 14 3.53 
Middle frontal gyrus 9/46 −34 38 34 18 3.18 
Cingulate gyrus 23/31 −8 −23 34 33 2.92 
Middle frontal gyrus 10/47 −23 45 −8 71 2.84 

L = left; R = right; BA = Brodmann's area; k = voxel extent; t = t value.

F–F > FS–F

No regions within our predefined ROIs showed significantly greater ESA for the F–F relative to the FS–F condition. With the ROI masks removed (whole-brain analysis; see Table 2), the right fusiform gyrus [45 −30 −19] showed greater ESA in the F–F than in the FS–F condition. This region did not survive the ROI analysis because we used a voxel extent of five, although four of the nine active voxels, including the peak voxel, fell within the right fusiform ROI. Thus, activity in this face-selective region predicted subsequent memory for the faces particularly when the face was presented alone during encoding. ESA was observed during F–F trials in left [−41 −71 −19] and right [41 −63 −19] fusiform gyrus and in the right anterior hippocampus/rhinal cortex [34 −11 −23], although these regions did not exhibit greater ESA than in FS–F trials. As seen in Table 2, additional ESA for F–F > FS–F outside of our ROIs was observed in the right middle frontal gyrus (BA 10), right posterior cingulate cortex (BA 31), and left inferior parietal cortex (BA 40).

Recapitulation of Encoding-related Activity at Retrieval

The second goal of the study was to investigate whether brain regions associated with processing the context of faces during encoding are “reactivated” during retrieval and contribute to successful recognition of the face. To address this issue, we used a multistep analysis. First, we identified scene processing regions by comparing encoding activity (including both subsequently remembered and subsequently forgotten trials) in the FS–F versus the F–F condition. This contrast resulted in activation in ventral visual regions, including parahippocampal gyrus, retrosplenial cortex, precuneus, and lateral parietal regions. Second, within these brain regions that were associated with encoding scene information, we identified regions showing greater retrieval success activity (RSA: hits > misses) in the FS–F versus the F–F condition. Finally, to confirm the direction of the effect, we masked the results with regions showing RSA in the FS–F condition.

The results of the analysis yielded a single brain region: right parahippocampal cortex. As illustrated by Figure 2B, right parahippocampal cortex, which showed greater activity for the FS–F than the F–F condition during encoding, was reactivated during retrieval and contributed to successful recognition (RSA: hit > miss) to a greater extent in the FS–F than in the F–F condition. Although the recognition task included only faces as stimuli and the recognition judgment was only about faces, recognition of faces that had been presented with scenes during encoding was enhanced by reactivation of the same parahippocampal region that processed the scenes during encoding. Within the scene processing regions identified in the first step of the analysis, no region showed greater RSA for the F–F than for the FS–F condition.

Connectivity Analysis

We used the right parahippocampal cortex region identified in the previous analysis as a seed region within our ROI mask to further explore the network of regions associated with scene reactivation. As seen in Figure 2C, right parahippocampal cortex showed significant connectivity during retrieval with the bilateral hippocampus, which is often implicated in intentional relational memory retrieval, in addition to left amygdala, left parahippocampal cortex, and right rhinal cortex. Beyond our ROIs, a predominately left-lateralized network of regions associated with visual processing was revealed (see Table 3).

Table 3. 

Brain Regions in Which Activity during Successful Retrieval Was Significantly Correlated with Activation in Right Parahippocampal Cortex, the Region Showing Scene Reactivation Effects, for FS–F Trials > F–F Trials

Brain Region    L/R    BA    MNI (x, y, z)    k    t
Hippocampus  −34 −26 −11 30 2.97 
Hippocampus  26 −26 −11 11 2.77 
Amygdala  −19 −4 −15 15 2.61 
Amygdala/rhinal  23 −4 −27 10 2.47 
Rhinal  41 −15 −27 3.00 
Lingual/fusiform gyrus 19 −19 −64 369 3.70 
Posterior cingulate 23 −49 27 19 3.87 
Inferior temporal gyrus 57 −49 −64 −11 15 3.67 
Middle occipital gyrus 37 −49 −64 −11 68 3.67 
Inferior temporal gyrus 20 41 −15 −27 13 3.00 
Inferior frontal gyrus 11/47 26 23 −15 32 3.04 
Inferior frontal gyrus 47 −41 19 −8 41 2.97 

Although stimuli (faces on a black background) and participant responses (definitely old) were the same for both conditions, greater parahippocampal connectivity was observed in medial temporal lobe and visual processing regions. L = left; R = right; M = midline; BA = Brodmann's area; k = voxel extent; t = t value.

Neural Correlates of the Context Shift Decrement

To identify brain regions correlated with the CSD, we entered the CSD for individual subjects as a regressor into our random effects analysis of ESA in the FS–F condition. As illustrated by Figure 3A, hippocampal and parahippocampal regions showed a clear hemispheric asymmetry: Left hippocampus and parahippocampal cortex activity was correlated with smaller CSDs (blue), suggesting more abstract and flexible representations, whereas right hippocampal/rhinal cortex activity was correlated with larger CSDs (orange), suggesting more perceptual and inflexible representations. Activations in rhinal and fusiform cortices were correlated with larger CSDs in both hemispheres (see Figure 3A), consistent with the perceptual role of these regions. Figure 3B shows scatterplots of individual participant's CSD and FS–F ESA activity in the left hippocampus and right hippocampus/rhinal cortex.

Figure 3. 

(A) Brain regions showing negative (blue) and positive (orange) correlations with the CSD during ESA (subsequent hits − misses) for FS–F trials across all participants. For display purposes, bar graphs represent the mean ROI difference in parameter estimates of encoding success activity (subsequent hits − misses) for FS–F trials in participants classified as low or high CSD based on a median split. MNI coordinates of the peak voxel within each region are presented directly beneath the x-axis of the bar graph. (B) Scatterplot of the CSD and ESA in the FS–F condition in each participant. A negative correlation was observed in the left hippocampus, whereas a positive correlation was observed in the right anterior hippocampus/rhinal cortex. Eff. Size Diff. = effect size difference between subsequent hits and misses. CSD = proportion hits in F–F minus proportion hits in FS–F condition.


DISCUSSION

The study yielded three main findings. First, ESA in bilateral hippocampal regions was greater in the FS–F than in the F–F condition. This result is consistent with the hypothesis that the hippocampus automatically binds items that are presented together, even if there is no conscious intention to link them in memory. Second, activity in right parahippocampal cortex, which was associated with processing of scenes during encoding, was recapitulated during successful recognition in the FS–F relative to the F–F condition. The activation of a region associated with scene processing during retrieval is noteworthy because the retrieval stimuli included only faces and the retrieval decision was only about faces. Thus, this finding is consistent with the hypothesis that encoded context can be spontaneously reactivated during retrieval without the need of a conscious intention to retrieve it. The contribution of the parahippocampal reactivation to successful recognition was further supported by the connectivity of this region with hippocampal, amygdala, and fusiform regions. Finally, analysis of neural correlates of the CSD revealed that increased activation in the left hippocampus and left parahippocampal cortex was associated with reductions in the CSD, whereas increased activation in the right anterior hippocampus was associated with larger CSDs. These results suggest that right-lateralized MTL regions may mediate encoding of highly detailed, exemplar-specific memory traces, whereas left MTL regions may mediate more flexible, domain-general memory representations. These findings are discussed in the sections below.

Incidental Hippocampal Binding of Face and Scene Information

The first goal of the study was to test the hypothesis that the hippocampus automatically binds disparate pieces of information simultaneously active within consciousness, regardless of goals or intentions (Moscovitch, 1992, 2008). Indeed, despite explicit task instructions of focusing on the face stimulus alone, ESA (subsequently remembered minus forgotten) in bilateral hippocampal regions was greater in the FS–F than in the F–F condition. This finding is consistent with evidence of ESA in anterior hippocampal regions during the encoding of face–scene (Dennis et al., 2008) and face–name associations (Chua et al., 2007), as well as other relational manipulations (Staresina & Davachi, 2008; Prince, Daselaar, & Cabeza, 2005). In these previous studies, participants were encouraged to bind items with their respective contexts (intentional context encoding), and ESA calculations were based on successful retrieval of associations (e.g., face–scene associations). In the present study, in contrast, participants were instructed to focus only on the face (incidental context encoding), and ESA calculations were based on successful recognition of the face regardless of memory for the context. Thus, this is the first study to link the hippocampus to automatic binding of item and context information when memory for the context is not required.

Why would the brain automatically encode irrelevant information? First, although participants were not required to actively associate face–scene information, the fact that the faces were superimposed on the scene would make the scene difficult to ignore. However, given that the test was based on face recognition alone, encoding scene information would not be diagnostic for memory. Moscovitch (1992, 2008) points out that the importance of an automatic binding process becomes easy to recognize when one considers that we are not able to predict the future. That is, at any given moment, we cannot identify what information will be important to remember later, yet we are still able to recall information that was not intentionally encoded. Our data suggest that the hippocampus is not just critical for successful binding of intentional associations, but that it may automatically bind multiple aspects of an event, and that this binding may enhance subsequent memory retrieval even when testing is item-based.

The automatic binding of face and scene information is likely mediated by two mechanisms: one in which item and context are "fused" into a single representation and/or another in which the representations of item and context exist separately but are "linked." Different theoretical perspectives have referred to these two mechanisms, respectively, as integration and association (Murnane et al., 1999; Johnson & Chalfonte, 1994), elemental and configural association (Rudy & Sutherland, 1995), or blended and relational (Moses & Ryan, 2006; Cohen & Eichenbaum, 1993), although the conceptual distinctions appear relatively similar in all cases. O'Keefe and Nadel (1978) suggested that relational (associative) binding is dependent on hippocampal function, whereas elemental or blended (integrated) representations are mediated by nonhippocampal structures. Previous work emphasized integration processes and parahippocampal cortex in binding object and scene information (Hayes et al., 2007), whereas the current paradigm implicates the hippocampus in binding of face and scene information. These differences may be accounted for by differences in analysis approach and stimuli. For instance, binding of objects-in-scenes may be more likely to rely on integration, whereas binding of a face-in-scene may be more likely to rely on association. Integration and association processes likely co-occur in normal individuals, and participants may have adopted different encoding strategies across the two studies that increased reliance on the corresponding neural regions (Diana, Yonelinas, & Ranganath, 2008; Staresina & Davachi, 2008).

Recapitulation of Encoding-related Activity at Retrieval

The second goal of the study was to test the hypothesis that regions involved in processing the context of the faces during encoding would be spontaneously reactivated during retrieval. We identified scene processing regions by comparing encoding activity (including both subsequently remembered and subsequently forgotten trials) in the FS–F > the F–F condition, which resulted in activation in ventral visual regions, including the parahippocampal gyrus, retrosplenial cortex, precuneus, and lateral parietal regions. Within these brain regions, we identified those showing greater RSA (hits > misses) in the FS–F > the F–F condition. Only one region survived the analysis: right parahippocampal cortex. Although the retrieval test focused on faces, right parahippocampal cortex showed greater RSA in the FS–F than in the F–F condition. Presentation of a face previously encoded within a scene may automatically reactivate the entire memory trace, with activation of relational scene information facilitating face recognition memory.

Greater activity in parahippocampal cortex was recently reported in a paradigm comparing passive viewing of famous relative to unfamiliar, nonfamous faces (Bar, Aminoff, & Ishai, 2008). The authors attributed parahippocampal cortex activation to retrieval of contextual associations. Interestingly, the authors note that when viewing famous faces, the participants “spontaneously remembered other pictures of these celebrities and information that they knew about them from the tabloids and their recent movies” (p. 1235). Indeed, it is this type of spontaneous reactivation of associated scene information that we hypothesize activates parahippocampal cortex in the current paradigm. More generally, parahippocampal cortex has been posited to play a role in retrieval of contextual associations, with anterior parahippocampal cortex mediating nonspatial associations and posterior coordinates mediating spatial associations (Aminoff, Gronau, & Bar, 2007; Bar & Aminoff, 2003).

Much of the work by Bar and Aminoff focused on well-learned associations (e.g., semantic memory or paradigms that required multiple days of training with the stimulus set). In contrast, in the current paradigm, scene reactivation was observed in response to a single learning trial (episodic memory), similar to our previous results (Hayes et al., 2007). Thus, at a minimum, it appears as though parahippocampal cortex is involved in retrieval of scene-related contexts in healthy adults (Hayes, Ryan, Schnyer, & Nadel, 2004), regardless of whether the associated scene context is episodic or semantic (Ryan, Cox, Hayes, & Nadel, 2008).

Furthermore, the results of the connectivity analysis revealed that right parahippocampal cortex activity was significantly correlated with a network of regions implicated in memory and perceptual processing, including the bilateral hippocampus, left amygdala, and left fusiform and lingual gyrus, left lateral occipital regions, and left ventrolateral and anterior prefrontal cortex. Thus, although right parahippocampal cortex was the only region to show reactivation effects, this region clearly exhibited interactions with multiple brain regions implicated in relational memory and face processing.

Neural Correlates of the Context Shift Decrement

When the CSD was entered as a regressor into the analysis of ESA for FS–F trials > F–F trials, we observed negative correlations in the left hippocampus and left parahippocampal cortex with the CSD. In contrast, the right anterior hippocampus and right parahippocampal cortex were positively correlated with CSD, along with item processing regions including bilateral rhinal cortex and bilateral fusiform gyrus. These results indicate that the left hippocampus and parahippocampal cortex may mediate flexible, domain-general representations. The right hippocampus and parahippocampal cortex, in association with item processing regions including bilateral rhinal cortex and fusiform gyri, may mediate detailed visual form-specific representations.

The hemispheric asymmetry in hippocampal and parahippocampal activations is consistent with several lines of research linking the left hemisphere to more abstract representations and the right hemisphere to more perceptual representations. For example, fMRI studies of object priming have reported a hemispheric asymmetry within fusiform cortex. Left fusiform regions exhibit equivalent neural priming (reduced activation) in response to repetitions of identical objects, repetitions of identical objects from a different viewpoint (Vuilleumier et al., 2002), and repetitions of different exemplars from the same object category (e.g., umbrellas of a different shape and color) (Koutstaal et al., 2001). These results suggest that neural priming in left fusiform cortex is not specific to visuoperceptual features of an object, but may be related to semantic processing (Buckner, Koutstaal, Schacter, & Rosen, 2000). In contrast, neural priming of specific object forms appears to be mediated by right posterior regions including fusiform cortex for objects (Koutstaal et al., 2001; Vuilleumier et al., 2002).

Thus, the results of object priming fMRI studies suggest that left fusiform cortex mediates abstract representations that are immune to study–test perceptual shifts, whereas right fusiform cortex mediates perceptual representations that are sensitive to study–test shifts. In the present study, we found a similar hemispheric asymmetry but in hippocampal and parahippocampal regions rather than in the fusiform gyrus. One difference is that instead of object priming, which is mediated by sensory cortices, we measured recognition memory, which is mediated by MTL regions. Another difference is that instead of changing object features (e.g., size, viewpoint) between study and test, which is information that can be stored in visual regions, we changed face–scene associations, which is information that requires relational memory processing in the MTL.

Regardless of these open questions for future research, the present results provide the first evidence that the lateralization of hippocampal/parahippocampal activity during encoding predicts whether or not recognition will be affected by context changes. In contrast with these regions, activity in rhinal cortex was associated with a greater CSD, regardless of hemisphere (see Figure 3B). This finding is consistent with evidence that rhinal cortex processes perceptual memory representations (Murray, Bussey, & Saksida, 2007). Several authors have emphasized differences between the memory functions of rhinal cortex, which mediates inflexible nonrelational representations, and the hippocampus, which mediates flexible relational representations (Eichenbaum, Yonelinas, & Ranganath, 2007; Aggleton & Brown, 1999; Nadel, Willner, & Kurz, 1985). The present findings suggest that the difference is really between rhinal cortex and the left hippocampus, because the right anterior hippocampus showed an activation pattern similar to rhinal cortex. The current dissociation between the memory functions of the left and right hippocampus fits well with the idea that the left hippocampus mediates more abstract memory representations and associations, whereas the right hippocampus is more specifically associated with spatial memory (Burgess, Maguire, & O'Keefe, 2002). It is not clear what circumstances encourage a perceptually specific versus an abstract memory representation. Certainly, there is evidence from patients suggesting that right MTL regions are important for spatial relations (Pigott & Milner, 1993; Smith & Milner, 1981, 1989). It is likely that both types of representations can be formed in a normal system, possibly depending on the strategy used by participants (O'Keefe & Nadel, 1978).

An interesting question that arises is how encoding activity in medial temporal regions is linked to both enhanced memory and reduced memory. First, it is important to recognize that the analysis of FS–F ESA and the CSD correlation analysis investigate different questions, and convergence is not necessarily required. The FS–F ESA analysis is trial-based, and isolates regions associated with subsequent memory (both the left and right hippocampus, in this case). In contrast, the CSD correlation analysis focuses on individual differences, because the CSD was entered as a subject-specific regressor. Nevertheless, these distinct approaches provided converging evidence that activation in the left hippocampus predicted subsequent memory and that individuals with greater activity in the left hippocampus were more resistant to contextual change (i.e., had lower CSDs), as regions of activation in these two analyses overlapped (coordinates of negative correlation with CSD: [−34 −19 −15]; coordinates of peak activation in ESA FS–F: [−30 −19 −11]). Activity in the right hippocampus was not overlapping, as the correlation with the CSD was located at [18 −8 −12], whereas activation identified in the FS–F ESA analysis was more lateral and inferior [34 −4 −34] or more posterior [34 −22 −11]. Thus, although the FS–F ESA analysis indicates that the right hippocampus is associated with subsequent memory, the CSD correlation analysis indicates that, across participants, more flexible representations are associated with less activity in this region.
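The overlap argument can be made concrete by computing the distances between the peaks cited above; the short calculation below is only illustrative arithmetic, not an analysis from the study.

# Euclidean distances (mm) between the MNI peaks cited above; illustrative
# arithmetic only. The left-hemisphere peaks are about 6 mm apart (within the
# 8 mm smoothing kernel), whereas the right-hemisphere peaks are over 20 mm apart.
import numpy as np

def dist(a, b):
    return float(np.linalg.norm(np.subtract(a, b)))

print(dist([-34, -19, -15], [-30, -19, -11]))  # left hippocampus peaks: ~5.7
print(dist([18, -8, -12], [34, -4, -34]))      # right hippocampus peaks: ~27.5
print(dist([18, -8, -12], [34, -22, -11]))     # right hippocampus peaks: ~21.3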

Conclusions

Whereas previous functional neuroimaging studies focused on intentional relational memory, the present study investigated incidental relational memory. During both encoding and retrieval, participants were instructed to focus only on the faces and no reference was made to the associated context. Yet, despite these instructions, encoding activity predicting later memory (ESA) was greater in bilateral hippocampal regions when the faces were presented on a rich, naturalistic scene. Investigation of recapitulation of encoding-related processes during retrieval revealed that a scene processing region, right parahippocampal cortex, was spontaneously reactivated at retrieval. Furthermore, this region showed extensive connectivity with the bilateral hippocampus, left amygdala, posterior visual regions, and left ventrolateral and anterior prefrontal cortex during successful retrieval. Finally, correlation of the CSD with neural activity during successful encoding revealed that left MTL regions, including the hippocampus and parahippocampal cortex, correlated negatively with the CSD; that is, increased activity in these regions was associated with a smaller behavioral context effect. In contrast, the right hippocampus and parahippocampal cortex were positively correlated with the CSD; increased activity in these regions was associated with larger behavioral context effects. Taken together, these findings provide the first evidence available regarding the neural mechanisms of incidental context effects on face recognition, and clarify the causes of the butcher-on-the-bus phenomenon.

Acknowledgments

This work was supported by the National Institutes of Health, the National Institute on Aging (NIA) [grant numbers R01 AG019731 and R01 AG23770 awarded to R. C. and grant number F32 AG029738 awarded to S. M. H.] and the National Institute of Neurological Disorders and Stroke (NINDS) [grant numbers PPG NS 41328 and R01 NS 50329 awarded to Allen Song]. E. B. was supported by funding from the National Institutes of Health awarded to Duke University's Post-baccalaureate Research Education Program (PREP).

We thank Allen Song for providing technical assistance with MRI data acquisition, James Kragel for assistance with data preprocessing and analysis, and Odera Umeano for assistance with data collection. Portions of the research in this article use the FERET database of facial images collected under the FERET program, sponsored by the DOD Counterdrug Technology Development Program Office, as well as face images provided by the Computer Vision Library, University of Ljubljana, Slovenia (www.lrv.fri.uni-lj.si/facedb.html; and Solina et al., 2003). We also thank the anonymous reviewers for helpful comments on the manuscript.

Reprint requests should be sent to Scott M. Hayes, Memory Disorders Research Center (151A), Boston VA Healthcare System, 150 South Huntington Ave., Boston, MA 02130, or via e-mail: scott.hayes@va.gov.

REFERENCES

Aggleton, J. P., & Brown, M. W. (1999). Episodic memory, amnesia, and the hippocampal–anterior thalamic axis. Behavioral and Brain Sciences, 22, 425–489.

Aminoff, E., Gronau, N., & Bar, M. (2007). The parahippocampal cortex mediates spatial and nonspatial associations. Cerebral Cortex, 17, 1493–1503.

Bar, M., & Aminoff, E. (2003). Cortical analysis of visual context. Neuron, 38, 347–358.

Bar, M., Aminoff, E., & Ishai, A. (2008). Famous faces activate contextual associations in the parahippocampal cortex. Cerebral Cortex, 18, 1233–1238.

Buckner, R. L., Koutstaal, W., Schacter, D. L., & Rosen, B. R. (2000). Functional MRI evidence for a role of frontal and inferior temporal cortex in amodal components of priming. Brain, 123, 620–640.

Burgess, N., Maguire, E. A., & O'Keefe, J. (2002). The human hippocampus and spatial and episodic memory. Neuron, 35, 625–641.

Chua, E. F., Schacter, D. L., Rand-Giovannetti, E., & Sperling, R. A. (2007). Evidence for a specific role of the anterior hippocampal region in successful associative encoding. Hippocampus, 17, 1071–1080.

Cohen, N. J., & Eichenbaum, H. (1993). Memory, amnesia, and the hippocampal system. Cambridge, MA: The MIT Press.

Dale, A. M. (1999). Optimal experimental design for event-related fMRI. Human Brain Mapping, 8, 109–114.

Daselaar, S. M., Fleck, M. S., & Cabeza, R. (2006). Triple dissociation in the medial temporal lobes: Recollection, familiarity, and novelty. Journal of Neurophysiology, 96, 1902–1911.

Dennis, N. A., Hayes, S. M., Prince, S. E., Madden, D. J., Huettel, S. A., & Cabeza, R. (2008). Effects of aging on the neural correlates of successful item and source memory encoding. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 791–808.

Diana, R. A., Yonelinas, A. P., & Ranganath, C. (2008). The effects of unitization on familiarity-based source memory: Testing a behavioral prediction derived from neuroimaging data. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 730–740.

Eichenbaum, H., Yonelinas, A. P., & Ranganath, C. (2007). The medial temporal lobe and recognition memory. Annual Review of Neuroscience, 30, 123–152.

Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601.

Gruppuso, V., Lindsay, D. S., & Masson, M. E. J. (2007). I'd know that face anywhere! Psychonomic Bulletin & Review, 14, 1085–1089.

Hayes, S. M., Nadel, L., & Ryan, L. (2007). The effect of scene context on episodic object recognition: Parahippocampal cortex mediates memory encoding and retrieval success. Hippocampus, 17, 873–889.

Hayes, S. M., Ryan, L., Schnyer, D. M., & Nadel, L. (2004). An fMRI study of episodic memory: Retrieval of object, spatial, and temporal information. Behavioral Neuroscience, 118, 885–896.

Hockley, W. E. (2008). The effects of environmental context on recognition memory and claims of remembering. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1412–1429.

Humphreys, M. S., Pike, R., Bain, J. D., & Tehan, G. (1988). Using multilist designs to test for contextual reinstatement effects in recognition. Bulletin of the Psychonomic Society, 26, 200–202.

Johnson, M. K., & Chalfonte, B. L. (1994). Binding complex memories: The role of reactivation and the hippocampus. In D. L. Schacter & E. Tulving (Eds.), Memory systems 1994 (pp. 311–350). Cambridge, MA: MIT Press.

Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114, 3–28.

Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.

Koutstaal, W., Wagner, A. D., Rotte, M., Maril, A., Buckner, R. L., & Schacter, D. L. (2001). Perceptual specificity in visual object priming: Functional magnetic resonance imaging evidence for a laterality difference in fusiform cortex. Neuropsychologia, 39, 184–199.

Macken, W. J. (2002). Environmental context and recognition: The role of recollection and familiarity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 153–161.

Maldjian, J. A., Laurienti, P. J., & Burdette, J. H. (2004). Precentral gyrus discrepancy in electronic versions of the Talairach atlas. Neuroimage,
21
,
450
455
.
Maldjian
,
J. A.
,
Laurienti
,
P. J.
,
Kraft
,
R. A.
, &
Burdette
,
J. H.
(
2003
).
An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets.
Neuroimage
,
19
,
1233
1239
.
Mandler
,
G.
(
1980
).
Recognizing: The judgment of previous occurrence.
Psychological Review
,
87
,
252
271
.
Marsolek
,
C. J.
(
1999
).
Dissociable neural subsystems underlie abstract and specific object recognition.
Psychological Science
,
10
,
111
118
.
Martinez
,
A. M.
, &
Benavente
,
R.
(
1998
).
The AR face database.
CVC Technical Report #24.
McCarthy
,
G.
,
Puce
,
A.
,
Gore
,
J. C.
, &
Allison
,
T.
(
1997
).
Face-specific processing in the human fusiform gyrus.
Journal of Cognitive Neuroscience
,
9
,
605
610
.
Moscovitch
,
M.
(
1992
).
Memory and working-with-memory: A component process model based on modules and central systems.
Journal of Cognitive Neuroscience
,
4
,
257
267
.
Moscovitch
,
M.
(
2008
).
The hippocampus as a “stupid,” domain-specific module: Implications for theories of recent and remote memory, and of imagination.
Canadian Journal of Experimental Psychology, Revue Canadienne De Psychologie Experimentale
,
62
,
62
79
.
Moses
,
S. N.
, &
Ryan
,
J. D.
(
2006
).
A comparison and evaluation of the predictions of relational and conjunctive accounts of hippocampal function.
Hippocampus
,
16
,
43
65
.
Murnane
,
K.
,
Phelps
,
M. P.
, &
Malmberg
,
K.
(
1999
).
Context-dependent recognition memory: The ICE theory.
Journal of Experimental Psychology: General
,
128
,
403
415
.
Murray
,
E. A.
,
Bussey
,
T. J.
, &
Saksida
,
L. M.
(
2007
).
Visual perception and memory: A new view of medial temporal lobe function in primates and rodents.
Annual Review of Neuroscience
,
30
,
99
122
.
Nadel
,
L.
,
Willner
,
J.
, &
Kurz
,
E. M.
(
1985
).
Cognitive maps and environmental context.
In P. Balsam & A. Tomie (Eds.),
Context and learning
(pp.
385
406
).
Hillsdale, NJ
:
Erlbaum
.
O'Keefe
,
J.
, &
Nadel
,
L.
(
1978
).
The hippocampus as a cognitive map.
Oxford
:
Oxford University Press
.
Park
,
D. C.
,
Polk
,
T. A.
,
Park
,
R.
,
Minear
,
M.
,
Savage
,
A.
, &
Smith
,
M. R.
(
2004
).
Aging reduces neural specialization in ventral visual cortex.
Proceedings of the National Academy of Sciences, U.S.A.
,
101
,
13091
13095
.
Park
,
D. C.
,
Puglisi
,
J. T.
,
Smith
,
A. D.
, &
Dudley
,
W. N.
(
1987
).
Cue utilization and encoding specificity in picture recognition in older adults.
Journal of Gerontology
,
42
,
423
425
.
Park
,
D. C.
,
Puglisi
,
J. T.
, &
Sovacool
,
M.
(
1984
).
Picture memory in older adults: Effects of contextual detail at encoding and retrieval.
Journal of Gerontology
,
39
,
213
215
.
Phillips
,
P. J.
,
Moon
,
H.
,
Rizvi
,
S. A.
, &
Rauss
,
P. J.
(
2000
).
The FERET evaluation methodology for face recognition algorithms.
IEEE Transactions on Pattern Analysis and Machine Intelligence
,
22
,
1090
1104
.
Pigott
,
S.
, &
Milner
,
B.
(
1993
).
Memory for different aspects of complex visual scenes after unilateral temporal-lobe or frontal-lobe resection.
Neuropsychologia
,
31
,
1
15
.
Prince
,
S. E.
,
Daselaar
,
S. M.
, &
Cabeza
,
R.
(
2005
).
Neural correlates of relational memory: Successful encoding and retrieval of semantic and perceptual associations.
Journal of Neuroscience
,
25
,
1203
1210
.
Pruessmann
,
K. P.
,
Weiger
,
M.
,
Bornert
,
P.
, &
Boesiger
,
P.
(
2001
).
Advances in sensitivity encoding with arbitrary k-space trajectories.
Magnetic Resonance in Medicine
,
46
,
638
651
.
Rissman
,
J.
,
Gazzaley
,
A.
, &
D'Esposito
,
M.
(
2004
).
Measuring functional connectivity during distinct stages of a cognitive task.
Neuroimage
,
23
,
752
763
.
Rorden
,
C.
, &
Brett
,
M.
(
2000
).
Stereotaxic display of brain lesions.
Behavioural Neurology
,
12
,
191
200
.
Rudy
,
J. W.
, &
Sutherland
,
R. J.
(
1995
).
Configural association theory and the hippocampal formation: An appraisal and reconfiguration.
Hippocampus
,
5
,
375
389
.
Ryan
,
L.
,
Cox
,
C.
,
Hayes
,
S. M.
, &
Nadel
,
L.
(
2008
).
Hippocampal activation during episodic and semantic memory retrieval: Comparing category production and category cued recall.
Neuropsychologia
,
46
,
2109
2121
.
Smith
,
M. L.
, &
Milner
,
B.
(
1981
).
The role of the right hippocampus in the recall of spatial location.
Neuropsychologia
,
19
,
781
793
.
Smith
,
M. L.
, &
Milner
,
B.
(
1989
).
Right hippocampal impairment in the recall of spatial location: Encoding deficit or rapid forgetting?
Neuropsychologia
,
27
,
71
81
.
Smith
,
S. M.
, &
Vela
,
E.
(
1986
).
Outshining: The relative effectiveness of cues.
Bulletin of the Psychonomic Society
,
24
,
350
.
Solina
,
F.
,
Peer
,
P.
,
Batageli
,
B.
,
Juvan
,
S.
, &
Kovac
,
J.
(
2003
).
Color-based face detection in the “15 seconds of fame” art installation
, Paper presented at the Conference on Computer Vision/Computer Graphics Collaboration for Model-based Imaging, Rendering, Image Analysis and Graphical Special Effects, 10–11 March, INRIA Rocquencourt, France.
Staresina
,
B. P.
, &
Davachi
,
L.
(
2008
).
Selective and shared contributions of the hippocampus and perirhinal cortex to episodic item and associative encoding.
Journal of Cognitive Neuroscience
,
20
,
1478
1489
.
Truong
,
T. K.
, &
Song
,
A. W.
(
2008
).
Single-shot dual-z-shimmed sensitivity-encoded spiral-in/out imaging for functional MRI with reduced susceptibility artifacts.
Magnetic Resonance in Medicine
,
59
,
221
227
.
Tsukiura
,
T.
, &
Cabeza
,
R.
(
2008
).
Orbitofrontal and hippocampal contributions to memory for face–name associations: The rewarding power of a smile.
Neuropsychologia
,
46
,
2310
2319
.
Tzourio-Mazoyer
,
N.
,
Landeau
,
B.
,
Papathanassiou
,
D.
,
Crivello
,
F.
,
Etard
,
O.
,
Delcroix
,
N.
,
et al
(
2002
).
Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain.
Neuroimage
,
15
,
273
289
.
Uncapher
,
M. R.
,
Otten
,
L. J.
, &
Rugg
,
M. D.
(
2006
).
Episodic encoding is more than the sum of its parts: An fMRI investigation of multifeatural contextual encoding.
Neuron
,
52
,
547
556
.
Vuilleumier
,
P.
,
Henson
,
R. N.
,
Driver
,
J.
, &
Dolan
,
R. J.
(
2002
).
Multiple levels of visual object constancy revealed by event-related fMRI of repetition priming.
Nature Neuroscience
,
5
,
491
499
.
Winograd
,
E.
, &
Rivers-Bulkeley
,
N. T.
(
1977
).
Effects of changing contexts on remembering faces.
Journal of Experimental Psychology: Human Learning and Memory
,
3
,
397
405
.
Yovel
,
G.
, &
Paller
,
K. A.
(
2004
).
The neural basis of the butcher-on-the-bus phenomenon: When a face seems familiar but is not remembered.
Neuroimage
,
21
,
789
800
.