It is known that emotional facial expressions modulate the perception and subsequent recollection of faces and that aging alters these modulatory effects. Yet, the underlying neural mechanisms are not well understood, and they were the focus of the current fMRI study. We scanned healthy young and older adults while they viewed happy, neutral, or angry faces paired with names. Participants were then provided with the names of the faces and asked to recall the facial expression associated with each name. fMRI analyses focused on the fusiform face area (FFA), the posterior superior temporal sulcus (pSTS), the OFC, the amygdala (AMY), and the hippocampus (HC). Univariate activity, multivariate pattern analysis (MVPA), and functional connectivity analyses were performed. The study yielded two main sets of findings. First, in pSTS and AMY, univariate activity and MVPA discrimination during the processing of facial expressions were similar in young and older adults, whereas in FFA and OFC, MVPA discriminated facial expressions less accurately in older than young adults. These findings suggest that facial expression representations in FFA and OFC reflect age-related dedifferentiation and positivity effects. Second, HC–OFC connectivity showed subsequent memory effects (SMEs) for happy expressions in both age groups, HC–FFA connectivity exhibited SMEs for happy and neutral expressions in young adults, and HC–pSTS interactions displayed SMEs for happy expressions in older adults. These results could be related to compensatory mechanisms and positivity effects in older adults. Taken together, the results clarify the effects of aging on the neural mechanisms of perceiving and encoding facial expressions.

The most common memory complaint in healthy older adults (e.g., 83% of respondents in Bolla, Lindgren, Bonaccorsy, & Bleecker, 1991; see also Cohen & Faulkner, 1986; Zelinski, Gilewski, & Thompson, 1980) is difficulty in remembering people's names. This deficit, which has been confirmed in laboratory studies of face–name associations (James, Fogler, & Tauber, 2008; Naveh-Benjamin, Guez, Kilb, & Reedy, 2004; Crook & West, 1990), is not surprising given that older adults are impaired both in processing visual stimuli (Baltes & Lindenberger, 1997; Lindenberger & Baltes, 1994), including face perception (for review, see Boutet, Taler, & Collin, 2015), and in establishing new associations (Greene & Naveh-Benjamin, 2020; Naveh-Benjamin, 2000). An important component of older adults' visual perception deficits is a reduction in neural specificity known as the age-related dedifferentiation effect (for review, see Koen & Rugg, 2019). In contrast, the associative memory deficit has been linked to impaired hippocampal activity (Tsukiura et al., 2011; Dennis et al., 2008) and hippocampal–cortical connectivity (Ness et al., 2022; Tsukiura et al., 2011; Leshikar, Gutchess, Hebrank, Sutton, & Park, 2010). In a previous study, we found that functional connectivity between the hippocampus (HC) and the OFC during face–name associative learning was enhanced by happy facial expressions and that this mechanism was related to better memory for happy faces paired with names (Tsukiura & Cabeza, 2008). Given that older adults show the age-related positivity effect, a bias toward positive emotional stimuli and a tendency to interpret neutral stimuli as emotionally positive (for review, see Mather & Carstensen, 2005), an obvious question is whether the positivity effect could enhance HC–cortex connectivity during the encoding of face–name associations. If this effect were also associated with better face–name learning, it would be an example of functional compensation in older adults. Below, we briefly describe the dedifferentiation and positivity effects and their implications for the present study.

The age-related dedifferentiation effect in visual perception refers to the finding that the neural representations of different visual stimuli are less distinct in older than young adults (Park et al., 2004; for review, see Koen & Rugg, 2019). This effect has been demonstrated for a variety of tasks and stimuli (Deng et al., 2021; Hill, King, & Rugg, 2021; Saverino et al., 2016; Dennis & Cabeza, 2011; Kalkstein, Checksfield, Bollinger, & Gazzaley, 2011; St-Laurent, Abdi, Burianova, & Grady, 2011; Park, Carp, Hebrank, Park, & Polk, 2010; Payer et al., 2006), including faces (Goh, Suzuki, & Park, 2010). Age-related dedifferentiation has traditionally been examined using univariate analyses and, more recently, using multivariate representational analyses, such as multivariate pattern analysis (MVPA) (Katsumi, Andreano, Barrett, Dickerson, & Touroutoglou, 2021; Hill et al., 2021; Dennis et al., 2019). MVPA is used to measure the discriminability between different stimuli (e.g., faces vs. objects in the work of Haxby et al., 2001), different exemplars of the same class (e.g., different facial identities in the work of Ghuman et al., 2014), or different qualities across stimuli of the same class (e.g., different facial expressions in the work of Wegrzyn et al., 2015; Harry, Williams, Davis, & Kim, 2013). In the present study, we focused on age-related dedifferentiation in perceiving different facial expressions. Older adults discriminate facial expressions less accurately than young adults, and this deficit has been interpreted as evidence of age-related dedifferentiation (Franklin & Zebrowitz, 2017). Two potential regions reflecting age-related dedifferentiation for facial expressions are the fusiform face area (FFA), which is sensitive to the processing of facial expressions (Wegrzyn et al., 2015; Skerry & Saxe, 2014; Harry et al., 2013), and OFC, which is involved in the processing of socioemotional signals, including facial expressions (Goodkind et al., 2012; Watson & Platt, 2012; Heberlein, Padon, Gillihan, Farah, & Fellows, 2008; Hornak et al., 2003; Hornak, Rolls, & Wade, 1996). Both FFA and OFC are affected by age-related atrophy and dedifferentiation (Katsumi et al., 2021; Xie et al., 2021; Shen et al., 2013; Lee, Grady, Habak, Wilson, & Moscovitch, 2011; Goh et al., 2010; Fjell et al., 2009; Salat et al., 2009; Park et al., 2004) and hence were likely to show age-related dedifferentiation for facial expressions in the present study.

The age-related positivity effect refers to the finding that older adults often show a bias toward positive stimuli and interpret ambiguous socioemotional stimuli as more positive than young adults do (for review, see Mather & Carstensen, 2005). In behavioral studies, the positivity effect has been found for a variety of emotional stimuli (Huan, Liu, Lei, & Yu, 2020; Gallo, Korthauer, McDonough, Teshale, & Johnson, 2011; van Reekum et al., 2011; Comblain, D'Argembeau, & van der Linden, 2005), including faces (Zebrowitz, Boshyan, Ward, Gutchess, & Hadjikhani, 2017; Riediger, Voelkle, Ebner, & Lindenberger, 2011; Leigland, Schulz, & Janowsky, 2004). In fMRI studies, the memory-related positivity effect in older adults has been linked to age-related changes in functional connectivity for emotional pictures (Addis, Leclerc, Muscatell, & Kensinger, 2010; St Jacques, Dolcos, & Cabeza, 2009) and to an age-related increase in functional connectivity and memory performance for emotionally positive pictures (Addis et al., 2010). These changes could be attributed to functional compensation, which refers to the cognition-enhancing recruitment of neural resources (for review, see Cabeza et al., 2018). In our prior fMRI study of memory for face–name associations, we found that happy facial expressions boosted functional connectivity between HC and OFC to a greater extent for subsequently remembered than forgotten stimuli (Tsukiura & Cabeza, 2008). Thus, in the present study, we were interested in (1) whether we would find an age-related increase in functional connectivity for happy faces between HC and OFC or other regions related to processing facial expressions, such as the posterior superior temporal sulcus (pSTS) (Wegrzyn et al., 2015; Said, Moore, Engell, Todorov, & Haxby, 2010) or FFA (Wegrzyn et al., 2015; Skerry & Saxe, 2014; Harry et al., 2013), and (2) whether this effect would be associated with subsequent memory (for review, see Paller & Wagner, 2002), suggesting age-related compensation in memory.

In the present event-related fMRI study, participants were scanned while viewing happy, neutral, or angry faces paired with names, and memory for the facial expressions was assessed by presenting the names as cues and asking participants to recall the facial expression of the face associated with each cued name (see Figure 1). We performed traditional univariate analyses, but our focus was the dedifferentiation effect measured with MVPA and the positivity effect measured with functional connectivity analyses. To investigate dedifferentiation in perceiving facial expressions, an MVPA classifier was trained to distinguish between happy, neutral, and angry expressions and was then used to assess the discriminability among these expressions during face perception. If the MVPA classifiers did not distinguish these facial expressions in older adults, this would reflect age-related dedifferentiation in perceiving facial expressions. As noted above, our candidate regions for age-related dedifferentiation of facial expressions were FFA and OFC (Katsumi et al., 2021; Xie et al., 2021; Lee et al., 2011; Goh et al., 2010; Park et al., 2004), which are involved in the processing of facial expressions (Wegrzyn et al., 2015; Skerry & Saxe, 2014; Harry et al., 2013; Watson & Platt, 2012; Heberlein et al., 2008; Hornak et al., 2003; Hornak et al., 1996) and show atrophy in older adults (Shen et al., 2013; Fjell et al., 2009; Salat et al., 2009). In addition, the MVPA also investigated neural specificity for facial expressions in the amygdala (AMY), which is related to the perception of highly arousing facial expressions (Yang et al., 2002; Breiter et al., 1996), and pSTS, which is related to the processing of face-based social signals, including facial expressions and eye movements (Wegrzyn et al., 2015; Said et al., 2010; Puce, Allison, Bentin, Gore, & McCarthy, 1998). To investigate the age-related positivity effect, we performed functional connectivity analyses for subsequently remembered and forgotten trials. As explained above, we focused on whether HC–cortex functional connectivity, which has been linked to the age-related positivity effect (Addis et al., 2010), is associated with successful memory in older adults. If so, such an effect would be consistent with age-related compensation.

Figure 1.

Example of the encoding and retrieval trials. (A) Example of the encoding trials. (B) Examples of the retrieval trials. Facial stimuli in this figure were taken from a royalty-free database (https://www.photo-ac.com/) and are shown for illustration purposes only. All verbal items were presented in Japanese; English is used here for illustration purposes only.


Participants

In this study, we scanned 36 young (16 women) and 36 older (18 women) adults, who were paid for their participation in the fMRI experiment. All participants were right-handed native Japanese speakers with no history of neurological or psychiatric disorders. Their vision was normal or corrected to normal with glasses. All young participants were recruited from the Kyoto University community, and all older participants were recruited from the Kyoto City Silver Human Resource Center. All participants provided written informed consent under a protocol approved by the institutional review board of the Graduate School of Human and Environmental Studies, Kyoto University (19-H-10). An a priori power analysis was conducted for a repeated-measures ANOVA with a between-subjects factor of Age Group (Young and Old), a within-subject factor of Facial Expression (Happy, Neutral, and Angry), and their interaction. Using G*Power Version 3.1 (Faul, Erdfelder, Lang, & Buchner, 2007), we estimated a required total sample size of 56 (28 young and 28 older adults), assuming a small-to-medium effect size (f = 0.2), an error probability of α = .05, and power of 0.90. This estimated sample size is consistent with a similar fMRI study investigating effects of aging and facial expressions on neural mechanisms during the processing of faces (Ebner, Johnson, & Fischer, 2012). To retain sufficient power in the case of data loss due to poor performance, excessive head motion, and so forth, we recruited 36 young and 36 older adults.

All participants completed several neuropsychological tests, including the Japanese version of the Flinders Handedness Survey (FLANDERS; Okubo, Suzuki, & Nicholls, 2014; Nicholls, Thomas, Loetscher, & Grimshaw, 2013), the Japanese version of the Montreal Cognitive Assessment (MoCA-J; Fujiwara et al., 2010; Nasreddine et al., 2005), and the Center for Epidemiologic Studies Depression scale (CES-D; Shima, 1985; Radloff, 1977). One young and two older participants showed head movement larger than 1.5 voxels in two or more fMRI runs. In addition, one older participant misunderstood the procedures of the encoding task, one older participant felt sick in the MRI scanner, and one young and one older participant showed possible pathological changes (probable arachnoid cyst) in their structural MRIs. In the neuropsychological tests, one young participant scored more than 2 SD below the young group's mean on the MoCA-J, and two young participants and one older participant scored more than 2 SD worse than their respective group's mean on the CES-D. In the behavioral performance of the fMRI task, four young and two older participants had fewer than three trials in at least one experimental condition of the fMRI analyses. Based on these exclusion criteria, behavioral and MRI data from nine young and eight older participants were excluded from all analyses. Thus, the analyses were based on data from 27 young (12 women; mean age = 21.19 [SD = 1.62] years) and 28 older (14 women; mean age = 67.36 [SD = 2.57] years) adults.

Age, years of education, and FLANDERS, MoCA-J, and CES-D scores were compared between the young and older groups with two-sample t tests (two-tailed). Significant group differences were identified in age, t(53) = 79.38, p < .001, d = 21.41, and the MoCA-J score, t(53) = 4.75, p < .001, d = 1.28. However, we did not find significant differences in years of education, t(53) = 0.60, p = .55, d = 0.16; the FLANDERS score, t(53) = 1.31, p = .20, d = 0.35; or the CES-D score, t(53) = 1.13, p = .26, d = 0.31. Detailed profiles of the young and older adults whose data were analyzed are summarized in Table 1.

Table 1.

Participant Characteristics

                     Young (SD)     Old (SD)       Two-sample t test
Age, years           21.19 (1.62)   67.36 (2.57)   Young < Old***
Sex, male:female     15:12          14:14
Education, years     14.07 (1.24)   14.32 (1.79)   n.s.
FLANDERS             9.41 (1.37)    9.79 (0.69)    n.s.
MoCA-J               28.52 (0.85)   26.36 (2.22)   Young > Old***
CES-D                8.19 (4.46)    9.75 (5.72)    n.s.

FLANDERS = Japanese version of the Flinders Handedness Survey; MoCA-J = Japanese version of the Montreal Cognitive Assessment; CES-D = the Center for Epidemiologic Studies Depression scale; n.s. = not significant.

*** p < .001.

Stimuli

The stimuli were color photographs of the faces of 120 unfamiliar persons (60 female and 60 male) selected from an in-house database, with happy, neutral, and angry facial expressions available for each person. The database contained faces of volunteer pedestrians in their thirties and forties, recruited in downtown Kyoto, who were asked to pose happy, angry, and neutral facial expressions. All pictures were taken against a gray background, and the eyes of each face were directed to the front. Easily identifiable visual features of each picture, such as blemishes, freckles, moles, scars, and ornaments, were removed (Sugimoto, Dolcos, & Tsukiura, 2021), and the color of the clothes in each picture was converted to a uniform black using image processing software (Adobe Photoshop CS 5.1). All pictures were resized to a resolution of 280 × 350 pixels. The pictures of the 120 persons with three facial expressions each (360 pictures in total) were divided into three lists of 40 persons each, with age and sex equated across lists. Using data from 24 healthy young adults in a previous study (Sugimoto et al., 2021), emotional arousal and valence of the happy, neutral, and angry expressions were also equated across the lists. Arousal and valence scores were compared among the lists for each facial expression with one-way ANOVAs. The ANOVAs on arousal scores showed no significant difference among the lists for any facial expression [happy: F(2, 117) = 0.02, p = .98, η2 = .00; neutral: F(2, 117) = 0.15, p = .86, η2 = .00; angry: F(2, 117) = 0.003, p = .997, η2 = .00], and neither did the ANOVAs on valence scores [happy: F(2, 117) = 0.06, p = .94, η2 = .00; neutral: F(2, 117) = 0.24, p = .79, η2 = .00; angry: F(2, 117) = 0.32, p = .73, η2 = .01]. Each list was assigned to the Happy, Neutral, or Angry condition of target faces to be encoded, and the assignment was counterbalanced across participants.
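For illustration, the list-equating check above can be reproduced with a one-way ANOVA per expression, as in the following minimal Python sketch (not part of the original analysis pipeline); the ratings are hypothetical stand-in data, and variable names are ours.

```python
# Minimal sketch of the list-equating check: a one-way ANOVA comparing
# arousal (or valence) ratings across the three stimulus lists within one
# facial expression. Data and variable names are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical mean arousal ratings for the 40 faces in each list
ratings = {"list1": rng.normal(6.1, 1.0, 40),
           "list2": rng.normal(6.1, 1.0, 40),
           "list3": rng.normal(6.1, 1.0, 40)}

f_val, p_val = stats.f_oneway(ratings["list1"], ratings["list2"], ratings["list3"])

# Effect size eta squared = SS_between / SS_total
all_scores = np.concatenate(list(ratings.values()))
grand_mean = all_scores.mean()
ss_between = sum(len(v) * (v.mean() - grand_mean) ** 2 for v in ratings.values())
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total

print(f"F(2, {len(all_scores) - 3}) = {f_val:.2f}, p = {p_val:.3f}, eta^2 = {eta_sq:.2f}")
```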

A set of Japanese family names was also employed in this study. The 160 most popular Japanese family names that are written with two kanji characters and can have different pronunciations were collected from an online database (myoji-yurai.net/prefectureRanking.htm). These 160 names were divided into four lists balanced for popularity. The 120 names in three of the lists were randomly paired with the 120 target faces, and the 40 names in the remaining list were used as distracters in the retrieval phase.

Experimental Procedures

fMRI runs included a memory task comprising the encoding and retrieval of face–name pairs and a functional localizer task. Encoding and retrieval runs of the memory task alternated across eight runs, with each retrieval run testing the face–name pairs encoded in the previous encoding run. Each of the four encoding-retrieval sets used a different list of face–name pairs, and there was approximately a 1-min interval between the encoding and retrieval runs in each set. After exiting the scanner, participants rated the faces for emotional arousal and valence. Stimulus presentation and the recording of participants' responses in all tasks were controlled by MATLAB scripts (www.mathworks.com). All participants were fully trained on the encoding and retrieval procedures before the experiment.

Memory Task

Figure 1 illustrates encoding and retrieval trials in the memory task for face–name pairs. During both encoding and retrieval, each stimulus was presented for 3500 msec and was followed by a jittered (2500–7500 msec) visual fixation as the ISI. During each encoding run, participants were presented with 30 face–name pairs one by one in random order. For each pair, they were instructed to learn the pair by reading the name silently and pressing a key to indicate the expression of the face (“Happy,” “Neutral,” or “Angry”). During each retrieval run, participants were presented with the 30 names from the face–name pairs encoded in the previous run, mixed with 10 new names, in random order. For each name, participants were told that if they believed the name had not been paired with a face in the previous encoding run, they should press “New.” If they believed the name had been paired with a face in the previous encoding run, they should indicate the expression of the face by pressing “Happy,” “Neutral,” or “Angry.” If they believed the name had been paired with a face but could not remember the expression, they should press “Unknown.” They were asked to respond as quickly as possible during both encoding and retrieval.

In the present study, we focused on analyses of fMRI data only from the encoding runs. Trials with no response in either the encoding or retrieval run, and trials in which the facial expression was judged erroneously during encoding, were excluded from all analyses. Among trials in which learned names were presented, trials in which the facial expression associated with the name was successfully recalled were defined as Hit; trials in which the facial expression associated with the name was erroneously recalled or categorized as “Unknown,” or in which the name was judged as “New,” were defined as Miss. The Hit and Miss trials were subdivided by the facial expression (Happy, Neutral, or Angry) presented during encoding.
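The trial-scoring rule above can be summarized in a short sketch; the function and field names below are hypothetical and simply encode the Hit/Miss/exclusion logic described in the text.

```python
# Hedged sketch of the trial-scoring rule: Hit = facial expression correctly
# recalled for a learned name; Miss = wrong expression, "Unknown", or "New"
# response to a learned name; trials with no response in either phase, or
# with an erroneous expression judgment at encoding, are excluded (None).
from typing import Optional

def score_trial(encoding_response: Optional[str],
                true_expression: str,
                retrieval_response: Optional[str]) -> Optional[str]:
    # Exclude trials with no response in either phase or an erroneous
    # expression judgment at encoding.
    if encoding_response is None or retrieval_response is None:
        return None
    if encoding_response != true_expression:
        return None
    # Correct recall of the studied expression -> Hit; otherwise Miss.
    if retrieval_response == true_expression:
        return f"{true_expression}-Hit"
    return f"{true_expression}-Miss"   # wrong expression, "Unknown", or "New"

print(score_trial("Happy", "Happy", "Happy"))    # Happy-Hit
print(score_trial("Happy", "Happy", "Unknown"))  # Happy-Miss
print(score_trial("Neutral", "Happy", "Happy"))  # None (excluded)
```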

Functional Localizer Task

After completing the memory task of face–name associations, participants performed a run of the functional localizer task (Matsuda et al., 2013), in which movies of emotional facial expressions were presented. The rationale for using movies of facial expressions was that dynamic facial expressions have produced greater activation in face-related regions than static images (Foley, Rippon, Thai, Longe, & Senior, 2012; Fox, Iaria, & Barton, 2009; Sato, Kochiyama, Yoshikawa, Naito, & Matsumura, 2004; LaBar, Crupain, Voyvodic, & McCarthy, 2003). In addition, the functional localizer task enabled us to identify brain regions involved in the common processing of multiple facial expressions rather than the processing of a single facial expression.

In this task, participants were presented with 2-sec movies of male and female faces, in which a neutral facial expression changed to an emotional expression of joy, fear, anger, or disgust, or with 2-sec control movies in which the original movies were transformed into mosaic forms. Thus, we prepared 16 movies, including 8 original and 8 control movies. In addition, we prepared another version of the 2-sec original and control movies, into which a building picture was momentarily inserted (100 msec) at a random time in approximately every third trial. Participants were required to press the corresponding button as quickly as possible when they noticed the building pictures. These movies with the momentary presentation of building pictures comprised four original and four control stimuli. All of these movies were presented one by one in random order for 2000 msec each, with a visual fixation shown as the ISI, jittered with variable durations (2500–5500 msec). This task included 120 trials, in which the 24 movie stimuli were each repeated 5 times.

Evaluation Task

After scanning, participants rated the emotional arousal and valence elicited by the encoded faces. In one run, the 120 encoded faces were presented and rated for emotional arousal (1 = calm, 9 = exciting), and in another run, the same faces were presented and rated for emotional valence (1 = unpleasant, 9 = pleasant). The faces were presented in random order, each for 2000 msec for young adults and 3000 msec for older adults, with a 1000-msec ISI. The order of the two rating runs was counterbalanced across participants.

MRI Data Acquisition

All MRI data were acquired with a MAGNETOM Verio 3-T MRI scanner (Siemens) located at the Kokoro Research Center, Kyoto University. Stimuli were visually presented on an MRI-compatible display (Nordic Neuro Lab, Inc.), and participants viewed the stimuli through a mirror attached to the head coil of the MRI scanner. Behavioral responses were recorded with a five-button fiber optic response pad (Current Designs, Inc.) operated with the right hand. Head motion in the scanner was minimized with a neck supporter and foam pads, and scanner noise was reduced with ear plugs. First, T1-weighted structural images in three orientations were acquired to localize the subsequent functional and high-resolution anatomical images. Second, functional images were recorded using a gradient-echo EPI pulse sequence sensitive to blood oxygenation level-dependent (BOLD) contrast (repetition time = 1500 msec, flip angle = 60°, echo time = 38.8 msec, field of view = 22.0 cm × 22.0 cm, matrix size = 100 × 100, 68 horizontal slices, slice thickness/gap = 2.2/0 mm, multiband factor = 4). Finally, high-resolution T1-weighted structural images were obtained using MPRAGE (repetition time = 2250 msec, echo time = 3.51 msec, field of view = 25.6 cm, matrix size = 256 × 256, 208 horizontal slices, slice thickness/gap = 1.0/0 mm).

fMRI Data Analysis

Preprocessing

All MRI data were preprocessed with Statistical Parametric Mapping 12 (SPM12; www.fil.ion.ucl.ac.uk/spm/software/spm12/) implemented in MATLAB (www.mathworks.com). In the preprocessing, fMRI data from the memory and functional localizer tasks were analyzed separately. First, the initial six volumes of functional images in each run were discarded to allow the MR signal to reach a steady state. Second, six head motion parameters were extracted from the series of remaining functional images. Third, the high-resolution structural image was coregistered to the first volume of the functional images. Fourth, during spatial normalization, we estimated parameters to fit the anatomical space of the structural image to the Tissue Probability Map in the Montreal Neurological Institute (MNI) template, and these parameters were applied to all functional images (resampled resolution = 2.2 mm × 2.2 mm × 2.2 mm). Finally, the normalized functional images were spatially smoothed with a Gaussian kernel of FWHM = 5 mm. The fully preprocessed functional images were used for the univariate analyses of the memory and functional localizer tasks and for the functional connectivity analysis of the memory task. For the MVPA of the memory task, functional images without spatial smoothing were analyzed.
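As an illustration of two of these steps (volume discarding and spatial smoothing), the following sketch uses nilearn in Python; the actual pipeline was run in SPM12, realignment and normalization are omitted here, and the file names are hypothetical.

```python
# Minimal sketch of volume discarding and 5-mm FWHM smoothing with nilearn.
# The reported preprocessing was done in SPM12; file names are hypothetical.
from nilearn import image

run_img = image.load_img("sub-01_task-encoding_run-1_bold.nii.gz")

# Discard the initial six volumes of the run.
n_vols = run_img.shape[3]
trimmed = image.index_img(run_img, slice(6, n_vols))

# Smooth with a 5-mm FWHM Gaussian kernel (skipped for the MVPA images,
# which were analyzed unsmoothed).
smoothed = image.smooth_img(trimmed, fwhm=5)
smoothed.to_filename("sub-01_task-encoding_run-1_bold_trim_s5.nii.gz")
```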

Univariate Analysis in the Functional Localizer Task and ROI Definition

Functional images in the functional localizer task were statistically analyzed to define ROIs related to the processing of faces and facial expressions. Statistical analyses were performed in SPM12 at the individual level and then at the group level. In the individual-level (fixed-effect) analysis, trial-related activation was modeled by convolving a vector of onsets with a canonical hemodynamic response function (HRF) in the context of the general linear model (GLM), in which the timing of stimulus presentation was defined as the onset with an event duration of 0 sec. This model included nine regressors reflecting four conditions related to the original movies of each facial expression (Happy, Fear, Angry, and Disgust), four control conditions related to the mosaic movies transformed from the original movies (Happy-Mosaic, Fear-Mosaic, Angry-Mosaic, and Disgust-Mosaic), and one dummy condition in which a building picture was inserted into the original and control movies. Six parameters related to head motion were also included in this model as confounding factors. Activation related to the processing of faces and facial expressions was identified by comparing all conditions of the original movies (Happy, Fear, Angry, and Disgust) with all control conditions of the mosaic movies (Happy-Mosaic, Fear-Mosaic, Angry-Mosaic, and Disgust-Mosaic), and the contrast yielded a t statistic in each voxel. A contrast image was created for each participant.
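The structure of such an individual-level GLM (zero-duration events convolved with the canonical HRF, plus six motion confounds) can be sketched as follows; this illustration uses nilearn rather than SPM12, and all onsets, run lengths, and file contents are hypothetical.

```python
# Hedged sketch of the individual-level GLM design: zero-duration events
# per condition convolved with the canonical (SPM) HRF, with six motion
# parameters added as confound regressors. All values are hypothetical.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

tr = 1.5                               # repetition time (sec)
n_scans = 200                          # hypothetical run length
frame_times = np.arange(n_scans) * tr

events = pd.DataFrame({
    "onset":      [10.0, 25.5, 41.0, 56.5],   # hypothetical onsets
    "duration":   [0.0, 0.0, 0.0, 0.0],       # modeled as 0-sec events
    "trial_type": ["Happy", "Fear", "Angry", "Disgust"],
})

motion = np.random.randn(n_scans, 6)   # stand-in for realignment parameters

design = make_first_level_design_matrix(
    frame_times, events, hrf_model="spm",
    add_regs=motion, add_reg_names=[f"motion_{i}" for i in range(6)])
print(design.columns.tolist())
```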

In the group-level (random-effect) analyses, contrast images produced by the individual-level analysis were analyzed by a one-sample t test for all participants in both age groups. This test produced an activation map reflecting greater activation during the general processing of faces and facial expressions than during simple visual processing. In the whole-brain analysis, the height threshold at the voxel level (p < .001) was corrected for whole-brain multiple comparisons by the family-wise error (FWE) rate (p < .05) with a minimum cluster size of 10 voxels.

Table 2 summarizes the results of the functional localizer task. Significant activation was identified in one cluster that included the right pSTS, right FFA, and right occipital face area (OFA), and in separate clusters in the left pSTS, left FFA, left OFA, and bilateral AMY. These regions were used as ROI masks in the univariate analysis of the memory task. In addition, the significant activation clusters were combined with anatomical masks to define the right pSTS, right FFA, and bilateral AMY ROIs used for MVPA (see Figure 4). The pSTS ROI was defined as the cluster showing significant activation within the right superior and middle temporal gyri of the automated anatomical labeling (AAL) ROI package, after removing the anterior temporal lobe as delineated in a previous study (Binney, Embleton, Jefferies, Parker, & Ralph, 2010). The cluster showing significant activation in the right fusiform gyrus of the AAL ROI package (Tzourio-Mazoyer et al., 2002) was defined as the right FFA ROI. The cluster showing significant activation in the bilateral AMY as defined in a previous study (Amunts et al., 2005) was defined as the bilateral AMY ROI.

Table 2.

Regions Showing Significant Activation in Functional Localizer Task

Regions                                    L/R   BA               x     y     z     Z value   k
Whole-brain analysis
Middle temporal gyrus (pSTS)a              L     21/22/37        −54   −64          5.74      116
Fusiform gyrus (FFA)a                      L     37              −41   −57   −22    6.88      64
Middle/inferior occipital gyrus (OFA)a     L     19              −45   −79   −6     6.03      74
Superior/middle temporal gyrus (pSTS)a     R     19/21/22/37/42   45   −75   −8     Inf       1110
  Fusiform gyrus (FFA)a
  Middle/inferior occipital gyrus (OFA)a
AMYa                                       L                     −21   −6    −17    7.69      102
AMYa                                       R                      21   −9    −15    Inf       71

ROI-based analysis (OFC)
Inferior orbitofrontal gyrusb              L     47              −41   29    −6     4.15

ROI-based analysis (posterior parts of the right superior and middle temporal gyri)
Superior/middle temporal gyrus (pSTS)b     R     21/22/37/41      45   −66   0      7.08      1227

ROI-based analysis (fusiform gyrus)
Fusiform gyrus (FFA)b                      R     19/37             41   −48   −19   Inf       171

BA = Brodmann area; k = cluster size; L = left; R = right.

a Cluster used as an ROI in MVPA after masking it with the corresponding anatomical ROI.
b MNI coordinate used as the center of a seed VOI in the functional connectivity analysis.

The ROI mask in bilateral OFC was defined anatomically as the bilateral superior, middle, inferior, and medial orbitofrontal gyri of the AAL ROI package. This OFC ROI was used in the univariate analysis and MVPA. To determine seed VOIs for the functional connectivity analyses, voxels fulfilling the height threshold (p < .001) were corrected for multiple comparisons within each of the bilateral OFC, the posterior parts of the right superior and middle temporal gyri, and the right fusiform gyrus, as defined by the AAL ROI package mentioned above. Significant activation was found in each region, and the peak voxels in left OFC (x = −41, y = 29, z = −6), right pSTS (x = 45, y = −66, z = 0), and right FFA (x = 41, y = −48, z = −19) were employed as the center voxels of the seed VOIs for the functional connectivity analysis.
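For illustration, a seed time series for such a VOI could be extracted as in the sketch below, which assumes nilearn and a hypothetical preprocessed image; the reported analysis defined the seed VOIs within the SPM12/gPPI framework.

```python
# Minimal sketch: extract the mean time series from a 6-mm-radius sphere
# centered on the left OFC peak reported above. File name is hypothetical.
from nilearn.maskers import NiftiSpheresMasker

seeds = [(-41, 29, -6)]   # left OFC peak (MNI) from the localizer task
masker = NiftiSpheresMasker(seeds=seeds, radius=6.0,
                            standardize=False, detrend=False)
seed_ts = masker.fit_transform("sub-01_task-encoding_bold_preproc.nii.gz")
print(seed_ts.shape)      # (n_volumes, 1)
```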

Univariate Analysis

In the present study, we focused on the statistical analysis of fMRI data only from the four runs of the encoding phase of the memory task. Retrieval-related activity will be analyzed and reported elsewhere. For one young adult and one older adult who showed head movements larger than 1.5 voxels during one of the encoding runs, fMRI data from the remaining three encoding runs were used in the univariate analysis, MVPA, and functional connectivity analysis.

In the univariate analysis of the memory task, using SPM12, functional images were analyzed at the individual level and then at the group level. In the individual-level (fixed-effect) analysis, we modeled trial-related activation by convolving onset vectors with a canonical HRF in the context of the GLM. The onset, when each face–name association was presented, was defined as an event with a duration of 0 sec. Regressors in this model included the three facial expressions (Happy, Neutral, and Angry) and one no-response (NR) condition, defined as encoding trials in which participants made no response in the encoding and/or retrieval phases or judged the facial expression erroneously in the encoding phase. Six head motion parameters were also included in this model as confounding factors. Activation reflecting the processing of each facial expression (Happy, Neutral, and Angry) was compared with baseline activation by one-sample t tests, yielding a t statistic in each voxel. Three contrast images, one for each facial expression, were created for each participant.

In the group-level (random-effect) analyses, the three contrast images (Happy, Neutral, and Angry) obtained from the individual-level analysis were analyzed with a two-way mixed ANOVA with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry), modeled by a flexible factorial design with a subject factor. Three types of analysis were performed. First, to identify regions associated with individual facial expressions, the main effect of facial expression (F test) was inclusively masked with pairs of t contrasts: (a) For happy expressions, the contrasts were Happy > Neutral and Happy > Angry (p < .05); (b) for angry expressions, they were Angry > Neutral and Angry > Happy (p < .05); and (c) for both happy and angry expressions (i.e., arousing expressions), the contrasts were Happy > Neutral and Angry > Neutral (p < .05). Second, to identify age-related decreases in activity, the main effect of age group (F test) was inclusively masked with the t contrast of Young > Old (p < .05). Finally, to investigate differential effects of facial expressions in young and older adults, the interaction of Age Group by Facial Expression (F test) was masked inclusively by two types of t contrast: (a) [(Happy > Neutral in Young) > (Happy > Neutral in Old)] and [(Happy > Angry in Young) > (Happy > Angry in Old)] (p < .05); and (b) [(Angry > Neutral in Young) > (Angry > Neutral in Old)] and [(Angry > Happy in Young) > (Angry > Happy in Old)] (p < .05).

In the foregoing analyses, the height threshold at the voxel level (p < .001) was corrected for multiple comparisons within the hypothesis-driven ROI (FWE, p < .05) with a minimum cluster size of two voxels. The ROI for the univariate analyses of the memory task was created by combining the regions identified in the functional localizer task with the bilateral OFC defined anatomically in the AAL ROI package (Tzourio-Mazoyer et al., 2002). Anatomical sites showing significant activation were primarily identified with the SPM Anatomy toolbox (Eickhoff et al., 2005, 2007; Eickhoff, Heim, Zilles, & Amunts, 2006) and MRIcro (www.cabi.gatech.edu/mricro/mricro).
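The inclusive-masking logic used throughout these analyses amounts to a voxelwise conjunction of thresholded maps, as in the following schematic sketch; the arrays are hypothetical, and the small-volume FWE correction itself is not modeled here.

```python
# Schematic numpy sketch of inclusive masking: a voxel survives only if it
# passes the F-test threshold and both directional t contrasts within the
# ROI. All p maps and the ROI mask are hypothetical stand-in arrays.
import numpy as np

rng = np.random.default_rng(1)
shape = (10, 10, 10)
p_F = rng.uniform(size=shape)        # p map for the main-effect/interaction F test
p_t1 = rng.uniform(size=shape)       # p map for, e.g., Happy > Neutral
p_t2 = rng.uniform(size=shape)       # p map for, e.g., Happy > Angry
roi = rng.uniform(size=shape) > 0.5  # hypothetical ROI mask

surviving = roi & (p_F < 0.001) & (p_t1 < 0.05) & (p_t2 < 0.05)
print(int(surviving.sum()), "voxels survive the masked contrast")
```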

MVPA

MVPA was performed with the Pattern Recognition for Neuroimaging Toolbox (PRoNTo; Schrouff et al., 2013) Version 2.1, implemented in MATLAB (www.mathworks.com). In this analysis, we investigated how facial expressions were represented by activity patterns in ROIs related to the processing of faces and facial expressions and how these neural representations differed between young and older adults. The MVPA examined activity patterns in the OFC, pSTS, FFA, and AMY ROIs. Given that the right pSTS and FFA are more dominant than their left counterparts in the processing of faces (Ishai, Schmidt, & Boesiger, 2005; Puce et al., 1998; Kanwisher, McDermott, & Chun, 1997), the ROIs in these regions were defined only in the right hemisphere. The OFC and AMY ROIs were defined bilaterally. Details of these ROIs are described above.

Before MVPA, activation in individual trials was estimated with a new GLM for each participant (Rissman, Gazzaley, & D'Esposito, 2004). In this model, activation in each trial was modeled by convolving a vector of onsets with a canonical HRF in the context of the GLM, in which the trial onset was set at the time each stimulus was presented, with a duration of 0 sec. Six head motion parameters were also included in this model as confounding factors. This model produced trial-by-trial beta estimates for the whole brain in each participant, and the beta images for individual trials were entered into the pattern classification model created by PRoNTo.
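A minimal sketch of this single-trial ("beta series") modeling approach, implemented here with nilearn rather than the original SPM-based pipeline, is shown below; onsets and file names are hypothetical.

```python
# Hedged sketch of single-trial beta estimation in the spirit of Rissman
# et al. (2004): each trial gets its own regressor in one GLM, yielding
# one beta image per trial. Onsets and file names are hypothetical.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

onsets = [12.0, 28.5, 45.0, 61.5]            # hypothetical trial onsets (sec)
events = pd.DataFrame({
    "onset": onsets,
    "duration": [0.0] * len(onsets),          # 0-sec events, as in the text
    "trial_type": [f"trial_{i:03d}" for i in range(len(onsets))],
})
motion = pd.read_csv("rp_run1.txt", sep=r"\s+", header=None,
                     names=[f"m{i}" for i in range(6)])   # motion confounds

model = FirstLevelModel(t_r=1.5, hrf_model="spm", smoothing_fwhm=None)
model = model.fit("run1_bold_preproc.nii.gz", events=events, confounds=motion)

# One beta (effect size) image per trial, ready for ROI pattern extraction.
beta_imgs = [model.compute_contrast(f"trial_{i:03d}", output_type="effect_size")
             for i in range(len(onsets))]
```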

In the MVPA with PRoNTo, a whole-brain mask image excluding voxels without beta values was first created for each participant, and the pattern classification was performed within this mask. Features were extracted in each ROI and were centered on the mean of the training data for each voxel. Three binary classifications (Happy vs. Neutral, Happy vs. Angry, and Neutral vs. Angry) were conducted with support vector machine classifiers with a linear kernel on all voxels of each ROI. Training and testing followed a leave-one-run-out cross-validation procedure with three runs for training and one run for testing. Mean balanced accuracy (BA) was computed for all ROIs in each participant, and the mean BA values for each ROI were tested with permutation tests. In the permutation tests, the pattern classification analyses were repeated 1000 times on data in which the labels of the two classes were randomly swapped. This procedure produced a null distribution simulating the BA scores expected if the two classes of facial expressions were not represented by activity patterns in the ROI, and it has been validated in other studies (Etzel, 2017; Haynes, 2015). The results were corrected with the false discovery rate (FDR; q < .05) to control for false positives (Benjamini & Hochberg, 1995). In addition, we confirmed the BA values with one-sample t tests (one-tailed) against chance level (50%), as has been conventionally done in functional neuroimaging studies.
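This classification scheme (linear SVM, leave-one-run-out cross-validation, balanced accuracy, permutation testing, and FDR correction) can be illustrated with scikit-learn and statsmodels as follows; the data arrays are hypothetical stand-ins, and the original analyses were run with PRoNTo in MATLAB.

```python
# Hedged sketch of the MVPA scheme: linear SVM on trialwise ROI patterns,
# leave-one-run-out CV, balanced accuracy, and a 1000-sample permutation
# test. X, y, and runs are hypothetical stand-ins for one two-class contrast.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, permutation_test_score
from statsmodels.stats.multitest import fdrcorrection

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 300))                  # trial-by-voxel beta patterns in one ROI
y = np.repeat(["Happy", "Angry"], 60)            # two-class labels
runs = np.tile(np.repeat([1, 2, 3, 4], 15), 2)   # encoding run of each trial

clf = SVC(kernel="linear")                       # linear-kernel SVM classifier
cv = LeaveOneGroupOut()                          # leave-one-run-out CV

score, perm_scores, p_value = permutation_test_score(
    clf, X, y, groups=runs, cv=cv,
    scoring="balanced_accuracy", n_permutations=1000, random_state=0)
print(f"balanced accuracy = {score:.3f}, permutation p = {p_value:.3f}")

# Benjamini-Hochberg FDR across several ROI/contrast p values
# (the two extra p values here are hypothetical).
rejected, p_adj = fdrcorrection([p_value, 0.03, 0.20], alpha=0.05)
print(rejected, p_adj)
```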

Functional Connectivity Analysis

To investigate how functional connectivity related to memory for facial expressions was affected by aging, we analyzed the functional connectivity of HC, which is related to associative memory (for review, see Diana, Yonelinas, & Ranganath, 2007; Eichenbaum, Yonelinas, & Ranganath, 2007; Davachi, 2006), with left OFC, right pSTS, and right FFA as seed regions in each age group. These seeds were determined from the results of the functional localizer task, in which regions related to the processing of faces and facial expressions were identified. In the functional connectivity analysis, we employed a generalized form of context-dependent psychophysiological interaction (gPPI; McLaren, Ries, Xu, & Johnson, 2012). In preparation for the gPPI analysis, the four encoding runs were collapsed into one run, and trial-related regressors for six conditions, defined by facial expression (Happy, Neutral, and Angry) and subsequent memory performance during retrieval (Hit and Miss), were remodeled by convolving onset vectors with a canonical HRF in the context of the GLM. The onset, when each stimulus was presented, was set as an event with a duration of 0 sec. The NR condition was also included in this model as a regressor. Six head motion parameters for each participant were included in this model as confounding factors.

Regions showing significant activation in the ROI analysis of the functional localizer task were defined as seed regions. The seeds in left OFC, right pSTS, and right FFA were set as VOI spheres with a 6-mm radius centered on the peak voxel from the functional localizer task. However, the left OFC seed VOI could not be defined for one young adult and one older adult. Thus, the functional connectivity analysis with the left OFC seed was conducted with fMRI data from 26 young and 27 older adults.

The functional connectivity analysis was performed with the gPPI toolbox (www.nitrc.org/projects/gppi), which created a model at the individual level. The model included a design matrix with three types of columns: (1) condition-related regressors formed by convolving vectors of condition-related onsets with a canonical HRF, (2) the time-series BOLD signal extracted from the seed region, and (3) PPI regressors defined as the interaction between (1) and (2). In the present study, the gPPI toolbox produced a model including the PPI and condition-related regressors of the six experimental conditions (Happy-Hit, Happy-Miss, Neutral-Hit, Neutral-Miss, Angry-Hit, and Angry-Miss) and the NR condition, as well as the BOLD signal in each seed. In addition, six regressors related to head motion were included in this model as confounding factors. The model parameters were estimated for each participant. Linear contrasts were computed in the model for each seed region, and regions showing a significant effect in the PPI regressor contrasts were considered to be functionally connected with each seed region at the statistical threshold. Contrast images of the PPI regressors reflecting functional connectivity during successful and unsuccessful encoding for the three facial expressions (Happy-Hit, Happy-Miss, Neutral-Hit, Neutral-Miss, Angry-Hit, and Angry-Miss) were obtained for each participant. In addition, PPI regressor contrasts were computed by comparing successful with unsuccessful encoding for each facial expression (Happy-Hit > Happy-Miss, Neutral-Hit > Neutral-Miss, and Angry-Hit > Angry-Miss) and by comparing facial expressions within the Hit trials (Happy-Hit > Neutral-Hit, Happy-Hit > Angry-Hit, Neutral-Hit > Happy-Hit, Neutral-Hit > Angry-Hit, Angry-Hit > Happy-Hit, and Angry-Hit > Neutral-Hit). These contrast images were used in the group-level analysis.
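The following simplified sketch illustrates how a gPPI-style design matrix is assembled from these three types of columns. Note that the actual gPPI toolbox forms the interaction at the deconvolved neural level before reconvolving with the HRF, whereas this illustration approximates the interaction at the BOLD level; all inputs (onsets, seed signal, condition subset) are hypothetical.

```python
# Simplified, hedged sketch of a gPPI-style design matrix: one psychological
# regressor per condition, the seed BOLD time series, and one PPI regressor
# per condition. The interaction is approximated at the BOLD level here;
# the gPPI toolbox works at the deconvolved neural level.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import compute_regressor

tr, n_scans = 1.5, 400
frame_times = np.arange(n_scans) * tr
seed_ts = np.random.randn(n_scans)       # stand-in seed BOLD signal (e.g., left OFC VOI)

# Two of the six conditions for brevity; the full model has all six plus NR.
conditions = {"Happy-Hit": [20.0, 80.0], "Happy-Miss": [50.0, 110.0]}

columns = {"seed": seed_ts}
for name, onsets in conditions.items():
    # Rows: onsets, durations (0 sec), amplitudes.
    exp_cond = np.vstack([onsets, np.zeros(len(onsets)), np.ones(len(onsets))])
    psych, _ = compute_regressor(exp_cond, "spm", frame_times)
    columns[name] = psych.ravel()                      # condition regressor
    columns[f"PPI_{name}"] = psych.ravel() * seed_ts   # BOLD-level interaction

design = pd.DataFrame(columns, index=frame_times)
print(design.head())
```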

In the group-level analysis, we identified functional connectivity patterns specific to successful encoding of each facial expression in each age group. For functional connectivity specific to the Happy-Hit condition, a one-sample t test on the Happy-Hit contrasts was inclusively masked by three contrasts: Happy-Hit > Happy-Miss, Happy-Hit > Neutral-Hit, and Happy-Hit > Angry-Hit (p < .05). Functional connectivity specific to the Angry-Hit condition was analyzed with a one-sample t test on the Angry-Hit contrasts, masked inclusively by the contrasts Angry-Hit > Angry-Miss, Angry-Hit > Neutral-Hit, and Angry-Hit > Happy-Hit (p < .05). The same procedure was employed to find significant functional connectivity specific to the Neutral-Hit condition. In these analyses, the height threshold at the voxel level (p < .001) was corrected for multiple comparisons within the HC ROI (Amunts et al., 2005) (FWE, p < .05) with a minimum cluster size of two voxels.

Behavioral Results

Table 3 summarizes young and older adults' behavioral data during (1) the encoding phase (RTs), (2) the retrieval phase (accuracy and RTs), and (3) the arousal/valence rating phase.

Table 3.

Behavioral Results

                                       Young (SD)                                              Old (SD)
                                       Happy             Neutral           Angry               Happy             Neutral           Angry
Encoding
Response time (msec)
  Subsequent hit                       1601.47 (311.69)  1618.97 (297.08)  1694.30 (348.17)    1603.63 (328.74)  1643.79 (394.56)  1751.27 (342.78)
  Subsequent miss                      1542.95 (267.28)  1676.42 (331.24)  1783.35 (325.37)    1653.63 (356.20)  1634.94 (368.10)  1806.16 (335.80)
Retrieval
Proportion of recall accuracy for facial expressions
  Hit/hit for names                    0.62 (0.16)       0.64 (0.20)       0.62 (0.18)         0.45 (0.17)       0.46 (0.12)       0.38 (0.14)
Proportion of recognition accuracy for names
  Hit for names                        0.78 (0.17)       0.75 (0.16)       0.74 (0.16)         0.90 (0.12)       0.89 (0.10)       0.89 (0.11)
  Miss for names                       0.22 (0.17)       0.25 (0.16)       0.26 (0.16)         0.10 (0.12)       0.11 (0.09)       0.11 (0.11)
  FA for names                         0.18 (0.17)                                             0.56 (0.27)
  CR for names                         0.82 (0.17)                                             0.44 (0.27)
Number of trialsa
  Hit                                  18.96 (7.35)      17.74 (6.81)      17.19 (7.88)        15.29 (6.23)      15.68 (4.06)      12.07 (5.22)
  Miss                                 19.56 (6.85)      19.52 (7.26)      19.19 (6.57)        22.46 (6.48)      22.64 (3.80)      23.54 (5.90)
  FA for names                         6.59 (5.92)                                             21.07 (9.97)
  CR for names                         31.59 (7.57)                                            16.75 (10.65)
Response time (msec)
  Hit                                  1868.34 (290.11)  2038.44 (342.97)  1948.71 (255.42)    2098.08 (364.88)  2280.68 (342.20)  2222.67 (428.80)
  Miss                                 2252.41 (322.30)  2204.01 (327.05)  2244.64 (359.06)    2398.73 (461.66)  2326.42 (467.67)  2383.47 (418.55)
  FA for names                         2534.43 (405.88)                                        2428.22 (448.14)
  CR for names                         1773.10 (352.89)                                        2151.80 (425.32)
Rating scores
  Emotional arousal                    6.15 (0.97)       1.75 (0.56)       6.07 (0.96)         6.42 (0.95)       2.92 (1.75)       6.46 (1.21)
  Emotional valence                    7.28 (0.57)       4.94 (0.18)       2.83 (0.52)         7.52 (0.51)       4.83 (0.40)       2.51 (0.57)

SD = standard deviation; FA = false alarm; CR = correct rejection.

a The Hit trial was defined as correct remembering of the facial expression associated with a learned name. The Miss trial included trials with correct recognition of the name but incorrect remembering of the associated facial expression or an “Unknown” response, as well as trials with incorrect recognition of the name (a “New” response to a learned name).

Encoding

Encoding RTs.

These RTs correspond to the task of judging the facial expressions of happy, neutral, or angry faces. Encoding RTs were analyzed with a three-way mixed ANOVA with factors of Age Group (Young and Old), Facial Expression (Happy, Neutral, and Angry), and Subsequent Memory Performance (subsequent Hit and subsequent Miss). Post hoc tests in all analyses used the Bonferroni method. The ANOVA on RTs showed significant main effects of Facial Expression, F(2, 106) = 22.98, p < .001, ηp2 = .30, and Subsequent Memory Performance, F(1, 53) = 6.23, p = .016, ηp2 = .11, as well as reliable interactions between Facial Expression and Subsequent Memory Performance, F(2, 106) = 3.52, p = .033, ηp2 = .06, and among Age Group, Facial Expression, and Subsequent Memory Performance, F(2, 106) = 5.14, p = .007, ηp2 = .09. The remaining main effect and interactions were not significant. Post hoc tests for young adults showed that RTs for happy facial expressions were significantly faster than those for angry facial expressions in the subsequent Miss trials (p < .001), whereas RTs in the subsequent Hit trials did not differ significantly among facial expressions. Post hoc tests for older adults demonstrated that RTs for happy facial expressions were significantly faster than those for angry facial expressions in the subsequent Hit trials (p = .017) and that RTs for happy (p = .010) and neutral (p = .002) facial expressions were significantly faster than those for angry facial expressions in the subsequent Miss trials. No significant difference in RTs between the subsequent Hit and Miss trials was found for any facial expression.

Retrieval

Accuracy.

Recall accuracy for facial expressions was defined as the proportion of Hit trials among the Hit trials for names and was analyzed with a two-way mixed ANOVA with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry). The ANOVA demonstrated a significant main effect of Age Group, F(1, 53) = 31.43, p < .001, ηp2 = .37, but neither the main effect of Facial Expression, F(2, 106) = 2.73, p = .070, ηp2 = .05, nor the interaction between Age Group and Facial Expression, F(2, 106) = 1.14, p = .323, ηp2 = .02, was significant. The recall accuracies are illustrated in Figure 2, which shows a clear facial expression memory deficit in older adults compared with young adults.

Figure 2.

Behavioral results of retrieval performance and response time in the Hit trials (correct retrieval of facial expressions associated with learned names) during retrieval. (A) Recall accuracy for facial expressions (Hit trials / Hit trials for names). (B) Response time in the Hit trials during retrieval. Error bars represent standard errors.


Recognition accuracy for names, defined as the proportion of Hit trials for names learned in the encoding run among all trials for learned names, was analyzed with a two-way mixed ANOVA with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry). In this ANOVA, the main effect of Age Group was significant, F(1, 53) = 15.39, p < .001, ηp2 = .23, but neither the main effect of Facial Expression, F(2, 106) = 2.05, p = .134, ηp2 = .04, nor the interaction between Age Group and Facial Expression, F(2, 106) = 0.72, p = .490, ηp2 = .00, was significant.
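For illustration, a two-way mixed ANOVA of this form can be run with the pingouin package, as sketched below; the paper does not state which software was used for the behavioral ANOVAs, and the data here are simulated stand-ins.

```python
# Hedged sketch of a two-way mixed ANOVA (between: age group; within:
# facial expression) on recall accuracy, using pingouin on simulated data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
rows = []
for group, n in [("Young", 27), ("Old", 28)]:
    for s in range(n):
        for expr, mu in [("Happy", 0.62), ("Neutral", 0.64), ("Angry", 0.62)]:
            # Simulate a lower mean accuracy for the older group.
            acc = np.clip(rng.normal(mu - (0.18 if group == "Old" else 0.0), 0.15), 0, 1)
            rows.append({"subject": f"{group}_{s}", "group": group,
                         "expression": expr, "accuracy": acc})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="accuracy", within="expression",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc", "np2"]])
```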

Retrieval RTs.

Retrieval RTs were analyzed with a three-way mixed ANOVA with factors of Age Group (Young and Old), Facial Expression (Happy, Neutral, and Angry), and Memory Performance (Hit and Miss). The ANOVA showed significant main effects of Age Group, F(1, 53) = 4.76, p = .034, ηp2 = .08; Facial Expression, F(2, 106) = 4.95, p = .009, ηp2 = .09; and Memory Performance, F(1, 53) = 50.31, p < .001, ηp2 = .49, as well as a significant interaction between Facial Expression and Memory Performance, F(2, 106) = 12.32, p < .001, ηp2 = .19. The other interactions were not significant. Post hoc tests showed that happy facial expressions were remembered faster than neutral (p < .001) and angry (p = .015) facial expressions only in the Hit trials. The post hoc tests also showed that happy and angry facial expressions were remembered faster in the Hit trials than in the Miss trials (p < .001), whereas no significant RT difference between the Hit and Miss trials was identified for neutral facial expressions. These RT results indicate that the retrieval advantage for happy facial expressions was common to young and older adults (see Figure 2).

Ratings

Arousal and valence rating scores were analyzed with separate two-way mixed ANOVAs with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry). The ANOVA on arousal ratings revealed significant main effects of Age Group, F(1, 53) = 7.79, p = .007, ηp2 = .13, and Facial Expression, F(2, 106) = 306.00, p < .001, ηp2 = .85, as well as a reliable interaction between them, F(2, 106) = 3.61, p = .031, ηp2 = .06. Post hoc tests showed that happy and angry faces were rated as more arousing than neutral faces (p < .001 in both contrasts) and that neutral faces were rated as more arousing by older adults than by young adults (p = .003). The ANOVA on valence ratings yielded a nonsignificant main effect of Age Group, F(1, 53) = 1.38, p = .246, ηp2 = .03, a reliable main effect of Facial Expression, F(2, 106) = 1078.49, p < .001, ηp2 = .95, and a significant interaction between Age Group and Facial Expression, F(2, 106) = 3.84, p = .025, ηp2 = .07. Post hoc tests showed that happy faces were rated as more positive than neutral and angry faces and that neutral faces were rated as more positive than angry faces in both age groups (p < .001 in all contrasts). In the post hoc tests, no significant difference between young and older adults was found for any facial expression.

fMRI Results

Univariate Analysis

In the univariate analysis, using an ANOVA, we found significantly greater activation in pSTS, FFA, and AMY during the processing of angry facial expressions than during the processing of the other facial expressions, and activity in AMY was also significantly greater for both angry and happy facial expressions than for neutral expressions. However, activity in these regions showed neither a main effect of age group nor an interaction between age group and facial expression.

Encoding-related activation in the memory task was analyzed with a two-way mixed ANOVA with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry). Three types of analysis were performed (see Methods section), and their results are displayed in Figure 3 and Table 4. In the first analysis, which focused on expression-specific effects, the right pSTS [two clusters: F(2, 106) = 19.02, p < .001, ηp2 = .26, and F(2, 106) = 16.49, p < .001, ηp2 = .24], right FFA, F(2, 106) = 14.02, p < .001, ηp2 = .21, and bilateral AMY [left AMY: F(2, 106) = 16.23, p < .001, ηp2 = .23; right AMY: F(2, 106) = 16.02, p < .001, ηp2 = .23] showed activation that was significantly greater for angry expressions than for both happy and neutral expressions. In addition, the right AMY displayed greater activity for both happy and angry facial expressions than for neutral facial expressions, consistent with the arousal rating scores, F(2, 106) = 16.02, p < .001, ηp2 = .23. These results were corrected for multiple comparisons within the hypothesis-driven ROI (FWE, p < .05). No significant activation was identified for happy facial expressions compared with the other facial expressions. In the second analysis, which focused on age-related differences in activation, and in the third analysis, which focused on the interaction between age group and facial expression, no significant activation was found in any region.

Figure 3.

Results of univariate analysis. (A) Regions showing significantly greater activation for angry facial expressions than for happy and neutral facial expressions. (B) Regions showing significantly greater activation for happy and angry facial expressions than for neutral expressions. The parameter estimates in the graphs were extracted from peak voxels in each region. Error bars represent standard errors. Hap = happy facial expression; Neu = neutral facial expression; Ang = angry facial expression.

Table 4.

Regions Showing Significant Activation

Regions                        L/R   BA     MNI coordinates (x, y, z)   Z value   k
Main effect of facial expression (masked inclusively by Angry > Happy & Angry > Neutral)
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
Middle temporal gyrus (pSTS)   R     21/37  52, −50, –                  5.22      39
Middle temporal gyrus (pSTS)   R     21/22  52, −37, –                  4.86      23
Fusiform gyrus (FFA)           R     37     43, −48, −15                4.47      –
AMY                            L     –      −21, −6, −17                4.82      –
AMY                            R     –      23, −6, −17                 4.79      –
Main effect of facial expression (masked inclusively by Happy > Angry & Happy > Neutral)
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
No significant activation was identified
Main effect of facial expression (masked inclusively by Angry > Neutral & Happy > Neutral)
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
AMY                            R     –      23, −6, −17                 4.79      –
Main effect of age group (masked inclusively by Young > Old)
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
No significant activation was identified
Interaction between facial expression and group
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
No significant activation was identified

BA = Brodmann area; k = cluster size; L = left; R = right.

MVPA

The MVPA demonstrated that activity patterns in pSTS discriminated between facial expressions significantly above chance in both age groups, whereas activity patterns in FFA and OFC yielded significant classification accuracies for discriminating between facial expressions only in young adults. In AMY, classification accuracies for discriminating between facial expressions were not significant in either young or older adults.
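For reference, the balanced accuracy (BA) used to quantify classification below is, under its standard two-class definition, the average of the two class-specific hit rates, which makes 50% the chance level:

```latex
\mathrm{BA} \;=\; \frac{1}{2}\left(\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}} \;+\; \frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}}\right)
```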

The MVPA results are displayed in Figure 4 and Table 5. The accuracy of MVPA in classifying facial expressions during the encoding phase was analyzed separately in four ROIs: OFC, right pSTS, right FFA, and AMY. All significant results were corrected by the FDR (q < .05) to control false positives (Benjamini & Hochberg, 1995). Balanced accuracy scores in bilateral OFC showed that activation patterns in this region could successfully classify happy versus angry faces in both young and older adults (Young: p = .014; Old: p < .001). In right pSTS, activation patterns successfully distinguished angry versus happy faces (Young: p < .001; Old: p < .001) and angry versus neutral faces (Young: p = .004; Old: p < .001) in both age groups. In contrast, only young adults displayed activation patterns that could accurately classify happy versus neutral faces in OFC (p < .001), and angry versus happy (p = .016) or neutral faces (p = .013) in right FFA. Finally, neither age group displayed significant classification accuracy for facial expressions in AMY.
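As an illustration of this procedure, the sketch below performs one pairwise classification with cross-validated balanced accuracy and a label-permutation test using scikit-learn; it is not the PRoNTo pipeline used in the study, and the patterns X and labels y are simulated placeholders.

```python
# Minimal sketch of pairwise expression classification from ROI voxel
# patterns, with balanced accuracy scored under cross-validation and a
# label-permutation test (cf. Etzel, 2017). Not the study's PRoNTo
# pipeline; X and y below are simulated placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, permutation_test_score

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 200))       # 60 trials x 200 ROI voxels (simulated)
y = np.repeat(["happy", "angry"], 30)    # two-class labels (simulated)

clf = SVC(kernel="linear")               # linear classifier on voxel patterns
cv = StratifiedKFold(n_splits=5)

# The observed cross-validated score is compared against a null
# distribution built by refitting on label-shuffled data.
score, null_scores, p_value = permutation_test_score(
    clf, X, y, cv=cv, scoring="balanced_accuracy", n_permutations=1000
)
print(f"balanced accuracy = {score:.3f}, permutation p = {p_value:.4f}")
```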

Figure 4.

(A) ROI image used in MVPA (colored blue). (B) Multivariate classification accuracy (balanced accuracy) for facial expressions during the encoding phase in each ROI. Error bars represent standard errors, and the dotted line represents chance-level classification accuracy (50%). * = significant results by permutation tests (FDR, q < .05); Hap = happy facial expression; Neu = neutral facial expression; Ang = angry facial expression.

Table 5.

MVPA Results of Balanced Accuracies and p Values in Each ROI

ROI                  Balanced accuracy (SD)       p value (permutation test)   p value (one-sample t test)
                     Young        Old             Young      Old               Young      Old
OFC
Happy vs. Neutral    0.55 (0.05)  0.52 (0.07)     < .001a    .037              < .001     .040
Happy vs. Angry      0.53 (0.08)  0.54 (0.09)     .014a      < .001a           .029       .006
Neutral vs. Angry    0.52 (0.08)  0.52 (0.05)     .108       .044              .176       .019
Right pSTS
Happy vs. Neutral    0.52 (0.06)  0.51 (0.05)     .065       .163              .039       .084
Happy vs. Angry      0.56 (0.07)  0.55 (0.06)     < .001a    < .001a           < .001     < .001
Neutral vs. Angry    0.54 (0.06)  0.57 (0.05)     .004a      < .001a           .001       < .001
Right FFA
Happy vs. Neutral    0.50 (0.06)  0.50 (0.08)     .604       .396              .625       .431
Happy vs. Angry      0.53 (0.05)  0.50 (0.07)     .016a      .416              .004       .454
Neutral vs. Angry    0.53 (0.06)  0.51 (0.07)     .013a      .189              .007       .214
AMY
Happy vs. Neutral    0.51 (0.06)  0.50 (0.06)     .182       .395              .169       .398
Happy vs. Angry      0.50 (0.05)  0.52 (0.07)     .464       .080              .473       .068
Neutral vs. Angry    0.49 (0.06)  0.50 (0.06)     .757       .430              .797       .454

SD = standard deviation.

a Significant results after FDR correction applied to the permutation test results (q < .05). p values are shown before FDR correction (uncorrected).
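A minimal sketch of this Benjamini–Hochberg FDR step, assuming the permutation p values for a family of tests are collected into one vector (the values below are placeholders, not those in Table 5):

```python
# Minimal sketch of Benjamini-Hochberg FDR control (q < .05) over a set
# of permutation-test p values; the input values are placeholders.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.014, 0.108, 0.065, 0.001, 0.004]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(reject)      # True where a test survives FDR correction
print(p_adjusted)  # BH-adjusted p values
```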

Functional Connectivity Analysis

In the functional connectivity analysis, we found that functional connectivity reflecting subsequent recollection of facial expressions was significant between HC and OFC for happy facial expressions in both age groups, between HC and FFA for happy and neutral facial expressions only in young adults, and between HC and pSTS for happy facial expressions only in older adults.

Previous studies have shown that successful memory encoding of faces is associated with increased functional connectivity between HC, a critical region for the recollection of episodic memories (for review, see Diana et al., 2007; Eichenbaum et al., 2007; Davachi, 2006), and cortical regions involved in the processing of faces and their expressions (Tsukiura & Cabeza, 2008, 2011; Dennis et al., 2008). Thus, for each age group and each facial expression, we assessed functional connectivity (gPPI) predicting subsequent recollection of facial expressions associated with names between HC and three cortical ROIs: left OFC, right pSTS, and right FFA (seeds identified in the functional localizer task). Results of the functional connectivity analysis are illustrated in Figure 5, including Z values and coordinates of the regions showing the effects. In left OFC, significant functional connectivity with HC was found for happy facial expressions in both age groups [Young: t(25) = 5.08, p < .001, d = 1.00; Old: t(26) = 4.92, p < .001, d = 0.95]. In right pSTS, reliable functional connectivity with HC was observed for happy facial expressions only in the old group [left HC: t(27) = 4.77, p < .001, d = 0.90; right HC: t(27) = 5.23, p < .001, d = 0.99]. Finally, in FFA, significant functional connectivity with HC was found for both happy, t(26) = 6.21, p < .001, d = 1.20, and neutral facial expressions, t(26) = 8.00, p < .001, d = 1.54, only in young adults. These significant functional connectivity results were corrected for multiple comparisons within the HC ROI (FWE, p < .05).
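Conceptually, a PPI-type analysis asks whether seed–target coupling changes with the psychological context. The toy sketch below illustrates only the construction and testing of the interaction regressor; it is not the gPPI toolbox implementation used here (which additionally models all task conditions and handles hemodynamic deconvolution; McLaren et al., 2012), and all variable names are hypothetical.

```python
# Toy sketch of a PPI interaction regressor: does HC-target coupling
# increase when a given condition (e.g., subsequently recollected happy
# faces) is on? Not the gPPI toolbox implementation; the real analysis
# also deconvolves the HRF from the seed signal. All names hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 300
seed_ts = rng.standard_normal(n_scans)       # physiological: HC time course
psych = np.zeros(n_scans)
psych[np.arange(10, n_scans, 30)] = 1.0      # psychological: condition onsets

ppi = seed_ts * (psych - psych.mean())       # mean-centered interaction term

# GLM at one target voxel: the PPI effect is estimated while controlling
# for the main effects of the seed signal and the task regressor.
X = np.column_stack([ppi, seed_ts, psych, np.ones(n_scans)])
target_ts = rng.standard_normal(n_scans)     # simulated target voxel signal
beta, *_ = np.linalg.lstsq(X, target_ts, rcond=None)
print(f"PPI beta = {beta[0]:.3f}")
```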

Figure 5.

Regions showing significant functional connectivity during successful encoding. (A) Functional connectivity between the HC and left OFC for happy facial expressions in young adults (Z value = 4.17) and older adults (Z value = 4.10). (B) Functional connectivity between the HC and right pSTS for happy facial expressions in older adults (left: Z value = 4.02; right: Z value = 4.31). (C) Functional connectivity between the HC and right FFA for happy (Z value = 4.82) and neutral facial expressions (Z value = 5.63) in young adults.

Discussion

In terms of age effects, two sets of findings emerged from the present study. First, during the processing of facial expressions, univariate activity and MVPA discrimination in pSTS and AMY were similar in young and older adults, whereas MVPA activity patterns in FFA and OFC discriminated facial expressions less accurately in older than young adults. These results suggest that neural representations of facial expressions in FFA and OFC are affected by age-related dedifferentiation and that activity patterns in OFC reflect the age-related positivity effect. Second, functional connectivity predicting subsequent face recollection was significant between HC and OFC for happy facial expressions in both age groups, between HC and FFA for happy and neutral facial expressions only in young adults, and between HC and pSTS for happy facial expressions only in older adults. Some of these results suggest compensatory mechanisms and positivity effects in older adults. These two sets of findings are discussed in separate sections below.

Univariate and MVPA Results during the Perception of Emotional Facial Expressions

The first set of findings was that univariate activity and multivariate activity patterns in pSTS and AMY during the processing of facial expressions were similar in young and older adults, whereas in FFA and OFC, multivariate activity patterns discriminated facial expressions less accurately in older than young adults. These findings suggest that the contributions of pSTS and AMY to the processing of facial expressions are relatively preserved in older adults, whereas the representations of facial expressions in FFA and OFC are affected by dedifferentiation in older adults.

In the univariate analyses, we found that for both young and older adults, AMY activity was enhanced by both happy and angry facial expressions, and that pSTS and FFA activity was increased by angry facial expressions. These findings are consistent with previous cognitive neuroscience studies. For example, AMY shows significantly greater activity for highly arousing faces than for neutral faces (Winston, O'Doherty, & Dolan, 2003; Yang et al., 2002; Breiter et al., 1996), and AMY lesions reliably impair the perception of negative facial expressions (Sato et al., 2002; Adolphs, Tranel, Damasio, & Damasio, 1994). The involvement of pSTS and FFA in the processing of emotional facial expressions has also been identified in prior studies (Sormaz, Watson, Smith, Young, & Andrews, 2016; Zhang et al., 2016; Wegrzyn et al., 2015; Harry et al., 2013; Said et al., 2010). The absence of age effects in the univariate analysis is consistent with evidence that the ability to discriminate facial expressions (Murphy, Millgate, Geary, Catmur, & Bird, 2019; D'Argembeau & van der Linden, 2004), the utilization of visual cues to discriminate facial expressions (Smith et al., 2018), and the neural mechanisms of processing facial expressions (Goncalves et al., 2018) are relatively preserved in older adults.

MVPA results showed that activity patterns in FFA successfully classified facial expressions in young but not in older adults. The significant MVPA classification in young adults fits with abundant evidence of the importance of FFA for processing facial expressions, including functional neuroimaging (Zhao et al., 2020; Wegrzyn et al., 2015; Skerry & Saxe, 2014; Harry et al., 2013; Fox, Moon, Iaria, & Barton, 2009) and prosopagnosia (Bentin, Degutis, D'Esposito, & Robertson, 2007) findings. The failure to distinguish facial expressions from FFA activity patterns in older adults agrees with evidence that FFA representations of facial identity display dedifferentiation in older adults (Lee et al., 2011; Goh et al., 2010). Functional neuroimaging results suggest that FFA represents the morphological differences conveyed by facial identity (for review, see Bernstein & Yovel, 2015). Thus, one possibility is that impaired representations of facial expressions in older adults stem from a deficit in processing facial identity, which is a well-known deficit in older adults (Chaby, Narme, & George, 2011; Habak, Wilkinson, & Wilson, 2008; Boutet & Faubert, 2006). In the present study, we assessed the discrimination of facial expressions but not the discrimination of facial identities. A future study that examines both types of discrimination in the same participants could test the hypothesis that deficits in the two abilities interact in older adults.

In OFC, activity patterns in older adults distinguished between happy and angry facial expressions but not between happy and neutral facial expressions, whereas activity patterns in young adults allowed both distinctions. The finding that OFC representations in older adults could not distinguish happy facial expressions from emotionally ambiguous neutral facial expressions is consistent with the positivity effect, which has been observed in a previous study as emotionally ambiguous stimuli being rated more positively by older than young adults (Zebrowitz et al., 2017). It is a topic of debate whether the positivity effect in older adults reflects motivational differences in allocating attention to information or more basic deficits in the processing mechanisms of emotions (for review, see Ruffman, Henry, Livingstone, & Phillips, 2008; Mather & Carstensen, 2005). In terms of the latter perspective, the present finding of age-related dedifferentiation in OFC is consistent with evidence that this region is anatomically compromised by aging (Shen et al., 2013; Salat et al., 2009; Lamar & Resnick, 2004; Tisserand et al., 2002). This OFC deficit might lead to a positive shift in older adults. This alternative is consistent with the finding that faces with negative facial expressions were rated as more approachable by patients with OFC lesions than by controls (Willis, Palermo, Burke, McGrillen, & Miller, 2010).

Functional Connectivity Predicting Subsequent Recollection of Facial Expressions

The second set of findings was that encoding-related functional connectivity was found between HC and OFC for happy faces in both age groups, between HC and pSTS for happy faces only in older adults, and between HC and FFA for happy and neutral faces only in young adults. Before discussing these findings, it is worth mentioning that angry facial expressions did not modulate encoding-related functional connectivity between HC and cortical regions, consistent with the lack of enhanced memory for angry faces. These results could be related to the use of a cued-recall test, in which faces are not displayed during retrieval, unlike the recognition tests used in some studies. Consistent with this idea, several studies in which emotional faces were presented during retrieval found that angry expressions enhanced face memory performance (Keightley, Chiew, Anderson, & Grady, 2011; Sergerie, Lepage, & Armony, 2005; Foa, Gilboa-Schechtman, Amir, & Freshman, 2000), whereas studies in which emotional faces were not presented during retrieval (e.g., cued-recall or source memory tests) did not find an anger-related memory enhancement (Bowen & Kensinger, 2017; D'Argembeau & van der Linden, 2007; Shimamura, Ross, & Bennett, 2006; Fenker, Schott, Richardson-Klavehn, Heinze, & Duzel, 2005).

The finding that HC–OFC interactions contributed to the encoding of happy faces is consistent with our previous results (Tsukiura & Cabeza, 2008). As noted before, happy facial expressions have rewarding value in a social context (Hayward, Pereira, Otto, & Ristic, 2018; Yang & Urminsky, 2018), and OFC is involved in the processing of rewards (for review, see O'Doherty, 2004). Moreover, studies on the enhancement of episodic memory by monetary or social rewards have linked this enhancement to functional connectivity between HC and OFC (Sugimoto et al., 2021; Frank, Preston, & Zeithamova, 2019; Tsukiura & Cabeza, 2008, 2011; Shigemune et al., 2010). Thus, the present study replicates and extends this literature by showing that HC–OFC interactions contribute to memory for happy facial expressions in both young and older adults.

In contrast with HC–OFC interactions, functional connectivity between HC and FFA contributed to memory for happy and neutral facial expressions in young but not older adults. The contribution of HC–FFA interactions to face and face-related associative memories in young adults has been reported in several studies (Liu, Grady, & Moscovitch, 2018; Summerfield et al., 2006; Sperling et al., 2003). Age-related reductions in functional connectivity between HC and FFA during encoding are consistent with a study on memory for face–scene associations (Dennis et al., 2008). It is possible that an age-related decrease in HC–FFA interactions contributes to the impairment of face-related memories in older adults.

Finally, encoding-related functional connectivity of HC with pSTS was significant for happy facial expressions only in older adults. This additional contribution of pSTS could reflect a compensatory mechanism in older adults. Several previous studies have demonstrated that higher levels of neural activity or functional connectivity play a compensatory role in older adults (for review, see Cabeza et al., 2018; Sala-Llonch, Bartres-Faz, & Junqué, 2015; Cabeza, 2002). For example, compensatory functional connectivity between the medial temporal lobe and PFC was recruited in older adults during both encoding and retrieval of episodic memories (Dennis et al., 2008; Daselaar, Fleck, Dobbins, Madden, & Cabeza, 2006). In another fMRI study, interactions between HC and ventromedial PFC during the successful encoding of emotionally positive pictures were significantly stronger in older adults than in young adults (Addis et al., 2010). Thus, the encoding-related HC–pSTS functional connectivity for happy facial expressions in older adults could reflect an age-dependent compensatory mechanism for positive socioemotional values, an effect related to the positivity effect.

Conclusion

In the present event-related fMRI study, we investigated age-related differences in neural representations and functional connectivity during the perception and subsequent memory of emotional facial expressions associated with names. First, during the perception of emotional facial expressions, univariate activity and multivariate activity patterns in pSTS and AMY were similar in young and older adults, whereas multivariate activity patterns in FFA and OFC classified facial expressions less accurately in older adults. The latter results suggest that neural representations of facial expressions in FFA and OFC are affected by age-related dedifferentiation, and that activity patterns in OFC reflect the positivity effect, a tendency in older adults to interpret neutral facial expressions as emotionally positive. Second, recollection-predicting functional connectivity was found between HC and OFC for happy facial expressions in both age groups, between HC and FFA for happy and neutral facial expressions only in young adults, and between HC and pSTS for happy facial expressions only in older adults. These findings could reflect compensatory mechanisms and positivity effects in older adults. Taken together, the present results clarify the effects of aging on neural representations and mechanisms during the perception and encoding of facial expressions.

We would like to thank Drs. Nobuhito Abe, Kohei Asano, and Ryusuke Nakai, and Mses. Aiko Murai, Maki Terao, and Saeko Iwata for their technical assistance in the MRI scanning and data analysis. This work was supported by JSPS KAKENHI grant numbers JP18H04193 (T. T.) and JP20H05802 (T. T.). The research experiments were conducted using an MRI scanner and related facilities at Kokoro Research Center, Kyoto University. The authors declare no competing financial interests.

Reprint requests should be sent to Takashi Tsukiura, Department of Cognitive and Behavioral Sciences, Graduate School of Human and Environmental Studies, Kyoto University, Yoshida-Nihonmatsu-Cho, Sakyo-ku, Kyoto 606-8501, Japan, or via e-mail: [email protected].

Reina Izumika: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Software; Validation; Visualization; Writing–Original draft. Roberto Cabeza: Supervision; Visualization; Writing–Review & editing. Takashi Tsukiura: Conceptualization; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Software; Supervision; Validation; Visualization; Writing–Review & editing.

Takashi Tsukiura, Japan Society for the Promotion of Science (https://dx.doi.org/10.13039/501100001691), grant number: JP18H04193. Takashi Tsukiura, Japan Society for the Promotion of Science (https://dx.doi.org/10.13039/501100001691), grant number: JP20H05802.

Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.

References

Addis, D. R., Leclerc, C. M., Muscatell, K. A., & Kensinger, E. A. (2010). There are age-related changes in neural connectivity during the encoding of positive, but not negative, information. Cortex, 46, 425–433.
Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669–672.
Amunts, K., Kedo, O., Kindler, M., Pieperhoff, P., Mohlberg, H., Shah, N. J., et al. (2005). Cytoarchitectonic mapping of the human amygdala, hippocampal region and entorhinal cortex: Intersubject variability and probability maps. Anatomy and Embryology, 210, 343–352.
Baltes, P. B., & Lindenberger, U. (1997). Emergence of a powerful connection between sensory and cognitive functions across the adult life span: A new window to the study of cognitive aging? Psychology and Aging, 12, 12–21.
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57, 289–300.
Bentin, S., Degutis, J. M., D'Esposito, M., & Robertson, L. C. (2007). Too many trees to see the forest: Performance, event-related potential, and functional magnetic resonance imaging manifestations of integrative congenital prosopagnosia. Journal of Cognitive Neuroscience, 19, 132–146.
Bernstein, M., & Yovel, G. (2015). Two neural pathways of face processing: A critical evaluation of current models. Neuroscience and Biobehavioral Reviews, 55, 536–546.
Binney, R. J., Embleton, K. V., Jefferies, E., Parker, G. J., & Ralph, M. A. (2010). The ventral and inferolateral aspects of the anterior temporal lobe are crucial in semantic memory: Evidence from a novel direct comparison of distortion-corrected fMRI, rTMS, and semantic dementia. Cerebral Cortex, 20, 2728–2738.
Bolla, K. I., Lindgren, K. N., Bonaccorsy, C., & Bleecker, M. L. (1991). Memory complaints in older adults. Fact or fiction? Archives of Neurology, 48, 61–64.
Boutet, I., & Faubert, J. (2006). Recognition of faces and complex objects in younger and older adults. Memory and Cognition, 34, 854–864.
Boutet, I., Taler, V., & Collin, C. A. (2015). On the particular vulnerability of face recognition to aging: A review of three hypotheses. Frontiers in Psychology, 6, 1139.
Bowen, H. J., & Kensinger, E. A. (2017). Recapitulation of emotional source context during memory retrieval. Cortex, 91, 142–156.
Breiter, H. C., Etcoff, N. L., Whalen, P. J., Kennedy, W. A., Rauch, S. L., Buckner, R. L., et al. (1996). Response and habituation of the human amygdala during visual processing of facial expression. Neuron, 17, 875–887.
Cabeza, R. (2002). Hemispheric asymmetry reduction in older adults: The HAROLD model. Psychology and Aging, 17, 85–100.
Cabeza, R., Albert, M., Belleville, S., Craik, F. I. M., Duarte, A., Grady, C. L., et al. (2018). Maintenance, reserve and compensation: The cognitive neuroscience of healthy ageing. Nature Reviews Neuroscience, 19, 701–710.
Chaby, L., Narme, P., & George, N. (2011). Older adults' configural processing of faces: Role of second-order information. Psychology and Aging, 26, 71–79.
Cohen, G., & Faulkner, D. (1986). Memory for proper names: Age differences in retrieval. British Journal of Developmental Psychology, 4, 187–197.
Comblain, C., D'Argembeau, A., & van der Linden, M. (2005). Phenomenal characteristics of autobiographical memories for emotional and neutral events in older and younger adults. Experimental Aging Research, 31, 173–189.
Crook, T. H., & West, R. L. (1990). Name recall performance across the adult life-span. British Journal of Psychology, 81, 335–349.
D'Argembeau, A., & van der Linden, M. (2004). Identity but not expression memory for unfamiliar faces is affected by ageing. Memory, 12, 644–654.
D'Argembeau, A., & van der Linden, M. (2007). Facial expressions of emotion influence memory for facial identity in an automatic way. Emotion, 7, 507–515.
Daselaar, S. M., Fleck, M. S., Dobbins, I. G., Madden, D. J., & Cabeza, R. (2006). Effects of healthy aging on hippocampal and rhinal memory functions: An event-related fMRI study. Cerebral Cortex, 16, 1771–1782.
Davachi, L. (2006). Item, context and relational episodic encoding in humans. Current Opinion in Neurobiology, 16, 693–700.
Deng, L., Davis, S. W., Monge, Z. A., Wing, E. A., Geib, B. R., Raghunandan, A., et al. (2021). Age-related dedifferentiation and hyperdifferentiation of perceptual and mnemonic representations. Neurobiology of Aging, 106, 55–67.
Dennis, N. A., & Cabeza, R. (2011). Age-related dedifferentiation of learning systems: An fMRI study of implicit and explicit learning. Neurobiology of Aging, 32, 2318.e17–2318.e30.
Dennis, N. A., Hayes, S. M., Prince, S. E., Madden, D. J., Huettel, S. A., & Cabeza, R. (2008). Effects of aging on the neural correlates of successful item and source memory encoding. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 791–808.
Dennis, N. A., Overman, A. A., Gerver, C. R., McGraw, K. E., Rowley, M. A., & Salerno, J. M. (2019). Different types of associative encoding evoke differential processing in both younger and older adults: Evidence from univariate and multivariate analyses. Neuropsychologia, 135, 107240.
Diana, R. A., Yonelinas, A. P., & Ranganath, C. (2007). Imaging recollection and familiarity in the medial temporal lobe: A three-component model. Trends in Cognitive Sciences, 11, 379–386.
Ebner, N. C., Johnson, M. K., & Fischer, H. (2012). Neural mechanisms of reading facial emotions in young and older adults. Frontiers in Psychology, 3, 223.
Eichenbaum, H., Yonelinas, A. P., & Ranganath, C. (2007). The medial temporal lobe and recognition memory. Annual Review of Neuroscience, 30, 123–152.
Eickhoff, S. B., Heim, S., Zilles, K., & Amunts, K. (2006). Testing anatomically specified hypotheses in functional imaging using cytoarchitectonic maps. Neuroimage, 32, 570–582.
Eickhoff, S. B., Paus, T., Caspers, S., Grosbras, M. H., Evans, A. C., Zilles, K., et al. (2007). Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. Neuroimage, 36, 511–521.
Eickhoff, S. B., Stephan, K. E., Mohlberg, H., Grefkes, C., Fink, G. R., Amunts, K., et al. (2005). A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage, 25, 1325–1335.
Etzel, J. A. (2017). MVPA significance testing when just above chance, and related properties of permutation tests. In 2017 International Workshop on Pattern Recognition in Neuroimaging, 1–4.
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191.
Fenker, D. B., Schott, B. H., Richardson-Klavehn, A., Heinze, H. J., & Duzel, E. (2005). Recapitulating emotional context: Activity of amygdala, hippocampus and fusiform cortex during recollection and familiarity. European Journal of Neuroscience, 21, 1993–1999.
Fjell, A. M., Walhovd, K. B., Fennema-Notestine, C., McEvoy, L. K., Hagler, D. J., Holland, D., et al. (2009). One-year brain atrophy evident in healthy aging. Journal of Neuroscience, 29, 15223–15231.
Foa, E. B., Gilboa-Schechtman, E., Amir, N., & Freshman, M. (2000). Memory bias in generalized social phobia: Remembering negative emotional expressions. Journal of Anxiety Disorders, 14, 501–519.
Foley, E., Rippon, G., Thai, N. J., Longe, O., & Senior, C. (2012). Dynamic facial expressions evoke distinct activation in the face perception network: A connectivity analysis study. Journal of Cognitive Neuroscience, 24, 507–520.
Fox, C. J., Iaria, G., & Barton, J. J. (2009). Defining the face processing network: Optimization of the functional localizer in fMRI. Human Brain Mapping, 30, 1637–1651.
Fox, C. J., Moon, S. Y., Iaria, G., & Barton, J. J. (2009). The correlates of subjective perception of identity and expression in the face network: An fMRI adaptation study. Neuroimage, 44, 569–580.
Frank, L. E., Preston, A. R., & Zeithamova, D. (2019). Functional connectivity between memory and reward centers across task and rest track memory sensitivity to reward. Cognitive, Affective and Behavioral Neuroscience, 19, 503–522.
Franklin, R. G., Jr., & Zebrowitz, L. A. (2017). Age differences in emotion recognition: Task demands or perceptual dedifferentiation? Experimental Aging Research, 43, 453–466.
Fujiwara, Y., Suzuki, H., Yasunaga, M., Sugiyama, M., Ijuin, M., Sakuma, N., et al. (2010). Brief screening tool for mild cognitive impairment in older Japanese: Validation of the Japanese version of the Montreal Cognitive Assessment. Geriatrics and Gerontology International, 10, 225–232.
Gallo, D. A., Korthauer, L. E., McDonough, I. M., Teshale, S., & Johnson, E. L. (2011). Age-related positivity effects and autobiographical memory detail: Evidence from a past/future source memory task. Memory, 19, 641–652.
Ghuman, A. S., Brunet, N. M., Li, Y., Konecky, R. O., Pyles, J. A., Walls, S. A., et al. (2014). Dynamic encoding of face information in the human fusiform gyrus. Nature Communications, 5, 5672.
Goh, J. O., Suzuki, A., & Park, D. C. (2010). Reduced neural selectivity increases fMRI adaptation with age during face discrimination. Neuroimage, 51, 336–344.
Goncalves, A. R., Fernandes, C., Pasion, R., Ferreira-Santos, F., Barbosa, F., & Marques-Teixeira, J. (2018). Emotion identification and aging: Behavioral and neural age-related changes. Clinical Neurophysiology, 129, 1020–1029.
Goodkind, M. S., Sollberger, M., Gyurak, A., Rosen, H. J., Rankin, K. P., Miller, B., et al. (2012). Tracking emotional valence: The role of the orbitofrontal cortex. Human Brain Mapping, 33, 753–762.
Greene, N. R., & Naveh-Benjamin, M. (2020). A specificity principle of memory: Evidence from aging and associative memory. Psychological Science, 31, 316–331.
Habak, C., Wilkinson, F., & Wilson, H. R. (2008). Aging disrupts the neural transformations that link facial identity across views. Vision Research, 48, 9–15.
Harry, B., Williams, M. A., Davis, C., & Kim, J. (2013). Emotional expressions evoke a differential response in the fusiform face area. Frontiers in Human Neuroscience, 7, 692.
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430.
Haynes, J. D. (2015). A primer on pattern-based approaches to fMRI: Principles, pitfalls, and perspectives. Neuron, 87, 257–270.
Hayward, D. A., Pereira, E. J., Otto, A. R., & Ristic, J. (2018). Smile! Social reward drives attention. Journal of Experimental Psychology: Human Perception and Performance, 44, 206–214.
Heberlein, A. S., Padon, A. A., Gillihan, S. J., Farah, M. J., & Fellows, L. K. (2008). Ventromedial frontal lobe plays a critical role in facial emotion recognition. Journal of Cognitive Neuroscience, 20, 721–733.
Hill, P. F., King, D. R., & Rugg, M. D. (2021). Age differences in retrieval-related reinstatement reflect age-related dedifferentiation at encoding. Cerebral Cortex, 31, 106–122.
Hornak, J., Bramham, J., Rolls, E. T., Morris, R. G., O'Doherty, J., Bullock, P. R., et al. (2003). Changes in emotion after circumscribed surgical lesions of the orbitofrontal and cingulate cortices. Brain, 126, 1691–1712.
Hornak, J., Rolls, E. T., & Wade, D. (1996). Face and voice expression identification in patients with emotional and behavioural changes following ventral frontal lobe damage. Neuropsychologia, 34, 247–261.
Huan, S. Y., Liu, K. P., Lei, X., & Yu, J. (2020). Age-related emotional bias in associative memory consolidation: The role of sleep. Neurobiology of Learning and Memory, 171, 107204.
Ishai, A., Schmidt, C. F., & Boesiger, P. (2005). Face perception is mediated by a distributed cortical network. Brain Research Bulletin, 67, 87–93.
James, L. E., Fogler, K. A., & Tauber, S. K. (2008). Recognition memory measures yield disproportionate effects of aging on learning face–name associations. Psychology and Aging, 23, 657–664.
Kalkstein, J., Checksfield, K., Bollinger, J., & Gazzaley, A. (2011). Diminished top–down control underlies a visual imagery deficit in normal aging. Journal of Neuroscience, 31, 15768–15774.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
Katsumi, Y., Andreano, J. M., Barrett, L. F., Dickerson, B. C., & Touroutoglou, A. (2021). Greater neural differentiation in the ventral visual cortex is associated with youthful memory in superaging. Cerebral Cortex, 31, 5275–5287.
Keightley, M. L., Chiew, K. S., Anderson, J. A., & Grady, C. L. (2011). Neural correlates of recognition memory for emotional faces and scenes. Social Cognitive and Affective Neuroscience, 6, 24–37.
Koen, J. D., & Rugg, M. D. (2019). Neural dedifferentiation in the aging brain. Trends in Cognitive Sciences, 23, 547–559.
LaBar, K. S., Crupain, M. J., Voyvodic, J. T., & McCarthy, G. (2003). Dynamic perception of facial affect and identity in the human brain. Cerebral Cortex, 13, 1023–1033.
Lamar, M., & Resnick, S. M. (2004). Aging and prefrontal functions: Dissociating orbitofrontal and dorsolateral abilities. Neurobiology of Aging, 25, 553–558.
Lee, Y., Grady, C. L., Habak, C., Wilson, H. R., & Moscovitch, M. (2011). Face processing changes in normal aging revealed by fMRI adaptation. Journal of Cognitive Neuroscience, 23, 3433–3447.
Leigland, L. A., Schulz, L. E., & Janowsky, J. S. (2004). Age related changes in emotional memory. Neurobiology of Aging, 25, 1117–1124.
Leshikar, E. D., Gutchess, A. H., Hebrank, A. C., Sutton, B. P., & Park, D. C. (2010). The impact of increased relational encoding demands on frontal and hippocampal function in older adults. Cortex, 46, 507–521.
Lindenberger, U., & Baltes, P. B. (1994). Sensory functioning and intelligence in old age: A strong connection. Psychology and Aging, 9, 339–355.
Liu, Z. X., Grady, C., & Moscovitch, M. (2018). The effect of prior knowledge on post-encoding brain connectivity and its relation to subsequent memory. Neuroimage, 167, 211–223.
Mather, M., & Carstensen, L. L. (2005). Aging and motivated cognition: The positivity effect in attention and memory. Trends in Cognitive Sciences, 9, 496–502.
Matsuda, Y. T., Fujimura, T., Katahira, K., Okada, M., Ueno, K., Cheng, K., et al. (2013). The implicit processing of categorical and dimensional strategies: An fMRI study of facial emotion perception. Frontiers in Human Neuroscience, 7, 551.
McLaren, D. G., Ries, M. L., Xu, G., & Johnson, S. C. (2012). A generalized form of context-dependent psychophysiological interactions (gPPI): A comparison to standard approaches. Neuroimage, 61, 1277–1286.
Murphy, J., Millgate, E., Geary, H., Catmur, C., & Bird, G. (2019). No effect of age on emotion recognition after accounting for cognitive factors and depression. Quarterly Journal of Experimental Psychology, 72, 2690–2704.
Nasreddine, Z. S., Phillips, N. A., Bedirian, V., Charbonneau, S., Whitehead, V., Collin, I., et al. (2005). The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society, 53, 695–699.
Naveh-Benjamin, M. (2000). Adult age differences in memory performance: Tests of an associative deficit hypothesis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1170–1187.
Naveh-Benjamin, M., Guez, J., Kilb, A., & Reedy, S. (2004). The associative memory deficit of older adults: Further support using face–name associations. Psychology and Aging, 19, 541–546.
Ness, H. T., Folvik, L., Sneve, M. H., Vidal-Pineiro, D., Raud, L., Geier, O. M., et al. (2022). Reduced hippocampal–striatal interactions during formation of durable episodic memories in aging. Cerebral Cortex, 32, 2358–2372.
Nicholls, M. E., Thomas, N. A., Loetscher, T., & Grimshaw, G. M. (2013). The Flinders Handedness survey (FLANDERS): A brief measure of skilled hand preference. Cortex, 49, 2914–2926.
O'Doherty, J. P. (2004). Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769–776.
Okubo, M., Suzuki, H., & Nicholls, M. E. (2014). A Japanese version of the FLANDERS handedness questionnaire. Japanese Journal of Psychology, 85, 474–481.
Paller, K. A., & Wagner, A. D. (2002). Observing the transformation of experience into memory. Trends in Cognitive Sciences, 6, 93–102.
Park, J., Carp, J., Hebrank, A., Park, D. C., & Polk, T. A. (2010). Neural specificity predicts fluid processing ability in older adults. Journal of Neuroscience, 30, 9253–9259.
Park, D. C., Polk, T. A., Park, R., Minear, M., Savage, A., & Smith, M. R. (2004). Aging reduces neural specialization in ventral visual cortex. Proceedings of the National Academy of Sciences, U.S.A., 101, 13091–13095.
Payer, D., Marshuetz, C., Sutton, B., Hebrank, A., Welsh, R. C., & Park, D. C. (2006). Decreased neural specialization in old adults on a working memory task. NeuroReport, 17, 487–491.
Puce, A., Allison, T., Bentin, S., Gore, J. C., & McCarthy, G. (1998). Temporal cortex activation in humans viewing eye and mouth movements. Journal of Neuroscience, 18, 2188–2199.
Radloff, L. S. (1977). The CES-D scale: A self-report depression scale for research in the general population. Applied Psychological Measurement, 1, 385–401.
Riediger, M., Voelkle, M. C., Ebner, N. C., & Lindenberger, U. (2011). Beyond "happy, angry, or sad?": Age-of-poser and age-of-rater effects on multi-dimensional emotion perception. Cognition and Emotion, 25, 968–982.
Rissman, J., Gazzaley, A., & D'Esposito, M. (2004). Measuring functional connectivity during distinct stages of a cognitive task. Neuroimage, 23, 752–763.
Ruffman, T., Henry, J. D., Livingstone, V., & Phillips, L. H. (2008). A meta-analytic review of emotion recognition and aging: Implications for neuropsychological models of aging. Neuroscience and Biobehavioral Reviews, 32, 863–881.
Said, C. P., Moore, C. D., Engell, A. D., Todorov, A., & Haxby, J. V. (2010). Distributed representations of dynamic facial expressions in the superior temporal sulcus. Journal of Vision, 10, 11.
Sala-Llonch, R., Bartrés-Faz, D., & Junqué, C. (2015). Reorganization of brain networks in aging: A review of functional connectivity studies. Frontiers in Psychology, 6, 663.
Salat, D. H., Greve, D. N., Pacheco, J. L., Quinn, B. T., Helmer, K. G., Buckner, R. L., et al. (2009). Regional white matter volume differences in nondemented aging and Alzheimer's disease. Neuroimage, 44, 1247–1258.
Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M. (2004). Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Cognitive Brain Research, 20, 81–91.
Sato, W., Kubota, Y., Okada, T., Murai, T., Yoshikawa, S., & Sengoku, A. (2002). Seeing happy emotion in fearful and angry faces: Qualitative analysis of facial expression recognition in a bilateral amygdala-damaged patient. Cortex, 38, 727–742.
Saverino, C., Fatima, Z., Sarraf, S., Oder, A., Strother, S. C., & Grady, C. L. (2016). The associative memory deficit in aging is related to reduced selectivity of brain activity during encoding. Journal of Cognitive Neuroscience, 28, 1331–1344.
Schrouff, J., Rosa, M. J., Rondina, J. M., Marquand, A. F., Chu, C., Ashburner, J., et al. (2013). PRoNTo: Pattern recognition for neuroimaging toolbox. Neuroinformatics, 11, 319–337.
Sergerie, K., Lepage, M., & Armony, J. L. (2005). A face to remember: Emotional expression modulates prefrontal activity during memory formation. Neuroimage, 24, 580–585.
Shen, J., Kassir, M. A., Wu, J., Zhang, Q., Zhou, S., Xuan, S. Y., et al. (2013). MR volumetric study of piriform-cortical amygdala and orbitofrontal cortices: The aging effect. PLoS One, 8, e74526.
Shigemune, Y., Abe, N., Suzuki, M., Ueno, A., Mori, E., Tashiro, M., et al. (2010). Effects of emotion and reward motivation on neural correlates of episodic memory encoding: A PET study. Neuroscience Research, 67, 72–79.
Shima, S. (1985). New self-rating scale for depression. Seisin-Igaku, 27, 717–723.
Shimamura, A. P., Ross, J. G., & Bennett, H. D. (2006). Memory for facial expressions: The power of a smile. Psychonomic Bulletin and Review, 13, 217–222.
Skerry, A. E., & Saxe, R. (2014). A common neural code for perceived and inferred emotion. Journal of Neuroscience, 34, 15997–16008.
Smith, M. L., Gruhn, D., Bevitt, A., Ellis, M., Ciripan, O., Scrimgeour, S., et al. (2018). Transmitting and decoding facial expressions of emotion during healthy aging: More similarities than differences. Journal of Vision, 18, 10.
Sormaz, M., Watson, D. M., Smith, W. A. P., Young, A. W., & Andrews, T. J. (2016). Modelling the perceptual similarity of facial expressions from image statistics and neural responses. Neuroimage, 129, 64–71.
Sperling, R., Chua, E., Cocchiarella, A., Rand-Giovannetti, E., Poldrack, R., Schacter, D. L., et al. (2003). Putting names to faces: Successful encoding of associative memories activates the anterior hippocampal formation. Neuroimage, 20, 1400–1410.
St Jacques, P. L., Dolcos, F., & Cabeza, R. (2009). Effects of aging on functional connectivity of the amygdala for subsequent memory of negative pictures: A network analysis of functional magnetic resonance imaging data. Psychological Science, 20, 74–84.
St-Laurent, M., Abdi, H., Burianova, H., & Grady, C. L. (2011). Influence of aging on the neural correlates of autobiographical, episodic, and semantic memory retrieval. Journal of Cognitive Neuroscience, 23, 4150–4163.
Sugimoto, H., Dolcos, F., & Tsukiura, T. (2021). Memory of my victory and your defeat: Contributions of reward- and memory-related regions to the encoding of winning events in competitions with others. Neuropsychologia, 152, 107733.
Summerfield, C., Greene, M., Wager, T., Egner, T., Hirsch, J., & Mangels, J. (2006). Neocortical connectivity during episodic memory formation. PLoS Biology, 4, e128.
Tisserand, D. J., Pruessner, J. C., Sanz Arigita, E. J., van Boxtel, M. P., Evans, A. C., Jolles, J., et al. (2002). Regional frontal cortical volumes decrease differentially in aging: An MRI study to compare volumetric approaches and voxel-based morphometry. Neuroimage, 17, 657–669.
Tsukiura, T., & Cabeza, R. (2008). Orbitofrontal and hippocampal contributions to memory for face–name associations: The rewarding power of a smile. Neuropsychologia, 46, 2310–2319.
Tsukiura, T., & Cabeza, R. (2011). Remembering beauty: Roles of orbitofrontal and hippocampal regions in successful memory encoding of attractive faces. Neuroimage, 54, 653–660.
Tsukiura, T., Sekiguchi, A., Yomogida, Y., Nakagawa, S., Shigemune, Y., Kambara, T., et al. (2011). Effects of aging on hippocampal and anterior temporal activations during successful retrieval of memory for face–name associations. Journal of Cognitive Neuroscience, 23, 200–213.
Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., et al. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage, 15, 273–289.
van Reekum, C. M., Schaefer, S. M., Lapate, R. C., Norris, C. J., Greischar, L. L., & Davidson, R. J. (2011). Aging is associated with positive responding to neutral information but reduced recovery from negative information. Social Cognitive and Affective Neuroscience, 6, 177–185.
Watson, K. K., & Platt, M. L. (2012). Social signals in primate orbitofrontal cortex. Current Biology, 22, 2268–2273.
Wegrzyn, M., Riehle, M., Labudda, K., Woermann, F., Baumgartner, F., Pollmann, S., et al. (2015). Investigating the brain basis of facial expression perception using multi-voxel pattern analysis. Cortex, 69, 131–140.
Willis, M. L., Palermo, R., Burke, D., McGrillen, K., & Miller, L. (2010). Orbitofrontal cortex lesions result in abnormal social judgements to emotional faces. Neuropsychologia, 48, 2182–2187.
Winston, J. S., O'Doherty, J., & Dolan, R. J. (2003). Common and distinct neural responses during direct and incidental processing of multiple facial emotions. Neuroimage, 20, 84–97.
Xie, Y., Ksander, J., Gutchess, A., Hadjikhani, N., Ward, N., Boshyan, J., et al. (2021). Age differences in neural activation to face trustworthiness: Voxel pattern and activation level assessments. Cognitive, Affective and Behavioral Neuroscience, 21, 278–291.
Yang, T. T., Menon, V., Eliez, S., Blasey, C., White, C. D., Reid, A. J., et al. (2002). Amygdalar activation associated with positive and negative facial expressions. NeuroReport, 13, 1737–1741.
Yang, A. X., & Urminsky, O. (2018). The smile-seeking hypothesis: How immediate affective reactions motivate and reward gift giving. Psychological Science, 29, 1221–1233.
Zebrowitz, L. A., Boshyan, J., Ward, N., Gutchess, A., & Hadjikhani, N. (2017). The older adult positivity effect in evaluations of trustworthiness: Emotion regulation or cognitive capacity? PLoS One, 12, e0169823.
Zelinski, E. M., Gilewski, M. J., & Thompson, L. W. (1980). Do laboratory tests relate to self-assessment of memory ability in the young and old. New Directions in Memory and Aging, 519–544.
Zhang, H., Japee, S., Nolan, R., Chu, C., Liu, N., & Ungerleider, L. G. (2016). Face-selective regions differ in their ability to classify facial expressions. Neuroimage, 130, 77–90.
Zhao, K., Liu, M., Gu, J., Mo, F., Fu, X., & Hong Liu, C. (2020). The preponderant role of fusiform face area for the facial expression confusion effect: An MEG study. Neuroscience, 433, 42–52.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.