Abstract
It is known that emotional facial expressions modulate the perception and subsequent recollection of faces and that aging alters these modulatory effects. Yet, the underlying neural mechanisms are not well understood, and they were the focus of the current fMRI study. We scanned healthy young and older adults while they perceived happy, neutral, or angry faces paired with names. Participants were then provided with the names of the faces and asked to recall the facial expression of each face. fMRI analyses focused on the fusiform face area (FFA), the posterior superior temporal sulcus (pSTS), the OFC, the amygdala (AMY), and the hippocampus (HC). Univariate activity, multivariate pattern (MVPA), and functional connectivity analyses were performed. The study yielded two main sets of findings. First, in pSTS and AMY, univariate activity and MVPA discrimination during the processing of facial expressions were similar in young and older adults, whereas in FFA and OFC, MVPA discriminated facial expressions less accurately in older than young adults. These findings suggest that facial expression representations in FFA and OFC reflect age-related dedifferentiation and a positivity effect. Second, HC–OFC connectivity showed subsequent memory effects (SMEs) for happy expressions in both age groups, HC–FFA connectivity exhibited SMEs for happy and neutral expressions in young adults, and HC–pSTS interactions displayed SMEs for happy expressions in older adults. These results could be related to compensatory mechanisms and positivity effects in older adults. Taken together, the results clarify the effects of aging on the neural mechanisms of perceiving and encoding facial expressions.
INTRODUCTION
The most common memory complaint in healthy older adults (e.g., 83% of responders in Bolla, Lindgren, Bonaccorsy, & Bleecker, 1991; see also Cohen & Faulkner, 1986; Zelinski, Gilewski, & Thompson, 1980) is a difficulty in remembering people's names. This deficit, which has been confirmed in laboratory studies on face–name associations (James, Fogler, & Tauber, 2008; Naveh-Benjamin, Guez, Kilb, & Reedy, 2004; Crook & West, 1990), is not surprising given that older adults are impaired both in processing visual stimuli (Baltes & Lindenberger, 1997; Lindenberger & Baltes, 1994) including face perception (for review, see Boutet, Taler, & Collin, 2015) and in establishing new associations (Greene & Naveh-Benjamin, 2020; Naveh-Benjamin, 2000). An important component of older adults' visual perception deficits is a reduction in neural specificity known as the age-related dedifferentiation effect (for review, see Koen & Rugg, 2019). In contrast, the associative memory deficit has been linked to impaired hippocampal activity (Tsukiura et al., 2011; Dennis et al., 2008) and hippocampal–cortical connectivity (Ness et al., 2022; Tsukiura et al., 2011; Leshikar, Gutchess, Hebrank, Sutton, & Park, 2010). In a previous study, we found that functional connectivity between the hippocampus (HC) and the OFC during face–name associative learning was enhanced by happy facial expressions and that this mechanism was related to better memory for happy faces paired with names (Tsukiura & Cabeza, 2008). Given that older adults show the age-related positivity effect, which is known as a bias toward positive emotional stimuli or a tendency to interpret neutral stimuli as emotionally positive stimuli (for review, see Mather & Carstensen, 2005), an obvious question is whether the positivity effect could enhance HC–cortex connectivity during the encoding of face–name associations. If this effect is also associated with better face–name learning, it would be an example of functional compensation in older adults. Below, we briefly describe the dedifferentiation and positivity effects, and their implications for the present study.
The age-related dedifferentiation effect in visual perception refers to the finding that the neural representations of different visual stimuli are less distinct in older than young adults (Park et al., 2004; for review, see Koen & Rugg, 2019). This effect has been demonstrated for a variety of tasks and stimuli (Deng et al., 2021; Hill, King, & Rugg, 2021; Saverino et al., 2016; Dennis & Cabeza, 2011; Kalkstein, Checksfield, Bollinger, & Gazzaley, 2011; St-Laurent, Abdi, Burianova, & Grady, 2011; Park, Carp, Hebrank, Park, & Polk, 2010; Payer et al., 2006), including faces (Goh, Suzuki, & Park, 2010). Age-related dedifferentiation has traditionally been examined using univariate analyses and, more recently, using multivariate representational analyses, such as multivariate pattern analysis (MVPA) (Katsumi, Andreano, Barrett, Dickerson, & Touroutoglou, 2021; Hill et al., 2021; Dennis et al., 2019). MVPA is used to measure the discriminability between different stimuli (e.g., faces vs. objects in the work of Haxby et al., 2001), different exemplars of the same class (e.g., different facial identities in the work of Ghuman et al., 2014), or different qualities across stimuli of the same class (e.g., different facial expressions in the work of Wegrzyn et al., 2015; Harry, Williams, Davis, & Kim, 2013). In the present study, we focused on age-related dedifferentiation in perceiving different facial expressions. Older adults discriminate facial expressions less accurately than young adults, and this deficit has been interpreted as evidence of age-related dedifferentiation (Franklin & Zebrowitz, 2017). Two potential regions reflecting age-related dedifferentiation for facial expressions are the fusiform face area (FFA), which is sensitive to the processing of facial expressions (Wegrzyn et al., 2015; Skerry & Saxe, 2014; Harry et al., 2013), and OFC, which is involved in the processing of socioemotional signals, including facial expressions (Goodkind et al., 2012; Watson & Platt, 2012; Heberlein, Padon, Gillihan, Farah, & Fellows, 2008; Hornak et al., 2003; Hornak, Rolls, & Wade, 1996). Both FFA and OFC are affected by age-related atrophy and dedifferentiation (Katsumi et al., 2021; Xie et al., 2021; Shen et al., 2013; Lee, Grady, Habak, Wilson, & Moscovitch, 2011; Goh et al., 2010; Fjell et al., 2009; Salat et al., 2009; Park et al., 2004) and hence were likely to show age-related dedifferentiation for facial expressions in the present study.
The age-related positivity effect refers to the finding that older adults often show a bias toward positive stimuli and interpret ambiguous socioemotional stimuli as more positive than young adults do (for review, see Mather & Carstensen, 2005). In behavioral studies, the positivity effect has been found for a variety of emotional stimuli (Huan, Liu, Lei, & Yu, 2020; Gallo, Korthauer, McDonough, Teshale, & Johnson, 2011; van Reekum et al., 2011; Comblain, D'Argembeau, & van der Linden, 2005), including faces (Zebrowitz, Boshyan, Ward, Gutchess, & Hadjikhani, 2017; Riediger, Voelkle, Ebner, & Lindenberger, 2011; Leigland, Schulz, & Janowsky, 2004). In fMRI studies, the memory-related positivity effect in older adults has been linked to age-related changes in functional connectivity for emotional pictures (Addis, Leclerc, Muscatell, & Kensinger, 2010; St Jacques, Dolcos, & Cabeza, 2009) and to an age-related increase in functional connectivity and memory performance for emotionally positive pictures (Addis et al., 2010). These changes could be attributed to functional compensation, which refers to the cognition-enhancing recruitment of neural resources (for review, see Cabeza et al., 2018). In our prior fMRI study of memory for face–name associations, we found that happy facial expressions boosted functional connectivity between HC and OFC to a greater extent for subsequently remembered than forgotten stimuli (Tsukiura & Cabeza, 2008). Thus, in the present study, we were interested in (1) whether we would find an age-related increase in functional connectivity for happy faces between HC and OFC or other regions related to processing facial expressions, such as the posterior superior temporal sulcus (pSTS) (Wegrzyn et al., 2015; Said, Moore, Engell, Todorov, & Haxby, 2010) or FFA (Wegrzyn et al., 2015; Skerry & Saxe, 2014; Harry et al., 2013), and (2) whether this effect would be associated with subsequent memory (for review, see Paller & Wagner, 2002), which would suggest age-related compensation in memory.
In the present event-related fMRI study, participants were scanned while viewing happy, neutral, or angry faces paired with names, and memory for the facial expressions was assessed by presenting the names as cues and asking participants to recall the facial expression of the face associated with each cued name (see Figure 1). We performed traditional univariate analyses, but our focus was the dedifferentiation effect measured with MVPA and the positivity effect measured with functional connectivity analyses. To investigate dedifferentiation in perceiving facial expressions, an MVPA classifier was trained to distinguish between happy, neutral, and angry expressions and was then used to assess the discriminability among these expressions during face perception. If the MVPA classifiers distinguish these facial expressions less accurately in older than young adults, this would reflect age-related dedifferentiation in perceiving facial expressions. As noted above, our candidate regions for age-related dedifferentiation of facial expressions were FFA and OFC (Katsumi et al., 2021; Xie et al., 2021; Lee et al., 2011; Goh et al., 2010; Park et al., 2004), which are regions that are involved in processing facial expressions (Wegrzyn et al., 2015; Skerry & Saxe, 2014; Harry et al., 2013; Watson & Platt, 2012; Heberlein et al., 2008; Hornak et al., 2003; Hornak et al., 1996) and that show atrophy in older adults (Shen et al., 2013; Fjell et al., 2009; Salat et al., 2009). In addition, the MVPA also investigated neural specificity for facial expressions in the amygdala (AMY), which is related to the perception of highly arousing facial expressions (Yang et al., 2002; Breiter et al., 1996), and in pSTS, which is related to the processing of face-based social signals, including facial expressions and eye movements (Wegrzyn et al., 2015; Said et al., 2010; Puce, Allison, Bentin, Gore, & McCarthy, 1998). To investigate the age-related positivity effect, we performed functional connectivity analyses for subsequently remembered and forgotten faces. As explained above, we focused on whether HC–cortex functional connectivity, which has been implicated in the age-related positivity effect (Addis et al., 2010), is associated with successful memory in older adults. If so, such an effect would be consistent with age-related compensation.
METHODS
Participants
In this study, we scanned 36 young (16 women) and 36 older (18 women) adults, who were paid for their participation in the fMRI experiment. All participants were right-handed, native Japanese speakers with no history of neurological or psychiatric disorders, and their vision was normal or corrected to normal with glasses. All young participants were recruited from the Kyoto University community, and all older participants were recruited from the Kyoto City Silver Human Resource Center. All participants provided written informed consent for a protocol approved by the institutional review board of the Graduate School of Human and Environmental Studies, Kyoto University (19-H-10). An a priori power analysis was conducted for a repeated-measures ANOVA design with an interaction between the between-subjects factor of Age Group (Young and Old) and the within-subject factor of Facial Expression (Happy, Neutral, and Angry). This analysis, performed with G*Power Version 3.1 (Faul, Erdfelder, Lang, & Buchner, 2007), estimated a total sample size of 56 (28 young and 28 older adults) given a small-to-medium effect size (f = 0.2), an α error probability of .05, and power of 0.90. This estimated sample size is consistent with a similar fMRI study investigating effects of aging and facial expressions on neural mechanisms during the processing of faces (Ebner, Johnson, & Fischer, 2012). To retain sufficient power in the event of data loss due to poor performance, excessive head motion, and so forth, we recruited 36 young and 36 older adults.
All participants completed several neuropsychological tests, including the Japanese version of the Flinders Handedness Survey (FLANDERS; Okubo, Suzuki, & Nicholls, 2014; Nicholls, Thomas, Loetscher, & Grimshaw, 2013), the Japanese version of the Montreal Cognitive Assessment (MoCA-J; Fujiwara et al., 2010; Nasreddine et al., 2005), and the Center for Epidemiologic Studies Depression scale (CES-D; Shima, 1985; Radloff, 1977). One young and two older participants showed head movement larger than 1.5 voxels in two or more fMRI runs. In addition, one older participant misunderstood the procedures of the encoding task, one older participant felt sick in the MRI scanner, and one young and one older participant showed possible pathological changes (probable arachnoid cyst) in their structural MRIs. In the neuropsychological tests, the MoCA-J score of one young participant was more than 2 SDs below the mean of the young group, and the CES-D scores of two young participants and one older participant were more than 2 SDs worse than the means of their respective groups. In the behavioral performance of the fMRI task, four young and two older participants had fewer than three trials in at least one experimental condition of the fMRI analyses. Based on these exclusion criteria, behavioral and MRI data from nine young and eight older participants were excluded from all analyses. Thus, the analyses were based on data from 27 young (12 women; mean age = 21.19 [SD = 1.62] years) and 28 older (14 women; mean age = 67.36 [SD = 2.57] years) adults.
Age, years of education, and FLANDERS, MoCA-J, and CES-D scores were compared between the young and older groups with two-sample t tests (two-tailed). Significant group differences were identified in age, t(53) = 79.38, p < .001, d = 21.41, and the MoCA-J score, t(53) = 4.75, p < .001, d = 1.28, whereas no significant differences were found in years of education, t(53) = 0.60, p = .55, d = 0.16; the FLANDERS score, t(53) = 1.31, p = .20, d = 0.35; or the CES-D score, t(53) = 1.13, p = .26, d = 0.31. Detailed profiles of the analyzed young and older adults are summarized in Table 1.
| | Young (SD) | Old (SD) | Two-sample t test |
|---|---|---|---|
| Age, years | 21.19 (1.62) | 67.36 (2.57) | Young < Old*** |
| Sex, male:female | 15:12 | 14:14 | |
| Education, years | 14.07 (1.24) | 14.32 (1.79) | n.s. |
| FLANDERS | 9.41 (1.37) | 9.79 (0.69) | n.s. |
| MoCA-J | 28.52 (0.85) | 26.36 (2.22) | Young > Old*** |
| CES-D | 8.19 (4.46) | 9.75 (5.72) | n.s. |
FLANDERS = Japanese version of the Flinders Handedness Survey; MoCA-J = Japanese version of the Montreal Cognitive Assessment; CES-D = Center for Epidemiologic Studies Depression scale; n.s. = not significant.
*** p < .001.
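For illustration, group comparisons of this kind can be reproduced with standard statistical software. The following minimal Python sketch (using SciPy) computes a two-tailed two-sample t test and Cohen's d; the score vectors are placeholders standing in for the actual MoCA-J data, not the reported values.

```python
import numpy as np
from scipy.stats import ttest_ind

def cohens_d(a, b):
    """Cohen's d based on the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Placeholder vectors standing in for per-participant MoCA-J scores (27 young, 28 older adults)
rng = np.random.default_rng(0)
young = rng.normal(28.5, 0.9, 27)
old = rng.normal(26.4, 2.2, 28)

t, p = ttest_ind(young, old)                 # two-tailed by default
df = len(young) + len(old) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}, d = {cohens_d(young, old):.2f}")
```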
Stimuli
The stimuli were colored face pictures of 120 unfamiliar persons (60 female and 60 male faces) selected from an in-house database, with each person photographed with happy, neutral, and angry facial expressions. This database contains faces of volunteer pedestrians in their thirties and forties, recruited in the downtown area of Kyoto city, who were asked to pose happy, angry, and neutral facial expressions. All pictures were taken against a gray background, with the eyes of each face directed to the front. Easily identifiable visual features, such as blemishes, freckles, moles, scars, and ornaments, were removed (Sugimoto, Dolcos, & Tsukiura, 2021), and the color of the clothes in each picture was converted to a uniform black using image processing software (Adobe Photoshop CS 5.1). All pictures were resized to 280 × 350 pixels. The pictures of the 120 persons with three facial expressions each (360 pictures in total) were divided into three lists of 40 persons each, with age and sex matched across the lists. Using data from 24 healthy young adults in a previous study (Sugimoto et al., 2021), emotional arousal and valence for happy, neutral, and angry expressions were also matched across the lists. The arousal and valence scores were compared among the lists for each facial expression by one-way ANOVAs. The ANOVAs for arousal scores showed no significant differences among the lists [happy: F(2, 117) = 0.02, p = .98, η2 = .00; neutral: F(2, 117) = 0.15, p = .86, η2 = .00; angry: F(2, 117) = 0.003, p = .997, η2 = .00], and neither did the ANOVAs for valence scores [happy: F(2, 117) = 0.06, p = .94, η2 = .00; neutral: F(2, 117) = 0.24, p = .79, η2 = .00; angry: F(2, 117) = 0.32, p = .73, η2 = .01]. Each list was assigned to the Happy, Neutral, or Angry condition for the target faces to be encoded, and the assignment was counterbalanced across participants.
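As a minimal sketch of the list-equivalence checks described above, the one-way ANOVAs over the three lists can be run with SciPy. The data frame below is a placeholder built from random numbers standing in for the normative ratings; the column names are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

# Placeholder long-format table standing in for the normative ratings of the 120 faces:
# one row per face and expression, with its stimulus-list assignment.
rng = np.random.default_rng(0)
faces = pd.DataFrame({
    "list": np.tile(np.repeat([1, 2, 3], 40), 3),
    "expression": np.repeat(["happy", "neutral", "angry"], 120),
    "arousal": rng.uniform(1, 9, 360),
    "valence": rng.uniform(1, 9, 360),
})

for expression in ["happy", "neutral", "angry"]:
    for measure in ["arousal", "valence"]:
        sub = faces[faces["expression"] == expression]
        # One group of scores per stimulus list (three lists of 40 faces each: df = 2, 117)
        groups = [grp[measure].to_numpy() for _, grp in sub.groupby("list")]
        F, p = f_oneway(*groups)
        print(f"{expression} / {measure}: F = {F:.2f}, p = {p:.3f}")
```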
A set of Japanese family names was also employed in this study. The 160 most popular Japanese family names written with two kanji characters, which can have different pronunciations, were collected from an online database (myoji-yurai.net/prefectureRanking.htm). These 160 names were divided into four lists matched for popularity. The 120 names in three of the lists were randomly paired with the 120 target faces, and the 40 names in the remaining list were used as distracters in the retrieval phase.
Experimental Procedures
fMRI runs included a memory task involving the encoding and retrieval of face–name pairs and a functional localizer task. Encoding and retrieval runs of the memory task alternated across eight runs, with each retrieval run testing the face–name pairs encoded in the previous encoding run. Each of the four encoding–retrieval sets used a different set of face–name pairs, and there was approximately a 1-min interval between the encoding and retrieval runs in each set. After exiting the scanner, participants rated the faces for emotional arousal and valence. Stimulus presentation and the recording of participants' responses in all tasks were controlled by MATLAB scripts (www.mathworks.com). All participants were fully trained on the encoding and retrieval procedures before the experiment.
Memory Task
Figure 1 illustrates encoding and retrieval trials in the memory task of face–name pairs. During both encoding and retrieval, each stimulus was presented for 3500 msec and was followed by a jittered (2500–7500 msec) visual fixation as the ISI. During each encoding run, participants were presented with 30 face–name pairs one by one in random order. For each pair, they were instructed to learn the pair by silently reading the name and pressing a key to indicate the expression of the face (“Happy,” “Neutral,” or “Angry”). During each retrieval run, participants were presented with the 30 names from the face–name pairs encoded in the previous run, mixed with 10 new names in random order. For each name, participants were told to press “New” if they believed the name had not been paired with a face in the previous encoding run. If they believed the name had been paired with a face, they were to indicate the expression of that face by pressing “Happy,” “Neutral,” or “Angry,” and if they believed the name had been paired with a face but could not remember the expression, they were to press “Unknown.” Participants were asked to respond as quickly as possible during both encoding and retrieval.
In the present study, we focused on the analyses of fMRI data only from the encoding runs. Trials with no response in either the encoding or retrieval run, or in which the facial expression was judged incorrectly during encoding, were excluded from all analyses. Among trials in which learned names were presented at retrieval, trials in which the facial expression associated with the name was successfully recalled were defined as Hits; trials in which the facial expression was recalled incorrectly or categorized as “Unknown,” or in which the learned name was judged as “New,” were defined as Misses. The Hit and Miss trials were further subdivided according to the facial expression (Happy, Neutral, or Angry) presented during encoding.
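The trial classification described above can be expressed compactly in code. The following Python sketch only illustrates the scoring rules; the data structure and field names are assumptions for illustration, not part of the actual analysis pipeline.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    expression: str           # expression shown at encoding: "Happy", "Neutral", or "Angry"
    encoding_response: str    # expression judged at encoding, or "NR" if no response
    retrieval_response: str   # "Happy", "Neutral", "Angry", "Unknown", "New", or "NR"

def classify(trial: Trial) -> str:
    """Assign an encoding trial to the conditions used in the fMRI models."""
    # Excluded: no response at encoding or retrieval, or an erroneous expression judgment at encoding
    if "NR" in (trial.encoding_response, trial.retrieval_response) or \
            trial.encoding_response != trial.expression:
        return "NR"
    # Hit: the studied expression is correctly recalled from the name cue
    if trial.retrieval_response == trial.expression:
        return f"{trial.expression}-Hit"
    # Miss: wrong expression, "Unknown", or the learned name judged as "New"
    return f"{trial.expression}-Miss"

print(classify(Trial("Happy", "Happy", "Unknown")))   # -> Happy-Miss
print(classify(Trial("Angry", "Angry", "Angry")))     # -> Angry-Hit
```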
Functional Localizer Task
After completing the memory task of face–name associations, participants performed one run of a functional localizer task (Matsuda et al., 2013), in which movies of emotional facial expressions were presented. The rationale for using movies of facial expressions was that dynamic facial expressions have produced greater activation in face-related regions than static images (Foley, Rippon, Thai, Longe, & Senior, 2012; Fox, Iaria, & Barton, 2009; Sato, Kochiyama, Yoshikawa, Naito, & Matsumura, 2004; LaBar, Crupain, Voyvodic, & McCarthy, 2003). In addition, the functional localizer task enabled us to identify brain regions reflecting the common processing of multiple facial expressions rather than the processing of a single facial expression.
In this task, participants were presented with 2-sec movies of male and female faces, in which a neutral facial expression changed to an emotional facial expression of joy, fear, anger, or disgust, or with 2-sec control movies in which the original movies were transformed into dynamic mosaics. Thus, we prepared 16 movies, including 8 original and 8 control movies. In addition, we prepared another version of the 2-sec original and control movies, into which a building picture was briefly presented (100 msec) at a random time; such insertions occurred approximately once every three trials. Participants were required to press the corresponding button as quickly as possible when they noticed the building pictures. These movies with the momentary presentation of building pictures included four original and four control stimuli. All movies were presented one by one in random order for 2000 msec each, with a visual fixation shown as the ISI, jittered with variable durations (2500–5500 msec). The task included 120 trials, in which each of the 24 movie stimuli was repeated five times.
Evaluation Task
After scanning, participants rated the emotional arousal and valence elicited by the encoded faces. In one run, the 120 encoded faces were presented and rated for emotional arousal (1 = calm, 9 = exciting), and in another run, the same faces were presented and rated for emotional valence (1 = unpleasant, 9 = pleasant). The faces were presented in random order, each for 2000 msec for young adults and 3000 msec for older adults, with a 1000-msec ISI. The order of the two rating runs was counterbalanced across participants.
MRI Data Acquisition
All MRI data were acquired with a MAGNETOM Verio 3-T MRI scanner (Siemens) located at the Kokoro Research Center, Kyoto University. Stimuli were visually presented on an MRI-compatible display (Nordic Neuro Lab, Inc.), and participants viewed the stimuli through a mirror attached to the head coil of the MRI scanner. Behavioral responses were recorded with a five-button fiber optic response pad (Current Designs, Inc.) operated with the right hand. Head motion in the scanner was minimized by a neck supporter and foam pads, and scanner noise was reduced by earplugs. First, three-directional T1-weighted structural images were acquired to localize the subsequent functional and high-resolution anatomical images. Second, functional images were recorded using a gradient-echo EPI pulse sequence sensitive to blood oxygenation level-dependent (BOLD) contrast (repetition time = 1500 msec, flip angle = 60°, echo time = 38.8 msec, field of view = 22.0 cm × 22.0 cm, matrix size = 100 × 100, 68 horizontal slices, slice thickness/gap = 2.2/0 mm, multiband factor = 4). Finally, high-resolution T1-weighted structural images were obtained using MPRAGE (repetition time = 2250 msec, echo time = 3.51 msec, field of view = 25.6 cm, matrix size = 256 × 256, 208 horizontal slices, slice thickness/gap = 1.0/0 mm).
fMRI Data Analysis
Preprocessing
All MRI data were preprocessed with Statistical Parametric Mapping 12 (SPM12; www.fil.ion.ucl.ac.uk/spm/software/spm12/) implemented in MATLAB (www.mathworks.com). In the preprocessing, fMRI data from the memory and functional localizer tasks were analyzed separately. First, the initial six volumes of functional images in each run were discarded to avoid initial signal instability. Second, six head-motion parameters were extracted from the series of remaining functional images. Third, the high-resolution structural image was coregistered to the first volume of the functional images. Fourth, during spatial normalization, we estimated parameters to fit the structural image to the tissue probability maps in the Montreal Neurological Institute (MNI) template space, and these parameters were applied to all functional images (resampled resolution = 2.2 mm × 2.2 mm × 2.2 mm). Finally, the normalized functional images were spatially smoothed with a Gaussian kernel of FWHM = 5 mm. The fully preprocessed functional images were used for the univariate analyses of the memory task and functional localizer task and for the functional connectivity analysis of the memory task, whereas the MVPA of the memory task used functional images without spatial smoothing.
Univariate Analysis in the Functional Localizer Task and ROI Definition
Functional images in the functional localizer task were statistically analyzed to define ROIs related to the processing of faces and facial expressions. Statistical analyses were performed in SPM12 at the individual level and then at the group level. In the individual-level (fixed-effect) analysis, trial-related activation was modeled by convolving a vector of onsets with a canonical hemodynamic response function (HRF) in the context of the general linear model (GLM), in which the timing of stimulus presentation was defined as the onset with an event duration of 0 sec. This model included nine regressors reflecting four conditions related to the original movies of each facial expression (Happy, Fear, Angry, and Disgust), four control conditions related to the mosaic movies transformed from the original movies (Happy-Mosaic, Fear-Mosaic, Angry-Mosaic, and Disgust-Mosaic), and one dummy condition in which a building picture was inserted into the original and control movies. Six parameters related to head motion were also included in this model as confounding factors. Activation related to the processing of faces and facial expressions was identified by comparing all conditions of the original movies (Happy, Fear, Angry, and Disgust) with all control conditions of the mosaic movies (Happy-Mosaic, Fear-Mosaic, Angry-Mosaic, and Disgust-Mosaic), and the contrast yielded a t statistic in each voxel. A contrast image was created for each participant.
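To illustrate the regressor construction described above, the sketch below builds one condition regressor by convolving a stick function at the stimulus onsets with a double-gamma approximation of the canonical HRF. The onset times and run length are placeholders, and the HRF parameters are an approximation rather than SPM12's exact implementation.

```python
import numpy as np
from scipy.stats import gamma

TR = 1.5              # repetition time in seconds (matching the acquisition parameters)
n_scans = 300         # placeholder run length in volumes

def canonical_hrf(tr, duration=32.0):
    """Double-gamma approximation of the canonical HRF (peak around 5-6 s, late undershoot)."""
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

# Stick (delta) function at stimulus onsets, modeled with a duration of 0 sec
onsets_sec = np.array([10.5, 24.0, 39.0, 55.5])     # placeholder onset times
sticks = np.zeros(n_scans)
sticks[np.round(onsets_sec / TR).astype(int)] = 1.0

# Condition regressor entered into the GLM design matrix
regressor = np.convolve(sticks, canonical_hrf(TR))[:n_scans]
```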
In the group-level (random-effect) analyses, contrast images produced by the individual-level analysis were analyzed by a one-sample t test for all participants in both age groups. This test produced an activation map reflecting greater activation during the general processing of faces and facial expressions than during simple visual processing. In the whole-brain analysis, the height threshold at the voxel level (p < .001) was corrected for whole-brain multiple comparisons by the family-wise error (FWE) rate (p < .05) with a minimum cluster size of 10 voxels.
Table 2 summarizes the results of the functional localizer task. Significant activation was identified in one cluster that included right pSTS, right FFA, and right occipital face area (OFA), and in separate clusters in left pSTS, left FFA, left OFA, and bilateral AMY. These regions were used as ROI masks in the univariate analysis of the memory task. In addition, the significant activation clusters were combined with anatomical masks to define the right pSTS, right FFA, and bilateral AMY ROIs used for MVPA (see Figure 4). The pSTS ROI was defined as the cluster of significant activation within the right superior and middle temporal gyri of the automated anatomical labeling (AAL) ROI package, after removing an anterior temporal lobe region reported in a previous study (Binney, Embleton, Jefferies, Parker, & Ralph, 2010). The cluster of significant activation within the right fusiform gyrus of the AAL ROI package (Tzourio-Mazoyer et al., 2002) was defined as the right FFA ROI, and the cluster of significant activation within an anatomical mask of the bilateral AMY taken from a previous study (Amunts et al., 2005) was defined as the bilateral AMY ROI.
| Regions | L/R | BA | MNI x | MNI y | MNI z | Z value | k |
|---|---|---|---|---|---|---|---|
| Whole-brain analysis | | | | | | | |
| Middle temporal gyrus (pSTS) ^a | L | 21/22/37 | −54 | −64 | 7 | 5.74 | 116 |
| Fusiform gyrus (FFA) ^a | L | 37 | −41 | −57 | −22 | 6.88 | 64 |
| Middle/inferior occipital gyrus (OFA) ^a | L | 19 | −45 | −79 | −6 | 6.03 | 74 |
| Superior/middle temporal gyrus (pSTS) ^a | R | 19/21/22/37/42 | 45 | −75 | −8 | Inf | 1110 |
| Fusiform gyrus (FFA) ^a | | | | | | | |
| Middle/inferior occipital gyrus (OFA) ^a | | | | | | | |
| AMY ^a | L | | −21 | −6 | −17 | 7.69 | 102 |
| AMY ^a | R | | 21 | −9 | −15 | Inf | 71 |
| ROI-based analysis (OFC) | | | | | | | |
| Inferior orbitofrontal gyrus ^b | L | 47 | −41 | 29 | −6 | 4.15 | 3 |
| ROI-based analysis (posterior parts of the right superior and middle temporal gyri) | | | | | | | |
| Superior/middle temporal gyrus (pSTS) ^b | R | 21/22/37/41 | 45 | −66 | 0 | 7.08 | 1227 |
| ROI-based analysis (fusiform gyrus) | | | | | | | |
| Fusiform gyrus (FFA) ^b | R | 19/37 | 41 | −48 | −19 | Inf | 171 |
BA = Brodmann area; k = cluster size; L = left; R = right.
^a Cluster used as ROI in MVPA after masking it with the corresponding anatomical ROI.
^b MNI coordinate used for the center of a seed VOI in the functional connectivity analysis.
The ROI mask in bilateral OFC was defined anatomically as the bilateral superior, middle, inferior, and medial orbitofrontal gyri of the AAL ROI package. This OFC ROI was used in the univariate analysis and MVPA. To determine seed volumes of interest (VOIs) for the functional connectivity analyses, voxels exceeding the height threshold (p < .001) were corrected for multiple comparisons within each of three regions defined by the AAL ROI package: bilateral OFC, the posterior parts of the right superior and middle temporal gyri, and the right fusiform gyrus. Significant activation was found in each region, and the peak voxels in left OFC (x = −41, y = 29, z = −6), right pSTS (x = 45, y = −66, z = 0), and right FFA (x = 41, y = −48, z = −19) were used as the centers of the seed VOIs for the functional connectivity analysis.
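As an illustration of the seed definition, the following sketch builds a 6-mm spherical mask around the left OFC peak coordinate reported above. The image grid and affine are approximated so the example is self-contained; in practice they would be taken from a normalized functional image.

```python
import numpy as np

# Peak MNI coordinate of the left OFC seed from the functional localizer
peak_mni = np.array([-41.0, 29.0, -6.0])
radius_mm = 6.0

# Approximate grid and affine of the normalized images (2.2 mm isotropic); in the actual
# analysis these come from the normalized EPI volumes.
shape = (100, 100, 68)
affine = np.array([[2.2, 0.0, 0.0, -110.0],
                   [0.0, 2.2, 0.0, -110.0],
                   [0.0, 0.0, 2.2,  -72.0],
                   [0.0, 0.0, 0.0,    1.0]])

# MNI coordinates of every voxel center
idx = np.indices(shape).reshape(3, -1)
homog = np.vstack([idx, np.ones(idx.shape[1])])
mni_coords = (affine @ homog)[:3].T

# Binary mask of voxels within 6 mm of the peak
sphere = (np.linalg.norm(mni_coords - peak_mni, axis=1) <= radius_mm).reshape(shape)
print(int(sphere.sum()), "voxels in the seed VOI")
```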
Univariate Analysis
In the present study, we focused on the statistical analysis of fMRI data only from the four runs of the encoding phase of the memory task; retrieval-related activity will be reported elsewhere. For one young adult and one older adult who showed head movements larger than 1.5 voxels during one encoding run, fMRI data from the remaining three encoding runs were used in the univariate analysis, MVPA, and functional connectivity analysis.
In the univariate analysis of the memory task, functional images were analyzed with SPM12 at the individual level and then at the group level. In the individual-level (fixed-effect) analysis, we modeled trial-related activation by convolving onset vectors with a canonical HRF in the context of the GLM. The onset, when each face–name association was presented, was defined as an event with a duration of 0 sec. Regressors in this model included the three facial expressions (Happy, Neutral, and Angry) and one no-response (NR) condition, which comprised encoding trials in which participants made no response in the encoding and/or retrieval phases or failed to judge the facial expression correctly in the encoding phase. Six head-motion parameters were also included in this model as confounding factors. Activation reflecting the processing of each facial expression (Happy, Neutral, and Angry) was computed by comparison with baseline using one-sample t tests, yielding a t statistic in each voxel. Three contrast images, one per facial expression (Happy, Neutral, and Angry), were created for each participant.
At the group-level (random-effect) analyses, the three contrast images (Happy, Neutral, and Angry) obtained by the individual-level analysis were analyzed with a two-way mixed ANOVA with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry), which was modeled by a flexible factorial design with a subject factor. Three types of analysis were performed. First, to identify regions associated with individual facial expressions, the main effect of facial expression (F test) was inclusively masked with pairs of t contrasts: (a) For happy expressions, the contrasts were Happy > Neutral and Happy > Angry (p < .05); (b) for angry expressions, they were Angry > Neutral and Angry > Happy (p < .05); and (c) for both happy and angry expressions (i.e., arousing expressions), the contrasts were Happy > Neutral and Angry > Neutral (p < .05). Second, to identify age-related decreases in activity, the main effect of age group (F test) was inclusively masked with the t contrast of Young > Old (p < .05). Finally, to investigate differential effects of facial expressions in young and older adults, the interaction of Age Group by Facial Expression (F test) was masked inclusively by two types of t contrast: (a) [(Happy > Neutral in Young) > (Happy > Neutral in Old)] and [(Happy > Angry in Young) > (Happy > Angry in Old)] (p < .05); and (b) [(Angry > Neutral in Young) > (Angry > Neutral in Old)] and [(Angry > Happy in Young) > (Angry > Happy in Old)] (p < .05).
In the foregoing analyses, the height threshold at the voxel level (p < .001) was corrected for multiple comparisons within the hypothesis-driven ROI (FWE, p < .05) with a minimum cluster size of two voxels. The ROI for the univariate analyses of the memory task was created by combining the regions identified in the functional localizer task with bilateral OFC defined anatomically in the AAL ROI package (Tzourio-Mazoyer et al., 2002). Anatomical sites showing significant activation were primarily identified with the SPM Anatomy toolbox (Eickhoff et al., 2005, 2007; Eickhoff, Heim, Zilles, & Amunts, 2006) and MRIcro (www.cabi.gatech.edu/mricro/mricro).
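Conceptually, the inclusive masking used here keeps only voxels that both survive the thresholded F test and show the expected direction in each masking t contrast. The sketch below illustrates that logic with placeholder p value maps and a simple uncorrected threshold; it omits the ROI-level FWE correction and cluster-extent criterion applied in the actual analysis.

```python
import numpy as np

# Placeholder voxelwise p value maps (in practice taken from the SPM group analysis):
# F_p for the main effect of Facial Expression, t1_p/t2_p for the two masking t contrasts
# (e.g., Angry > Happy and Angry > Neutral), plus a boolean hypothesis-driven ROI mask.
shape = (100, 100, 68)
rng = np.random.default_rng(1)
F_p, t1_p, t2_p = rng.uniform(size=(3,) + shape)
roi = np.zeros(shape, dtype=bool)
roi[40:60, 40:60, 20:40] = True

# Inclusive masking: F test voxels (here thresholded at p < .001, uncorrected for simplicity)
# must also show p < .05 in both directional t contrasts within the ROI.
angry_specific = roi & (F_p < 0.001) & (t1_p < 0.05) & (t2_p < 0.05)
print(int(angry_specific.sum()), "voxels surviving the conjunction")
```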
MVPA
MVPA was performed with the Pattern Recognition for Neuroimaging Toolbox (PRoNTo; Schrouff et al., 2013) Version 2.1, implemented in MATLAB (www.mathworks.com). In this analysis, we investigated how facial expressions were represented by activity patterns in ROIs related to the processing of faces and facial expressions and how these neural representations differed between young and older adults. The MVPA examined activity patterns in the OFC, pSTS, FFA, and AMY ROIs. Given that the right pSTS and FFA are more dominant than their left counterparts in the processing of faces (Ishai, Schmidt, & Boesiger, 2005; Puce et al., 1998; Kanwisher, McDermott, & Chun, 1997), the ROIs in these regions were defined only in the right hemisphere, whereas the OFC and AMY ROIs were defined bilaterally. Details of these ROIs were described above.
Before the MVPA, activation in individual trials was estimated with a separate GLM for each participant (Rissman, Gazzaley, & D'Esposito, 2004). In this model, activation in each trial was modeled by convolving a vector of onsets with a canonical HRF in the context of the GLM, with the trial onset set at the time of stimulus presentation and a duration of 0 sec. Six head-motion parameters were also included in this model as confounding factors. This model produced trial-by-trial beta estimates for the whole brain in each participant, and the beta images for individual trials were entered into the pattern classification analysis in PRoNTo.
In the MVPA with PRoNTo, a whole-brain mask image excluding voxels without beta values was first created for each participant, and the pattern classification was performed within this mask. Features were extracted in each ROI and centered on the mean of the training data for each voxel. Three binary classifications (Happy vs. Neutral, Happy vs. Angry, and Angry vs. Neutral) were performed by support vector machine classifiers with a linear kernel using all voxels of each ROI. Training and testing followed a leave-one-run-out cross-validation procedure, with three runs used as training data and one run as testing data. Mean balanced accuracy (BA) was computed for each ROI in each participant, and the mean BA values were tested with permutation tests, in which the pattern classification analyses were repeated 1000 times on data with the labels of the two classes randomly swapped. This procedure produced a null distribution of BA scores expected if the two facial expressions were not represented by activity patterns in the ROI, and it has been validated in other studies (Etzel, 2017; Haynes, 2015). The results were corrected with the false discovery rate (FDR; q < .05) to control for false positives (Benjamini & Hochberg, 1995). In addition, we confirmed the BA values with one-sample t tests (one-tailed) against the chance level (50%) in each age group, as has been conventionally done in functional neuroimaging studies.
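The classification scheme described above (linear support vector machine, leave-one-run-out cross-validation, balanced accuracy, label-permutation null distribution, and FDR correction) can be sketched with scikit-learn and statsmodels as below. This is an illustrative re-implementation under assumed data shapes, not the PRoNTo pipeline itself; the trial-by-voxel matrix, labels, and run indices are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score, permutation_test_score
from statsmodels.stats.multitest import multipletests

# Placeholder single-trial beta patterns for one ROI and one binary contrast
# (e.g., Happy vs. Angry): n_trials x n_voxels, with an encoding-run index per trial.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))
y = np.repeat(["Happy", "Angry"], 60)
runs = np.tile(np.repeat([1, 2, 3, 4], 15), 2)

clf = SVC(kernel="linear")
cv = LeaveOneGroupOut()                      # leave-one-run-out cross-validation

# Mean balanced accuracy (BA) across folds
ba = cross_val_score(clf, X, y, groups=runs, cv=cv, scoring="balanced_accuracy").mean()

# Permutation test: BA under 1000 random relabelings of the two classes
ba_perm, null_scores, p_value = permutation_test_score(
    clf, X, y, groups=runs, cv=cv, scoring="balanced_accuracy",
    n_permutations=1000, random_state=0)

# FDR (Benjamini-Hochberg) correction over the collection of ROI x contrast p values
p_values = [p_value]                         # in practice, one p value per ROI and contrast
reject, p_fdr, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(f"BA = {ba:.3f}, permutation p = {p_value:.3f}, FDR-significant: {bool(reject[0])}")
```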
Functional Connectivity Analysis
To investigate how functional connectivity related to memory for facial expressions was affected by aging, we analyzed the functional connectivity of HC, which is related to associative memory (for review, see Diana, Yonelinas, & Ranganath, 2007; Eichenbaum, Yonelinas, & Ranganath, 2007; Davachi, 2006), with left OFC, right pSTS, and right FFA as seed regions in each age group. These seeds were determined from the results of the functional localizer task, in which regions related to the processing of faces and facial expressions were identified. For the functional connectivity analysis, we employed a generalized form of context-dependent psychophysiological interaction (gPPI; McLaren, Ries, Xu, & Johnson, 2012). In preparation for the gPPI analysis, the four encoding runs were collapsed into one run, and trial-related regressors for six conditions, defined by facial expression (Happy, Neutral, and Angry) and subsequent memory performance during retrieval (Hit and Miss), were modeled by convolving onset vectors with a canonical HRF in the context of the GLM. The onset, when each stimulus was presented, was set as an event with a duration of 0 sec. The NR condition was also included in this model as a regressor, and six head-motion parameters were included as confounding factors.
Regions showing significant activation in the ROI analysis of the functional localizer task were defined as seed regions. The seeds in left OFC, right pSTS, and right FFA were defined as spheres with a 6-mm radius centered on the peak voxels identified in the functional localizer task. However, the left OFC seed VOI could not be extracted from the data of one young adult and one older adult; thus, the functional connectivity analysis with the left OFC seed was conducted with fMRI data from 26 young and 27 older adults.
The functional connectivity analysis was performed with the gPPI toolbox (www.nitrc.org/projects/gppi), which created a model at the individual level. The design matrix included three types of regressors: (1) condition-related regressors formed by convolving vectors of condition-related onsets with a canonical HRF, (2) the time series of BOLD signals extracted from the seed region, and (3) PPI regressors formed as the interaction between (1) and (2). In the present study, the gPPI toolbox produced a model including the PPI and condition-related regressors for the six experimental conditions (Happy-Hit, Happy-Miss, Neutral-Hit, Neutral-Miss, Angry-Hit, and Angry-Miss) and the NR condition, as well as the BOLD signals in each seed; six head-motion regressors were included as confounding factors. The parameters of this model were estimated for each participant. Linear contrasts were computed for each seed region, and regions showing a significant effect in the PPI regressor contrasts at the statistical threshold were considered to be functionally connected with that seed region. Contrast images of the PPI regressors reflecting functional connectivity during successful and unsuccessful encoding of the three facial expressions (Happy-Hit, Happy-Miss, Neutral-Hit, Neutral-Miss, Angry-Hit, and Angry-Miss) were obtained for each participant. In addition, PPI regressor contrasts were computed comparing successful with unsuccessful encoding for each facial expression (Happy-Hit > Happy-Miss, Neutral-Hit > Neutral-Miss, and Angry-Hit > Angry-Miss) and comparing facial expressions within the Hit trials (Happy-Hit > Neutral-Hit, Happy-Hit > Angry-Hit, Neutral-Hit > Happy-Hit, Neutral-Hit > Angry-Hit, Angry-Hit > Happy-Hit, and Angry-Hit > Neutral-Hit). These contrast images were used in the group-level analysis.
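The structure of this design matrix can be sketched as follows. This simplified illustration forms each PPI term as the product of an HRF-convolved condition regressor and the mean-centered seed time series; the actual gPPI toolbox additionally deconvolves the seed BOLD signal to the neural level before forming the interaction, and all time series here are placeholders.

```python
import numpy as np

n_scans = 300
rng = np.random.default_rng(2)

# Placeholder inputs: HRF-convolved condition regressors (one per condition, e.g., Happy-Hit)
# and the BOLD time series extracted from the seed VOI.
conditions = {name: rng.uniform(size=n_scans) for name in
              ["Happy-Hit", "Happy-Miss", "Neutral-Hit", "Neutral-Miss",
               "Angry-Hit", "Angry-Miss", "NR"]}
seed_bold = rng.normal(size=n_scans)

# PPI terms: condition regressor x mean-centered seed signal (no deconvolution in this sketch)
seed_centered = seed_bold - seed_bold.mean()
ppi_terms = {f"PPI_{name}": reg * seed_centered for name, reg in conditions.items()}

# Design matrix: condition regressors, seed time series, and PPI regressors
# (head-motion confounds would be appended in the actual model)
design = np.column_stack(list(conditions.values()) + [seed_bold] + list(ppi_terms.values()))
print(design.shape)    # (300, 15)
```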
In the group-level analysis, we investigated functional connectivity patterns specific to the successful encoding of each facial expression in each age group. For connectivity specific to the Happy-Hit condition, a one-sample t test on the Happy-Hit contrasts was inclusively masked by three contrasts: Happy-Hit > Happy-Miss, Happy-Hit > Neutral-Hit, and Happy-Hit > Angry-Hit (p < .05). Connectivity specific to the Angry-Hit condition was analyzed with a one-sample t test on the Angry-Hit contrasts, inclusively masked by Angry-Hit > Angry-Miss, Angry-Hit > Neutral-Hit, and Angry-Hit > Happy-Hit (p < .05). The same procedure was employed to identify functional connectivity specific to the Neutral-Hit condition. In these analyses, the height threshold at the voxel level (p < .001) was corrected for multiple comparisons within the HC ROI (Amunts et al., 2005) (FWE, p < .05) with a minimum cluster size of two voxels.
RESULTS
Behavioral Results
Table 3 summarizes young and older adults' behavioral data during (1) the encoding phase (RTs), (2) the retrieval phase (accuracy and RTs), and (3) the arousal/valence rating phase.
| | Young (SD) | | | Old (SD) | | |
| | Happy | Neutral | Angry | Happy | Neutral | Angry |
|---|---|---|---|---|---|---|
| Encoding | | | | | | |
| Response time (msec) | | | | | | |
| Subsequent hit | 1601.47 (311.69) | 1618.97 (297.08) | 1694.30 (348.17) | 1603.63 (328.74) | 1643.79 (394.56) | 1751.27 (342.78) |
| Subsequent miss | 1542.95 (267.28) | 1676.42 (331.24) | 1783.35 (325.37) | 1653.63 (356.20) | 1634.94 (368.10) | 1806.16 (335.80) |
| Retrieval | | | | | | |
| Proportion of recall accuracy for facial expressions | | | | | | |
| Hit/hit for names | 0.62 (0.16) | 0.64 (0.20) | 0.62 (0.18) | 0.45 (0.17) | 0.46 (0.12) | 0.38 (0.14) |
| Proportion of recognition accuracy for names | | | | | | |
| Hit for names | 0.78 (0.17) | 0.75 (0.16) | 0.74 (0.16) | 0.90 (0.12) | 0.89 (0.10) | 0.89 (0.11) |
| Miss for names | 0.22 (0.17) | 0.25 (0.16) | 0.26 (0.16) | 0.10 (0.12) | 0.11 (0.09) | 0.11 (0.11) |
| FA for names | 0.18 (0.17) | | | 0.56 (0.27) | | |
| CR for names | 0.82 (0.17) | | | 0.44 (0.27) | | |
| Number of trials ^a | | | | | | |
| Hit | 18.96 (7.35) | 17.74 (6.81) | 17.19 (7.88) | 15.29 (6.23) | 15.68 (4.06) | 12.07 (5.22) |
| Miss | 19.56 (6.85) | 19.52 (7.26) | 19.19 (6.57) | 22.46 (6.48) | 22.64 (3.80) | 23.54 (5.90) |
| FA for names | 6.59 (5.92) | | | 21.07 (9.97) | | |
| CR for names | 31.59 (7.57) | | | 16.75 (10.65) | | |
| Response time (msec) | | | | | | |
| Hit | 1868.34 (290.11) | 2038.44 (342.97) | 1948.71 (255.42) | 2098.08 (364.88) | 2280.68 (342.20) | 2222.67 (428.80) |
| Miss | 2252.41 (322.30) | 2204.01 (327.05) | 2244.64 (359.06) | 2398.73 (461.66) | 2326.42 (467.67) | 2383.47 (418.55) |
| FA for names | 2534.43 (405.88) | | | 2428.22 (448.14) | | |
| CR for names | 1773.10 (352.89) | | | 2151.80 (425.32) | | |
| Rating scores | | | | | | |
| Emotional arousal | 6.15 (0.97) | 1.75 (0.56) | 6.07 (0.96) | 6.42 (0.95) | 2.92 (1.75) | 6.46 (1.21) |
| Emotional valence | 7.28 (0.57) | 4.94 (0.18) | 2.83 (0.52) | 7.52 (0.51) | 4.83 (0.40) | 2.51 (0.57) |
SD = standard deviation; FA = false alarm; CR = correct rejection.
^a Hit trials were defined as correct recall of the facial expression associated with a learned name. Miss trials included trials in which the name was correctly recognized but the associated facial expression was recalled incorrectly or judged as “Unknown,” as well as trials in which the learned name was incorrectly judged as “New.”
Encoding
Encoding RTs.
These RTs correspond to the task of judging whether each face was happy, neutral, or angry. Encoding RTs were analyzed with a three-way mixed ANOVA with factors of Age Group (Young and Old), Facial Expression (Happy, Neutral, and Angry), and Subsequent Memory Performance (subsequent Hit and subsequent Miss). Post hoc tests in all analyses used the Bonferroni method. The ANOVA on RTs showed significant main effects of Facial Expression, F(2, 106) = 22.98, p < .001, ηp2 = .30, and Subsequent Memory Performance, F(1, 53) = 6.23, p = .016, ηp2 = .11, as well as reliable interactions between Facial Expression and Subsequent Memory Performance, F(2, 106) = 3.52, p = .033, ηp2 = .06, and among Age Group, Facial Expression, and Subsequent Memory Performance, F(2, 106) = 5.14, p = .007, ηp2 = .09. The remaining main effect and interactions were not significant. Post hoc tests for young adults showed that RTs for happy facial expressions were significantly faster than those for angry facial expressions in the subsequent Miss trials (p < .001), whereas RTs in the subsequent Hit trials did not differ significantly among facial expressions. Post hoc tests for older adults demonstrated that RTs for happy facial expressions were significantly faster than those for angry facial expressions in the subsequent Hit trials (p = .017) and that RTs for happy (p = .010) and neutral (p = .002) facial expressions were significantly faster than those for angry facial expressions in the subsequent Miss trials. No significant RT difference between the subsequent Hit and Miss trials was found for any facial expression.
Retrieval
Accuracy.
Recall accuracy for facial expressions was defined as the proportion of Hit trials among the trials with correctly recognized names (Hits for names) and was analyzed with a two-way mixed ANOVA with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry). The ANOVA demonstrated a significant main effect of Age Group, F(1, 53) = 31.43, p < .001, ηp2 = .37, but neither a main effect of Facial Expression, F(2, 106) = 2.73, p = .070, ηp2 = .05, nor an interaction between Age Group and Facial Expression, F(2, 106) = 1.14, p = .323, ηp2 = .02. The recall accuracies are illustrated in Figure 2, which shows a clear facial expression memory deficit in older adults compared with young adults.
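For illustration, a mixed ANOVA of this form can be run in Python with the pingouin package. The long-format data frame below is a placeholder with random accuracy values, and the column names are assumptions for the sketch, not the actual data files.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Placeholder long-format data: one row per participant x facial expression,
# with recall accuracy as the dependent variable.
rng = np.random.default_rng(0)
participants = [f"s{i:02d}" for i in range(55)]
df = pd.DataFrame({
    "participant": np.repeat(participants, 3),
    "age_group": np.repeat(["young"] * 27 + ["old"] * 28, 3),
    "expression": np.tile(["Happy", "Neutral", "Angry"], 55),
    "accuracy": rng.uniform(0.2, 0.8, 55 * 3),
})

# Two-way mixed ANOVA: between-subjects Age Group, within-subject Facial Expression
aov = pg.mixed_anova(data=df, dv="accuracy", within="expression",
                     subject="participant", between="age_group")
print(aov[["Source", "DF1", "DF2", "F", "p-unc", "np2"]])

# Bonferroni-corrected post hoc comparisons, as used for the behavioral analyses above
posthoc = pg.pairwise_tests(data=df, dv="accuracy", within="expression",
                            between="age_group", subject="participant", padjust="bonf")
```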
Recognition accuracy for names, defined as the proportion of Hit trials for names learned in the encoding run among all trials with learned names, was analyzed with a two-way mixed ANOVA with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry). In this ANOVA, the main effect of Age Group was significant, F(1, 53) = 15.39, p < .001, ηp2 = .23, but neither the main effect of Facial Expression, F(2, 106) = 2.05, p = .134, ηp2 = .04, nor the interaction between Age Group and Facial Expression, F(2, 106) = 0.72, p = .490, ηp2 = .00, was significant.
Retrieval RTs.
Retrieval RTs were analyzed with a three-way mixed ANOVA with factors of Age Group (Young and Old), Facial Expression (Happy, Neutral, and Angry), and Memory Performance (Hit and Miss). The ANOVA showed significant main effects of Age Group, F(1, 53) = 4.76, p = .034, ηp2 = .08; Facial Expression, F(2, 106) = 4.95, p = .009, ηp2 = .09; and Memory Performance, F(1, 53) = 50.31, p < .001, ηp2 = .49, as well as a significant interaction between Facial Expression and Memory Performance, F(2, 106) = 12.32, p < .001, ηp2 = .19. The other interactions were not significant. Post hoc tests showed that happy facial expressions were remembered faster than neutral (p < .001) and angry (p = .015) facial expressions only in the Hit trials. Post hoc tests also showed that happy and angry facial expressions were remembered faster in the Hit trials than in the Miss trials (p < .001), whereas no significant RT difference between the Hit and Miss trials was identified for neutral facial expressions. These RT results indicate that the facilitated retrieval of happy facial expressions was observed in both young and older adults (see Figure 2).
Ratings
Arousal and valence rating scores were analyzed with separate two-way mixed ANOVAs with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry). The ANOVA on arousal ratings revealed significant main effects of Age Group, F(1, 53) = 7.79, p = .007, ηp2 = .13, and Facial Expression, F(2, 106) = 306.00, p < .001, ηp2 = .85, as well as a reliable interaction between them, F(2, 106) = 3.61, p = .031, ηp2 = .06. Post hoc tests showed that happy and angry faces were rated as more arousing than neutral faces (p < .001 in both contrasts) and that neutral faces were rated as more arousing by older adults than by young adults (p = .003). The ANOVA on valence ratings yielded a nonsignificant main effect of Age Group, F(1, 53) = 1.38, p = .246, ηp2 = .03, a reliable main effect of Facial Expression, F(2, 106) = 1078.49, p < .001, ηp2 = .95, and a significant interaction between Age Group and Facial Expression, F(2, 106) = 3.84, p = .025, ηp2 = .07. Post hoc tests showed that happy faces were rated as more positive than neutral and angry faces and that neutral faces were rated as more positive than angry faces in both age groups (p < .001 in all contrasts). In the post hoc tests, no significant difference between young and older adults was found for any facial expression.
fMRI Results
Univariate Analysis
In the univariate ANOVA, we found significantly greater activation in pSTS, FFA, and AMY during the processing of angry facial expressions than during the processing of the other facial expressions, and AMY activity was also significantly greater for both angry and happy facial expressions than for neutral facial expressions. However, these regions showed neither a significant main effect of age group nor a significant interaction between age group and facial expression.
Encoding-related activation in the memory task was analyzed with a two-way mixed ANOVA with factors of Age Group (Young and Old) and Facial Expression (Happy, Neutral, and Angry). Three types of analysis were performed (see Methods section), and their results are displayed in Figure 3 and Table 4. In the first analysis, which focused on expression-specific effects, the right pSTS [two clusters: F(2, 106) = 19.02, p < .001, ηp2 = .26; F(2, 106) = 16.49, p < .001, ηp2 = .24], right FFA, F(2, 106) = 14.02, p < .001, ηp2 = .21, and bilateral AMY [left AMY: F(2, 106) = 16.23, p < .001, ηp2 = .23; right AMY: F(2, 106) = 16.02, p < .001, ηp2 = .23] showed activation that was significantly greater for angry expressions than for both happy and neutral expressions. In addition, the right AMY displayed greater activity for both happy and angry facial expressions than for neutral facial expressions, consistent with the arousal rating scores, F(2, 106) = 16.02, p < .001, ηp2 = .23. These results were corrected for multiple comparisons within the hypothesis-driven ROI (FWE, p < .05). No significant activation was identified for happy facial expressions compared with the other facial expressions. In the second analysis, which focused on age-related differences in activation, and in the third analysis, which focused on the interaction between age group and facial expression, no significant activation was found in any region.
| Regions | L/R | BA | MNI x | MNI y | MNI z | Z value | k |
|---|---|---|---|---|---|---|---|
| Main effect of facial expression (masked inclusively by Angry > Happy & Angry > Neutral) | | | | | | | |
| ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY) | | | | | | | |
| Middle temporal gyrus (pSTS) | R | 21/37 | 52 | −50 | 3 | 5.22 | 39 |
| Middle temporal gyrus (pSTS) | R | 21/22 | 52 | −37 | 3 | 4.86 | 23 |
| Fusiform gyrus (FFA) | R | 37 | 43 | −48 | −15 | 4.47 | 2 |
| AMY | L | | −21 | −6 | −17 | 4.82 | 7 |
| AMY | R | | 23 | −6 | −17 | 4.79 | 5 |
| Main effect of facial expression (masked inclusively by Happy > Angry & Happy > Neutral) | | | | | | | |
| ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY) | | | | | | | |
| No significant activation was identified | | | | | | | |
| Main effect of facial expression (masked inclusively by Angry > Neutral & Happy > Neutral) | | | | | | | |
| ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY) | | | | | | | |
| AMY | R | | 23 | −6 | −17 | 4.79 | 4 |
| Main effect of age group (masked inclusively by Young > Old) | | | | | | | |
| ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY) | | | | | | | |
| No significant activation was identified | | | | | | | |
| Interaction between facial expression and age group | | | | | | | |
| ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY) | | | | | | | |
| No significant activation was identified | | | | | | | |
BA = Brodmann area; k = cluster size; L = left; R = right.
MVPA
The MVPA demonstrated that activity patterns in pSTS discriminated between facial expressions significantly above chance in both age groups. In contrast, activity patterns in FFA classified facial expressions significantly only in young adults, and activity patterns in OFC classified fewer expression contrasts in older than in young adults. In AMY, classification accuracies for facial expressions were not significant in either age group.
The MVPA results are displayed in Figure 4 and Table 5. The accuracy of MVPA in classifying facial expressions during the encoding phase was analyzed separately in four ROIs: OFC, right pSTS, right FFA, and AMY. All significant results survived FDR correction (q < .05) to control false positives (Benjamini & Hochberg, 1995). Balanced accuracy scores in bilateral OFC showed that activation patterns in this region successfully classified happy versus angry faces in both young and older adults (Young: p = .014; Old: p < .001). In right pSTS, activation patterns successfully distinguished angry versus happy faces (Young: p < .001; Old: p < .001) and angry versus neutral faces (Young: p = .004; Old: p < .001) in both age groups. In contrast, only young adults displayed activation patterns that accurately classified happy versus neutral faces in OFC (p < .001), and angry versus happy (p = .016) or angry versus neutral faces (p = .013) in right FFA. Finally, neither age group displayed significant classification accuracy for facial expressions in AMY.
| ROI / Contrast | Balanced accuracy (SD): Young | Balanced accuracy (SD): Old | Permutation test p: Young | Permutation test p: Old | One-sample t test p: Young | One-sample t test p: Old |
| --- | --- | --- | --- | --- | --- | --- |
| OFC |  |  |  |  |  |  |
| Happy vs. Neutral | 0.55 (0.05) | 0.52 (0.07) | < .001ᵃ | .037 | < .001 | .040 |
| Happy vs. Angry | 0.53 (0.08) | 0.54 (0.09) | .014ᵃ | < .001ᵃ | .029 | .006 |
| Neutral vs. Angry | 0.52 (0.08) | 0.52 (0.05) | .108 | .044 | .176 | .019 |
| Right pSTS |  |  |  |  |  |  |
| Happy vs. Neutral | 0.52 (0.06) | 0.51 (0.05) | .065 | .163 | .039 | .084 |
| Happy vs. Angry | 0.56 (0.07) | 0.55 (0.06) | < .001ᵃ | < .001ᵃ | < .001 | < .001 |
| Neutral vs. Angry | 0.54 (0.06) | 0.57 (0.05) | .004ᵃ | < .001ᵃ | .001 | < .001 |
| Right FFA |  |  |  |  |  |  |
| Happy vs. Neutral | 0.50 (0.06) | 0.50 (0.08) | .604 | .396 | .625 | .431 |
| Happy vs. Angry | 0.53 (0.05) | 0.50 (0.07) | .016ᵃ | .416 | .004 | .454 |
| Neutral vs. Angry | 0.53 (0.06) | 0.51 (0.07) | .013ᵃ | .189 | .007 | .214 |
| AMY |  |  |  |  |  |  |
| Happy vs. Neutral | 0.51 (0.06) | 0.50 (0.06) | .182 | .395 | .169 | .398 |
| Happy vs. Angry | 0.50 (0.05) | 0.52 (0.07) | .464 | .080 | .473 | .068 |
| Neutral vs. Angry | 0.49 (0.06) | 0.50 (0.06) | .757 | .430 | .797 | .454 |
SD = standard deviation.
ᵃ Significant after FDR correction of the permutation test results (q < .05). p values are shown uncorrected (before FDR correction).
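To make the MVPA procedure concrete, the sketch below illustrates pairwise expression classification with balanced accuracy, a label-permutation test, and Benjamini–Hochberg FDR correction across contrasts. The toy data, the choice of a linear SVM, and the variable names are assumptions for illustration rather than the study's exact implementation.

```python
# Minimal sketch of pairwise expression classification in one ROI:
# balanced accuracy, label-permutation test, and FDR correction
# across contrasts. X holds trial-wise ROI activity patterns and
# y the expression labels (toy data here).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, permutation_test_score
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 300
X = rng.standard_normal((n_trials, n_voxels))             # ROI patterns (toy)
y = rng.choice(["happy", "neutral", "angry"], n_trials)   # expression labels

contrasts = [("happy", "neutral"), ("happy", "angry"), ("neutral", "angry")]
pvals = []
for a, b in contrasts:
    mask = np.isin(y, [a, b])
    score, _, p = permutation_test_score(
        LinearSVC(max_iter=5000), X[mask], y[mask],
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
        scoring="balanced_accuracy", n_permutations=1000, random_state=0)
    pvals.append(p)
    print(f"{a} vs. {b}: balanced accuracy = {score:.2f}, p = {p:.3f}")

# Benjamini-Hochberg FDR correction across the three contrasts (q < .05)
rejected, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(rejected, p_fdr)
```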
Functional Connectivity Analysis
The functional connectivity analysis showed that connectivity reflecting subsequent recollection of facial expressions was significant between HC and OFC for happy facial expressions in both age groups, between HC and FFA for happy and neutral facial expressions only in young adults, and between HC and pSTS for happy facial expressions only in older adults.
Previous studies have shown that successful memory encoding of faces is associated with increased functional connectivity between HC, a critical region for the recollection of episodic memories (for review, see Diana et al., 2007; Eichenbaum et al., 2007; Davachi, 2006), and cortical regions involved in the processing of faces and their expressions (Tsukiura & Cabeza, 2008, 2011; Dennis et al., 2008). Thus, for each age group and each facial expression, we assessed functional connectivity (gPPI) between HC and three cortical ROIs, the left OFC, right pSTS, and right FFA (seeds identified in the functional localizer task), that predicted subsequent recollection of facial expressions associated with names. Results of the functional connectivity analysis are illustrated in Figure 5, including Z values and coordinates of the regions showing the effects. In the left OFC, significant functional connectivity with HC was found for happy facial expressions in both age groups [Young: t(25) = 5.08, p < .001, d = 1.00; Old: t(26) = 4.92, p < .001, d = 0.95]. In the right pSTS, reliable functional connectivity with HC was observed for happy facial expressions only in the old group [left HC: t(27) = 4.77, p < .001, d = 0.90; right HC: t(27) = 5.23, p < .001, d = 0.99]. Finally, in the right FFA, significant functional connectivity with HC was found for both happy, t(26) = 6.21, p < .001, d = 1.20, and neutral facial expressions, t(26) = 8.00, p < .001, d = 1.54, only in young adults. All these functional connectivity effects were corrected for multiple comparisons within the HC ROI (FWE, p < .05).
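For readers unfamiliar with PPI-style analyses, the following simplified sketch shows how a condition-specific interaction regressor can be formed from a psychological (task) regressor and a physiological (HC seed) time course and entered into a GLM. Full gPPI implementations additionally deconvolve the seed signal, mean-center the psychological regressor, and model all task conditions; all inputs and names here are hypothetical.

```python
# Simplified sketch of a PPI-style interaction regressor for one condition
# (e.g., subsequently recollected happy trials). Toy inputs throughout.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 300

def hrf(t):
    # Simplified double-gamma hemodynamic response function
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

# Psychological regressor: impulses at condition onsets, HRF-convolved
onsets = np.array([20, 80, 140, 200, 260])            # hypothetical onsets (s)
boxcar = np.zeros(n_scans)
boxcar[(onsets / TR).astype(int)] = 1.0
psych = np.convolve(boxcar, hrf(np.arange(0, 32, TR)))[:n_scans]

# Physiological regressor: mean HC seed time course (toy data)
seed = np.random.default_rng(0).standard_normal(n_scans)

# PPI term: element-wise product of psychological and physiological signals
ppi = psych * seed

# GLM on a target ROI time course: the PPI beta indexes condition-dependent
# coupling between the seed (HC) and the target (e.g., OFC)
target = np.random.default_rng(1).standard_normal(n_scans)
X = np.column_stack([np.ones(n_scans), psych, seed, ppi])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI (coupling) beta: {betas[3]:.3f}")
```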
DISCUSSION
In terms of age effects, two sets of findings emerged from the present study. First, during the processing of facial expressions, univariate activity and MVPA discrimination in pSTS and AMY were similar in young and older adults, whereas MVPA activity patterns in FFA and OFC discriminated facial expressions less accurately in older than young adults. These results suggest that neural representations of facial expressions in FFA and OFC are affected by age-related dedifferentiation and that activity patterns in OFC reflect the age-related positivity effect. Second, functional connectivity predicting subsequent face recollection was significant between HC and OFC for happy facial expressions in both age groups, between HC and FFA for happy and neutral facial expressions only in young adults, and between HC and pSTS for happy facial expressions only in older adults. Some of these results suggest compensatory mechanisms and positivity effects in older adults. These two sets of findings are discussed in separate sections below.
Univariate and MVPA Results during the Perception of Emotional Facial Expressions
The first set of findings was that univariate activity and multivariate activity patterns in pSTS and AMY during the processing of facial expressions were similar in young and older adults, whereas in FFA and OFC, multivariate activity patterns discriminated facial expressions less accurately in older than young adults. These findings suggest that the contributions of pSTS and AMY to the processing of facial expressions are relatively preserved in older adults, whereas the representations of facial expressions in FFA and OFC are affected by dedifferentiation in older adults.
In univariate analyses, we found that for both young and older adults, AMY activity was enhanced by both happy and angry facial expressions, and that pSTS and FFA activity was increased by angry facial expressions. These findings are consistent with previous cognitive neuroscience studies. For example, AMY shows significantly greater activity for highly arousing faces than for neutral faces (Winston, O'Doherty, & Dolan, 2003; Yang et al., 2002; Breiter et al., 1996), and AMY lesions reliably impair the perception of negative facial expressions (Sato et al., 2002; Adolphs, Tranel, Damasio, & Damasio, 1994). The involvement of pSTS and FFA in the processing of emotional facial expressions has also been identified in prior studies (Sormaz, Watson, Smith, Young, & Andrews, 2016; Zhang et al., 2016; Wegrzyn et al., 2015; Harry et al., 2013; Said et al., 2010). The absence of age effects in the univariate analysis is consistent with evidence that the ability to discriminate facial expressions (Murphy, Millgate, Geary, Catmur, & Bird, 2019; D'Argembeau & van der Linden, 2004), the utilization of visual cues to discriminate facial expressions (Smith et al., 2018), and the neural mechanisms of processing facial expressions (Goncalves et al., 2018) are relatively preserved in older adults.
MVPA results showed that activity patterns in FFA successfully classified facial expressions in young but not in older adults. The significant MVPA classification in young adults fits with abundant evidence of the importance of FFA for processing facial expressions, including functional neuroimaging (Zhao et al., 2020; Wegrzyn et al., 2015; Skerry & Saxe, 2014; Harry et al., 2013; Fox, Moon, Iaria, & Barton, 2009) and prosopagnosia (Bentin, Degutis, D'Esposito, & Robertson, 2007) findings. The failure of FFA activity patterns to distinguish facial expressions in older adults agrees with evidence that FFA representations of facial identity display dedifferentiation in older adults (Lee et al., 2011; Goh et al., 2010). Functional neuroimaging results suggest that FFA represents the morphological differences that convey facial identity (for review, see Bernstein & Yovel, 2015). Thus, one possibility is that impaired representations of facial expressions in older adults stem from a deficit in processing facial identity, a well-documented deficit in older adults (Chaby, Narme, & George, 2011; Habak, Wilkinson, & Wilson, 2008; Boutet & Faubert, 2006). In the present study, we assessed the discrimination of facial expressions but not the discrimination of facial identities. A future study that examines both types of discrimination in the same participants could test the hypothesis that deficits in the two abilities interact in older adults.
In OFC, activity patterns in older adults distinguished between happy and angry facial expressions but not between happy and neutral facial expressions, whereas activity patterns in young adults allowed both distinctions. The finding that OFC representations in older adults could not distinguish happy facial expressions from emotionally ambiguous neutral facial expressions is consistent with the positivity effect, which a previous study observed as emotionally ambiguous stimuli being rated more positively by older than by young adults (Zebrowitz et al., 2017). It is a topic of debate whether the positivity effect in older adults reflects motivational differences in allocating attention to information or more basic deficits in the processing mechanisms of emotions (for review, see Ruffman, Henry, Livingstone, & Phillips, 2008; Mather & Carstensen, 2005). In line with the latter perspective, the present finding of age-related dedifferentiation in OFC is consistent with evidence that this region is anatomically impaired by aging (Shen et al., 2013; Salat et al., 2009; Lamar & Resnick, 2004; Tisserand et al., 2002). This OFC deficit might lead to a positive shift in older adults. This alternative is consistent with the finding that faces with negative facial expressions were rated as more approachable by patients with OFC lesions than by controls (Willis, Palermo, Burke, McGrillen, & Miller, 2010).
Functional Connectivity Predicting Subsequent Recollection of Facial Expressions
The second set of findings was that encoding-related functional connectivity was found between HC and OFC for happy faces in both age groups, between HC and pSTS for happy faces only in older adults, and between HC and FFA for happy and neutral faces only in young adults. Before discussing these findings, it is worth mentioning that angry facial expressions did not modulate encoding-related functional connectivity between HC and cortical regions, consistent with the lack of enhanced memory for angry faces. These results could be related to the use of a cued-recall test that does not display faces during retrieval, unlike the recognition test used in some studies. Consistent with this idea, several studies in which emotional faces were presented during retrieval found that angry expressions enhanced face memory performance (Keightley, Chiew, Anderson, & Grady, 2011; Sergerie, Lepage, & Armony, 2005; Foa, Gilboa-Schechtman, Amir, & Freshman, 2000), whereas others in which emotional faces were not presented during retrieval (e.g., cued-recall or source memory test) did not find the angry-related memory enhancement (Bowen & Kensinger, 2017; D'Argembeau & van der Linden, 2007; Shimamura, Ross, & Bennett, 2006; Fenker, Schott, Richardson-Klavehn, Heinze, & Duzel, 2005).
The finding that HC–OFC interactions contributed to the encoding of happy faces is consistent with our previous results (Tsukiura & Cabeza, 2008). As noted before, happy facial expressions have rewarding value in a social context (Hayward, Pereira, Otto, & Ristic, 2018; Yang & Urminsky, 2018), and OFC is involved in the processing of rewards (for review, see O'Doherty, 2004). Moreover, studies on the enhancement of episodic memory by monetary or social rewards have linked this enhancement to functional connectivity between HC and OFC (Sugimoto et al., 2021; Frank, Preston, & Zeithamova, 2019; Tsukiura & Cabeza, 2008, 2011; Shigemune et al., 2010). Thus, the present study replicates these findings by showing that HC–OFC interactions contribute to memory for happy facial expressions in both young and older adults.
In contrast with HC–OFC interactions, functional connectivity between HC and FFA contributed to memory for happy and neutral facial expressions in young but not older adults. The contribution of HC–FFA interactions to memory for faces and face-related associations in young adults has been reported in several studies (Liu, Grady, & Moscovitch, 2018; Summerfield et al., 2006; Sperling et al., 2003). Age-related reductions in functional connectivity between HC and FFA during encoding are consistent with a study on memory for face–scene associations (Dennis et al., 2008). It is possible that age-related decreases in HC–FFA interactions contribute to the impairment of face-related memories in older adults.
Finally, encoding-related functional connectivity of HC with pSTS was significant for happy facial expressions only in older adults. This additional contribution of pSTS could reflect a compensatory mechanism in older adults. Several previous studies have demonstrated that higher levels of neural activity or functional connectivity play a compensatory role in older adults (for review, see Cabeza et al., 2018; Sala-Llonch, Bartres-Faz, & Junqué, 2015; Cabeza, 2002). For example, compensatory functional connectivity between the medial temporal lobe and PFC was recruited by older adults during both encoding and retrieval of episodic memories (Dennis et al., 2008; Daselaar, Fleck, Dobbins, Madden, & Cabeza, 2006). In another fMRI study, interactions between HC and ventromedial PFC during the successful encoding of emotionally positive pictures were significantly stronger in older adults than in young adults (Addis et al., 2010). Thus, the encoding-related HC–pSTS functional connectivity for happy facial expressions in older adults could reflect an age-dependent compensatory mechanism for positive socioemotional value, related to the positivity effect.
Conclusion
In the present event-related fMRI study, we investigated age-related differences in neural representations and functional connectivity during the perception and subsequent memory of emotional facial expressions associated with names. First, during the perception of emotional facial expressions, univariate activity and multivariate activity patterns in pSTS and AMY were similar in young and older adults, whereas multivariate activity patterns in FFA and OFC classified facial expressions less accurately in older adults. The latter results suggest that neural representations of facial expressions in FFA and OFC are affected by age-related dedifferentiation, and that activity patterns in OFC reflect the positivity effect, a tendency of older adults to interpret neutral facial expressions as emotionally positive. Second, recollection-predicting functional connectivity was found between HC and OFC for happy facial expressions in both age groups, between HC and FFA for happy and neutral facial expressions only in young adults, and between HC and pSTS for happy facial expressions only in older adults. These findings could reflect compensatory mechanisms and positivity effects in older adults. Taken together, the results of the present study clarify the effects of aging on the neural representations and mechanisms involved in perceiving and encoding facial expressions.
Acknowledgments
We would like to thank Drs. Nobuhito Abe, Kohei Asano, and Ryusuke Nakai, and Mses. Aiko Murai, Maki Terao, and Saeko Iwata for their technical assistance in the MRI scanning and data analysis. This work was supported by JSPS KAKENHI grant numbers JP18H04193 (T. T.) and JP20H05802 (T. T.). The research experiments were conducted using an MRI scanner and related facilities at Kokoro Research Center, Kyoto University. The authors declare no competing financial interests.
Reprint requests should be sent to Takashi Tsukiura, Department of Cognitive and Behavioral Sciences, Graduate School of Human and Environmental Studies, Kyoto University, Yoshida-Nihonmatsu-Cho, Sakyo-ku, Kyoto 606-8501, Japan, or via e-mail: [email protected].
Author Contributions
Reina Izumika: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Software; Validation; Visualization; Writing–Original draft. Roberto Cabeza: Supervision; Visualization; Writing–Review & editing. Takashi Tsukiura: Conceptualization; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Software; Supervision; Validation; Visualization; Writing–Review & editing.
Funding Information
Takashi Tsukiura, Japan Society for the Promotion of Science (https://dx.doi.org/10.13039/501100001691), grant number: JP18H04193. Takashi Tsukiura, Japan Society for the Promotion of Science (https://dx.doi.org/10.13039/501100001691), grant number: JP20H05802.
Diversity in Citation Practices
Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.