Emotions modulate behavioral priorities based on exteroceptive and interoceptive inputs, and the related central and peripheral changes may be experienced subjectively. Yet, it remains unresolved whether the perceptual and subjectively felt components of emotion processes rely on shared brain mechanisms. We applied functional magnetic resonance imaging, a rich set of emotional movies, and high-dimensional, continuous ratings of perceived and felt emotions in the movies to investigate their cerebral organization. Emotions evoked during natural movie scene perception were represented in the brain across numerous spatial scales and patterns. Perceived and felt emotions generalized both between individuals and between different stimuli depicting the same emotions. The neural affective space demonstrated an anatomical gradient from emotion-general responses in polysensory areas and default mode regions to more emotion-specific, discrete processing in subcortical regions. Differences in brain activation during felt and perceived emotions suggest that temporoparietal areas and the precuneus play a key role in evaluating the affective value of sensory input, whereas the generation of subjective emotional states is associated with additional, significantly stronger recruitment of the temporoparietal junction, anterior prefrontal cortices, cerebellum, and thalamus. These data reveal the similarities and differences of domain-general and emotion-specific affect networks in the brain during a wide range of perceived and felt emotions.

Emotions promote survival by monitoring external and internal challenges and coordinating automatic changes in peripheral physiology, behavior, motivation, and conscious experiences (i.e., feelings) (Anderson & Adolphs, 2014; LeDoux, 2012; Scherer, 2009). Emotion circuits span the whole brain, with the core regions residing in the limbic and paralimbic structures (Kober et al., 2008). Despite significant advances in mapping the neural basis of emotions, two critical questions remain unanswered: first, how distinct emotional states—such as anger, fear, or disgust—are coordinated in the brain, and second, how the brain circuits that extract and recognize emotional information from the external input relate to those that subsequently generate the phenomenological experience of emotions.

Emotions are organized categorically across multiple levels ranging from brain activity (Kotz et al., 2013; Kragel & LaBar, 2016; Peelen et al., 2010; Saarimäki et al., 2016, 2018), autonomic activation (Kragel & LaBar, 2013), somatosensory and interoceptive experience (Nummenmaa et al., 2014), and facial, bodily, or vocal behaviors (Calder et al., 1996; Sauter et al., 2010; Witkower & Tracy, 2019) to subjective feelings (Toivonen et al., 2012). Recent behavioral studies suggest that human affective states span almost 30 distinct emotions (Cowen & Keltner, 2017). Yet, most imaging studies have investigated either lower-order emotional dimensions (valence and arousal) (Nummenmaa et al., 2012) or the canonical six “basic” emotions (Kragel & LaBar, 2015; Lettieri et al., 2019; Putkinen et al., 2021; Saarimäki et al., 2016, 2022), only rarely extending the emotion space beyond these categories (Du et al., 2023; Horikawa et al., 2020; Koide-Majima et al., 2020; Lettieri et al., 2024; Saarimäki et al., 2018).

Emotional processing begins with the encoding of the survival impact of sensory information based on, for example, the presence of predators or virus vectors, or others’ emotional expressions and behavior (Adolphs, 2002; de Gelder & Vroomen, 2000; Witkower & Tracy, 2019) and contextual cues (Aviezer et al., 2008). These cascading evaluative processes may trigger the central and peripheral emotional responses and subsequently also the subjective experience or feeling of emotion (see, e.g., Anderson & Adolphs, 2014; Barrett et al., 2007; Scherer, 2009). Emotional perception and feelings are associated with activation in overlapping brain regions (Oosterwijk et al., 2017; Volynets et al., 2020; Wicker et al., 2003). However, an event may trigger emotional states that are either congruent (e.g., seeing a smiling baby makes you happy) or incongruent (e.g., seeing a smiling villain makes you scared) with the event. Similarly, in anxiety disorders, emotional perception might be intact while the resulting feeling is disproportionally biased toward anxious or fearful feelings (Craske & Stein, 2016). Thus, to map the neural basis of emotional perception and feelings, we need to establish accurate stimulus models distinguishing these two aspects independently. Yet, human neuroimaging studies rarely differentiate perceptual versus experiential emotion processes (Adolphs, 2017).

Movies provide a powerful way to portray a wide range of human emotions. The characters may be involved in nuanced and powerful emotional states (such as Rick Blaine on the airstrip at the end of Casablanca or Rose DeWitt Bukater onboard the sinking passenger liner in Titanic), and such scenes also elicit strong and consistent emotions in the viewers (Adolphs et al., 2016; Gross & Levenson, 1995; Saarimäki, 2021; Westermann et al., 1996). Thus, carefully curated cinematic stimuli allow mapping the high-dimensional representation of both perceived and experienced emotions in the viewer’s brain.

In this study, we modeled the high-dimensional organization of perceived and felt emotions in the human brain and tested how these models generalize across stimuli and participants. We showed our participants 2 hours of emotional movies while measuring their brain activity with functional magnetic resonance imaging. We selected a wide range of emotion categories to cover a high-dimensional affective space (Fig. 1A) and 2 hours of emotional movie scenes from an existing emotion-elicitation database (Fig. 1B; Schaefer et al., 2010). We fitted emotion feature models derived from the dynamic emotion ratings to the fMRI data and adopted a cross-validation scheme to evaluate the generalizability of the emotion models across participants and movie stimuli (Fig. 1C, D). This allowed us to directly compare the neural basis of perceived and felt emotions using the same stimuli. The results show that the movies elicited consistent neural responses to a wide range of perceived and felt emotions that generalized across stimuli and participants. The cerebral topographies of perceived and felt emotions were partly separate, and the neural emotion clusters only partially corresponded to the behavioral emotion clusters.

Fig. 1.

Methodological pipeline. (A) Candidate emotions were selected based on previous studies. (B) A total of 39 out of 70 movie clips with total duration of >2 hours were selected from (Schaefer et al., 2010) to elicit a wide range of emotions. (C) Dynamic ratings for perceived and felt emotion features were collected during movie viewing. (D) The reliability of the ratings of the emotion features was evaluated and only the 46 most reliable features were included in the stimulus models. (E) The emotion models were fit to the blood oxygenation level dependent (BOLD) functional data using a cross-validation scheme.


2.1 Generating the high-dimensional emotional space with movies

We initially selected a wide range of emotion categories to cover a high-dimensional affective space (Fig. 1A). First, we compiled an original list of 128 emotion categories based on previous studies (Adolphs, 2002; Cowen & Keltner, 2017; Saarimäki et al., 2018; Skerry & Saxe, 2015). Next, we translated the emotion categories into Finnish using a glossary of Finnish emotion words (Tuovila, 2005). We then conducted a pilot study where 25 female volunteers rated the similarity between emotion categories. The ratings were collected using a modified online Q-sort where participants organized the categories into piles based on their felt similarities (https://version.aalto.fi/gitlab/eglerean/sensations; Nummenmaa et al., 2018). Based on the average similarities, we selected a final list of 63 emotion categories that covered the whole emotion space (Supplementary Table S2; Supplementary Fig. S1).

Next, we selected a set of movie stimuli from an existing database of emotional Hollywood movies (Fig. 1B; Schaefer et al., 2010). First, we excluded black-and-white movies to minimize low-level visual differences between scenes. Second, we excluded multiple clips with the same identifiable characters from a single movie. Third, to ensure that the scenes elicit the desired emotions, we collected pilot emotional intensity ratings from 13 Finnish-speaking female volunteers. These raters evaluated the experience of 63 emotion categories (scale: 0 = not at all, 4 = extremely much) while viewing the scenes in 3–10-second segments (for more details, see Supplementary Note 1). This selection procedure led to a final set of 39 scenes (length 0:16–6:57; total duration 114 minutes; see scene list in Supplementary Table S3). For the fMRI study, we divided the movie scenes into five runs of similar emotional content as defined by the pilot ratings (8 scenes in runs 1–4 and 7 scenes in run 5; for a list of movies in each run and their mean emotion ratings, see Supplementary Tables S4 and S5).

2.2 Generating emotion models from ratings of perceived and felt emotions

We collected dynamic ratings of perceived and felt emotions during movie viewing using an online rating tool from another independent sample of 16 Finnish-speaking female volunteers (Fig. 1C). The final selection of 39 movie scenes was shown, and ratings were collected in the same way as in the pilot study. The participants could replay each clip as many times as they wanted. We collected ratings of perceived and felt emotions from the same participants on separate runs and counterbalanced the order of ratings (i.e., perceived or felt first). When rating the perceived emotions, participants were instructed to rate the intensity of the emotions displayed and experienced by the characters. When rating the experienced emotions, participants were instructed to rate the intensity of the emotions they experienced while viewing the clip. We created the dynamic emotion models for each movie scene by linearly interpolating the ratings between the end-points of the short clips. All ratings were set to zero at the beginning of each scene. Because of the low and variable sampling rate and the slow changes of the interpolated ratings, we approximated the hemodynamic response function (HRF) as a single gamma function, excluding the undershoot included in the canonical double-gamma HRF.
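For illustration, the regressor construction can be sketched as follows; this is a minimal example rather than the code used in the study, and the gamma parameters, interpolation grid, and function names are assumptions.

```python
# Minimal sketch, not the authors' code: build one emotion regressor by linearly
# interpolating the sparse clip end-point ratings to the fMRI sampling grid and
# convolving with a single-gamma HRF (no undershoot). Shape/scale of the gamma
# and the peak time are illustrative assumptions.
import numpy as np
from scipy.stats import gamma
from scipy.interpolate import interp1d

TR = 2.4  # repetition time in seconds, as in the acquisition

def single_gamma_hrf(tr=TR, duration=30.0, shape=6.0, scale=1.0):
    """Single-gamma HRF sampled at the TR; the undershoot is deliberately omitted."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, a=shape, scale=scale)
    return h / h.sum()

def emotion_regressor(rating_times, rating_values, n_volumes, tr=TR):
    """Interpolate clip end-point ratings to volume times and convolve with the HRF."""
    vol_times = np.arange(n_volumes) * tr
    # Ratings start from zero at the beginning of each scene (handled when building
    # rating_times/rating_values); values are linearly interpolated in between.
    f = interp1d(rating_times, rating_values, bounds_error=False, fill_value=0.0)
    upsampled = f(vol_times)
    return np.convolve(upsampled, single_gamma_hrf(tr), mode="full")[:n_volumes]
```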

Next, we performed a reliability analysis to ensure that only reliably evoked emotions would be included in the stimulus models (Fig. 1D). We used a combined reliability measure considering the number of raters that reported perceiving / feeling each emotion, the number of time points when the emotion was detected, and the average intersubject correlation of the emotion ratings. We first calculated the percentage of raters that gave non-zero ratings for each emotion for at least one time point for each run. Second, we calculated the number of time points that received non-zero ratings from at least one rater. Third, we calculated the rating-wise mean intersubject correlation between all pairs of raters. Finally, we calculated the geometric mean of these three measures to derive a joint reliability value for each emotion. To include emotions that were consistently rated for at least one task, we calculated the mean reliability across runs for each task separately and used the higher of these values for each emotion to select the emotions to be included in the subsequent analyses. We evaluated the chance level of reliability as the combined reliability for permuted data. Each permutation consisted of shuffling the values between emotions independently for the three aforementioned measures and recalculating the reliability. The 95th percentile of this distribution across iterations, emotions, and tasks was used as a statistical threshold for defining the reliable emotions used in the data analysis.
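A minimal sketch of this reliability computation is shown below, assuming the three per-emotion summary measures are already available; the function names and the clipping of negative intersubject correlations are assumptions.

```python
# Rough illustration: combined reliability per emotion as the geometric mean of
# (i) the fraction of raters giving any non-zero rating, (ii) the fraction of time
# points with a non-zero rating from at least one rater, and (iii) the mean pairwise
# intersubject correlation of the ratings, plus a permutation-based chance level.
import numpy as np

def combined_reliability(frac_raters, frac_timepoints, mean_isc):
    """Geometric mean of the three per-emotion measures (arrays of length n_emotions)."""
    return np.cbrt(frac_raters * frac_timepoints * np.clip(mean_isc, 0.0, None))

def reliability_threshold(frac_raters, frac_timepoints, mean_isc,
                          n_perm=1000, q=95.0, seed=0):
    """Chance level: shuffle each measure across emotions independently, take the 95th percentile."""
    rng = np.random.default_rng(seed)
    null = np.concatenate([
        combined_reliability(rng.permutation(frac_raters),
                             rng.permutation(frac_timepoints),
                             rng.permutation(mean_isc))
        for _ in range(n_perm)
    ])
    return np.percentile(null, q)
```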

2.3 fMRI participants and experimental design

Fifty right-handed Finnish-speaking healthy female volunteers (mean age 24.9 ± 4.4, range 20–38 years) with normal or corrected to normal vision participated in the study. No participants had current psychiatric conditions or medication affecting the central nervous system. All participants gave written informed consent according to the Declaration of Helsinki and were compensated for their time and travel expenses. Aalto University’s ethics committee approved the study protocol.

Functional magnetic resonance imaging (fMRI) was performed on two separate days. In the first fMRI session, we obtained structural images and two functional runs of movie scenes. In the second session, we obtained three functional runs of movie scenes.

Participants watched the movie scenes without a specific task and were instructed only to watch them as they would, for example, watch movies on YouTube. Each of the five runs consisted of 7–8 movie scenes and lasted for approximately 23 minutes (range 22–24 minutes). The order of movie scenes within a run was fixed for all participants, and the order of runs was counterbalanced across participants. A run started with a fixation cross presented for 9.6 seconds (i.e., 4 TRs), followed by the first scene. The movies were played one after another without gaps. After the last movie of a run, a fixation cross was presented for 9.6 seconds.

Sound was delivered through Sensimetrics S14 insert earphones (Sensimetrics Corporation, Malden, Massachusetts, USA). Each participant’s sound level was adjusted individually to be loud enough over the scanner noise. Visual stimuli were back-projected on a semi-transparent screen using a Panasonic PT-DZ110XEJ data projector (Panasonic, Osaka, Japan) and via a mirror to the participant. Stimulus presentation was controlled with Presentation software (Neurobehavioral Systems Inc., Albany, CA, USA). An fMRI-compatible face camera (MCR Systems Ltd, Leicester, UK) was used to ensure that participants had their eyes open during the whole scan.

2.4 fMRI data acquisition and preprocessing

We collected MRI data with a 3T Siemens Magnetom Skyra scanner at the Advanced Magnetic Imaging Centre, Aalto Neuroimaging, Aalto University. To improve the binocular field of view, we used a 30-channel Siemens receiving head coil modified from a standard 32-channel coil by removing the two coil elements surrounding the eyes. We collected whole-brain functional scans using a whole-brain T2*-weighted EPI sequence with the following parameters: 44 axial slices, interleaved order (odd slices first), TR = 2.4 seconds, TE = 24 ms, flip angle = 70°, voxel size = 3.0 x 3.0 x 3.0 mm3, matrix size = 64 x 64 corresponding to FOV 192 x 192 mm2, and PAT2 parallel imaging. To avoid the signal from fat tissue, we used a custom-modified bipolar water excitation radio frequency pulse. Finally, we collected high-resolution anatomical images with isotropic 1 x 1 x 1 mm3 voxel size using a T1-weighted MP-RAGE sequence.

For each blood oxygenation level dependent (BOLD) fMRI run, we performed the preprocessing with fMRIprep version 1.1.8 (Esteban et al., 2019; RRID:SCR_016216). First, we used fMRIprep to prepare a reference volume and its skull-stripped version. Next, we co-registered the BOLD reference to the T1w reference image using bbregister (FreeSurfer), which implements boundary-based registration (Greve & Fischl, 2009). Co-registration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters for the BOLD reference (transformation matrices and six rotation and translation parameters) were estimated before spatiotemporal filtering using mcflirt (FSL 5.0.9; Jenkinson et al., 2002). BOLD runs were slice-time corrected using 3dTshift from AFNI (Cox & Hyde, 1997; RRID:SCR_005927). The resulting slice-timing corrected BOLD time series were resampled onto their original, native space by applying a single, composite transform correcting for head motion and susceptibility distortions. Next, we applied spatial smoothing with an isotropic Gaussian kernel of 6 mm FWHM (full width at half maximum). “Non-aggressively” denoised runs were then produced, and the estimated noise regressors were retained for nuisance regression in the subsequent analyses. Finally, the BOLD time series were resampled to MNI152NLin2009cAsym standard space, generating a preprocessed BOLD run in the corresponding space.

After preprocessing, we extracted the voxel-wise BOLD time series within the group brain mask and converted them to percent signal change for each participant. Next, we calculated the mean region of interest (ROI) time series in regions defined with the Brainnetome (Fan et al., 2016) and cerebellar connectivity (Diedrichsen et al., 2009, 2011) atlases covering the cerebral cortex, subcortical regions, and the cerebellum. We included 273 regions in the analyses after omitting one cerebellar ROI falling outside the group brain mask. Signals extracted from deep white matter and cerebrospinal fluid, the six motion time series and their second-order effects, and linear trends were regressed from the ROI-level data. To match the frequency content of the slowly evolving emotion time series (emotion event lengths ranged from 20 to 100 seconds), we band-pass filtered the data to periods between 200 seconds (twice the length of the longest event) and 20 seconds (i.e., 0.005 to 0.05 Hz). We employed a finite impulse response filter designed with the Parks-McClellan algorithm, with a ripple of <1 dB in the passband and an attenuation of 40 dB in the stopbands. The transition bands extended from ~−24 dB at 0 Hz in the lower stopband and up to 0.0613 Hz in the upper stopband.
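As an illustration of these denoising steps, the following sketch regresses confounds from the ROI time series and applies a zero-phase FIR band-pass filter in the stated 0.005–0.05 Hz band; the filter order, transition-band edges, and function names are assumptions rather than the exact parameters used in the study.

```python
# Illustrative sketch of ROI-level denoising: confound regression followed by a
# zero-phase FIR band-pass filter designed with the remez (Parks-McClellan) routine.
import numpy as np
from scipy.signal import remez, filtfilt

TR = 2.4
FS = 1.0 / TR  # sampling rate in Hz

def regress_confounds(roi_ts, confounds):
    """Remove confound time series (WM/CSF, motion and its squares, linear trend) by least squares."""
    X = np.column_stack([np.ones(len(roi_ts)), confounds])
    beta, *_ = np.linalg.lstsq(X, roi_ts, rcond=None)
    return roi_ts - X @ beta

def bandpass_emotion_band(ts, numtaps=101):
    """Zero-phase band-pass filter passing periods between 200 and 20 seconds (0.005-0.05 Hz)."""
    # band edges in Hz: stop / pass / stop; the transition edges are assumed values
    bands = [0.0, 0.002, 0.005, 0.05, 0.061, FS / 2]
    desired = [0, 1, 0]
    taps = remez(numtaps, bands, desired, fs=FS)
    return filtfilt(taps, [1.0], ts, axis=0)
```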

2.5 Fitting the emotion models

The code used for the analyses is provided in the Supplementary Materials.

We tested both the fit of the emotion models to brain activity and the fit of brain activity to the emotion models using the same fitting and cross-validation scheme. We estimated the beta weights of the emotion (or brain) models with least-squares fitting after removing the mean of the signals and models (Fig. 1E). For between-runs generalization, we fitted the model to the mean standardized activity of all participants in the training runs (three runs). We then tested the model fit on the individual activity time series of the participants in the test runs (the remaining two runs). For emotion model prediction, the cross-validation was similar, except that we used the mean brain activity over participants in the ROIs to predict the emotion ratings. The process was repeated for all combinations of training and test runs (10 in total). We then contrasted the fits of the full models of felt and perceived emotions across participants with paired t-tests. Significance of the test statistics was evaluated with a permutation test where the order of the model fits of the felt and perceived emotion models was randomized for every participant and the paired t-test was repeated 1000 times. The 95th percentile of the maximum statistics across iterations was used as the threshold for statistical significance to control the family-wise error rate (FWER). Next, for within-run generalization, we used 10-fold cross-validation across participants. Here, we fitted the model to the mean activity of 45 participants, and the fit of the predicted activity was tested on the individual activity of the left-out participants. This process was repeated for each of the ten cross-validation folds and ten combinations of runs. We also tested competing regression models, such as support vector regression with linear and radial basis function kernels, with and without automatic kernel scaling and with and without a ridge penalty for model dimensionality. The results in the test runs were very similar across all models, although training accuracies varied between models. On average, simple linear regression performed as well as, and sometimes better than, the competing models. Therefore, we focus only on the linear regression results in the rest of the manuscript.
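A minimal sketch of the between-runs generalization scheme is shown below, assuming run-wise design matrices and ROI time series are available; it illustrates only the ordinary least-squares variant and is not the study's exact implementation.

```python
# Sketch: OLS fitting of the emotion design matrix to ROI activity in three training
# runs, evaluated by correlating predicted and observed signals in the two held-out
# runs, averaged over the 10 train/test combinations.
import numpy as np
from itertools import combinations

def fit_ols(X, y):
    """OLS beta weights after removing the mean of the model columns and the signal."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    return beta

def between_run_generalization(models, signals, n_runs=5, n_train=3):
    """models[r]: (time x emotions) design matrix; signals[r]: (time x ROIs) activity for run r."""
    scores = []
    for train in combinations(range(n_runs), n_train):
        test = [r for r in range(n_runs) if r not in train]
        beta = fit_ols(np.vstack([models[r] for r in train]),
                       np.vstack([signals[r] for r in train]))
        Xte = np.vstack([models[r] for r in test])
        yte = np.vstack([signals[r] for r in test])
        pred = (Xte - Xte.mean(axis=0)) @ beta
        yc = yte - yte.mean(axis=0)
        # column-wise (ROI-wise) correlation between predicted and observed activity
        r = np.sum(pred * yc, axis=0) / (np.linalg.norm(pred, axis=0) * np.linalg.norm(yc, axis=0))
        scores.append(r)
    return np.mean(scores, axis=0)  # mean over the 10 train/test combinations
```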

To test whether low-level and semantic stimulus features could predict the fit, we repeated the same fitting process for the emotion models extended with two low-level visual features (amplitude of high spatial frequencies and differential energy between subsequent frames), 11 semantic visual features, and one auditory feature (root-mean-squared power) from the movies. The semantic features were based on the panoptic segmentation of video frames at 1-second intervals. The video frames were segmented using a pre-trained panoptic feature pyramid network (Kirillov et al., 2019) from the Detectron2 Python library (https://github.com/facebookresearch/detectron2/tree/main). The default detection threshold of 0.5 was used. The initial 136 segmented categories were combined into 9 broader categories (Human, Vehicle/Street, Animal, Object, Food/Utensil, Furniture/Appliance, Floor/Ground, Buildings, Background/Vegetation) to simplify the semantic model and to ensure a sufficient occurrence rate in the stimuli. Finally, the total percentage of image area covered by each semantic category was used as the value of that feature in the analyses. A pre-trained RetinaFace-10GF deep face detection model from the InsightFace Python library (https://github.com/deepinsight/insightface/tree/master) was used to detect facial keypoints (Deng et al., 2020; Guo et al., 2021). The default detection threshold of 0.5 was used. The face segment was then identified as the rectangle enclosing all the facial keypoints (Keles et al., 2022). The size and number of faces detected in the video frames by the face-detection algorithm were included as additional features in the semantic model. Example code for the image segmentation is available in an online repository (https://github.com/santavis/social-vision-in-cinema/tree/main/python_scripts). The resulting stimulus feature model time series were added to a combined emotion and stimulus features model.
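The low-level features can be illustrated with the following sketch; these definitions are plausible readings of the feature names rather than the exact computations used in the study.

```python
# Assumed definitions of the three low-level stimulus features per time bin:
# amplitude of high spatial frequencies, differential energy between subsequent
# frames, and root-mean-squared audio power.
import numpy as np

def high_sf_amplitude(frame_gray, cutoff=0.5):
    """Mean FFT amplitude above a relative spatial-frequency cutoff (cutoff is an assumption)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(frame_gray)))
    h, w = frame_gray.shape
    yy, xx = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    radius = np.sqrt(xx ** 2 + yy ** 2)
    return F[radius > cutoff].mean()

def frame_differential_energy(prev_gray, curr_gray):
    """Mean squared luminance difference between subsequent frames."""
    return np.mean((curr_gray.astype(float) - prev_gray.astype(float)) ** 2)

def rms_audio_power(audio_chunk):
    """Root-mean-squared power of the audio samples within a time bin."""
    return np.sqrt(np.mean(np.asarray(audio_chunk, dtype=float) ** 2))
```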

2.6 Brain responses to individual emotions

To evaluate the brain activity elicited by individual emotions rather than the entire combined models for felt and perceived emotions, we calculated the correlations between the individual emotions and ROI activity time series for each run and participant separately. We then tested the model fits for each run as well as the average across runs with one sample t-tests. To evaluate the significance using time series with a similar autocorrelation structure, we calculated null correlations with the same procedure after first randomly circularly shifting the model time series by at least 20 samples forward or backward 100 times, saving the correlation values for each iteration. For the final statistics, we then randomly permuted the emotion labels and the circularly shifted models, calculated the t-statistics across subjects, and repeated this process 1000 times. To control the FWER, we saved the maximum value observed over ROIs and runs and used the 97.5th percentile of the observed values as the two-tailed FWER corrected threshold at p < .05.
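A minimal sketch of the circular-shift null distribution for a single emotion regressor and ROI time series is given below; the function name and defaults are illustrative.

```python
# Sketch: the emotion regressor is circularly shifted by at least 20 samples so that
# its autocorrelation structure is preserved while its alignment with the ROI time
# series is broken.
import numpy as np

def circular_shift_nulls(model_ts, roi_ts, n_shifts=100, min_shift=20, seed=0):
    """Null correlations between an ROI time series and randomly shifted regressors."""
    rng = np.random.default_rng(seed)
    n = len(model_ts)
    nulls = np.empty(n_shifts)
    for i in range(n_shifts):
        shift = int(rng.integers(min_shift, n - min_shift))  # >= 20 samples in either direction
        nulls[i] = np.corrcoef(np.roll(model_ts, shift), roi_ts)[0, 1]
    return nulls
```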

2.7 Reliability of the neural responses to emotion categories

To evaluate the reliability of emotion responses, we calculated a spatial correlation matrix for the whole-brain activation pattern elicited by each emotion between all combinations of training and test sets used in the between-runs cross-validation. That is, we calculated the spatial correlation of emotion responses across ROIs, comparing the responses in the training set to those in the non-overlapping test set. This was critical to avoid circularity, as correlating the spatial activation pattern of two emotions within the same run is driven directly by the correlation between the two ratings. By contrast, calculating the correlations between runs only evaluates the shared brain responses elicited by the same emotion categories across different stimulus contexts. A mean correlation matrix was then produced by averaging the correlation matrices over the 10 run combinations. The mean cross-validated spatial correlation matrix was used for evaluating the reliability of brain-wide responses to emotions and for the subsequent brain-based clustering. The reliability of the brain activity patterns is represented by the diagonal elements of this matrix (spatial correlation of the activity pattern elicited by the same emotion across runs), while the off-diagonal elements were used to evaluate the similarity between emotions across different stimuli. Otherwise, the emotion clustering employed the standard hierarchical clustering procedure described below. Additionally, to reduce the effects of non-significant activations on the brain-wide reliability measure, we repeated a similar analysis using generalized (two-sided) Dice indices of the thresholded activity maps. This procedure provided highly similar results to the correlation matrices and is therefore omitted from the subsequent sections.
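The between-runs similarity computation can be sketched as follows, assuming emotion-by-ROI response maps are available for the training and test sets of each run combination; this is an illustration, not the study's exact code.

```python
# Sketch: train-vs-test spatial correlation matrix between emotions, averaged over
# the 10 run combinations; diagonal = reliability, off-diagonal = between-emotion similarity.
import numpy as np

def cross_validated_similarity(train_maps, test_maps):
    """train_maps/test_maps: lists of (n_emotions x n_rois) arrays, one per run combination."""
    mats = []
    for tr, te in zip(train_maps, test_maps):
        n = tr.shape[0]
        # correlate every training-set emotion map with every test-set emotion map
        full = np.corrcoef(tr, te)
        mats.append(full[:n, n:])
    return np.mean(mats, axis=0)
```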

2.8 Cluster analysis

To investigate the similarity structure of the dynamic emotion ratings and the emotion-related brain activity, we performed average-linkage hierarchical clustering for each dataset separately. The leaf order was optimized by maximizing the sum of the similarities of adjacent emotions (optimalleaforder function, https://www.mathworks.com/help/stats/optimalleaforder.html), and the default threshold (70% of the maximum linkage) was used as the clustering cutoff for the visualization.
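For illustration, the following sketch uses SciPy equivalents (an assumption) of the clustering and leaf-ordering routines described above.

```python
# Sketch: average-linkage hierarchical clustering of an emotion-by-emotion correlation
# matrix with optimal leaf ordering and a cutoff at 70% of the maximum linkage distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, optimal_leaf_ordering, fcluster, leaves_list
from scipy.spatial.distance import squareform

def cluster_emotions(corr_matrix):
    """corr_matrix: symmetric emotion-by-emotion similarity matrix (correlations)."""
    dist = 1.0 - corr_matrix
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)
    Z = optimal_leaf_ordering(linkage(condensed, method="average"), condensed)
    labels = fcluster(Z, t=0.7 * Z[:, 2].max(), criterion="distance")
    return Z, labels, leaves_list(Z)  # linkage, cluster labels, optimized leaf order
```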

3.1 Reliability of ratings of perceived and felt emotions

Subjective ratings confirmed that the participants perceived and felt a wide array of different emotions while viewing the movies (Fig. 2A–C). However, there was substantial variation in the occurrence, intensity, and intersubject consistency of the perceived and felt emotions. We found a clear continuum from commonly and consistently reported emotions (fear, anxiety, despair, devastated, sadness) to infrequently and inconsistently reported emotions (jealousy, craving, hurt, satisfaction). The mean intersubject correlation of the ratings and the number of participants giving above-zero ratings for each emotion were positively correlated. To avoid spurious effects, we show the clustering analysis (Fig. 3) and all further analyses only for the 46 emotions that showed reliable, non-zero rating profiles across the five runs (Fig. 2). Clustering analysis for all emotions further confirmed that the unreliable emotions were separated from the reliable ones (Supplementary Fig. S2). Clustering for this reliable set of emotions revealed a broad valence-related structure. The nine clusters were formed around the following emotion categories: 1) sadness, anger, and fear, 2) disgust, 3) displeasure, 4) embarrassment, 5) love, 6) surprise, 7) gratitude and joy, 8) amusement, and 9) calmness.

Fig. 2.

Reliability of emotion ratings. (A) Percentage of raters for each emotion and each task (perceived and felt combined) with above-zero ratings for at least one time point. (B) Percentage of time points for each emotion where at least one rater gave a non-zero rating. (C) Mean intersubject correlation of emotion ratings across pairs of raters. (D) Similarity of mean ratings for perceived and felt emotions across raters across all runs. In each panel, bars indicate the mean values across runs, dots show the values for individual runs, and vertical lines indicate the min–max range. The emotions are ordered based on the combined reliability of panels A–C, calculated as the geometric mean of the percentage of non-zero raters, the percentage of non-zero time points, and the intersubject correlation of ratings (shown in the gray plot at the top). Emotions not deemed reliable based on permutation statistics are shown with transparent colors.

Fig. 3.

The cluster structure of perceived and felt emotions. (A) Correlation matrix and dendrogram of ratings over emotion models across all runs and both felt and perceived emotions for the reliably occurring emotions. Colors of the dendrogram indicate the clustering solution based on the rating data; this solution is also reflected in the subsequent figures. (B) Alluvial diagram shows the corresponding cluster structures between felt and perceived emotions. (C) Multidimensional scaling visualizes the similarity of emotions based on the ratings of felt emotions (left) and perceived emotions (right). Emotions not belonging to any cluster are shown in gray in the dendrogram and multidimensional scaling plots and are left unconnected in the alluvial diagram.


3.2 Temporal similarity between perceived and felt emotions

Next, we addressed the temporal similarities of perceived and felt emotions by calculating the similarity between the time series of the perceived and felt emotion from the same category (Fig. 2D). Overall, the temporal similarity structures for the perceived and felt emotions were consistent. Out of the 63 emotions, the mean correlation between perceived and felt emotions was above .60 (corresponding to 36% shared variance) for 32 emotion categories. The emotions with the highest similarity included romance, impression, sexual desire, pride, fear-related emotions (including fear, anxiety, horror, and thrill), and sadness-related categories (including sadness and glumness). While the perception of an emotion led to a consistent feeling of the same emotion for most emotion categories, there were also some emotions with discordant perception and feeling (e.g., guilt, physical pain, confusion, anger, hatred, interest, excitement, and nostalgia). The correspondence of the cluster structures between perceived and felt emotions further highlights this decoupling (alluvial diagram in Fig. 3): Emotions clustering around love, pride, sadness, and impression had similar perception and feeling structures, whereas social emotions clustering around humiliation, loneliness, and insecurity were less consistently perceived and felt. While some emotions were perceived more often than felt (e.g., physical and emotional pain, fury, zeal, and guilt), others, such as pity and nostalgia, were mostly felt and rarely perceived in the movies (Fig. 2A).

3.3 Predicting emotion ratings with whole-brain activity patterns

To test whether brain activity could predict the emotion ratings of left-out movies, we fitted the participant-averaged activity of the regions of interest across the brain to each individual emotion and tested the model fit on left-out runs (Fig. 4). Overall, emotions that were rated reliably were also better predicted by the whole-brain activity patterns. The majority of reliably rated felt (54.4%) and perceived (69.6%) emotions were predicted significantly better than chance across runs, relative to the maximum correlation obtained for null models with shuffled emotion labels. This confirms that there are repeatable activity patterns associated with emotions across movies. By contrast, only approximately a quarter of the less reliable emotions (felt 23.5%, perceived 29.4%) were successfully predicted by brain activity in the left-out runs.

Fig. 4.

Prediction of emotion ratings from brain activity. Bars indicate the mean correlation of real and predicted emotion ratings in a similar cross-validation between runs (3 training runs, 2 test runs) as in other analyses. Dots indicate correlations in each of the 10 run combinations, with maximum and minimum values connected with vertical lines. Maximum correlation observed for null models (shuffled training labels) is indicated by the horizontal line and semi-transparent white fill on top of the bar plots. The emotions are ordered according to the combined reliability in Figure 2, and unreliable emotions are indicated by the gray background and grayed bars.


3.4 Brain basis of perceived and felt emotions

To evaluate the specific patterns of brain activity associated with emotions, we first evaluated where in the brain different perceived and felt emotions are represented consistently across stimuli and participants (Fig. 5). First, each of the five runs in our experiment contained different movies depicting the same set of emotions, allowing us to test the generalizability of emotion-related brain responses across stimuli. Figure 5A shows the generalizability (i.e., correlations) of the cross-validated emotion models where three runs were used for training and two runs for testing (for region-wise generalizability, see Supplementary Table S1). For both perception and feeling, the responses were most consistent in the right temporal cortex, bilateral temporoparietal junction (TPJ), dorsal/medial prefrontal cortex (PFC), and precuneus (Pcun). For emotional perception, the responses were also consistent bilaterally in the temporal cortices. For feelings, the model was also consistent in the anterior, medial, and dorsolateral PFC, thalamus, and cerebellum. Second, we evaluated the consistency of emotion-related brain responses across participants. Figure 5B shows that we found largely overlapping, widespread effects for both perception and feeling in regions covering occipital, temporal, and parietal lobes, posterior midline, and cerebellum. The only notable difference was the more consistent prefrontal activity for feelings, which was absent for perception.

Fig. 5.

Consistency of emotion models. (A) Generalizability of emotion responses for different stimuli for felt (left) and perceived (right) emotions. (B) Consistency of the cross-validated models across participants for felt (left) and perceived (right) emotions. Regional data are thresholded at FWER p < .05 of the mean fit of all run combinations (3 training runs, 2 test runs, 10 combinations). Statistics are based on the 95th percentile of maximum statistics in surrogate data (max. over ROIs and 95th percentile over null iterations with circularly shifted training data).


To test whether low-level and semantic stimulus features could explain the emotion-related brain activity, we ran the same analysis with a model that included the emotion features, the three low-level visual and auditory features, and the 11 semantic features (Supplementary Fig. S4). As expected, the stimulus feature models explained sensory activity in primary and secondary visual areas (V1/V2), motion-sensitive visual areas (V5/MT), and primary and extended auditory areas (A1/STG), as well as frontoparietal regions. However, activity in regions including the TPJ, superior temporal sulcus (STS), and medial prefrontal cortex (MPFC) was unaffected, suggesting that stimulus-related activity in these regions reflects emotional rather than purely sensory processing.

Directly comparing the variance explained by felt versus perceived emotion models (Fig. 6) reveals that felt emotions better explained activity in the TPJ, Pcun, thalamus, medial and lateral PFC, and inferior premotor regions. By contrast, perceived emotions better explained activity only in anterior and medial superior temporal regions.

Fig. 6.

Comparison of models of felt and perceived emotions. Results are masked with the areas showing significant effects for either felt or perceived emotions in Figure 5A. Differences are thresholded at FWER-controlled p < .05 in a permutation test (max. over ROIs and 95th percentile over null iterations where each individual’s felt and perceived statistics were randomly permuted).


3.5 Regional responses to specific perceived and felt emotions

Next, we analyzed ROI-level data to address the representation of perceived and felt emotions in different brain regions by quantifying the region-wise consistency of the responses evoked by each perceived and felt emotion (Fig. 7). We first identified the emotion categories associated with the most widespread activity changes across the brain (Fig. 7, bar plot). We found a clear gradient across emotions in the breadth of the distribution of emotion-evoked activity in the brain. Perceived and felt amusement, thrill, and horror led to the most widespread brain activation across the temporo-occipital, parietal, and midline regions. In contrast, more neutrally valenced emotions, such as calmness, neutral, and nostalgia, yielded focal deactivation in parietal midline and lateral regions. Anger- and disgust-related emotions were also accompanied by deactivation in regions including the superior temporal gyrus (STG) and posterior STS (pSTS). Overall, sadness- and happiness-related, calm emotions yielded weaker responses in fewer areas. Feelings led to more widespread activation than perception for some emotions, such as surprise, fear, and sexual desire, while the opposite was true for other emotions, such as agitation, zeal, annoyance, hatred, guilt, anger, and excitement.

Fig. 7.

Regional responses to specific experienced and perceived emotions. Total brain area activated by individual emotions (top) and the regional consistency of emotion-wise responses for felt (middle) and perceived (bottom) emotions. Emotions are ordered based on the spatial extent of significant activations averaged over both tasks, and brain regions are ordered by the number of emotions that are significantly correlated with the regional activity. The bar plots show the average number of ROIs that were activated over all runs, and the dots show individual-run results. Vertical lines connect the minimum and maximum values over runs. The brain plots show the number of emotions whose responses were statistically significant in at least one run; the matrix plot shows the corresponding data at the level of single emotions at the scale of macroanatomical regions, each comprising multiple ROIs across both hemispheres. The dots indicate the percentage of ROIs within the macroanatomical regions that were significantly activated (hot colors) or deactivated (cold colors) by the emotions over all the runs and over participants. The data are thresholded at two-tailed p < .05 (FWER corrected). For brain plots with higher resolution and bar plots color-coded with clustering results, see Supplementary Figure S5.


There were also apparent regional differences in the breadth of tuning for different emotions, as demonstrated by the spatial gradient ranging from regions activated during most emotions to those activated during only a few specific emotions (Fig. 7). Superior temporal and posterior midline regions (including the Pcun and superior occipital gyrus) generally responded to most emotions for both perception and feeling. More selective activity for clusters of emotions was seen in regions such as the cerebellum, whose activity was associated with fear- and surprise-related emotions, and in superior parietal regions, which yielded consistent responses to fear-related emotions. Finally, regions such as the amygdala showed emotion-specific responses: in the amygdala, activity was consistent especially for amusement, joy, and perceived surprise.

3.6 Spatial clustering of the neural responses to felt and perceived emotions

We next analyzed the similarity of the response profiles by clustering the brain activity patterns based on their between-run similarity and compared it with the similarity of ratings (depicted in Fig. 3). The clusters identified from the neural data differed from those in the self-report data (Fig. 8, see Supplementary Fig. S3 for a side-by-side comparison of the similarity structures). The large valence-based (pleasure/displeasure) cluster structure in self-report data was absent in the neural data. At the neural level (Fig. 9), the most prominent clusters (>85% of emotion-pairs with significant across-runs similarity) contained amusement- and confusion-related states (cluster 4) and fear-related states (cluster 3). Felt joy drove a small cluster (cluster 6). Neutral and calm romantic emotions also formed another small cluster (cluster 7). Less reliable clusters, with >10% pairs of emotions showing significant across-run similarity, were also found for disgust (cluster 2) and felt calm, positive emotions (cluster 8). A large set of emotions also stayed relatively independent of the larger clusters, forming two unreliable clusters (<5% significantly similar emotion pairs), driven by perceived happiness and emotions with a clear bodily component, such as physical pain and sexual desire (clusters 5 and 1). Fear-related perceived and felt emotions consistently clustered together. Similar grouping was also apparent for positive and neutral emotions such as amusement and confusion.

Fig. 8.

The cluster structure of the brain responses to perceived and felt emotions. The organization of the figure is similar to the clustering of ratings in Figure 3. (A) Between-runs correlation matrix and dendrogram clustering of brain responses. (B) Alluvial diagram of cluster correspondence between perceived and felt emotions. (C) Multidimensional scaling of similarity of brain responses to felt (left) and perceived emotions (right).

Fig. 9.

Brain-based clustering of emotions. Cumulative activity patterns for the brain clusters of perceived and felt emotions. Cluster colors and numbers depict the clustering hierarchy and order of emotion ratings in Figure 6. The clusters are ordered based on the mean reliability of the activity patterns (between-runs spatial correlation) of the emotions contained within the cluster. The emotions in the clusters are ordered by reliability, which is depicted by the bar plots. Brain maps show the cumulative significant (p < .05 Bonferroni corrected over regions and emotions) activations over runs and emotions belonging to each cluster, from negative 100% (all emotions are significantly negatively correlated with BOLD activity in all 5 runs) to positive 100% (all emotions are consistently positively correlated). Thus, inconsistent positive and negative activations across runs cancel each other out. Felt and perceived emotions are shown in blue and red font, respectively. For complete results, see Supplementary Table S7.


Our main finding was that the emotions evoked during free viewing of movies are represented in the brain across numerous spatial scales and patterns. While the self-report data revealed that 46 emotions were consistently used to describe the affective content of the movies, dimension reduction techniques revealed that these could be reduced to eight main emotion dimensions based on brain activity. Neural activity related to both perceived and felt emotions was consistent across individuals and generalized across different stimuli. There was also a region-specific gradient ranging from temporoparietal regions to default mode and subcortical regions for the breadth of emotional tuning. Some brain regions, such as the posterior temporal cortex and cortical midline, were non-selectively activated during most emotions, while the emotion-specific tuning was sharper outside these regions. Even more selective responses to specific emotions were found mainly in subcortical regions. Similarly, there were large differences in the regional specificity of the emotion-evoked activations. While some emotions (such as fear, anxiety, and amusement) consistently engaged large-scale brain networks across temporal, parietal, and midline regions, others, such as anger and disgust, showed more regionally selective patterns. Finally, although perceived and felt emotions were often in alignment, the similarity in emotional perception and feeling and the resultant brain activation varied across emotions. These results constitute one of the most comprehensive investigations of the neural basis of emotional perception and experience. They highlight spatial gradients in the emotion-specificity in the human brain, and outline similarities and differences in the neural basis of perceiving and feeling different emotional states.

4.1 Generalizability and consistency of emotion responses in the brain

The generalizability of emotion models across different samples of the same emotion was highest in temporoparietal, temporal, and dorsal medial prefrontal cortices and the precuneus. These regions are also activated during the processing of social information (Lahnakoski et al., 2012). This generalization confirms that the emotion-dependent responses were not driven purely by visual, auditory, or semantic features of the videos, as these were completely different across the movies. Accordingly, adding low-level stimulus features to the emotion model improved across-stimulus consistency in sensory areas, while the consistency in the regions responding to emotions—temporoparietal, temporal, and midline regions—was unaffected.

The emotion models generalized well across participants, confirming the consistent and shared nature of the emotional responses across individuals also on the neural level. Across-participant generalization of affective responses has been more difficult to achieve than across-stimulus generalization within individuals (Saarimäki et al., 2016, 2018). However, our results revealed across-participant consistency in widespread brain networks covering the sensory areas (occipital and temporal lobes), posterior midline regions, and parietal regions often associated with attention, consistent with previous studies showing high intersubject synchronization in these regions during emotional moments of movies (Hudson et al., 2020; Iidaka, 2017; Nummenmaa et al., 2012; Santavirta et al., 2023; Tu et al., 2019). This likely stems from the high degree of consistency in the time-variant emotional response elicited by the movies, in comparison with static and noisy “snapshot” emotions evoked by, for example, pictures or sound bursts. In movies, emotions occur in natural contexts, which might lead to shared rather than individualistic experiences.

Out of our original 63 emotion categories, 46 (73%) had sufficient occurrence rates and high intersubject reliabilities relative to the complexity of the rating task. The similarity of neural responses revealed that these emotions could be divided into eight clusters. Outside these, we found considerable variability in the reliability and intensity of the emotions evoked by the movies. Emotions such as embarrassment, pride, humiliation, loneliness, shame, gratitude, hurt, and jealousy were difficult to elicit. The common denominator for these emotions is that they often occur in personal social settings and are not typically experienced in third-person settings, such as while viewing a movie.

It is important to note that we only included female participants in the current study, which limits the generalizability in the population. We chose to focus on females to reduce the intersubject variability in emotional responses and emotion regulation that would be expected in a mixed sample.

4.2 Emotional gradients in the brain

Our results also yielded maps of the affective space in both phenomenological and neural domains. The neural affective space demonstrated a gradient-like organization ranging from generic processing in temporoparietal and default mode regions to more localized, discrete processing in subcortical regions.

The largest and most generic hub of the neural affective space was the TPJ, which was activated during the perception and experience of most emotions. The TPJ is consistently engaged during social perception (Lahnakoski et al., 2012) and has previously been shown to contain emotion-specific gradients during both perceived and experienced emotions (Lettieri et al., 2019). Accordingly, TPJ activation, which was better explained by felt than by perceived emotions, suggests that social information processing is relevant to most emotions evoked by movies. Taken further, social information processing might be an integral part of human emotions in general, which warrants further investigation of the role of social information in emotional processing.

Another central hub for emotion processing was the precuneus, which responded to most perceived and felt emotions. Temporal pole and parietal regions, including superior and inferior parietal lobules (SPL and IPL, respectively), also responded to numerous emotions. These regions are part of the default mode network that consistently shows differential activity patterns for different emotions (Saarimäki et al., 2016, 2018, 2022). The default mode network has been especially associated with sustained, slow changes in emotional states, potentially reflecting the integration of emotion-related information and the conscious experience of emotions (Saarimäki et al., 2022; Sander et al., 2018).

Some regions showed moderately specific response profiles. For instance, activity in the cerebellum was associated with fear- and surprise-related emotions. Superior parietal regions yielded consistent responses to fear-related emotions. In turn, emotion-evoked activity in subcortical regions was more narrowly tuned and was observed only for some emotions. For example, the amygdala responded primarily to felt and perceived positive emotions and perceived surprise, while the thalamus responded selectively to perceived and felt fear. This selective, local activity does not necessarily mean that these regions are responsible for a single emotion. Instead, these regions are likely involved in the processing of information that is more pronounced for a subset of emotions. For instance, the amygdala has been linked to the processing of both valence and salience of a stimulus (e.g., Kong & Zweifel, 2021), emphasizing its role in novelty detection and stimulus evaluation rather than in the processing of a single emotion. Furthermore, the middle frontal gyrus—implicated in attention orienting—was active during fear-related emotions while showing decreased activity primarily during calm emotional states, potentially reflecting differences in attentional demands between emotions (Corbetta et al., 2008).

Altogether, these data show that the emotion circuits in the brain operate in both a domain-general and a domain-specific manner. The core, domain-general components of the networks likely encode the detailed emotional content of the sensory input, which is then processed in a more granular manner across various extended emotion circuits whose engagement depends on the specific emotional content.

4.3 Cerebral organization of the emotion space

The current study is the first large-scale investigation of emotion clusters identified based on brain activity; the few previous studies have used far smaller sets of emotions (Du et al., 2023; Lettieri et al., 2024; Saarimäki et al., 2018). The clustering of emotions based on self-reports revealed a primarily valence-driven space of emotional feelings, in line with the vast previous literature (Fontaine et al., 2007). Besides the positively and negatively valenced clusters, we identified four smaller clusters around disgust, annoyance, calmness, and confusion. For 91% of the emotion categories, both perceived and felt emotions from the same category clustered together, most likely reflecting their highly overlapping temporal structure within the movies. The temporal co-occurrence of perception and feeling of the same emotion suggests shared underlying cognitive processes. In the future, a more thorough evaluation of the specific emotions and events in the stimuli that most clearly differentiate feeling and perception, both subjectively and in the brain, could further elucidate how these two aspects are intertwined.
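For readers unfamiliar with this type of analysis, the following minimal sketch illustrates how emotion categories can be grouped from their rating time series using correlation distance and average-linkage hierarchical clustering. The data are simulated and the emotion names, coupling strengths, and cluster count are illustrative assumptions only; this is a generic example, not the exact clustering procedure used in the present study.

```python
# Minimal sketch: hierarchical clustering of emotion rating time series.
# Simulated data and generic settings, for illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_timepoints = 500
emotion_names = ["felt_fear", "perceived_fear", "felt_amusement", "perceived_amusement"]

# Simulated ratings: felt and perceived ratings of the same category co-vary over time.
base_fear = rng.standard_normal(n_timepoints)
base_amusement = rng.standard_normal(n_timepoints)
ratings = np.vstack([
    base_fear + 0.3 * rng.standard_normal(n_timepoints),       # "felt fear"
    base_fear + 0.3 * rng.standard_normal(n_timepoints),       # "perceived fear"
    base_amusement + 0.3 * rng.standard_normal(n_timepoints),  # "felt amusement"
    base_amusement + 0.3 * rng.standard_normal(n_timepoints),  # "perceived amusement"
])

# Correlation distance (1 - Pearson r) between the rating time series.
dist = pdist(ratings, metric="correlation")

# Average-linkage hierarchical clustering and a two-cluster cut of the dendrogram.
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
for name, label in zip(emotion_names, labels):
    print(f"{name}: cluster {label}")
```

With these simulated data, the felt and perceived series of the same category end up in the same cluster, mirroring the co-clustering pattern described above.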

The two largest and most consistent neural clusters consisted of amusement/confusion- and fear-related emotions, respectively. The amusement/confusion cluster included positive and neutral, mainly arousing emotions: perceived awkwardness, annoyance, and amusement clustered together with felt amusement and confusion. The fear-related cluster included various negative emotions, but the most robust activity was detected for fear. Four other, smaller clusters (related to joy, romance/neutral emotions, disgust, and calm positive emotions) also showed clear and consistent brain activity. The remaining clusters (driven mainly by happiness and by emotions with a strong bodily component, such as physical pain and sexual desire) showed less consistent activity across stimuli.

The cluster structure based on brain activation was not identical to the structure emerging from the self-report data. In the self-reports, we identified clear emotion clusters (including disgust, fear, sadness, displeasure, embarrassment, love, joy, and calmness) that resembled those found in previous studies (Cowen & Keltner, 2017). Notably, self-reported perceived and felt emotions from the same emotion category often clustered together, while in the neural data, perceived and felt emotions from the same category were often split into different clusters. As current emotion theories posit (Anderson & Adolphs, 2014; Barrett et al., 2007; Scherer & Moors, 2019), subjective experience is only one part of the overall emotional state and does not map directly onto the underlying neural state. Thus, emotion-related brain activity reflects automatic changes in several functional components, including subjective experience, physiology, motor activation, motivation, and cognition. Our results support the interpretation that neural similarities between emotions result from changes in multiple functional components (Nummenmaa & Saarimäki, 2019).
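To make the comparison of cluster solutions concrete, one generic way to quantify the agreement between the self-report-based and brain-based partitions of the same emotion categories is a partition similarity index such as the adjusted Rand index. The sketch below uses purely hypothetical cluster labels and is offered only as an illustrative option, not as the comparison method used in the present study.

```python
# Minimal sketch: comparing two cluster solutions over the same emotion categories
# with the adjusted Rand index (hypothetical labels, for illustration only).
from sklearn.metrics import adjusted_rand_score

# Cluster assignment of each emotion category from the self-report data ...
self_report_clusters = [0, 0, 1, 1, 2, 2, 2, 3]
# ... and from the brain-activation data, in the same category order.
neural_clusters = [0, 1, 1, 1, 2, 2, 3, 3]

# 1.0 indicates identical partitions; values near 0 indicate chance-level agreement.
ari = adjusted_rand_score(self_report_clusters, neural_clusters)
print(f"Adjusted Rand index between the two solutions: {ari:.2f}")
```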

4.4 Shared and distinct neural basis of perceived and felt emotions

Besides the spatial gradient ranging from generic to specific emotion responses, we also found an anatomical distinction between perceived and felt emotions. Both perceived and felt emotions elicited overlapping brain activity in the precuneus, TPJ, and auditory areas. While emotion perception engaged brain regions overlapping with those activated while feeling the same emotion, feelings were accompanied by significantly higher activity in the TPJ, medial and lateral PFC, thalamus, and parts of the cerebellum. These regions are consistently reported to activate during emotional experiences elicited by different types of stimuli (Saarimäki, 2021; Saarimäki et al., 2016, 2018). Thus, we conclude that the frontal regions, thalamus, and cerebellum are recruited more strongly during felt emotions than during emotion perception. In particular, we found direct evidence for the involvement of the PFC in generating subjective experiences of emotions, as also suggested in previous studies (Saarimäki et al., 2016).

Perceived emotion was often accompanied by a concordant feeling, but for some emotions, perception and feeling were decoupled. The alignment between the perception of a character’s emotions and observers’ own feelings has rarely been studied. One previous study showed that characters’ portrayed basic emotions and empathic responses share only 11–35% of variance with observers’ feelings (Lettieri et al., 2019). In our data, the shared variance between felt and perceived emotions from the same categories varied widely (on average 36%, ranging from almost 0% to close to 100%). Emotion categories related to love and sexual desire, pride and impression, fear, and sadness were both perceived and felt simultaneously. Thus, emotional contagion for these emotions is high, suggesting that the perceived emotion is directly mirrored in the observer. These emotions are characterized by seeking affiliation with others (Fischer & Manstead, 2008). On the other hand, the perception of less socially driven emotions, such as boredom, interest, confusion, pity, and anger, was seldom accompanied by a simultaneous feeling of the same emotion.
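As a worked illustration, if shared variance is quantified as the squared Pearson correlation between the felt and perceived rating time series of the same category, it can be computed as in the sketch below. The data are simulated and the coupling strength is an arbitrary assumption; this is not the exact computation pipeline of the present study.

```python
# Minimal sketch: shared variance (squared Pearson correlation) between the
# felt and perceived rating time series of one emotion category.
# Simulated data, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 500

perceived = rng.random(n_timepoints)                      # e.g., perceived fear ratings
felt = 0.6 * perceived + 0.4 * rng.random(n_timepoints)   # partially coupled felt fear ratings

r = np.corrcoef(perceived, felt)[0, 1]   # Pearson correlation between the two series
shared_variance = r ** 2                 # proportion of variance shared by the two ratings
print(f"r = {r:.2f}, shared variance = {100 * shared_variance:.0f}%")
```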

4.5 Limitations

We carefully curated a set of 63 emotion categories for the study but could reliably elicit only 46 of them. Especially personal and social emotions, including jealousy, hurt, shame, and gratitude, were difficult to elicit with movies, possibly due to the interpersonal nature of these emotions. Due to methodological constraints (collecting the self-reports took 20 hours per participant), we collected the self-reports and the fMRI data from independent groups of participants. Thus, it is possible that the emotions felt and perceived by the fMRI participants differed from those of the raters. However, the raters came from the same population (young, healthy females) as the fMRI participants and gave consistent ratings. Additionally, our emotion models were built on self-reported changes in categorical emotional content, which is optimal for investigating slower changes in conscious perception and feeling but cannot track some of the fast, momentary changes in stimulus content that subcortical regions might be sensitive to. Finally, each run in the fMRI experiment consisted of clips of varying emotional content, so the emotions evoked by one clip may have affected emotional responses at the beginning of the next clip. However, the long stimulus duration (on average almost 3 minutes), the relatively slow development of the emotions over the course of the movies, and the slow BOLD response together alleviate such carry-over effects.

Here, we chose to focus on regions of interest rather than voxel-level prediction, as our interest was in broader patterns of brain activity elicited by slowly developing emotions. In certain regions, such as the ventral temporal lobes, voxel-level modeling combined with hyperalignment has been shown to improve functional prediction and intersubject alignment (see, e.g., Haxby et al., 2020). However, our results suggest spatially distributed and widely shared patterns of emotion-related modulation of brain activity, making voxel-level prediction potentially impractically fine-grained. Moreover, the highly variable context and stimulus features between movies depicting similar emotions may complicate fine-grained prediction, making generalization in the cross-validated analysis more vulnerable to overfitting. Thus, we believe that the current resolution of relatively small, functionally defined regions distributed approximately uniformly across the brain is a reasonable compromise between maximal local prediction accuracy and a more global understanding of emotion perception and feeling in the brain.
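To illustrate what region-of-interest-level prediction with cross-validation over stimuli can look like in practice, the sketch below fits a generic ridge regression from emotion rating regressors to a simulated ROI-averaged BOLD time series and evaluates it on held-out movie clips. The data, the ridge model, and the clip-wise folds are illustrative assumptions only and do not reproduce the exact analysis of the present study.

```python
# Minimal sketch: cross-validated prediction of an ROI-averaged BOLD time series
# from emotion rating regressors, with movie clips as cross-validation folds.
# Simulated data and a generic ridge model, for illustration only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(2)
n_timepoints, n_emotions, n_clips = 600, 8, 6

X = rng.standard_normal((n_timepoints, n_emotions))          # emotion rating regressors
true_weights = rng.standard_normal(n_emotions)
y = X @ true_weights + rng.standard_normal(n_timepoints)     # simulated ROI-averaged BOLD signal
clip_id = np.repeat(np.arange(n_clips), n_timepoints // n_clips)  # clip label of each time point

scores = []
for train, test in GroupKFold(n_splits=n_clips).split(X, y, groups=clip_id):
    model = Ridge(alpha=1.0).fit(X[train], y[train])
    # Correlation between predicted and observed ROI signal in the held-out clip.
    scores.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])

print(f"Mean held-out prediction r across clips: {np.mean(scores):.2f}")
```

Leaving out whole clips, rather than random time points, keeps the held-out data temporally independent from the training data, which is the rationale for the stimulus-wise cross-validation discussed above.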

Our results reveal the neural basis of the core emotional dimensions and highlight spatial cerebral gradients in the representation of different emotional states. Eight main dimensions of emotions evoked during movie viewing are encoded in the brain across numerous spatial scales and patterns, and perceived and felt emotions generalize both between individuals and between different stimuli. There is a gradient from large-scale to regionally specific representation of emotions, ranging from higher-order temporoparietal areas to the default mode network, and finally to more selective activity, especially in subcortical regions. While the perception and feeling of different emotions were supported by numerous overlapping brain regions, feelings were accompanied by additional activity in frontal, thalamic, and cerebellar regions. Although emotional perception and the resultant feelings often go hand in hand, our data highlight that they are also often decoupled in both conscious experience and brain activity.

Data and Code Availability

Code for reproducing the analyses is included in the Supplementary Materials of this article. Due to restrictions in the ethics permit, the individual-level data cannot be made publicly available. The data are available upon request, pending approval from the Aalto University ethics committee and within the limitations on open data sharing imposed by the GDPR. Requests for data access should be directed to the author I.P.J. ([email protected], ORCID: 0000-0001-6001-6950).

Author Contributions

Conceptualization and writing by all authors; Funding acquisition by I.P.J.; Supervision by J.M.L., L.N., I.P.J., and M.S.; Data acquisition by S.V., H.S., and A.A.; Methodology, data analyses, and visualization by J.M.L., H.S., S.V., and S.S.

Declaration of Competing Interest

None.

Acknowledgements

We thank Marita Kattelus for her help with the data acquisition. This work was supported by the Academy of Finland (grants 323425 to H.S. and 276643, 332309, and 332398 to I.P.J.) and the Finnish Cultural Foundation (grant 150496 to J.M.L.). This research is also part of the Right to Belong project, which is funded by the Strategic Research Council (grants 352648 and 352655).

Supplementary material for this article is available with the online version here: https://doi.org/10.1162/imag_a_00517.

References

Adolphs, R. (2002). Recognizing emotion from facial expressions: Psychological and neurological mechanisms. Behav Cogn Neurosci Rev, 1, 21–62. https://doi.org/10.1177/1534582302001001003

Adolphs, R. (2017). How should neuroscience study emotions? By distinguishing emotion states, concepts, and experiences. Soc Cogn Affect Neurosci, 12, 24–31. https://doi.org/10.1093/scan/nsw153

Adolphs, R., Nummenmaa, L., Todorov, A., & Haxby, J. V. (2016). Data-driven approaches in the investigation of social perception. Philos Trans R Soc Lond B Biol Sci, 371, 20150367. https://doi.org/10.1098/rstb.2015.0367

Anderson, D. J., & Adolphs, R. (2014). A framework for studying emotions across species. Cell, 157, 187–200. https://doi.org/10.1016/j.cell.2014.03.003

Aviezer, H., Hassin, R. R., Ryan, J., Grady, C., Susskind, J., Anderson, A., Moscovitch, M., & Bentin, S. (2008). Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol Sci, 19, 724–732. https://doi.org/10.1111/j.1467-9280.2008.02148.x

Barrett, L. F., Mesquita, B., Ochsner, K. N., & Gross, J. J. (2007). The experience of emotion. Annu Rev Psychol, 58, 373–403. https://doi.org/10.1146/annurev.psych.58.110405.085709

Buckner, R. L., Krienen, F. M., Castellanos, A., Diaz, J. C., & Yeo, B. T. T. (2011). The organization of the human cerebellum estimated by intrinsic functional connectivity. J Neurophysiol, 106, 2322–2345. https://doi.org/10.1152/jn.00339.2011

Calder, A. J., Young, A. W., Perrett, D. I., Etcoff, N. L., & Rowland, D. (1996). Categorical perception of morphed facial expressions. Vis Cogn, 3, 81–118. https://doi.org/10.1080/713756735

Corbetta, M., Patel, G., & Shulman, G. L. (2008). The reorienting system of the human brain: From environment to theory of mind. Neuron, 58, 306–324. https://doi.org/10.1016/j.neuron.2008.04.017

Cowen, A. S., & Keltner, D. (2017). Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proc Natl Acad Sci U S A, 114, E7900–E7909. https://doi.org/10.1073/pnas.1702247114

Cox, R. W., & Hyde, J. S. (1997). Software tools for analysis and visualization of fMRI data. NMR Biomed, 10, 171–178. https://doi.org/10.1002/(sici)1099-1492(199706/08)10:4/5<171::aid-nbm453>3.0.co;2-l

Craske, M. G., & Stein, M. B. (2016). Anxiety. Lancet, 388, 3048–3059. https://doi.org/10.1016/s0140-6736(16)30381-6

de Gelder, B., & Vroomen, J. (2000). The perception of emotions by ear and by eye. Cogn Emot, 14, 289–311. https://doi.org/10.1080/026999300378824

Deng, J., Guo, J., Ververas, E., Kotsia, I., & Zafeiriou, S. (2020). RetinaFace: Single-shot multi-level face localisation in the wild. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 5202–5211). IEEE. https://doi.org/10.1109/cvpr42600.2020.00525

Diedrichsen, J., Balster, J. H., Flavell, J., Cussans, E., & Ramnani, N. (2009). A probabilistic MR atlas of the human cerebellum. NeuroImage, 46, 39–46. https://doi.org/10.1016/j.neuroimage.2009.01.045

Diedrichsen, J., Maderwald, S., Küper, M., Thürling, M., Rabe, K., Gizewski, E. R., Ladd, M., & Timmann, D. (2011). Imaging the deep cerebellar nuclei: A probabilistic atlas and normalization procedure. NeuroImage, 54, 1786–1794. https://doi.org/10.1016/j.neuroimage.2010.10.035

Du, C., Fu, K., Wen, B., & He, H. (2023). Topographic representation of visually evoked emotional experiences in the human cerebral cortex. iScience, 26(9), 107571. https://doi.org/10.1016/j.isci.2023.107571

Esteban, O., Markiewicz, C. J., Blair, R. W., Moodie, C. A., Isik, A. I., Erramuzpe, A., Kent, J. D., Goncalves, M., DuPre, E., Snyder, M., Oya, H., Ghosh, S. S., Wright, J., Durnez, J., Poldrack, R. A., & Gorgolewski, K. J. (2019). fMRIPrep: A robust preprocessing pipeline for functional MRI. Nat Methods, 16, 111–116. https://doi.org/10.1038/s41592-018-0235-4

Fan, L., Li, H., Zhuo, J., Zhang, Y., Wang, J., Chen, L., Yang, Z., Chu, C., Xie, S., Laird, A. R., Fox, P. T., Eickhoff, S. B., Yu, C., & Jiang, T. (2016). The Human Brainnetome Atlas: A new brain atlas based on connectional architecture. Cereb Cortex, 26, 3508–3526. https://doi.org/10.1093/cercor/bhw157

Fischer, A. H., & Manstead, A. S. R. (2008). Social functions of emotion and emotion regulation. In Lewis, M., Haviland-Jones, J. M., & Barrett, L. F. (Eds.), Handbook of emotions (pp. 456–468). Guilford Press. https://doi.org/10.1017/s1355617711000506

Fontaine, J. R. J., Scherer, K. R., Roesch, E. B., & Ellsworth, P. C. (2007). The world of emotions is not two-dimensional. Psychol Sci, 18, 1050–1057. https://doi.org/10.1111/j.1467-9280.2007.02024.x

Greve, D. N., & Fischl, B. (2009). Accurate and robust brain image alignment using boundary-based registration. NeuroImage, 48, 63–72. https://doi.org/10.1016/j.neuroimage.2009.06.060

Gross, J. J., & Levenson, R. W. (1995). Emotion elicitation using films. Cogn Emot, 9, 87–108. https://doi.org/10.1080/02699939508408966

Guo, J., Deng, J., Lattas, A., & Zafeiriou, S. (2021). Sample and computation redistribution for efficient face detection. arXiv [cs.CV]. https://doi.org/10.1109/cvpr46437.2021.01173

Haxby, J. V., Guntupalli, J. S., Nastase, S. A., & Feilong, M. (2020). Hyperalignment: Modeling shared information encoded in idiosyncratic cortical topographies. eLife, 9, e56601. https://doi.org/10.7554/elife.56601

Horikawa, T., Cowen, A. S., Keltner, D., & Kamitani, Y. (2020). The neural representation of visually evoked emotion is high-dimensional, categorical, and distributed across transmodal brain regions. iScience, 23, 101060. https://doi.org/10.1016/j.isci.2020.101060

Hudson, M., Seppälä, K., Putkinen, V., Sun, L., Glerean, E., Karjalainen, T., Karlsson, H. K., Hirvonen, J., & Nummenmaa, L. (2020). Dissociable neural systems for unconditioned acute and sustained fear. NeuroImage, 216, 116522. https://doi.org/10.1016/j.neuroimage.2020.116522

Iidaka, T. (2017). Humor appreciation involves parametric and synchronized activity in the medial prefrontal cortex and hippocampus. Cereb Cortex, 27, 5579–5591. https://doi.org/10.1093/cercor/bhw325

Jenkinson, M., Bannister, P., Brady, M., & Smith, S. (2002). Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage, 17, 825–841. https://doi.org/10.1006/nimg.2002.1132

Keles, U., Kliemann, D., Byrge, L., Saarimäki, H., Paul, L. K., Kennedy, D. P., & Adolphs, R. (2022). Atypical gaze patterns in autistic adults are heterogeneous across but reliable within individuals. Mol Autism, 13, 39. https://doi.org/10.1186/s13229-022-00517-2

Kirillov, A., Girshick, R., He, K., & Dollar, P. (2019). Panoptic feature pyramid networks. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 6392–6401). IEEE. https://doi.org/10.1109/cvpr.2019.00656

Kober, H., Barrett, L. F., Joseph, J., Bliss-Moreau, E., Lindquist, K., & Wager, T. D. (2008). Functional grouping and cortical–subcortical interactions in emotion: A meta-analysis of neuroimaging studies. NeuroImage, 42, 998–1031. https://doi.org/10.1016/j.neuroimage.2008.03.059

Koide-Majima, N., Nakai, T., & Nishimoto, S. (2020). Distinct dimensions of emotion in the human brain and their representation on the cortical surface. NeuroImage, 222, 117258. https://doi.org/10.1016/j.neuroimage.2020.117258

Kong, M. S., & Zweifel, L. S. (2021). Central amygdala circuits in valence and salience processing. Behav Brain Res, 410, 113355. https://doi.org/10.1016/j.bbr.2021.113355

Kotz, S. A., Kalberlah, C., Bahlmann, J., Friederici, A. D., & Haynes, J.-D. (2013). Predicting vocal emotion expressions from the human brain. Hum Brain Mapp, 34, 1971–1981. https://doi.org/10.1002/hbm.22041

Kragel, P. A., & LaBar, K. S. (2013). Multivariate pattern classification reveals autonomic and experiential representations of discrete emotions. Emotion, 13, 681–690. https://doi.org/10.1037/a0031820

Kragel, P. A., & LaBar, K. S. (2015). Multivariate neural biomarkers of emotional states are categorically distinct. Soc Cogn Affect Neurosci, 10, 1437–1448. https://doi.org/10.1093/scan/nsv032

Kragel, P. A., & LaBar, K. S. (2016). Decoding the nature of emotion in the brain. Trends Cogn Sci, 20, 444–455. https://doi.org/10.1016/j.tics.2016.03.011

Lahnakoski, J. M., Glerean, E., Salmi, J., Jääskeläinen, I. P., Sams, M., Hari, R., & Nummenmaa, L. (2012). Naturalistic FMRI mapping reveals superior temporal sulcus as the hub for the distributed brain network for social perception. Front Hum Neurosci, 6, 233. https://doi.org/10.3389/fnhum.2012.00233

LeDoux, J. (2012). Rethinking the emotional brain. Neuron, 73, 653–676. https://doi.org/10.1016/j.neuron.2012.02.004

Lettieri, G., Handjaras, G., Ricciardi, E., Leo, A., Papale, P., Betta, M., Pietrini, P., & Cecchetti, L. (2019). Emotionotopy in the human right temporo-parietal cortex. Nat Commun, 10, 5568. https://doi.org/10.1038/s41467-019-13599-z

Lettieri, G., Handjaras, G., Cappello, E. M., Setti, F., Bottari, D., Bruno, V., Diano, M., Leo, A., Tinti, C., Garbarini, F., Pietrini, P., Ricciardi, E., & Cecchetti, L. (2024). Dissecting abstract, modality-specific and experience-dependent coding of affect in the human brain. Sci Adv, 10(10), eadk6840. https://doi.org/10.1126/sciadv.adk6840

Nummenmaa, L., Glerean, E., Hari, R., & Hietanen, J. K. (2014). Bodily maps of emotions. Proc Natl Acad Sci U S A, 111, 646–651. https://doi.org/10.1073/pnas.1321664111

Nummenmaa, L., Glerean, E., Viinikainen, M., Jääskeläinen, I. P., Hari, R., & Sams, M. (2012). Emotions promote social interaction by synchronizing brain activity across individuals. Proc Natl Acad Sci U S A, 109, 9599–9604. https://doi.org/10.1073/pnas.1206095109

Nummenmaa, L., Hari, R., Hietanen, J. K., & Glerean, E. (2018). Maps of subjective feelings. Proc Natl Acad Sci U S A, 115, 9198–9203. https://doi.org/10.1073/pnas.1807390115

Nummenmaa, L., & Saarimäki, H. (2019). Emotions as discrete patterns of systemic activity. Neurosci Lett, 693, 3–8. https://doi.org/10.1016/j.neulet.2017.07.012

Oosterwijk, S., Snoek, L., Rotteveel, M., Barrett, L. F., & Scholte, H. S. (2017). Shared states: Using MVPA to test neural overlap between self-focused emotion imagery and other-focused emotion understanding. Soc Cogn Affect Neurosci, 12, 1025–1035. https://doi.org/10.1093/scan/nsx037

Peelen, M. V., Atkinson, A. P., & Vuilleumier, P. (2010). Supramodal representations of perceived emotions in the human brain. J Neurosci, 30, 10127–10134. https://doi.org/10.1523/jneurosci.2161-10.2010

Putkinen, V., Nazari-Farsani, S., Seppälä, K., Karjalainen, T., Sun, L., Karlsson, H. K., Hudson, M., Heikkilä, T. T., Hirvonen, J., & Nummenmaa, L. (2021). Decoding music-evoked emotions in the auditory and motor cortex. Cereb Cortex, 31, 2549–2560. https://doi.org/10.1093/cercor/bhaa373

Saarimäki, H. (2021). Naturalistic stimuli in affective neuroimaging: A review. Front Hum Neurosci, 15, 675068. https://doi.org/10.3389/fnhum.2021.675068

Saarimäki, H., Ejtehadian, L. F., Glerean, E., Jääskeläinen, I. P., Vuilleumier, P., Sams, M., & Nummenmaa, L. (2018). Distributed affective space represents multiple emotion categories across the human brain. Soc Cogn Affect Neurosci, 13, 471–482. https://doi.org/10.1093/scan/nsy018

Saarimäki, H., Glerean, E., Smirnov, D., Mynttinen, H., Jääskeläinen, I. P., Sams, M., & Nummenmaa, L. (2022). Classification of emotion categories based on functional connectivity patterns of the human brain. NeuroImage, 247, 118800. https://doi.org/10.1016/j.neuroimage.2021.118800

Saarimäki, H., Gotsopoulos, A., Jääskeläinen, I. P., Lampinen, J., Vuilleumier, P., Hari, R., Sams, M., & Nummenmaa, L. (2016). Discrete neural signatures of basic emotions. Cereb Cortex, 26, 2563–2573. https://doi.org/10.1093/cercor/bhv086

Sander, D., Grandjean, D., & Scherer, K. R. (2018). An appraisal-driven componential approach to the emotional brain. Emot Rev, 10, 219–231. https://doi.org/10.1177/1754073918765653

Santavirta, S., Karjalainen, T., Nazari-Farsani, S., Hudson, M., Putkinen, V., Seppälä, K., Sun, L., Glerean, E., Hirvonen, J., Karlsson, H. K., & Nummenmaa, L. (2023). Functional organization of social perception networks in the human brain. NeuroImage, 272, 120025. https://doi.org/10.1016/j.neuroimage.2023.120025

Sauter, D. A., Eisner, F., Ekman, P., & Scott, S. K. (2010). Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proc Natl Acad Sci U S A, 107(6), 2408–2412. https://doi.org/10.1073/pnas.0908239106

Schaefer, A., Nils, F., Sanchez, X., & Philippot, P. (2010). Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers. Cogn Emot, 24, 1153–1172. https://doi.org/10.1080/02699930903274322

Scherer, K. R. (2009). The dynamic architecture of emotion: Evidence for the component process model. Cogn Emot, 23, 1307–1351. https://doi.org/10.1080/02699930902928969

Scherer, K. R., & Moors, A. (2019). The emotion process: Event appraisal and component differentiation. Annu Rev Psychol, 70, 719–745. https://doi.org/10.1146/annurev-psych-122216-011854

Skerry, A. E., & Saxe, R. (2015). Neural representations of emotion are organized around abstract event features. Curr Biol, 25, 1945–1954. https://doi.org/10.1016/j.cub.2015.06.009

Toivonen, R., Kivelä, M., Saramäki, J., Viinikainen, M., Vanhatalo, M., & Sams, M. (2012). Networks of emotion concepts. PLoS One, 7(1), e28883. https://doi.org/10.1371/journal.pone.0028883

Tu, P.-C., Su, T.-P., Lin, W.-C., Chang, W.-C., Bai, Y.-M., Li, C.-T., & Lin, F.-H. (2019). Reduced synchronized brain activity in schizophrenia during viewing of comedy movies. Sci Rep, 9, 12738. https://doi.org/10.1038/s41598-019-48957-w

Tuovila, S. (2005). Kun on tunteet: Suomen kielen tunnesanojen semantiikkaa [When one has feelings: The semantics of Finnish emotion words]. https://doi.org/10.47862/apples.129383

Volynets, S., Smirnov, D., Saarimäki, H., & Nummenmaa, L. (2020). Statistical pattern recognition reveals shared neural signatures for displaying and recognizing specific facial expressions. Soc Cogn Affect Neurosci, 15, 803–813. https://doi.org/10.1093/scan/nsaa110

Westermann, R., Spies, K., Stahl, G., & Hesse, F. W. (1996). Relative effectiveness and validity of mood induction procedures: A meta-analysis. Eur J Soc Psychol, 26, 557–580. https://doi.org/10.1002/(sici)1099-0992(199607)26:4<557::aid-ejsp769>3.3.co;2-w

Wicker, B., Keysers, C., Plailly, J., Royet, J. P., Gallese, V., & Rizzolatti, G. (2003). Both of us disgusted in My insula: The common neural basis of seeing and feeling disgust. Neuron, 40, 655–664. https://doi.org/10.1016/s0896-6273(03)00679-2

Witkower, Z., & Tracy, J. L. (2019). Bodily communication of emotion: Evidence for extrafacial behavioral expressions and available coding systems. Emot Rev, 11, 184–193. https://doi.org/10.1177/1754073917749880
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
