Abstract

Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified that extends Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, identified as insensitive to motion or affect but sensitive to the visual stimulus; the STS, identified as specifically sensitive to motion; and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus, and the STS, along with coupling between the STS and both the amygdala and the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediates the perception of different dynamic facial expressions.

INTRODUCTION

Based largely on findings from behavioral observations, Bruce and Young's (1986) influential “functional model of face recognition” has served as a general framework for the study of face perception for the last 30 years. Central to this model is the notion that the analyses of facial expression and identity proceed independently of each other. Further evidence for the functional division between facial expression and identity processing comes from neuropsychological studies of prosopagnosic patients, who can interpret facial expressions correctly but are unable to identify familiar faces (Humphreys, Donnelly, & Riddoch, 1993). Haxby, Hoffman, and Gobbini (2000) later modified this model to provide a neurological account of face perception, describing a “distributed human neural system for face perception.” This model, like the earlier Bruce and Young (1986) model, proposes distinct pathways for the visual analysis of facial identity and expression. The perception of identity, the invariant aspect of a face, occurs in a ventral pathway that involves the lateral fusiform gyrus (FG), whereas the STS forms part of a dorsal pathway implicated in the processing and representation of changeable facial features. The extended system then incorporates additional brain regions to support further face processing, such as emotion recognition.

Real-life faces are dynamic by nature, particularly when expressing emotion. However, much of the research on face perception and emotion recognition to date has used static face stimuli, such as those from the Ekman and Friesen (1976) collection. These posed static stimuli do not convey the unique temporal dynamics and information available from seeing a moving face in the real world and thus do not allow a complete description of the neural correlates of natural face perception. Dynamic stimuli offer a more suitable means of examining the neural basis of realistic, natural face perception. Behavioral studies using dynamic facial stimuli have shown that motion plays an important role in facilitating judgments of gender (Hill & Johnston, 2001) and also contributes to identity judgments (Christie & Bruce, 1998; Pike, Kemp, Towell, & Phillips, 1997). In addition, judgments of facial affect are influenced by changing the velocity of an expressing face, suggesting that the dynamic display of facial expressions provides unique temporal information about the expressions that is not available in static displays (Kamachi et al., 2001). Dynamic facial expressions of emotion have been shown to facilitate emotion recognition compared with their static counterparts (Sato, Kochiyama, Yoshikawa, Naito, & Matsumura, 2004; LaBar, Crupain, Voyvodic, & McCarthy, 2003), possibly because of additional information encoded in facial action patterns that is not present in static stimuli (Wehrle, Kaiser, Schmidt, & Scherer, 2000).

However, very little is known about the neural structures involved in the perception of realistic dynamic facial expressions, as Haxby et al.'s (2000) distributed face perception model was defined mainly using evidence derived from static images of faces, with the exception of Puce, Allison, Bentin, Gore, and McCarthy (1998). Such static stimuli represent impoverished displays lacking natural facial motion and do not permit a complete interrogation of the face perception network, particularly the dorsal pathway implicated in the processing of facial dynamics. Although perception of static face stimuli does elicit activation in the dorsal STS region, this is believed to result from implied rather than overt biological motion (Fairhall & Ishai, 2007; Haxby, Hoffman, & Gobbini, 2002). Naturally dynamic stimuli should therefore be used to characterize fully the neural basis of ecologically valid face perception.

Recent brain imaging studies have investigated the neural network underlying the processing of dynamic face stimuli. For example, Kilts, Egan, Gideon, Ely, and Hoffman (2003) carried out a PET study using dynamic and static face stimuli and found increased activation in STS in response to dynamic compared with static face stimuli, along with greater activation in the amygdala and hippocampus. In an fMRI study, LaBar et al. (2003) reported increased activation in FG, ventromedial pFC, and STS to dynamic expressions of emotion compared with neutral expressions. Additionally, Sato et al. (2004) carried out an fMRI study using dynamic gray-scaled morphed stimuli of fearful and happy faces, dynamic mosaics of scrambled faces, and static controls. They found increased activation in the inferior occipital gyrus (IOG), middle temporal gyrus (MTG), STS, and FG to dynamic facial expressions compared with the dynamic and static controls. These findings are generally consistent with Haxby et al.'s (2000) distributed model of face perception and show considerable overlap in activation patterns across different face processing tasks. However, many of these studies (Sato et al., 2004; LaBar et al., 2003) used morphed stimuli, which were constructed from static images and may therefore represent artificial motion. Such stimuli are unlikely to fully capture the mechanisms underlying the processing of natural facial motion.

Notably, these findings do not reliably support the view of an independent ventral pathway that processes the invariant aspects of faces and a dorsal pathway that processes their changeable aspects. Because Sato et al. (2004) and LaBar et al. (2003) both found increased activation in the FG in the ventral pathway to dynamic faces, the FG may be involved in processing not only the invariant but also the changeable aspects of the face. One model that may account for this was proposed by O'Toole, Roark, and Abdi (2002), who suggest that facial motion, such as the dynamic characteristics gained from facial speech, expressions, and head–face movements, is processed in the middle temporal visual area (MT/V5) before projecting to the STS. This implies that the STS may play a role in facial identification when identity can be gleaned from dynamic facial signatures. They also suggest that the STS and FG may be connected via area MT, thus facilitating recognition through structure-from-motion processes. This is consistent with Sato et al. (2004), who report increased activation in MTG to dynamic faces. In addition, Calder and Young (2005) used principal component analysis to investigate the degree of separation between these two pathways and found that facial expression and identity can be coded within a single multidimensional framework rather than relying on separate independent codes. From this, it would appear that the roles of the FG and STS may not be as dissociable and distinct as previously thought.

One way to examine the degree of separation and functional interplay between the FG and STS, and the other neural structures involved in face processing, is to use connectivity analysis. Rather than looking solely at isolated regional effects, connectivity analysis examines the interactions between brain regions. It also provides a means of assessing the extent to which the same brain regions support different operations depending on task-dependent network connections (Friston et al., 1997). Recently, Fairhall and Ishai (2007) used fMRI and connectivity analysis (dynamic causal modeling) to examine the interactions between the different regions of the face perception network with static face stimuli. They presented photographs of unfamiliar, famous, and emotional faces in a passive viewing task and found that all faces exerted a strong and significant influence on the effective connectivity between IOG and both FG and STS. Emotional and famous faces significantly modulated the coupling between IOG and FG, but not between IOG and STS. They also found that the FG exerted influences on the amygdala, inferior frontal gyrus (IFG), and OFC. They concluded that the extraction of the changeable aspects of face stimuli, within limbic and prefrontal regions, is enabled via the FG in the ventral pathway rather than the STS. However, this may be because static images of faces were used; indeed, the authors themselves predict that the STS in the dorsal pathway would exert a greater effective influence on the extended system during the perception of dynamic faces.

This prediction will be tested in the present study using fMRI and psychophysiological interaction (PPI) connectivity analysis, which will examine the effective connectivity, that is, the influence one neuronal system exerts upon others, within the dynamic face perception network. PPI analysis can be used to assess how activity in a particular brain ROI modulates activity in other brain regions in response to an experimental condition (Friston et al., 1997). It has an advantage over other methods of effective connectivity analysis, such as dynamic causal modeling, in that it does not require prior specification of an anatomical model. Rather, a source region is selected, and regions that interact with this source are identified based on the experimental condition. In the current study, PPI analysis was used to assess possible interactions of the selected ROIs, within the face perception network, in response to dynamic and static faces. By choosing “source” regions in the core (IOG and STS) and extended system (amygdala and IFG), the effective connectivity within Haxby et al.'s (2000) distributed face model can be examined with realistic dynamic facial expressions of emotion.

The main hypothesis tested in the present study is that dynamic facial expressions will elicit activation in the dorsal pathway of the face perception network. This hypothesis will be tested by using fMRI to identify regions of activation in response to dynamic facial expressions, specifically angry, happy, and speech expressions. Angry and happy expressions were chosen to contrast positive and negative facial affect, whereas speech was chosen as a control for nonaffective facial motion. On the basis of previous studies (Schultz & Pilz, 2009; Sato et al., 2004), it is predicted that, in the core system, dynamic facial expressions will result in increased activation in the MTG and STS. Kilts et al. (2003) reported differential activation to angry and happy facial expressions; it is therefore hypothesized in the present study that the brain responses to these different facial expressions of emotion will recruit different structures in Haxby et al.'s (2000) extended system. It is also hypothesized that PPI analysis will reveal a correlation between early visual regions, such as IOG, and regions in the dorsal pathway, such as STS, in response to dynamic face stimuli. Finally, it is predicted that activation in the STS will be correlated with regions in the extended network, such as the amygdala and IFG, when viewing dynamic facial expressions of emotion only. Furthermore, the regions beyond the distributed face perception network that are implicated in processing dynamic facial displays will be examined, in a hypothesis-generating fashion, using PPI.

METHODS

Stimuli

In this study, a unique set of stimuli was created to obtain examples of naturalistic facial expressions. Forty models were recruited from the psychology undergraduate student population and were filmed in a dedicated studio. All models were asked to remove facial piercings, earrings, and headgear, and none had any facial hair. The models were filmed sitting down against a uniform white background at a distance of 1.5 m. They were shown examples of prototypical facial expressions from the Ekman and Friesen (1976) collection and asked to use these as a reference guide when posing the expressions. They were also encouraged to imagine personal situations to evoke these emotions. The models started from a neutral expression and proceeded to each of five basic emotional expressions (happiness, anger, fear, disgust, and surprise); speech movements were also recorded by filming the models counting from 1 to 10. A Canon ZR960 video camera was used to capture the video stream in color, which was then transferred to a Dell PC for off-line editing. Each recording session lasted approximately 10 min. Windows Media Player was used to edit the video stream and create 2.5-sec clips for each expression and for the speech category for every model (image size = 640 × 480 pixels, frame rate = 30 frames/sec). Static stimuli were then created from a screenshot of the final frame of each expression and speech clip.
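The study used Windows Media Player for this editing step; purely as an illustration, the same cut-and-screenshot operation can be scripted, for example with the third-party moviepy package (v1.x API; file names and cut times below are hypothetical):

```python
from moviepy.editor import VideoFileClip  # third-party: pip install moviepy (v1.x)

# Cut a 2.5-sec expression excerpt from a model's raw footage and
# save the final frame as the corresponding static stimulus.
raw = VideoFileClip("model_01_raw.avi")              # hypothetical file name
clip = raw.subclip(12.0, 14.5)                       # 2.5-sec excerpt (times illustrative)
clip.write_videofile("model_01_happy.mp4", fps=30)   # 30 frames/sec, as in the study
clip.save_frame("model_01_happy_static.png",
                t=clip.duration - 1 / 30)            # screenshot of the last frame
```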

Seventy additional participants from the psychology undergraduate student population were recruited to rate both the video stimuli and their static exemplars to ensure that these stimuli depicted recognizable emotional expressions. Thirty participants rated the static stimuli, and 40 rated the dynamic stimuli. They performed a five-alternative forced-choice task in which they identified each facial expression shown as one of the following: angry, happy, fear, disgust, or surprise. Participants also evaluated emotional intensity on a 10-point Likert scale. They were instructed to “rate how intense the emotional expression is,” where 1 = very low emotional intensity and 10 = very high emotional intensity. Both dynamic and static stimuli were displayed for 2.5 sec, and participants were given 5 sec to respond to each stimulus before proceeding.

The number of correct responses within each emotion category was calculated across participants for both the dynamic and static conditions and expressed as a percentage of the total number of stimuli shown within each facial display condition. Data were then analyzed using a mixed ANOVA with the percentage of correct responses for each Facial Display as a within-participant factor with five levels (angry, happy, fear, disgust, and surprise) and Motion category (static or dynamic) as a between-participant factor. There was a significant main effect of Motion, F(1, 69) = 29.21, p < .001, wherein dynamic facial expressions were recognized significantly better than static faces. There was also a main effect of Facial Display, F(4, 276) = 65.6, p < .001. Post hoc contrast tests revealed that happy expressions were recognized significantly better than all other expressions at p < .001: angry, F(1, 69) = 78.5; fear, F(1, 69) = 228.23; disgust, F(1, 69) = 28.663; and surprise, F(1, 69) = 71.36. A selection of these happy and angry facial expression stimuli was then used in the fMRI study. Happy expressions were correctly recognized on 95% of trials in the static condition and 100% in the dynamic condition; angry expressions were correctly recognized on 75% of trials in the static condition and 85% in the dynamic condition (see table in Supplementary Data for a full list of these behavioral results).
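The analysis software is not named in the text; as a minimal sketch of this mixed ANOVA under assumed column names (a hypothetical long-format file "expression_ratings.csv"), the third-party pingouin package could be used:

```python
import pandas as pd
import pingouin as pg            # third-party: pip install pingouin
from scipy.stats import ttest_rel

# Hypothetical long-format table: one row per rater x expression category,
# with the percentage of correctly identified stimuli.
# Columns: rater, motion ('static' or 'dynamic'),
#          display ('angry', 'happy', 'fear', 'disgust', 'surprise'),
#          pct_correct
ratings = pd.read_csv("expression_ratings.csv")

# Mixed ANOVA: Facial Display within raters, Motion between groups.
aov = pg.mixed_anova(data=ratings, dv="pct_correct",
                     within="display", subject="rater",
                     between="motion")
print(aov[["Source", "DF1", "DF2", "F", "p-unc"]])

# Example follow-up contrast within the dynamic group: happy vs. angry.
dyn = ratings[ratings.motion == "dynamic"].pivot(
    index="rater", columns="display", values="pct_correct")
print(ttest_rel(dyn["happy"], dyn["angry"]))
```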

Mean intensity values were also calculated across participants for each facial display, across both the dynamic and static conditions. An ANOVA was again used to analyze the intensity ratings, with the judged intensities of the correctly identified emotions for each Facial Display as a within-participant factor and Motion category (static or dynamic) as a between-participant factor. There was a significant main effect of Motion, F(1, 69) = 19.03, p < .001, wherein static faces were rated as more intense than dynamic faces. There was also a significant main effect of Facial Display, F(4, 272) = 67.35, p < .001. Post hoc contrast tests revealed that happy facial expressions were rated as significantly more intense than all other expressions (see table in Supplementary Data for a full list of these results). The angry and happy expressions with the highest intensity ratings were then selected for the fMRI study (see Supplementary Data for examples of the stimuli used).

fMRI Participants

Fourteen healthy self-reported right-handed volunteers (six men) with normal or corrected-to-normal vision (mean age = 28.3 years, SD = 3.67 years) gave full written informed consent to take part in the study, which was approved by the Aston University Human Science Ethical Committee.

Experimental Design and Imaging Paradigm

A sample of 24 stimuli (12 dynamic and 12 corresponding static images) was selected for the fMRI experiment on the basis of the highest intensity ratings and correct identification of the target affect, as described above. Two emotion categories, angry and happy, were included, and a speech category served as a control for nonaffective facial motion. Four different stimuli were presented in each of the three categories in the dynamic condition and, likewise, in the static condition. The identities were matched across the dynamic and static conditions, as the static stimuli were created from a screenshot of the final frame of each dynamic excerpt, including the speech controls.

Each stimulus was presented for 3 sec within a block of eight stimuli of the same condition. Hence, there were six different blocks of 24-sec duration (dynamic angry, static angry, dynamic happy, static happy, dynamic speech, and static speech). A session of 288 sec consisted of 24-sec blocks of no visual stimulation (fixation cross) alternating with the six 24-sec blocks of visual stimulation. Blocks were presented in a pseudorandom order according to a Latin square design within each session, and there were six sessions in total (1728 sec). Participants performed a 1-back memory task on the individual identity within each block, making all responses via a Lumina response pad. This task was designed to maintain vigilance and to control for attention, which is known to modulate BOLD signals in many of the neural areas included in this study (see, e.g., Beauchamp, Lee, Haxby, & Martin, 2002).
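The exact randomization routine is not given; the sketch below shows one conventional way to build such pseudorandom session orders from a cyclic Latin square, with fixation blocks interleaved as described (condition labels are from the text; everything else is illustrative):

```python
import random

CONDITIONS = ["dynamic angry", "static angry",
              "dynamic happy", "static happy",
              "dynamic speech", "static speech"]

def latin_square_orders(conditions):
    """Cyclic Latin square: each condition appears once per row (session)
    and once per serial position across rows."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

def build_session(block_order, block_s=24):
    """Interleave fixation baselines with stimulation blocks:
    fixation, block, fixation, block, ... (12 x 24 sec = 288 sec)."""
    timeline = []
    for block in block_order:
        timeline.append(("fixation", block_s))
        timeline.append((block, block_s))
    return timeline

orders = latin_square_orders(CONDITIONS)
random.shuffle(orders)               # pseudorandom assignment of rows to sessions
for session, order in enumerate(orders, start=1):
    print(f"session {session}: {order}")
print(build_session(orders[0])[:4])  # first two fixation/block pairs
```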

Imaging Protocol

MR data were acquired on a 3-Tesla Siemens Magnetom Trio scanner (Erlangen, Germany) using an eight-channel birdcage head coil. A gradient echo-planar imaging (EPI) sequence was used to acquire 44 contiguous 3-mm-thick axial slices per whole-brain volume in one time series of 576 scans, with the following parameters: echo time = 30 msec, repetition time = 3000 msec, field of view = 192 mm, matrix = 64 × 64, in-plane resolution = 3 × 3 mm. A high-resolution MPRAGE anatomical image with 1 × 1 × 1 mm voxel resolution was collected at the end of each scanning session. Visual stimuli were presented using Presentation software (Neurobehavioral Systems, Inc., Albany, CA) and projected via LCD onto a screen located at the back of the scanner bore behind the participant's head. Participants viewed the stimuli through a mirror mounted above their eyes on the head coil.

Image Analysis

Subtractive Analysis

Data were analyzed using SPM2 software (Wellcome Department of Cognitive Neurology, London, U.K.; www.fil.ion.ucl.ac.uk/spm). All functional volumes were realigned to the first volume to correct for head motion. Functional images were then spatially normalized into standard stereotactic space using the Montreal Neurological Institute EPI template, resampled to a voxel size of 3 × 3 × 3 mm, and spatially smoothed using a 7-mm FWHM Gaussian kernel to facilitate group analysis. A high-pass filter with a cutoff of 1/128 Hz was used to eliminate low-frequency components. Block onsets were modeled as boxcars convolved with a canonical hemodynamic response function. The following four planned contrasts were calculated: all dynamic versus all static faces, dynamic angry versus static angry faces, dynamic happy versus static happy faces, and dynamic speech versus static speech faces, each producing a statistical parametric map of the t statistic [SPM(t)]. The contrast images for each participant and each comparison were then entered into a one-sample t test for random effects analysis. Voxels were identified as significantly activated if they reached a threshold of p < .001 (uncorrected) with a spatial extent of at least 7 voxels, corrected for multiple comparisons across the entire brain at a threshold of p < .05.
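As an illustration of this regressor construction (not the authors' SPM2 code; the double-gamma parameters below only approximate SPM's canonical HRF), a boxcar for one condition's blocks can be convolved with the HRF as follows:

```python
import numpy as np
from scipy.stats import gamma

TR = 3.0        # repetition time in seconds
N_SCANS = 96    # one 288-sec session at TR = 3 sec

def double_gamma_hrf(tr, duration=30.0):
    """Approximation to SPM's canonical HRF: a peak gamma minus a
    scaled undershoot gamma, sampled at the TR and normalized."""
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)           # response peaking around 5-6 sec
    undershoot = gamma.pdf(t, 16)    # late undershoot
    hrf = peak - undershoot / 6.0
    return hrf / hrf.sum()

def block_regressor(onsets_s, block_s, tr, n_scans):
    """Boxcar for a condition's blocks, convolved with the HRF."""
    boxcar = np.zeros(n_scans)
    for onset in onsets_s:
        start = int(onset / tr)
        boxcar[start:start + int(block_s / tr)] = 1.0
    return np.convolve(boxcar, double_gamma_hrf(tr))[:n_scans]

# e.g. a condition whose 24-sec blocks start at 24 sec and 168 sec;
# this column would then enter the GLM design matrix.
reg = block_regressor([24, 168], block_s=24, tr=TR, n_scans=N_SCANS)
```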

ROIs were defined for each participant based on the contrast of all faces versus baseline fixation with a p value of p < .001, uncorrected, and coordinates were verified against previous face perception studies (Fairhall & Ishai, 2007; Haxby et al., 2000). These included the IOG, STS, amygdala, and IFG. Percent signal change data were calculated by extracting the time series at the sites of peak activation within each of these ROIs for each participant (spherical volumes of 6-mm radius) and then subtracting the mean baseline signal from the activation periods. A repeated measures ANOVA with two within-participant factors, Motion (dynamic and static) and Facial Display (angry, happy, and speech), was then used to assess the differences in the amount of signal change within each ROI.
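A minimal sketch of the percent signal change computation follows (assuming the conventional normalization by the mean baseline signal, which the text does not spell out; the index layout and values are illustrative):

```python
import numpy as np

def percent_signal_change(roi_ts, active_scans, baseline_scans):
    """Percent signal change for an ROI time series (here, the mean
    signal over voxels in a 6-mm sphere around the peak voxel),
    comparing activation blocks with the fixation baseline."""
    roi_ts = np.asarray(roi_ts, dtype=float)
    baseline = roi_ts[baseline_scans].mean()
    return 100.0 * (roi_ts[active_scans].mean() - baseline) / baseline

# Example: with alternating 24-sec fixation and stimulation blocks at
# TR = 3 sec, fixation occupies scans 0-7 of each 16-scan cycle and
# the stimulation block scans 8-15.
ts = np.random.default_rng(0).normal(1000, 5, 96)   # toy ROI signal
fix = np.concatenate([np.arange(c, c + 8) for c in range(0, 96, 16)])
act = np.concatenate([np.arange(c + 8, c + 16) for c in range(0, 96, 16)])
print(percent_signal_change(ts, act, fix))
```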

Connectivity Analysis

Connectivity analysis in the right hemisphere was carried out as follows. First, the right IOG (40 −80 −4) was chosen as the source region to interrogate the core face perception system under the dynamic and static conditions. The time series of activity in the right IOG was extracted for each participant based on a sphere of 6-mm radius centered on the most significant voxel revealed in the ROI contrasts. The PPI analysis produced an interaction term between the right IOG time series and the psychological condition (i.e., dynamic vs. static). Before this interaction term was created, the BOLD signal was deconvolved with a model of the hemodynamic response function so that the interaction was represented at the neuronal level. The effect of the interaction term was then evaluated using the contrast [1 0 0], wherein the first column represents the interaction term, the second represents the psychological variable (i.e., dynamic vs. static), and the third represents the time series of the source region. Individual contrast images were created for each participant and used to perform a second-level random effects analysis (using a one-sample t test) with a statistical threshold of p < .001 (uncorrected) and an extent threshold of 7 voxels per cluster. The same procedure was repeated with the other areas of interest, that is, the right STS, amygdala, and IFG, chosen as source regions.
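Schematically, the three-column PPI design described above can be sketched as follows (a sketch of the standard PPI regressor construction, assuming the deconvolution step has already yielded a neuronal-level seed signal, as SPM2 computes internally; all variable names are illustrative):

```python
import numpy as np

def ppi_regressors(seed_neuronal, psych, hrf):
    """Build the three columns of a PPI design matrix: the interaction,
    the psychological variable, and the seed time series.

    seed_neuronal : estimated neuronal signal of the seed region
                    (assumed already obtained by deconvolving the
                    BOLD time series with the HRF)
    psych         : condition vector, e.g. +1 dynamic / -1 static
    hrf           : sampled hemodynamic response function
    """
    interaction = seed_neuronal * psych              # product at the neuronal level
    n = len(psych)
    # Reconvolve each neuronal-level signal back to the BOLD level.
    ppi = np.convolve(interaction, hrf)[:n]
    psych_bold = np.convolve(psych, hrf)[:n]
    seed_bold = np.convolve(seed_neuronal, hrf)[:n]
    return np.column_stack([ppi, psych_bold, seed_bold])

# The contrast [1 0 0] then tests the interaction column alone,
# controlling for the main effects of condition and seed activity.
contrast = np.array([1.0, 0.0, 0.0])
```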

RESULTS

Dynamic versus Static Facial Expressions

As predicted, the contrast between dynamic and static facial expressions revealed significant activation in bilateral STS and bilateral MTG. In addition, bilateral IFG and the right amygdala were also significantly activated. The contrast between dynamic and static angry faces also revealed significant activation in bilateral STS, bilateral MTG, and the right amygdala, but also in bilateral middle occipital gyri (MOG) and the right insula. Dynamic happy faces compared with static happy faces revealed significant activation again in bilateral STS and right MTG, but also in left MOG and left inferior temporal gyrus. The comparison of the response to dynamic versus static speech revealed significant activation again in bilateral STS and MTG, along with bilateral MOG, but also bilaterally in middle frontal gyri (MFG) and precentral gyri (PrCG) (Table 1).

Table 1. 

Brain Regions Showing Significant Activations

Region    Cluster Size (mm)    Z Score    x, y, z
(A) All Dynamic versus All Static Faces 
R STS (BA 22) 1986 5.46 56, −40, 10 
R MTG (BA 22) 4.81 50, −36, 2 
R MTG/V5 (BA 37) 4.52 48, −62, 4 
L MOG/V5 (BA 19) 413 4.57 −42, −76, 4 
L STS (BA 22) 4.48 50, −58, 16 
L MTG (BA 39) 4.43 −42, −58, 8 
L MFG (BA 6) SMA 42 4.02 −48, 2, 56 
R MFG (BA 6) 63 3.93 48, 2, 42 
R PrCG (BA 6) 3.35 44, −2, 48 
R IFG (BA 47) 10 3.29 52, 24, −6 
R IFG (BA 45) 17 3.21 56, 22, 14 
L IFG (BA 47) 3.18 −38, 24, 2 
R AMG (BA 34) 3.16* 20, −8, −16 
 
(B) Angry Dynamic versus Angry Static Faces 
R MOG (BA 19) 538 4.98 54, −70, −6 
R MTG (BA 37) 4.32 58, −62, 0 
R STS (BA 22) 3.35 50, −58, 16 
R STS (BA 22) 259 4.47 54, −40, 8 
L MOG/V5 (BA 19) 257 4.26 −46, −74, 4 
L MTG (BA 19) 3.52 −44, −62, 16 
L MTG (BA 21) 54 3.78 −58, −40, 2 
L STS (BA 22) 3.47 −58, −44, 12 
R INS 3.5* 48, −40, 22 
R AMG 11 3.25* 20, −8, −16 
 
(C) Speech Dynamic versus Speech Static Faces 
L MTG (BA 19) 489 4.99 −46,−62, 14 
L STS (BA 21) 4.3 −60, −24, −2 
L STS (BA 22) 4.12 −60, −44, 10 
R MTG (BA 21) 646 4.48 64, −38, 0 
R STS (BA 22) 4.42 68, −36, 8 
L MFG (BA 9) 172 4.28 −52, 18, 30 
L MFG (BA 8) 3.35 −48, 10, 42 
R MFG (BA 6) 79 4.23 46, 65, 8 
L IFG (BA 6) 33 4.11 −48, 18, −6 
L MOG/V5 (BA 19) 122 4.01 −42, −76, 4 
R MOG (BA 19) 88 4.01 54, −70, −8 
R MFG (BA 9) 182 3.97 52, 16, 30 
R PrCG (BA 9) 3.4 44, 24, 34 
 
(D) Happy Dynamic versus Happy Static Faces 
R STS (BA 41) 517 4.92 46, −40, 6 
R MTG (BA 22) 4.44 58, −44, 4 
R STS (BA 22) 4.39 62, −32, 8 
L STS (BA 22) 71 3.94 −50, −48, 12 
L MOG/V5 (BA 19) 22 3.51 −44, −76, 4 
L ITG (BA 37) 11 3.33 52, −72, 0 

Coordinates indicate local maxima in Talairach space. L = left; R = right; AMG = amygdala; INS = insula; ITG = inferior temporal gyrus. Clusters are significant at p < .05 after correction for multiple comparisons. Multiple peaks within a cluster are shown on subsequent lines.

*Clusters significant at p < .05 after small volume correction.

ROI Analysis

In comparing the dynamic and static conditions within the left and right IOG, no significant differences were found in the amount of signal change; that is, both conditions exhibited a similar response pattern for all faces. In both the left and right STS, there was a significant main effect of Motion, with a significantly greater response to dynamic faces, F(1, 13) = 22, p < .05 and F(1, 13) = 6.63, p < .05, respectively, but no main effect of Facial Display. Conversely, the effect of Motion was not significant in either the left or right amygdala, but there was a significant main effect of Facial Display, F(2, 26) = 9.92, p < .05, and F(2, 26) = 3.99, p < .05, respectively.

In the left amygdala, post hoc contrast tests revealed no significant difference between the angry and happy conditions, but a significantly greater increase in signal strength to displays of anger relative to speech, F(1, 13) = 20.711, p < .05, and to happy expressions relative to speech, F(1, 13) = 7.33, p < .05. Similarly, in the right amygdala, post hoc contrast tests revealed a significantly greater increase for angry expressions compared with speech, F(1, 13) = 4.68, p < .05, and for happy expressions compared with speech, F(1, 13) = 5.74, p < .05, but no significant difference in signal change between the angry and happy conditions. Thus, the left and right amygdalae exhibited similar bilateral responses, with greater responses to angry and happy expressions relative to speech. In the left and right IFG, as in the left and right IOG, no significant differences were found for Motion or Facial Display. These within-ROI ANOVAs, then, indicate STS involvement in processing motion but not affect, and amygdala involvement in differential processing of affect but not motion; no such differences were shown by the ANOVAs on the IOG and the IFG (see Table 2 for a list of ROI locations).

Table 2. 

Location of ROIs within the Face Perception Network

Region    n    Mean Coordinates (x, y, z)
L IOG 14 −42 (2) −84 (1) −2 (3) 
R IOG 14 40 (3) −80 (2) −4 (2) 
L STS 14 −52 (2) −54 (2) 12 (1) 
R STS 14 52 (2) −54 (1) 12 (1) 
L AMG 12 −22 (2) −6 (1) −18 (2) 
R AMG 13 20 (2) −6 (2) −18 (2) 
L IFG −46 (3) 18 (2) 22 (2) 
R IFG 10 52 (3) 26 (3) 18 (2) 

Coordinates are presented in Talairach space (Talairach & Tournoux, 1988). L = left; R = right; AMG = amygdala. n indicates the number of participants who showed significant activation in each region. SEMs are indicated in parentheses.

Connectivity Analysis

Using PPI analysis, the effective connectivity between different regions within the face perception network was examined. PPI analysis provides a means of assessing possible interactions between selected ROIs, within the face perception network, in response to dynamic and static faces. A significant PPI indicates that anatomical connections are differentially engaged as a function of psychological context, in this case, viewing dynamic or static faces (Friston et al., 1997).

IOG

A seed voxel was first placed in the right IOG (40, −80, −4), and the regions covarying with it when all dynamic faces were compared with all static faces were examined (see Table 3 and Figure 2A; only regions within the right hemisphere are reported). This revealed a significant correlation with the right MTG and STS. When the dynamic and static displays of anger were compared, there was a significant correlation between the right IOG and the right MOG, STS, MFG, and superior frontal gyrus (SFG; see Table 3 and Figure 3A). Comparison of the dynamic and static happy facial expressions revealed a significant correlation with the right IFG and MFG (see Table 3 and Figure 3B). Finally, when activation in response to the dynamic and static displays of speech was examined, a significant correlation was found between the right IOG and the right SFG (see Table 3 and Figure 3C).

Table 3. 

Brain Regions Showing Effective Connectivity with Right IOG

Region    Cluster Size (mm)    Z Score    x, y, z
(A) All Dynamic versus All Static Faces 
R MTG (BA 22) 28 3.35 60, −40, 6 
R STS (BA 22) 44 3.28 54, −46, 12 
 
(B) Angry Dynamic versus Angry Static Faces 
R SFG (BA 6) 12 3.87 4, 2, 70 
R MOG (BA 18) 24 3.84 46, −78, −8 
R STS (BA 22) 16 3.82 56, −40, 12 
R MFG (BA 6) 3.37 46, 4, 44 
 
(C) Happy Dynamic versus Happy Static Faces 
R IFG (BA 44) 26 3.7 52, 16, 10 
R MFG (BA 6) 22 3.28 38, 2, 50 
 
(D) Speech Dynamic versus Speech Static Faces 
R SFG (BA 6) 31 3.43 4, 2, 70 

Coordinates indicate local maxima in Talairach space. L = left; R = right. Multiple peaks within a cluster are shown on subsequent lines.

STS

Next, a seed voxel was placed in the dorsal pathway, in the right STS (52, −54, 12). Again, all dynamic facial expressions were compared with all static facial expressions (see Table 4 and Figure 2B; only regions in the right hemisphere are reported), which revealed a significant correlation between the right STS and the right lingual gyrus, IOG, MTG, SFG, and PrCG. The comparison of the dynamic and static angry expressions revealed a significant correlation with the right lingual gyrus, IOG, and MFG (see Table 4 and Figure 3A), whereas the comparison of dynamic and static displays of happiness revealed a significant correlation with the right lingual gyrus only (see Table 4 and Figure 3B). When the dynamic and static displays of speech were compared, activation in the right STS was significantly correlated with activation in the right lingual gyrus and MOG (see Table 4 and Figure 3C).

Table 4. 

Brain Regions Showing Effective Connectivity with Right STS

Region    Cluster Size (mm)    Z Score    x, y, z
(A) All Dynamic versus All Static Faces 
R LiG (BA 17) 380 4.74 12, −90, 2 
R IOG 129 4.1 50, −80, −4 
R MTG (BA 22) 107 3.26 50, −40, 6 
R SFG (BA 6) 31 3.75 6, 14, 50 
R PrCG (BA 6) 77 3.7 50, 2, 50 
 
(B) Angry Dynamic versus Angry Static Faces 
R LiG (BA 17) 211 5.31 6, −92, −2 
R MFG (BA 6) 94 4.02 46, 6, 44 
R IOG 34 3.82 52, −80, −4 
 
(C) Happy Dynamic versus Happy Static Faces 
R LiG (BA 17) 311 4.9 8, −88, 4 
 
(D) Speech Dynamic versus Speech Static Faces 
R LiG (BA 17) 565 3.95 14, −92, 2 
R MOG (BA 18) 54 3.48 22, −98, 22 

Coordinates indicate local maxima in Talairach space. L = left; R = right; LiG = lingual gyrus. Multiple peaks within a cluster are shown on subsequent lines.

Amygdala

The next seed voxel was placed in the extended system, in the right amygdala (20, −6, −18), and again the regions covarying under the condition of all dynamic versus all static facial expressions were examined (see Table 5 and Figure 2C; only regions in the right hemisphere are reported). This revealed a significant correlation between activation in the right amygdala and the right lingual gyrus, MOG, STS, FG, cingulate gyrus, IFG, MFG, SFG, and PrCG. Comparing activation in response to dynamic and static angry facial expressions revealed significant correlations with the right MTG, cingulate gyrus, IFG, and MFG (see Table 5 and Figure 3A). When activation in response to the dynamic and static facial expressions of happiness was examined, a significant correlation with the right PrCG was found (see Table 5 and Figure 3B). When the dynamic and static displays of speech were compared, activation in the right amygdala was significantly correlated with activation in the right MTG and cingulate gyrus (see Table 5 and Figure 3C).

Table 5. 

Brain Regions Showing Effective Connectivity with Right Amygdala

Region    Cluster Size (mm)    Z Score    x, y, z
(A) All Dynamic versus All Static Faces 
R STS (BA 22) 442 4.03 58, −44, 12 
R SFG (BA 6) 106 3.92 4, 12, 48 
R LiG (BA 18) 425 3.8 8, −72, 4 
R MOG (BA 18) 3.66 10, −90, 12 
R MFG (BA 46) 78 3.76 46, 18, 22 
R FG (BA 19) 23 3.74 22, −60, −10 
R IFG (BA 44) 17 3.63 62, 8, 18 
R CiG (BA 30) 3.54 12, −66, 8 
R PrCG (BA 6) 3.42 54, −2, 42 
 
(B) Angry Dynamic versus Angry Static Faces 
R IFG (BA 44) 52 4.06 60, 10, 14 
R CiG (BA 23) 17 3.98 10, −28, 28 
R IFG (BA 46) 67 3.94 54, 32, 10 
R MTG (BA 22) 97 3.8 62, −38, 8 
R MFG (BA 6) 23 3.5 28, −8, 58 
 
(C) Happy Dynamic versus Happy Static Faces 
R PrCG (BA 6) 21 3.71 49, 0, 52 
 
(D) Speech Dynamic versus Speech Static Faces 
R CiG (BA 23) 33 4.24 4, −10, 26 
R MTG (BA 22) 66 3.65 58, −44, 4 

Coordinates indicate local maxima in Talairach space. L = left; R = right; LiG = lingual gyrus; CiG = cingulate gyrus. Multiple peaks within a cluster are shown on subsequent lines.

IFG

Another seed voxel was placed in the extended face perception system, in the right IFG (52, 26, 18), and, as before, all dynamic and static facial expressions were compared (see Table 6 and Figure 2D; only regions in the right hemisphere are reported), revealing a significant correlation between the right IFG and the right MOG, MTG, FG, and MFG. The comparison of the dynamic and static angry expressions revealed a significant correlation between the right IFG and the right amygdala, cingulate gyrus, and SFG (see Table 6 and Figure 3A), whereas the comparison of the dynamic and static expressions of happiness revealed a correlation with the right FG only (see Table 6 and Figure 3B). The final comparison, of activation in response to the dynamic and static displays of speech, revealed a significant correlation between the right IFG and the right lingual gyrus and MTG (see Table 6 and Figure 3C).

Table 6. 

Brain Regions Showing Effective Connectivity with Right IFG

Region    Cluster Size (mm)    Z Score    x, y, z
(A) All Dynamic versus All Static Faces 
R MFG (BA 6) 122 3.56 44, 0, 60 
R MOG (BA 19) 172 3.4 52, −68, 8 
R MTG (BA 19) 2.72 50, −78, 14 
R FG (BA 37) 41 3.23 44, −56, −18 
R MTG (BA 22) 28 3.19 58, −40, 6 
 
(B) Angry Dynamic versus Angry Static Faces 
R AMG 11 3.59 18, −2, −18 
R CiG (BA 30) 29 3.45 6, −52, 16 
R SFG (BA 6) 12 3.43 12, 0, 66 
 
(C) Happy Dynamic versus Happy Static Faces 
R FG (BA 37) 34 2.76 26, −60, −8 
 
(D) Speech Dynamic versus Speech Static Faces 
R LiG (BA 18) 135 4.37 10, −96, −8 
R MTG (BA 22) 59 3.84 62, −40, 4 

Coordinates indicate local maxima in Talairach space. L = left; R = right; AMG = amygdala; CiG = cingulate gyrus; LiG = lingual gyrus. Multiple peaks within a cluster are shown on subsequent lines.

Left Hemisphere

An additional and complementary connectivity analysis was performed in the left hemisphere, comparing dynamic with static facial expressions and using seed regions equivalent to those described above. This analysis revealed broadly similar results: the left IOG was correlated with MOG, STS, and IFG; the left STS showed significant correlations with IOG, MOG, PrCG, and SFG; the left amygdala was correlated with the left lingual gyrus, cingulate gyrus, PrCG, and postcentral gyrus; and the left IFG was correlated with the left MOG, STS, MFG, SFG, and PrCG (see Tables 7–10).

Table 7. 

Brain Regions Showing Effective Connectivity with Left IOG

Region    Cluster Size (mm)    Z Score    x, y, z
(A) All Dynamic versus All Static Faces 
L STS (BA 39) 68 4.09 −50, −52, 14 
L IFG (BA 47) 29 3.79 −46, 16, −8 
L MOG (BA 18) 68 3.55 −16, −96, 16 
 
(B) Angry Dynamic versus Angry Static Faces 
L PrCG (BA 6) 41 4.52 −58, 4, 18 
L SFG (BA 6) 107 3.74 0, 0, 70 
L STS (BA 39) 143 3.84 −46, −52, 6 
L LiG (BA 18) 65 3.62 −4, −90, −18 
L IFG (BA 47) 18 3.53 −48, 16, −6 
 
(C) Happy Dynamic versus Happy Static Faces 
L MOG (BA 19) 150 3.66 −20, −96, 14 
L FG (BA 37) 11 3.43 −40, −54, −18 
 
(D) Speech Dynamic versus Speech Static Faces 
L STS (BA 22) 147 4.43 −56, −38, 10 
L MTG (BA 39) 3.58 −54, 56, 14 
L PrCG (BA 6) 85 4.15 −50, −2, 54 

Coordinates indicate local maxima in Talairach space. L = left; R = right; LiG = lingual gyrus. Multiple peaks within a cluster are shown on subsequent lines.

Table 8. 

Brain Regions Showing Effective Connectivity with Left STS

Region    Cluster Size (mm)    Z Score    x, y, z
(A) All Dynamic versus All Static Faces 
L MOG (BA 19) 217 4.96 −40, −80, 2 
L IOG (BA 18) 189 4.3 −50, −78, 0 
L PrCG (BA 6) 26 3.7 −38, −14, 68 
L SFG (BA 6) 65 3.67 −2, 8, 52 
 
(B) Angry Dynamic versus Angry Static Faces 
L IOG (BA 18) 34 4.02 −46, −80, −10 
L PoCG (BA 3) 24 3.98 −44, −18, 62 
L AMG (BA 28) 26 3.82 −18, −8, −16 
L LiG (BA 19) 3.44 −14, −60, −2 
L MOG (BA 18) 35 3.19 −10, −102, 16 
 
(C) Happy Dynamic versus Happy Static Faces 
L MOG (BA 19) 211 4.52 −40, −84, 4 
L IOG (BA 18) 41 3.53 −44, −80, −10 
 
(D) Speech Dynamic versus Speech Static Faces 
L LiG (BA 18) 120 4.07 −6, −86, −6 
L MOG (BA 18) 36 3.38 −16, −102, 16 

Coordinates indicate local maxima in Talairach space. L = left; R = right; PoCG = postcentral gyrus; AMG = amygdala; LiG = lingual gyrus. Multiple peaks within a cluster are shown on subsequent lines.

Table 9. 

Brain Regions Showing Effective Connectivity with Left Amygdala

Region    Cluster Size (mm)    Z Score    x, y, z
(A) All Dynamic versus All Static Faces 
L LiG (BA 17) 114 3.64 −26, −68, 4 
L PrCG (BA 6) 67 3.12 −40, −12, 66 
L PoCG (BA 3) 27 2.61 −40, −28, 58 
L CiG (BA 15) 15 2.61 −2, −20, 26 
 
(B) Angry Dynamic versus Angry Static Faces 
L IOG (BA 18) 28 3.92 −46, −82, −2 
L LiG (BA 17) 35 3.15 −6, −74, 0 
 
(C) Happy Dynamic versus Happy Static Faces 
L SFG (BA 6) 17 2.55 −10, 10, 72 
 
(D) Speech Dynamic versus Speech Static Faces 
L LiG (BA 17) 413 3.54 −6, −64, 4 
L PrCG (BA 14) 14 3.42 −50, −12, 58 

Coordinates indicate local maxima in Talairach space. L = left; R = right; LiG = lingual gyrus; PoCG = postcentral gyrus; CiG = cingulate gyrus. Multiple peaks within a cluster are shown on subsequent lines.

Table 10. 

Brain Regions Showing Effective Connectivity with Left IFG

Region    Cluster Size (mm)    Z Score    x, y, z
(A) All Dynamic versus All Static Faces 
L SFG (BA 6) 122 3.56 0, 6, 54 
L PrCG (BA 6) 49 3.22 −38, −10, 66 
L MOG (BA 19) 48 3.17 −44, −86, 6 
L STS (BA 39) 26 3.14 −52, −52, 10 
L MFG (BA 11) 17 3.09 −30, 42, −10 
 
(B) Angry Dynamic versus Angry Static Faces 
L CiG (BA 23) 45 3.13 −4, −24, 30 
L PoCG (BA 5) 101 3.05 −36, −46, 58 
L MTG (BA 19) 47 2.92 −58, −64, 14 
L MFG (BA 10) 18 2.8 −28, 42, 26 
 
(C) Happy Dynamic versus Happy Static Faces 
L LiG (BA 17) 413 0, −86, 6 
L MOG (BA 19) 3.47 −42, −84, 10 
L MFG (BA 9) 22 3.06 −28, 42, 36 
L CiG (BA 30) 17 3.02 −22, −66, 8 
L PrCG (BA 6) 59 2.98 −32, −8, 70 
L PoCG (BA 40) 18 2.67 −42, −32, 54 
 
(D) Speech Dynamic versus Speech Static Faces 
L MOG (BA 19) 33 2.98 −46, −84, 0 

Coordinates indicate local maxima in Talairach space. L = left; R = right; CiG = cingulate gyrus; PoCG = postcentral gyrus; LiG = lingual gyrus. Multiple peaks within a cluster are shown on subsequent lines.

DISCUSSION

In the present study, a unique set of naturalistic dynamic stimuli was created and used to investigate brain activation in response to dynamic facial expressions of emotion. Specifically, regions of activation in response to dynamic angry, happy, and speech facial expressions were examined. Dynamic face stimuli have previously been shown to activate regions in the dorsal pathway of the face perception network, such as regions along MTG and STS, along with regions in the extended system, such as the amygdala and IFG (Sato et al., 2004; Kilts et al., 2003; LaBar et al., 2003). Thus, it was predicted that the dynamic facial expressions used in the present study would elicit activation in similar regions along this dorsal pathway and also recruit regions in the extended system. As expected, enhanced activation in bilateral MTG, including area V5/MT, and extending along bilateral STS, was found for the perception of all dynamic facial expressions compared with static images. Further examination of the activation within bilateral STS, through analysis of the percent signal change, revealed that the STS was sensitive specifically to motion but not to affect. Thus, STS activations are not specifically related to the perception of emotion but appear to reflect the general processing of dynamic social signals (Kilts et al., 2003; Puce & Perrett, 2003; Allison, Puce, & McCarthy, 2000).

Dynamic facial expressions also elicited activation in the right amygdala, part of the extended face perception system. Previous neuroimaging studies have reported increased amygdala activation to both static and dynamic angry expressions (Sato et al., 2004; Kilts et al., 2003; LaBar et al., 2003; see also N'Diaye, Sander, & Vuilleumier, 2009). However, although the amygdala responded maximally to the dynamic angry expressions in the whole-brain analysis, further interrogation of the amygdala response through analysis of the percent signal change revealed that bilateral amygdalae were sensitive to affect in general (i.e., angry and happy expressions). This was revealed by a significant increase in the amygdala response to angry and happy expressions but not to speech, regardless of the motion condition. This is consistent with previous studies showing greater amygdala activation to emotional expressions in general (Morris et al., 1998; Breiter et al., 1996; see also Van der Gaag, Minderaa, & Keysers, 2007) and with lesion studies showing that patients with bilateral ablation of the amygdala are impaired at processing facial affect (Young, Hellawell, Van de Wal, & Johnson, 1996). Hence, the amygdala may act as a multiprocessor of socially salient information, particularly from the face, being sensitive to affect conveyed with naturalistic facial motion and especially to dynamic displays of threat.

Like the amygdala, the right insula also showed greater activation during the observation of dynamic angry facial expressions. The insula is believed to play an important role in emotion perception through its projections to the inferior pFC and amygdala, and it modulates amygdala activity by relaying signals from cortical regions through efferent pathways (Phelps et al., 2001). Although the insular cortex is known to be involved in emotional processing, it has generally been associated with the processing of expressions of disgust (Williams et al., 2005; Phillips et al., 1997). However, Fusar-Poli et al. (2009) carried out a meta-analysis of over 100 fMRI studies of emotional face processing and reported insula activation to angry facial expressions in static images.

In addition, frontal regions including bilateral IFG, bilateral MFG, and the right PrCG were significantly activated in response to dynamic faces. MFG (BA 6; SMA) and PrCG (BA 6) are implicated in action observation and imitation and are believed to form part of the human mirror neuron system (Iacoboni et al., 1999). The mirror neuron system provides an action recognition mechanism through imitation and learning, whereby sensory representations of action are transformed into corresponding motor programs (Rizzolatti & Craighero, 2004). Recent studies have shown activation in the mirror neuron system during passive observation of mouth, hand, or foot movements (Buccino, Binkofski, & Riggio, 2004; Iacoboni et al., 1999) and during passive observation of dynamic facial expressions of emotion (Sato et al., 2004; Kilts et al., 2003). Further evidence in support of the role of the mirror neuron system in emotion recognition comes from lesion studies, where patients with lesions in frontal cortex show impairment in the recognition of emotional stimuli (Adolphs, 2002a, 2002b). Previous face perception studies have implicated the IFG, which is part of the extended face perception system, in processing facial expressions (Ishai, Schmidt, & Boesiger, 2005). However, examination of the percent signal change within bilateral IFG showed no significant effect of affect or motion, suggesting that the IFG is sensitive to all faces regardless of the emotion displayed or the level of inherent motion.

In summary, from the whole-brain analysis and the corresponding percent signal change analysis, a dynamic face perception network has emerged (see Figure 1), which extends Haxby et al.'s (2000) distributed neural system for face perception and further develops our understanding of face perception. This network includes early visual regions, such as the IOG, which is insensitive to motion or affect but sensitive to the visual stimulus; the STS, which is specifically sensitive to motion; and the amygdala in the extended system, which is recruited to process affect. Furthermore, a functional distinction can be seen in the amygdala, as it exhibits a greater response to the different affects relative to speech. The insula works in tandem with the amygdala to process the emotional content gleaned from faces, along with frontal regions, including the IFG in the extended system, which again is insensitive to motion or affect but responsive to faces in general.

Figure 1. 

The dynamic face perception network. Results of whole-brain group analysis (n = 14) for the dynamic versus static condition projected onto the surface of an inflated standard brain, showing both lateral and ventral views. Bilateral activation in MOG (1), MTG (2), and extending along STS (3) to frontal regions of the cortex, IFG (4) and MFG (5), is clearly shown in both the left (L) and right (R) hemispheres. Color bars denote t statistic. Images are thresholded at p < .05, corrected for multiple comparisons. Images were created using mri3dx software.

In addition to defining the regional brain activation patterns mediating dynamic face perception, a key motivation of this study was also to examine the associated functional relations between these brain regions. PPI connectivity analysis was used to examine the covariance of changes in activity between different brain regions within the previously defined dynamic face perception network. PPI analysis confirmed the hypothesis that brain activation in early visual regions, such as IOG, would be correlated with regions in the dorsal pathway, such as the STS, when viewing dynamic face stimuli. Furthermore, it was predicted that activation in STS would be correlated with regions in the extended network such as the amygdala and IFG when viewing dynamic facial expressions of emotion. This was also confirmed through PPI analysis.

During the perception of all dynamic stimuli, activation in right IOG was correlated with activation in right MTG and STS (see Figure 2). In Haxby et al.'s (2000) model, this is the part of the core system where early visual and motion processing takes place, with the STS involved in processing facial motion. When the effective connectivity of the STS was examined, it was likewise found to covary with IOG activation, implying reciprocal connections between these two regions. Fairhall and Ishai (2007), however, found that emotional and famous faces significantly modulated the coupling between IOG and FG, but not between IOG and STS, when static faces were processed. This suggests that the coupling between IOG and STS observed in the present study results from the specific facial motion properties of the stimuli.

Figure 2. 

Effective connectivity of the four main seed voxels in the dynamic face perception network, right hemisphere only: (A) IOG, (B) STS, (C) amygdala (AMG), and (D) IFG. Results are thresholded at p < .001 for the dynamic versus static contrast for all 14 participants. Effective connectivity was revealed between (A) the IOG seed voxel and MTG and STS; (B) the STS seed voxel and lingual gyrus (LiG), IOG, MTG, SFG, and PrCG; (C) the amygdala seed voxel and LiG, MOG, FG, STS, IFG, MFG, SFG, and PrCG; and (D) the IFG seed voxel and MOG, MTG, FG, and MFG.


STS activation also correlated with activation in early visual regions, such as the lingual gyrus, MOG, and MTG (MT/V5). Similarly, Fusar-Poli et al. (2009) reported lingual gyrus and MOG activation in response to face stimuli, independent of emotional valence. In addition, STS activity covaried with frontal regions, including SFG and PrCG, consistent with Hein and Knight's (2008) proposal that covariation between STS and premotor activity facilitates motion processing.

Activation in the right amygdala was correlated with early visual regions (right lingual gyrus and right MOG), with the STS in the dorsal pathway, and with the FG in the ventral pathway. The correlation between amygdala and FG activity is consistent with previous research proposing direct feedback signals from the amygdala to the FG during the processing of emotional faces (Vuilleumier & Pourtois, 2007). The right amygdala was also correlated with the right cingulate, right PrCG, and frontal regions including IFG, MFG, and SFG. This correlation between amygdala and frontal activation is consistent with Adolphs' (2002a, 2002b) theory of emotion recognition, in which the amygdala and the mirror neuron system work together to link perceptual representations of the face to the generation of knowledge about the particular emotion signaled. Similarly, activation in the right IFG was correlated with activation in the right MOG, MTG, and right FG but, interestingly, not with the STS.

The effective connectivity within the network of brain regions involved in processing dynamic angry expressions was also examined (see Figure 3A) and revealed that activation in IOG was correlated with STS and MOG activation. Again, the STS was reciprocally connected with the IOG, linking early stimulus perception in visual regions to motion processing in the dorsal pathway. The STS fed back into regions involved in visual processing, such as the MOG, but also connected to frontal regions, such as the MFG. Thus, the STS appears to act as a relay between regions involved in early visual perceptual processing and those involved in emotional processing. Similarly, the amygdala fed back into visual regions such as the MTG and was also effectively connected to frontal regions, including IFG, MFG, and the cingulate. The IFG and amygdala were reciprocally connected, and both were effectively connected to the right cingulate, suggesting that these structures work together to process the emotional content of the expressions.

Figure 3. 

Connectivity maps for each of the facial display conditions. Red sections indicate the seed voxels in the PPI analysis for the right hemisphere only. (A) Dynamic facial expressions of anger compared with static. (B) Dynamic facial expressions of happiness compared with static. (C) Dynamic displays of speech compared with static.


The effective connectivity within the network involved in the perception of dynamic happy faces (see Figure 3B) was investigated in the same way. In this instance, IOG activation was not correlated with other visual regions but was correlated with frontal regions, including MFG and IFG. STS activation was correlated with activation in the lingual gyrus but not with frontal regions. Amygdala activation was correlated with the right PrCG but not with visual regions. Notably, the right IFG was correlated with activation in the right FG only, whereas for dynamic angry expressions the right IFG was reciprocally connected to the amygdala. In this network, then, the FG and IFG appear to be involved in processing dynamic happy facial expressions to a greater extent than the amygdala.

Examination of the effective connectivity within the dynamic speech perception network (see Figure 3C) revealed that activation in IOG was correlated with the right SFG. STS activation was correlated with early visual regions, including the lingual gyrus and MOG. The right amygdala was correlated with the right posterior cingulate and the right MTG, and activation in IFG was correlated with activation in visual regions only. This network differs from the affective networks and is consistent with the regions reported by Calvert and Campbell (2003), who directly contrasted moving with static speech and found activation in visual regions, including bilateral IOG, MTG, and STS, along with frontal regions, including IFG, MFG, and PrCG.

In conclusion, these findings extend Haxby et al.'s (2000) model to include naturalistic dynamic facial motion. Notably, functional dissociations were found within the STS and the amygdala: the STS is sensitive to facial motion regardless of affect, whereas the amygdala is sensitive to facial affect regardless of motion. Measures of effective connectivity within this dynamic face perception network revealed that viewing dynamic facial stimuli was associated with specific increases in connectivity between early visual regions, such as IOG, and the STS in the dorsal pathway, along with coupling between the STS and the amygdala and frontal regions. It was also shown that, while similar regions are involved in processing the different dynamic stimuli, the effective connectivity within these networks varies with the type of expression being processed. Processing dynamic angry facial expressions was associated with increases in effective connectivity between IOG and STS and with reciprocal connections between the amygdala and the IFG, whereas dynamic happy expressions were associated with effective connectivity between IOG and IFG and between IFG and FG, among other regions. Both of these networks recruit regions involved in emotional processing. In contrast, viewing dynamic speech stimuli, which lack any emotional component, was associated with increases in connectivity between STS and visual regions (the lingual gyri and MOG), and the IFG was coupled with the lingual gyri and MTG, regions involved in motion processing and motor movements.

A limitation of this study that should be addressed in future research is the use of the 1-back recognition task, which may have biased face processing. In addition, the fact that happy faces were recognized more accurately than angry faces is a potential confound that should be addressed in future studies; Ekman and Friesen (1976) report that mean recognition accuracy for happy expressions reached 100%, making happiness the most easily recognized facial expression. A further confound may arise from differences in the amount of motion contained in the different categories of facial stimuli. This issue is currently being addressed through the use of a motion capture technique: preliminary results from a small sample of participants show no difference in the amount of motion between the different facial expression categories, although this needs to be extended to a larger sample (see Supplementary Data for a table of these results, and the illustrative sketch below for a simple low-level screen). Nevertheless, these results demonstrate that different dynamic facial expressions evoke distinct activation within a distributed network of cortical regions and show the importance of using naturalistic dynamic stimuli to better understand how facial expressions are processed.
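As a simple screen for this kind of motion confound, separate from the motion capture approach described above, low-level motion energy can be compared across stimulus categories. A frame-differencing sketch, under the assumption that each clip is available as a grayscale frame array (all names are hypothetical):

import numpy as np

def motion_energy(frames):
    # Crude motion index for a clip: mean absolute intensity change
    # between consecutive frames of a (n_frames, height, width) array.
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean()

# Comparing motion_energy(anger_clip), motion_energy(happy_clip), and
# motion_energy(speech_clip) would flag gross differences in the
# amount of physical motion between expression categories.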

Acknowledgments

MRI scan costs were supported by the Lord Dowding Fund for Humane Research.

Reprint requests should be sent to Elaine Foley, School of Life & Health Sciences, Aston Brain Centre, Aston University, Birmingham, B4 7ET, United Kingdom, or via e-mail: foleye@aston.ac.uk.

Note

1. Connectivity analysis was performed separately on regions in the right and left hemispheres; however, as both hemispheres showed broadly similar activations, results from the right hemisphere are primarily discussed.

REFERENCES

Adolphs, R. (2002a). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169–177.
Adolphs, R. (2002b). Recognizing emotion from facial expressions: Psychological and neurological mechanisms. Behavioral and Cognitive Neuroscience Reviews, 1, 21–61.
Allison, T., Puce, A., & McCarthy, G. (2000). Social perception from visual cues: Role of the STS region. Trends in Cognitive Sciences, 4, 267–278.
Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2002). Parallel visual motion processing streams for manipulable objects and human movements. Neuron, 34, 149–159.
Breiter, H. C., Etcoff, N. I., Whalen, P. J., Kennedy, W. A., Rauch, S. L., Buckner, R. L., et al. (1996). Response and habituation of the human amygdala during visual processing of facial expression. Neuron, 17, 875–887.
Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327.
Buccino, G., Binkofski, F., & Riggio, L. (2004). The mirror neuron system and action recognition. Brain & Language, 89, 370–376.
Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6, 641–651.
Calvert, G. A., & Campbell, R. (2003). Reading speech from still and moving faces: The neural substrates of visible speech. Journal of Cognitive Neuroscience, 15, 57–70.
Christie, F., & Bruce, V. (1998). The role of dynamic information in the recognition of unfamiliar faces. Memory and Cognition, 26, 780–790.
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
Fairhall, S. L., & Ishai, A. (2007). Effective connectivity within the distributed cortical network for face perception. Cerebral Cortex, 17, 2400–2406.
Friston, K. J., Buechel, C., Fink, G. R., Morris, J., Rolls, E., & Dolan, R. J. (1997). Psychophysiological and modulatory interactions in neuroimaging. Neuroimage, 6, 218–229.
Fusar-Poli, F. P., Placentino, A., Carletti, F., Landi, P., Allen, P., Surguladze, S., et al. (2009). Functional atlas of emotional faces processing: A voxel-based meta-analysis of 105 functional magnetic resonance imaging studies. Journal of Psychiatry & Neuroscience, 34, 418–432.
Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223–233.
Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biological Psychiatry, 51, 59–67.
Hein, G., & Knight, R. T. (2008). Superior temporal sulcus—It's my area: Or is it? Journal of Cognitive Neuroscience, 20, 2125–2136.
Hill, H., & Johnston, A. (2001). Categorising sex and identity from the biological motion of faces. Current Biology, 11, 880–885.
Humphreys, G., Donnelly, N., & Riddoch, M. J. (1993). Expression is computed separately from facial identity, and it is computed separately for moving and static faces: Neuropsychological evidence. Neuropsychologia, 31, 173–181.
Iacoboni, M., Woods, R. P., Brass, M., Bekkering, H., Mazziotta, J. C., & Rizzolatti, G. (1999). Cortical mechanisms of human imitation. Science, 286, 2526–2528.
Ishai, A., Schmidt, C. F., & Boesiger, P. (2005). Face perception is mediated by a distributed cortical network. Brain Research Bulletin, 67, 87–93.
Kamachi, M., Bruce, V., Mukaida, S., Gyoba, J., Yoshikawa, S., & Akamatsu, S. (2001). Dynamic properties influence the perception of facial expressions. Perception, 30, 875–887.
Kilts, C. D., Egan, G., Gideon, D., Ely, T., & Hoffman, J. (2003). Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. Neuroimage, 18, 156–168.
LaBar, K. S., Crupain, M. J., Voyvodic, J. T., & McCarthy, G. (2003). Dynamic perception of facial affect and identity in the human brain. Cerebral Cortex, 13, 1023–1033.
Morris, J. S., Friston, K. J., Buchel, C., Frith, C. D., Young, A. W., Calder, A. J., et al. (1998). A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain, 121, 47–57.
N'Diaye, K., Sander, D., & Vuilleumier, P. (2009). Self-relevance processing in the human amygdala: Gaze direction, facial expression, and emotion intensity. Emotion, 9, 798–806.
O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Recognizing moving faces: A psychological and neural synthesis. Trends in Cognitive Science, 6, 261–266.
Phelps, E. A., O'Connor, K. J., Gatenby, J. C., Grillon, C., Gore, J. C., & Davis, M. (2001). Activation of the human amygdala to a cognitive representation of fear. Nature Neuroscience, 4, 437–441.
Phillips, M. L., Young, A. W., Senior, C., Brammer, M., Andrew, C., Calder, A. J., et al. (1997). A specific neural substrate for perception of facial expressions of disgust. Nature, 389, 495–498.
Pike, G. E., Kemp, R. I., Towell, N. A., & Phillips, K. C. (1997). Recognizing moving faces: The relative contribution of motion and perspective view information. Visual Cognition, 4, 409–437.
Puce, A., Allison, T., Bentin, S., Gore, J. C., & McCarthy, G. (1998). Temporal cortex activation in humans viewing eye and mouth movements. Journal of Neuroscience, 18, 2188–2199.
Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological motion. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 358, 435–445.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumara, M. (2004). Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Cognitive Brain Research, 20, 81–91.
Schultz, J., & Pilz, K. S. (2009). Natural facial motion enhances cortical responses to faces. Experimental Brain Research, 194, 465–475.
Talairach, J., & Tournoux, P. (1988). Co-planar stereotaxic atlas of the human brain. Stuttgart, Germany: Thieme.
Van der Gaag, C., Minderaa, R. B., & Keysers, C. (2007). The BOLD signal in the amygdala does not differentiate between dynamic facial expressions. Social Cognitive and Affective Neuroscience, 2, 93–103.
Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia, 45, 174–194.
Wehrle, T., Kaiser, S., Schmidt, S., & Scherer, K. R. (2000). Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology, 78, 105–119.
Williams, L., Das, P., Liddell, B., Olivieri, G., Peduto, A., Brammer, M., et al. (2005). BOLD, sweat and fears: fMRI and skin conductance distinguish facial fear signals. NeuroReport, 16, 49–52.
Young, A. W., Hellawell, D. J., Van de Wal, C., & Johnson, M. (1996). Facial expression processing after amygdalotomy. Neuropsychologia, 34, 31–39.