Abstract

Facial expressions provide information about an individual's intentions and emotions and are thus an important medium of nonverbal communication. Theories of embodied cognition assume that facial mimicry and the resulting facial feedback play an important role in the perception of facial emotional expressions. Although behavioral and electrophysiological studies have confirmed the influence of facial feedback on the perception of facial emotional expressions, its influence on the automatic processing of such stimuli remains largely unexplored. The automatic processing of unattended facial expressions can be investigated by means of the visual expression-related MMN. The expression-related MMN reflects a differential ERP indexing the automatic detection of emotional changes elicited by rarely presented facial expressions (deviants) among frequently presented facial expressions (standards). In this study, we investigated the impact of facial feedback on the automatic processing of facial expressions. For this purpose, participants (n = 19) performed a centrally presented visual detection task while neutral (standard), happy, and sad (deviant) faces were presented peripherally. During the task, facial feedback was manipulated by different pen-holding conditions (holding a pen with the teeth, lips, or nondominant hand). Our results indicate that the automatic processing of facial expressions is influenced by, and thus dependent on, one's own facial feedback.

INTRODUCTION

Interpersonal relationships shape our everyday life. Within these relationships, the perception and interpretation of emotional facial expressions are indispensable. A growing body of literature emphasizes the pivotal role of facial mimicry in the perception of others' facial expressions. Accordingly, embodied cognition theories suggest that we automatically simulate or mimic the emotional expressions of others, and that the resulting somatosensory facial feedback facilitates the processing of facial emotional stimuli (Niedenthal, Barsalou, Winkielman, Krauth-Gruber, & Ric, 2005). Thus, the perception of a facial emotional expression results in a reexperience of this emotion on a perceptual, somatovisceral, and motoric level (Niedenthal, 2007), which in turn facilitates the recognition of these emotional stimuli by evoking a corresponding emotional state in ourselves (Niedenthal et al., 2005). Hence, facial feedback has been proposed to play an important role in interpreting the facial expressions of our counterparts.

The relevance of facial mimicry and the resulting facial feedback for processing facial expressions of emotion is supported by several clinical observations. Severe limitations in the recognition of facial expressions have been observed in patients with movement disorders (e.g., Parkinson's disease; Argaud, Vérin, Sauleau, & Grandjean, 2018), but also in people with mental disorders such as depression and bipolar disorder (Kohler, Hoffman, Eastman, Healey, & Moberg, 2011), schizophrenia (Kohler, Walker, Martin, Healey, & Moberg, 2010), autism spectrum disorder (Harms, Martin, & Wallace, 2010), and psychopathy (Dawel, O'Kearney, McKone, & Palermo, 2012). In these pathologies, the observed deficits in facial emotion recognition are accompanied by reduced or delayed facial mimicry (Livingstone, Vezer, McGarry, Lang, & Russo, 2016; Varcin, Bailey, & Henry, 2010; Oberman, Winkielman, & Ramachandran, 2009), consistent with a causal role of facial mimicry in the perception of facial expressions of emotion.

Experimental evidence for the more general account that facial feedback influences our affective responses comes from studies investigating the direct consequences of facial feedback manipulation (e.g., Lobmaier & Fischer, 2015; Neal & Chartrand, 2011; Oberman, Winkielman, & Ramachandran, 2007; Niedenthal, Brauer, Halberstadt, & Innes-Ker, 2001; Strack, Martin, & Stepper, 1988; Laird, 1974). In the seminal studies by Laird (1974) and later by Strack et al. (1988), participants were asked to rate the funniness of cartoons while their own facial muscle activity was systematically modulated. In the former study, participants were asked to contract their facial muscles such that they unknowingly posed either a smiling or a frowning facial expression. This manipulation influenced participants' mood as well as their funniness ratings: smiling participants felt happier and rated cartoons as funnier than those in the frowning condition (Laird, 1974). To rule out that participants recognized the emotional meaning of the facial muscle manipulation, Strack et al. (1988) introduced a new method of facial feedback manipulation: participants had to hold a pen with either the teeth, the lips, or the nondominant hand while rating the funniness of cartoons. In these conditions, holding a pen with the teeth requires contracting the musculus zygomaticus major and the musculus risorius, both also activated while smiling, whereas holding a pen with the lips requires contracting the musculus orbicularis oris and is incompatible with the contraction of the zygomaticus major and risorius. In accordance with the study by Laird (1974), holding the pen with the teeth, and thereby inducing smiling, increased funniness ratings, whereas the inhibition of smiling resulted in lower funniness ratings. Notwithstanding recent contentious debate (Noah, Schul, & Mayo, 2018; Wagenmakers et al., 2016), several studies have consistently shown that facial feedback specifically influences emotional face perception, supporting the facial feedback hypothesis of embodied emotion accounts (e.g., Lobmaier & Fischer, 2015; Sel, Calvo-Merino, Tuettenberg, & Forster, 2015; Neal & Chartrand, 2011; Oberman et al., 2007; Niedenthal et al., 2001).

Two recent studies adopted the facial feedback manipulation used by Strack et al. (1988), asking participants to hold a pen with their mouth (Niedenthal et al., 2001) or with their lips or teeth (Lobmaier & Fischer, 2015) while rating morph sequences of changing facial emotional expressions. The results indicate that a general facial muscle restriction delayed the detection of changes in emotional expressions (Niedenthal et al., 2001), whereas in the second study, the detection of emotional changes strongly depended on the pen-holding condition (Lobmaier & Fischer, 2015). In particular, induced smiling during the teeth-holding condition facilitated the detection and perception of happy facial expressions. In contrast, when smiling was inhibited during the lip-holding condition, the detection and perception of sad facial expressions were facilitated. The authors concluded that facial feedback supports the detection of intensity changes in facial expressions of emotion when these are congruent with one's own facial expression.

Only a few studies so far have tested the influence of facial feedback manipulation on the processing of emotional faces on an electrophysiological level (Davis, Winkielman, & Coulson, 2017; Sel et al., 2015). In the study by Sel et al. (2015), participants had to adopt a happy facial expression by biting on a pen or maintain a neutral facial expression by relaxing their facial muscles while they judged the intensity of facial expressions. The concurrent EEG revealed that such facial feedback manipulation modulates the N170, a face-sensitive component of the visual evoked potential. In contrast, Davis et al. (2017) attempted to disrupt the naturally produced feedback from the lower half of the facial muscles by having participants bite on chopsticks and investigated the influence on later semantic processing of facial expressions, finding that this disruption increased the N400 (which indexes access to semantic information in memory) to happy and disgusted faces. Thus, the electrophysiological results of both studies indicate that facial mimicry manipulation can influence early perceptual as well as later semantic processing of facial emotional expressions. The above-mentioned studies consistently demonstrate the important role of facial mimicry and the resulting facial feedback in the conscious processing of facial expressions of emotion. These studies investigated the relevance of facial feedback in explicit judgments of facial emotions on a behavioral as well as an electrophysiological level. However, changes in facial expressions regularly occur outside the focus of attention. Accordingly, in various everyday situations, facial expressions are processed automatically and without conscious awareness, challenging the external validity of previous investigations. Therefore, the aim of this study was to assess the influence of facial feedback manipulation on the automatic processing of facial expressions of emotion when no overt attention is allocated to the emotional stimuli.

A classical approach to investigating stimulus processing under attention-independent conditions is provided by recordings of the MMN. This negative sensory electrophysiological component is elicited by regularity violations and is considered to reflect automatic change detection processes (Näätänen, Astikainen, Ruusuvirta, & Huotilainen, 2010). Although the MMN was first observed in the auditory domain, there is clear evidence for a visual analogue, the visual MMN (vMMN; Pazo-Alvarez, Cadaveira, & Amenedo, 2003). In accordance with predictive coding theory, the vMMN represents a prediction error elicited by the mismatch between the current input and a prediction induced by representations of visual objects in memory (Winkler & Czigler, 2012). Previous studies indicate that the vMMN is sensitive to individual stimulus features such as color, orientation, and motion direction, but also to more complex stimulus characteristics such as gender category and facial emotional expression. In such studies, the MMN to facial expressions (eMMN) is measured during a visual oddball paradigm in which a stream of frequently presented faces of one emotion category (standard) is occasionally interrupted by rare emotional faces of another emotion category (deviant; for reviews, see Czigler, 2014; Pazo-Alvarez et al., 2003). The process of automatic change detection of emotional faces (as measured by the eMMN) is assumed to be emotion-sensitive. This sensitivity is indexed by a negativity bias, that is, enhanced processing (increased eMMN amplitude and/or reduced eMMN onset latency) of negative emotional deviants (such as angry, fearful, or sad faces) compared with neutral or positive deviants (neutral or happy faces; Kovarski et al., 2017; Kimura, Kondo, Ohira, & Schröger, 2012; Stefanics, Csukly, Komlósi, Czobor, & Czigler, 2012; Zhao & Li, 2006). Furthermore, several studies reveal modified eMMN characteristics in clinical populations (such as schizophrenia, mood disorders, and developmental disorders; Kremláček et al., 2016). Thus, the nonconscious change detection of facial emotional expressions by means of the eMMN appears to be a promising procedure for measuring automatic affective responsiveness to emotional faces.

METHODS

Participants

Twenty-eight individuals took part in this study. To assess current depressive symptoms and self-reported anhedonia, participants completed the Beck Depression Inventory-II (Hautzinger, Keller, & Kühner, 2006) and the German version of the Snaith–Hamilton Pleasure Scale (Franz et al., 1998). One participant was excluded from further analysis because of a reported psychiatric disease, and data of eight participants were discarded because fewer than 60% of trials remained for eMMN analysis (n = 7) or values deviated more than 3 SD from the group mean (n = 1) in at least one experimental condition, resulting in a final sample of 19 participants (eight women; mean age = 26.3 years, SD = 7.7). All remaining participants had normal or corrected-to-normal vision and reported no neurological or psychiatric diseases. Participants' characteristics are presented in Table 1. Participants were naïve to the aim of the study and signed informed consent before data collection in accordance with the Declaration of Helsinki. The study was approved by the ethics committee of the University of Magdeburg.

Table 1. 
Sample Characteristics
Measure (n = 19)   M (SD)       Range
Age                26.3 (7.7)   19–56
BDI                3.9 (2.4)    1–9
SHAPS-D            0.3 (0.9)    0–4

BDI = Beck Depression Inventory-II; SHAPS-D = German version of the Snaith–Hamilton Pleasure Scale.

Stimuli and Procedure

After EEG preparation, participants sat in a comfortable chair in a dimly lit room. Visual stimuli were presented on a gray background on a computer screen (Samsung SyncMaster SA450, 22 in.) at a viewing distance of 0.9 m. Stimuli consisted of black-and-white photographs taken from the Karolinska Directed Emotional Faces database (Lundqvist, Flykt, & Öhman, 1998). We chose 18 male (AM01, AM02, AM04, AM05, AM06, AM07, AM08, AM10, AM11, AM13, AM14, AM17, AM18, AM22, AM23, AM25, AM34, AM35) and 18 female models (AF01, AF02, AF03, AF05, AF06, AF07, AF09, AF11, AF13, AF14, AF17, AF19, AF20, AF22, AF24, AF26, AF29, AF33), each expressing three different emotions (neutral, happy, sad). To control for low-level image properties, the mean luminance and contrast of all stimuli were equated using the SHINE toolbox for MATLAB (Willenbockel et al., 2010). Stimulus presentation was controlled with Presentation software (Version 21, Neurobehavioral Systems, Inc.).
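
The following minimal Python sketch illustrates the idea behind this luminance/contrast equating (matching each image to a common mean and standard deviation, in the spirit of SHINE's luminance-matching routine); it is not the SHINE implementation, and the target values are illustrative assumptions:

```python
import numpy as np

def lum_match(images, target_mean=128.0, target_std=30.0):
    """Scale each grayscale image to a common mean luminance and contrast
    (standard deviation). Target values are illustrative, not the study's."""
    matched = []
    for img in images:
        img = img.astype(float)
        z = (img - img.mean()) / img.std()    # zero mean, unit variance
        out = z * target_std + target_mean    # impose common statistics
        matched.append(np.clip(out, 0, 255))  # keep valid 8-bit range
    return matched

# Usage: equated = lum_match([face1, face2, ...]), where each face is a
# 2-D uint8 array loaded from one of the KDEF photographs.
```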

The experiment consisted of three blocks. In each block, participants underwent a different facial muscle manipulation condition. In accordance with the study by Strack et al. (1988), facial muscle activity was manipulated by holding a pen with the teeth (innervating muscles responsible for smiling), with the lips (inhibiting muscles responsible for smiling), or with the nondominant hand (control condition). The order of these conditions was counterbalanced across participants, such that participants were assigned to one of nine possible predefined sequences of the facial mimicry manipulation conditions. To conceal the study objective, participants were told that they were taking part in a stroke study investigating the influence of paralysis on RT measurements and that, because they would serve as a control group, paralysis was simulated by holding a pen with the teeth, the lips, or the nondominant hand. Participants were fully debriefed about the study objective at the end of the experiment. Before each block, participants were carefully instructed on how to hold the pen.
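
Counterbalancing of this kind can be implemented by cycling participants through a fixed list of condition orders. A minimal sketch follows; the study's nine predefined sequences are not reproduced here, so the six full permutations serve only as an illustrative stand-in:

```python
from itertools import permutations

CONDITIONS = ('teeth', 'lips', 'hand')

# Illustrative stand-in for the study's predefined sequence list.
SEQUENCES = list(permutations(CONDITIONS))

def condition_order(participant_id):
    """Cycle participants through the predefined condition orders so that
    each order occurs approximately equally often across the sample."""
    return SEQUENCES[participant_id % len(SEQUENCES)]

# condition_order(0) -> ('teeth', 'lips', 'hand'); condition_order(1) -> ...
```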

Each block started with a familiarization task followed by three visual detection tasks (see Figure 1). Additionally, for each of the three blocks, a set of six male and six female faces was selected from the initial set of 18 male and 18 female faces. During the familiarization task, the faces to be presented in the subsequent visual detection tasks were introduced in order to exclude novelty effects on the eMMN measurements. These faces, each displaying three different emotions (happy, sad, neutral), were displayed in random order while participants rated the emotional expressions (see Figure 1).

Figure 1. 

Stimuli and procedure for one block. Each block started with a familiarization task (A), in which participants were asked to choose the best-fitting emotional expression for a face among five options displayed below the image. During the oddball sequence (B), neutral (standard), happy, or sad (deviant) face pairs were presented bilaterally around a centrally presented fixation cross for 200 msec. In the control sequence (C), only happy or sad face pairs were presented. In both sequences, presentation of face pairs was followed by an ISI of 450–600 msec. Participants were asked to focus on the fixation cross and indicate whenever the vertical or horizontal line changed its size. Fixation cross changes occurred only during the ISI and, in the oddball sequence, only before a standard stimulus.

In the following three visual detection tasks (one oddball sequence, two control sequences), participants were asked to focus on a centrally presented fixation cross (1.3°) and detect size changes in its horizontal or vertical line (1.9°) while ignoring the two bilaterally presented faces (see Figure 1). Participants responded to size changes by pressing the left or right mouse button, depending on the orientation of the changed line. Target buttons were pseudorandomly assigned, such that the response mapping (left for horizontal and right for vertical line changes, or vice versa) was counterbalanced across participants. A practice block was conducted at the start of the experiment.

Bilaterally presented face pairs covering an area of 5.4° × 7.9° were composed of one male and one female identity displaying the same emotion, presented for 200 msec and followed by an ISI of 450–650 msec. The positions of the male and female faces were randomly assigned, and identities changed from trial to trial. Fixation cross changes occurred only during the ISI and only before standard trials. In each oddball sequence, neutral faces were presented as standards and happy and sad faces as deviants. At the beginning of each oddball sequence, 10 standards were presented to establish a sensory memory trace of a neutral facial expression. Thereafter, 120 deviants (60 sad, p = .1; 60 happy, p = .1) and 480 standards (p = .8) were presented pseudorandomly, with the restriction that at least two standards were interspersed between consecutive deviants. In the following two control sequences, only happy or only sad faces were presented (102 happy, 102 sad; see Figure 1). The order of the happy and sad control sequences was pseudorandomly assigned for each block, such that it varied across the three blocks within each participant.
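
A constrained pseudorandom trial list of this kind can be generated by distributing the standards that exceed the mandatory two-standard gaps randomly across the slots between deviants. A minimal Python sketch, assuming the deviant order itself is fully random (the study's exact randomization routine is not specified):

```python
import random

def build_oddball_sequence(n_std=480, n_happy=60, n_sad=60,
                           lead_in=10, min_gap=2, seed=None):
    """Oddball trial list with at least `min_gap` standards between
    consecutive deviants, preceded by `lead_in` extra standards that
    establish the sensory memory trace."""
    rng = random.Random(seed)
    deviants = ['happy'] * n_happy + ['sad'] * n_sad
    rng.shuffle(deviants)
    n_dev = len(deviants)
    # Standards beyond the mandatory gaps, spread over n_dev + 1 slots.
    extra = n_std - min_gap * n_dev
    assert extra >= 0, "not enough standards for the required gaps"
    cuts = sorted(rng.randint(0, extra) for _ in range(n_dev))
    slots = [cuts[0]] + [b - a for a, b in zip(cuts, cuts[1:])] \
            + [extra - cuts[-1]]
    trials = ['neutral'] * lead_in
    for dev, pad in zip(deviants, slots):
        trials += ['neutral'] * (min_gap + pad) + [dev]
    trials += ['neutral'] * slots[-1]          # trailing standards
    return trials

# Usage: trials = build_oddball_sequence(seed=1)
# -> 610 trials: 10 lead-in standards + 480 standards + 120 deviants.
```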

EEG Recording and Data Analysis

EEG was recorded with Brain Vision Recorder software (Version 1.20, Brain Products GmbH) at electrode positions F3, Fz, F4, C3, Cz, C4, P7, P3, Pz, P4, P8, PO7, POz, PO8, O1, Oz, and O2 as well as the right and left mastoids, according to the international 10–20 system. Horizontal and vertical electrooculograms were recorded from two electrodes placed below and lateral to the right eye. Data were referenced online to the tip of the nose, recorded with a sampling rate of 500 Hz, and high-pass filtered online at 0.1 Hz. Impedances were kept below 5 kΩ. EEG data were processed offline using BrainVision Analyzer (Version 2.1, Brain Products GmbH). Data were re-referenced to the common average, notch-filtered (50 Hz), and band-pass filtered between 0.1 and 40 Hz using a second-order zero-phase IIR Butterworth filter (12 dB/oct). Epochs of 800 msec (including a 200-msec prestimulus interval) relative to the onset of the face pairs were extracted. Epochs with artifacts were excluded from further analyses according to predetermined rejection criteria (maximal allowed voltage step: 100 μV/msec; maximal allowed difference of values within an epoch: 500 μV; maximal/minimal allowed amplitude: ±100 μV; lowest allowed activity: 0.5 μV per 100-msec interval). As a result, data sets of eight participants were excluded from further analysis because of a loss exceeding 40% of trials. Furthermore, the first 10 trials and trials following a fixation cross change were excluded from further processing. Data were averaged for deviants (happy deviant and sad deviant) and for stimuli from the control sequences (happy control and sad control) separately for the different facial feedback manipulation conditions. Based on previous studies (Wu et al., 2017; Chang, Xu, Shi, Zhang, & Zhao, 2010; Zhao & Li, 2006) and visual inspection, data of P7/PO7 and P8/PO8 were pooled. Finally, difference waveforms were calculated separately for each facial feedback manipulation condition and emotion (happy deviant minus happy control for the happy-eMMN; sad deviant minus sad control for the sad-eMMN). Time windows for the analysis of the eMMN were selected based on previous studies (Wu et al., 2017; Csukly, Stefanics, Komlósi, Czigler, & Czobor, 2013; Stefanics et al., 2012) and on visual inspection of the grand-averaged happy- and sad-eMMN waveforms for the hand condition only. This resulted in three time windows spanning 70–140, 180–270, and 280–360 msec (see Figure 2). Within these time windows, mean amplitudes over a 20-msec interval around the most negative peak (±10 msec) of the happy- and sad-eMMNs were extracted for each facial muscle manipulation condition for further statistical analysis.
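
The analysis itself was run in BrainVision Analyzer; purely as an illustration of the final two steps (difference wave and peak-window amplitude), the following numpy sketch operates on already-averaged 1-D ERPs under the timing assumptions stated in the text:

```python
import numpy as np

FS = 500            # sampling rate (Hz)
BASELINE = 0.2      # 200-msec prestimulus interval included in each epoch
WINDOWS = [(0.07, 0.14), (0.18, 0.27), (0.28, 0.36)]  # sec post-stimulus

def emmn_peak_amplitude(erp_deviant, erp_control, window):
    """Difference wave (deviant minus emotion-matched control) and mean
    amplitude over a 20-msec interval (+/- 10 msec) centered on its most
    negative peak. ERPs are 1-D arrays for one pooled electrode site."""
    diff = erp_deviant - erp_control
    lo = int((BASELINE + window[0]) * FS)
    hi = int((BASELINE + window[1]) * FS)
    peak = lo + np.argmin(diff[lo:hi])   # most negative sample in window
    half = int(0.01 * FS)                # 10 msec on either side
    return diff[peak - half:peak + half + 1].mean()

# e.g., happy-eMMN in the early window at pooled P7/PO7 (arrays assumed):
# amp = emmn_peak_amplitude(avg_happy_deviant, avg_happy_control, WINDOWS[0])
```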

Figure 2. 

Three time windows resulting from hand condition. Electrophysiological responses to happy (upper) and sad (lower) faces for left (left) and right (right) hemisphere during the oddball (dotted black) and control (dashed gray) sequence and the resulting eMMN (black) for the hand condition. By visual inspection, three time windows (gray area) were extracted for further analyses reaching 70–140, 180–270, and 280–360 msec.

Statistical analyses were performed using IBM SPSS (Version 24). Peak amplitudes of the happy-eMMN and sad-eMMN difference waveforms were analyzed by repeated-measures ANOVAs with Hemisphere (left vs. right) × Emotion (happy vs. sad) × Facial Muscle Manipulation (hand vs. teeth vs. lips) as within-participant factors, separately for each time window. Greenhouse–Geisser adjustment was used, if necessary, to correct for violations of sphericity. For significant interactions, post hoc comparisons were conducted using paired t tests. To correct for multiple comparisons, the false discovery rate (FDR) procedure was used (Benjamini & Hochberg, 1995).
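
The Benjamini–Hochberg step-up procedure used here is easy to state concretely; a minimal self-contained sketch (the study ran it in SPSS, so this is only an illustration of the procedure, with hypothetical p values in the usage note):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of the
    p values that survive FDR correction at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)                       # ranks of sorted p values
    thresh = q * np.arange(1, m + 1) / m        # critical values i*q/m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])          # largest rank meeting p <= iq/m
        reject[order[:k + 1]] = True            # reject all smaller p values too
    return reject

# e.g., for the three post hoc condition contrasts (hypothetical values):
# reject = fdr_bh([0.014, 0.028, 0.41])  -> array([ True,  True, False])
```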

RESULTS

As shown in Figure 3, facial muscle manipulation systematically influenced the happy- and sad-eMMN. For further analysis, time windows were selected by visual inspection of the happy- and sad-eMMN in the hand condition, resulting in three time windows (70–140, 180–270, and 280–360 msec).

Figure 3. 

eMMNs for the different experimental conditions. eMMN to happy (upper) and sad (lower) faces at left (left) and right (right) hemisphere displayed for the hand (black), lip (red), and teeth (blue) condition. Gray areas represent range of analyzed time windows.

In the first time window (70–140 msec), the analysis revealed a significant main effect of Emotion, F(1, 18) = 7.057, p = .016, ηp² = .282, with a more negative amplitude for the sad-eMMN (M = −0.55, SE = 0.09) than for the happy-eMMN (M = −0.31, SE = 0.07; see Figure 4). Furthermore, the analysis revealed a significant Emotion × Facial Muscle Manipulation interaction, F(2, 36) = 3.297, p = .048, ηp² = .155, as well as a Hemisphere × Emotion × Facial Muscle Manipulation interaction, F(2, 36) = 3.510, p = .04, ηp² = .163. Post hoc comparisons demonstrated a stronger influence of facial muscle manipulation over the left hemisphere. Whereas the sad-eMMN increased during the teeth condition (M = −0.91, SE = 0.19) compared with the hand (M = −0.34, SE = 0.15; t(18) = 2.731, p = .014, p < .05 FDR corrected) and lip conditions (M = −0.49, SE = 0.16; t(18) = 2.385, p = .028, p < .05 FDR corrected), the happy-eMMN showed a trend toward the opposite effect, with a decrease during the teeth condition (M = −0.05, SE = 0.21) compared with the hand condition (M = −0.64, SE = 0.20; t(18) = −2.011, p = .06, uncorrected).

Figure 4. 

Overview of statistical effects within the first time window. (A) eMMN for happy (white) and sad (gray) faces over all facial muscle manipulation conditions. (B) eMMN for happy (left) and sad (right) faces plotted for each facial muscle manipulation condition. (C) Influence of facial muscle manipulation on happy (left) and sad (right) faces at left hemisphere. (D) Influence of facial muscle manipulation on happy (left) and sad (right) faces at right hemisphere. Facial muscle manipulation conditions: gray, hand; red, lips; blue, teeth. p ≤ .06, * p < .05 FDR corrected.

During the second time window (180–270 msec), the statistical analysis revealed a significant Emotion × Facial Muscle Manipulation interaction, F(2, 36) = 3.153, p = .05, ηp² = .149. This interaction was driven by a significant increase of the sad-eMMN during the teeth condition (M = −0.72, SE = 0.16) compared with the lip condition (M = −0.30, SE = 0.13; t(18) = 2.361, p = .03, p < .1 FDR corrected; see Figure 5). Neither main effects nor interactions were observed in the third time window.

Figure 5. 

Overview of statistical effects within second time window. Influence of facial muscle manipulation on happy (left) and sad (right) faces for different facial muscle manipulation conditions: gray, hand; red, lips; blue, teeth. *p < .05 FDR corrected.

In summary, the results demonstrate that facial muscle manipulation influenced the automatic processing of changes in emotional expressions. The activation of facial muscles responsible for smiling (teeth condition) increased the sad-eMMN in both the 70–140 msec (see Figure 4) and 180–270 msec (see Figure 5) time windows and tended to decrease the happy-eMMN in the 70–140 msec window.

DISCUSSION

This study highlights the impact of facial feedback on the automatic processing of emotional facial expressions. During a visual emotional oddball paradigm, participants' attention was directed to a centrally presented fixation cross while face pairs of divergent emotions were shown in the periphery. Facial feedback was manipulated by three facial muscle manipulation conditions: holding a pen with the teeth activated muscles responsible for smiling, whereas holding a pen with the lips inhibited these muscles; holding the pen with the nondominant hand served as a control condition, allowing free facial mimicry.

As hypothesized, the electrophysiological data revealed an effect of facial feedback manipulation on the eMMN. In particular, the activation of facial muscles responsible for smiling (teeth condition) increased the eMMN to sad faces (first and second time windows) and tended to decrease the eMMN to happy faces (first time window). No effects of facial feedback manipulation were observed for the late eMMN. However, because this is the first study to report these effects, and with a relatively small sample, future studies are needed to replicate the present results before firm conclusions can be drawn.

Generally, our data revealed visually evoked eMMN responses to facial deviants in three time intervals at posterior sites: an early interval from 70 to 140 msec, a middle interval from 180 to 270 msec, and a late interval from 280 to 360 msec. These intervals are consistent with previous literature (Wu et al., 2017; Csukly et al., 2013; Stefanics et al., 2012), confirming that visually evoked mismatch responses to changes in emotional expressions can be reliably measured within these periods. However, although Stefanics et al. (2012) reported an early eMMN for fearful faces only, we additionally found an early mismatch response (70–140 msec) to happy and sad faces, indicating that automatic face processing in general, not only for potentially threatening stimuli, starts as early as 70 msec. Implementations of eMMN paradigms differ across studies with respect to the emotion categories used as standards and deviants, the central task, and the use of an additional control block. Owing to these variations, studies provide partially diverging results, making it difficult to draw general conclusions about the timing of automatic emotional processing (Czigler, 2014). Nevertheless, several studies consistently revealed a comparably early onset of the deviant-related negativity around 110 msec (e.g., Kovarski et al., 2017; Li, Lu, Sun, Gao, & Zhao, 2012; Susac, Ilmoniemi, Pihko, Ranken, & Supek, 2010; Zhao & Li, 2006; Wei, Chan, & Luo, 2002), supporting our finding of an early detection of regularity violations in the visual system.

Effects of Facial Feedback on Happy- and Sad-eMMN

Importantly, in this study, the facial feedback manipulation differentially affected the eMMNs to happy and sad faces. The activation of muscles responsible for smiling (teeth condition) increased the sad-eMMN and decreased the happy-eMMN. These results fit well with the facial feedback hypothesis, according to which facial feedback influences ongoing emotional experience. Mood modulation by facial muscle manipulation was already observed by Laird (1974): participants asked to contract muscles responsible for smiling described themselves as happier, whereas participants asked to contract muscles activated while frowning described themselves as angrier. Further studies support the role of facial feedback in emotional experience: facial feedback manipulations influence participants' funniness ratings of cartoons (Strack et al., 1988) as well as their sadness ratings of aversive photographs (Larsen, Kasimatis, & Frey, 1992). Clinical studies on depression provide further evidence for the influence of facial mimicry on emotional experience. In recent studies, depression was treated with botulinum toxin injections to the glabellar region (Wollmer et al., 2012; Finzi & Wasserman, 2006) and to the corrugator and procerus muscles (Finzi & Rosenthal, 2014), muscles mainly activated while expressing anger, sadness, and fear. These studies found an antidepressant effect of the botulinum toxin injections, which prevent muscle contraction in these regions. In accordance with the facial feedback hypothesis, decreased negative facial expressions reduce the negative proprioceptive feedback from these regions, thereby shifting the balance toward positive feedback and improving mood.

In accordance with these studies of facial feedback manipulation, holding a pen with the teeth increases positive facial feedback and thereby reinforces a happy emotional experience, whereas holding a pen with the lips inhibits feedback from muscles responsible for smiling and thus reduces positive facial feedback and, consequently, happy emotional experience. With regard to the eMMN signal, it is conceivable that rarely presented happy and sad faces may additionally pose a mismatch to our own emotional experience. Thus, when we experience happiness (e.g., in the teeth condition), sad faces constitute a greater mismatch, whereas happy faces fit better with our present emotional experience and thus produce a smaller mismatch.

Alternatively, the influence of our emotional experience on the happy- and sad-eMMN could be explained by priming effects. It has been shown that affective priming influences emotional face processing (e.g., Hietanen & Astikainen, 2013; Hirai, Watanabe, Honda, Miki, & Kakigi, 2008). In the study by Hirai et al. (2008), the presentation of emotional facial expressions was primed with congruent or incongruent scenes. A larger P2 amplitude for fearful faces was observed when the faces were cued by fearful compared with neutral scenes and, likewise, a larger P2 for neutral compared with fearful faces when they were cued by neutral scenes. Considering that neutral faces generally elicit a larger P2 amplitude than fearful faces, the authors suggest that congruent priming of fearful faces shifts their processing toward that of neutral faces (Hirai et al., 2008). In the same vein, Hietanen and Astikainen (2013) observed an analogous effect for happy and sad faces in an earlier time window: the N170 to happy faces was increased when preceded by happy scenes, whereas the N170 to sad faces was increased when primed by negative scenes. In this study, then, the emotional experience induced by facial feedback might have acted as an affective prime: the activation of facial muscles responsible for smiling (teeth condition) reinforces positive facial feedback and constitutes a positive prime, whereas the inhibition of those muscles reduces positive feedback and constitutes a negative prime. Accordingly, the congruent positive prime (teeth condition) might have shifted the processing of happy faces toward that of neutral faces, so that happy deviants posed a smaller mismatch to the neutral standard faces. This interpretation is consistent with the degree-of-deviance effect (Czigler, Balázs, & Winkler, 2002), which indicates that the difference between standard and deviant stimuli must be large enough for visual change detection; a difference that is too small will be insufficient to elicit a vMMN. Assuming that the teeth condition and the resulting positive facial feedback shift the processing of happy faces toward that of neutral faces, the difference between neutral standards and happy deviants becomes smaller, leading to the decrease in happy-eMMN amplitude. Further research is required to investigate the influence of mood on affective priming and the subsequent processing of facial expressions of emotion.

From another perspective, simulating emotional and cognitive states of others in social communication helps us to make predictions about their emotional states and intentions (Preston & de Waal, 2002).

In light of prediction error theories, it has been proposed that our brain permanently adapts its model of the environment by comparing actual sensory inputs with predicted inputs and computing the resulting prediction error. Depending on the reliability and informational content of the actual input, prediction errors are weighted differently in updating the model; this weighting is expressed by the precision-weighted prediction error (pwPE; den Ouden, Kok, & de Lange, 2012; Friston, 2005). Such mechanisms also exist for the perception of facial emotional expressions. In a recent study, Stefanics, Stephan, and Heinzle (2019) combined computational models with fMRI measurements to investigate whether violations of different features of the same stimulus, either emotional facial expression or face color, activate different pwPEs. In contrast to unexpected color changes, unexpected changes of facial expressions of emotion elicited pwPE responses within, among other regions, the bilateral cerebellum, lingual gyrus, precuneus, left thalamus, right supramarginal gyrus, and right posterior medial frontal cortex. In particular, activation within the precuneus (Schilbach, Eickhoff, Mojzisch, & Vogeley, 2008) and the cerebellum is strongly correlated with facial mimicry during the observation of facial expressions. Thus, it might be assumed that induced positive (teeth) and negative (lips) facial feedback operates as a positive or negative prime, activates those areas, and consequently changes pwPEs to unexpected emotional changes. Further research is needed to investigate the influence of facial mimicry manipulation on pwPEs to unexpected changes of facial emotional expressions.
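
In generic predictive coding notation (a schematic formulation under common assumptions, not the specific computational model of Stefanics et al., 2019), the update of a prediction x̂ after observing an input x scales the raw prediction error by its precision π (the inverse variance of the input):

```latex
\varepsilon = x - \hat{x}, \qquad
\mathrm{pwPE} = \pi \, \varepsilon, \qquad
\hat{x} \;\leftarrow\; \hat{x} + \kappa \, \pi \, \varepsilon
```

so that reliable (high-precision) inputs drive larger model updates than unreliable ones; κ denotes a step-size constant.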

Our observation of opposite effects of facial feedback manipulation on the happy- and sad-eMMN could also be a consequence of altered encoding and retrieval of emotional information. It has been shown that emotions prime related perceptual codes in memory, leading to facilitated encoding of emotion-congruent information. In a study by Niedenthal, Halberstadt, and Setterlund (1997), the categorization of emotional words was faster when the words were congruent with a previously induced emotional state of the participants. The authors assume that emotions activate emotion-related lexical codes, which in turn facilitate emotion-congruent word recognition. Furthermore, facial expressions facilitate the recall of emotion-congruent information (Laird, Wagener, Halal, & Szegda, 1982): in that study, the recall of a text was facilitated when the facial muscle manipulation was congruent with the emotional content of the text, further supporting the influence of facial mimicry on memory. The auditory as well as the visual MMN is thought to be elicited by regularity violations and to reflect prediction error signals based on memory comparison processes (Czigler, 2014; Näätänen, Paavilainen, Rinne, & Alho, 2007). Recently, combined computational and empirical research has supported the assumption that the expression-related vMMN arises from processes similar to those underlying the well-investigated auditory MMN (Stefanics, Heinzle, Horváth, & Stephan, 2018). Thus, we can assume that contracting facial muscles responsible for smiling primes the activation of positive emotional information and thereby facilitates the encoding and retrieval of happy facial expressions. Although rarely presented, happy faces would then be stored more effectively in memory than sad faces during the teeth condition, and rare happy faces might produce a smaller mismatch signal. In contrast, the emotional valence of sad faces would be stored less effectively because of the conflicting self-posed happy facial expression and consequently be retrieved poorly, leading to a larger mismatch signal. In this regard, facial feedback may act as an emotional prime, thereby facilitating the storage of emotion-congruent information and influencing the automatic processing of emotional expressions.

Not only memory encoding but also low-level neural encoding of facial expressions is influenced by emotions. In the study by Sel et al. (2015), participants had to adopt different facial expressions while their visual evoked potentials were measured during a facial emotion judgment task. Adopting a happy facial expression modulated the face-specific N170 to neutral faces, indicating that neutral faces were processed similarly to happy facial expressions. The authors conclude that the low-level neural encoding of facial expressions can be influenced in a top–down manner by one's own facial expression. In this respect, it is possible that our facial feedback manipulation also affected the processing of the neutral standard stimuli. Following the conclusions of Sel et al. (2015), the teeth condition in this study could have led to similar processing of neutral and happy faces, which in turn would result in smaller mismatch responses for happy but increased mismatch responses for sad facial expressions. Thus, our facial feedback manipulation would have affected the neutral standard rather than the emotional deviants per se. Although we cannot completely rule out this possibility, we minimized potential effects of the standard stimuli by using an emotional control condition (comparable with Kovarski et al., 2017; Kimura et al., 2012; Li et al., 2012; Stefanics et al., 2012) to calculate the eMMN signal.

Conclusion

In summary, our findings demonstrate that our own facial expressions have a strong influence on the automatic neural processing of others' facial expressions. Although there is clear evidence that facial mimicry and the resulting feedback can influence the conscious perception and processing of facial emotional expressions on a behavioral as well as a neurophysiological level, this study demonstrates for the first time an influence of facial feedback on automatic, nonconscious processing. In particular, when participants activated their facial muscles responsible for smiling, the mismatch response to unattended rare happy facial expressions decreased, whereas the mismatch response to rare sad facial expressions increased. Thus, our results strongly support previous findings on the influence of facial feedback on the processing of facial expressions. However, further research is needed to determine the precise mechanisms behind the influence of facial feedback on the processing of unattended facial expressions.

Reprint requests should be sent to Maria Kuehne, Department of Neurology, Otto-von-Guericke-University, Leipziger Str. 44, 39120 Magdeburg, Germany, or via e-mail: maria.kuehne@med.ovgu.de.

REFERENCES

Argaud, S., Vérin, M., Sauleau, P., & Grandjean, D. (2018). Facial emotion recognition in Parkinson's disease: A review and new hypotheses. Movement Disorders, 33, 554–567.

Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B: Methodological, 57, 289–300.

Chang, Y., Xu, J., Shi, N., Zhang, B., & Zhao, L. (2010). Dysfunction of processing task-irrelevant emotional faces in major depressive disorder patients revealed by expression-related visual MMN. Neuroscience Letters, 472, 33–37.

Csukly, G., Stefanics, G., Komlósi, S., Czigler, I., & Czobor, P. (2013). Emotion-related visual mismatch responses in schizophrenia: Impairments and correlations with emotion recognition. PLoS One, 8, e75444.

Czigler, I. (2014). Visual mismatch negativity and categorization. Brain Topography, 27, 590–598.

Czigler, I., Balázs, L., & Winkler, I. (2002). Memory-based detection of task-irrelevant visual changes. Psychophysiology, 39, 869–873.

Davis, J. D., Winkielman, P., & Coulson, S. (2017). Sensorimotor simulation and emotion processing: Impairing facial action increases semantic retrieval demands. Cognitive, Affective, & Behavioral Neuroscience, 17, 652–664.

Dawel, A., O'Kearney, R., McKone, E., & Palermo, R. (2012). Not just fear and sadness: Meta-analytic evidence of pervasive emotion recognition deficits for facial and vocal expressions in psychopathy. Neuroscience & Biobehavioral Reviews, 36, 2288–2304.

den Ouden, H. E. M., Kok, P., & de Lange, F. P. (2012). How prediction errors shape perception, attention, and motivation. Frontiers in Psychology, 3, 548.

Finzi, E., & Rosenthal, N. E. (2014). Treatment of depression with onabotulinumtoxinA: A randomized, double-blind, placebo controlled trial. Journal of Psychiatric Research, 52, 1–6.

Finzi, E., & Wasserman, E. (2006). Treatment of depression with botulinum toxin A: A case series. Dermatologic Surgery, 32, 645–649.

Franz, M., Lemke, M. R., Meyer, T., Ulferts, J., Puhl, P., & Snaith, R. P. (1998). Deutsche Version der Snaith–Hamilton Pleasure Scale (SHAPS-D) [German version of the Snaith–Hamilton Pleasure Scale]. Fortschritte der Neurologie-Psychiatrie, 66, 407–413.

Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 360, 815–836.

Harms, M. B., Martin, A., & Wallace, G. L. (2010). Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies. Neuropsychology Review, 20, 290–322.

Hautzinger, M., Keller, F., & Kühner, C. (2006). Beck Depressions-Inventar (BDI-II). Frankfurt, Germany: Harcourt Test Services.

Hietanen, J. K., & Astikainen, P. (2013). N170 response to facial expressions is modulated by the affective congruency between the emotional expression and preceding affective picture. Biological Psychology, 92, 114–124.

Hirai, M., Watanabe, S., Honda, Y., Miki, K., & Kakigi, R. (2008). Emotional object and scene stimuli modulate subsequent face processing: An event-related potential study. Brain Research Bulletin, 77, 264–273.

Kimura, M., Kondo, H., Ohira, H., & Schröger, E. (2012). Unintentional temporal context–based prediction of emotional faces: An electrophysiological study. Cerebral Cortex, 22, 1774–1785.

Kohler, C. G., Hoffman, L. J., Eastman, L. B., Healey, K., & Moberg, P. J. (2011). Facial emotion perception in depression and bipolar disorder: A quantitative review. Psychiatry Research, 188, 303–309.

Kohler, C. G., Walker, J. B., Martin, E. A., Healey, K. M., & Moberg, P. J. (2010). Facial emotion perception in schizophrenia: A meta-analytic review. Schizophrenia Bulletin, 36, 1009–1019.

Kovarski, K., Latinus, M., Charpentier, J., Cléry, H., Roux, S., Houy-Durand, E., et al. (2017). Facial expression related vMMN: Disentangling emotional from neutral change detection. Frontiers in Human Neuroscience, 11, 18.

Kremláček, J., Kreegipuu, K., Tales, A., Astikainen, P., Põldver, N., Näätänen, R., et al. (2016). Visual mismatch negativity (vMMN): A review and meta-analysis of studies in psychiatric and neurological disorders. Cortex, 80, 76–112.

Laird, J. D. (1974). Self-attribution of emotion: The effects of expressive behavior on the quality of emotional experience. Journal of Personality and Social Psychology, 29, 475–486.

Laird, J. D., Wagener, J. J., Halal, M., & Szegda, M. (1982). Remembering what you feel: Effects of emotion on memory. Journal of Personality and Social Psychology, 42, 646–657.

Larsen, R. J., Kasimatis, M., & Frey, K. (1992). Facilitating the furrowed brow: An unobtrusive test of the facial feedback hypothesis applied to unpleasant affect. Cognition and Emotion, 6, 321–338.

Li, X., Lu, Y., Sun, G., Gao, L., & Zhao, L. (2012). Visual mismatch negativity elicited by facial expressions: New evidence from the equiprobable paradigm. Behavioral and Brain Functions, 8, 7.

Livingstone, S. R., Vezer, E., McGarry, L. M., Lang, A. E., & Russo, F. A. (2016). Deficits in the mimicry of facial expressions in Parkinson's disease. Frontiers in Psychology, 7, 780.

Lobmaier, J. S., & Fischer, M. H. (2015). Facial feedback affects perceived intensity but not quality of emotional expressions. Brain Sciences, 5, 357–368.

Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska Directed Emotional Faces—KDEF [CD]. Stockholm, Sweden: Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet.

Näätänen, R., Astikainen, P., Ruusuvirta, T., & Huotilainen, M. (2010). Automatic auditory intelligence: An expression of the sensory–cognitive core of cognitive processes. Brain Research Reviews, 64, 123–136.

Näätänen, R., Paavilainen, P., Rinne, T., & Alho, K. (2007). The mismatch negativity (MMN) in basic research of central auditory processing: A review. Clinical Neurophysiology, 118, 2544–2590.

Neal, D. T., & Chartrand, T. L. (2011). Embodied emotion perception: Amplifying and dampening facial feedback modulates emotion perception accuracy. Social Psychological and Personality Science, 2, 673–678.

Niedenthal, P. M. (2007). Embodying emotion. Science, 316, 1002–1005.

Niedenthal, P. M., Barsalou, L. W., Winkielman, P., Krauth-Gruber, S., & Ric, F. (2005). Embodiment in attitudes, social perception, and emotion. Personality and Social Psychology Review, 9, 184–211.

Niedenthal, P. M., Brauer, M., Halberstadt, J. B., & Innes-Ker, Å. H. (2001). When did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression. Cognition and Emotion, 15, 853–864.

Niedenthal, P. M., Halberstadt, J. B., & Setterlund, M. B. (1997). Being happy and seeing "happy": Emotional state mediates visual word recognition. Cognition and Emotion, 11, 403–432.

Noah, T., Schul, Y., & Mayo, R. (2018). When both the original study and its failed replication are correct: Feeling observed eliminates the facial-feedback effect. Journal of Personality and Social Psychology, 114, 657–664.

Oberman, L. M., Winkielman, P., & Ramachandran, V. S. (2007). Face to face: Blocking facial mimicry can selectively impair recognition of emotional expressions. Social Neuroscience, 2, 167–178.

Oberman, L. M., Winkielman, P., & Ramachandran, V. S. (2009). Slow echo: Facial EMG evidence for the delay of spontaneous, but not voluntary, emotional mimicry in children with autism spectrum disorders. Developmental Science, 12, 510–520.

Pazo-Alvarez, P., Cadaveira, F., & Amenedo, E. (2003). MMN in the visual modality: A review. Biological Psychology, 63, 199–236.

Preston, S. D., & de Waal, F. B. M. (2002). Empathy: Its ultimate and proximate bases. Behavioral and Brain Sciences, 25, 1–20.

Schilbach, L., Eickhoff, S. B., Mojzisch, A., & Vogeley, K. (2008). What's in a smile? Neural correlates of facial embodiment during social interaction. Social Neuroscience, 3, 37–50.

Sel, A., Calvo-Merino, B., Tuettenberg, S., & Forster, B. (2015). When you smile, the world smiles at you: ERP evidence for self-expression effects on face processing. Social Cognitive and Affective Neuroscience, 10, 1316–1322.

Stefanics, G., Csukly, G., Komlósi, S., Czobor, P., & Czigler, I. (2012). Processing of unattended facial emotions: A visual mismatch negativity study. Neuroimage, 59, 3042–3049.

Stefanics, G., Heinzle, J., Horváth, A. A., & Stephan, K. E. (2018). Visual mismatch and predictive coding: A computational single-trial ERP study. Journal of Neuroscience, 38, 4020–4030.

Stefanics, G., Stephan, K. E., & Heinzle, J. (2019). Feature-specific prediction errors for visual mismatch. Neuroimage, 196, 142–151.

Strack, F., Martin, L. L., & Stepper, S. (1988). Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis. Journal of Personality and Social Psychology, 54, 768–777.

Susac, A., Ilmoniemi, R. J., Pihko, E., Ranken, D., & Supek, S. (2010). Early cortical responses are sensitive to changes in face stimuli. Brain Research, 1346, 155–164.

Varcin, K. J., Bailey, P. E., & Henry, J. D. (2010). Empathic deficits in schizophrenia: The potential role of rapid facial mimicry. Journal of the International Neuropsychological Society, 16, 621–629.

Wagenmakers, E.-J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B., Jr., et al. (2016). Registered replication report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11, 917–928.

Wei, J.-H., Chan, T.-C., & Luo, Y.-J. (2002). A modified oddball paradigm "cross-modal delayed response" and the research on mismatch negativity. Brain Research Bulletin, 57, 221–230.

Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: The SHINE toolbox. Behavior Research Methods, 42, 671–684.

Winkler, I., & Czigler, I. (2012). Evidence from auditory and visual event-related potential (ERP) studies of deviance detection (MMN and vMMN) linking predictive coding theories and perceptual object representations. International Journal of Psychophysiology, 83, 132–143.

Wollmer, M. A., de Boer, C., Kalak, N., Beck, J., Götz, T., Schmidt, T., et al. (2012). Facing depression with botulinum toxin: A randomized controlled trial. Journal of Psychiatric Research, 46, 574–581.

Wu, Z., Zhong, X., Peng, Q., Chen, B., Mai, N., & Ning, Y. (2017). Negative bias in expression-related mismatch negativity (MMN) in remitted late-life depression: An event-related potential study. Journal of Psychiatric Research, 95, 224–230.

Zhao, L., & Li, J. (2006). Visual mismatch negativity elicited by facial expressions under non-attentional condition. Neuroscience Letters, 410, 126–131.