Abstract

Our visual inputs are often entangled with affective meanings in natural vision, implying the existence of extensive interaction between visual and emotional processing. However, little is known about the neural mechanism underlying such interaction. This exploratory transcranial magnetic stimulation (TMS) study examined the possible involvement of the early visual cortex (EVC, Area V1/V2/V3) in perceiving facial expressions of different emotional valences. Across three experiments, single-pulse TMS was delivered at different time windows (50–150 msec) after a brief 10-msec presentation of face images, and participants reported the visibility and perceived emotional valence of the faces. Interestingly, earlier TMS at ∼90 msec only reduced face visibility irrespective of the displayed expressions, but later TMS at ∼120 msec selectively disrupted the recognition of negative facial expressions, indicating the involvement of EVC in the processing of negative expressions at a later time window, possibly beyond the initial processing of fed-forward facial structure information. The observed TMS effect was further modulated by individuals' anxiety level. TMS at ∼110–120 msec disrupted the recognition of anger significantly more for participants scoring relatively low in trait anxiety than for high scorers, suggesting that cognitive bias influences the processing of facial expressions in EVC. Taken together, it seems that EVC is involved in the structural encoding of (at least) negative facial emotional valence, such as fear and anger, possibly under modulation from higher cortical areas.

INTRODUCTION

Visual signals in our surroundings are often associated with different emotional valences and intensities. For instance, happy faces are pleasant, but angry faces are frightening visual inputs for most of us. The affective meanings embedded in such visual signals have a significant impact on our visual processing capabilities, including target detection speed and accuracy as well as perceptual field size (Phelps, 2006). Typically, we detect fearful faces more readily than neutral or happy faces (Yang, Zald, & Blake, 2007). With recent technical advances in cognitive neuroscience, research on when and where the neural processing of visual signals is modulated by their affective meanings in the visual pathway has attracted increasing attention.

A few fMRI and neurophysiological studies have indicated that the affective value of visual stimuli has a modulatory influence upon neuronal activation in a range of cortical regions in the visual pathway (e.g., visual areas in occipital and temporal cortex), usually reflected by relatively enhanced neural responses for emotional relative to neutral stimuli (Vuilleumier & Driver, 2007). This is seen even as early as primary visual cortex (Area V1, the first cortical stage of visual processing; Padmala & Pessoa, 2008), with a response latency consistent with the processing of fed-forward visual information (Li, Yan, Guo, & Li, 2019). However, EEG studies have argued that these affect-modulated visual neural responses, such as the enhanced P1 component to affective stimuli, start at a late time window of ∼120 msec, suggesting feedback neural modulation processes (Pourtois, Thut, Grave de Peralta, Michel, & Vuilleumier, 2005; Batty & Taylor, 2003). It seems unclear, therefore, to what extent the emotion-modulated activities in the early visual cortex (EVC) happen at the early or late stage of visual processing (e.g., during the processing of early fed-forward or late fed-back information).

Transcranial magnetic stimulation (TMS) is a relatively reliable investigative tool to study functional connectivity for visual neurons (Walsh & Cowey, 2000). It has been argued that single-pulse TMS delivered at different time windows after the stimulus onset can transiently disrupt feedforward or feedback processing in EVC. Typically, a TMS pulse over the occipital cortex at 90–100 msec after stimulus onset maximally suppresses participants' conscious detection performance of a small visual target (e.g., grating, bar, single letter) presented within the visual hemifield contralateral to the stimulated cortical hemisphere, at a location corresponding to V1 retinotopic organization (Roebuck, Bourke, & Guo, 2014; de Graaf, Cornelsen, Jacobs, & Sack, 2011; Sack, van der Mark, Schuhmann, Schwarzbach, & Goebel, 2009). This time window is often interpreted as consistent with the activity of feedforward processing in V1 neurons (see also Kammer, 2007). In contrast, the disruption by a later TMS pulse at 100–130 msec is susceptible to attention and task demands (e.g., reducing performance for a face discrimination task rather than a grating detection task; de Graaf, Goebel, & Sack, 2012), suggesting this late time window may represent a recurrent process of visual information fed-back from other brain structures (de Graaf, Koivisto, Jacobs, & Sack, 2014).

It should be noted that, whereas these studies often positioned the TMS coil over the occipital cortex and aimed to target area V1 using anatomical landmark and/or phosphene localization procedures, a few studies have combined fMRI-based mapping of visual cortex with modeling of the TMS-induced electric field in the brain and argued that the actual stimulated region went beyond the targeted V1 area, also covering neighboring and connected functional regions, such as the corresponding retinotopic area in dorsal V2 and/or V2/V3 border (Salminen-Vaparanta, Noreika, Revonsuo, Koivisto, & Vanni, 2012; Thielscher, Reichenbach, Ugurbil, & Uludağ, 2010). As it is difficult to precisely localize the induced electric field in the occipital cortex, it could be more appropriate to attribute the TMS-induced effect to the disruption of EVC (Area V1/V2/V3) rather than V1 only.

Recently, TMS has been applied to study the processing of affective visual cues, such as facial expressions of emotion. TMS over the right occipital face area (rOFA), an integral part of the face-processing neural network that receives both fed-forward and fed-back facial information from other face-sensitive areas, at 60–100 msec after stimulus onset impairs expression discrimination accuracy, most likely reflecting the disruption of early feedforward processing (Pitcher, Garrido, Walsh, & Duchaine, 2008). At 170–300 msec, it impairs the analysis and integration of facial identity and expression cues, most likely reflecting the disruption of late feedback processing (Kadosh, Walsh, & Kadosh, 2011). It is unclear, however, whether facial expression cues could be processed or differentiated in areas earlier than rOFA in the visual pathway. In this exploratory study, we aimed to deliver single-pulse TMS over EVC at different time windows representing feedforward and feedback processing and to compare participants' expression categorization performance of face images displaying different emotional valences. The findings would help to address whether neurons in EVC (including Area V1/V2/V3) show different processing speeds to affective visual signals of varying valence and possible feedforward and feedback contribution to such affective processing.

Considering that an accurate and timely recognition of negative facial expressions is biologically relevant and crucial to our survival and normal social functioning, it is not surprising that many behavioral and brain imaging studies have revealed enhanced perceptual and neural sensitivities for processing negative expressions in comparison with neutral and positive ones. Typically, angry and fearful expressions tend to pop out more easily, capture and hold attention automatically (e.g., anger superiority effect; Anderson, 2005; Hansen & Hansen, 1988), amplify perceptual processing (Öhman, Lundqvist, & Esteves, 2001), and enhance early face-specific electrophysiological responses, such as P1 and N170 ERP responses, even outside attention or preattentively (Lyyra, Hietanen, & Astikainen, 2014; Yang et al., 2007). The replication of these findings with simplified schematic line-drawing faces instead of real face images (but not with inverted schematic faces) further indicated that the "anger superiority effect" is likely caused by semantic differences in facial emotional valence rather than visual changes in local facial structures or features between different expressions (Horstmann, 2007; Öhman et al., 2001).

Although these facial emotional valence-modulated neural responses are commonly observed in P1 and N170 components, which are likely generated in extrastriate cortex (e.g., Lyyra et al., 2014; Yang et al., 2007), a couple of studies have reported that fearful faces could elicit a larger C1 component, which is the earliest visually evoked potential (∼60–90 msec after stimulus onset) and may be generated in V1, than happy faces (Zhu & Luo, 2012; Pourtois, Grandjean, Sander, & Vuilleumier, 2004). However, the valence-modulated C1 responses have not been consistently observed across previous studies and may be related to the systematic biases in data filtering (Acunzo, Mackenzie, & van Rossum, 2012) and the attentional process involved in the task (Slotnick, 2018), such as spatial orienting (Pourtois et al., 2004) and executive attention (Zhu & Luo, 2012).

Nevertheless, given these enhanced C1, P1, and N170 responses to negative facial expressions reported in previous research, the involvement of EVC in the processing of facial expressions might happen at a time window earlier than the typical response latencies of P1 and N170 components (∼100–120 and ∼170 msec, respectively). Therefore, in our first exploratory study, we examined whether the delivery of TMS over EVC at early time windows (50–120 msec) could selectively disrupt the processing of negative facial expressions.

EXPERIMENT 1: TMS AT EARLY TIME WINDOWS

Experimental Procedures

Participants

Sixteen white adult participants (12 men), with a mean (± SEM) age of 20 ± 0.49 years, took part in Experiment 1. Three more participants were initially tested but were excluded from data analysis because of failure to induce a reliable phosphene and/or frequent head movements during testing (hence unreliable cortical TMS stimulation location). This sample size was based on previous research in the same field and was comparable with published reports (e.g., de Graaf et al., 2011, 2014; Roebuck et al., 2014). Its suitability was confirmed by power analysis using G*Power software (Faul, Erdfelder, Lang, & Buchner, 2007). A sample of 16 participants would be large enough to detect a typical effect size (ηp2 = .3) with a power of 0.95 at an alpha level of .05 in a repeated-measures design with nine TMS time windows to estimate the effect of TMS on visual target detection.
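The power calculation above can be cross-checked with a short script. The following is a hedged sketch of the noncentral-F computation that underlies G*Power's repeated-measures ANOVA (within-factors) test; it assumes sphericity (ε = 1) and omits G*Power's adjustment for the correlation among repeated measures, so it offers only a rough approximation rather than a reproduction of the reported analysis.

```python
from scipy.stats import f as f_dist, ncf


def rm_anova_power(eta_p2, n, k, alpha=0.05):
    """Approximate power for a one-way repeated-measures ANOVA.

    eta_p2: assumed partial eta squared (here .3);
    n: sample size; k: number of within-subject conditions
    (nine TMS time windows). Simplified: assumes sphericity and
    omits the correlation-among-measures adjustment G*Power uses.
    """
    f2 = eta_p2 / (1.0 - eta_p2)      # Cohen's f squared
    lam = f2 * n * k                  # noncentrality parameter
    df1 = k - 1
    df2 = (k - 1) * (n - 1)
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    # Power = P(F > f_crit) under the noncentral F distribution
    return 1 - ncf.cdf(f_crit, df1, df2, lam)


power = rm_anova_power(eta_p2=0.3, n=16, k=9)
print(round(power, 3))
```

Under these simplifying assumptions, n = 16 comfortably exceeds the 0.95 power target, consistent with the G*Power result reported above.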

All participants (including those in Experiments 1–3) had normal or corrected-to-normal visual acuity and reported no history of neuropsychiatric illness or epilepsy. Before each experiment, the research purpose, experimental task, and procedure were explained to the participants, and written informed consent was obtained from each of them. The ethics committee of the School of Psychology, University of Lincoln, approved this study, and all procedures complied with the British Psychological Society "Code of Ethics and Conduct" and with the World Medical Association's Declaration of Helsinki as revised in October 2008.

Visual Stimuli and TMS Setup

Grayscale western white face images, consisting of three female and three male models, were selected from the Karolinska Directed Emotional Faces CD-ROM (Lundqvist, Flykt, & Öhman, 1998). Each of these models posed happy, neutral, and angry facial expressions in full frontal view. Although they may have real-world limitations, and categorization performance for some expressions could be subject to cultural influence, these well-controlled face images were chosen for their comparability and universality in transmitting facial expression signals, at least for our observer group (white young adults). The faces were processed in Adobe Photoshop to remove external facial features (e.g., hair) and to ensure a homogeneous background, brightness, and face size (54 × 71 pixels, 2° × 2.63°). As a result, 18 expressive face images were generated for the testing session (3 expressions × 6 models; see Figure 1 for examples).

Figure 1. 

Examples of a female face image displaying happy, neutral, and angry facial expressions.


The face images were presented through a ViSaGe Graphics system (Cambridge Research Systems) and displayed on a noninterlaced gamma-corrected monitor (100-Hz frame rate, 40-cd/m2 background luminance, 1024 × 768 pixel resolution, 33° × 24° at the viewing distance of 70 cm; Mitsubishi Diamond Pro 2070SB). During the presentation, the center of the face image was at 1.5° to the right of a small central fixation point (FP; 0.2° diameter, 10 cd/m2).

TMS was delivered by using a 70-mm figure-of-eight coil (Medtronic MC-B70 coil) through a Medtronic MagPro X100. The coil location and TMS intensity were determined for each participant before the testing session. Initially, the TMS intensity was set at 50% of the maximum output, and the coil was placed ∼2 cm above and 1 cm left of the inion, with the main axis of the coil oriented parallel to the sagittal plane. After fixating on the FP, a TMS pulse was administered manually, and the participants reported whether they experienced a phosphene within a faint thin-line oval, which corresponded to the location of the face image presentation. The location of the coil and TMS intensity were adjusted according to the reported percept until a reliable phosphene was perceived. The TMS intensity was then reduced to the phosphene detection threshold, defined as the intensity at which the phosphene was reported in two of five TMS pulses. Finally, the TMS intensity for the main experiment was set at 120% of the phosphene detection threshold to ensure a reliable cortical stimulation and disruption over EVC (de Graaf et al., 2012). Across all the participants, the average TMS intensity used during the testing was 69 ± 1.42% (mean ± SEM).

Procedure

To control for artifacts associated with TMS (e.g., auditory click sound, mechanical tapping, and muscle contraction), which may disrupt participants' attention and affect their expression categorization performance, participants took part in two separate testing sessions: a TMS session in which the TMS pulses were administered on the left occipital cortex at a location corresponding to the face image onset and a control (sham) session in which the same-intensity TMS pulses were administered on the right occipital cortex (task-unrelated area), which mirrored the stimulation location on the left occipital cortex. Except for the coil location, all the other experimental parameters (e.g., coil orientation, TMS time windows and intensity) and procedures were the same between TMS and control sessions. The order of the testing sessions was counterbalanced across the participants.

During the experiments, participants sat in a quiet, darkened room and viewed the display binocularly with the support of a chin rest. No earplugs were applied. Each trial started with a 350-Hz warning tone lasting 150 msec, followed by the presentation of a central FP for 1000 msec. A face image with a happy, neutral, or angry expression was then presented for 10 msec. Single-pulse TMS was administered at one of nine conditions (i.e., TMS at 50, 60, 70, 80, 90, 100, 110, or 120 msec after face onset, plus a no-TMS condition). The participants were instructed to maintain fixation of the FP throughout the trial and verbally report (or guess if necessary) the perceived facial expression valence (three-alternative forced choice: positive, neutral, and negative) and the perceived face image visibility on a 5-point scale, in which 1 = not visible at all and 5 = clearly visible for all image details. No feedback was given. The trial interval was set to 1500 msec. Each participant was tested for one sham/control block and two TMS blocks (162 trials per block, 18 face images [six face identities for each of three expressions] × 9 TMS conditions [eight TMS time windows between 50 and 120 msec + 1 no-TMS condition]). Therefore, 12 trials were presented for each facial expression at each TMS condition over the two TMS blocks. Before the formal test, the participants were given a training session (normally 20 trials) to familiarize themselves with the task.

All the collected data were analyzed off-line. A series of repeated-measures ANOVAs were conducted to examine the effect of TMS on participants' facial expression valence recognition accuracy and face image visibility rating. For each ANOVA, Greenhouse–Geisser correction was applied where sphericity was violated, and a Bonferroni adjustment was made for post hoc multiple comparisons.
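The analysis pipeline described above can be sketched in Python. The original software is not stated in the text; the functions below are an illustrative numpy/scipy implementation of a one-way repeated-measures ANOVA with Greenhouse–Geisser correction and Bonferroni-adjusted post hoc paired t tests, run here on simulated ratings (the data matrix and the shifted condition are invented purely for demonstration).

```python
import numpy as np
from scipy import stats


def rm_anova_gg(X):
    """One-way repeated-measures ANOVA with Greenhouse-Geisser correction.

    X: (n_subjects, k_conditions) array of, e.g., visibility ratings.
    Returns the F statistic, the GG epsilon, and the GG-corrected p value.
    """
    n, k = X.shape
    grand = X.mean()
    ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((X - grand) ** 2).sum() - ss_cond - ss_subj
    df1, df2 = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df1) / (ss_err / df2)
    # GG epsilon from the double-centered condition covariance matrix
    S = np.cov(X, rowvar=False)
    S_dc = S - S.mean(axis=0) - S.mean(axis=1)[:, None] + S.mean()
    eps = np.trace(S_dc) ** 2 / ((k - 1) * (S_dc ** 2).sum())
    p_gg = stats.f.sf(F, df1 * eps, df2 * eps)  # corrected dfs
    return F, eps, p_gg


def bonferroni_paired_t(X, pairs):
    """Bonferroni-adjusted paired t tests for selected condition pairs."""
    m = len(pairs)
    out = {}
    for i, j in pairs:
        t, p = stats.ttest_rel(X[:, i], X[:, j])
        out[(i, j)] = (t, min(p * m, 1.0))
    return out


# Toy data: 16 subjects x 9 TMS conditions (column 0 = no-TMS baseline)
rng = np.random.default_rng(1)
X = rng.normal(4.0, 0.5, size=(16, 9))
X[:, 5] -= 0.6  # simulate a visibility drop at one TMS window
F, eps, p_gg = rm_anova_gg(X)
posthoc = bonferroni_paired_t(X, [(0, 5), (0, 8)])
print(F, eps, p_gg, posthoc)
```

The double-centering step is the standard route to the Greenhouse–Geisser epsilon; a dedicated package (e.g., pingouin's `rm_anova`) would give the same corrected p value with less code.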

Results and Discussion

A 9 (TMS Conditions: no TMS, TMS at 50, 60, 70, 80, 90, 100, 110, and 120 msec) × 3 (Facial Expressions) ANOVA was conducted to examine to what extent TMS at different time windows would affect participants' image visibility ratings across faces of different emotional valence (Figure 2A). The analysis revealed a significant main effect of TMS Condition, F(4.21, 63.1) = 3.15, p = .02, ηp2 = .17, with TMS delivered at 90 msec inducing a slightly lower face image visibility rating in comparison with the no-TMS condition (p < .01), and a significant main effect of Expression, F(2, 30) = 5.93, p = .01, ηp2 = .28, with happy faces attracting a higher visibility rating than angry faces (p = .01) but not neutral faces (p = .09). Although there was no significant TMS Condition × Expression interaction, F(16, 240) = 1, p = .46, ηp2 = .06, Figure 2A suggests that an expression-specific effect might emerge at the end of the studied time window: TMS delivered at 120 msec showed a tendency to reduce the visibility rating only for angry faces in comparison with the no-TMS condition (two-tailed t test), t(15) = 2.88, p = .01, 95% CIs [2.33, 15.59], Cohen's d = 0.75.

Figure 2. 

Effect of TMS at different time windows on participants' image visibility rating (A) and emotional valence recognition accuracy (B) of faces displaying happy, neutral, and angry expressions. Error bars represent SEM.


Another 9 (TMS Conditions) × 3 (Facial Expressions) ANOVA was conducted to examine to what extent TMS at different time windows would affect participants' valence recognition accuracy for faces of different expressions (Figure 2B). The analysis revealed a significant main effect of Expression, F(2, 30) = 11.92, p < .001, ηp2 = .43, with higher recognition accuracy for happy than for angry or neutral expressions (all ps < .01), but a nonsignificant main effect of TMS Condition, F(8, 120) = 1.83, p = .08, ηp2 = .11, and a nonsignificant TMS Condition × Expression interaction, F(16, 240) = 1.13, p = .32, ηp2 = .07. However, Figure 2B indicates a clear tendency for TMS at later time windows (110 and 120 msec) to selectively reduce participants' valence recognition performance for negative (angry) faces in comparison with the no-TMS condition (two-tailed t test, 110 msec: t(15) = 3.31, p = .005, 95% CIs [2.41, 11.13], Cohen's d = 0.87; 120 msec: t(15) = 3.18, p = .006, 95% CIs [3.44, 17.39], Cohen's d = 1.07).

In contrast, for a given facial expression, TMS delivered on the right occipital cortex (sham/control session) across all the time windows showed no impact on participants' face image visibility rating (TMS Condition: F(8, 120) = 1.34, p = .23, ηp2 = .08; TMS Condition × Expression: F(16, 240) = 1.56, p = .08, ηp2 = .09) or emotional valence recognition performance (TMS Condition: F(8, 120) = 0.36, p = .94, ηp2 = .02; TMS Condition × Expression: F(16, 240) = 0.81, p = .67, ηp2 = .05). When comparing TMS versus sham TMS on face visibility and valence recognition accuracy, a 2 (Sessions: TMS vs. Sham TMS) × 9 (TMS Conditions) × 3 (Facial Expressions) ANOVA only revealed a significant interaction between Sessions and TMS Conditions on face visibility, F(8, 120) = 2.31, p = .03, ηp2 = .13, but did not reveal any main effect of Sessions or its interaction with TMS Conditions and/or Facial Expressions on valence recognition accuracy (all ps > .05). Given that this exploratory study used multiple TMS conditions and that the possible TMS disruption was expression specific and restricted in timing (∼120 msec; Figure 2B), limited statistical power may account for the absence of a session-specific effect on valence recognition accuracy. Nevertheless, across all the presented expressions, compared with the sham/control session, TMS at 90 msec in the TMS sessions tended to reduce participants' face image visibility rating, t(47) = 1.98, p = .027, 95% CIs [−0.05, 7.05], Cohen's d = 0.28 (Figure 3A), and TMS at 120 msec tended to reduce their emotional valence recognition performance, t(47) = 2.05, p = .023, 95% CIs [0.08, 8.95], Cohen's d = 0.38 (Figure 3B).

Figure 3. 

Effect of TMS delivered at the left (TMS session) and right (sham/control session) occipital cortex on participants' image visibility rating (A) and emotional valence recognition accuracy (B) of faces displaying happy, neutral, and angry expressions. Error bars represent SEM.


Clearly, TMS over EVC at the time window of processing feedforward information (∼90 msec) appeared to disrupt face image visibility irrespective of the displayed facial expressions but had little detrimental effect on facial valence judgment, whereas TMS at ∼120 msec tended to selectively decrease both the face image visibility rating and valence recognition for angry faces alone, implying that early visual neural responses to faces containing negative emotional information might be modulated at a later time window. This possibility was examined in detail in Experiment 2, in which we extended the TMS time window from 50–120 to 90–150 msec.

EXPERIMENT 2: TMS AT LATE TIME WINDOWS

Experimental Procedures

Eleven white adult participants (seven men, 21 ± 0.56 years old) took part in Experiment 2. Two more participants were initially tested but were later excluded from data analysis because of frequent head movements during the testing. This sample size was comparable with previous research in the same field (e.g., Roebuck et al., 2014; de Graaf et al., 2011) and was further confirmed by power analysis (Faul et al., 2007). A sample of nine participants would be large enough to detect the maximum effect size (ηp2 = .43) observed in Experiment 1 with a power of 0.95 at an alpha level of .05 in a repeated-measures design with eight TMS time windows to estimate the effect of TMS on visual target detection.

The visual stimuli, TMS setup, and experimental procedure were identical to those used in Experiment 1 except that, (1) in Experiment 2, a single-pulse TMS was administered at one of eight conditions (i.e., at 90, 100, 110, 120, 130, 140, or 150 msec after face onset, plus a no-TMS condition) and (2) no sham/control session was used in Experiment 2. Across all the participants, the average TMS intensity used during the testing was 73 ± 1.87%.

Results and Discussion

To examine how face image visibility was modulated by TMS at the later time windows (Figure 4A), an 8 (TMS Conditions: no TMS, TMS at 90, 100, 110, 120, 130, 140, and 150 msec) × 3 (Facial Expressions) ANOVA revealed a significant main effect of TMS Condition, F(3.34, 33.42) = 5.11, p = .004, ηp2 = .34, with TMS delivered at 90 msec inducing a slightly lower face image visibility rating in comparison with the no-TMS condition across all facial expressions (p < .001), and a significant main effect of Expression, F(2, 20) = 10.32, p = .001, ηp2 = .58, with happy faces attracting a higher visibility rating than angry faces (p = .009) but not neutral faces (p = .06). Although there was no significant TMS Condition × Expression interaction, F(14, 140) = 0.92, p = .54, ηp2 = .08, a planned comparison revealed that, in comparison with the no-TMS condition, TMS delivered at 120 msec tended to reduce visibility for angry faces, t(10) = 3.22, p = .009, 95% CIs [3.50, 19.23], Cohen's d = 0.76. All these findings were in agreement with those observed in Experiment 1.

Figure 4. 

Effect of TMS at different time windows on participants' image visibility rating (A) and emotional valence recognition accuracy (B) of faces displaying happy, neutral, and angry expressions. Error bars represent SEM.


For facial valence recognition accuracy (Figure 4B), an 8 (TMS Conditions) × 3 (Facial Expressions) ANOVA revealed a significant main effect of Expression, F(2, 20) = 11.11, p = .001, ηp2 = .53, with lower recognition accuracy for angry than for happy (p = .006) or neutral (p = .03) expression, and a significant main effect of TMS Condition, F(7, 70) = 4.15, p = .01, ηp2 = .29, and a TMS Condition × Expression interaction, F(14, 140) = 2.75, p = .001, ηp2 = .22. Specifically, in comparison with other time windows and the no-TMS condition, TMS at 110–130 msec gradually disrupted the recognition of angry expression, with the lowest recognition accuracy at 120 msec (all ps < .05). On the other hand, the same TMS pulse delivered at these time windows had negligible influence on the recognition of happy and neutral expressions (all ps > .05).

The combined findings from Experiments 1 and 2 suggest that, in EVC, facial expressions of emotion are processed later than facial structures and are subject to valence-dependent disruption. This view is supported by the observation that TMS at ∼90 msec only reduced face image visibility across all expressions but had no impact on facial emotional valence recognition, whereas later TMS at ∼120 msec selectively disrupted both the visibility and recognition of negative expressions but had no impact on the recognition of positive and neutral ones. It seems that early visual neural responses to affective visual cues were modulated at a later time window, possibly beyond the initial detection or processing of fed-forward visual information.

If EVC is indeed involved in the processing of affective facial information, its neural responses could plausibly be further subject to the influence of cognitive biases associated with facial expression perception. In other words, the affective state of an individual may itself bias early visual neural processing of emotional faces. This possibility was examined in detail in Experiment 3.

EXPERIMENT 3: TMS AT LATE TIME WINDOWS FOR PARTICIPANTS WITH VARYING ANXIETY LEVELS

It is well established that anxiety is associated with a cognitive bias in the processing of emotional information, such as allocating cognitive resources selectively to threat-related information (Bar-Haim, Lamy, Pergamin, Bakermans-Kranenburg, & van IJzendoorn, 2007) and interpreting ambiguous or neutral information as negative and threatening (Calvo & Castillo, 2001). When categorizing facial expressions, anxious individuals show higher perceptual sensitivity to threatening faces (Staugaard, 2010; Fox, 2002) and higher accuracy in identifying negative expressions such as anger and fear (Doty, Japee, Ingvar, & Ungerleider, 2013; Hunter, Buckner, & Schmidt, 2009). However, the neural processes underlying the generation of these cognitive biases remain largely unknown. For instance, it is unclear whether cognitive bias in anxious individuals could be reflected in EVC's involvement in the processing of negative facial expressions.

Furthermore, different subtypes of anxiety may have a different impact on the recognition of different facial expressions. Whereas trait anxiety, a relatively stable anxiety proneness that reflects individuals' tendency to perceive threats, stress, and danger (Spielberger, Gorsuch, Lushene, Vagg, & Jacobs, 1983), is associated with increased accuracy in identifying fear and anger expressions (Doty et al., 2013; Hunter et al., 2009; Surcinelli, Codispoti, Montebarocci, Rossi, & Baldaro, 2006), state anxiety, an emotional state felt in a particular situation or about a particular event (Spielberger et al., 1983), is associated with decreased accuracy in recognizing common facial expressions except for sadness (Attwood et al., 2017). Moreover, prolonged state anxiety, measured by the Beck Anxiety Inventory, a relatively objective measurement of anxiety symptoms that have occurred during the past month (Beck, Epstein, Brown, & Steer, 1988), is coupled with enhanced categorization accuracy for all common facial expressions (Green & Guo, 2018). Hence, it is plausible that EVC may contribute differently to the processing of facial expressions in individuals with different anxiety subtypes.

To explore these research questions, in Experiment 3, we measured participants' trait and state anxiety levels using the classical State–Trait Anxiety Inventory, a 40-item self-report measure rated on a 4-point Likert scale (Spielberger et al., 1983), and the Beck Anxiety Inventory, which lists 21 anxiety symptoms and asks participants to rate how much each symptom has bothered them during the past month (Beck et al., 1988). The Beck Anxiety Inventory was chosen because of its minimized overlap between anxiety and depression measurement (e.g., the State–Trait Anxiety Inventory tends to correlate highly with depression measures), its high level of internal consistency, and its high discriminant validity when used in a nonclinical sample in anxiety research (de Ayala, Vonderharr-Carlson, & Kim, 2005). We then delivered single-pulse TMS over EVC at different time windows to examine to what extent individuals with different anxiety subtypes responded differently to negative facial expressions, such as fear and anger.

Experimental Procedures

Forty-four white adult participants (20 men, 21 ± 0.35 years old) took part in Experiment 3. Five more participants were initially tested but were later excluded from data analysis because of failure to induce reliable phosphenes and/or frequent head movements during the testing. The suitability of this sample size was confirmed by power analysis. A sample of 18 participants would be large enough to detect an average effect size (ηp2 = .3) observed in Experiment 2 with a power of 0.95 at an alpha level of .05 in a repeated-measures design with eight TMS time windows to estimate the effect of TMS on visual target detection.

Grayscale western white face images, consisting of three female and three male models, were selected from the Karolinska Directed Emotional Faces CD-ROM (Lundqvist et al., 1998). Each of these models posed happy, fear, and angry facial expressions in full frontal view. All the images were processed in the same way as in Experiment 1.

The TMS setup and experimental procedure were identical to those used in Experiment 2. Across all the participants, the average TMS intensity used during the testing was 68 ± 0.7%. Either before or after the TMS testing, the participants were required to complete the State–Trait Anxiety Inventory and the Beck Anxiety Inventory.

Results and Discussion

In Experiment 3, the analysis focused on the effect of TMS on facial expression recognition accuracy. Across all the participants, an 8 (TMS Condition: no TMS, TMS at 90, 100, 110, 120, 130, 140, and 150 msec) × 3 (Facial Expression) ANOVA revealed a significant main effect of Expression, F(1.53, 65.97) = 32.72, p < .001, ηp2 = .43 (Figure 5), with higher recognition accuracy for happy than for fearful (p < .001) or angry (p < .001) expressions; a significant main effect of TMS Condition, F(7, 301) = 3.08, p = .004, ηp2 = .07; and a TMS Condition × Expression interaction, F(9.38, 403.51) = 3.61, p < .001, ηp2 = .08. Specifically, in comparison with other time windows and the no-TMS condition, TMS at 110–130 msec disrupted the recognition of fear, with the lowest recognition accuracy at 120 msec (all ps < .05), whereas TMS at 110, 130, and 150 msec disrupted the recognition of anger (all ps < .05). In contrast, TMS pulses delivered at these time windows had negligible influence on the recognition of happy faces (all ps > .05).

Figure 5. 

Effect of TMS at different time windows on participants' facial expression recognition accuracy of faces displaying happy, fearful, and angry expressions. Error bars represent SEM.


We then examined to what extent individuals' anxiety levels impacted their fear and anger recognition under TMS conditions. To control for individual differences in baseline expression recognition performance in the no-TMS condition, we calculated a fear and an anger recognition index for each participant, in which expression recognition accuracy at each TMS delivery time window was divided by recognition accuracy in the no-TMS condition. Consequently, an index of 1 indicates that TMS delivered at a given time window had no impact on expression recognition in comparison with the no-TMS condition, whereas an index smaller than 1 indicates that TMS disrupted expression recognition.
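The index computation described above can be sketched as follows; the accuracy values here are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical recognition accuracies (proportion correct) for one
# participant: one value per TMS time window (90-150 msec), plus the
# no-TMS baseline accuracy for the same expression.
tms_windows = [90, 100, 110, 120, 130, 140, 150]
acc_tms = np.array([0.82, 0.80, 0.66, 0.60, 0.68, 0.81, 0.83])
acc_no_tms = 0.83

# Index of 1 = TMS had no effect; < 1 = TMS disrupted recognition
# relative to the participant's own no-TMS baseline.
recognition_index = acc_tms / acc_no_tms
print(dict(zip(tms_windows, recognition_index.round(2))))
```

Normalizing by each participant's own baseline in this way removes between-participant differences in overall recognition ability before the indices are correlated with anxiety scores.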

Across our participants, two-tailed Pearson correlation analyses between anxiety subtype score (trait, state, and Beck anxiety) and the fear or anger recognition index revealed an anxiety-dependent influence on the recognition of angry expressions (Table 1). Specifically, there was no significant correlation between anxiety subtype and fear recognition index across different TMS time windows (all ps > .05), indicating that the observed TMS disruption of fear recognition (Figure 5) was not susceptible to individuals' anxiety level or measurement type. However, both trait and state anxiety measurements were positively correlated with the anger recognition index, especially when TMS was delivered at 110–130 msec (all ps < .05), indicating greater TMS disruption of angry face recognition for those scoring lower in trait and state anxiety. Beck anxiety, on the other hand, was only correlated with the anger recognition index for TMS at 90 msec (p = .03).
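A minimal sketch of one such correlation test, using synthetic data (generated here with an assumed positive anxiety–index relationship, not the study's measurements):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2021)
n = 44  # sample size matching Experiment 3

# Synthetic trait-anxiety scores and anger recognition indices, built
# so that higher anxiety yields an index closer to 1 (less disruption).
trait_anxiety = rng.normal(40, 10, size=n)
anger_index = 0.55 + 0.006 * trait_anxiety + rng.normal(0, 0.05, size=n)

# Two-tailed Pearson correlation, as in Table 1: r value (p value).
r, p = pearsonr(trait_anxiety, anger_index)
print(f"r = {r:.2f} (p = {p:.3f})")
```

A positive r here corresponds to the reported pattern: lower scorers have smaller indices, that is, larger TMS disruption of anger recognition.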

Table 1. 
Pearson Correlation Analysis between Anxiety Score and Fear and Anger Recognition Index at Different TMS Time Windows

            Trait Anxiety   State Anxiety   Beck Anxiety
Fear
  90 msec    0.14 (.38)     −0.01 (.95)     0.19 (.23)
  100 msec   0.05 (.74)      0.05 (.74)     0.14 (.35)
  110 msec   0.20 (.20)      0.06 (.68)     0.25 (.11)
  120 msec   0.10 (.50)      0.08 (.60)     0.03 (.83)
  130 msec   0.10 (.51)      0.07 (.63)     0.11 (.47)
  140 msec   0.02 (.88)      0.09 (.54)     0.11 (.46)
  150 msec   0.08 (.62)      0.08 (.61)     0.17 (.26)
Anger
  90 msec    0.33 (.03)*     0.14 (.38)     0.32 (.03)*
  100 msec   0.23 (.14)      0.21 (.17)    −0.05 (.73)
  110 msec   0.45 (.002)**   0.33 (.03)*    0.19 (.23)
  120 msec   0.50 (.001)**   0.38 (.01)*    0.26 (.09)
  130 msec   0.40 (.007)**   0.29 (.05)*    0.24 (.12)
  140 msec   0.27 (.07)      0.22 (.15)     0.09 (.56)
  150 msec   0.33 (.03)*     0.30 (.05)*    0.19 (.21)

Values in the table represent r value (p value).
* p < .05. ** p < .01.

Given that the measurements of different anxiety subtypes were positively correlated with each other (trait vs. state: r = .75, p < .001; trait vs. Beck: r = .63, p < .001; state vs. Beck: r = .46, p = .002), we conducted a partial correlation analysis to clarify each anxiety subtype's independent contribution. After controlling for state anxiety, trait anxiety was positively correlated with the anger recognition index at 90-msec TMS (r = .36, p = .02), 110-msec TMS (r = .33, p = .03), and 120-msec TMS (r = .36, p = .02). There was no other significant correlation between a given anxiety subtype and the fear or anger recognition index at the various TMS time windows after controlling for the other anxiety subtypes (all ps > .05). Taken together, it seems that the TMS disruption of fear recognition was independent of individuals' trait, state, or Beck anxiety levels, but the TMS disruption of anger recognition was modulated by trait anxiety, with low-scoring individuals showing larger TMS disruption at 90, 110, and 120 msec.
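A first-order partial correlation of this kind can be computed by correlating the residuals after linearly regressing the control variable out of both measures. The sketch below uses synthetic data and a helper function of my own naming; it only illustrates the residual-based method, not the study's analysis pipeline:

```python
import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, z):
    """Partial correlation between x and y controlling for z:
    regress z out of both variables, then correlate the residuals."""
    res_x = x - np.polyval(np.polyfit(z, x, 1), z)
    res_y = y - np.polyval(np.polyfit(z, y, 1), z)
    return pearsonr(res_x, res_y)

rng = np.random.default_rng(7)
n = 44
state = rng.normal(size=n)                        # synthetic state anxiety
trait = 0.75 * state + rng.normal(0, 0.7, n)      # correlated with state
index = 0.4 * trait + rng.normal(0, 0.5, n)       # driven by trait only

# Trait-index association that survives after controlling for state:
r, p = partial_corr(trait, index, state)
print(f"partial r = {r:.2f} (p = {p:.3f})")
```

Because the synthetic index depends on trait anxiety beyond its shared variance with state anxiety, the partial correlation remains positive, mirroring the reported trait-specific effect.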

GENERAL DISCUSSION

The combined novel findings from Experiments 1 and 2 revealed that earlier TMS over EVC at ∼90 msec reduced the face image visibility rating but had no impact on facial emotional valence recognition, whereas later TMS at ∼120 msec selectively disrupted both the visibility and recognition of negative expressions but had no impact on the recognition of positive and neutral ones, suggesting that facial expressions are processed later than facial structures in EVC. It seems that early visual neural responses to affective facial cues are engaged, and so can be modulated, at a later time window, possibly beyond the initial detection or processing of fed-forward facial structural information. This observation is also broadly in agreement with Bruce and Young's functional model of face processing (Bruce & Young, 2012), in which facial expression analysis is conducted after the initial stage of facial structural encoding (i.e., view-centered descriptions).

Furthermore, the observed TMS effects at different time windows (90 vs. 120 msec) suggest that different neural mechanisms may be involved in the processing of facial structure and facial emotional valence in EVC. Previous visual masking studies have reported that TMS over EVC is likely to induce a two-stage suppression effect (i.e., decreased visual target detection or discrimination performance at two different time windows), indicating a two-stage visual process in which the early time window (90–100 msec) may represent a feedforward process of visual information relatively independent of stimuli, tasks, and context, whereas the later time window (100–130 msec) may represent a recurrent process of visual information fed back from other brain structures and potentially susceptible to attention and task demands (for a review, see de Graaf et al., 2014). It is plausible that the observed TMS-induced reduction in face visibility rating at ∼90 msec and in negative expression recognition at ∼120 msec might reflect the feedforward and feedback processes of different facial cues in EVC, respectively.

Interestingly, TMS at ∼120 msec selectively disrupted the recognition of negative expressions but had no impact on the recognition of positive ones, suggesting that the processing of negative expressions is more susceptible to TMS disruption over EVC, even though positive expressions tend to attract higher visibility ratings and recognition accuracy (Figures 2 and 4). It has been well established that, among common facial expressions (happy, sad, angry, fearful, surprised, and disgusted), recognition of happiness is associated with the highest accuracy and fastest RT and is the least susceptible to declines in expression intensity and distortions of image quality (Guo, Soornack, & Settle, 2019; Guo, 2012). This is probably because the happy expression is more distinctive than other expressions (it contains fewer features overlapping with the others) and because of our prior experience in processing different expressions (e.g., happiness is the first expression to reach adult-level recognition accuracy in children's development; Rutter et al., 2019). Consequently, the recognition of happiness might be less cognitively demanding, lead to a higher visibility rating, and be less susceptible to the TMS disruption observed in our study.

Although anger and fear are recognized with relatively lower accuracy and longer RT than the happy expression (Guo, 2012), they tend to be detected more quickly (though not necessarily recognized correctly at a categorical level) and to initiate neural processing earlier than the happy expression (Lyyra et al., 2014; Yang et al., 2007). The observed selective TMS disruption at ∼120 msec of negative expression recognition might also be associated with the difference in processing speed between negative and positive expressions in EVC. Previous studies have commonly reported that negative expressions, rather than positive ones, enhance early visual and face-specific electrophysiological responses, such as the C1, P1, and N170 ERP responses (e.g., Lyyra et al., 2014; Calvo & Beltrán, 2013; Zhu & Luo, 2012; Yang et al., 2007; Pourtois et al., 2004), implying relatively earlier processing of negative than positive expressions. As we did not observe disruption of positive expressions at a later time point within the tested TMS time windows (90–150 msec), future research could further extend the TMS time range (e.g., to ∼250 msec).

Both neuropsychological and brain imaging studies have suggested a crucial role of the amygdala in processing negative facial expressions. Typically, patients with bilateral amygdala lesions demonstrate impaired recognition of fearful expressions (e.g., Adolphs, Tranel, Damasio, & Damasio, 1994), and healthy participants show enhanced neural activity in the amygdala when viewing fearful and angry expressions (e.g., Gur et al., 2002). The extensive connections between the amygdala and visual cortex, including Area V1 (Pessoa & Adolphs, 2010; Amaral, Behniea, & Kelly, 2003), would enable the facial emotional information processed in the amygdala to be projected to various visual areas. Brain imaging studies have observed that the amygdala can modulate neural activity in other cortical substrates, such as inferior temporal cortex (Hadj-Bouziane et al., 2012; Vuilleumier & Pourtois, 2007), while assessing the biological significance of emotional faces. In light of this, it is plausible that the affective information of negative facial valence is projected from the amygdala to EVC, or from the amygdala to higher cortical areas and then to EVC, and consequently modulates early visual neural processing of expressive faces after initial facial structure encoding. Indeed, recent studies have observed that facial expression discrimination performance can be impaired by TMS over rOFA at 60–100 msec (Pitcher et al., 2008), which is earlier than the ∼120 msec over EVC observed in this study, and that different facial expression images (e.g., happy vs. fearful faces) can induce slightly different neural activation patterns in V1 in an expression categorization task when compared with a face gender or identity discrimination task (Dobs, Schultz, Bülthoff, & Gardner, 2018; Greening, Mitchell, & Smith, 2018; Petro, Smith, Schyns, & Muckli, 2013), suggesting that EVC might receive processed facial emotional information from higher cortical "face processing" areas (e.g., rOFA, STS, lateral fusiform gyrus).

This observed time window (∼120 msec) for processing affective visual information in EVC is in agreement with those observed in fear conditioning studies. When participants learn an association between a neutral stimulus and an aversive stimulus, such as pairing human faces with a noxious odor, a robust learning-related enhancement of brain activation is often reported in extrastriate regions in EEG or magnetoencephalography studies (Steinberg et al., 2012; Dolan, Heinze, Hurlemann, & Hinrichs, 2006; Pizzagalli, Greischar, & Davidson, 2003), indicating that EVC has the capacity to respond to the affective content associated with the current visual inputs at ∼120 msec.

One novel finding of this study is that individuals' anxiety levels affected their facial expression recognition performance under TMS conditions. Across all the participants, TMS over EVC at ∼110–130 msec disrupted the recognition of both fear and anger expressions, and the disruption of fear recognition was independent of individuals' trait, state, and Beck anxiety measurements. The disruption of anger recognition, on the other hand, was modulated by individuals' trait anxiety level, with the low scorers being more susceptible to TMS disruption than the high scorers. This difference between high and low scorers might be caused by anxiety-related modulation of expression recognition. Previous studies have reported higher detection sensitivity and higher recognition accuracy for angry faces in people scoring high in trait anxiety (Attwood et al., 2017; Doty et al., 2013; Surcinelli et al., 2006). It is plausible that, under TMS conditions, the high scorers could still recognize the degraded anger expression, or were biased to interpret an ambiguous expression as anger, leading to fewer "missed" trials in the presentation of negative expressions. Consequently, the recognition accuracy of the anger expression was less reduced by TMS over EVC between 110 and 130 msec in the high than in the low scorers on anxiety measurements.

Alternatively, as anxious individuals often show stronger neural activation in the amygdala and pulvinar when processing negative facial expressions (Steuwe et al., 2014), they might rely more on these subcortical structures for expression perception, so that their expression recognition accuracy is less susceptible to TMS disruption over EVC. It would be interesting to disentangle or quantify the contributions of these two potential mechanisms underlying the observed TMS modulation in future research. Furthermore, for the low scorers in trait anxiety, TMS showed a detrimental effect on recognizing the anger expression but not the fear expression, suggesting that EVC may have anxiety-modulated involvement in the perception of anger and fear.

It should be noted that the reported TMS modulation is based on data from a relatively small group of young, healthy university students. It will be interesting to replicate this research with a larger data set, including participants of varying ages and mental health profiles (e.g., people with various anxiety disorders, such as social anxiety disorder, specific phobia, and generalized anxiety disorder). Furthermore, it remains to be seen to what extent the current findings can be generalized to the processing of other types of affective visual inputs, such as the natural and social scenes of varying valence and arousal levels in the International Affective Picture System.

Nevertheless, the current study furthers our understanding of an interaction between visual and emotional processing. We observed that TMS over EVC at ∼120 msec selectively disrupted the recognition of negative facial expressions, suggesting that EVC is involved in the processing of affective facial cues and that its neural responses to negative cues were modulated at a time that is likely to be beyond the initial detection or processing of fed-forward facial structural information. The observed TMS effect was further modulated by individuals' anxiety level with stronger disruption for those scoring relatively low in trait anxiety, implying that cognitive bias can affect the processing of face emotional valence in EVC. These extensive interactions between visual and emotional information among both early and later stages of the visual pathway suggest that vision and emotion are less decomposable and perhaps function through an interaction among multiple brain regions rather than a few specific structures.

Acknowledgments

We thank Mercedes Whybrow for the help with data collection.

Reprint requests should be sent to Kun Guo, School of Psychology, University of Lincoln, Brayford Pool, Lincoln LN6 7TS, United Kingdom, or via e-mail: kguo@lincoln.ac.uk.

REFERENCES

Acunzo, D. J., Mackenzie, G., & van Rossum, M. C. (2012). Systematic biases in early ERP and ERF components as a result of high-pass filtering. Journal of Neuroscience Methods, 209, 212–218.

Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669–672.

Amaral, D. G., Behniea, H., & Kelly, J. L. (2003). Topographic organisation of projections from the amygdala to the visual cortex in the macaque monkey. Neuroscience, 118, 1099–1120.

Anderson, A. K. (2005). Affective influences on the attentional dynamics supporting awareness. Journal of Experimental Psychology: General, 134, 258–281.

Attwood, A. S., Easey, K. E., Dalili, M. N., Skinner, A. L., Woods, A., Crick, L., et al. (2017). State anxiety and emotional face recognition in healthy volunteers. Royal Society Open Science, 4, 160855.

Bar-Haim, Y., Lamy, D., Pergamin, L., Bakermans-Kranenburg, M. J., & van Ijzendoorn, M. H. (2007). Threat-related attentional bias in anxious and nonanxious individuals: A meta-analytic study. Psychological Bulletin, 133, 1–24.

Batty, M., & Taylor, M. J. (2003). Early processing of the six basic facial emotional expressions. Cognitive Brain Research, 17, 613–620.

Beck, A. T., Epstein, N., Brown, G., & Steer, R. A. (1988). An inventory for measuring clinical anxiety: Psychometric properties. Journal of Consulting and Clinical Psychology, 56, 893–897.

Bruce, V., & Young, A. (2012). Face perception. Hove, UK: Psychology Press.

Calvo, M. G., & Beltrán, D. (2013). Recognition advantage of happy faces: Tracing the neurocognitive processes. Neuropsychologia, 51, 2051–2061.

Calvo, M. G., & Castillo, M. D. (2001). Selective interpretation in anxiety: Uncertainty for threatening events. Cognition and Emotion, 15, 299–320.

de Ayala, R. J., Vonderharr-Carlson, D. J., & Kim, D. (2005). Assessing the reliability of the Beck Anxiety Inventory scores. Educational and Psychological Measurement, 65, 742–756.

de Graaf, T. A., Cornelsen, S., Jacobs, C., & Sack, A. T. (2011). TMS effects on subjective and objective measures of vision: Stimulation intensity and pre- versus post-stimulus masking. Consciousness and Cognition, 20, 1244–1255.

de Graaf, T. A., Goebel, R., & Sack, A. T. (2012). Feedforward and quick recurrent processes in early visual cortex revealed by TMS? Neuroimage, 61, 651–659.

de Graaf, T. A., Koivisto, M., Jacobs, C., & Sack, A. T. (2014). The chronometry of visual perception: Review of occipital TMS masking studies. Neuroscience & Biobehavioral Reviews, 45, 295–304.

Dobs, K., Schultz, J., Bülthoff, I., & Gardner, J. L. (2018). Task-dependent enhancement of facial expression and identity representations in human cortex. Neuroimage, 172, 689–702.

Dolan, R. J., Heinze, H. J., Hurlemann, R., & Hinrichs, H. (2006). Magnetoencephalography (MEG) determined temporal modulation of visual and auditory sensory processing in the context of classical conditioning to faces. Neuroimage, 32, 778–789.

Doty, T. J., Japee, S., Ingvar, M., & Ungerleider, L. G. (2013). Fearful face detection sensitivity in healthy adults correlates with anxiety-related traits. Emotion, 13, 183–188.

Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191.

Fox, E. (2002). Processing emotional facial expressions: The role of anxiety and awareness. Cognitive, Affective & Behavioral Neuroscience, 2, 52–63.

Green, C., & Guo, K. (2018). Factors contributing to individual differences in facial expression categorisation. Cognition and Emotion, 32, 37–48.

Greening, S. G., Mitchell, D. G. V., & Smith, F. W. (2018). Spatially generalizable representations of facial expressions: Decoding across partial face samples. Cortex, 101, 31–43.

Guo, K. (2012). Holistic gaze strategy to categorize facial expression of varying intensities. PLoS One, 7, e42585.

Guo, K., Soornack, Y., & Settle, R. (2019). Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion. Vision Research, 157, 112–122.

Gur, R. C., Schroeder, L., Turner, T., McGrath, C., Chan, R. M., Turetsky, B. I., et al. (2002). Brain activation during facial emotion processing. Neuroimage, 16, 651–662.

Hadj-Bouziane, F., Liu, N., Bell, A. H., Gothard, K. M., Luh, W., Tootell, R. B., et al. (2012). Amygdala lesions disrupt modulation of functional MRI activity evoked by facial expression in the monkey inferior temporal cortex. Proceedings of the National Academy of Sciences, U.S.A., 109, 3640–3648.

Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54, 917–924.

Horstmann, G. (2007). Preattentive face processing: What do visual search experiments with schematic faces tell us? Visual Cognition, 15, 799–833.

Hunter, L. R., Buckner, J. D., & Schmidt, N. B. (2009). Interpreting facial expressions: The influence of social anxiety, emotional valence, and race. Journal of Anxiety Disorders, 23, 482–488.

Kadosh, K. C., Walsh, V., & Kadosh, R. C. (2011). Investigating face-property specific processing in the right OFA. Social Cognitive and Affective Neuroscience, 6, 58–65.

Kammer, T. (2007). Masking visual stimuli by transcranial magnetic stimulation. Psychological Research, 71, 659–666.

Li, Z., Yan, A., Guo, K., & Li, W. (2019). Fear-related signals in the primary visual cortex. Current Biology, 29, 4078–4083.

Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska Directed Emotional Faces – KDEF, CD ROM. Psychology Section, Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden.

Lyyra, P., Hietanen, J. K., & Astikainen, P. (2014). Anger superiority effect for change detection and change blindness. Consciousness and Cognition, 30, 1–12.

Öhman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80, 381–396.

Padmala, S., & Pessoa, L. (2008). Affective learning enhances visual detection and responses in primary visual cortex. Journal of Neuroscience, 28, 6202–6210.

Pessoa, L., & Adolphs, R. (2010). Emotion processing and the amygdala: From a 'low road' to 'many roads' of evaluating biological significance. Nature Reviews Neuroscience, 11, 773–783.

Petro, L. S., Smith, F. W., Schyns, P. G., & Muckli, L. (2013). Decoding face categories in diagnostic subregions of primary visual cortex. European Journal of Neuroscience, 37, 1130–1139.

Phelps, E. A. (2006). Emotion and cognition: Insights from studies of the human amygdala. Annual Review of Psychology, 57, 27–53.

Pitcher, D., Garrido, L., Walsh, V., & Duchaine, B. C. (2008). Transcranial magnetic stimulation disrupts the perception and embodiment of facial expressions. Journal of Neuroscience, 28, 8929–8933.

Pizzagalli, D. A., Greischar, L. L., & Davidson, R. J. (2003). Spatiotemporal dynamics of brain mechanisms in aversive classical conditioning: High-density event-related potential and brain electrical tomography analyses. Neuropsychologia, 41, 184–194.

Pourtois, G., Grandjean, D., Sander, D., & Vuilleumier, P. (2004). Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cerebral Cortex, 14, 619–633.

Pourtois, G., Thut, G., Grave de Peralta, R., Michel, C., & Vuilleumier, P. (2005). Two electrophysiological stages of spatial orienting towards fearful faces: Early temporo-parietal activation preceding gain control in extrastriate visual cortex. Neuroimage, 26, 149–163.

Roebuck, H., Bourke, P., & Guo, K. (2014). Role of lateral and feedback connections in primary visual cortex in the processing of spatiotemporal regularity—A TMS study. Neuroscience, 263, 231–239.

Rutter, L. A., Dodell-Feder, D., Vahia, I. V., Forester, B. P., Ressler, K. J., Wilmer, J. B., et al. (2019). Emotion sensitivity across the lifespan: Mapping clinical risk periods to sensitivity to facial emotion intensity. Journal of Experimental Psychology: General, 148, 1993–2005.

Sack, A. T., van der Mark, S., Schuhmann, T., Schwarzbach, J., & Goebel, R. (2009). Symbolic action priming relies on intact neural transmission along the retino-geniculo-striate pathway. Neuroimage, 44, 284–293.

Salminen-Vaparanta, N., Noreika, V., Revonsuo, A., Koivisto, M., & Vanni, S. (2012). Is selective primary visual cortex stimulation achievable with TMS? Human Brain Mapping, 33, 652–665.

Slotnick, S. D. (2018). The experimental parameters that affect attentional modulation of the ERP C1 component. Cognitive Neuroscience, 9, 53–62.

Spielberger, C. D., Gorsuch, R. L., Lushene, R., Vagg, P. R., & Jacobs, G. A. (1983). Manual for the State-Trait Anxiety Inventory. Palo Alto, CA: Consulting Psychologists Press.

Staugaard, S. R. (2010). Threatening faces and social anxiety: A literature review. Clinical Psychology Review, 30, 669–690.

Steinberg, C., Dobel, C., Schupp, H. T., Kissler, J., Elling, L., Pantev, C., et al. (2012). Rapid and highly resolving: Affective evaluation of olfactorily conditioned faces. Journal of Cognitive Neuroscience, 24, 17–27.

Steuwe, C., Daniels, J. K., Frewen, P. A., Densmore, M., Pannasch, S., Beblo, T., et al. (2014). Effect of direct eye contact in PTSD related to interpersonal trauma: An fMRI study of activation of an innate alarm system. Social Cognitive and Affective Neuroscience, 9, 88–97.

Surcinelli, P., Codispoti, M., Montebarocci, O., Rossi, N., & Baldaro, B. (2006). Facial emotion recognition in trait anxiety. Journal of Anxiety Disorders, 20, 110–117.

Thielscher, A., Reichenbach, A., Ugurbil, K., & Uludağ, K. (2010). The cortical site of visual suppression by transcranial magnetic stimulation. Cerebral Cortex, 20, 328–338.

Vuilleumier, P., & Driver, J. (2007). Modulation of visual processing by attention and emotion: Windows on causal interactions between human brain regions. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 362, 837–855.

Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia, 45, 174–194.

Walsh, V., & Cowey, A. (2000). Transcranial magnetic stimulation and cognitive neuroscience. Nature Reviews Neuroscience, 1, 73–79.

Yang, E., Zald, D. H., & Blake, R. (2007). Fearful expressions gain preferential access to awareness during continuous flash suppression. Emotion, 7, 882–886.

Zhu, X., & Luo, Y. (2012). Fearful faces evoke a larger C1 than happy faces in executive attention task: An event-related potential study. Neuroscience Letters, 526, 118–121.