Abstract

When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.

INTRODUCTION

The ability to rapidly detect social signals of environmental threat is a critical part of human cognition. It is now known that facial displays signaling environmental threat bias attentional and perceptual resources (West, Anderson, & Pratt, 2009; Phelps, Ling, & Carrasco, 2006; Calvo & Esteves, 2005; Fox, Russo, Bowles, & Dutton, 2001; Fox et al., 2000), and draw on neural structures specialized for their identification (Rudrauf et al., 2008; Anderson, Christoff, Panitz, DeRosa, & Gabrieli, 2003; Vuilleumier, Armony, Driver, & Dolan, 2001; Morris, Öhman, & Dolan, 1999). This prioritization of affective information, however, comes at a cost. Due to the limited processing capacity of the human brain, prioritizing resources toward any one object comes at the expense of other objects concurrently competing for awareness (Desimone & Duncan, 1995). This biased competition for neural representation allows certain percepts to reach awareness over others either based on an organism's current goal state (Folk, Remington, & Johnston, 1992; Yantis & Jonides, 1984) or through an involuntary stimulus-driven process (Beck & Kastner, 2005). The involuntary stimulus-driven process is thought to occur through suppressive interactions between the neural populations dedicated to each stimulus's retinotopic encoding (Yeshurun & Carrasco, 1998). In other words, multiple stimuli compete in a stimulus-driven context for early prioritization.

There is now increasing evidence that object prioritization and even object categorization occur very early in the processing stream. For example, it has been found that frontal activity associated with object classification in humans occurs as early as 150 msec after stimulus onset (Thorpe, Fize, & Marlot, 1996), and primate face-selective cells in the superior temporal sulcus respond about 80–100 msec after stimulus onset (Oram & Perrett, 1992). Further, when considering more basic visual encoding, it has been shown that retinotopically aligned V1 neurons respond as early as 40 msec after stimulus onset (Celebrini, Thorpe, Trotter, & Imbert, 1993). Although neural competition can occur as early as the lateral geniculate nucleus (O'Connor, Fukui, Pinsk, & Kastner, 2002), the first evidence of cortical neural competition between multiple concurrent stimulus inputs has been found at the level of V1, with contextually salient stimuli being prioritized through the attenuation of activity from neighboring retinotopically aligned regions that encode less salient stimuli (Beck & Kastner, 2005; Yeshurun & Carrasco, 1998). It is also possible that salience can be mediated by more complex, motivationally significant stimuli important for an organism's survival. Behaviorally, there is evidence that salient social signals of threat can bias the contents of awareness when in direct competition with other environmental objects. For example, the perception of facial threat information is accelerated in time relative to competing neutral information (West, Anderson, Bedwell, & Pratt, 2010; West, Anderson, & Pratt, 2009), and facial threat signals are also detected more efficiently than their nonaffective contemporaries (Öhman, Lundqvist, & Esteves, 2001).

As such, there is sufficient behavioral evidence for prioritized processing of social signals of threat over concurrently presented information of neutral valence. It is unclear, however, whether this early prioritization results from direct neural competition favoring motivationally significant stimuli. In addition, this competition could occur within the first wave of visual processing in V1, possibly supported by direct feedforward visual inputs (Rudrauf et al., 2008), or it could occur later, reflecting possible subsequent feedback from other structures (LeDoux, 2002; Anderson & Phelps, 2001). To address this issue, we measured ERPs to examine whether fear expressions are prioritized at the very earliest stage of cortical visual processing through a direct neural competition in primary visual cortex that results in the suppression of neural activity associated with competing environmental stimuli. Specifically, we recorded visual evoked potentials (VEPs) to examine the C1 component, the earliest evoked visual component, which peaks negatively between approximately 50 and 90 msec after stimulus onset (Clark, Fan, & Hillyard, 1995) and has previously been shown to be sensitive to emotional (Pourtois, Grandjean, Sander, & Vuilleumier, 2004) and negatively conditioned visual stimuli (Stolarova, Keil, & Moratti, 2006). Substantive evidence places the neural generator of the C1 component in striate cortex within the calcarine fissure (Clark et al., 1995; Mangun, 1995; Jeffreys & Axford, 1972), making it the only evoked component measurable at the scalp that has a single localized neural source. Because the lower and upper visual hemifields are retinotopically mapped onto the upper and lower banks of the calcarine fissure, respectively, the C1 reverses polarity depending on whether stimulation occurs in the upper or the lower hemifield. In other words, when the upper hemifield is stimulated, negative surface-recorded activity is generated by neural populations in the lower bank of the calcarine fissure, evoking the negative C1 component. Conversely, when the lower hemifield is stimulated, positive activity is generated by neural populations in the upper bank of the calcarine fissure, which is difficult to distinguish from the positive P1 component that begins after the C1 component. Thus, the surface-recorded ERPs represent the summated positive and negative electrical activity generated by the neural populations encoding stimuli presented in each hemifield (e.g., Talsma & Woldorff, 2005).
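
In schematic terms (our notation; the original report gives no formal expression), the potential recorded at a midline occipital site can be treated as the sum of the two retinotopic generators:

\[ V_{\mathrm{POz}}(t) \;\approx\; V_{\mathrm{lower\ bank}}(t) + V_{\mathrm{upper\ bank}}(t), \]

where the lower-bank generator (driven by the upper visual field) contributes negative-going voltage and the upper-bank generator (driven by the lower visual field) contributes positive-going voltage. Any bias that strengthens one input while suppressing its competitor therefore shifts the summated C1 toward the polarity of the favored generator.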

In the present study, we directly tested for competitive interactions between facial displays of fear and concurrent stimuli competing for neural representation. We chose facial displays of fear as stimuli signaling environmental threat because of previous demonstrations of their preferential perceptual and attentional processing (e.g., West et al., 2010; Whalen et al., 2004; Vuilleumier, Armony, Driver, & Dolan, 2003). Across three experiments, we examined evoked potentials from the calcarine fissure while contrasting facial fear displays with a competing stimulus pair that either (1) had no meaning, (2) displayed a different emotion, or (3) was perceptually identical but was presented in a noncanonical orientation (thereby decreasing its preferential processing). Pairs of stimuli from each stimulus category (fear or nonfear) were presented in either the upper or lower visual hemifield, with the other categorical stimulus pair appearing concurrently in the opposite hemifield. We hypothesized that electrophysiological activity from the neural populations within the calcarine fissure encoding each stimulus pair would summate to produce a negative evoked potential (C1) when stimuli in the upper hemifield were prioritized during a direct competition for neural representation. Conversely, when stimuli in the lower hemifield were prioritized, we expected the summated activity to result in the suppression of the negative evoked C1 component. In other words, if one stimulus pair is prioritized during this direct neural competition regardless of where in the visual display it is presented, it should evoke a negative C1 component when presented in the upper hemifield but should suppress the C1 component when presented in the lower hemifield (Di Russo, Martinez, & Hillyard, 2003). Through these electrophysiological measurements, we show that task-irrelevant facial signals of threat directly competing for neural representation in visual cortex are prioritized at the expense of concurrent stimuli competing for awareness.

EXPERIMENT 1

We first wanted to establish whether facial displays of fear were prioritized when in direct neural competition with a stimulus that provided the same amount of retinal stimulation but was not configured to contain any motivational significance or meaning. To accomplish this, we used the Fourier scrambled derivatives of the facial fear displays as contrasting stimuli, in which the same global luminance, contrast, and spatial frequencies were maintained, with only the phase information differing. To avoid any top–down bias toward a particular stimulus category, the stimuli of interest were completely task irrelevant: Participants were instructed to simply report the color of four concurrently presented squares on each trial. If facial displays of fear in fact “win” the competition for privileged neural processing in V1, we would expect a C1 component to be generated when the facial displays are presented in the upper hemifield despite competing positive activity being produced by the Fourier scrambled stimuli in the lower hemifield. Similarly, we would expect displays of facial fear in the lower hemifield to suppress the negative C1 component generated by the concurrent Fourier scrambled stimuli in the upper hemifield.

Methods

Participants

The participants were 10 right-handed undergraduate students (6 women) from the University of Toronto, who had normal or corrected-to-normal vision, and had no history of neurological disorder. The participants were financially compensated for their time and gave informed consent before beginning the experiment. All ethical guidelines set by the University of Toronto were adhered to in full.

Apparatus

Experiments were programmed and displayed using Presentation software (Version 11.0; www.neuro-bs.com) and were run on a desktop PC with a 21-in. flat CRT monitor (1024 × 768 resolution, 60-Hz refresh rate). Viewing distance was 57 cm, and participants made responses using a Microsoft SideWinder controller with either their left or right hand.

EEG was recorded using a BioSemi ActiveTwo system with 64 Ag–AgCl pin-type active electrodes arranged in standard 10–20 placement, with additional flat electrodes at each mastoid, at 1.5 cm lateral to the external canthus of each eye, and below each eye. Data were recorded continuously at a sampling rate of 512 Hz. Raw data were unreferenced due to the architecture of the ActiveTwo system.

Stimuli and Procedure

The stimuli were facial displays of fear and their Fourier scrambled derivatives. Fear displays were grayscale photographs of 17 individuals (9 women and 8 men) taken from two standard datasets of cross-culturally recognized posed facial expressions (Matsumoto & Ekman, 1988; Ekman & Friesen, 1976). Faces subtended 4° × 5° of visual angle at a 70-cm viewing distance. Each face stimulus was processed in Matlab to minimize variation in facial feature positions and was rigidly aligned such that the centers of the eyes and the tip of the nose were equated across images. Face stimuli were cut out from the background image using a polygonal region defined by a consistent facial contour labeling scheme. Pixels inside the polygonal region defining the face were histogram equalized to remove variability due to lighting differences across faces. Each face was then cropped with a consistent oval that retained the eyebrows, eyes, nose, and mouth in each image. Fourier scrambled stimuli were derived from the processed facial displays of fear; they contained the same spatial frequencies, luminance, and contrast as their original counterparts but lacked any structure or meaning. This was done using a two-dimensional Fast Fourier Transform (FFT), followed by phase randomization and reconstruction using the image's original frequency spectrum. A feathering procedure (a softening of the image's edges) was applied to the Fourier scrambled derivatives to minimize the effect of abrupt contrast shifts at the edge of the stimulus. No significant differences in global luminance or contrast existed between faces and their scrambled counterparts (ts < 1).

The vertical position of the display screen was adjusted individually for each participant so that fixation was consistently at eye level. A fixation cross subtending 0.35° × 0.35° was presented in the center of the screen. On each trial, two pairs of stimuli were presented: a pair of identical fear-face exemplars and the corresponding pair of their Fourier scrambled derivatives. One pair was presented in either the upper or lower hemifield, with the other pair concurrently appearing in the opposite hemifield, resulting in two experimental conditions: Either the facial displays of fear were presented in the upper hemifield with the corresponding pair of Fourier scrambled stimuli in the lower hemifield, or the Fourier scrambled stimuli were presented in the upper hemifield with the facial displays of fear in the lower hemifield. Stimuli were presented equidistantly from fixation at an eccentricity of 4.5°, with the centers of the stimuli in each pair 6° apart. The facial stimuli were task irrelevant. The instructed task involved four small equiluminant squares (14.0 cd/m²) subtending 0.4° × 0.4°, presented along the vertical and horizontal meridians and each centered directly between the respective face/scrambled stimuli within its quadrant of the screen. The squares were always the same color on a given trial and were equally likely to be green or red.
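
The phase-scrambling step described above can be sketched as follows. This is a minimal illustration in Python/NumPy, not the authors' Matlab code; the function name and the omission of the feathering step are our own simplifications.

```python
import numpy as np

def fourier_phase_scramble(image, rng=None):
    """Phase-scramble a grayscale image while preserving its amplitude spectrum.

    The output has the same spatial frequencies, global luminance, and contrast
    as the input, but the phase information (and hence any structure) is removed.
    """
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(image)

    # Random phases taken from the FFT of a real noise image are Hermitian-
    # symmetric, so the reconstruction stays essentially real-valued and the
    # DC term (i.e., mean luminance) is left untouched.
    phase_noise = np.angle(np.fft.fft2(rng.random(image.shape)))

    scrambled_spectrum = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + phase_noise))
    return np.real(np.fft.ifft2(scrambled_spectrum))
```

Because only the phase is perturbed, the amplitude spectrum, and therefore the global luminance and contrast of the scrambled image, matches the original face.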

The experiment consisted of one practice block of 10 trials, followed by eight experimental blocks of 150 trials each (1200 experimental trials in total). Timed 15-sec breaks were given every 15 trials, with longer untimed breaks given between blocks. Each block contained an equal number of presentations of each condition, with stimulus exemplars chosen at random on each trial. A typical trial sequence can be seen in Figure 1. Each trial began with a fixation period randomly jittered between 600 and 1000 msec, followed by the onset of the stimulus display (both pairs of stimuli presented concurrently with the target squares) for 100 msec. Participants were instructed to press the left trigger of the game controller if the squares were red and the right trigger if the squares were green; the face and Fourier scrambled stimuli were task irrelevant. After a response was made, a 1200-msec intertrial interval was presented as a blank screen.
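
For concreteness, the design parameters above can be summarized in a short sketch. This is only an illustration of the trial structure (the actual experiment was implemented in Presentation); condition labels and dictionary keys are our own.

```python
import random

# Design parameters as described above (durations in msec).
FIXATION_JITTER_MS = (600, 1000)   # pre-stimulus fixation, uniformly jittered
STIMULUS_DURATION_MS = 100         # both stimulus pairs plus the four squares
INTERTRIAL_INTERVAL_MS = 1200      # blank screen following the response
N_BLOCKS, TRIALS_PER_BLOCK = 8, 150

def make_block():
    """Build one block of 150 trials with the two conditions balanced, then shuffled."""
    conditions = ["fear_upper", "fear_lower"] * (TRIALS_PER_BLOCK // 2)
    random.shuffle(conditions)
    return [
        {
            "condition": condition,                           # which pair occupies the upper hemifield
            "fixation_ms": random.randint(*FIXATION_JITTER_MS),
            "square_color": random.choice(["red", "green"]),  # the task-relevant dimension
        }
        for condition in conditions
    ]

blocks = [make_block() for _ in range(N_BLOCKS)]  # 8 x 150 = 1200 experimental trials
```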

Figure 1. 

Procedure used in all ERP experiments. Participants fixated at the center of the screen for a randomly jittered amount of time, which was followed by the onset of the stimulus display for 100 msec. Participants then made a response indicating whether the target squares were either red or green.


EEG Processing and Analysis

EEG data were processed using EEProbe (Version 3.3.118) after being converted from ActiveTwo's .bdf format to EEProbe's .cnt format using PolyRex (Kayser, 2003). Individual datasets were filtered using a finite impulse response (FIR) filter with a high-pass cutoff of 1.0 Hz and a low-pass cutoff of 30 Hz. The continuous data were then re-referenced to the algebraic mean of the two mastoids, and rejection markers were generated using channel thresholds (30 standard deviations over a sliding window of 200 msec) applied to the frontopolar channels Fp1 and Fp2. The same thresholds were used to detect eye movements and blinks, which were monitored using bipolar horizontal and vertical EOG derivations via two pairs of flat electrodes, one pair placed lateral to the external canthi and the other placed below each eye. All rejection markers and detected blinks were also inspected visually prior to averaging, with additional rejections marked for periods of excessively noisy or artifactual data.
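
The preprocessing was carried out in EEProbe; an equivalent sketch using NumPy/SciPy is shown below. Channel labels ("M1"/"M2"), the filter length, and the peak-to-peak rejection rule are illustrative assumptions rather than the EEProbe implementation.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 512  # sampling rate in Hz

# Zero-phase band-pass FIR (1-30 Hz), mirroring the high- and low-pass settings above.
FIR_COEFS = firwin(numtaps=513, cutoff=[1.0, 30.0], pass_zero=False, fs=FS)

def preprocess(data, ch_names):
    """Filter and mastoid-re-reference continuous EEG of shape (n_channels, n_samples)."""
    filtered = filtfilt(FIR_COEFS, 1.0, data, axis=1)

    # Re-reference to the algebraic mean of the two mastoid electrodes
    # ("M1"/"M2" are assumed channel labels).
    mastoids = [ch_names.index("M1"), ch_names.index("M2")]
    return filtered - filtered[mastoids].mean(axis=0)

def mark_artifacts(data, ch_idx, threshold, win_samples=int(0.2 * FS)):
    """Flag 200-msec windows whose peak-to-peak range on the given channels exceeds a
    threshold -- a simplified stand-in for the rejection criterion described above."""
    bad = np.zeros(data.shape[1], dtype=bool)
    for start in range(0, data.shape[1] - win_samples, win_samples):
        segment = data[ch_idx, start:start + win_samples]
        if (segment.max(axis=1) - segment.min(axis=1)).max() > threshold:
            bad[start:start + win_samples] = True
    return bad
```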

Each dataset was then averaged to generate individual ERPs time-locked to stimulus onset, with an epoch window of −150 to 200 msec and a baseline period of −150 to 0 msec relative to stimulus onset. We examined the relationship between stimulus condition and the amplitude of the C1 component, which peaked maximally at the posterior occipital electrode POz (Di Russo et al., 2003; Miniussi, Girelli, & Marzi, 1998; Clark et al., 1995). Component amplitude was quantified as the mean voltage within a specified latency window centered on the component's peak. No ERP was based on fewer than 500 trials.
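
A simplified sketch of the epoching, baseline correction, and mean-amplitude measure is given below (our own illustration, not the EEProbe pipeline; the example window values follow those reported in the Results).

```python
import numpy as np

FS = 512                    # sampling rate in Hz
EPOCH_S = (-0.150, 0.200)   # epoch window relative to stimulus onset (sec)

def epoch_and_average(data, onset_samples, reject_mask=None):
    """Cut baseline-corrected epochs around each stimulus onset (in samples) and average."""
    pre = int(round(-EPOCH_S[0] * FS))
    post = int(round(EPOCH_S[1] * FS))
    epochs = []
    for onset in onset_samples:
        if reject_mask is not None and reject_mask[onset - pre:onset + post].any():
            continue                                 # drop trials flagged as artifactual
        segment = data[:, onset - pre:onset + post].astype(float)
        segment -= segment[:, :pre].mean(axis=1, keepdims=True)   # baseline: -150 to 0 msec
        epochs.append(segment)
    return np.mean(epochs, axis=0)                   # ERP, shape (n_channels, n_samples)

def mean_amplitude(erp, ch_idx, t_start_ms, t_end_ms):
    """Mean voltage of one channel within a post-stimulus latency window (msec)."""
    pre = int(round(-EPOCH_S[0] * FS))
    i0 = pre + int(round(t_start_ms / 1000 * FS))
    i1 = pre + int(round(t_end_ms / 1000 * FS))
    return erp[ch_idx, i0:i1].mean()

# e.g., the C1 at POz quantified as mean voltage from 30 to 60 msec:
# c1 = mean_amplitude(erp, ch_names.index("POz"), 30, 60)
```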

Results and Discussion

Behavioral performance on the color identification task was very high, with an average accuracy of 91% regardless of condition (t < 1). The grand-average ERP waveforms contrasting trials with displays of facial fear presented in the upper hemifield and trials with their Fourier scrambled derivatives presented in the upper hemifield are shown in Figure 2A. A planned two-tailed t test was conducted on the mean voltage of the C1 component (30–60 msec), revealing a significant difference between the fear upper hemifield condition (−1.87 μV) and the Fourier scrambled upper hemifield condition [−0.94 μV; t(9) = 3.49, p < .01; d = 2.33]. As later ERP components have multiple potential neural generators (Di Russo et al., 2003), we were not primarily interested in their analysis. We found, however, a statistically reliable effect on the P1 component (80–120 msec), where mean amplitude was higher for the fear upper hemifield condition (4.8 μV) than for the Fourier scrambled upper hemifield condition [4.0 μV; t(9) = 2.40, p < .05; d = 1.60]. No other statistically reliable differences were found.
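
The planned comparison amounts to a paired, two-tailed t test across participants on the per-subject C1 mean amplitudes. A sketch using SciPy is shown below; the effect-size formula is one common convention for paired designs and may differ from the one used by the authors.

```python
import numpy as np
from scipy.stats import ttest_rel

def planned_comparison(c1_condition_a, c1_condition_b):
    """Paired, two-tailed t test across participants on per-subject C1 mean amplitudes (in microvolts)."""
    a = np.asarray(c1_condition_a, dtype=float)
    b = np.asarray(c1_condition_b, dtype=float)
    result = ttest_rel(a, b)                    # two-sided by default
    diff = a - b
    cohen_d = diff.mean() / diff.std(ddof=1)    # one common convention for paired data
    return result.statistic, result.pvalue, cohen_d
```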

Figure 2. 

(A) Grand-average waveforms evoked by the stimulus display at electrode POz, for trials where fear faces were presented in the upper hemifield versus when Fourier transformed faces were presented in the upper hemifield. The region occupied by the gray bar highlights the difference in the generation of the C1 (30–60 msec) between both conditions. (B) Topographical voltage distribution in the fear upper hemifield condition at the peak time of the C1 component.


Even though participants were engaged in a color discrimination task and were only passively exposed to the facial displays of fear, the results of Experiment 1 reveal that these affective social signals were able to bias the competition for neural representation at the earliest stage of cortical visual processing. It is important to keep in mind, however, that although the Fourier scrambled stimuli were globally equated in luminance and contrast, we compared canonical facial displays against nonface displays. Thus, it is possible that the results of the current experiment reflect the prioritization of objects or faces in general rather than specifically affective content. We therefore directly contrasted facial displays of fear with their neutral face counterparts in Experiment 2.

EXPERIMENT 2

We hypothesized that if it is, in fact, the social signal of threat that is being prioritized at this early stage of cortical processing, then we would again expect the C1 component to be generated when fear displays are presented in the upper hemifield despite competing activity being generated by their neutral face counterparts in the lower hemifield. In addition, we would again expect activity generated by displays of facial fear in the lower hemifield to suppress activity generated from neutral face stimuli in the upper hemifield.

Methods

Participants

The participants were 12 right-handed undergraduate students (7 women) from the University of Toronto, who had normal or corrected-to-normal vision, and had no history of neurological disorder. The participants were paid and gave informed consent before beginning the experiment. All ethical guidelines set by the University of Toronto were adhered to in full.

Stimuli and Procedure

The same facial fear displays were used as in Experiment 1. The neutral faces were processed as described in Experiment 1 and did not differ significantly in luminance from their fear face counterparts (t < 1). The procedure and design were otherwise the same as previously described.

EEG Processing and Analysis

Processing and analysis were the same as in Experiment 1.

Results and Discussion

Behavioral performance on the color identification task was again very high, with an average accuracy of 93% regardless of condition (t < 1). The grand-average ERP waveforms contrasting trials where displays of facial fear were presented in the upper hemifield and trials where their neutral face counterparts were presented in the upper hemifield are shown in Figure 3A. A planned two-tailed t test was conducted on the mean voltage of the C1 component (40–70 msec), again revealing a significant difference between the fear upper hemifield condition (−1.85 μV) and the neutral upper hemifield condition [−0.95 μV; t(11) = 3.26, p < .01; d = 1.96]. No other statistically reliable differences were found.

Figure 3. 

(A) Grand-average waveforms evoked by the stimulus display at electrode POz, for trials where fear faces were presented in the upper hemifield versus when neutral faces were presented in the upper hemifield. The region occupied by the gray bar highlights the difference in the generation of the C1 (40–70 msec) between both conditions. (B) Topographical voltage distribution in the fear upper hemifield condition at the peak time of the C1 component.


The results of Experiment 2 confirm that the affective signal embedded in a display of facial fear is, in fact, prioritized in V1 when directly competing for representation against a neutral facial display. Nevertheless, it is possible that this prioritization of affective information is driven solely by low-level physical differences between fear and neutral facial displays rather than by affective content. For instance, the physical salience associated with increased eye whites might serve to augment the C1 independently of the processing of the emotional content of fear faces (e.g., Whalen et al., 2004). We therefore conducted a final experiment to test whether it is, in fact, the holistic affective facial display that is being prioritized at this stage of processing, or simply some local luminance or contrast difference naturally inherent to these stimuli.

EXPERIMENT 3

In Experiment 3, we wanted to test whether the observed prioritization of facial fear content within V1 was driven by global processing of the stimulus or instead by a local, low-level featural salience difference between affective and nonaffective stimuli (Whalen et al., 2004). We did this by directly contrasting facial displays of fear with their inverted counterparts. Because striate cortex represents the upper and lower hemifields in a retinotopically mirrored fashion, the corresponding neural populations are activated in geometrically opposite orientations (Di Russo et al., 2003). This means that a stimulus pitted against its inverted counterpart, when spaced equidistantly from fixation, will stimulate the same corresponding pattern of neural populations within each bank of the calcarine fissure. Thus, if the canonical orientation of a facial display of fear is important for the nervous system to prioritize its perception at the V1 stage of processing, we would still expect a C1 to be generated despite the canceling activity produced by its inverted counterpart in the opposite hemifield.

Methods

Participants

The participants were 10 right-handed undergraduates (6 women) from the University of Toronto, who had normal or corrected-to-normal vision, and had no history of neurological disorder. The participants were paid and gave informed consent before beginning the experiment. All ethical guidelines set by the University of Toronto were adhered to in full.

Stimuli and Procedure

The same facial fear displays were used as in Experiments 1 and 2, except that they were now contrasted with their inverted counterparts. The procedure and design were otherwise the same as previously described. Thus, in one condition, a pair of upright facial displays of fear was presented in the upper hemifield with their inverted counterparts in the lower hemifield; in the other condition, the inverted displays of fear were presented in the upper hemifield with the corresponding pair of upright facial displays of fear in the lower hemifield.

EEG Processing and Analysis

Processing and analysis were the same as in Experiments 1 and 2.

Results and Discussion

Behavioral performance on the color identification task was again very high, with an average accuracy of 95% regardless of condition (t < 1). The grand-average ERP waveforms contrasting trials where displays of facial fear were presented in the upper hemifield and trials where their inverted counterparts were presented in the upper hemifield are shown in Figure 4A. A planned two-tailed t test was conducted on the mean voltage of the C1 component (40–70 msec), revealing a significant difference between the fear upper hemifield condition (−1.71 μV) and the inverted fear upper hemifield condition [−1.11 μV; t(9) = 2.81, p < .05; d = 1.87]. Again, we were not primarily interested in later ERP components. We found, however, a statistically reliable effect on the N170 component (150–200 msec), where mean amplitude was more negative for the fear upper hemifield condition (−2.73 μV) than for the inverted fear upper hemifield condition [−2.0 μV; t(9) = 3.01, p < .05; d = 2.00].

Figure 4. 

(A) Grand-average waveforms evoked by the stimulus display at electrode POz, for trials where fear faces were presented in the upper hemifield versus when their inverted counterparts were presented in the upper hemifield. The region occupied by the gray bar highlights the difference in the generation of the C1 (40–70 msec) between both conditions. (B) Topographical voltage distribution in the fear upper hemifield condition at the peak time of the C1 component.


The results of Experiment 3 provide convincing evidence that the visual system is sensitive to the canonical orientation of facial displays of fear as early as 50 msec in V1, with their affective content being globally prioritized at the expense of competing visual information. Given the mirrored retinotopic organization of the striate cortex serving each hemifield, we would expect an equal amount of neural signal to be generated for each competing stimulus if canonical orientation were not important for a fear display's prioritization. Instead, we find that upright facial displays of fear still received privileged neural processing when directly contrasted with their inverted counterparts. Because we would expect these stimulus inputs to stimulate their corresponding regions of striate cortex equally, these data suggest that some form of affective stimulus prioritization might occur within or even before this stage of processing in V1, biasing spatial regions retinotopically occupied by affective information toward privileged processing. The existence of this early biased competition favoring affective information converges with existing evidence that conditioned responses associated with affective stimuli can enhance subsequent activity in primary visual cortex (Padmala & Pessoa, 2008; Stolarova et al., 2006; Pourtois et al., 2004). Such retinotopic competitive biases could arise through either feedforward (e.g., Rudrauf et al., 2008) or re-entrant (LeDoux, 2002; Anderson & Phelps, 2001) inputs relying on subcortical or even cortical feedback, biasing the competition for representation in V1 in favor of biologically important stimuli.

GENERAL DISCUSSION

Displays of facial fear, acting as an important social signal of environmental threat (Whalen et al., 2004; Whalen, 1998), were found to be prioritized at the expense of concurrent stimuli competing for neural representation within the first stages of visual processing in V1. When displays of fear were contrasted with both their Fourier scrambled (Experiment 1) and neutral counterparts (Experiment 2), fear displays presented in the upper visual field elicited a robust C1 component. Consistent with the notion that displays of facial fear win this competition for neural representation in V1, fear displays presented in the lower hemifield suppressed the negative C1 activity generated by the competing stimuli in the upper hemifield. This privileged neural processing of fear displays was still observed when they competed against their inverted counterparts, demonstrating that the canonical orientation of an affective stimulus is crucial for its prioritization at this earliest stage of visual processing (Experiment 3). Our C1 results show a remarkable similarity to single-cell response latencies in V1 (40 msec; Celebrini et al., 1993), which receives projections from the LGN, where responses occur within 20 msec after stimulus onset (Maunsell et al., 1999). Together, these data provide evidence that affective information can be prioritized in the context of a biased competition as early as the first visual cortical synapses in V1.

When fearful displays were presented in the lower hemifield, a positive C1 was not produced, even though we hypothesized that the positive activity they generated would outweigh the negative activity generated from the upper hemifield. This is likely because the C1 is typically more pronounced when generated by stimuli in the upper hemifield (Di Russo et al., 2003). In other words, fear displays generate less positive activity from the lower hemifield than the negative activity produced when fear faces are presented in the upper hemifield, which can account for the lack of a positive C1 in the lower hemifield condition. Consistent with a prioritization of fear displays presented in either hemifield, the negative C1 component produced by nonemotional stimuli presented in the upper hemifield was significantly attenuated when fear displays were concurrently presented in the lower hemifield.

How does the observed early competitive prioritization of affective information occur? It is possible that the subcortical “retino-tectal” pathway, relying on feedforward inputs from retinal sources that project to the pulvinar, amygdala, and V1 (Amaral, Behniea, & Kelly, 2003; LeDoux, 1996), could enhance signal in retinotopically aligned regions containing affective information. This account is supported by fMRI evidence that neural signal in V1 is indeed enhanced in retinotopic regions occupied by emotional content (Anderson et al., 2003; Morris, DeGelder, Weiskrantz, & Dolan, 2001; Morris et al., 1998). The involvement of this pathway would also implicate a reliance on coarse, low spatial frequency information during affective prioritization, which is conducted at a faster rate than cortical pathways that rely on high spatial frequency information (see Pessoa, 2005, and Vuilleumier, 2005, for reviews). Alternatively, given that retinal stimulation of cortical areas in V1 begins as early as 50 msec after stimulus onset, it is also plausible that the observed affective prioritization in V1 relies on cortical thalamic projections to V1, or that fast-conducting long-range cortical white matter fasciculi produce the same amplification of affective information in primary visual cortex (Rudrauf et al., 2008).

It should be noted that the discrimination of emotional items does not likely occur as early as V1 (∼60 msec after onset); rather, our observed effects probably result from the physical salience of the emotional faces themselves. Evidence from Whalen et al. (2004) has shown that the low spatial frequency reliant amygdala is more responsive to the widened eye whites of fear displays than to the smaller eye whites of happy displays. Other work by Morris, deBonis, and Dolan (2002) found similar results suggesting that the amygdala preferentially responds to fearful facial features. In their study, participants were shown either canonical fearful and neutral faces, or fearful eyes combined with a neutral mouth or neutral eyes combined with a fearful mouth. Their results revealed that the presentation of fearful eyes alone drove increased amygdalar activation. Thus, physical information from the eyes and mouth appears to be crucial in the prioritization of emotional expressions. In the case of facial fear, the increased local contrast in the areas of the eyes and mouth is thought to facilitate privileged perceptual processing and may underlie subsequent biases toward these items during frontal categorization processes occurring ∼150 msec after onset (Thorpe et al., 1996) or during discrimination between different emotions (e.g., Tsuchiya, Kawasaki, Oya, Howard, & Adolphs, 2008).

Although our main interest was the direct neural competition between affective and nonaffective stimuli as indexed by the C1 component, we also observed a modulation of the P1 component when facial displays of fear were contrasted with their Fourier scrambled derivatives. The P1 likely originates in extrastriate visual cortex and is sensitive to the direction of spatial attention (e.g., Luck, Woodman, & Vogel, 2000). When fear displays were presented in the upper hemifield, a greater P1 amplitude was observed than when Fourier scrambled stimuli were presented in the upper hemifield (Experiment 1). This was not observed, however, when fear displays were contrasted with their neutral or inverted counterparts (Experiments 2 and 3). One possible explanation is that when a face competes with a nonface object, it garners more attentional resources. Attention has, however, been shown not to affect the amplitude of the C1 component (Luck et al., 2000; Hillyard & Anllo-Vento, 1998); thus, it is likely that the observed prioritization of affective information occurs as a precursor to increased spatial attention in regions occupied by face-related information.

An additional difference was seen in the N170 when fear displays were contrasted with their inverted counterparts in Experiment 3: Greater amplitude was observed when fear displays were presented in the upper hemifield. It has previously been found that inverted faces typically produce a larger N170 (e.g., Rossion et al., 2000), but it is unknown how hemifield presentation affects this process. Because the P1 and N170 do not show the same retinotopic specificity that underlies the generation of the C1 component, our paradigm of concurrently presented stimuli cannot speak directly to these observed electrophysiological differences, but it does appear that differences in attentional and later face processing exist between upper and lower hemifield presentations.

In conclusion, we show evidence that facial displays of fear are prioritized during direct neural competition at the earliest stage of visual processing in V1. This biased competition for representation likely drives the increased attentional and later perceptual processing benefits associated with affective stimuli. Future studies will need to address the fear specificity of the C1 modulation as well as the potential neural origins of fear-biased competition in V1.

Reprint requests should be sent to Greg West, Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON, Canada M5S 3G3, or via e-mail: greg.west@utoronto.ca.

REFERENCES

Amaral, D. G., Behniea, H., & Kelly, J. L. (2003). Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey. Neuroscience, 118, 1099–1120.

Anderson, A. K., Christoff, K., Panitz, D., DeRosa, E., & Gabrieli, J. D. E. (2003). Neural correlates of automatic processing of threat facial signals. Journal of Neuroscience, 23, 5627–5633.

Anderson, A. K., & Phelps, E. A. (2001). Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature, 411, 305–309.

Beck, D. M., & Kastner, S. (2005). Stimulus context modulates competition in human extrastriate cortex. Nature Neuroscience, 8, 1110–1116.

Calvo, M. G., & Esteves, F. (2005). Detection of emotional faces: Low perceptual threshold and wide attentional span. Visual Cognition, 12, 13–27.

Celebrini, S., Thorpe, S., Trotter, Y., & Imbert, M. (1993). Dynamics of orientation coding in area V1 of the awake primate. Visual Neuroscience, 10, 811–825.

Clark, V. P., Fan, S., & Hillyard, S. A. (1995). Identification of early visually evoked potential generators by retinotopic and topographic analysis. Human Brain Mapping, 2, 170–187.

Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193–222.

Di Russo, F., Martinez, A., & Hillyard, S. A. (2003). Source analysis of event-related cortical activity during visuo-spatial attention. Cerebral Cortex, 13, 486–499.

Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.

Folk, C. L., Remington, R. W., & Johnston, J. C. (1992). Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030–1044.

Fox, E., Lester, V., Russo, R., Bowles, R., Pichler, A., & Dutton, K. (2000). Facial expressions of emotion: Are threat faces detected more efficiently? Cognition and Emotion, 14, 61–92.

Fox, E., Russo, R., Bowles, R., & Dutton, K. (2001). Do threatening stimuli draw or hold visual attention in subclinical anxiety? Journal of Experimental Psychology: General, 130, 681–700.

Hillyard, S. A., & Anllo-Vento, L. (1998). Event-related brain potentials in the study of visual selective attention. Proceedings of the National Academy of Sciences, U.S.A., 95, 781–787.

Jeffreys, D. A., & Axford, J. G. (1972). Source locations of pattern-specific components of human visual evoked potentials: I. Component of striate cortical origin. Experimental Brain Research, 16, 1–21.

Kayser, J. (2003). Polygraphic recording data exchange—PolyRex. Department of Biopsychology, New York State Psychiatric Institute. Retrieved June 2008 from http://psychophysiology.cpmc.columbia.edu/PolyRex.htm

LeDoux, J. (1996). The emotional brain: The mysterious underpinnings of emotional life. New York: Simon & Schuster.

LeDoux, J. E. (2002). The synaptic self. New York: Viking.

Luck, S. J., Woodman, G. F., & Vogel, E. K. (2000). Event-related potential studies of attention. Trends in Cognitive Sciences, 4, 432–440.

Mangun, G. R. (1995). Neural mechanisms of visual selective attention. Psychophysiology, 32, 4–18.

Matsumoto, D., & Ekman, P. (1988). Japanese and Caucasian facial expressions of emotion (JACFEE) and neutral faces (JACNeuF) [Slides]. San Francisco, CA: Author.

Maunsell, J. H. R., Ghose, G. M., Assad, J. A., McAdams, C. J., Boudreau, C. E., & Noerager, B. D. (1999). Visual response latencies of magnocellular and parvocellular LGN neurons in macaque monkeys. Visual Neuroscience, 16, 1–14.

Miniussi, C., Girelli, M., & Marzi, C. A. (1998). Neural site of the redundant target effect: Electrophysiological evidence. Journal of Cognitive Neuroscience, 10, 216–230.

Morris, J. S., deBonis, M., & Dolan, R. J. (2002). Human amygdala responses to fearful eyes. Neuroimage, 17, 214–222.

Morris, J. S., DeGelder, B., Weiskrantz, L., & Dolan, R. J. (2001). Differential extrageniculostriate and amygdala responses to presentation of emotional faces in a cortically blind field. Brain, 124, 1241–1252.

Morris, J. S., Friston, K. J., Buchel, C., Frith, C. D., Young, A. W., Calder, A. J., et al. (1998). A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain, 121, 47–57.

Morris, J. S., Öhman, A., & Dolan, R. J. (1999). A subcortical pathway to the right amygdala mediating “unseen” fear. Proceedings of the National Academy of Sciences, U.S.A., 96, 1680–1685.

O'Connor, D. H., Fukui, M. M., Pinsk, M. A., & Kastner, S. (2002). Attention modulates responses in the human lateral geniculate nucleus. Nature Neuroscience, 5, 1203–1209.

Öhman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80, 381–396.

Oram, M. W., & Perrett, D. I. (1992). Time course of neural responses discriminating different views of the face and head. Journal of Neurophysiology, 68, 71–84.

Padmala, S., & Pessoa, L. (2008). Affective learning enhances visual detection and responses in primary visual cortex. Journal of Neuroscience, 28, 6202–6210.

Pessoa, L. (2005). To what extent are emotional visual stimuli processed without attention and awareness? Current Opinion in Neurobiology, 15, 188–196.

Phelps, E. A., Ling, S., & Carrasco, M. (2006). Emotion facilitates perception and potentiates the perceptual benefits of attention. Psychological Science, 17, 292–299.

Pourtois, G., Grandjean, D., Sander, D., & Vuilleumier, P. (2004). Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cerebral Cortex, 14, 619–633.

Rossion, B., Gauthier, I., Tarr, M. J., Despland, P., Bruyer, R., Linotte, S., et al. (2000). The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: An electrophysiological account of face-specific processes in the human brain. NeuroReport, 11, 69–72.

Rudrauf, D., David, O., Lachaux, J.-P., Kovach, C. K., Martinerie, J., Renault, B., et al. (2008). Rapid interactions between the ventral visual stream and emotion-related structures rely on a two-pathway architecture. Journal of Neuroscience, 28, 2793–2803.

Stolarova, M., Keil, A., & Moratti, S. (2006). Modulation of the C1 visual event-related component by conditioned stimuli: Evidence for sensory plasticity in early affective perception. Cerebral Cortex, 16, 876–887.

Talsma, D., & Woldorff, M. G. (2005). Selective attention and multisensory integration: Multiple phases of effects on the evoked brain activity. Journal of Cognitive Neuroscience, 17, 1098–1114.

Thorpe, S., Fize, D., & Marlot, C. (1996). Speed of processing in the human visual system. Nature, 381, 520–522.

Tsuchiya, N., Kawasaki, H., Oya, H., Howard, M. A., & Adolphs, R. (2008). Decoding face information in time, frequency, and space from direct intracranial recordings of the human brain. PLoS ONE, 3, e3892.

Vuilleumier, P. (2005). How brains beware: Neural mechanisms of emotional attention. Trends in Cognitive Sciences, 9, 585–594.

Vuilleumier, P., Armony, J. L., Driver, J., & Dolan, R. J. (2001). Effects of attention and emotion on face processing in the human brain: An event-related fMRI study. Neuron, 30, 829–841.

Vuilleumier, P., Armony, J. L., Driver, J., & Dolan, R. J. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nature Neuroscience, 6, 624–631.

West, G. L., Anderson, A. K., Bedwell, J., & Pratt, J. (2010). Red diffuse light suppresses the perceptual acceleration of fear. Psychological Science, 21, 992–999.

West, G. L., Anderson, A. K., & Pratt, J. (2009). Motivationally significant stimuli show visual prior entry: Evidence for attentional capture. Journal of Experimental Psychology: Human Perception and Performance, 35, 1032–1042.

Whalen, P. J. (1998). Fear, vigilance, and ambiguity: Initial neuroimaging studies of the human amygdala. Current Directions in Psychological Science, 7, 177–188.

Whalen, P. J., Kagan, J., Cook, R. G., Davis, F. C., Kim, H., & Polis, S. (2004). Human amygdala responsivity to masked fearful eye whites. Science, 306, 2061–2065.

Yantis, S., & Jonides, J. (1984). Abrupt visual onsets and selective attention: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance, 10, 601–621.

Yeshurun, Y., & Carrasco, M. (1998). Attention improves or impairs visual performance by enhancing spatial resolution. Nature, 396, 72–75.