It is becoming increasingly established that information from long-term memory can influence early perceptual processing, a finding that is in line with recent theoretical approaches to cognition such as the predictive coding framework. Nevertheless, the impact of semantic knowledge on conscious perception and the temporal dynamics of such an influence remain unclear. To address this question, we presented pictures of novel objects to participants as the second of two targets in an attentional blink paradigm. We found that associating newly acquired semantic knowledge to objects increased overall conscious detection in comparison to objects associated with minimal knowledge while controlling for object familiarity. Additionally, event-related brain potentials revealed a corresponding modulation beginning 100 msec after stimulus presentation in the P1 component. Furthermore, the size of this modulation was correlated with participants' subjective reports of conscious perception. These findings suggest that semantic knowledge can shape the contents of consciousness by affecting early stages of perceptual processing.
Successful conscious detection of stimuli in our environment may vary according to purely sensory properties of the stimulus, such as salience or luminosity, as well as nonsensory aspects arising from the observer's internal states, such as motivations, beliefs, and expectations (Collins & Olson, 2014; Gilbert & Li, 2013). The idea that high-level factors, such as previous experience, emotional content, verbal categories, or semantic information, may play a role in shaping our perceptual experience of the world is supported by various findings. For example, afterimages for objects with intrinsic color are stronger than those for arbitrarily colored objects (Lupyan, 2015), semantic knowledge facilitates the recognition of objects across changes in viewpoint (Collins & Curby, 2013), and associating socially relevant negative information with faces leads participants to perceive and judge the faces and their emotional expressions as more negative (Rabovsky, Stein, & Abdel Rahman, 2016; Suess, Rabovsky, & Abdel Rahman, 2015; Abdel Rahman, 2011). Furthermore, verbal categories have been shown to modulate the detection and discrimination of visual features such as color and shape (Regier & Kay, 2009; Thierry, Athanasopoulos, Wiggett, Dering, & Kuipers, 2009) and may even affect visual consciousness (Maier & Abdel Rahman, 2018). Verbal categories may also affect the detection and discrimination of entire objects (Maier, Glage, Hohlfeld, & Abdel Rahman, 2014), and memory for intrinsic color categories can modulate color experience (Witzel, Valkova, Hansen, & Gegenfurtner, 2011; Mitterer, Horschig, Müsseler, & Majid, 2009; Hansen, Olkkonen, Walter, & Gegenfurtner, 2006). Note that, although many of these studies have used visual paradigms, the broader phenomenon of previous information or familiarity influencing perception is not limited to the visual domain (e.g., in the auditory domain: the cocktail party effect; Cherry, 1953).
In this study, we ask to what extent semantic knowledge may influence our ability to consciously detect an object, independent of object features and familiarity.
The possibility that semantic information may be involved in shaping the contents of conscious perception coheres with predictive coding theories of visual perception1 (Clark, 2013; Friston, 2005; Rao & Ballard, 1999). From this perspective, neural systems are able to take advantage of statistical regularities in the visual environment by incorporating these consistencies into the perceptual process. Such information may come from many sources, including life-long experiences, implicit memory, or conceptual knowledge. Predictive coding theories propose that this information is used by the brain to continuously generate internal predictions about future events, allowing for more efficient stimulus processing (De-Wit, Machilsen, & Putzeys, 2010).
Evidence to suggest that semantic information may play a role in conscious detection comes from a number of recent studies. Two priming studies employed the continuous flash suppression (CFS) paradigm as a variant of binocular rivalry (Sterzer, Stein, Ludwig, Rothkirch, & Hesselmann, 2014; Tsuchiya & Koch, 2005). In this paradigm, stimuli presented to one eye undergo suppression due to the simultaneous presentation of a pattern mask of high contrast to the other eye. Costello, Jiang, Baartman, McGlennen, and He (2009) demonstrated that target words that were undergoing suppression and thus could not be consciously detected could break free from suppression earlier when they were preceded by the presentation of a semantically related prime word. Similarly, Lupyan and Ward (2013) showed that hearing an object's name improved the subsequent detection of objects during CFS. Emotional valence has also been shown to affect suppression times, with emotionally negative utterances showing shorter suppression times than neutral utterances (Sklar et al., 2012; see also Almeida, Pajtas, Mahon, Nakayama, & Caramazza, 2013), and faces associated with negative social information dominate for longer during binocular rivalry than neutral faces (Anderson, Siegel, & Barrett, 2011; but see Rabovsky et al., 2016, for null effects during CFS). These studies suggest that conceptual information can alter the time course and extent to which associated stimuli enter into conscious awareness. In the current study, we recorded the EEG to elucidate the time course of semantic influences over conscious perception and to relate behavioral measures of stimulus detection to preceding modulations of perceptual stages, indexed by early EEG effects.
ERP recordings provide an ideal tool to elucidate the time course of cognitive processes. A number of ERP studies have demonstrated that high-level information can influence the earliest stages of visual processing. Categorical perception effects have been shown to reliably modulate the MMN, an ERP marker taken to be an index of early attentional processing, occurring between 150 and 250 msec (Boutonnet, Dering, Viñas-Guasch, & Thierry, 2012; Mo, Xu, Kay, & Tan, 2011; Thierry et al., 2009). Thierry et al. (2009) further demonstrated influences of verbal categories on the P1, an early ERP component peaking between 100 and 150 msec post-stimulus onset that is thought to reflect basic visual perception, for example, of individual stimulus features, in the extrastriate cortex (Di Russo, Martínez, Sereno, Pitzalis, & Hillyard, 2002), indicating that high-level information can affect a very early stage of stimulus processing. Similarly, using a priming procedure, Boutonnet and Lupyan (2015) showed that the P1 elicited by objects is modulated when they are preceded by the auditory presentation of the object's name. Abdel Rahman and Sommer (2008), using a learning paradigm, selectively manipulated the amount of semantic information associated with initially unfamiliar objects, which were later presented for naming, recognition evaluation, and semantic classification tasks. They found that, in addition to a later modulation of the N400 ERP component, which is related to semantic processing (Kutas & Federmeier, 2011), objects that were associated with semantic information elicited P1 components of reduced amplitude. Rabovsky, Sommer, and Abdel Rahman (2012) found the same P1 effect when performing the same task on object names. These findings suggest that semantic information can modulate early perception. 
Semantic information may induce such modulation by modifying the perceptual representations of the novel objects formed during training, resulting in more efficient bottom–up propagation of information. Alternatively, given that studies have shown that feedback from frontal cortices to early visual modules can occur after stimulus onset (Bar et al., 2006; Lamme & Roelfsema, 2000), semantic information may exert an influence over extrastriate areas in the form of feedback, resulting in a modulation of the P1 component. The P1 modulation in the studies by Rabovsky et al. (2012) and Abdel Rahman and Sommer (2008) was, however, not accompanied by a corresponding change in RTs. It is possible that the P1 effects in these studies had a functional role in perception, which did not come to bear on behavior because the tasks were very easy at a perceptual level. Thus, behavioral manifestations of the influence of semantic information over conscious perception may only occur in situations of increased perceptual difficulty.
To directly address the role of semantic information in conscious detection, we utilized a similar learning procedure to that used by Abdel Rahman and Sommer (2008), where participants are familiarized with a series of initially unfamiliar object pictures and the amount of semantic information provided for each object was manipulated. Subsequent to this learning procedure, object pictures with in-depth functional versus minimal associated semantic information were presented under conditions of difficult conscious detection in the attentional blink paradigm where, during rapid serial visual presentation, participants are unable to detect the presence of a second target stimulus (T2) when its presentation falls within a certain time window following the processing of a first to-be-reported target (T1; Raymond, Shapiro, & Arnell, 1992). Here, object pictures were briefly presented as the second of two targets, and participants were required to detect the presence of an object within the presentation stream, a task that does not require semantic analysis of the stimulus.
EEG studies of the attentional blink show that, in addition to a preserved early P1/N1 complex, blinked trials (i.e., trials where T2 goes undetected) show a preserved N400 (Rolke, Heil, Streb, & Henninghausen, 2001; Vogel, Luck, & Shapiro, 1998), an ERP component associated with semantic processing (Kutas & Federmeier, 2011). This finding indicates that unreported targets undergo extensive processing, at least to the level of semantic analysis, without participants' awareness. We reasoned that, because unreported targets undergo a high level of stimulus processing, the attentional blink paradigm provides an ideal context to test our hypothesis regarding the influence of semantic information on conscious detection. In contrast to the N400, the P300, a component that has been suggested to reflect the consolidation of information into working memory, is reduced for unreported T2 trials (Sergent, Baillet, & Dehaene, 2005), suggesting that the attentional demands of T1 processing in working memory encoding, episodic processing, and response selection reduce the amount of high-level resources that are available for the processing of T2, resulting in a reduction in conscious detection of T2.
We predicted that if semantic information plays a role in the conscious detection of visual stimuli under difficult conditions in the attentional blink task, then objects associated with more functional–semantic information should be detected to an overall greater extent than objects associated with minimal information. Additionally, in line with previous studies using similar materials and learning procedures (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008), we expected semantic information to induce modulations of the P1 and N400 ERP components.
A sample of 32 right-handed participants (17 women) took part in the experiment in return for a monetary compensation or course credit. All participants were native speakers of German with normal or corrected-to-normal visual acuity. The sample had a mean age of 27 years (range = 20–34 years). This research was approved by the ethics committee at the Department of Psychology, Humboldt-Universität zu Berlin. Participants provided written informed consent before participation.
Stimuli for T2 targets consisted of grayscale BMP pictures (207 × 207 pixels) of 25 well-known and 40 rare objects, previously used by Abdel Rahman and Sommer (2008) and Rabovsky et al. (2012). Of the 40 rare objects, half were real objects, and half were fictitious. Five of the well-known objects were used for practice trials. For all objects, a sentence was recorded, stating the object's name (e.g., “This is a sofa.”). For the rare objects, pseudowords were used as names that did not reveal any meaningful information regarding functional properties (e.g., “This is a squonker.”). Additional sentences were recorded for the rare objects, which explained their functional use (e.g., “This is a machine for breeding chicken eggs.”). Rare objects were randomly divided into two subsets to be allocated to the two semantic conditions. The assignment of objects to semantic conditions was counterbalanced across participants so that bottom–up influences of differences in contrast, luminance, complexity, and so forth could be excluded. For T1 targets, 164 grayscale photographs of neutral faces were selected, half of which were female, and all were edited for homogeneity of features outside the face. Masking stimuli consisted of 127 grayscale animal pictures. All stimuli were presented in the center of the screen on a light blue background (red, green, blue value: 169, 217, 255).
In the learning phase, participants were familiarized with the objects along with the information associated with each condition. Well-known and rare objects were presented in random order in the center of the monitor screen (DELL 1908FPb, 19 in., 1280 × 1024, 75 Hz). Simultaneously, an auditory sentence was presented. For half of the rare objects, the sentence contained information about the name of the object (minimal knowledge condition), and for the remaining half, the sentence consisted of functional information regarding the use of the object (functional knowledge condition). Well-known objects were presented along with their name for all participants. All objects were presented for 4000 msec with a 500-msec interstimulus interval, during which a fixation cross appeared on the screen; a briefly presented blank screen separated sequential stimuli. Each object was presented three times.
Each trial consisted of the following series of events: A central fixation cross was presented for 500 msec, followed by a series of one to seven masks with the number varying randomly on each trial. T1 was then presented, followed by a single mask. During the SOA between T1 and T2, a fixation cross remained on the screen. T2 was then presented and was followed by two masks. Masks, T1, and T2 were each displayed for 27 msec and were separated by blank screens lasting 41 msec. The SOA between T1 and T2 was either short (258 msec) or long (688 msec). T1 consisted of a male or female face, and masking stimuli were created from a selection of animal pictures. On T2 present trials, rare and well-known objects were presented, and on T2 absent trials, participants were presented with a blank screen. Following the presentation of the stimuli, participants were required to answer a series of questions regarding their experiences for T1 and T2. Participants were first asked whether they had seen an object using a perceptual awareness scale (Sandberg & Overgaard, 2015). They pressed one of four keys: (a) if they did not see any object, (b) if they had the impression of there being an object present, (c) if they perceived parts of the object but not the full object, and (d) if they perceived a full object. Participants were encouraged to use the full range of response options throughout the experiment. On trials where participants indicated some level of object awareness (Responses b, c, or d), a second question was presented on the screen asking, “Was the object an everyday object?” to which participants gave a binary manual yes or no response. Finally, participants were asked to classify the face presented as the T1 stimulus as male or female with a manual response. For all questions, a schematic representation of the different response options was presented on the screen below the question. Figure 1 provides a visual illustration of the trial scheme.
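The trial timeline above can be sketched in code. This is a minimal reconstruction using only the durations reported in the Methods; the function and event names are ours, not the authors' implementation, and the duration of the fixation gap before T2 is derived by subtraction from the reported SOAs.

```python
# Reconstruction of the trial timeline (durations from the Methods;
# helper and event names are hypothetical).
STIM_MS = 27    # masks, T1, and T2 each shown for 27 msec
BLANK_MS = 41   # blank screen between successive stimuli

def build_trial(n_leading_masks, soa_ms):
    """Return a list of (event, onset_ms) pairs for one trial."""
    events, t = [], 0
    events.append(("fixation", t)); t += 500
    for i in range(n_leading_masks):          # 1-7 masks, random per trial
        events.append((f"mask{i + 1}", t)); t += STIM_MS + BLANK_MS
    t1_onset = t
    events.append(("T1", t)); t += STIM_MS + BLANK_MS
    events.append(("T1_mask", t)); t += STIM_MS + BLANK_MS
    # A fixation cross fills the remainder of the T1-T2 SOA.
    gap = soa_ms - (t - t1_onset)
    events.append(("fixation_gap", t)); t += gap
    events.append(("T2", t)); t += STIM_MS + BLANK_MS
    events.append(("T2_mask1", t)); t += STIM_MS + BLANK_MS
    events.append(("T2_mask2", t))
    return events

onsets = dict(build_trial(n_leading_masks=3, soa_ms=258))
assert onsets["T2"] - onsets["T1"] == 258   # short-SOA condition
```

Under this reconstruction, the fixation gap works out to 122 msec at the short SOA and 552 msec at the long SOA.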
All questions were presented on the screen until a response was given. On two thirds of the trials, T2 was present, and the remaining third were composed of T2 absent trials. Seventy-five percent of all trials, that is, both T2 present and absent trials, were presented in the critical blink condition at the short SOA. Within a single block, all rare and well-known objects were presented once, requiring participants to complete four blocks to rotate all items across the two SOA conditions (due to the 75:25 ratio of short/long SOA). Participants carried out eight blocks with a short break after each block. Before beginning the experiment, there were 20 practice trials, and in the experiment proper, participants completed 720 trials, 240 of which were T2 absent trials and 480 were T2 present trials, 360 with short and 120 with long SOA. For each of the two rare object conditions and for the well-known condition, 120 and 40 trials were presented at the short and long SOAs, respectively. Thus, each object was presented twice at the long SOA and six times at the short SOA.
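The reported trial counts can be cross-checked from the stated proportions. The short sketch below verifies that the 2:1 present/absent split and the 75:25 short/long SOA ratio reproduce the numbers given above; the variable names are ours.

```python
# Cross-check of the design counts reported in the Methods.
total = 720
t2_present = total * 2 // 3            # two thirds of trials contain T2
t2_absent = total - t2_present
present_short = int(t2_present * 0.75)  # 75% at the short SOA
present_long = t2_present - present_short
assert (t2_present, t2_absent) == (480, 240)
assert (present_short, present_long) == (360, 120)

# Three T2 conditions (functional, minimal, well-known) split evenly:
per_cond_short = present_short // 3     # 120 trials per condition
per_cond_long = present_long // 3       # 40 trials per condition
# With 20 rare objects per condition, presentations per object:
assert per_cond_short // 20 == 6 and per_cond_long // 20 == 2
```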
Participants were subsequently presented with a list of pictures of all rare objects and were asked to write down any information that they could remember from the learning phase for each object.
Continuous EEG was recorded throughout the experimental session with Ag/AgCl electrodes at 64 sites positioned according to the extended 10–20 system (Pivik et al., 1993) at a 500-Hz sampling rate, using a bandpass (0.032–70 Hz) filter. During recording, all electrodes were referenced to the left mastoid, and electrode impedance levels were kept below 5 kΩ. Horizontal and vertical EOGs were recorded from the external canthi and from above and below the midpoint of the right eye. Offline, the EEG was re-referenced to the average voltage of all electrodes, and a low-pass filter of 30 Hz was applied. Eye blink and horizontal and vertical EOG activity were removed using a Gratton and Coles correction (Gratton, Coles, & Donchin, 1983). Remaining artifacts were eliminated using an automatic rejection procedure where amplitudes exceeding ±100 μV or changing by more than 75 μV between successive samples were eliminated. Baseline activity was corrected to a 100-msec time period before the onset of T2, and trials were segmented into time windows 200 msec before and 800 msec after T2 onset. Trials on which participants responded incorrectly to T1 were excluded from the analysis. Time windows for the P1 (100–150 msec) and N400 components (300–500 msec) were based on previous studies demonstrating semantic effects on low-level visual perception and later stages of meaning access (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008). ROIs were selected based on those clusters of electrodes where effects were maximal (see below).
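The automatic rejection criterion described above can be expressed compactly. This is a hedged sketch of the stated thresholds (±100 μV absolute amplitude, 75 μV between successive samples), not the authors' pipeline; the function name and array layout are assumptions.

```python
import numpy as np

def reject_epoch(epoch, amp_limit=100.0, step_limit=75.0):
    """Return True if an epoch violates either rejection criterion.

    epoch: array of shape (n_channels, n_samples), in microvolts.
    Criteria as reported: absolute amplitude beyond +/-100 uV, or a
    change of more than 75 uV between successive samples.
    """
    too_large = np.abs(epoch) > amp_limit
    too_steep = np.abs(np.diff(epoch, axis=1)) > step_limit
    return bool(too_large.any() or too_steep.any())

clean = np.zeros((64, 500))                   # flat epoch: kept
spiky = clean.copy(); spiky[0, 250] = 120.0   # 120 uV spike: rejected
assert not reject_epoch(clean) and reject_epoch(spiky)
```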
In analyzing participants' ability to perceive T2 across conditions, we first excluded T1 error trials and computed an index of overall object detection by calculating mean responses within each condition. This detection score ranges from 1 to 4, with 1 indicating “no object perception” and 4 indicating “complete object perception.” To assess the influence of semantic information and SOA on conscious detection, we constructed linear mixed models using the lmer function from the lme4 R package, Version 1.1-12 (Bates et al., 2016). For performance and ERPs, we included semantic condition (minimal vs. functional knowledge) and SOA (short, long) as fixed factors and participant as a random factor. Well-known objects were not included in this analysis because they cannot be assigned to different conditions, and therefore, visual differences cannot be excluded. For ERP effects, we additionally included electrode as a random factor, nested within participants (Aarts, Verhage, Veenvliet, Dolan, & van der Sluis, 2014). Because of convergence problems for the maximal random effects structures, we used a data-driven model comparison approach based on restricted maximum likelihood estimation (Matuschek, Kliegl, Vasishth, Baayen, & Bates, 2017; Zuur, Ieno, Walker, Saveliev, & Smith, 2009) to identify the optimal random structure that accounted for the most variance, as indicated by Akaike information criterion and log-likelihood values. The final model included a random slope for SOA by subject and by electrode nested within subject, respectively. Subsequently, fixed effects were tested using maximum likelihood estimation. Models were tested using the ANOVA function from the lmerTest R package, Version 2.0-30 (Kuznetsova, Brockhoff, & Christensen, 2014).
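The detection index can be sketched as follows. The ratings below are invented for illustration, and the commented model formula is our reconstruction of the reported lme4 specification (fixed effects of condition and SOA, random slope for SOA by subject), not the authors' code.

```python
import numpy as np

# Hypothetical PAS ratings (1-4), condition labels, and T1 accuracy.
pas        = np.array([4, 3, 1, 4, 2, 3, 4, 1])
condition  = np.array(["func", "func", "min", "func",
                       "min", "min", "func", "min"])
t1_correct = np.array([True, True, True, False,
                       True, True, True, True])

# Detection index: mean PAS rating per condition, T1-error trials excluded.
scores = {c: pas[t1_correct & (condition == c)].mean()
          for c in ("func", "min")}

# In R, the reported fixed effects would then be tested with something like
# (our reconstruction): lmer(detection ~ condition * soa + (soa | subject))
```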
Explained deviance (dv) was calculated using the pamer.fnc function from the LMERConvenienceFunctions R package (Tremblay & Ransijn, 2015). Post hoc comparisons were Bonferroni-corrected and calculated using the testInteractions function from the R package phia, Version 0.2-1 (De Rosario-Martínez, 2015). Postexperimental questionnaires were scored by two independent evaluators. For the minimal knowledge condition, participant recall for an object was scored as correct if it matched the name that was previously learned. For objects from the functional knowledge condition, participant recall was scored as correct if the gist of the description matched the description that was previously learned for that object. Recall was scored as correct or incorrect.
Participants varied widely in their ability to recall the functional information associated with objects at the beginning of the experiment (mean = 74%, SD = 20%, range = 20–100%). Recall for object labels was generally lower than that for functional information (mean = 17%, SD = 16%, range = 0–70%). To ensure that our manipulation was effective, we removed the two participants whose recall of functional information fell more than two standard deviations below the mean.
Semantic Knowledge Effects
On 12% of trials, participants incorrectly classified T1. These trials were removed from any further analysis. Participants were able to detect objects to a greater extent at the long SOA (m = 3.21) than at the short SOA (m = 2.84), F(1, 29) = 151.03, p < .001, dv = .79. More importantly, participants also slightly differed in their overall detection of T2 objects across semantic conditions (functional knowledge, m = 2.64; minimal knowledge, m = 2.61), reflected in a significant main effect of Semantic condition, F(1, 58) = 4.15, p < .05, dv = .27 (Figure 2A). This difference in mean detection between semantic conditions, though statistically significant, is small, and we encourage readers to interpret it accordingly. The interaction between the factors Semantic condition and SOA was nonsignificant, F(1, 58) = 0.11, p = .75, dv = .01.
Of primary interest regarding the effects of semantic information on early stages of visual processing, analysis of the P1 time window (ROI: O1, Oz, O2, PO3, POz, PO4) revealed no significant main effects of Semantic condition, F(1, 508) = 0.004, p = .96, dv < .01, or SOA, F(1, 29) = 2.34, p = .14, dv = .02. The effect of semantic condition at the different SOAs showed opposing patterns, with a positive modulation (i.e., larger amplitudes for the functional knowledge as compared with the minimal knowledge condition) at the short SOA and a negative-going modulation (i.e., smaller amplitudes for the functional knowledge condition) at the long SOA (Figures 2B and 3); this pattern was confirmed by a significant interaction between Semantic condition and SOA, F(1, 508) = 13.81, p < .001, dv = .09. Follow-up comparisons showed that these effects were significant at both the short, χ2 = 6.68, p < .02, and the long, χ2 = 7.14, p < .02, SOA.2
For the N400 component (ROI: O1, Oz, O2, PO3, POz, PO4), there was a significant main effect of SOA, F(1, 29) = 13.22, p = .001, dv = .21, whereas the main effect of Semantic condition was nonsignificant, F(1, 29) = 1.51, p = .23, dv = .02. SOA interacted with Semantic condition, F(1, 479) = 20.71, p < .001, dv = .32. Follow-up contrasts revealed a significant effect of Semantic condition at the long SOA with smaller amplitudes for the functional knowledge condition, χ2 = 7.23, p < .02, whereas at the short SOA, the effect of Semantic condition was nonsignificant, χ2 = 0.14, p = 1.
Correlation Analysis between ERP Amplitude Differences and T2 Detection
To assess the relation between the P1 modulation and behavioral measures of conscious access, we correlated the mean amplitude difference of the P1 component with the difference in detection between knowledge conditions (functional minus minimal knowledge) separately for each SOA. This analysis resulted in a nonsignificant correlation at the short SOA (r = −.05, p = .79) and a marginally significant correlation at the long SOA (r = −.31, p = .095). To further explore the relationship between the semantic modulation of the P1 component and the behavioral measure of conscious detection, we collapsed across both SOAs to calculate the overall effect of semantic information on both the P1 and behavior, revealing a significant negative correlation (r = −.46, p = .01; Figure 3). Thus, stronger knowledge effects on P1 amplitudes, as reported in previous studies (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008), were associated with stronger knowledge-induced facilitation of detection performance. To further explore this relation and its time course, we calculated point-by-point correlations between amplitude differences and the behavioral effect size over consecutive sampling points, every 2 msec from 0 to 200 msec.3 A series of sampling points was considered significant when a minimum of 15 significant, uninterrupted correlations appeared consecutively (Guthrie & Buchwald, 1991). The P1 modulation began to correlate significantly with behavior at 106 msec, continuing uninterrupted until 144 msec, a time window encompassing 20 consecutive sampling points, corresponding to 38 msec (Figures 4, 5, and 6).
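The consecutive-significance criterion used above (Guthrie & Buchwald, 1991) amounts to scanning a boolean time series of per-sample significance for runs of at least 15 points. The sketch below is our illustration, not the authors' code; it assumes the correlation p-values have already been computed at each 2-msec sample.

```python
import numpy as np

def significant_runs(sig, min_run=15):
    """Return (start, end) index pairs of runs of >= min_run True values.

    sig: boolean array with one entry per 2-msec sampling point,
    True where the point-by-point correlation is significant.
    """
    runs, start = [], None
    for t, s in enumerate(np.append(sig, False)):  # sentinel closes final run
        if s and start is None:
            start = t
        elif not s and start is not None:
            if t - start >= min_run:
                runs.append((start, t - 1))
            start = None
    return runs

# The reported window: sampling every 2 msec from 0 to 200 msec, with a
# run from 106 to 144 msec, i.e., indices 53..72 (20 consecutive points).
sig = np.zeros(101, dtype=bool)
sig[53:73] = True
assert significant_runs(sig) == [(53, 72)]
```

A run shorter than 15 points (e.g., 14 consecutive significant samples) would be discarded by this criterion.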
Our results demonstrate that semantic information increases the extent to which objects are consciously detected. Both at the short SOA, when attention is otherwise occupied, and under the less severe perceptual difficulty of the long SOA, semantic information increased the likelihood of visual objects reaching awareness, leading observers to consciously experience a more complete object percept. To our knowledge, this study is the first to demonstrate that semantic modulations of the P1 component can influence the contents of visual awareness. This behavioral effect, observed across SOAs, was accompanied by a change in the EEG signal occurring as early as 100 msec after object presentation, in the form of a modulation of the P1 component. Previous studies have shown semantically induced P1 modulations for clearly visible objects, but without significant behavioral differences between conditions. Here, we show a similar P1 modulation, namely an amplitude reduction associated with semantic information, limited to the long SOA and accompanied by behavioral differences in conscious perception. Crucially, the P1 modulation was correlated with subjective reports of conscious perception, indicating that semantic information, through a modulation of the brain activation generating the P1, has a functional role in shaping the contents of conscious perception.
An unexpected finding was the interaction between semantic information and SOA on the P1 component, with a reduction at the long SOA, as reported in previous studies (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008), and an increase at the short SOA. Given that the correlation between conscious detection and the P1 effect was negative, our results suggest that it is the reduction in the P1 amplitude that is associated with facilitation of visual analysis and increases in conscious perception. Additionally, objects associated with functional–semantic information elicited an N400 effect at the long SOA only, indicating that it was primarily in this condition that objects were processed at a semantic level. This is interesting in light of prior studies demonstrating preserved N400 amplitudes during the attentional blink (Rolke et al., 2001; Vogel et al., 1998). The short SOA represents a condition of increased difficulty as attention is unavailable here, suggesting that semantic information may operate in distinct ways when task difficulty is increased. It seems possible that the accessibility of semantic information under such conditions may depend on the novelty of this information in the sense that semantic knowledge, which is well established in long-term memory, may be unconsciously activated while the activation of newly acquired knowledge may more strongly depend on attention and consciousness. This is an interesting question for future research and could be tested by contrasting participants' overall detection performance on a set of well-known items that are better controlled in terms of perceptual familiarity, with their detection performance for stimuli that have been recently associated with functional knowledge. In addition, we would like to again highlight the relatively small effect sizes for participant differences in T2 detection for well-known and minimal knowledge objects. 
One possible attenuating factor responsible for this is our choice of T1 stimuli: Faces are highly effective in attracting attention and have been shown to augment the attentional blink to a larger extent than other object categories (Landau & Bentin, 2008). It would be interesting in future research to test whether the magnitude of differences in performance would change according to the type of T1 stimulus used.
The influence of semantic information on conscious object detection observed here could reflect a number of different underlying mechanisms. One possibility is that the effect is due to a direct modulation of visual processing at a preattentive stage. Semantic information may alter the visual processes that generate the P1, allowing for more efficient perceptual processing of target objects. This kind of facilitation in visual processing may operate via the recruitment of feedback from higher cortical areas involved in the generation of hypotheses regarding the nature of incoming sensory signals (e.g., Bar et al., 2006). Such a mechanism may recruit functional semantic information associated with objects to generate more efficient predictions regarding the nature of the target, thus reducing early perceptual processing demands. This explanation is in line with the recent surge of studies claiming that higher level cognitive factors can influence early perception (e.g., Lupyan, 2015) and with research showing that information may propagate from visual cortices to frontal and parietal areas within 30 msec and frontal areas may become active within 80 msec after stimulus presentation, allowing sufficient time for feedback to extrastriate areas within the P1 time range (Foxe & Simpson, 2002). Modulations in the P1 time range have also been discussed as potential correlates of visual awareness. The ERPs most often reported in relation to conscious perception are an early posterior negativity at around 200 msec (visual awareness negativity) and a later positivity in the P3 time window (late positivity), both of which are enhanced for reported stimuli (Koivisto & Revonsuo, 2010). 
A large body of studies that contrast reported with unreported conditions also show modulations in the P1 time range, for example, in visual masking (Del Cul, Baillet, & Dehaene, 2007), change blindness (Pourtois, De Pretto, Hauert, & Vuilleumier, 2006), and during bistable perception (Britz, Landis, & Michel, 2009; Kornmeier & Bach, 2006). Koivisto and Revonsuo (2010) interpret these P1 modulations related to seen stimuli as reflecting the preconscious allocation of attention to perception (cf. discussion below). However, a recent study (Davoodi, Moradi, & Yoonessi, 2015) where attention and consciousness were orthogonally manipulated revealed a similar change in the P1 amplitude for seen trials under inattentive conditions, whereas modulations in the P3 range occurred for seen trials in the attentive condition only. Wyart, Dehaene, and Tallon-Baudry (2012) found a similar early component reflecting consciousness under conditions where attention was otherwise engaged, reporting a correlate of conscious detection 120 msec after stimulus onset.
The semantic effects in this study may also be mediated by attention. Attention driven by functional object-related semantic information may lead to a selective enhancement of relevant visual features associated with object functions. Given that attention has been shown to operate at short latencies, modulating the P1 component, the time course of the semantic modulations reported here is also in line with such an “attention to perception” explanation. Indeed, it is well established that P1 modulations may reflect the attentional tuning of early visual processing, with stimuli presented at attended locations being associated with larger amplitudes (Di Russo, Martínez, & Hillyard, 2003; Mangun, 1995). This has been interpreted as reflecting an attentional “gain control” or amplification of incoming sensory signals (Hillyard, Vogel, & Luck, 1998). Similar P1 modulations can also be observed when attention is directed to nonspatial features, such as color (Zhang & Luck, 2009). The P1 may therefore reflect greater engagement of attentional resources at an early stage of visual processing, preceding the entry of the object into conscious awareness but providing sufficient amplification of the signal to cross the threshold into consciousness.
As discussed above, the relation between consciousness and attention is complex, with various studies demonstrating that they may be two distinct processes and that attention is therefore not a prerequisite for conscious experience (e.g., Kentridge, Nijboer, & Heywood, 2008; Wyart & Tallon-Baudry, 2008; Koch & Tsuchiya, 2007). These studies, in combination with the significant correlation between the size of the P1 effect and subjective measures of conscious detection, suggest that, rather than exclusively reflecting preconscious processes, the semantically induced P1 modulation in the current study is functionally linked to conscious perception. Whether conscious detection is mediated via preattentive or early attentive processing, the present findings demonstrate that semantic information influences the emergence of a conscious visual percept and is thus a determining factor for conscious perception.
The present research is the first to demonstrate that semantic knowledge may shape the contents of conscious awareness by penetrating early stages of visual perception, as reflected in a modulation of the P1 component as early as 100 msec after stimulus presentation. Crucially, the behavioral and electrophysiological effects are correlated. Our results provide support for the hypothesis that visual perception is influenced by higher level cognitive processes. We suggest that the power of semantic information to shape the contents of conscious awareness reflects an adaptive mechanism in the sense that semantic meaning—in a similar way to prior experience—can serve as a basis for generating predictions regarding the nature of incoming sensory signals, thus affording a processing advantage for stimuli that match the contents of these predictions.
We would like to thank Guido Kiecker for help in programming this experiment; Luisa Balzus for contributions to stimulus preparation; and Julia Baum, Anna Eiserbeck, and Marie Borowikow for assistance with data acquisition. This work was funded by German Research Foundation grant AB 277-6 to R. A. R.
Reprint requests should be sent to Peter D. Weller, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 408761, Rudower Chaussee 18, 12489 Berlin Adlershof, Germany, or via e-mail: email@example.com.
Theoretically, an encapsulated module of perception may be part of a system that includes predictive coding in other components. Modular theories take perception to be encapsulated from higher level cognitive factors (Firestone & Scholl, 2015; Pylyshyn, 1999), while allowing that other processing components may be affected by higher level cognition or predictive coding, emphasizing the brain's ability to use past experiences to predict the type and nature of incoming signals. In contrast, predictive coding theories of perception assume that perception itself is directly affected by predictive coding.
An alternative way of looking at these findings is that it is not the presence of functional knowledge that modulates the P1 response, but rather the lack of a name for those objects. To rule out this possibility, we analyzed P1 amplitudes within the minimal knowledge and functional knowledge conditions according to participants’ subsequent memory for the information that was associated with each object. To this end, we conducted two separate one-way ANOVAs with participant recall (yes/no) as the independent variable and P1 amplitude as the dependent variable. If it is the association with semantic knowledge that drives the P1 effect, then significant differences in P1 magnitude between remembered and forgotten descriptions should be present within the functional knowledge condition only, whereas no differences in P1 amplitude should be observed according to participants’ memory for object labels. In line with our theoretical framework, we found a modulatory effect of memory for functional knowledge, F(1, 26) = 4.26, p = .049, but no significant modulatory effect of memory for object labels, F(1, 27) = 0.19, p = .23.
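The logic of this subsequent-memory control analysis can be sketched as follows (an illustrative reconstruction only, not the authors' analysis code; the variable names and simulated amplitude values are hypothetical, and the per-participant P1 measures would in practice come from the averaged ERP data):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical mean P1 amplitudes (microvolts) within the functional
# knowledge condition, split by whether each participant later recalled
# the functional description associated with the object.
p1_recalled = rng.normal(loc=2.5, scale=1.0, size=14)
p1_forgotten = rng.normal(loc=1.7, scale=1.0, size=14)

# One-way ANOVA with recall (yes/no) as the between-groups factor and
# P1 amplitude as the dependent variable; with two groups this is
# equivalent to an independent-samples t test (F = t^2).
f_stat, p_value = f_oneway(p1_recalled, p1_forgotten)
df_within = len(p1_recalled) + len(p1_forgotten) - 2
print(f"F(1, {df_within}) = {f_stat:.2f}, p = {p_value:.3f}")
```

The same test would then be repeated within the minimal knowledge condition, using recall of the object label as the grouping factor.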
Please note that the EEG data are temporally smoothed due to the low-pass filter and that we did not correct for multiple comparisons across time segments because we had very clear hypotheses concerning the P1 segment based on our previous studies (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008).