Abstract

It is becoming increasingly established that information from long-term memory can influence early perceptual processing, a finding that is in line with recent theoretical approaches to cognition such as the predictive coding framework. Nevertheless, the impact of semantic knowledge on conscious perception and the temporal dynamics of such an influence remain unclear. To address this question, we presented pictures of novel objects to participants as the second of two targets in an attentional blink paradigm. We found that associating newly acquired semantic knowledge with objects increased overall conscious detection in comparison to objects associated with minimal knowledge, while controlling for object familiarity. Additionally, event-related brain potentials revealed a corresponding modulation of the P1 component beginning 100 msec after stimulus presentation. Furthermore, the size of this modulation was correlated with participants' subjective reports of conscious perception. These findings suggest that semantic knowledge can shape the contents of consciousness by affecting early stages of perceptual processing.

INTRODUCTION

Successful conscious detection of stimuli in our environment may vary according to purely sensory properties of the stimulus, such as salience or luminosity, as well as nonsensory aspects arising from the observer's internal states, such as motivations, beliefs, and expectations (Collins & Olson, 2014; Gilbert & Li, 2013). The idea that high-level factors, such as previous experience, emotional content, verbal categories, or semantic information, may play a role in shaping our perceptual experience of the world is supported by various findings. For example, afterimages for objects with intrinsic color are stronger than those for arbitrarily colored objects (Lupyan, 2015), semantic knowledge facilitates the recognition of objects across changes in viewpoint (Collins & Curby, 2013), and associating socially relevant negative information with faces leads participants to perceive and judge the faces and their emotional expressions as more negative (Rabovsky, Stein, & Abdel Rahman, 2016; Suess, Rabovsky, & Abdel Rahman, 2015; Abdel Rahman, 2011). Furthermore, verbal categories have been shown to modulate the detection and discrimination of visual features such as color and shape (Regier & Kay, 2009; Thierry, Athanasopoulos, Wiggett, Dering, & Kuipers, 2009) and may even affect visual consciousness (Maier & Abdel Rahman, 2018). Verbal categories may also affect the detection and discrimination of entire objects (Maier, Glage, Hohlfeld, & Abdel Rahman, 2014), and memory for intrinsic color categories can modulate color experience (Witzel, Valkova, Hansen, & Gegenfurtner, 2011; Mitterer, Horschig, Müsseler, & Majid, 2009; Hansen, Olkkonen, Walter, & Gegenfurtner, 2006). Note that, although many of these studies have used visual paradigms, the broader phenomenon of previous information or familiarity influencing perception is not limited to the visual domain (e.g., in the auditory domain: the cocktail party effect; Cherry, 1953). In this study, we ask to what extent semantic knowledge may influence our ability to consciously detect an object, independent of object features and familiarity.

The possibility that semantic information may be involved in shaping the contents of conscious perception coheres with predictive coding theories of visual perception1 (Clark, 2013; Friston, 2005; Rao & Ballard, 1999). From this perspective, neural systems are able to take advantage of statistical regularities in the visual environment by incorporating these consistencies into the perceptual process. Such information may come from many sources, including life-long experiences, implicit memory, or conceptual knowledge. Predictive coding theories propose that this information is used by the brain to continuously generate internal predictions about future events, allowing for more efficient stimulus processing (De-Wit, Machilsen, & Putzeys, 2010).

Evidence to suggest that semantic information may play a role in conscious detection comes from a number of recent studies. Two priming studies employed the continuous flash suppression (CFS) paradigm as a variant of binocular rivalry (Sterzer, Stein, Ludwig, Rothkirch, & Hesselmann, 2014; Tsuchiya & Koch, 2005). In this paradigm, stimuli presented to one eye undergo suppression due to the simultaneous presentation of a pattern mask of high contrast to the other eye. Costello, Jiang, Baartman, McGlennen, and He (2009) demonstrated that target words that were undergoing suppression and thus could not be consciously detected could break free from suppression earlier when they were preceded by the presentation of a semantically related prime word. Similarly, Lupyan and Ward (2013) showed that hearing an object's name improved the subsequent detection of objects during CFS. Emotional valence has also been shown to affect suppression times, with emotionally negative utterances showing shorter suppression times than neutral utterances (Sklar et al., 2012; see also Almeida, Pajtas, Mahon, Nakayama, & Caramazza, 2013), and faces associated with negative social information dominate for longer during binocular rivalry than neutral faces (Anderson, Siegel, & Barrett, 2011; but see Rabovsky et al., 2016, for null effects during CFS). These studies suggest that conceptual information can alter the time course and extent to which associated stimuli enter into conscious awareness. In the current study, we recorded the EEG to elucidate the time course of semantic influences over conscious perception and to relate behavioral measures of stimulus detection to preceding modulations of perceptual stages, indexed by early EEG effects.

ERP recordings provide an ideal tool to elucidate the time course of cognitive processes. A number of ERP studies have demonstrated that high-level information can influence the earliest stages of visual processing. Categorical perception effects have been shown to reliably modulate the MMN, an ERP marker taken to be an index of early attentional processing, occurring between 150 and 250 msec (Boutonnet, Dering, Viñas-Guasch, & Thierry, 2012; Mo, Xu, Kay, & Tan, 2011; Thierry et al., 2009). Thierry et al. (2009) further demonstrated influences of verbal categories on the P1, an early ERP component peaking between 100 and 150 msec post-stimulus onset that is thought to reflect basic visual perception, for example, of individual stimulus features, in the extrastriate cortex (Di Russo, Martínez, Sereno, Pitzalis, & Hillyard, 2002), indicating that high-level information can affect a very early stage of stimulus processing. Similarly, using a priming procedure, Boutonnet and Lupyan (2015) showed that the P1 elicited by objects is modulated when they are preceded by the auditory presentation of the object's name. Abdel Rahman and Sommer (2008), using a learning paradigm, selectively manipulated the amount of semantic information associated with initially unfamiliar objects, which were later presented for naming, recognition evaluation, and semantic classification tasks. They found that, in addition to a later modulation of the N400 ERP component, which is related to semantic processing (Kutas & Federmeier, 2011), objects that were associated with semantic information elicited P1 components of reduced amplitude. Rabovsky, Sommer, and Abdel Rahman (2012) found the same P1 effect when performing the same task on object names. These findings suggest that semantic information can modulate early perception. Semantic information may induce such modulation by modifying the perceptual representations of the novel objects formed during training, resulting in more efficient bottom–up propagation of information. Alternatively, given that studies have shown that feedback from frontal cortices to early visual modules can occur after stimulus onset (Bar et al., 2006; Lamme & Roelfsema, 2000), semantic information may exert an influence over extrastriate areas in the form of feedback, resulting in a modulation of the P1 component. The P1 modulation in the studies by Rabovsky et al. (2012) and Abdel Rahman and Sommer (2008) was, however, not accompanied by a corresponding change in RTs. It is possible that the P1 effects in these studies had a functional role in perception, which did not come to bear on behavior because the tasks were very easy at a perceptual level. Thus, behavioral manifestations of the influence of semantic information over conscious perception may only occur in situations of increased perceptual difficulty.

To directly address the role of semantic information in conscious detection, we utilized a learning procedure similar to that used by Abdel Rahman and Sommer (2008), in which participants were familiarized with a series of initially unfamiliar object pictures while the amount of semantic information provided for each object was manipulated. Subsequent to this learning procedure, object pictures with in-depth functional versus minimal associated semantic information were presented under conditions of difficult conscious detection in the attentional blink paradigm where, during rapid serial visual presentation, participants frequently fail to detect the presence of a second target stimulus (T2) when its presentation falls within a certain time window following the processing of a first to-be-reported target (T1; Raymond, Shapiro, & Arnell, 1992). Here, object pictures were briefly presented as the second of two targets, and participants were required to detect the presence of an object within the presentation stream, a task that does not require semantic analysis of the stimulus.

EEG studies of the attentional blink show that, in addition to a preserved early P1/N1 complex, blinked trials (i.e., trials where T2 goes undetected) show a preserved N400 (Rolke, Heil, Streb, & Henninghausen, 2001; Vogel, Luck, & Shapiro, 1998), an ERP component associated with semantic processing (Kutas & Federmeier, 2011). This finding indicates that unreported targets undergo extensive processing, at least to the level of semantic analysis, without participants' awareness. We reasoned that, because unreported targets undergo a high level of stimulus processing, the attentional blink paradigm provides an ideal context to test our hypothesis regarding the influence of semantic information on conscious detection. In contrast to the N400, the P300, a component that has been suggested to reflect the consolidation of information into working memory, is reduced for unreported T2 trials (Sergent, Baillet, & Dehaene, 2005), suggesting that the attentional demands of T1 processing in working memory encoding, episodic processing, and response selection reduce the amount of high-level resources that are available for the processing of T2, resulting in a reduction in conscious detection of T2.

We predicted that if semantic information plays a role in the conscious detection of visual stimuli under difficult conditions in the attentional blink task, then objects associated with more functional–semantic information should be detected to an overall greater extent than objects associated with minimal information. Additionally, in line with previous studies using similar materials and learning procedures (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008), we expected semantic information to induce modulations of the P1 and N400 ERP components.

METHODS

Participants

A sample of 32 right-handed participants (17 women) took part in the experiment in return for a monetary compensation or course credit. All participants were native speakers of German with normal or corrected-to-normal visual acuity. The sample had a mean age of 27 years (range = 20–34 years). This research was approved by the ethics committee at the Department of Psychology, Humboldt-Universität zu Berlin. Participants provided written informed consent before participation.

Materials

Stimuli for T2 targets consisted of grayscale BMP pictures (207 × 207 pixels) of 25 well-known and 40 rare objects, previously used by Abdel Rahman and Sommer (2008) and Rabovsky et al. (2012). Of the 40 rare objects, half were real objects, and half were fictitious. Five of the well-known objects were used for practice trials. For all objects, a sentence was recorded, stating the object's name (e.g., “This is a sofa.”). For the rare objects, pseudowords were used as names that did not reveal any meaningful information regarding functional properties (e.g., “This is a squonker.”). Additional sentences were recorded for the rare objects, which explained their functional use (e.g., “This is a machine for breeding chicken eggs.”). Rare objects were randomly divided into two subsets to be allocated to the two semantic conditions. The assignment of objects to semantic conditions was counterbalanced across participants so that bottom–up influences of differences in contrast, luminance, complexity, and so forth could be excluded. For T1 targets, 164 grayscale photographs of neutral faces were selected, half of which were female, and all were edited for homogeneity of features outside the face. Masking stimuli consisted of 127 grayscale animal pictures. All stimuli were presented in the center of the screen on a light blue background (red, green, blue value: 169, 217, 255).

Procedure

Learning Phase

In the learning phase, participants were familiarized with the objects along with the information associated with each condition. Well-known and rare objects were presented in random order in the center of the monitor screen (DELL 1908FPb, 19 in., 1280 × 1024, 75 Hz). Simultaneously, an auditory sentence was presented. For half of the rare objects, the sentence contained information about the name of the object (minimal knowledge condition), and for the remaining half, the sentence consisted of functional information regarding the use of the object (functional knowledge condition). Well-known objects were presented along with their names for all participants. All objects were presented for 4000 msec with a 500-msec interstimulus interval, during which a fixation cross appeared on the screen; a briefly presented blank screen separated sequential stimuli. Each object was presented three times.

Test Phase

Each trial consisted of the following series of events: A central fixation cross was presented for 500 msec, followed by a series of one to seven masks with the number varying randomly on each trial. T1 was then presented, followed by a single mask. During the SOA between T1 and T2, a fixation cross remained on the screen. T2 was then presented and was followed by two masks. Masks, T1, and T2 were each displayed for 27 msec and were separated by blank screens lasting 41 msec. The SOA between T1 and T2 was either short (258 msec) or long (688 msec). T1 consisted of a male or female face, and masking stimuli were created from a selection of animal pictures. On T2 present trials, rare and well-known objects were presented, and on T2 absent trials, participants were presented with a blank screen. Following the presentation of the stimuli, participants were required to answer a series of questions regarding their experiences for T1 and T2. Participants were first asked whether they had seen an object using a perceptual awareness scale (Sandberg & Overgaard, 2015). They pressed one of four keys: (a) if they did not see any object, (b) if they had the impression of there being an object present, (c) if they were able to perceive parts of but not the full object, and (d) if they perceived a full object. Participants were encouraged to use the full range of response options throughout the experiment. On trials where participants indicated some level of object awareness (Responses b, c, or d), a second question was presented on the screen asking, “Was the object an everyday object?” to which participants gave a binary manual yes or no response. Finally, participants were asked to classify the face presented as the T1 stimulus as male or female with a manual response. For all questions, a schematic representation of the different response options was presented on the screen below the question. Figure 1 provides a visual illustration of the trial scheme. All questions were presented on the screen until a response was given.

On two thirds of the trials, T2 was present, and the remaining third were composed of T2 absent trials. Seventy-five percent of all trials, that is, both T2 present and absent trials, were presented in the critical blink condition at the short SOA. Within a single block, all rare and well-known objects were presented once, requiring participants to complete four blocks to rotate all items across the two SOA conditions (due to the 75:25 ratio of short/long SOA). Participants carried out eight blocks with a short break after each block. Before beginning the experiment, there were 20 practice trials, and in the experiment proper, participants completed 720 trials, 240 of which were T2 absent trials and 480 were T2 present trials, 360 with short and 120 with long SOA. For each of the two rare object conditions and for the well-known condition, 120 and 40 trials were presented at the short and long SOAs, respectively. Thus, each object was presented twice at the long SOA and six times at the short SOA.
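These trial numbers follow directly from the design. A minimal sketch of the arithmetic is given below (illustration only; the variable names are ours and this is not the original experiment code):

# Illustrative reconstruction of the trial counts described above.
objects_per_condition <- 20   # objects per set (functional, minimal, well-known; practice items excluded)
n_conditions <- 3
n_blocks <- 8
present_per_block <- objects_per_condition * n_conditions  # 60 T2-present trials per block
absent_per_block <- present_per_block / 2                  # one third of the 90 trials per block are T2-absent
trials_per_block <- present_per_block + absent_per_block   # 90
trials_per_block * n_blocks                                 # 720 trials in total
present_trials <- present_per_block * n_blocks              # 480 T2-present trials
short_present <- 0.75 * present_trials                      # 360 at the short SOA
long_present <- 0.25 * present_trials                        # 120 at the long SOA
short_present / n_conditions                                 # 120 short-SOA trials per condition
long_present / n_conditions                                  # 40 long-SOA trials per condition
short_present / n_conditions / objects_per_condition         # each object six times at the short SOA
long_present / n_conditions / objects_per_condition          # and twice at the long SOA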

Figure 1. 

(A) Sample of the objects used and their associated descriptions during the learning phase for functional knowledge, minimal knowledge, and well-known objects. (B) Illustration of the stimulus sequence presented during the attentional blink phase. After a learning procedure, participants performed the attentional blink task. The second of the two targets (T2) when present was either a functional knowledge, a minimal knowledge, or a well-known object.

Postexperimental Questionnaire

Participants were subsequently presented with a list of pictures of all rare objects and were asked to write down any information that they could remember from the learning phase for each object.

EEG Recording

Continuous EEG was recorded throughout the experimental session with Ag/AgCl electrodes at 64 sites positioned according to the extended 10–20 system (Pivik et al., 1993) at a 500-Hz sampling rate, using a bandpass (0.032–70 Hz) filter. During recording, all electrodes were referenced to the left mastoid, and electrode impedance levels were kept below 5 kΩ. Horizontal and vertical EOGs were recorded from the external canthi and from above and below the midpoint of the right eye. Offline, the EEG was re-referenced to the average voltage of all electrodes, and a low-pass filter of 30 Hz was applied. Eye blink and horizontal and vertical EOG activity were removed using a Gratton and Coles correction (Gratton, Coles, & Donchin, 1983). Remaining artifacts were eliminated using an automatic rejection procedure in which segments with amplitudes exceeding ±100 μV, or changing by more than 75 μV between successive samples, were rejected. Baseline activity was corrected to a 100-msec time period before the onset of T2, and trials were segmented into time windows 200 msec before and 800 msec after T2 onset. Trials on which participants responded incorrectly to T1 were excluded from the analysis. Time windows for the P1 (100–150 msec) and N400 components (300–500 msec) were based on previous studies demonstrating semantic effects on low-level visual perception and later stages of meaning access (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008). ROIs were selected based on those clusters of electrodes where effects were maximal (see below).
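To illustrate the rejection criterion, a minimal sketch in R (not the authors' pipeline; the function name and example values are hypothetical):

# Hypothetical sketch of the automatic rejection criterion described above:
# a segment is rejected if any sample exceeds +/-100 microvolts or if the
# voltage changes by more than 75 microvolts between successive samples.
reject_segment <- function(amplitudes_uv) {
  any(abs(amplitudes_uv) > 100) || any(abs(diff(amplitudes_uv)) > 75)
}
reject_segment(c(2.1, 3.5, -4.2, 1.0))   # FALSE: clean segment
reject_segment(c(2.1, 3.5, 90.0, 1.0))   # TRUE: 86.5-microvolt step between samples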

Data Analysis

In analyzing participants' ability to perceive T2 across conditions, we first excluded T1 error trials and computed an index of overall object detection by calculating mean responses within each condition. This detection score ranges from 1 to 4, with 1 indicating “no object perception” and 4 indicating “complete object perception.” To assess the influence of semantic information and SOA on conscious detection, we constructed linear mixed models using the lmer function from the lme4 R package, Version 1.1-12 (Bates et al., 2016). For performance and ERPs, we included semantic condition (minimal vs. functional knowledge) and SOA (short, long) as fixed factors and participant as a random unit. Well-known objects were not included in this analysis because they cannot be assigned to different conditions, and therefore, visual differences cannot be excluded. For ERP effects, we additionally included electrode as a random unit, nested within participants (Aarts, Verhage, Veenvliet, Dolan, & van der Sluis, 2014). Because of convergence problems for the maximal random effects structures, we used a data-driven model comparison approach based on restricted maximum likelihood estimation (Matuschek, Kliegl, Vasishth, Baayen, & Bates, 2017; Zuur, Ieno, Walker, Saveliev, & Smith, 2009) to identify the optimal random structure that accounted for the most variance, as indicated by Akaike information criterion and log-likelihood values. The final model included a random slope for SOA by subject and by electrode nested within subject, respectively. Subsequently, fixed effects were tested using maximum likelihood estimation. Models were tested using the anova function from the lmerTest R package, Version 2.0-30 (Kuznetsova, Brockhoff, & Christensen, 2014). Explained deviance (dv) was calculated using the pamer.fnc function from the LMERConvenienceFunctions R package (Tremblay & Ransijn, 2015). Post hoc comparisons were Bonferroni-corrected and calculated using the testInteractions function from the R package phia, Version 0.2-1 (De Rosario-Martínez, 2015).

Postexperimental questionnaires were scored by two independent evaluators. For the minimal knowledge condition, participant recall for an object was scored as correct if it matched the name that was previously learned. For objects from the functional knowledge condition, participant recall was scored as correct if the gist of the description matched the description that was previously learned for that object. Recall was scored as correct or incorrect.
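To make the model structure concrete, a minimal sketch of how such models can be specified with lme4, lmerTest, and phia is given below. The data frames (det, erp) and their column names are our own assumptions for illustration, not the authors' code.

library(lme4)      # mixed-effects models
library(lmerTest)  # F tests for fixed effects
library(phia)      # post hoc interaction contrasts

# det: one row per trial, with detection (1-4), condition (functional vs. minimal),
# soa (short vs. long), and participant; erp: mean amplitudes per trial and electrode.
m_behav <- lmer(detection ~ condition * soa + (soa | participant),
                data = det, REML = FALSE)
anova(m_behav)  # main effects of condition and SOA, and their interaction

# ERP model with electrode as a random unit nested within participant
m_erp <- lmer(amplitude ~ condition * soa +
                (soa | participant) + (soa | participant:electrode),
              data = erp, REML = FALSE)

# Bonferroni-corrected contrasts of condition within each SOA
testInteractions(m_erp, fixed = "soa", across = "condition",
                 adjustment = "bonferroni")

In this sketch, random-effects structures would first be compared with REML fits before the fixed effects are evaluated with maximum likelihood (REML = FALSE), matching the procedure described above.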

RESULTS

Postexperimental Questionnaires

Participants varied widely in their ability to recall the functional information that had been associated with objects at the beginning of the experiment (mean = 74%, SD = 20%, range = 20–100%). Recall for object labels was generally lower than recall for functional information (mean = 17%, SD = 16%, range = 0–70%). To ensure that our manipulation was effective, we removed the two participants whose recall of functional information was more than two standard deviations below the mean.

Semantic Knowledge Effects

Performance

On 12% of trials, participants incorrectly classified T1. These trials were removed from any further analysis. Participants were able to detect objects to a greater extent at the long SOA (m = 3.21) than at the short SOA (m = 2.84), F(1, 29) = 151.03, p < .001, dv = .79. More importantly, participants also slightly differed in their overall detection of T2 objects across semantic conditions (functional knowledge, m = 2.64; minimal knowledge, m = 2.61), reflected in a significant main effect of Semantic condition, F(1, 58) = 4.15, p < .05, dv = .27 (Figure 2A). This difference in mean detection between semantic conditions, though statistically significant, is small, and we encourage readers to interpret it accordingly. The interaction between the factors Semantic condition and SOA was nonsignificant, F(1, 58) = 0.11, p = .75, dv = .01.

Figure 2. 

Results from the attentional blink task. (A) Participants consciously perceived more of the objects that were associated with semantic knowledge in comparison to those from the minimal knowledge condition. (B) Scalp topographies of ERP difference waves between knowledge conditions (functional minus minimal knowledge) for the P1 and N400 components, separated for long and short SOAs.

ERP

Of primary interest regarding the effects of semantic information on early stages of visual processing, analysis of the P1 time window (ROI: O1, Oz, O2, PO3, POz, PO4) revealed no significant main effects of Semantic condition, F(1, 508) = 0.004, p = .96, dv < .01, or SOA, F(1, 29) = 2.34, p = .14, dv = .02. Semantic condition showed opposing effects at the two SOAs, with a positive modulation (i.e., larger amplitudes for the functional knowledge as compared with the minimal knowledge condition) at the short SOA and a negative-going modulation (i.e., smaller amplitudes for the functional knowledge condition) at the long SOA (Figures 2B and 3); this pattern was confirmed by a significant interaction between Semantic condition and SOA, F(1, 508) = 13.81, p < .001, dv = .09. Follow-up comparisons showed that these effects were significant at both the short, χ2 = 6.68, p < .02, and the long, χ2 = 7.14, p < .02, SOA.2

Figure 3. 

Distribution of responses across each response option of the perceptual awareness scale, for minimal knowledge and functional knowledge conditions separated by SOA.

For the N400 component (ROI: O1, Oz, O2, PO3, POz, PO4), there was a significant main effect of SOA, F(1, 29) = 13.22, p = .001, dv = .21, whereas the main effect of Semantic condition was nonsignificant, F(1, 29) = 1.51, p = .23, dv = .02. SOA interacted with Semantic condition, F(1, 479) = 20.71, p < .001, dv = .32. Follow-up contrasts revealed a significant effect of Semantic condition at the long SOA with smaller amplitudes for the functional knowledge condition, χ2 = 7.23, p < .02, whereas at the short SOA, the effect of Semantic condition was nonsignificant, χ2 = 0.14, p = 1.

Correlation Analysis between ERP Amplitude Differences and T2 Detection

To assess the relation between the P1 modulation and behavioral measures of conscious access, we correlated the mean amplitude difference of the P1 component with the difference in detection between knowledge conditions (functional minus minimal knowledge) separately for each SOA. This analysis resulted in a nonsignificant correlation at the short SOA (r = −.05, p = .79) and a marginally significant correlation at the long SOA (r = −.31, p = .095). To further explore the relationship between the semantic modulation of the P1 component and the behavioral measure of conscious detection, we collapsed across both SOAs to calculate the overall effect of semantic information on both the P1 and behavior, revealing a significant negative correlation (r = −.46, p = .01; Figure 3). Thus, stronger knowledge effects on P1 amplitudes, as reported in previous studies (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008), were associated with stronger knowledge-induced facilitation of detection performance. To further explore this relation and its time course, we calculated point-by-point correlations between amplitude differences and the behavioral effect size over consecutive sampling points, every 2 msec from 0 to 200 msec.3 A series of sampling points was considered significant when a minimum of 15 significant, uninterrupted correlations appeared consecutively (Guthrie & Buchwald, 1991). The P1 modulation began to correlate significantly with behavior at 106 msec, continuing uninterrupted until 144 msec, a time window that encompasses 20 consecutive sampling points, corresponding to 38 msec (Figures 4, 5, and 6).
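A minimal sketch of such a point-by-point correlation analysis in R (illustration only; the input objects and the helper function are hypothetical, and the run-length criterion follows Guthrie & Buchwald, 1991):

# amp_diff: participants x time-points matrix of ERP amplitude differences
# (functional minus minimal knowledge), one column every 2 msec from 0-200 msec;
# behav_diff: per-participant detection difference between the same conditions.
point_correlations <- function(amp_diff, behav_diff, alpha = .05, run = 15) {
  r <- apply(amp_diff, 2, function(x) cor(x, behav_diff))
  p <- apply(amp_diff, 2, function(x) cor.test(x, behav_diff)$p.value)
  sig <- p < alpha
  runs <- rle(sig)
  # keep only stretches of at least `run` consecutive significant points
  keep <- rep(runs$values & runs$lengths >= run, runs$lengths)
  list(r = r, significant = keep)
}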

Figure 4. 

Waveforms from the pooled ROI where the P1 component was maximal (linear derivation of electrodes O1, Oz, O2, PO3, POz, PO4), with the time frame of sequential significant point-by-point correlation coefficients highlighted.

Figure 5. 

Scatterplot showing amplitude differences in the P1 component between functional knowledge and minimal knowledge conditions on the y-axis, with corresponding detection differences between functional knowledge and minimal knowledge conditions on the x-axis for each participant.

Figure 6. 

Results from the point-by-point analysis. For each sampling point, the Pearson's r correlation coefficient between the behavioral effect size and the ERP amplitude difference collapsed across SOA is presented. The red line represents the critical value, above which Pearson’s r values are statistically significant (p < .05).

DISCUSSION

Our results demonstrate that semantic information increases the extent to which objects can be consciously detected. Both at the short SOA of the attentional blink, when attention is occupied, and under the less severe perceptual difficulty of the long SOA, semantic information increased the likelihood of visual objects reaching awareness, leading observers to consciously experience a more complete object percept. To our knowledge, this study is the first to demonstrate that semantic modulations of the P1 component can influence the contents of visual awareness. This behavioral effect, which was observed across SOAs, was accompanied by a change in the EEG signal occurring as early as 100 msec after object presentation, in the form of a modulation of the P1 component. Previous studies have shown semantically induced P1 modulations for clearly visible objects, but without significant behavioral differences between conditions. Here, we observed a similar P1 modulation, namely an amplitude reduction associated with semantic information, limited to the long SOA, and accompanied by behavioral differences in conscious perception. Crucially, the P1 modulation was correlated with subjective reports of conscious perception, indicating that semantic information, through a modulation of the brain activation generating the P1, has a functional role in shaping the contents of conscious perception.

An unexpected finding was the interaction between semantic information and SOA on the P1 component, with a reduction at the long SOA, as reported in previous studies (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008), and an increase at the short SOA. Given that the correlation between conscious detection and the P1 effect was negative, our results suggest that it is the reduction in the P1 amplitude that is associated with facilitation of visual analysis and increases in conscious perception. Additionally, objects associated with functional–semantic information elicited an N400 effect at the long SOA only, indicating that it was primarily in this condition that objects were processed at a semantic level. This is interesting in light of prior studies demonstrating preserved N400 amplitudes during the attentional blink (Rolke et al., 2001; Vogel et al., 1998). The short SOA represents a condition of increased difficulty as attention is unavailable here, suggesting that semantic information may operate in distinct ways when task difficulty is increased. It seems possible that the accessibility of semantic information under such conditions may depend on the novelty of this information in the sense that semantic knowledge, which is well established in long-term memory, may be unconsciously activated, while the activation of newly acquired knowledge may more strongly depend on attention and consciousness. This is an interesting question for future research and could be tested by contrasting participants' overall detection performance on a set of well-known items that are better controlled in terms of perceptual familiarity, with their detection performance for stimuli that have been recently associated with functional knowledge. In addition, we would like to again highlight the relatively small effect sizes for participant differences in T2 detection for well-known and minimal knowledge objects. One possible attenuating factor responsible for this is our choice of T1 stimuli: Faces are highly effective in attracting attention and have been shown to augment the attentional blink to a larger extent than other object categories (Landau & Bentin, 2008). It would be interesting in future research to test whether the magnitude of differences in performance would change according to the type of T1 stimulus used.

Underlying Mechanisms

The influence of semantic information on conscious object detection observed here could reflect a number of different underlying mechanisms. One possibility is that the effect is due to a direct modulation of visual processing at a preattentive stage. Semantic information may alter the visual processes that generate the P1, allowing for more efficient perceptual processing of target objects. This kind of facilitation in visual processing may operate via the recruitment of feedback from higher cortical areas involved in the generation of hypotheses regarding the nature of incoming sensory signals (e.g., Bar et al., 2006). Such a mechanism may recruit functional semantic information associated with objects to generate more efficient predictions regarding the nature of the target, thus reducing early perceptual processing demands. This explanation is in line with the recent surge of studies claiming that higher level cognitive factors can influence early perception (e.g., Lupyan, 2015) and with research showing that information may propagate from visual cortices to frontal and parietal areas within 30 msec and frontal areas may become active within 80 msec after stimulus presentation, allowing sufficient time for feedback to extrastriate areas within the P1 time range (Foxe & Simpson, 2002). Modulations in the P1 time range have also been discussed as potential correlates of visual awareness. The ERPs most often reported in relation to conscious perception are an early posterior negativity at around 200 msec (visual awareness negativity) and a later positivity in the P3 time window (late positivity), both of which are enhanced for reported stimuli (Koivisto & Revonsuo, 2010). A large body of studies that contrast reported with unreported conditions also show modulations in the P1 time range, for example, in visual masking (Del Cul, Baillet, & Dehaene, 2007), change blindness (Pourtois, De Pretto, Hauert, & Vuilleumier, 2006), and during bistable perception (Britz, Landis, & Michel, 2009; Kornmeier & Bach, 2006). Koivisto and Revonsuo (2010) interpret these P1 modulations related to seen stimuli as reflecting the preconscious allocation of attention to perception (cf. discussion below). However, a recent study (Davoodi, Moradi, & Yoonessi, 2015) where attention and consciousness were orthogonally manipulated revealed a similar change in the P1 amplitude for seen trials under inattentive conditions, whereas modulations in the P3 range occurred for seen trials in the attentive condition only. Wyart, Dehaene, and Tallon-Baudry (2012) found a similar early component reflecting consciousness under conditions where attention was otherwise engaged, reporting a correlate of conscious detection 120 msec after stimulus onset.

The semantic effects in this study may also be mediated by attention. Attention driven by functional object-related semantic information may lead to a selective enhancement of relevant visual features associated with object functions. Given that attention has been shown to operate at early timescales, modulating the P1 component, the time course of semantic modulations reported here is also in line with such an “attention to perception” explanation. Indeed, it is well established that P1 modulations may reflect the attentional tuning of early visual processing, with stimuli that are presented at attended locations being associated with larger amplitudes (Di Russo, Martínez, & Hillyard, 2003; Mangun, 1995). This has been interpreted as reflecting an attentional “gain control” or amplification of incoming sensory signals (Hillyard, Vogel, & Luck, 1998). Similar P1 modulations can also be observed when attention is directed to nonspatial features, such as color (Zhang & Luck, 2009). The P1 may therefore reflect more engagement of attentional resources at an early stage of visual processing, preceding the appearance of the object into conscious awareness but giving sufficient amplification of the signal to be able to cross the threshold into consciousness.

As discussed above, the relation between consciousness and attention is complex, with various studies demonstrating that they may be two distinct processes, and therefore, attention is not a prerequisite for conscious experience (e.g., Kentridge, Nijboer, & Heywood, 2008; Wyart & Tallon-Baudry, 2008; Koch & Tsuchiya, 2007). These studies, in combination with the significant correlation between the size of the P1 effect and subjective measures of conscious detection, suggest that, rather than exclusively representing preconscious processes, the semantically induced P1 modulation in the current study is functionally linked to conscious perception. Whether conscious detection is mediated via preattentive or early attentive processing, the present findings demonstrate that semantic information has an influence on the emergence of a conscious visual percept and thus is a determining factor for conscious perception.

Conclusion

The present research is the first to demonstrate that semantic modulations of the P1 component may shape the contents of conscious awareness by penetrating early stages of visual perception as reflected in a modulation of EEG activity as early as 100 msec after stimulus presentation. Crucially, behavioral and electrophysiological effects are correlated. Our results provide support for the hypothesis that visual perception is influenced by higher level cognitive processes. We suggest that the power of semantic information to shape the contents of conscious awareness reflects an adaptive mechanism in the sense that semantic meaning—in a similar way to prior experience—can serve as a basis for generating predictions regarding the nature of incoming sensory signals, thus affording a processing advantage for stimuli that match the contents of these predictions.

Acknowledgments

We would like to thank Guido Kiecker for help in programming this experiment; Luisa Balzus for contributions to stimulus preparation; and Julia Baum, Anna Eiserbeck, and Marie Borowikow for assistance with data acquisition. This work was funded by German Research Foundation grant AB 277-6 to R. A. R.

Reprint requests should be sent to Peter D. Weller, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät 408761, Rudower Chaussee 18, 12489 Berlin Adlershof, Germany, or via e-mail: wellerpe@hu-berlin.de.

Notes

1. 

Theoretically, an encapsulated module of perception may be part of a system that includes predictive coding in other components. For instance, modular theories take perception to be encapsulated from higher level cognitive factors (Firestone & Scholl, 2015; Pylyshyn, 1999), whereas other processing components may be affected by higher level cognition or predictive coding, emphasizing the brain's ability to use past experiences to predict the type and nature of incoming signals. In contrast, predictive coding theories of perception assume that perception is directly affected by predictive coding.

2. 

An alternative way of looking at these findings is that it is not the presence of functional knowledge that is modulating responses to the P1, but rather the lack of a name for those objects. To rule out this possibility, we analyzed P1 amplitudes within the minimal knowledge and functional knowledge conditions according to participants’ subsequent memory for the information that was associated with each object. To this end, we conducted two separate one-way ANOVAs with Participant recall (yes/no) as independent variable and P1 amplitude as dependent variable. If it is the association with semantic knowledge that drives the P1 effect, then significant differences in the magnitude of the P1 between remembered and forgotten descriptions should be present within the functional knowledge condition only, whereas no differences in P1 amplitude should be observed according to participants’ memory for object labels. In line with our theoretical framework, we found a modulatory effect of memory for functional knowledge, F(1, 26) = 4.26, p < .05 (.049), but no significant modulatory effect of memory for object labels, F(1, 27) = 0.19, p = .23.
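These follow-up tests could be sketched in R as follows (hypothetical data frame and column names; the aggregation level of the P1 amplitudes is assumed):

# Hypothetical sketch of the two one-way ANOVAs described above; p1_by_recall is
# an assumed data frame of mean P1 amplitudes with columns condition
# ("functional" or "minimal") and recalled (TRUE/FALSE).
summary(aov(p1 ~ recalled, data = subset(p1_by_recall, condition == "functional")))
summary(aov(p1 ~ recalled, data = subset(p1_by_recall, condition == "minimal")))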

3. 

Please note that the EEG data are temporally smoothed due to the low-pass filter and that we did not correct for multiple comparisons across time segments because we had very clear hypotheses concerning the P1 segment based on our previous studies (Rabovsky et al., 2012; Abdel Rahman & Sommer, 2008).

REFERENCES

Aarts, E., Verhage, M., Veenvliet, J. V., Dolan, C. V., & van der Sluis, S. (2014). A solution to dependency: Using multilevel analysis to accommodate nested data. Nature Neuroscience, 17, 491–496.
Abdel Rahman, R. (2011). Facing good and evil: Early brain signatures of affective biographical knowledge in face recognition. Emotion, 11, 1397–1405.
Abdel Rahman, R., & Sommer, W. (2008). Seeing what we know and understand: How knowledge shapes perception. Psychonomic Bulletin & Review, 15, 1055–1063.
Almeida, J., Pajtas, P. E., Mahon, B. Z., Nakayama, K., & Caramazza, A. (2013). Affect of the unconscious: Visually suppressed angry faces modulate our decisions. Cognitive, Affective, & Behavioral Neuroscience, 13, 94–101.
Anderson, E., Siegel, E. H., & Barrett, L. F. (2011). What you feel influences what you see: The role of affective feelings in resolving binocular rivalry. Journal of Experimental Social Psychology, 47, 856–860.
Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., et al. (2006). Top–down facilitation of visual recognition. Proceedings of the National Academy of Sciences, U.S.A., 103, 449–454.
Bates, D., Maechler, M., Bolker, B., Walker, S., Christensen, R. H. B., Singmann, H., et al. (2016). lme4: Linear mixed-effects models using “Eigen” and S4 (Version 1.1-12) [Software]. Retrieved from https://cran.r-project.org/web/packages/lme4/index.html.
Boutonnet, B., Dering, B., Viñas-Guasch, N., & Thierry, G. (2012). Seeing objects through the language glass. Journal of Cognitive Neuroscience, 25, 1702–1710.
Boutonnet, B., & Lupyan, G. (2015). Words jump-start vision: A label advantage in object recognition. Journal of Neuroscience, 35, 9329–9335.
Britz, J., Landis, T., & Michel, C. M. (2009). Right parietal brain activity precedes perceptual alternation of bistable stimuli. Cerebral Cortex, 19, 55–65.
Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. Journal of the Acoustical Society of America, 25, 975–979.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 181–204.
Collins, J. A., & Curby, K. M. (2013). Conceptual knowledge attenuates viewpoint dependency in visual object recognition. Visual Cognition, 21, 945–960.
Collins, J. A., & Olson, I. R. (2014). Knowledge is power: How conceptual knowledge transforms visual cognition. Psychonomic Bulletin & Review, 21, 843–860.
Costello, P., Jiang, Y., Baartman, B., McGlennen, K., & He, S. (2009). Semantic and subword priming during binocular suppression. Consciousness and Cognition, 18, 375–382.
Davoodi, R., Moradi, M. H., & Yoonessi, A. (2015). Dissociation between attention and consciousness during a novel task: An ERP study. Neurophysiology, 47, 144–154.
Del Cul, A., Baillet, S., & Dehaene, S. (2007). Brain dynamics underlying the nonlinear threshold for access to consciousness. PLoS Biology, 5, e260.
De Rosario-Martínez, H. (2015). phia: Post-hoc interaction analysis (R package version 0.2-1).
De-Wit, L., Machilsen, B., & Putzeys, T. (2010). Predictive coding and the neural response to predictable stimuli. Journal of Neuroscience, 30, 8702–8703.
Di Russo, F., Martínez, A., & Hillyard, S. A. (2003). Source analysis of event-related cortical activity during visuo-spatial attention. Cerebral Cortex, 13, 486–499.
Di Russo, F., Martínez, A., Sereno, M. I., Pitzalis, S., & Hillyard, S. A. (2002). Cortical sources of the early components of the visual evoked potential. Human Brain Mapping, 15, 95–111.
Firestone, C., & Scholl, B. J. (2015). Cognition does not affect perception: Evaluating the evidence for “top–down” effects. Behavioral and Brain Sciences, 20, 1–77.
Foxe, J. J., & Simpson, G. V. (2002). Flow of activation from V1 to frontal cortex in humans. A framework for defining “early” visual processing. Experimental Brain Research, 142, 139–150.
Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 360, 815–836.
Gilbert, C. D., & Li, W. (2013). Top–down influences on visual processing. Nature Reviews Neuroscience, 14, 350–363.
Gratton, G., Coles, M. G., & Donchin, E. (1983). A new method for off-line removal of ocular artifacts. Electroencephalography and Clinical Neurophysiology, 55, 468–484.
Guthrie, D., & Buchwald, J. S. (1991). Significance testing of difference potentials. Psychophysiology, 28, 240–244.
Hansen, T., Olkkonen, M., Walter, S., & Gegenfurtner, K. R. (2006). Memory modulates color appearance. Nature Neuroscience, 9, 1367–1368.
Hillyard, S. A., Vogel, E. K., & Luck, S. J. (1998). Sensory gain control (amplification) as a mechanism of selective attention: Electrophysiological and neuroimaging evidence. Philosophical Transactions of the Royal Society B: Biological Sciences, 353, 1257–1270.
Kentridge, R. W., Nijboer, T. C., & Heywood, C. A. (2008). Attended but unseen: Visual attention is not sufficient for visual awareness. Neuropsychologia, 46, 864–869.
Koch, C., & Tsuchiya, N. (2007). Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11, 16–22.
Koivisto, M., & Revonsuo, A. (2010). Event-related brain potential correlates of visual awareness. Neuroscience and Biobehavioral Reviews, 34, 922–934.
Kornmeier, J., & Bach, M. (2006). Bistable perception—Along the processing chain from ambiguous visual input to a stable percept. International Journal of Psychophysiology, 62, 345–349.
Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621–647.
Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2014). lmerTest: Tests for random and fixed effects for linear mixed effect models (lmer objects of lme4 package) (R package version 2.0-6).
Lamme, V. A., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23, 571–579.
Landau, A. N., & Bentin, S. (2008). Attentional and perceptual factors affecting the attentional blink for faces and objects. Journal of Experimental Psychology: Human Perception and Performance, 34, 818–830.
Lupyan, G. (2015). Object knowledge changes visual appearance: Semantic effects on color afterimages. Acta Psychologica, 161, 117–130.
Lupyan, G., & Ward, E. J. (2013). Language can boost otherwise unseen objects into visual awareness. Proceedings of the National Academy of Sciences, U.S.A., 110, 14196–14201.
Maier, M., & Abdel Rahman, R. (2018). Native language promotes access to visual consciousness. Psychological Science, 29, 1757–1772.
Maier, M., Glage, P., Hohlfeld, A., & Abdel Rahman, R. (2014). Does the semantic content of verbal categories influence categorical perception? An ERP study. Brain and Cognition, 91, 1–10.
Mangun, G. R. (1995). Neural mechanisms of visual selective attention. Psychophysiology, 32, 4–18.
Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94, 305–315.
Mitterer, H., Horschig, J. M., Müsseler, J., & Majid, A. (2009). The influence of memory on perception: It's not what things look like, it's what you call them. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 1557–1562.
Mo, L., Xu, G., Kay, P., & Tan, L.-H. (2011). Electrophysiological evidence for the left-lateralized effect of language on preattentive categorical perception of color. Proceedings of the National Academy of Sciences, U.S.A., 108, 14026–14030.
Pivik, R. T., Broughton, R. J., Coppola, R., Davidson, R. J., Fox, N., & Nuwer, M. R. (1993). Guidelines for the recording and quantitative analysis of electroencephalographic activity in research contexts. Psychophysiology, 30, 547–558.
Pourtois, G., De Pretto, M., Hauert, C. A., & Vuilleumier, P. (2006). Time course of brain activity during change blindness and change awareness: Performance is predicted by neural events before change onset. Journal of Cognitive Neuroscience, 18, 2108–2129.
Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22, 341–423.
Rabovsky, M., Sommer, W., & Abdel Rahman, R. (2012). Depth of conceptual knowledge modulates visual processes during word reading. Journal of Cognitive Neuroscience, 24, 990–1005.
Rabovsky, M., Stein, T., & Abdel Rahman, R. (2016). Access to awareness for faces during continuous flash suppression is not modulated by affective knowledge. PLoS One, 11, 1–17.
Rao, R. P. N., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2, 79–87.
Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849–860.
Regier, T., & Kay, P. (2009). Language, thought, and color: Whorf was half right. Trends in Cognitive Sciences, 13, 439–446.
Rolke, B., Heil, M., Streb, J., & Henninghausen, E. (2001). Missed prime words within the attentional blink evoke an N400 semantic priming effect. Psychophysiology, 38, 165–174.
Sandberg, K., & Overgaard, M. (2015). Using the perceptual awareness scale (PAS). In M. Overgaard (Ed.), Behavioral methods in consciousness research (pp. 181–195). Oxford: Oxford University Press.
Sergent, C., Baillet, S., & Dehaene, S. (2005). Timing of the brain events underlying access to consciousness during the attentional blink. Nature Neuroscience, 8, 1391–1400.
Sklar, A. Y., Levy, N., Goldstein, A., Mandel, R., Maril, A., & Hassin, R. R. (2012). Reading and doing arithmetic nonconsciously. Proceedings of the National Academy of Sciences, U.S.A., 109, 19614–19619.
Sterzer, P., Stein, T., Ludwig, K., Rothkirch, M., & Hesselmann, G. (2014). Neural processing of visual information under interocular suppression: A critical review. Frontiers in Psychology, 5, 453.
Suess, F., Rabovsky, M., & Abdel Rahman, R. (2015). Perceiving emotions in neutral faces: Expression processing is biased by affective person knowledge. Social Cognitive and Affective Neuroscience, 10, 531–536.
Thierry, G., Athanasopoulos, P., Wiggett, A., Dering, B., & Kuipers, J.-R. (2009). Unconscious effects of language-specific terminology on preattentive color perception. Proceedings of the National Academy of Sciences, U.S.A., 106, 4567–4570.
Tremblay, A., & Ransijn, J. (2015). LMERConvenienceFunctions (R package).
Tsuchiya, N., & Koch, C. (2005). Continuous flash suppression reduces negative afterimages. Nature Neuroscience, 8, 1096–1101.
Vogel, E. K., Luck, S. J., & Shapiro, K. L. (1998). Electrophysiological evidence for a postperceptual locus of suppression during the attentional blink. Journal of Experimental Psychology: Human Perception and Performance, 24, 1656–1674.
Witzel, C., Valkova, H., Hansen, T., & Gegenfurtner, K. R. (2011). Object knowledge modulates colour appearance. i-Perception, 2, 13–49.
Wyart, V., Dehaene, S., & Tallon-Baudry, C. (2012). Early dissociation between neural signatures of endogenous spatial attention and perceptual awareness during visual masking. Frontiers in Human Neuroscience, 6, 16.
Wyart, V., & Tallon-Baudry, C. (2008). Neural dissociation between visual awareness and spatial attention. Journal of Neuroscience, 28, 2667–2679.
Zhang, W., & Luck, S. J. (2009). Feature-based attention modulates feedforward visual processing. Nature Neuroscience, 12, 24–25.
Zuur, A. F., Ieno, E. N., Walker, N. J., Saveliev, A. A., & Smith, G. M. (2009). Mixed effects models and extensions in ecology with R. New York, NY: Springer.