Abstract

Recent evidence suggests that conceptual knowledge modulates early visual stages of object recognition. The present study investigated whether similar modulations can also be observed for the recognition of object names, that is, for symbolic representations whose visual features bear only arbitrary relationships to the corresponding conceptual knowledge. In a learning paradigm, we manipulated the amount of information provided about initially unfamiliar visual objects while controlling for perceptual stimulus properties and exposure. In a subsequent test session with electroencephalographic recordings, participants performed several tasks on either the objects or their written names. For objects as well as names, knowledge effects were observed as early as about 120 msec in the P1 component of the ERP, reflecting perceptual processing in extrastriate visual cortex. These knowledge-dependent modulations of early stages of visual word recognition suggest that information about word meanings may modulate the perception of arbitrarily related visual features surprisingly early.

INTRODUCTION

In perceiving and understanding the visual world, perceptual inputs provided by our eyes and early visual systems combine with the knowledge stored in our conceptual systems, enabling us, for example, to recognize the white and rectangular object in the kitchen as a refrigerator or the moving object on the street as a car. Whereas conceptual attributes of visual objects are often inherently related to their perceptual attributes, there is no such link for the names of objects, that is, their symbolic representations. For instance, the perceptual information of movement or wheels relates directly to the functional knowledge that cars can be used for transport. In contrast, the conjunction of lines, curves, and edges forming the written word "CAR" likewise rapidly activates what we know about cars, but does so merely on the basis of convention. How might this arbitrary relation between visual features and knowledge affect the interplay of perceptual and conceptual systems in word processing? Whereas some theories propose visual word recognition to rely on bottom–up connections only (Carr & Pollatsek, 1985; Forster, 1976), interactive models of reading assume reciprocal connections and feedback from conceptual representations down to visual and orthographic processes (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; Seidenberg & McClelland, 1989).

Recent evidence suggests close interrelations between perceptual and conceptual processes in object recognition. For instance, selective manipulation of conceptual factors in learning paradigms with novel objects affects visual discrimination performance (Gauthier, James, Curby, & Tarr, 2003) and activity in sensory motor areas (Kiefer, Sim, Liebich, Hauk, & Tanaka, 2007; James & Gauthier, 2003). Abdel Rahman and Sommer (2008) manipulated the amount of information learned about initially unfamiliar objects (see Figure 1) and obtained modulations in the P1 component of the ERP, peaking at about 120 msec. The P1 component is assumed to reflect perceptual processing in extrastriate visual cortex (Sereno & Rayner, 2003; Di Russo, Martínez, Sereno, Pitzalis, & Hillyard, 2002). However, recent evidence suggests the information flow through the visual hierarchy to be far more rapid than formerly believed (Foxe & Simpson, 2002). Therefore, the P1 may reflect extrastriate activity not only during the very first feed-forward processing sweep but also after the initiation of recurrent activity from higher to lower areas (e.g., Lamme & Roelfsema, 2000), which might hence underlie the observed influences of semantic information. Here, we investigated whether this knowledge-dependent P1 modulation generalizes to object names, that is, whether conceptual knowledge influences ERPs during early stages of visual word recognition.

Figure 1. 

Example stimuli. (A) Initially unfamiliar objects with the information to be memorized in the first part of the learning session. (B) Short stories (translated into English) as presented in the second part of the learning session to manipulate the amount of object-related information.

Although scant, some evidence suggests influences of conceptual variables on perceptual and orthographic processing. Large associative neighborhoods facilitate visual word recognition and decrease activation in several brain regions, including visual areas (Pexman, Hargreaves, Edwards, Henry, & Goodyear, 2007). A processing advantage for semantically ambiguous words has been ascribed to feedback from multiple meanings to orthography, whereas a disadvantage for words with synonyms has been attributed to enhanced competition between different orthographic representations receiving feedback from the corresponding concept (Hino, Lupker, & Pexman, 2002; Pecher, 2001). Facilitating effects of orthographically mediated semantic priming (e.g., frog–[toad]–told) also suggest that activation spreads from a word's semantic representation to its orthography (Reimer, Lorsbach, & Bleakney, 2008).

Unfortunately, the findings discussed above provide only limited insight into the temporal unfolding of semantic influences during reading. More relevant evidence comes from studies exploiting the high temporal resolution of EEG recordings. On the basis of a large body of evidence, meaning access in reading has been typically linked to the N400 component, a negative deflection peaking at about 400 msec after word presentation (Kutas & Hillyard, 1980; see Kutas & Federmeier, 2011; Kutas, van Petten, & Kluender, 2006, for reviews). However, there are some indications of semantic influences on much earlier stages of word recognition. Thus, it has been reported that different attributes of word meaning, such as the semantic dimensions evaluation (good–bad), potency (strong–weak), and activity (active–passive; Skrandies, 1998), as well as emotional valence (Scott, O'Donnell, Leuthold, & Sereno, 2009) can modulate ERPs already in a time range of about 100–150 msec after word presentation. Yet, familiar words differ on multiple dimensions, not only on the targeted semantic attributes but also on a variety of basic visual properties (e.g., number and physical extension of letters) and perceptual familiarity (e.g., frequency of letters and letter combinations). Therefore, even with painstaking controls of confounding perceptual factors, the differential contributions of perceptual and semantic forces are hard to disentangle.

Circumventing this problem, another line of research has employed semantic context manipulations (e.g., cloze probability within sentences), using identical target words across semantic conditions. Context-dependent ERP modulations, starting already at about 100 msec, have recently been demonstrated (Dambacher, Rolfs, Göllner, Kliegl, & Jacobs, 2009; Segalowitz & Zheng, 2009; Penolazzi, Hauk, & Pulvermüller, 2007; Wirth et al., 2007). These findings indicate very early interactions between bottom–up input and anticipation-based top–down activation. The question remains, however, whether knowledge about word meaning per se, without context-induced expectations and preactivations of potential stimulus candidates, likewise affects early perceptual analyses.

Aiming at directly addressing this question while avoiding context-based expectations and possible differences of visual stimulus properties or perceptual familiarity, we used a learning paradigm to explore influences of the depth of conceptual knowledge on perceptual processes during reading. In this paradigm, initially unfamiliar objects and their names were first associated and familiarized. Subsequently, we manipulated the amount of information provided about the objects, counterbalancing the assignment of objects to knowledge conditions and keeping perceptual exposure constant. Importantly, only the objects, but not their names, were shown during information acquisition (see Figure 1).

In a subsequent test session with EEG recordings, we presented both the pictures (as in Abdel Rahman & Sommer, 2008) and the written names of the objects. Participants performed three tasks that are commonly used in experiments on object and word recognition and that presumably differ in the depth of processing: naming, semantic categorization, and familiarity decision. None of the tasks required retrieval of the acquired knowledge. We focused on the P1 component as an electrophysiological indicator of perceptual processing in visual cortex. Our primary aim was to examine whether newly acquired object-related knowledge influences the visual analysis of the corresponding object names despite the absence of intrinsic relations between perceptual and semantic features in visual words. In addition, we analyzed knowledge effects on the amplitude of the N400 component, which has often been associated with semantic processing (Kutas & Hillyard, 1980; see Kutas & Federmeier, 2011; Kutas et al., 2006, for reviews), as well as knowledge effects during object perception, aiming to replicate the findings of Abdel Rahman and Sommer (2008).

METHODS

Participants

Twenty-four native German speakers (15 women), with a mean age of 24 years (range = 18–36 years), took part after giving informed consent. Participants were right-handed and reported normal or corrected-to-normal visual acuity.

Materials

Visual stimuli were pictures and written names of 20 well-known and 40 rare objects. Half of the objects (10 well-known and 20 rare objects) were real, for example, sofa (well-known) and adder (rare; see Figure 1), and the other half were fictitious, for example, UFO (well-known) and sonocor (rare). Overall, well-known objects were familiar to the participants, with the exception of one participant who did not know one and another participant who did not know two of the objects. Rare objects were chosen to be unfamiliar to the vast majority of people. Most participants did not know any of them; three participants each knew a single object, whose name they did not know but whose shape they recognized, although they had not seen the specific picture employed in the experiment. For each rare object, a spoken story was recorded, explaining its function (mean duration = 18.3 sec). In addition, we recorded 20 spoken cooking recipes (mean duration = 18.6 sec). Vocal responses of the participants were recorded with a microphone; response latencies were measured as voice onsets, and errors were coded manually by the experimenter.

A full list of object names is provided in the Appendix. As the previous version of the experiment (Abdel Rahman & Sommer, 2008) had focused on object perception, object names were not perfectly matched for newly learned and well-known conditions on relevant dimensions. However, mean name length was 7.7 letters for the well-known names and 7.1 letters for the newly learned names; an analysis comparing the 20 well-known names with the two lists of 20 newly learned names each (used for counterbalancing) revealed no significant differences, F(2, 38) = 1.5, p = .24. Yet, two of the well-known object names were abbreviations, one of them additionally containing numbers (UFO and R2D2); another well-known name consisted of two separate words (Fliegender Teppich, English: magic carpet). However, the most relevant comparison is between newly learned names associated with in-depth versus minimal information; here, the physical stimuli were identical because of counterbalancing across participants.

Procedure and Design

Learning Session

The learning session consisted of two parts. First, participants were familiarized with the 40 rare objects and learned basic, task-relevant information provided for all of them, namely their semantic categories (real vs. fictitious) and their names (see Figure 1A for examples of the stimuli). Each object was presented six times. For the first two presentations, each object was shown together with the written task-relevant information; the information was then erased from the screen, prompting its vocalization. During the subsequent presentations, the objects were first shown alone, prompting participants to produce name and category from memory; this was immediately followed by the presentation of the written information, enabling participants to check and correct their responses. Objects to be associated with minimal or in-depth information were presented intermixed, precluding any systematic differences in attention, fatigue, etc., between these conditions.

At the end of the first part of the learning session, participants were tested on the acquired information without feedback. One block of testing required object naming, and another one required vocal object categorization (real vs. fictitious). The order of these test blocks was counterbalanced. During each block, objects were presented twice in random order, resulting in a total of 160 trials. At the beginning of each trial, a fixation cross was presented for 0.5 sec, followed by an object picture that remained on the screen until the response had been given or for a maximum of 3 sec. The subsequent trial started 1 sec later.

In the second part of the learning session, short spoken stories were presented while the corresponding object pictures were shown on the screen. For half of the objects, the stories contained object-specific functional information (in-depth knowledge condition; see Figure 1B, top), whereas for the other half, unrelated cooking recipes were presented (minimal knowledge condition; see Figure 1B, bottom). Visual presentation times were the same in both knowledge conditions. The assignment of objects to the in-depth and minimal knowledge conditions was counterbalanced across participants, so that each object appeared equally often in both conditions (see the sketch below). Each participant heard all 20 cooking recipes and 20 of the available 40 object-specific stories, namely those describing the objects assigned to the in-depth knowledge condition in his or her case. Participants were instructed to attend to all objects and listen to all stories. The stories were presented three times. Whereas object-specific stories were always assigned to the same object, recipes were randomly assigned to different objects at each presentation, without further instruction to memorize them. The second part ended with the same test as Part 1.
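For illustration, a minimal Python sketch of such a counterbalanced assignment, assuming (as suggested by the two counterbalancing lists mentioned under Materials) that the 40 newly learned objects were split into two fixed lists of 20; the function name and the even/odd participant rule are our own illustrative assumptions, not the original procedure.

```python
def assign_knowledge_conditions(object_ids, participant_index):
    """Counterbalance objects over knowledge conditions across participants.

    object_ids: the 40 newly learned objects, pre-split into two fixed
    lists of 20 (cf. the two counterbalancing lists under Materials).
    Returns (in_depth, minimal); the even/odd rule is an assumption.
    """
    list_a, list_b = object_ids[:20], object_ids[20:]
    if participant_index % 2 == 0:
        return list_a, list_b  # list A receives the object-specific stories
    return list_b, list_a      # list B receives the stories instead
```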

Test Session

Test sessions with EEG recordings took place 2–3 days after learning. Before the start, participants completed a memory questionnaire: they saw the object pictures and were asked to write down the names and categories, as well as semantic information if available. During the subsequent test, pictures and written names of the objects were presented in two separate blocks in counterbalanced order. Pictures or names of the 40 newly learned objects were presented randomly, alternating with pictures or names of the 20 well-known objects. In both the picture and the name block, three tasks were employed blockwise, in counterbalanced order: a naming task, a spoken semantic categorization task (real vs. fictitious), and a button-press familiarity decision task (well-known vs. newly learned). None of the tasks required retrieval of the knowledge acquired in the second part of the learning session. Object pictures and names were presented three times in each task, resulting in a total of 1080 trials. Each trial began with a fixation cross presented for 0.5 sec, followed by an object picture or a written object name, which remained on the screen until the response had been given or for a maximum of 3 sec. The next trial began 1 sec later.

EEG Recording and Data Analysis

The EEG was recorded with Ag/AgCl electrodes from 56 sites according to the extended 10–20 system (please see Figure 2 for a map of the recording sites) and referenced to the left mastoid. The horizontal and vertical EOGs were recorded from the left and right external canthi and from above and below the midpoint of the left eye. Electrode impedance was kept below 5 kΩ. Bandpass of amplifiers (BrainAmp) was 0.032–70 Hz; sampling rate was 500 Hz. Off-line, the EEG was transformed to average reference, proposed as an estimate for an inactive reference and considered to be less biased than other common references (Picton et al., 2000). Eye movement artifacts were removed with a spatio-temporal dipole modeling procedure using BESA software (Berg & Scherg, 1991), based on prototypical eye movements obtained in a calibration procedure at the beginning of the session. After applying a 30-Hz low-pass filter, the continuous EEG was segmented into epochs of 2.5 sec, with a 100-msec prestimulus baseline. Trials with remaining artifacts and with incorrect or missing responses were discarded, resulting in the rejection of 2.9–4.5% of trials, depending on condition.
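For illustration only, a minimal NumPy/SciPy sketch of the offline steps just described (average reference, 30-Hz low-pass, segmentation into 2.5-sec epochs with a 100-msec prestimulus baseline). Array layout, filter order, and function names are our assumptions; the original analysis used BESA and the BrainAmp toolchain.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500       # sampling rate (Hz), as reported
PRE_S = 0.1    # 100-msec prestimulus baseline
EPOCH_S = 2.5  # epoch length (sec)

def average_reference(eeg):
    # eeg: channels x samples; subtracting the channel mean at each time
    # point implements the average-reference transform
    return eeg - eeg.mean(axis=0, keepdims=True)

def lowpass_30hz(eeg):
    # Zero-phase 30-Hz low-pass; the article specifies only the cutoff,
    # so the Butterworth order is an assumption
    b, a = butter(4, 30.0, btype="low", fs=FS)
    return filtfilt(b, a, eeg, axis=1)

def segment(eeg, onsets):
    # Cut 2.5-sec epochs around stimulus onsets (given as sample indices)
    # and subtract the mean of the 100-msec prestimulus baseline
    pre, total = int(PRE_S * FS), int(EPOCH_S * FS)
    epochs = []
    for onset in onsets:
        ep = eeg[:, onset - pre:onset - pre + total].copy()
        ep -= ep[:, :pre].mean(axis=1, keepdims=True)
        epochs.append(ep)
    return np.stack(epochs)  # trials x channels x samples
```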

Figure 2. 

Map of electrode positions according to the extended 10–20 system. Recording sites are depicted in gray; occipital sites selected for additional analyses of P1 knowledge effects are highlighted in dark gray.

ERP amplitudes were submitted to repeated measures ANOVAs including the factors Domain (picture vs. word), Task (familiarity decision, semantic categorization, naming), Knowledge (minimal, in-depth, well-known), and Electrode Site (56 levels). Because of the average reference, only effects in interaction with Electrode Site are meaningful in these ANOVAs; they are considered as "main effects" of the respective factors. Effects of knowledge were further analyzed at the sites of maximal P1 amplitude. Overall, the P1 peaked at 130 msec and was most pronounced at electrode site PO8, followed by O1. Thus, further analyses of P1 knowledge effects focused on these electrode sites and their contralateral counterparts (PO8, PO7, O1, and O2; unpooled). All analyses were additionally conducted separately for the stimulus domains to be more conservative concerning possible knowledge effects for object names (the main focus of this study). Degrees of freedom were corrected according to Huynh and Feldt (1976), where appropriate. Post hoc comparisons are reported only for main effects or interactions involving the factor Knowledge; significance levels were Bonferroni-corrected. Performance data from the learning session were submitted to repeated measures ANOVAs with the factors Task (naming vs. semantic categorization) and Knowledge (minimal vs. in-depth). The same analysis was applied to error rates from the memory questionnaire. ANOVAs on performance data from the test session included the additional factor Domain (pictures vs. words) as well as an additional level of both the Task factor (familiarity decision) and the Knowledge factor (well-known stimuli).
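As an illustration of the amplitude extraction and the repeated measures ANOVA, a sketch using NumPy, pandas, and statsmodels. Data shapes, column names, and the synthetic example are assumptions, and note that statsmodels' AnovaRM does not apply the Huynh-Feldt correction used in the article.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

FS, PRE_S = 500, 0.1
P1_SITES = ["PO7", "PO8", "O1", "O2"]  # selected occipital sites

def p1_window_mean(epochs, ch_names, t0=0.100, t1=0.150):
    # Mean amplitude per trial and site in the 100-150-msec P1 window;
    # epochs: trials x channels x samples with a 100-msec baseline period
    i0, i1 = int((PRE_S + t0) * FS), int((PRE_S + t1) * FS)
    idx = [ch_names.index(ch) for ch in P1_SITES]
    return epochs[:, idx, i0:i1].mean(axis=2)  # trials x sites

# Toy long-format table (synthetic data): one row per participant,
# knowledge condition, and site, holding the subject-average amplitude
rng = np.random.default_rng(0)
rows = [(s, k, ch, rng.normal())
        for s in range(24)
        for k in ("minimal", "in-depth", "well-known")
        for ch in P1_SITES]
df = pd.DataFrame(rows, columns=["subject", "knowledge", "site", "amp"])
print(AnovaRM(df, depvar="amp", subject="subject",
              within=["knowledge", "site"]).fit())
```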

RESULTS

Performance

Mean RTs, standard errors of means (SEs), and mean error rates (ERs) from the speeded tasks during the learning and test session are shown in Tables 1 and 2, respectively.

Table 1. RTs (msec) with the Corresponding SEs and Error Rates (ERs, %) from the Learning Session

                      Minimal                 In-depth
                 RT      SE     ER       RT      SE     ER
Part 1
  Naming task    1191    45.2   14.4     1187    48.3   16.4
  Semantic task   913    36.4    5.2      919    42.3    7.0
Part 2
  Naming task    1291    53.7   20.8     1404    56.5   24.6
  Semantic task   864    28.8    4.8      909    36.8    5.9
Table 2. RTs (msec) with the Corresponding SEs and Error Rates (ERs, %) from the Test Session

                         Minimal                In-depth               Well-known
                    RT     SE     ER       RT     SE     ER       RT     SE     ER
Words
  Naming task        579   16.2   0.2      573   14.2   0.1      561   14.0   0.1
  Semantic task     1089   40.1   6.5     1074   39.7   6.5      950   31.0   2.6
  Familiarity task   617   22.2   1.7      615   18.3   1.2      633   18.4   6.8
Pictures
  Naming task       1061   45.9   6.0     1076   43.6   6.1      846   23.7   1.7
  Semantic task      883   28.7   4.7      901   32.9   3.3      861   26.0   2.5
  Familiarity task   620   16.0   1.5      623   16.5   1.6      648   19.7   5.1

Learning Session

In the test after Part 1 of the learning session, neither RTs nor ERs differed between objects associated with minimal versus in-depth information (Fs(1, 23) < 2.73, ps > .11). Responses were faster and more accurate in the semantic categorization task than in the naming task (Fs(1, 23) > 20.37, ps < .001). After the knowledge manipulation in Part 2, objects from the in-depth knowledge condition were responded to more slowly and elicited more errors than those from the minimal knowledge condition (Fs(1, 23) > 4.41, ps < .05). As after the first part, naming was slower and less accurate than semantic categorization (Fs(1, 23) > 35.43, ps < .001).

Test Session

Neither RTs nor ERs differed between in-depth (810 msec and 3.1%, respectively) and minimal (808 msec and 3.4%, respectively) knowledge conditions, Fs < 1, during the test session. Further behavioral results are reported for the sake of completeness.

On average, responses to words were faster than responses to pictures, resulting in a main effect of Domain, F(1, 23) = 53.56, p < .001. There was also a significant main effect of Task, F(2, 46) = 151.57, p < .001, and a significant Domain × Task interaction, F(2, 46) = 195.77, p < .001. Words were named considerably faster than pictures, F(1, 23) = 200.84, p < .001, whereas pictures elicited faster responses in the semantic categorization task, F(1, 23) = 55.32, p < .001. The effect of Knowledge was highly significant, F(2, 46) = 36.36, p < .001, and interacted with both Domain, F(2, 46) = 9.25, p < .001, and Task, F(4, 92) = 24.78, p < .001; the three-way Domain × Task × Knowledge interaction was also significant, F(4, 92) = 31.16, p < .001. However, in the ANOVA without the well-known stimuli, which were responded to considerably faster (except in the familiarity decision and in word naming), the Knowledge effect as well as the Task × Knowledge and Domain × Task × Knowledge interactions failed to reach significance, Fs < 1. Although the Domain × Knowledge interaction remained significant, F(1, 23) = 7.49, p < .05, reflecting opposite tendencies of knowledge effects in the picture and word conditions, further testing showed these tendencies not to be reliable, Fs(1, 23) < 2.33, ps > .28.

For ERs, there was a significant main effect of Task, F(2, 46) = 3.40, p < .05, and a significant Task × Domain interaction, F(2, 46) = 22.97, p < .001. Separate comparisons revealed that, paralleling RTs, ERs for word naming were considerably lower than for picture naming, F(1, 23) = 18.57, p < .001, whereas words elicited more errors than pictures in the categorization task, F(1, 23) = 13.19, p < .01. Knowledge was not significant as a main effect, F < 1, but interacted with Task, F(4, 92) = 25.80, p < .001, and there was a Domain × Task × Knowledge interaction, F(4, 92) = 4.66, p < .01. However, when the well-known stimuli, which produced fewer errors in categorization and picture naming, were omitted from the ANOVA, these interactions vanished, Fs < 1.

Because every stimulus was presented nine times in the course of the experiment, we also analyzed whether stimulus repetition interacted with the Knowledge factor. ANOVAs showed repetition to facilitate performance, with significant main effects for both RTs, F(8, 128) = 7.64, p < .001, and ERs, F(8, 128) = 2.53, p < .05, but for none of the measures did analyses reveal interactions of Repetition × Knowledge, Fs(16, 256) < 1.46, ps > .11.

Memory Questionnaire

Analysis of ERs in the memory questionnaire completed before the start of the test session revealed that participants remembered the semantic categories better than the names (1.4% vs. 9.8%; F(1, 23) = 9.50, p < .01), but there were no differences between stimuli associated with in-depth versus minimal information, F < 1.

Electrophysiology

Figure 3 depicts influences of knowledge on global field power (GFP), computed as overall ERP activity at each time point across the 56 scalp electrodes (Lehmann & Skrandies, 1980); F values and significance levels for analyses on all electrodes are reported in Table 3. Figure 4 shows ERP amplitudes for both stimulus domains averaged across tasks at the P1 peak sites (PO7, PO8, O1, and O2); statistical values for P1 amplitude analyses at these selected sites are reported in Table 4. As can be seen, knowledge seems to affect processing in the P1 time window, irrespective of task and stimulus domain, with enhanced amplitudes for minimal knowledge as compared with both in-depth knowledge and well-known conditions.
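GFP has a simple closed form: at each time point, it is the spatial standard deviation of the potential over all electrodes, which for average-referenced data equals the root mean square across channels. A one-line sketch, assuming a channels x samples array:

```python
import numpy as np

def global_field_power(erp):
    # erp: channels x samples, average-referenced grand mean ERP;
    # returns one GFP value per time point (Lehmann & Skrandies, 1980)
    return erp.std(axis=0, ddof=0)
```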

Figure 3. 

Global field power of grand mean ERPs as a function of knowledge and scalp distributions of knowledge effects for words (A) and pictures (B) in three tasks (from left to right: naming, semantic categorization, and familiarity decision). Scalp distributions of knowledge effects (in-depth minus minimal knowledge) correspond to mean amplitude values in the P1 (100–150 msec) and N400 (300–500 msec) time windows.

Table 3. F Values and Significance Levels from the Analyses of Variance of ERP Amplitudes at the 56 Employed Electrode Sites in the P1 and N400 Time Segments

Source                         df           100–150 msec   300–500 msec
All Stimuli
  Knowledge                    110, 2530    10.23***       28.55***
    Minimal vs. in-depth       55, 1265     14.17***       25.46***
    Minimal vs. well-known     55, 1265     15.71***       44.78***
    In-depth vs. well-known    55, 1265     <1             10.16***
  Task                         110, 2530    4.06***        14.09***
  Domain                       55, 1265     25.47***       26.98***
  Knowledge × Task             220, 5060    <1             1.24
  Knowledge × Domain           110, 2530    1.36           6.37***
  Task × Domain                110, 2530    <1             4.91***
  Knowledge × Task × Domain    220, 5060    1.11           <1
Words Only
  Knowledge                    110, 2530    4.24***        9.05***
    Minimal vs. in-depth       55, 1265     6.94***        8.43***
    Minimal vs. well-known     55, 1265     5.27***        17.36***
    In-depth vs. well-known    55, 1265     <1             2.26
Pictures Only
  Knowledge                    110, 2530    7.84***        30.67***
    Minimal vs. in-depth       55, 1265     9.56***        28.25***
    Minimal vs. well-known     55, 1265     12.90***       41.67***
    In-depth vs. well-known    55, 1265     1.39           14.12***

Because of the average reference, effects in interaction with electrode site are reported as “main effects” of the respective factors.

***p < .001.

Figure 4. 

Grand mean ERPs at posterior electrode sites as a function of knowledge and topographical distributions of significant knowledge effects for words (A) and pictures (B), averaged over tasks. Topographies depict t values for knowledge effects (in-depth minus minimal knowledge) in the P1 (100–150 msec) and N400 (300–500 msec) time windows; for df = 23, p < .05 if t > 2.069.

Table 4. F Values and Significance Levels from the ANOVAs of ERP Amplitudes at a Region of Interest Including the Four Selected Electrode Sites O1, O2, PO7, and PO8 (Unpooled) in the P1 Time Segment

Source                         df       100–150 msec
All Stimuli
  Knowledge                    2, 46    27.90***
    Minimal vs. in-depth       1, 23    41.06***
    Minimal vs. well-known     1, 23    44.08***
    In-depth vs. well-known    1, 23    <1
  Knowledge × Task             4, 92    <1
  Knowledge × Domain           2, 46    1.72
  Knowledge × Task × Domain    4, 92    1.62
Words Only
  Knowledge                    2, 46    12.68***
    Minimal vs. in-depth       1, 23    22.76***
    Minimal vs. well-known     1, 23    20.56***
    In-depth vs. well-known    1, 23    <1
Pictures Only
  Knowledge                    2, 46    16.50***
    Minimal vs. in-depth       1, 23    25.04***
    Minimal vs. well-known     1, 23    23.81***
    In-depth vs. well-known    1, 23    <1

***p < .001.

In the P1 time window (100–150 msec), ANOVA revealed significant effects of Task and Domain, with higher P1 amplitudes for pictures than for words (see Figure 5). Importantly, the analysis confirmed a significant effect of Knowledge; further tests revealed no difference between the well-known and in-depth knowledge conditions, whereas both differed from the minimal knowledge condition. Separate ANOVAs for the stimulus domains revealed reliable knowledge effects for pictures as well as words, with consistent patterns in both domains: no difference between the well-known and in-depth knowledge conditions, whereas both differed from the minimal knowledge condition.

Figure 5. 

Global field power of grand mean ERPs associated with the minimal and in-depth knowledge conditions, averaged over tasks, superimposed for words and pictures.

A comparison of topographical distributions of P1 knowledge effects between stimulus domains (pictures vs. words), for which difference waveforms (in-depth minus minimal knowledge) were scaled to the individual GFP for each participant, revealed no significant difference (F < 1). Absolute values of knowledge effects (in-depth minus minimal knowledge) were largest over occipital sites (specifically: PO8, followed by O1, O2, and PO7).
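The article states only that difference waveforms were scaled to each participant's GFP before the topographical comparison; a common implementation of such amplitude normalization (vector scaling in the sense of McCarthy & Wood, 1985, which may differ in detail from the procedure actually used) is sketched below.

```python
import numpy as np

def scale_to_individual_gfp(diff_topo):
    # diff_topo: channels-long difference topography (in-depth minus
    # minimal) for one participant, averaged over the P1 window.
    # Dividing by the participant's GFP removes overall amplitude, so a
    # subsequent Domain x Electrode ANOVA compares only the shape of the
    # scalp distributions.
    gfp = diff_topo.std(ddof=0)
    return diff_topo / gfp if gfp > 0 else diff_topo
```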

In addition to the early P1 effects, we found a gradual increase of posterior negativity from minimal over in-depth knowledge to well-known conditions in the N400 time window (300–500 msec), as apparent in Figures 3 and 4. For mean ERP amplitudes in this time window, ANOVA revealed significant effects of Domain and Task and a Task × Domain interaction. Crucially, the effect of Knowledge was again significant. Here, separate comparisons revealed differences between all knowledge conditions. Furthermore, Knowledge interacted with Domain, reflecting stronger knowledge effects for pictures than for words (see Figures 3–5), but further testing showed knowledge effects to be significant for both words and pictures. For pictures, there were reliable differences between all knowledge conditions. For words, amplitudes differed significantly between the minimal and in-depth knowledge conditions and between the minimal knowledge and well-known conditions, but not between the in-depth knowledge and well-known conditions.

As each stimulus was presented nine times in the course of the experiment, and repetition has been found to attenuate semantic ERP effects in word recognition (Kiefer, 2005; Kounios & Holcomb, 1994), we also analyzed whether the observed ERP knowledge effects were modulated by stimulus repetition (collapsing across successive presentations in groups of three to obtain sufficient trials for ERP analyses in each condition). In the P1 time window, there was neither a main effect of Repetition nor a Repetition × Knowledge interaction, Fs < 1.14, ps > .33. In the N400 time window, we found a main effect of Repetition, F(110, 1760) = 2.54, p < .01, with increasing central positivity from Presentations 1–3 over Presentations 4–6 to Presentations 7–9, as reported previously (e.g., Nagy & Rugg, 1989). Repetition also interacted with the Knowledge factor in this time segment, F(220, 3520) = 1.80, p < .01, but this interaction failed to reach significance in the ANOVA without the well-known stimuli, F < 1. Thus, importantly, we found no indication of an influence of stimulus repetition on knowledge effects for the newly learned objects in the N400 or P1 time window. Although this seems to be at variance with the above-mentioned attenuations of semantic effects for repeated stimuli, it is in line with recent experiments showing semantic ERP effects to resist multiple repetitions (Renoult & Debruille, 2011; Debruille & Renoult, 2009).

Further testing revealed the Repetition × Knowledge interaction to be caused by decreasing differences in N400 amplitudes between well-known and newly learned stimuli (from both minimal and in-depth knowledge conditions) from Presentations 1–3 to Presentations 4–6, Fs(55, 935) > 3.65, ps < .001. Interestingly, this repetition-dependent attenuation seems to be responsible for the absence of any differences between well-known words and words with in-depth knowledge in the main analyses (see above), as separate comparisons revealed this difference to be significant at Presentations 1–3, F(55, 1155) = 4.22, p < .01, but not at later presentations, Fs < 1.

DISCUSSION

The present study investigated whether recent findings of knowledge-dependent modulations of early visual processing stages in object recognition (Abdel Rahman & Sommer, 2008; Gauthier et al., 2003) generalize to object names, that is, to visual symbols that are arbitrarily related to conceptual properties of the represented objects.

Knowledge-dependent P1 Modulations

Knowledge affected the amplitude of the P1 component for object pictures as well as their names. P1 amplitude was reduced when comparing in-depth knowledge and well-known conditions with minimal knowledge conditions across all tasks (naming, semantic categorization, familiarity decision; see Figure 3). As discussed in the Introduction, the P1 component presumably reflects activity within extrastriate visual brain regions. However, rapid feedback from higher to lower brain regions already seems to contribute substantially to the underlying activity (Foxe & Simpson, 2002). Such top–down contributions provide a plausible framework for understanding knowledge-dependent ERP amplitude modulations (see below for further discussion). Furthermore, top–down contributions may be task-dependent and may, therefore, induce slight topographical differences between the knowledge effects across tasks (see Figure 3). In general, however, the topographical distributions of the P1 knowledge effects seem to be consistent with the typical P1 localizations as reported, for example, by Di Russo et al. (2002), suggesting sources within dorsal and ventral extrastriate visual cortex.

To our knowledge, the present study is the first to provide evidence for semantic P1 effects across stimulus domains and tasks using a learning paradigm. It does not seem possible to base strong claims on the comparisons with the well-known condition, which included physically different stimuli that also varied on other relevant dimensions such as frequency of occurrence and familiarity (higher for well-known stimuli) as well as recent exposure (lower for well-known stimuli because they were not shown during the learning session). However, the amplitude differences between newly learned stimuli associated with in-depth versus minimal information suggest that the depth of conceptual knowledge influences perceptual processing not only of visual objects but also their written names.

Please note that the P1 amplitude modulation cannot be explained in terms of learning-induced differences in visual experience per se. Participants were first familiarized with all objects and their names before the amount of object-related information was manipulated, ensuring identical initial learning conditions and familiarity levels. Furthermore, perceptual exposure was held constant between conditions. To preclude enhanced attention to and visual inspection of object names while functional as compared with unrelated information was provided, the names were not shown while participants listened to the stories. Although object pictures were displayed during knowledge acquisition in the present experiment, possible concerns that the effects may have been caused by knowledge-induced enhancement of exploration and perceptual encoding of visual object features during the learning phase can be ruled out on the grounds of control conditions in the study of Abdel Rahman and Sommer (2008; Group 2 in Experiment 1 as well as all participants in Experiment 2). In that study, the same P1 effects as observed here were obtained although object pictures had not been shown during knowledge acquisition. Therefore, we are confident that the present results reflect an influence of the depth of conceptual knowledge on early processes during word reading.

Possible Mechanisms Underlying Knowledge-dependent P1 Modulations

Word reading is a complex and multifaceted process, so that different mechanisms may underlie the obtained knowledge-dependent modulations. Perceptual representations directly based on the visual input activate the corresponding phonological (e.g., Ashby, 2010; Harm & Seidenberg, 2004) and conceptual representations. Conceptual representations comprise information about the word's referent, presumably its sensory, functional, and encyclopedic attributes (Barsalou, 2008; McRae, Cree, Seidenberg, & McNorgan, 2005; Cree & McRae, 2003), not inherently related to the perceptual features of the word. Visual processes involved in word reading are mostly attributed to the ventral visual stream (occasionally supported by dorsal areas; see, e.g., Rosazza, Cai, Minati, Paulignan, & Nazir, 2009), where activation spreads from unspecific visual representations (lines, edges, etc.) to word-form-specific (orthographic) representations of increasing complexity. Word-form-specific representations are presumably localized in a region within the left fusiform gyrus (visual word form area; McCandliss, Cohen, & Dehaene, 2003), which is modulated by factors such as orthographic familiarity (e.g., Kronbichler et al., 2008). It is interesting to note that the assumed electrophysiological correlate of word-form-specific processes within the visual word form area, the left-lateralized N170 component (Maurer, Brandeis, & McCandliss, 2005; Cohen et al., 2000), peaks at about 170 msec, hence after the P1 component. Thus, conceptual information seems to modulate visual feature processing even below the level of word-form-specific (orthographic) representations, in line with the observation of very similar effects for object and word perception. In the following, we tentatively discuss theoretical alternatives as to the functional localization and mechanism underlying the observed knowledge-dependent modulation within the visual word processing stream.

First, knowledge-induced P1 modulations during word reading may be because of rapid feedback from higher-level conceptual to lower-level perceptual areas, providing evidence for very early interactions between vision and knowledge even for symbolic representations without inherent relations between perceptual features and conceptual attributes. In line with this view, recent evidence suggests the temporal sequence of visual processing to be far more rapid than formerly believed (Sereno & Rayner, 2003; Foxe & Simpson, 2002), with the P1 component presumably reflecting feedback activity from higher to lower brain regions. Thus, P1 amplitude modulations may be induced by feedback from conceptual systems (Binder, Desai, Graves, & Conant, 2009).

Second, although our learning paradigm strictly controlled for perceptual exposure, it is not clear whether providing functional as compared with unrelated information may have enhanced the frequency or intensity of rehearsing associations and reactivating visual representations in the absence of exposure. Such knowledge-induced processes might have caused differences in the visual familiarity of object names and might have contributed to the obtained effects in addition to (or even instead of) rapid on-line feedback from semantic representations. However, this explanation raises the question of why enhanced visual familiarity did not result in facilitated performance. Most importantly, such activations in the absence of exposure may be inherently related to conceptual knowledge in realistic situations. This raises the theoretical issue of whether controlling such associations and/or reactivations is desirable, in addition to the pragmatic issue of precluding differential reactivation during learning and retention. In any case, such a semantically driven enhancement of visual familiarity through rehearsed associations and reactivation would still imply an influence of the depth of conceptual knowledge on perceptual representations of visual words, although it would be less informative as to when those influences take place.

Somewhat similarly, it might be argued that the effects observed during word processing may be elicited by automatically activating the associated object images, that is, the visual features of the word's referent. Thus, although obtained during word reading, the modulations may be more closely related to processing pictorial representations, rather than names. However, as there is indeed growing evidence that perceptual features of a word's referent are automatically activated during word reading (e.g., Barsalou, 2008), the activation of visual object features does not seem to be a confound of the present study's design but a part of word reading in general, at least when reading concrete words whose conceptual representations presumably include the perceptual features of their referents (e.g., Barsalou, 2008; Kiefer, Sim, Herrnberger, Grothe, & Hoenig, 2008; McRae et al., 2005). Furthermore, if the knowledge effects were because of automatic activation of the referent's visual features arbitrarily associated with their symbolic representations (the visual words), it would seem quite remarkable that this process would already occur about 120 msec after word presentation. Additionally, the findings would provide evidence not only for early access to visual features of a word's referent but also for a knowledge-dependent modulation of either the representation of the visual features or the activation of this representation during reading.

It may also be suggested that conceptual knowledge modulates the quality of lexical representations by affecting phonological processing efficiency. During learning, participants associated object pictures with orthographic as well as phonological representations of their names and presumably also rehearsed the names phonologically. Although phonological rehearsal was presumably most prominent during memorization in the first part of the learning session (hence before the amount of object-related information was manipulated), participants may have continued to occasionally rehearse the names during and after knowledge acquisition. Thus, knowledge may have influenced the frequency or intensity of rehearsing phonological codes of object names, enhancing the quality of phonological representations. There is evidence that the involvement of phonology in reading is automatic and can take place as early as within 100 msec (Ashby, 2010; Wheat, Cornelissen, Frost, & Hansen, 2010; Ashby, Sanders, & Kingston, 2009; Harm & Seidenberg, 2004; Pammer et al., 2004). Thus, the obtained early influences of conceptual knowledge during word reading may be mediated by phonology. However, as phonology is presumably more tightly associated with object names than with object pictures, the similarity of knowledge effects across stimulus domains diminishes the plausibility of this account. Furthermore, as above, had knowledge influenced phonology in our experiment, such influences might be inherently related to conceptual knowledge rather than constituting a confound induced by the present study's design. This interpretation would still suggest semantic influences on early processes during reading.

Alternatively, could differences in learning difficulty underlie the observed P1 modulations? During the second part of the learning session, performance was impaired in the in-depth knowledge condition as compared with the minimal knowledge condition, and there is indeed evidence that (perceptual) learning difficulty can modulate P1 amplitudes (Wang, Song, Qu, & Ding, 2010). However, Wang et al. found enhanced P1 amplitudes for more difficult learning, whereas in the present study the P1 was reduced in the condition with impaired performance during learning. Furthermore, the P1 effects did not match performance in the test session: neither retrieval from memory as assessed by the questionnaire completed at the beginning of the session nor performance in any of the three speeded tasks performed during EEG recordings differed between the minimal and in-depth knowledge conditions. Hence, the performance data do not support an explanation in terms of difficulty. Some reservation might seem appropriate because possible relations between the P1 effects and performance may have been obscured by knowledge effects on subsequent processing stages, as reflected in the N400 time window (see below). Depending on the manipulated factor, the relation with performance may vary for both the P1 (compare, e.g., Dambacher et al., 2009, with Segalowitz & Zheng, 2009) and the N400 component (see Kutas & Federmeier, 2011; Kutas et al., 2006, for reviews), making it hard to predict the combined effects. Most crucially, however, even if the P1 effects reflected differences in learning difficulty, this would presumably concern perceptual difficulty, as, for example, in the study by Wang et al. (2010). As perceptual factors (physical stimuli and visual exposure) as well as the tasks were kept constant in the present experiment, any modulations of perceptual learning difficulty would have to be mediated by the acquisition of semantic information. Therefore, an explanation in terms of learning difficulty would still imply semantic influences on perceptual processing of object names.

Finally, it might be suggested that the obtained P1 effects reflect differences in attention, as the P1 has been shown to increase with visual attention (Hillyard & Anllo-Vento, 1998). As with the explanations in terms of visual familiarity or perceptual learning difficulty, however, the plausibility of such an account is reduced by the fact that performance did not differ between knowledge conditions. In any case, because perceptual factors and tasks were held constant between knowledge conditions, possible differences in visual attention would have to be semantically mediated. Thus, as for the above-mentioned interpretations, an account of the present findings in terms of visual attention would still imply influences of the depth of conceptual knowledge on visual processes during word reading.

In summary, the present study is the first to employ a learning paradigm and to report evidence for semantic P1 effects across different tasks during word reading. As noted above, it is currently difficult to make unequivocal statements as to which subprocess of word reading is modulated by the depth of conceptual knowledge and through which mechanism the depth of knowledge induces the observed modulation. Although the lack of knowledge effects on performance raises issues for accounts based on concepts with rather direct relations to behavioral facilitation or impairment (such as familiarity, attention, and difficulty), some interpretational ambiguity persists. This ambiguity, however, does not seem to stem from specific problems or confounds in the design of the present study but appears inherent to the complexities of reading and conceptual knowledge. Although further research is desirable, it seems safe to conclude from the present results that the depth of semantic knowledge associated with concrete words modulates early (100–150 msec) processes during word reading.

Relation to Previous Evidence for Semantic Influences on Early Word Perception

In suggesting such an early onset of semantic influences on word reading, our results converge with recent EEG studies as discussed in the Introduction (Dambacher et al., 2009; Scott et al., 2009; Segalowitz & Zheng, 2009; Penolazzi et al., 2007; Wirth et al., 2007; Skrandies, 1998). Importantly, the present findings extend previous work by showing that these early semantic effects do not depend on context-induced expectations and appear even though the same physical stimuli were used across different knowledge conditions.

Although it is not trivial to relate signals obtained from fMRI and EEG measurements, it is interesting to note a possible link between the reduced P1 amplitudes for words associated with in-depth as compared with minimal knowledge observed here and the diminished hemodynamic activation in visual brain regions for visual words with many rather than few semantic associations (Pexman et al., 2007). In both studies, richer associated semantics were accompanied by decreased visual activation.

On the other hand, previous reports of early semantic ERP effects in reading have been mixed with respect to the polarity of the modulations. Segalowitz and Zheng (2009) compared ERPs to words presented during standard lexical decisions with a lexical semantic version of the task. In the latter task, all words within a given block belonged to the same semantic category; after each block, participants performed a four-choice semantic category match for the presented items. In contrast to our finding of reduced amplitudes in the conditions involving more semantic activation, Segalowitz and Zheng observed enhanced P1 amplitudes in the lexical semantic condition. Of course we can only speculate on the reasons, but one important difference between these studies may be that the semantic manipulation in the experiment of Segalowitz and Zheng was task-relevant. Thus, their semantic condition involved not only more semantic activation, as our in-depth knowledge condition, but also an additional semantic task, which could have triggered more attentive word processing. Wirth et al. (2007) found a decreased early negativity for targets following an unrelated relative to a related prime. Although the polarity of the effect, that is, more positivity in the condition presumably involving less semantic activation, seems to fit with the present results, their effect takes place mainly during the P1–N1 transition period, which makes direct comparison difficult. Manipulating cloze probability within sentences, Dambacher et al. (2009) found reduced P1 amplitudes for highly predictable words, whereas Penolazzi et al. (2007) observed the opposite, namely an amplitude reduction in the low-predictability condition, although only for short words. Clearly, further research is required for a better understanding of the factors modulating the polarity of semantic P1 amplitude modulations.

Knowledge-dependent N400 Modulations

As expected, knowledge also modulated N400 amplitudes, which increased with the amount of associated information from minimal over in-depth knowledge conditions to well-known stimuli. This finding replicates the N400 modulations during object recognition reported by Abdel Rahman and Sommer (2008) and is well in line with evidence that concrete words elicit larger N400 amplitudes than abstract words (Kounios & Holcomb, 1994). There are two main accounts of concreteness effects, namely dual coding (Paivio, 1986) and context availability (Schwanenflugel, 1991). Interestingly, both accounts imply richer representations in semantic space for more concrete words, either because of additional pictorial representations or because of stronger contextual embedding. Similarly, in the present study, semantic representations were presumably enriched for stimuli associated with in-depth as compared with minimal semantic information. Hence, both concreteness and depth of knowledge may enhance N400 amplitudes as a result of enriched semantic representations.

As discussed in the Introduction, it is important to note that, whereas some studies observe early semantic ERP effects (Dambacher et al., 2009; Scott et al., 2009; Segalowitz & Zheng, 2009; Penolazzi et al., 2007; Wirth et al., 2007; Skrandies, 1998), others do not report any semantic modulation before the N400 component (e.g., Cristescu, Devlin, & Nobre, 2006; see also Kutas & Federmeier, 2011; Kutas et al., 2006, for reviews); currently it seems unclear which factors cause such differences across studies. It may be worth mentioning that similarly inconsistent evidence has also been obtained on the time line of lexicality effects (Segalowitz & Zheng, 2009; Rabovsky, Álvarez, Hohlfeld, & Sommer, 2008; Braun et al., 2006; Sereno, Rayner, & Posner, 1998)—but please note that words and pseudowords differ not only in terms of association with semantic information but also on perceptual dimensions such as visual familiarity.

Some factors that may contribute to such inconsistencies have been discussed by Hauk, Pulvermüller, Ford, Marslen-Wilson, and Davis (2009), who argue that early semantic and other psycholinguistic ERP effects during reading may often go undetected because they are rather small and short-lived as compared with later N400 modulations. Furthermore, influences of perceptual factors (such as word length) may induce latency variability of the effects within conditions, further degrading their impact on the average waveform. Therefore, very strict control of perceptual factors may be necessary to observe these early modulations (Penolazzi et al., 2007). Although this argument is plausible, many other variables, such as the experimental context, the employed task, or the direction of attention, may also modulate the occurrence and characteristics of semantic ERP modulations. Further research is required to identify the relevant factors.

Similar Onsets for Influences of Stimulus Domain and Knowledge

An interesting, although unpredicted, observation of our study was that differential processing of objects and words did not precede the effects of knowledge on perception. As can be seen in Figure 5, effects of stimulus domain and knowledge effects within domains emerged almost simultaneously in the P1 time window. In line with established processing differences between objects and words (Moore & Price, 1999), the basic perceptual distinction between these stimulus domains modulated the ERP more strongly than the semantic distinction between stimuli with minimal and in-depth knowledge. However, domain and knowledge effects did not differ in terms of their time course. Although not necessarily allowing for conclusions concerning the very first analysis of visual input, the finding that basic visual (pictures vs. words) and higher conceptual (amount of information) variables elicit early influences on processing in a similar time window provides strong evidence against assumptions of discrete and modular processing stages in visual recognition, with processing in one stage reaching a settled end state before the output of this stage is made available for the next level (Pylyshyn, 1999; Fodor, 1983). Instead, the observation supports interactive models of cognition assuming feedback between cognitive subprocesses (Bar et al., 2006; Lamme & Roelfsema, 2000; Seidenberg & McClelland, 1989; McClelland, Rumelhart, & the PDP Research Group, 1986).

Conclusion

In summary, the present study shows that the depth of conceptual knowledge influences early processes during word reading. Thus, intrinsic relations between perceptual and conceptual properties do not seem necessary for early influences of knowledge on vision.

APPENDIX: OBJECT NAMES

Rare (Real) | Rare (Fictitious) | Well-known (Rare and Fictitious)
Tonometer | Trinisphäre | Besen [broom]
Theremin | Ornithopter | Palette [color palette]
Shruti | Heliophiole | Eimer [bucket]
Pallheber | Grondoq | Schlüssel [key]
Kartik | Erkomat | Sofa [sofa]
Heper | Tronser | Topf [pot]
Groma | Insolatus | Gitarre [guitar]
Brechel | Nikipur | Hammer [hammer]
Sobriator | Nahilator | Zahnbürste [toothbrush]
Trasofin | Karemma | Lippenstift [lipstick]
Calimat | Planeo | Drache [dragon]
Plondex | Brenette | Darthvader
Ganosis | Sonocor | Enterprise
Vimax | Lucinet | Flaschengeist [genie]
Nuscüp | Mobero | Fliegender Teppich [magic carpet]
Adder | Pato | Hexenhaus [gingerbread house]
Fliktor | Kitara | R2D2
Notande | Opane | Ring [ring]
Stumer | Tongo | UFO [UFO]
Squonker | Scarus | Zauberhut [magic hat]

Acknowledgments

This work was supported by German Research Foundation grants AB 277/1 and AB 277/3 to Rasha Abdel Rahman. We thank Johannes Rost, Kerstin Unger, and David Wisniewski for assistance and six anonymous reviewers for helpful comments.

Reprint requests should be sent to Milena Rabovsky, Department of Psychology, Humboldt-Universität zu Berlin, Rudower Chaussee 18, 12489 Berlin, Germany, or via e-mail: milena.rabovsky@hu-berlin.de.

REFERENCES

Abdel Rahman, R., & Sommer, W. (2008). Seeing what we know and understand: How knowledge shapes perception. Psychonomic Bulletin & Review, 15, 1055–1063.
Ashby, J. (2010). Phonology is fundamental in skilled reading: Evidence from ERPs. Psychonomic Bulletin & Review, 17, 95–100.
Ashby, J., Sanders, L. D., & Kingston, J. (2009). Skilled readers begin processing sub-phonemic features by 80 ms during visual word recognition: Evidence from ERPs. Biological Psychology, 80, 84–94.
Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., et al. (2006). Top–down facilitation of visual recognition. Proceedings of the National Academy of Sciences, U.S.A., 103, 449–454.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
Berg, P., & Scherg, M. (1991). Dipole modeling of eye activity and its application to the removal of eye artefacts from the EEG and MEG. Clinical Physiology and Physiological Measurements, 12, 49–54.
Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19, 2767–2796.
Braun, M., Jacobs, A. M., Hahne, A., Ricker, B., Hofmann, M., & Hutzler, F. (2006). Model-generated lexical activity predicts graded ERP amplitudes in lexical decision. Brain Research, 1073–1074, 431–439.
Carr, T. H., & Pollatsek, A. (1985). Recognizing printed words: A look at current models. In D. Besner, T. G. Waller, & G. E. MacKinnon (Eds.), Reading research: Advances in theory and practice 5 (pp. 1–82). San Diego, CA: Academic Press.
Cohen, L., Dehaene, S., Naccache, L., Lehericy, S., Dehaene-Lambertz, G., Henaff, M. A., et al. (2000). The visual word form area: Spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain, 123, 291–307.
Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. C. (2001). DRC: A dual-route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204–256.
Cree, G. S., & McRae, K. (2003). Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Journal of Experimental Psychology: General, 132, 163–201.
Cristescu, T. C., Devlin, J. T., & Nobre, A. C. (2006). Orienting attention to semantic categories. Neuroimage, 33, 1178–1187.
Dambacher, M., Rolfs, M., Göllner, K., Kliegl, R., & Jacobs, A. M. (2009). Event-related potentials reveal rapid verification of predicted visual input. PLoS ONE, 4, e5047.
Debruille, J. B., & Renoult, L. (2009). Effects of semantic matching and semantic category on reaction time and N400 that resist numerous repetitions. Neuropsychologia, 47, 506–517.
Di Russo, F., Martínez, A., Sereno, M. I., Pitzalis, S., & Hillyard, S. A. (2002). Cortical sources of the early components of the visual evoked potential. Human Brain Mapping, 15, 95–111.
Fodor, J. A. (1983). The modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT Press.
Forster, K. I. (1976). Accessing the mental lexicon. In R. J. Wales & E. W. Walker (Eds.), New approaches to language mechanisms (pp. 257–287). Amsterdam: North-Holland.
Foxe, J. J., & Simpson, G. V. (2002). Flow of activation from V1 to frontal cortex in humans. Experimental Brain Research, 142, 139–150.
Gauthier, I., James, T. W., Curby, K. M., & Tarr, M. J. (2003). The influence of conceptual knowledge on visual discrimination. Cognitive Neuropsychology, 20, 507–523.
Harm, M. W., & Seidenberg, M. S. (2004). Computing the meaning of words in reading: Cooperative division of labor between visual and phonological processing. Psychological Review, 111, 662–720.
Hauk, O., Pulvermüller, F., Ford, M., Marslen-Wilson, W. D., & Davis, M. H. (2009). Can I have a quick word? Early electrophysiological manifestations of psycholinguistic processes revealed by event-related regression analysis of the EEG. Biological Psychology, 80, 64–74.
Hillyard, S. A., & Anllo-Vento, L. (1998). Event-related brain potentials in the study of visual selective attention. Proceedings of the National Academy of Sciences, U.S.A., 95, 781–787.
Hino, Y., Lupker, S. J., & Pexman, P. M. (2002). Ambiguity and synonymy effects in lexical decision, naming, and semantic categorization tasks: Interactions between orthography, phonology, and semantics. Journal of Experimental Psychology: Learning, Memory, & Cognition, 28, 686–713.
Huynh, H., & Feldt, L. S. (1976). Estimation of the Box correction for degrees of freedom from sample data in randomized block and split-plot designs. Journal of Educational Statistics, 1, 69–82.
James, T. W., & Gauthier, I. (2003). Auditory and action semantic features activate sensory-specific perceptual brain regions. Current Biology, 13, 1792–1796.
Kiefer, M. (2005). Repetition-priming modulates category-related effects on event-related potentials: Further evidence for multiple cortical semantic systems. Journal of Cognitive Neuroscience, 17, 199–211.
Kiefer, M., Sim, E.-J., Herrnberger, B., Grothe, J., & Hoenig, K. (2008). The sound of concepts: Four markers for a link between the auditory and conceptual brain systems. The Journal of Neuroscience, 28, 12224–12230.
Kiefer, M., Sim, E.-J., Liebich, S., Hauk, O., & Tanaka, J. (2007). Experience-dependent plasticity of conceptual representations in human sensory-motor areas. Journal of Cognitive Neuroscience, 19, 525–542.
Kounios, J., & Holcomb, P. J. (1994). Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 804–823.
Kronbichler, M., Klackl, J., Richlan, F., Schurz, M., Staffen, W., Ladurner, G., et al. (2008). On the functional neuroanatomy of visual word processing: Effects of case and letter deviance. Journal of Cognitive Neuroscience, 21, 222–229.
Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621–647.
Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207, 203–205.
Kutas, M., & van Petten, C. K. (2006). Psycholinguistics electrified II (1994–2005). In M. A. Gernsbacher & M. Traxler (Eds.), Handbook of psycholinguistics (2nd ed., pp. 659–724). New York: Elsevier Press.
Lamme, V. A. F., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23, 571–579.
Lehmann, D., & Skrandies, W. (1980). Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroencephalography and Clinical Neurophysiology, 48, 609–621.
Maurer, U., Brandeis, D., & McCandliss, B. D. (2005). Fast, visual specialization for reading in English revealed by the topography of the N170 ERP response. Behavioral and Brain Functions, 1, 13.
McCandliss, B. D., Cohen, L., & Dehaene, S. (2003). The visual word form area: Expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences, 7, 293–299.
McClelland, J. L., Rumelhart, D. E., & the PDP Research Group (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2). Cambridge, MA: MIT Press.
McRae, K., Cree, G. S., Seidenberg, M. S., & McNorgan, C. (2005). Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods, 37, 547–559.
Moore, C. J., & Price, C. J. (1999). Three distinct ventral occipitotemporal regions for reading and object naming. Neuroimage, 10, 181–192.
Nagy, M. E., & Rugg, M. D. (1989). Modulation of event-related potentials by word repetition: The effects of inter-item lag. Psychophysiology, 26, 431–436.
Paivio, A. (1986). Mental representations: A dual coding approach. New York: Oxford University Press.
Pammer, K., Hansen, P. C., Kringelbach, M. L., Holliday, I., Barnes, G., Hillebrand, A., et al. (2004). Visual word recognition: The first half second. Neuroimage, 22, 1819–1825.
Pecher, D. (2001). Perception is a two-way junction: Feedback semantics in word recognition. Psychonomic Bulletin & Review, 8, 545–551.
Penolazzi, B., Hauk, O., & Pulvermüller, F. (2007). Early semantic context integration and lexical access as revealed by event-related brain potentials. Biological Psychology, 74, 374–388.
Pexman, P. M., Hargreaves, I. S., Edwards, J. D., Henry, L. C., & Goodyear, B. G. (2007). The neural consequences of semantic richness: When more comes to mind, less activation is observed. Psychological Science, 18, 401–406.
Picton, T. W., Bentin, S., Berg, P., Donchin, E., Hillyard, S. A., Johnson, R., et al. (2000). Guidelines for using human event-related potentials to study cognition: Recording standards and publication criteria. Psychophysiology, 37, 127–152.
Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22, 341–423.
Rabovsky, M., Álvarez, C. J., Hohlfeld, A., & Sommer, W. (2008). Is lexical access autonomous? Evidence from combining overlapping tasks with recording event-related brain potentials. Brain Research, 1222, 156–165.
Reimer, J. F., Lorsbach, T. C., & Bleakney, D. M. (2008). Automatic semantic feedback during visual word recognition. Memory & Cognition, 36, 641–658.
Renoult, L., & Debruille, J. B. (2011). N400-like potentials and reaction times index semantic relations between highly repeated individual words. Journal of Cognitive Neuroscience, 23, 905–922.
Rosazza, C., Cai, Q., Minati, L., Paulignan, Y., & Nazir, T. A. (2009). Early involvement of dorsal and ventral pathways in visual word recognition: An ERP study. Brain Research, 1272, 32–44.
Schwanenflugel, P. J. (1991). Why are abstract concepts hard to understand? In P. J. Schwanenflugel (Ed.), The psychology of word meanings (pp. 223–250). Hillsdale, NJ: Erlbaum.
Scott, G. G., O'Donnell, P. J., Leuthold, H., & Sereno, S. C. (2009). Early emotion word processing: Evidence from event-related potentials. Biological Psychology, 80, 95–104.
Segalowitz, S. J., & Zheng, X. (2009). An ERP study of category priming: Evidence of early lexical semantic access. Biological Psychology, 80, 122–129.
Seidenberg, M. S., & McClelland, J. L. (1989). A distributed developmental model of word recognition and naming. Psychological Review, 96, 523–568.
Sereno, S. C., & Rayner, K. (2003). Measuring word recognition in reading: Eye movements and event-related potentials. Trends in Cognitive Sciences, 7, 489–493.
Sereno, S. C., Rayner, K., & Posner, M. I. (1998). Establishing a time-line of word recognition: Evidence from eye-movements and event-related potentials. NeuroReport, 9, 2195–2200.
Skrandies, W. (1998). Evoked potential correlates of semantic meaning—A brain mapping study. Cognitive Brain Research, 6, 173–183.
Wang, Y., Song, Y., Qu, Z., & Ding, Y. (2010). Task difficulty modulates electrophysiological correlates of perceptual learning. International Journal of Psychophysiology, 75, 234–240.
Wheat, K. L., Cornelissen, P. L., Frost, S. J., & Hansen, P. C. (2010). During visual word recognition, phonology is accessed within 100 ms and may be mediated by a speech production code: Evidence from magnetoencephalography. The Journal of Neuroscience, 30, 5229–5233.
Wirth, M., Horn, H., Koenig, T., Stein, M., Federspiel, A., Meier, B., et al. (2007). Sex differences in semantic processing: Event-related brain potentials distinguish between lower and higher order semantic analysis during word reading. Cerebral Cortex, 17, 1987–1997.