Whether verbal labels help infants visually process and categorize objects is a contentious issue. Using electroencephalography, we investigated whether possessing familiar or novel labels for objects directly enhances 1-year-old children's neural processes underlying the perception of those objects. We found enhanced gamma-band (20–60 Hz) oscillatory activity over the visual cortex in response to seeing objects with labels familiar to the infant (Experiment 1) and those with novel labels just taught to the infant (Experiment 2). No such effect was observed for objects that infants were familiar with but had no label for. These results demonstrate that learning verbal labels modulates how the visual system processes the images of the associated objects and suggest a possible top–down influence of semantic knowledge on object perception.
There is a long-standing debate on whether the language we speak affects the way we describe, remember, or perceive the world. Evidence has been presented both in support of (Davidoff, 2001) and against (Kay & Regier, 2007) an interaction between the linguistic and perceptual distinctions that the speakers of a given language make. It has also been extensively discussed whether possessing language modulates an organism's perception of the world (Phillips & Santos, 2007; Mandler, 2004). This question is particularly relevant with respect to human development. Some propose that acquiring language, specifically object names, impacts children's learning about objects (Mandler, 2004). For example, acquiring labels has been found to guide the learning of object properties: Children readily generalize the properties of an object to all objects having the same name (Sloutsky, Lo, & Fisher, 2001) and are more likely to learn the distinctive properties of two objects when the objects have been given different names (Norcross & Spiker, 1957). Although these studies demonstrate that learning words guides knowledge acquisition, they do not provide evidence for the stronger claim that labeling has an impact on object perception.
This debate has recently been refueled by findings that learning object names facilitates the visual processing of objects, both in infants and in adults. In adults, categorization performance on novel objects is facilitated by learning names for the categories even when the labels are entirely redundant to the visual information (Lupyan, Rakison, & McClelland, 2007). Teaching a label for an unfamiliar grapheme, or spontaneously labeling it, also speeds up its visual detection in an array of graphemes (Lupyan, 2008a, 2008b). A similar phenomenon exists in infancy, where learning object names helps infants individuate or categorize objects. In a series of studies, 9- to 12-month-olds successfully grouped new objects into categories only when the objects were verbally labeled during presentation and the same label was used for each object (Waxman & Braun, 2005; Balaban & Waxman, 1997). This effect is specific to words and does not extend to nonlinguistic sounds such as tones (Plunkett, Hu, & Cohen, 2007). Similar effects have also been reported in studies investigating the ability to track the number of objects across occlusion. In these studies, infants were more likely to individuate objects if they knew their names (Rivera & Zawaydeh, 2007; Xu & Carey, 1996) or if the objects were consistently associated with different labels during the study (Feigenson & Halberda, 2008; Xu, 2002).
In adults, these top–down effects are thought to facilitate the detection of meaningful stimuli in the visual scene (Lupyan & Spivey, 2008). In infants, who are in the process of acquiring object knowledge, top–down effects from labeling have been suggested also to facilitate the learning of categories and concepts (Gopnik & Meltzoff, 1992). It has been proposed that the effects of labeling on object processing stem from a general property of common names: they refer to object kinds or categories at sub- or superordinate levels rather than to particular exemplars. In one view, names act as placeholders for object kinds; they do not affect the way objects are represented but help keep track of the number of different kinds present in a scene (Xu, 2007). In a slightly different account, naming orients attention to those perceptual object properties that could assist categorization or discrimination (Waxman, 2004). In this case, a labeled object is expected to be represented differently, either by more visual information or by different types of features (e.g., shape but not texture). Behavioral studies have shown that infants look longer toward objects with labels they knew (Schafer & Plunkett, 1999) or had just acquired (Baldwin & Markman, 1989). To date, however, there is little evidence that labeling directly modulates object-directed attention and object representation in the infant brain.
The way semantic information affects bottom–up processing has been extensively studied in the adult brain. Feed-forward processing in occipital and temporal visual areas is modulated by information fed back from frontal areas involved in semantic processing or in extracting the “gist” of visual scenes (Bar et al., 2006; Lamme & Roelfsema, 2000). No such study has been carried out in infancy. In the present study, we investigated the top–down effects of labeling on perceptual object processing by measuring brain activity while infants were presented with pictures of objects for which they had acquired a label prior to coming to the lab (Experiment 1) or during a word-learning session (Experiment 2). Brain activation for these objects was compared with that elicited by familiar but unlabeled objects. We focused on a particular measure of brain activity that has been shown to reflect visual object processing in both infants and adults (Csibra & Johnson, 2007; Tallon-Baudry & Bertrand, 1999). The amplitude of induced oscillatory gamma-band activity in adults is frequently correlated with the familiarity of the object kind eliciting that activation (Busch, Herrmann, Muller, Lenz, & Gruber, 2006; Gruber, Tsivilis, Montaldi, & Muller, 2004; Gruber, Muller, & Keil, 2002; Tallon-Baudry, Bertrand, Delpuech, & Pernier, 1996). Some suggest that these familiarity effects reflect a matching mechanism between bottom–up information and object representations in long-term memory (Herrmann, Munk, & Engel, 2004).
Because most objects adults are familiar with also have a label (or can be assimilated into a labeled object kind), previous studies have not teased apart the contribution of object familiarity from that of the label (or of kind membership). However, this question can be addressed in young infants, who are in the process of learning names for objects. Previous studies have shown that gamma-band activity in infants is induced under conditions similar to those in adults, such as when infants have to maintain an object representation in memory while the object is occluded (Southgate, Csibra, Kaufman, & Johnson, 2008; Kaufman, Csibra, & Johnson, 2005). We made use of this electrophysiological signature of object representation to explore the interaction between naming and visual object processing in 1-year-old children.
We presented 12 one-year-old children with pictures of objects belonging to three categories (labeled, familiar, and unfamiliar). The verbal labels of the objects used in the labeled category (e.g., banana, duck) are generally known by infants at this age (Hamilton, Plunkett, & Schafer, 2000). We selected objects for the familiar category that are thought to be present in the infants' visual environment but are unlikely to be recognized by their name (e.g., umbrella, cupboard). The objects in the unfamiliar category represented items that 1-year-old children rarely see (e.g., stapler, harp). Images of objects from the three categories were presented in a random order. During the presentation, we continuously recorded infants' EEG, which was later analyzed for induced gamma-band oscillatory responses over the occipital cortex.
Twelve infants were included in the final sample. The average age of the three boys and nine girls was 365.9 days (range = 348–378 days). A further 19 infants were tested but not included because they refused to wear the electrode net (11) or did not accumulate enough artifact-free trials (8).
Stimuli were selected from color photographs depicting 24 object kinds, 8 different object kinds for each condition (labeled condition, familiar condition, and unfamiliar condition). Parents were shown a sample photograph of each object kind and were asked to choose the same number of object kinds within each condition, corresponding to their child's actual word comprehension and familiarity with the objects. For the labeled condition, they were asked to choose objects that their child recognized by name from the following list: duck, banana, apple, cup, dog, car, shoe, and spoon. For the familiar condition, parents were asked to select objects that were familiar to their children but whose names the children did not know, from the following list: chair, bag, lamp, leaf, umbrella, clock, butterfly, and bread loaf. For the unfamiliar condition, they chose objects that they judged were novel to their child from the following list: sushi, stapler, crab, lava lamp, jigsaw, guitar, harp, and torch. Parents chose 4, 5, or 6 object kinds (average = 4.9) for each condition. The two most common choices were banana (90%) and duck (83.3%) for the labeled condition, bag (100%) and leaf (75%) for the familiar condition, and sushi (83.3%) and stapler (66.6%) for the unfamiliar condition. For each object kind, four different photographs were presented (e.g., four different toy ducks). For example, infants for whom the caregiver chose four different object kinds per condition were presented with 16 different images within each condition and with 48 different images in total. The objects in the three conditions were similar in surface area, F(2, 98) = 1.27, p > .1, and average luminosity, F(2, 98) = 1.47, p > .1.
Infants were seated in their parents' lap, in a dimmed room, approximately 80 cm away from the stimulus monitor. The caregiver was asked not to point to the screen and not to name the stimuli. Images of objects from the three conditions were presented in a fully randomized order. The images were 17 × 17 cm in size, subtended 11° to 12° of visual angle at the 80-cm viewing distance, were presented on a white background, and stayed on the screen for 700 msec. An attention-getting stimulus (a spiraling colorful geometric shape) was presented 200 msec after the stimulus was removed from the screen, for a random interval between 500 and 800 msec. We recorded the EEG and videotaped the infants' looking behavior as long as they were willing to watch the screen. When infants were distracted, computer sounds were played to reorient their attention to the screen.
Data recording and analysis
The EEG was recorded using a Geodesic Sensor Net comprising 62 electrodes distributed evenly across the scalp (Tucker, 1993). The vertex electrode served as reference, and the EEG was sampled at 250 Hz. The continuous EEG was segmented into 2100-msec-long segments (500 msec before and 1600 msec after stimulus onset). Trials in which infants looked away or that contained movement artifacts were excluded from the analysis. Each infant was required to contribute at least 10 trials to each condition. The average number of artifact-free trials was 18.4 in the labeled condition (range = 10–38), 20.0 in the familiar condition (range = 10–48), and 20.2 in the unfamiliar condition (range = 10–48). Induced gamma-band oscillatory activation was analyzed using an established procedure (Kaufman et al., 2005; Kaufman, Csibra, & Johnson, 2003; Csibra, Davis, Spratling, & Johnson, 2000). We applied a continuous wavelet transform to single trials of EEG in each channel, using Morlet wavelets at 1-Hz intervals in the 10- to 90-Hz range, and average wavelet coefficients for each condition were calculated by taking the mean across trials within infants. To exclude distortions introduced by this analysis, 200-msec intervals were removed from each end of the segment before the activations were baseline corrected by subtracting the average activity during the 200-msec interval starting 300 msec before stimulus onset. For each infant, the average amplitude of the induced oscillatory activity within the gamma range (20–60 Hz) was extracted. This is the frequency range that has been associated with object processing in previous infant studies (Kaufman et al., 2005; Csibra, Davis, et al., 2000). To assess the activation of visual areas, five groups of posterior electrodes, representing medial and lateral parts of the occipital cortex over both hemispheres and a midline region, were chosen for analysis.
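The time–frequency procedure described above can be sketched in a few lines of Python. This is a minimal illustration on a synthetic single trial, not the authors' analysis code: the function name, the seven-cycle wavelet width, and the simulated 40-Hz burst are assumptions of ours; only the sampling rate, segment length, frequency range, edge trimming, and baseline window follow the description above.

```python
import numpy as np

FS = 250                      # sampling rate (Hz), as in the study
PRE, POST = 0.5, 1.6          # 500 msec before / 1600 msec after stimulus onset

def morlet_amplitude(trial, freqs, fs=FS, n_cycles=7):
    """Amplitude of a Morlet continuous wavelet transform of one EEG trial.
    Returns an array of shape (len(freqs), len(trial))."""
    out = np.empty((len(freqs), len(trial)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)            # temporal width
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = (np.exp(2j * np.pi * f * t)
                   * np.exp(-t**2 / (2 * sigma_t**2)))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))  # unit energy
        out[i] = np.abs(np.convolve(trial, wavelet, mode="same"))
    return out

# Synthetic single trial: noise plus a 40-Hz burst 500-800 msec post-onset
rng = np.random.default_rng(0)
times = np.arange(-PRE, POST, 1 / FS)                   # 0 = stimulus onset
trial = rng.normal(0, 1, times.size)
burst = (times >= 0.5) & (times < 0.8)
trial[burst] += 2 * np.sin(2 * np.pi * 40 * times[burst])

freqs = np.arange(10, 91)                               # 10-90 Hz, 1-Hz steps
tfr = morlet_amplitude(trial, freqs)

# Trim 200 msec at each edge (wavelet edge distortion), then baseline-correct
# against the 200-msec window starting 300 msec before stimulus onset
keep = (times >= -PRE + 0.2) & (times < POST - 0.2)
tfr, t = tfr[:, keep], times[keep]
base = (t >= -0.3) & (t < -0.1)
tfr -= tfr[:, base].mean(axis=1, keepdims=True)

# Mean induced amplitude in the gamma band (20-60 Hz), 500-800 msec window
gamma = (freqs >= 20) & (freqs <= 60)
win = (t >= 0.5) & (t < 0.8)
gamma_amp = tfr[np.ix_(gamma, win)].mean()
print(round(float(gamma_amp), 3))
```

Trimming 200 msec from each end discards the samples where the wavelet overlaps the segment edges, and subtracting the prestimulus baseline expresses gamma amplitude as a change relative to the window starting 300 msec before stimulus onset, matching the procedure above.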
Because we were interested in top–down semantic effects on object processing, we chose to analyze the gamma-band activity from 500 to 800 msec after stimulus presentation. This latency corresponds to the time window within which semantic integration effects have been recorded in previous studies in infants (Friedrich & Friederici, 2005).
The amplitude of the induced gamma-band activity during the 500- to 800-msec time window after stimulus onset was entered into a three-way ANOVA with Laterality (medial electrodes, lateral electrodes), Hemisphere (left, right), and Category (labeled, familiar, and unfamiliar) as within-subject factors. This analysis yielded a main effect of Laterality, F(1, 11) = 9.67, p = .01, as well as a Laterality × Hemisphere × Category interaction, F(2, 22) = 10.54, p = .001. To resolve this interaction, we first ran separate ANOVAs at each laterality. This analysis revealed a significant Category × Hemisphere interaction over the medial electrodes, F(2, 22) = 4.06, p = .035, but not over the lateral ones, F(2, 22) = 0.55, p > .1. To resolve the Category × Hemisphere interaction at the medial electrodes, we carried out separate one-way ANOVAs at each hemisphere. These analyses yielded a significant effect of Category over the left, F(2, 22) = 4.07, p = .048, but not over the right, F(2, 22) = 0.95, p > .1, hemisphere. Follow-up paired t tests confirmed that the labeled category tended to induce higher gamma-band activations than did the familiar, t(11) = 2.11, p = .059, and unfamiliar, t(11) = 2.20, p = .049, objects, whereas there was no difference between the familiar and the unfamiliar conditions, t(11) = 0.18, p > .1. This pattern of results suggests that seeing objects for which the infants had already acquired a label produced stronger activity over left medial electrode sites than did either familiar or unfamiliar but nameless objects (Figures 1 and 2). A further one-sample t test confirmed that the gamma-band activity over the left medial electrodes was significantly higher in the labeled condition than during the baseline, t(11) = 3.33, p = .007.
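The follow-up comparisons reported above can be illustrated with a brief sketch, assuming one mean gamma amplitude per infant and condition (left medial electrodes, 500- to 800-msec window). The numbers below are simulated for demonstration only and do not reproduce the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 12  # infants

# Hypothetical per-infant mean gamma amplitudes; simulated, not the study's data
labeled = rng.normal(1.0, 0.8, n)
familiar = rng.normal(0.2, 0.8, n)
unfamiliar = rng.normal(0.2, 0.8, n)

# Follow-up paired t tests across conditions, as in the text
for name, other in [("familiar", familiar), ("unfamiliar", unfamiliar)]:
    t, p = stats.ttest_rel(labeled, other)
    print(f"labeled vs {name}: t({n - 1}) = {t:.2f}, p = {p:.3f}")

# One-sample t test of the labeled-condition amplitude against the
# (baseline-corrected) zero level
t, p = stats.ttest_1samp(labeled, 0.0)
print(f"labeled vs baseline: t({n - 1}) = {t:.2f}, p = {p:.3f}")
```

With 12 infants, `stats.ttest_rel` gives the t(11) statistics of the paired comparisons, and `stats.ttest_1samp` tests the labeled-condition amplitude against zero, mirroring the baseline comparison reported above.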
Just as in previous studies with adults (Gruber, Maess, Trujillo-Barreto, & Muller, 2008; Gruber et al., 2002), 12-month-old infants produced increased gamma-band neural activity over posterior electrodes when presented with photographs of objects with familiar labels. Moreover, our results suggest that this electrophysiological response does not reflect familiarity with an object but is related to the semantic information (i.e., its label) that infants have associated with the object. The increase in gamma-band activity we measured in infants occurred later than in adults (500–800 msec in this study vs. 200–300 msec in Gruber et al., 2002). This latency corresponds to the time window within which a semantic mismatch between images and words was recorded in 14-month-old infants (Friedrich & Friederici, 2005).
The fact that we did not find any differences in activation between familiar and unfamiliar object kinds suggests that familiarity is not a crucial factor in the generation of this activity in infants. Alternatively, the lack of difference between the brain response to familiar and unfamiliar objects could be due to parental overestimation of their child's familiarity with the items used in the familiar condition. This condition included object kinds that infants frequently see, but our participants may not have encountered the exact exemplars that they were shown in the study. Generally, objects with familiar labels tend to be those with frequent exposure in infancy (Rivera & Zawaydeh, 2007). Ensuring that the objects in the familiar condition were indeed familiar to infants is crucial for our main hypothesis that the differential engagement of perceptual processing between these objects and those in the labeled category was indeed the result of having access to object labels. Experiment 2 was designed to test these alternative explanations by familiarizing infants with novel objects and their labels during their visit to the laboratory.
To control for the amount of exposure infants had to the stimuli, we presented infants with three objects they were unlikely to have seen previously. This study consisted of two phases. In the teaching phase, we presented 1-year-old children with two novel objects in a structured play situation. One of the two objects, the labeled object, was repeatedly labeled with a novel label (“blicket”) during the play session, whereas the other one, the familiar object, was only referred to using nonlinguistic referential gestures and pronouns (e.g., “it”). In the subsequent EEG recording phase, infants saw images of the labeled object, the familiar object, and a completely novel (hence unfamiliar) object on the computer screen while we measured their EEG the same way as in Experiment 1.
The 12 infants included in Experiment 2 (6 boys and 6 girls) had an average age of 360.5 days (range = 348–375 days). A further 21 infants were tested but not included in the analysis because they refused to wear the electrode net (16 infants) or did not accumulate enough artifact-free trials (5 infants).
Stimuli were three small objects with distinct shapes, colors, and textures but similar in size (Figure 2). Two of these objects were used during the teaching phase as either the labeled object or the familiar object, whereas the third (unfamiliar) one was not introduced to the infants before the EEG phase. Each object was assigned to each role (labeled, familiar, and unfamiliar) with equal probability, counterbalanced across infants. Photographs of the three objects, in four different orientations, were used in the subsequent EEG phase. The photographs were digitally manipulated so that they had approximately the same size as those used in Experiment 1.
In Experiment 2, the EEG recording phase (which was similar to that of Experiment 1) was preceded by a teaching phase.
During the teaching phase, infants were introduced to two new objects in a semicontrived play routine. The experimenter engaged the infant in playing with each object and included it in a series of activities, such as hiding the object, giving it to the caregiver, or placing it on the infant's head. The two objects were introduced successively, and the experimenter attempted to repeat the same activities with both objects and in the same order. What differentiated the activities with the two objects was the accompanying verbal routine. One of the objects (labeled condition) was repeatedly named by the same label during the play routine (e.g., “Look at the blicket! Can I have the blicket back? Where is the blicket?”). For the other object (familiar condition), the label was replaced by pronouns (e.g., “it, this, that”). When we counted all the utterances in which an object was referred to (including, for the labeled object, uses of pronouns), no significant difference in the amount of reference was found between the two conditions (labeled: M = 65.75 times, range = 28–96; familiar: M = 61.75, range = 22–95), t(11) < 1.0. The experimenter spent on average 3.50 min (range = 2.02–5.02 min) interacting with the labeled object and 4.08 min (range = 1.51–5.86 min) interacting with the familiar object, t(11) = 1.96, p = .076. The order in which the two objects were introduced was counterbalanced across infants.
Data recording and analysis
The procedure was identical to Experiment 1. Infants contributed on average 16.0 trials to the labeled condition (range = 10–29), 16.0 trials to the familiar condition (range = 10–28), and 15.2 trials to the unfamiliar condition (range = 10–25).
Gamma-band oscillatory activation during the EEG phase was analyzed the same way as that of Experiment 1. An ANOVA with Laterality (medial electrodes, lateral electrodes), Hemisphere (left, right), and Category (labeled, familiar, and unfamiliar) as within-subject factors on the average gamma-band activity in the 500- to 800-msec time window resulted in a marginally significant interaction between Category and Laterality, F(2, 22) = 3.39, p = .058. To resolve this interaction, we collapsed the data across the two hemispheres and entered it in two ANOVAs, one for each electrode group (medial and lateral), with Category as within-subject factor. This yielded a marginal effect of Category on the lateral electrodes, F(2, 22) = 3.85, p = .052, but not on the medial ones, F(2, 22) = 0.62, p = .54. To further explore which of the three objects contributed to the effect of Category on the lateral electrodes, we ran three pairwise comparisons. As expected, we found that the gamma-band activity elicited by the labeled condition was significantly stronger than that for the familiar, F(1, 11) = 5.53, p = .038, and the unfamiliar conditions, F(1, 11) = 5.49, p = .039. No significant difference was found when comparing the familiar and unfamiliar conditions, F(1, 11) = 1.2, p = .29. Thus, at lateral electrodes over both hemispheres, the labeled object produced higher gamma-band activation than did the other two objects (Figures 1 and 2). One-sample t tests confirmed that gamma activity was higher than the baseline oscillatory activity only at the left lateral electrodes, t(11) = 2.35, p = .038.
Recent reports have demonstrated that induced gamma-band oscillatory responses in adults could be the result of small saccadic eye movements rather than generated by cortical neural activity (Yuval-Greenberg, Tomer, Keren, Nelken, & Deouell, 2008). However, it is highly unlikely that the effect we found in our study was due to eye movements. First, the saccade-related spike potentials that produce artificial gamma-band oscillations are absent in young infants (Csibra, Tucker, & Johnson, 1998) and have very low amplitude even in 1-year-olds (Csibra, Tucker, Volein, & Johnson, 2000). Second, the microsaccades that have been reported to generate gamma-band activity are usually concentrated in the 200- to 300-msec time window after stimulus presentation, whereas our effect arose much later and was more sustained than the one reported in adults' scalp measurements. Although this could reflect slower visual processing in infants than in adults, it is worth noting that the maximum amplitude of the saccade-related spike potentials is around the midline electrodes and is not left-lateralized like the gamma-band activity in our results was. We analyzed gamma-band activity over central parietal and occipital electrodes, the electrodes that produced activity correlated with microsaccades in adults (Yuval-Greenberg et al., 2008). For each experiment, gamma-band activity within the 200- to 500-msec and the 500- to 800-msec time intervals was averaged over a group of posterior midline electrodes (33, 34, 37, 38, 40, and 41). Four ANOVAs were conducted, one for each of the two experiments and for the two time windows. We found no significant main effects of Condition: Experiment 1, 200–500 msec, F(2, 22) = 1.28, p > .1; 500–800 msec, F(2, 22) < 1; Experiment 2, 200–500 msec, F(2, 22) < 1; 500–800 msec, F(2, 22) < 1.
This additional analysis confirms that the gamma-band activity increase we found had a different brain origin than the gamma-band activity generated by microsaccades in adults. We therefore conclude that the source of this activity is more likely to be located in higher visual areas than in extracerebral structures, such as the orbital muscles.
As in Experiment 1, the labeled object induced significantly more gamma-band activity than the familiar and the unfamiliar objects in Experiment 2. This effect was induced after only a brief teaching session and is, to our knowledge, the first evidence that labeling directly modulates infants' perceptual processing of objects. Because infants spent comparable amounts of time interacting with the familiar and the labeled objects (and both objects had previously been novel to them), the difference between these two conditions could not have been due to differential familiarity with the objects. The number of times the objects had been referred to, either by their name (the labeled object) or by a pronoun (the familiar object), was comparable across conditions. It is therefore not the amount of referential attention given to the objects but the nature of verbal reference (novel noun vs. pronoun) that produced the difference between neural processing of the labeled and the familiar object.
Just as in Experiment 1, no significant difference was found between the unfamiliar and the familiar conditions (see Figure 1). The average time infants spent with the familiar object in the teaching phase was 4 min, which is long enough for infants of this age to remember the item 10 min later (Nelson, 2002; Rovee-Collier, 1997). Thus, the lack of significant difference between the gamma-band activity elicited by the familiar and unfamiliar objects cannot have been due to both items being unfamiliar to infants. Rather, this result, which replicates that of Experiment 1, suggests that it is not familiarity with an object per se that induces the significant increase in gamma-band activity over the visual cortex.
The slight difference in the topography of the effect between the two experiments (i.e., a medial effect in Experiment 1 and a more lateral effect in Experiment 2) may reflect the difference between accessing consolidated and newly acquired label–object associations. A PET study contrasted areas activated by naming objects adults had just learned the labels for with areas activated by naming familiar objects (Gronholm, Rinne, Vorobyev, & Laine, 2005). Newly learned object–label pairs activated left anterior temporal regions, whereas naming familiar objects resulted in activation of occipital areas. Whether the topographical differences we found between the two experiments are related to the contrast between freshly acquired and consolidated word–object associations remains to be explored in further studies.
The current results indicate that in infancy an object's label enhances the cortical activity associated with its visual processing, as measured by an increase in gamma-band activity over posterior scalp regions. This effect of labeling is immediate and long lasting, as it was found both after only a few minutes of interaction with an object (Experiment 2) and with objects for which infants had learned the labels before their visit to the lab (Experiment 1). Thus, the results of the two experiments complement and reinforce each other. Experiment 1 demonstrated that named and nameless objects are processed differently, whereas Experiment 2 showed that the act of naming produces this effect rather than some other correlated property, such as increased familiarity with objects for which infants have labels.
These results shed light on previous behavioral findings: increased perceptual processing of object features may explain why infants are better at categorizing (Waxman & Braun, 2005; Balaban & Waxman, 1997) and individuating (Xu & Carey, 1996) labeled objects. On the other hand, our results do not support alternative accounts proposing that linguistic input overshadows visual input in labeling situations because of a primacy of auditory over visual information (Sloutsky & Napolitano, 2003; Sloutsky & Lo, 1999).
Although the neural source of gamma-band activity is difficult to localize in the infant brain (Csibra & Johnson, 2007), the scalp location of the effect we found clearly indicates an occipito-temporal origin of the extra activity elicited by images of the labeled objects. This suggests that these stimuli received extra visual processing, probably induced by feedback from other cortical areas responsible for extracting the semantic information associated with the depicted object. Such feedback may have increased infants' attention toward the referred object and, consequently, the perceptual processing of this item. The relatively late start and long duration of the activation also suggests top–down facilitation, and the location of the activation is remarkably similar to that of the ERP effects found to be sensitive to re-entrant visual processing of masked images in infants (Kotsoni, Mareschal, Csibra, & Johnson, 2006). In this study, unmasked but not masked stimuli elicited a positive wave around 300 msec, over posterior channels. It should be noted that, irrespective of whether the increase in gamma activity was the result of increased attention or of qualitative changes in processing, the hypothesis about a selective effect of labeling on visual processing has been confirmed by our findings.
In both experiments, the effects of naming affected the left hemisphere more strongly than the right. This is not entirely surprising given the linguistic nature of the contrast between the labeled and the other conditions. It is possible that the representations of objects for which infants have already acquired a word are stored in the dominant hemisphere. Such an explanation has been put forward to explain the left lateralization of the visual word-processing areas (Dehaene, 2004) and of visual areas involved in categorical perception of color (Gilbert, Regier, Kay, & Ivry, 2006). Interestingly, although infants' categorical perception of colors is initially lateralized to the right hemisphere, it becomes left-lateralized once they have learned the relevant color terms (Franklin et al., 2008). Given that it is not until later in the second year of life that processing of familiar words becomes left-lateralized (Mills, Plunkett, Prat, & Schafer, 2005), we remain cautious with respect to extending this hypothesis to our current findings. Further studies are required to determine how early in development and under which conditions words and their referents become predominantly processed in the left hemisphere.
The main theoretical question arising from our findings concerns the nature of changes that labeling induces in the visual processing of objects. Common names refer to object kinds or categories rather than to particular exemplars. Thus, hearing a novel label may inspire infants to search for the features that define the new kind/category, making them process the visual properties of the object more thoroughly. The effects that we measured in Experiment 2 probably reflect the additional neural processing devoted to the analysis of the visual features on the images of the labeled object that had previously been novel and may explain the ease with which named novel objects are categorized (Waxman & Braun, 2005). On the other hand, when seeing a familiar object for which they already have a name, infants might update their category knowledge or access additional information about the members of that category. This was probably what induced the extra activation recorded in Experiment 1 and might be related to the increased looking time toward named objects found in previous studies (Schafer & Plunkett, 1999; Baldwin & Markman, 1989).
Although we make a case here for the role of labeling in enhancing object processing, we are aware that this phenomenon might not be restricted to verbal naming. Arbitrarily associated tones or emotional expressions do not facilitate categorization or individuation of objects (Xu, Cote, & Baker, 2005; Balaban & Waxman, 1997), but presenting infants with the function of an object does (Booth & Waxman, 2002). The function of an artifact is the defining attribute of the object kind that it belongs to, and just like its name, it assigns the object to a particular category. We would therefore expect that any property that makes infants process objects in terms of kinds or categories would facilitate visual object processing much as labeling did in our study. It is a question for further research whether similar effects can be induced nonlinguistically or in species lacking language. Words are special because, unlike functions, they are arbitrary and can identify the extension of a category only through its exemplars. Crucially, common nouns can fulfill such a role only if category and word learners possess this assumption a priori. As our study suggests and previous behavioral studies have shown, infants apply this assumption early in life (Fulkerson & Waxman, 2007).
In conclusion, our study demonstrates that object naming changes how infants perceive their visual world. Because word learning and object learning are intrinsically related in humans, we believe that there is much more to be learned from studying these processes together rather than independently.
Reprint requests should be sent to Teodora Gliga, Centre for Brain and Cognitive Development, Birkbeck, University of London, or via e-mail: firstname.lastname@example.org.