Objects are grouped into categories through a complex combination of statistical and structural regularities. We sought to better understand the neural responses to the structural features of object categories that result from implicit learning. During an implicit learning task, adult participants were exposed to 32 object categories that varied along three structural properties: frequency, variability, and co-occurrence. After this exposure, participants completed a recognition task and were then presented with blocks of learned object categories during fMRI sessions. Analyses were performed by extracting data from ROIs placed throughout the fusiform gyri and lateral occipital cortex and comparing the effects of the different structural properties across these ROIs. Behaviorally, we found that symbol category recognition was supported by frequency, but not variability. Neurally, we found that sensitivity to object categories was greater in the right hemisphere and increased as ROIs moved posteriorly. Frequency and variability altered brain activation during object category processing, whereas the presence of learned co-occurrences did not. Moreover, variability and co-occurrence interacted as a function of ROI, with the posterior fusiform gyrus being most sensitive to this relationship. This result suggests that variability may guide the learner to relevant co-occurrences and that this process is supported by the posterior ventral temporal cortex. Broadly, our results suggest that the internal features of the categories themselves are key factors in the category learning system.

As we encounter objects in our environment, we implicitly group them into categories. The ability to form categories of objects that are similar along a given dimension or dimensions organizes and simplifies our knowledge. Categorization also allows us to understand new objects by associating them with known objects. However, understanding how we initially form categories of objects, how category boundaries are defined, and how categories change dynamically remains elusive. Understanding category formation is further complicated by varying theories regarding the subcomponents that underlie the structure of the categories themselves.

Perhaps the simplest way to consider category learning is the situation where we learn the name of a new object based on its visual appearance. In doing so, we extract information from new events that has commonalities and differences with previous events. For example, visual statistical learning allows for the linking of co-occurrences such as an object and its name. Research has demonstrated that both infants and adults have powerful statistical tracking mechanisms that allow them to overcome ambiguity in an environment and link word–object pairs based on these probabilistic regularities (Smith & Yu, 2008; Yu & Smith, 2007).

Much of our focus in studying category learning is on the capabilities of the learner. For example, the learner acquires some categories by producing them by hand (Vinci-Booher & James, 2020; Vinci-Booher, Cheng, & James, 2019; James, 2017; James & Engelhardt, 2012) or by physically exploring the category (Slone, Smith, & Yu, 2019; James, Jones, Swain, Pereira, & Smith, 2014; James & Swain, 2011; James, 2010). Nonetheless, the learner is only one piece of the puzzle in this system. Just as the learner has limitations and competencies that interact with the environment to support learning, object categories themselves also have properties that are worthwhile to study. There is some evidence that the statistical, internal properties of categories themselves influence learning. For example, category structures such as density and sparsity are known to affect the ease of acquisition of categories, with dense categories (i.e., those with many predictive features such as cats and dogs) being developmentally easier to acquire than sparse categories (i.e., those with more deterministic boundaries that have specific and necessary prerequisites like the concept “electron”; Sloutsky, 2010; Kloos & Sloutsky, 2008). If category structure has an effect on learning measured with overt behavioral responses, then neural systems must also show a sensitivity to category structure.

A significant amount of neuroimaging research has been devoted to understanding how categories of objects are processed in the brain (for a review, see Grill-Spector & Weiner, 2014). This body of research has focused on visual processing in the ventral temporal cortex (VTC), a broad neural system that has been shown to process object properties in a nested, hierarchical manner (Grill-Spector & Weiner, 2014). This research, however, has focused predominantly on the object properties themselves (shape, color, size; e.g., Vinberg & Grill-Spector, 2008), as well as the “level” of categorization that is required of a given task (individual, subordinate, or superordinate; e.g., Grill-Spector, Knouf, & Kanwisher, 2004). In contrast, how the structure of the category itself affects processing in the VTC is still an open question. That is, the exemplars in a given category have a relationship to one another in the context of the category. When a category is learned, there may be exemplars that occur more frequently during learning events, but it is not known whether the VTC is sensitive to how frequently an exemplar occurs within a category. Furthermore, a category may be composed of exemplars that vary in terms of their similarity to one another: Some categories contain highly variable exemplars, whereas other categories may contain exemplars that are visually very similar, even within the same level of categorization (as defined by Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). In addition, objects rarely occur in isolation (except in experimental setups), and the visual system is sensitive to co-occurrences of objects both spatially and temporally (e.g., Turk-Browne, Scholl, Chun, & Johnson, 2009). Thus, structural properties of categories may affect overt learning and recruit different levels of processing in the visual system. Below, we review these three structural features of categories—frequency, variability, and co-occurrence—that past research has suggested play roles in learning. We then discuss an experiment that investigates the neural basis for these structural features after learning novel object categories.

Object Frequency

One potentially important structural feature of object categories is the frequency with which individuals encounter category exemplars in their environment. Recent evidence from the home environments of infants revealed that a small number of objects were extremely frequent, demonstrating that the distribution of visual objects in the real world may be highly skewed (Clerkin, Hart, Rehg, Yu, & Smith, 2017). Interestingly, the highly frequent objects corresponded to the normatively acquired, first-learned words (Smith, Jayaraman, Clerkin, & Yu, 2018; Clerkin et al., 2017). In behavioral categorization tasks, classification accuracy has been found to be higher for high-frequency exemplars (Nosofsky, 1988). Furthermore, past research has speculated that frequency is largely related to object typicality (Rosch & Mervis, 1975).

Recent evidence examining typicality effects in the brain has revealed a role of object-selective brain regions in processing typical and atypical exemplars. Specifically, Iordan, Greene, Beck, and Fei-Fei (2016) used brain imaging techniques to better understand the neural representation of natural categories such as “fish” and “dogs” that contained exemplars varying in typicality judgments. Although Iordan et al. (2016) did not explicitly test exemplar frequency, typicality may serve as a proxy for frequency, as the two constructs are mutually related and both alter our perception of category exemplars (Nosofsky, 1988). Through multivoxel pattern analysis, Iordan et al. found that atypical exemplars (e.g., a pufferfish) produced activation patterns that differed from the central tendencies of other, more common category members (e.g., a clownfish) throughout the lateral occipital complex (LOC). Furthermore, this pattern was not present in early visual areas, suggesting that the effect was not driven by lower-level, perceptual features. Thus, the brain may process statistical information such as exemplar frequency (although frequency has not been directly tested) and typicality, and this processing may be driven by the LOC.

Object Variability

The variability, or distribution, of exemplars within an object category has also been shown to affect learning. For example, exposure to variable symbol forms such as handwritten symbols results in greater categorization ability compared to exposure to highly similar visual outputs such as typed or traced symbols (Li & James, 2016). Similarly, adults demonstrate faster object recognition when exposed to an object from multiple, randomly sampled viewpoints (high visual variability) as opposed to objects observed from spatially continuous viewpoints (low visual variability; Harman & Humphrey, 1999). The underlying hypothesis is that variability may allow individuals to extrapolate the central features needed to form generalizable biases that support future learning (Perry, Samuelson, Malloy, & Schiffer, 2010).

Recent evidence examining the neural mechanisms of both letter perception and category learning has also identified brain regions associated with variability. Specifically, Vinci-Booher and James (2020) found that young children who were still learning letters demonstrated greater activation in the left middle fusiform gyrus (FFG) for handwritten forms compared to typed letters. However, older children and adults who were literate did not show this effect. Vinci-Booher and James (2020) interpreted this finding as variability contributing to initial category formation; older children and adults did not show the effect because they already had expert knowledge of letter categories. Recent evidence examining novel category learning in older children and adults has identified similar findings. Specifically, Plebanek and James (2021) found that 8-year-olds and adults demonstrated greater activity associated with the right posterior fusiform gyrus when learning variable compared to tight categories. Furthermore, these researchers also found that variability led to activity associated with the fusiform gyri driven by an invariant feature that defined the category, whereas tightly organized categories led to responses based on overall similarity (but without a consistent feature). Thus, the brain appears to respond to variability among category members, and the fusiform gyrus, in particular, is highly involved in this process.

It is worth noting, however, that variability may not be entirely beneficial for learning. Work examining chicks' development of object invariance has suggested that small, tightly confined viewpoint changes in an object may be sufficient to support invariance development (Wood, 2016). Similarly, infants' object recognition appears to be improved when object movement is restricted so that objects are observed from fewer viewpoints (Kraebel & Gerhardstein, 2006). Thus, although there are both behavioral and brain-based effects of varying object category exemplars, whether or not variability is beneficial to learning remains controversial.

Objects and Co-occurrences

At the broadest level, the categories, objects, and other information present in the world are full of probabilistic regularities that the learner can use to predict future events. The brain is capable of extracting these regularities from a young age (Saffran, Aslin, & Newport, 1996). In fact, the brain is so attuned to these regularities that there is activation throughout category-relevant visual areas in the ventral temporal cortex even when individuals do not explicitly recall the regularities (Turk-Browne et al., 2009). These processes may be important in formulating links among features, dimensions spanning values of features, or objects that co-occur across space and time. These co-occurrences may then be the building blocks of representations as measured by neural instantiations of object knowledge (see Sherman, Graves, & Turk-Browne, 2020, for a review). The neural representations of objects appear to be driven, in part, by incidental co-occurrences across time and space. For example, Schapiro, Kustner, and Turk-Browne (2012) discovered that patterns of neural representations throughout the medial temporal lobe for novel objects were more similar when the objects occurred together in time. Similar mechanisms may underlie object recognition more broadly. For example, objects are organized by temporal structure that links multiple features of objects across different views to create composite object representations (see Wallis & Bülthoff, 1999, for a review). This co-occurrence structure is commonly associated with the VTC and may explain how features play a role in object recognition (Wallis & Bülthoff, 1999). More specific mapping of spatial co-occurrence sensitivity in the VTC has shown that the anterior fusiform gyrus responds to co-occurrence more than posterior VTC structures (Stansbury, Naselaris, & Gallant, 2013). Therefore, recent neuroimaging work has pointed to the VTC, and specifically the anterior fusiform gyrus, as a possible neural substrate that is sensitive to and/or supports co-occurrences within a category.

This Study

Taken together, these studies shed light on the way the brain processes regularities as we learn objects. Within a single object category, these regularities take many forms. First, at the level of the category, how frequent or typical an exemplar is in the overall scheme of the category may influence how it is processed. Second, the variability and diversity of category members and features can also influence how a category is learned and generalized. Third, features may co-occur and predict other features and category membership. All of these regularities matter and may guide the learner to a specific representation or category judgment. Also of note, subregions of the ventral temporal cortex have been shown to respond to these three structural elements (frequency, variability, and co-occurrence) separately, but how the structural elements interact is unknown (Plebanek & James, 2021; Iordan et al., 2016; Stansbury et al., 2013; Turk-Browne et al., 2009).

Taken together, research supports the idea that different structures within the VTC support different aspects of category structure: frequency by the LOC (Iordan et al., 2016), variability by the middle fusiform gyrus (Vinci-Booher & James, 2020), and co-occurrence by the anterior fusiform gyrus (Stansbury et al., 2013). None of the past work, however, compared these properties directly within each of these ROIs. The present work sought to address this gap in the literature.

Therefore, we were interested in two main questions: (1) How does the brain process three different structural properties that are relevant for learning new object categories: frequency, variability, and co-occurrences among features? (2) Are these structural properties differentiated from one another as reflected by differences in neural responses in specified ROIs?

To answer these questions, we created a metrically organized set of novel categories that allowed us to control these three structural elements. Participants were exposed to the object categories over 2 days and then underwent two MRI sessions that measured the brain responses to the object categories. Given that frequency, variability, and co-occurrence all affect object learning and have been shown individually to recruit different regions within the VTC, we expected to see preference in certain ROIs for the different types of structure, but not exclusivity in relative responses.

Participants

Seventeen literate English-speaking adults (M = 23.9 years, range = 3.3 years, 7 men) completed this study. Participants were graduate and undergraduate students from a small, Midwestern town and were recruited through word of mouth. All participants were right-handed and were screened for neurological trauma, developmental disorders, and magnetic resonance (MR) contraindications. Three additional participants were excluded for the following reasons: One did not complete the study, and two were excluded for excessive motion. All participants provided informed consent in accordance with the Indiana University institutional review board. Participants received $10 for each behavioral session and $25 for each MRI session. For completing all sessions, they received a $20 bonus, resulting in a total of $90.

Materials

A set of 90 novel object categories defined by shape was created for this study (see Figure 1 for examples). The objects were multistroke, two-dimensional, letter-like symbols that were similar to sets previously used in novel-object learning experiments (e.g., James & Atwood, 2009). This set allowed for easy manipulation of category structure while still maintaining the complexity of naturally occurring categories such as symbols and letters (see Figure 1). The symbols were constructed with a computer drawing program and were composed of strokes that occur in written letters. Thirty-two of these object categories were present during training. The remaining 58 object categories were reserved as new categories for the MRI sessions or the recognition test. Object categories were composed of symbols that varied in size and color (see Figure 1), which are labeled here as object features. Both size and color varied metrically across 12 steps (Figure 2). The smallest size value was 50 × 50 pixels, and each step increased size by 25 pixels, with the largest value (Value 12) being approximately 325 × 325 pixels. The first color value, in red, green, blue (RGB) coordinates, was [255 122 122], a pink color. The R value decreased by 22 at each metric step until the RGB values were [13 122 122], a teal color.
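
For concreteness, the 12-step metric dimensions described above can be enumerated directly. The sketch below simply writes out the size and color values implied by the text (size starting at 50 pixels and growing by 25 pixels per step; the R channel starting at 255 and decreasing by 22 per step, with G and B fixed at 122); it is an illustration, not the stimulus-generation code used in the study.

```python
# Minimal sketch of the 12-step metric feature dimensions described above.

def size_value(step: int) -> int:
    """Pixel width/height for metric steps 1-12."""
    return 50 + 25 * (step - 1)

def color_value(step: int) -> tuple[int, int, int]:
    """RGB triple for metric steps 1-12 (only the R channel varies)."""
    return (255 - 22 * (step - 1), 122, 122)

for step in range(1, 13):
    print(step, size_value(step), color_value(step))
# Step 1  ->  50 px, (255, 122, 122), a pink color
# Step 12 -> 325 px, (13, 122, 122), a teal color
```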

Figure 1.

Object category shapes grouped by condition. Groups of shapes (rows) were randomly assigned to a condition defined by frequency and variability. Symbol assignments to conditions were counterbalanced across participants.

Figure 2.

Size and color metric dimensions for variable (left) and tight (right) variability structures. The center of the variable-dimension graphic is size and color Value 1, and the furthest circle is size and color Value 12. The center of the tight-dimension graphic is size and color Value 4, and the furthest circle is size and color Value 7.

Furthermore, each object category was organized according to three structural properties: frequency of identical exemplars, variability among members, and co-occurrence between features. Frequency was defined as either high or low depending on how many times a particular exemplar from the object category was presented during training. For high-frequency categories, identical exemplars were presented a total of 140 times across all training blocks; for low-frequency categories, identical exemplars were presented a total of 40 times. Variability was defined as the distribution of feature values individuals saw during training and the fMRI sessions. For training, tight categories' feature values ranged only from 4 through 7 for color and size. For variable categories, feature values were drawn from a broader distribution [1, 2, 3, 5, 7, 10, 11, 12]. During the MR session, tight and variable categories were presented with the same distributions as in training. Finally, co-occurrence reflected the pairing of feature values during training. During training, feature values were linked so that a person saw the same numerical value for both size and color (i.e., if they saw a Value 4 for size, it was also a Value 4 for color). During the MR session, some of the blocks were unlinked: The features of color and size were randomly paired.
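
For illustration, the sketch below draws a single exemplar's feature values under these rules: tight categories sample Values 4 through 7, variable categories sample the broader distribution, and linked categories reuse the same metric value for size and color. The exact sampling scheme and the per-block distribution of the 140 (high-frequency) and 40 (low-frequency) presentations are not specified in the text, so those aspects are assumptions.

```python
import random

TIGHT_VALUES = [4, 5, 6, 7]                    # tight categories: Values 4-7 only
VARIABLE_VALUES = [1, 2, 3, 5, 7, 10, 11, 12]  # variable categories: broader distribution

def draw_exemplar(variability: str, linked: bool) -> tuple[int, int]:
    """Return (size_value, color_value) for one exemplar of a category."""
    values = TIGHT_VALUES if variability == "tight" else VARIABLE_VALUES
    size = random.choice(values)
    # Linked: color takes the same metric value as size; unlinked: independent draw.
    color = size if linked else random.choice(values)
    return size, color

# Frequency operated on repetitions of identical exemplars across training blocks.
PRESENTATIONS = {"high": 140, "low": 40}
print(draw_exemplar("variable", linked=True))
```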

Design

Object categories were then randomly assigned to a condition based on these structural properties so that each condition contained eight different object categories. During implicit learning, participants saw four conditions: [high, variable, linked], [low, variable, linked], [high, tight, linked], [low, tight, linked]. During MRI, participants saw these four conditions as well as their unlinked counterparts: [high, variable, unlinked], [low, variable, unlinked], [high, tight, unlinked], [low, tight, unlinked]. Participants also saw a ninth condition consisting of new items that were both variable and linked [new, variable, linked].

Finally, during the recognition task presented after training, participants saw black symbols for each category that were drawn from Size Value 6. This was required so that participants would not continue to associate the feature co-occurrences after training. Overall, the design crossed three factors: Frequency (high vs. low), Variability (variable vs. tight), and Co-occurrence (linked vs. unlinked), yielding eight conditions in a 2 × 2 × 2 repeated-measures design. In addition, the [new variable linked] condition was used to examine learning and novelty effects.

Procedure

The study was completed over 4 days. During the first day, participants completed three blocks of an implicit training task, reflecting our interest in statistical learning. During the second day, participants completed two blocks of implicit training and a recognition task. The third and fourth days consisted of MRI sessions. All sessions are explained in detail below.

Implicit Training Sessions

After providing informed consent, participants were taken to a quiet room. They were told that they were going to be seeing some novel symbols, two at a time, on a computer screen. If the participant thought the symbols were the same, they were told to press the number “1” on the keyboard. If they were different, they were told to press “0.” Participants were explicitly told that there was no correct answer to this task and to simply use their best judgment. Therefore, any learning that occurred would be a result of this implicit task. Although there is controversy as to whether category learning should be studied through explicit or implicit tasks (see Ashby & Valentin, 2017), we chose to use an implicit task because of our interest in statistical learning and because of demonstrations that category learning often proceeds in this manner (e.g., Sherman et al., 2020).

Symbols were presented so that they were vertically centered, with one symbol on the left side of the screen and one symbol on the right side of the screen (see Figure 3). Once the symbols appeared, participants were required to wait 750 msec before making their judgment, at which point the computer prompted them for their answer. The symbols and the prompt remained on the screen until participants responded. Combinations of symbols were organized so that pairs matched on object category (shape) on only 28 trials per block (9.72% of trials). Similarly, features (color and size) matched across both objects on 28 trials per block. Therefore, the majority of trials presented unassociated exemplars. Co-occurrences of size and color were always linked in this task. During Day 1, participants completed three blocks. During Day 2, participants completed two blocks.
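
The trial counts above imply the approximate composition of a training block, which the short check below makes explicit (28 shape-match trials at 9.72% of a block implies roughly 288 trials per block). The total is an inference from the reported percentage, not a number stated directly in the text.

```python
# Back-of-envelope check of the block composition implied above.
shape_matches = 28
feature_matches = 28
total_trials = round(shape_matches / 0.0972)            # ~288 trials per block (inferred)
no_match = total_trials - shape_matches - feature_matches

print(total_trials, no_match)                           # 288 total, ~232 unassociated pairs
print(f"{shape_matches / total_trials:.2%}")            # 9.72%, matching the text
```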

Figure 3.

Examples of the trial types present in the learning task. (A) represents an object category (shape) match. (B) represents a feature match (color and size). (C) represents no match.

Recognition Test

After completing the learning session on Day 2, participants immediately began the recognition test. Participants were told that they would see a briefly presented symbol in the center of the screen followed by a static Gaussian noise mask. They were required to press the numeral “1” button on a keyboard if they had seen the symbol during the training sessions, and press “0” if they had not. Each symbol was presented for 150 msec; the mask was presented for 100 msec, followed by a response prompt. There was no time limit to respond. Response time and sensitivity (hits − false alarms) were measured.
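
The sensitivity measure is simply the hit rate minus the false-alarm rate, computed per participant and condition. The sketch below shows that computation; the trial-level variable names are illustrative assumptions rather than the study's analysis code.

```python
import numpy as np

def sensitivity(responses: np.ndarray, was_old: np.ndarray) -> float:
    """responses: 1 = 'seen during training', 0 = 'not seen'; was_old: ground truth (1/0)."""
    hits = responses[was_old == 1].mean()           # proportion of old items called old
    false_alarms = responses[was_old == 0].mean()   # proportion of new items called old
    return float(hits - false_alarms)

# Toy example: four old and four new items, all hits plus one false alarm -> 0.75.
resp = np.array([1, 1, 1, 1, 1, 0, 0, 0])
old = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(sensitivity(resp, old))
```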

MRI Sessions

Participants completed two consecutive days of imaging sessions each lasting 45–60 min. The structure of the two days was the same with the exception that, on their first day, a high-resolution anatomical scan was completed prior to the functional runs. On each day, participants completed eight functional runs (16 total). The order of these functional runs was randomized across the 2 days.

Each functional run consisted of nine blocks each lasting 20 sec. Blocks contained 20 exemplars of a symbol category, with each symbol appearing on screen for 800 msec followed by a 200-msec fixation cross. Thus, each block consisted of only one object category (see Figure 4). There was a 10-sec interblock interval that was not analyzed. There was also a 10-sec rest period at the beginning and end of each run. Thus, runs lasted approximately 4 min 40 sec. The order of blocks within each run was randomized.
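
The run length reported above follows from the block structure, as the short arithmetic check below shows (all values in seconds; that the 10-sec interblock interval occurs only between consecutive blocks, i.e., eight times per run, is inferred from the stated total).

```python
# Arithmetic check of the run timing described above (values in seconds).
n_blocks = 9
block = 20                          # 20 exemplars x (0.8 s stimulus + 0.2 s fixation)
interblock = 10                     # between consecutive blocks (8 intervals, inferred)
rest = 10                           # at the beginning and at the end of each run

run_length = rest + n_blocks * block + (n_blocks - 1) * interblock + rest
print(run_length, run_length // 60, run_length % 60)    # 280 s = 4 min 40 s
```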

Figure 4.

Schematic of the fMRI paradigm. Participants saw a single object category per block. Each block consisted of 20 presentations of object category exemplars for 800 msec. Each exemplar was separated by a 200-msec fixation cross. Each block in a run consisted of different combinations of structural features.

Learned categories were repeated across the experiment in four blocks. Two blocks contained linked features whereas two blocks presented unlinked features. New object categories appeared in a total of two blocks only and were always linked. Within each individual run, each training condition appeared once. Each condition was a separate block with a different object category in each block.

Scanning Parameters

Neuroimaging was conducted using a Siemens Magnetom Tim Trio 3-T whole-body MRI system located in the Indiana University Imaging Research Facility at the Department of Psychological and Brain Sciences. The high-resolution T1-weighted anatomical scans were conducted using a magnetization prepared rapid gradient echo sequence: inversion time = 900 msec, echo time = 2.98 msec, repetition time = 2300 msec, flip angle = 9°, with 176 sagittal slices of 1.0-mm thickness, a field of view 256 × 248 mm, and an isometric voxel of 1.0 mm3. For functional images, the field of view was 220 × 220 mm, with an in-plane resolution of 110 × 110 pixels and 72 axial slices of 2.0-mm thickness per volume with 0% slice gap, producing an isometric voxel size of 2.0 mm3. Functional images were acquired using a gradient echo EPI sequence with interleaved slice order: echo time = 30 msec, repetition time = 2000 msec, flip angle = 52° for BOLD imaging.
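
A couple of the acquisition numbers can be cross-checked directly: the in-plane resolution follows from the field of view and matrix size, and the approximate number of volumes per run follows from the run length and repetition time. The volume count is an inference, not a value reported in the text.

```python
# Consistency check of the functional acquisition parameters above.
fov_mm, matrix = 220, 110
in_plane_mm = fov_mm / matrix        # 2.0 mm, matching the 2.0-mm isometric voxel
tr_s, run_s = 2.0, 280               # repetition time; run length from the previous section
volumes_per_run = run_s / tr_s       # ~140 volumes per functional run (inferred)
print(in_plane_mm, volumes_per_run)
```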

Analyses

The main analyses consisted of a standard preprocessing pipeline for fMRI data. Analyses and preprocessing were conducted using BrainVoyager v20.6 (Brain Innovation).

Preprocessing and Motion Correction

Each individual's anatomical volumes were standardized to Talairach space (Talairach & Tournoux, 1988). Preprocessing of functional volumes included slice-time correction, 3-D motion correction using trilinear/sinc interpolation, and 3-D Gaussian spatial smoothing with an FWHM of 6 mm. Temporal high-pass filtering was also applied with a voxel-wise general linear model (GLM) that included a Fourier basis with a cutoff of two sine/cosine pairs and a linear trend predictor. A rigid body transformation was used to coregister anatomical and functional volumes. To account for head motion, rigid body transformation parameters were added to the study design matrix as predictors of no interest (Bullmore et al., 1999). As previously mentioned, two participants were excluded because of their motion: one for having multiple runs with motion spikes greater than 2 mm and one for drifting more than 3 mm across multiple runs.
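
The sketch below illustrates the general idea of entering the six rigid-body motion parameters as predictors of no interest in a voxel-wise GLM. It is a conceptual illustration in NumPy, not BrainVoyager's implementation; array shapes and names are assumptions.

```python
import numpy as np

def add_motion_regressors(design: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """design: (n_volumes, n_task_predictors); motion: (n_volumes, 6) rigid-body parameters."""
    motion = motion - motion.mean(axis=0)       # mean-center the nuisance regressors
    return np.hstack([design, motion])

def fit_glm(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least-squares beta estimates for one voxel's time course y."""
    betas, *_ = np.linalg.lstsq(X, y, rcond=None)
    return betas

# Toy example with 140 volumes, 9 task predictors, and 6 motion parameters.
rng = np.random.default_rng(0)
X = add_motion_regressors(rng.normal(size=(140, 9)), rng.normal(size=(140, 6)))
print(fit_glm(X, rng.normal(size=140)).shape)   # (15,) -> 9 task + 6 nuisance betas
```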

Data Analyses

Participants completed 16 functional runs. Fourteen of these runs were randomly selected for the ROI analyses. The remaining two runs were used for a whole-brain contrast that served to localize the ROIs (thus avoiding “double-dipping” from the data). The data were analyzed using a random-effects GLM via BrainVoyager's multisubject GLM module. This whole-brain analysis served to demarcate broad regions that responded more to objects than to fixation (see Figure 5, Table 1). The resultant regions were then subdivided anatomically into ROIs for further analyses (Figure 6).
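
The split of runs into localizer and analysis sets can be sketched as a simple random partition per participant, shown below under the assumption that the selection was a uniform random draw (the text states only that the selection was random).

```python
import random

# Per participant: 2 of the 16 runs localize the ROIs, the other 14 feed the ROI analyses,
# so the localizer and the ROI data never overlap ("double-dipping" is avoided).
runs = list(range(1, 17))
random.shuffle(runs)
localizer_runs, roi_runs = sorted(runs[:2]), sorted(runs[2:])
print(localizer_runs, len(roi_runs))            # e.g., [3, 11] 14
```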

Figure 5.

Results from a whole-brain contrast comparing learned object categories and interblock fixation, p < .001, cluster corrected for six contiguous voxels.

Table 1.

Region of Interest Localizer Analysis

Contrast             Cluster Size (Voxels)   Talairach Peak (x, y, z)   Peak t   Anatomical Location
Learned > Fixation   21586                   −42, −64, −20              12.70    Left ventral temporal cortex
                     21147                    39, −70, −20              12.70    Right ventral temporal cortex
Fixation > Learned    1477                    12, 94                    10.34    Bilateral lingual gyrus

This table presents cluster sizes, peak coordinates, and peak t values for regions that were significant with our localizer contrast.

Figure 6.

Schematic of ROI placement (performed individually, this depicts average placement). Blue: anterior fusiform; green: middle fusiform; pink: posterior fusiform; gray: LOC; red: primary visual cortex.

Individual brains were first normalized to the stereotaxic space of Talairach and Tournoux (1988). After the whole-brain contrast was performed, we divided the resultant regions anatomically for subsequent ROI analyses. Three of these regions corresponded to subdivisions of the fusiform gyrus, one corresponded to the LOC, and one served as a control region in primary visual cortex. To subdivide the fusiform gyri, we used procedures similar to James and Engelhardt (2012). On the x dimension, 10 mm was used because this is the average distance between the lateral occipital sulcus and the collateral sulcus, which bound the fusiform gyrus laterally and medially, respectively. On the z dimension, we placed ROIs on the ventral temporal surface that extended 10 mm dorsally. On the y dimension, we followed the collateral sulcus posteriorly, splitting the region into three equal portions. The resulting ROIs were 10 × 10 × 10 mm. For the lateral occipital region, dimensions were kept at 10 × 10 × 10 mm to maintain consistency across ROIs. On the z dimension, the ROI was placed on the ventral occipital surface and extended 10 mm dorsally. On the y dimension, the ROI was placed posterior to the previous ROIs and was bounded by the lateral occipital sulcus. As with the fusiform ROIs, this area largely corresponded to the most posterior region of the brain that responded more to learned symbols than to fixation. The primary visual cortex was localized in each individual by first locating the broad region that responded more to fixation than to learned symbols. We then anatomically localized the calcarine sulcus, with the anterior boundary of the ROI specified by the cuneal point, and placed the 10 × 10 × 10 mm ROI posterior to this point within the calcarine folds (Hinds et al., 2008). Given the large variability in the functional localization of area V1, we assumed that this large, anatomically placed ROI would capture most of primary visual cortex and potentially the visual association areas that surround it. Because of its role as a control area, individual retinotopic mapping was not performed. This procedure was carried out for each individual. Details regarding each individual's ROIs are presented in Table 2.
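
The fusiform subdivision described above is essentially geometric: a strip 10 mm wide (x) and 10 mm tall (z) running along the collateral sulcus is split into three equal 10-mm segments along y. The sketch below expresses that subdivision; the landmark coordinates in the example call are placeholders, since in the study the boundaries were defined separately on each individual brain.

```python
def fusiform_rois(x_range, y_anterior, y_posterior, z_range):
    """Split the fusiform strip into three equal boxes (anterior, middle, posterior) along y."""
    step = (y_posterior - y_anterior) / 3.0
    return [
        {"name": name,
         "x": x_range,
         "y": (y_anterior + i * step, y_anterior + (i + 1) * step),
         "z": z_range}
        for i, name in enumerate(["anterior", "middle", "posterior"])
    ]

# Example with illustrative (not real) Talairach landmarks for a left hemisphere:
for roi in fusiform_rois(x_range=(-45, -35), y_anterior=-35, y_posterior=-65, z_range=(-22, -12)):
    print(roi["name"], roi["y"])
```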

Table 2.

Region of Interest Coordinates

Participant   Region   x-Range   y-Range   z-Range
TK LaFFG −44…−35 −41…−32 −21…−12 
LmFFG −45…−36 −53…−44 −21…−12 
LpFFG −42…−33 −65…−56 −21…−12 
LLOC −32….23 −92…−84 −21…−12 
RaFFG 34…43 −45…−36 −21…−12 
RmFFG 34…43 −58…−49 −21…−12 
RpFFG 32…41 −69…−60 −21…−12 
RLOC 25…34 −88…−79 −21…−12 
VR LaFFG −35…−26 −43…−35 −22…−13 
LmFFG −36…−27 −55…−46 −22…−13 
LpFFG −30…−21 −60…69 −22…−13 
LLOC −30…21 −79…−70 −22…−13 
RaFFG 37…46 −36…−45 −22…−13 
RmFFG 41…50 −55…−46 −22…−13 
RpFFG 38…47 −66…−57 −22…−13 
RLOC 32…41 −86…−77 −22…13 
MT LaFFG −38…−29 −43…−34 −21…−12 
LmFFG −36…−27 −54…−46 −21…−12 
LpFFG −34…−25 −65…−56 −21…−12 
LLOC −29…−20 −85…−77 −21…−12 
RaFFG 32…41 −44…−35 −24…−15 
RmFFG 32…41 −56…−47 −24…−15 
RpFFG 28…37 −69…−60 −24…−15 
RLOC 19…28 −91…−82 −24…−15 
EC LaFFG −44…−35 −41…−32 −25…−16 
LmFFG −42…−33 −55…−46 −25…−16 
LpFFG −41…−32 −65…−56 −25…−16 
LLOC −32…−23 −90…−81 −25…−16 
RaFFG 37…47 −42…−33 −25…−16 
RmFFG 38…47 −55…−46 −25…−16 
RpFFG 37…46 −67…−58 −25…−16 
RLOC 25…34 −92…−82 −25…−16 
JF LaFFG −28…−19 −42…−33 −16…−7 
LmFFG −25…−16 −53…−45 −16…−7 
LpFFG −24…−15 −66…57 −16…−7 
LLOC −24…−15 −91…−82 −13…−4 
RaFFG 41…40 −36…−27 −15…−6 
RmFFG 39…48 −49…40 −15…−6 
RpFFG 38…47 −63…−54 −15…−6 
RLOC 34…43 −81…−73 −15…−6 
DL LaFFG −47…−38 −37…−28 −26…−17 
LmFFG −44…−35 −51…−42 −26…−17 
LpFFG −44…−35 −51…−42 −26…−17 
LLOC −36…−26 −85…−76 −26…−17 
RaFFG 36…45 −39…−30 −27…−18 
RmFFG 36…45 −53…−44 −27…−18 
RpFFG 36…45 −67…−58 −27…−18 
RLOC 23…30 −85…−76 −27…−17 
AB LaFFG −44…−34 −38…−29 −21…−12 
LmFFG −41…−32 −51…−42 −21…−12 
LpFFG −40…−31 −65…−56 −21…−12 
LLOC −34…−25 −83…−74 −23…−14 
RaFFG 41…50 −26…−17 −23…−14 
RmFFG 38…47 −40…−31 −23…−14 
RpFFG 38…47 −54…−45 −23…−14 
RLOC 34…43 −82…−73 −23…−14 
EM LaFFG −42…−33 −36…−27 −26…−17 
LmFFG −45…−36 −47…−38 −26…−17 
LpFFG −44…−35 −62…−54 −26…−17 
LLOC −39…−30 −87…−78 −17…−8 
RaFFG 37…48 −32…−23 −26…−17 
RmFFG 36…45 −49…−40 −26…−17 
RpFFG 35…44 −63…−54 −26…−17 
RLOC 30…39 −87…−78 −17…−8 
BM LaFFG −42…−33 −38…−29 −26…−17 
LmFFG −41…−32 −52…−43 −26…−17 
LpFFG −40…−31 −70…−61 −26…−17 
LLOC −31…−22 −87…−78 −26…−17 
RaFFG 36…45 −37…−28 −26…−17 
RmFFG 36…45 −42…−50 −26…−17 
RpFFG 34…43 −63…−54 −26…−17 
RLOC 24…33 −84…−75 −22…−13 
PM LaFFG −41…−32 −37…−27 −34…−25 
LmFFG −41…−32 −50…−41 −34…−25 
LpFFG −40…−31 −65…−56 −34…−25 
LLOC −34…−25 −86…−77 −34…−25 
RaFFG 38…47 −34…−25 −29…−20 
RmFFG 38…47 −46…−36 −29…−20 
RpFFG 38…47 −57…−48 −29…−20 
RLOC 35…44 −80…−71 −29…−20 
ML LaFFG −37…−28 −41…−32 −26…−17 
LmFFG −36…−27 −55…−47 −26…−17 
LpFFG −35…−26 −68…−59 −26…−17 
LLOC −38…−29 −84…−76 −26…−17 
RaFFG 35…44 −30…−21 −30…−21 
RmFFG 35…44 −44…−35 −30…−21 
RpFFG 34…44 −60…−51 −30…−21 
RLOC 24…33 −87…−78 −30…−21 
BC LaFFG −42…−33 −37…−26 −20…−11 
LmFFG −42…−33 −50…−41 −20…−11 
LpFFG −38…−29 −61…−52 −20…−11 
LLOC −29…−20 −91…−82 −16…−7 
RaFFG 38…47 −37…−28 −20…−11 
RmFFG 37…46 −50…−41 −20…−11 
RpFFG 37…46 −63…−55 −20…−11 
RLOC 33…42 −88…−79 −13…−4 
AM LaFFG −38…−29 −34…−25 −25…−16 
LmFFG −40…−31 −45…−36 −25…−16 
LpFFG −35…−26 −54…−46 −25…−16 
LLOC −51…−42 −73…−64 −25…−16 
RaFFG 37…46 −33…−24 −25…−16 
RmFFG 37…46 −44…−35 −25…−16 
RpFFG 34…43 −54…−46 −25…−16 
RLOC 43…−52 −72…−64 −19…−10 
IE LaFFG −41…−32 −40…−31 −23…−14 
LmFFG −38…−29 −51…−42 −23…−14 
LpFFG −37…−28 −62…−53 −23…−14 
LLOC −45…−26 −77…−68 −23…−14 
RaFFG 38…47 −38…−29 −23…−14 
RmFFG 34…43 −49…−40 −23…−14 
RpFFG 29…38 −64…−55 −26…−17 
RLOC 25…34 −89…80 −22…−13 
KH LaFFG −42…−33 −32…−25 −21…−12 
LmFFG −41…−32 −45…−36 −23…−14 
LpFFG −39…−30 −56…−47 −23…−14 
LLOC −47…−38 −73…−64 −23…14 
RaFFG 34…43 −32…−23 −22…−13 
RmFFG 35…44 −43…−34 −23…−14 
RpFFG 34…43 −55…−46 −23…−14 
RLOC 35…43 −80…−71 −19…−10 
AM2 LaFFG −38…−29 −43…−34 −21…−12 
LmFFG −36…−27 −55…−46 −21…−12 
LpFFG −36…−27 −66…−58 −21…−12 
LLOC −44…−35 −82…−73 −21…−12 
RaFFG 32…41 −41…−32 −24…−15 
RmFFG 33…42 −55…−46 −24…−15 
RpFFG 31…40 −66…−57 −17…−8 
RLOC 28…37 −86…−77 −17…−8 
CC LaFFG −35…−26 −42…−33 −22…−13 
LmFFG −34…−25 −55…−46 −22…−13 
LpFFG −33…−24 −65…−56 −22…−13 
LLOC −46…−37 −81…−72 −22…−13 
RaFFG 34…43 −44…−35 −22…−13 
RmFFG 32…41 −56…−47 −22…−13 
RpFFG 27…37 −67…−58 −22…−13 
RLOC 36…45 −87…−78 −22…−13 

This table presents individual participants' ROI coordinates.

Data were extracted from the primary visual cortex and compared across conditions but were not included in the overall analysis of the remaining ROIs, because this region served only as a control to test for sensitivity to category structure in a region that would not be predicted to show such sensitivity. We then extracted each individual's data from three ROIs within the left and right fusiform gyri and one ROI within the left and right LOC (eight ROIs in total). Average activation across the time course (excluding the first and last three time points) served as the dependent measure in a 2 (Frequency: high vs. low) × 2 (Variability: tight vs. variable) × 2 (Co-occurrence: linked vs. unlinked) × 4 (Region: anterior FFG vs. mid FFG vs. posterior FFG vs. LOC) × 2 (Hemisphere: left vs. right) repeated-measures ANOVA. Follow-up analyses on simple effects and a priori t tests were also conducted.
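
The dependent measure and the long-format layout for the repeated-measures ANOVA can be sketched as below. The simulated time courses stand in for the extracted ROI data, and the factor labels mirror the design; none of this is the study's actual analysis code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def block_mean(timecourse: np.ndarray) -> float:
    """Mean activation excluding the first and last three time points."""
    return float(timecourse[3:-3].mean())

rows = []
for subject in range(1, 18):                                        # 17 participants
    for region in ["aFFG", "mFFG", "pFFG", "LOC"]:
        for hemisphere in ["left", "right"]:
            for frequency in ["high", "low"]:
                for variability in ["tight", "variable"]:
                    for co_occurrence in ["linked", "unlinked"]:
                        tc = rng.normal(size=10)                     # simulated block time course
                        rows.append({"subject": subject, "region": region,
                                     "hemisphere": hemisphere, "frequency": frequency,
                                     "variability": variability, "co_occurrence": co_occurrence,
                                     "activation": block_mean(tc)})
long_data = pd.DataFrame(rows)
print(long_data.shape)                                               # (1088, 7)
```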

Categorization Performance

Proportions of “same” responses in the learning sessions were calculated for the three trial types: same object shape, same features, and no match (Figure 7). These data were submitted to a one-way ANOVA. There was a significant effect of Trial type, F(2, 32) = 650.41, p < .001, η2 = .946. Follow-up comparisons revealed that participants were more likely to categorize objects together when the object shape matched than when color and size matched, t(16) = 21.84, p < .001, d = 5.40, or when there was no match across the objects, t(16) = 39.99, p < .001, d = 9.61. There was no significant difference between feature matches and no matches, although there was a trend, t(16) = 1.833, p = .086, d = 0.503, with numerically higher “same” responses for feature match items. However, the proportions of “same” responses for both feature match and no match items were extremely low. Thus, not surprisingly, category formation reflected a preference for shape similarity in these overt behavioral responses.

Figure 7.

Proportion of “same” responses in the categorization task across each trial type, *p < .001. Error bars represent ±1 standard error of the mean.

Recognition Performance

Sensitivity was calculated for each participant as hits − false alarms, separately for each condition (Figure 8). Sensitivity was then submitted to a 2 (Frequency: high vs. low) × 2 (Variability: variable vs. tight) repeated-measures ANOVA. There was no main effect of Variability, F(1, 16) = 1.225, p = .285, η2 = .071. There was, however, a main effect of Frequency, F(1, 16) = 13.88, p = .002, η2 = .464, with high-frequency items having higher accuracy than low-frequency items. The Frequency × Variability interaction was not significant, F(1, 16) = 0.585, p = .455, η2 = .035.
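
For illustration, the 2 × 2 repeated-measures ANOVA on sensitivity could be run as below with simulated placeholder values in place of the study's data; statsmodels' AnovaRM is one of several tools that fit this model.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = [{"subject": s, "frequency": f, "variability": v,
         "sensitivity": rng.normal(0.6, 0.1)}                # placeholder values, not real data
        for s in range(1, 18)                                # 17 participants
        for f in ["high", "low"]
        for v in ["variable", "tight"]]
data = pd.DataFrame(rows)

model = AnovaRM(data, depvar="sensitivity", subject="subject",
                within=["frequency", "variability"]).fit()
print(model)                                                 # F tests for main effects and interaction
```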

Figure 8.

Sensitivity (hits − false alarms) by frequency collapsing across variability in the recognition task, *p < .01. Error bars represent ±1 standard error of the mean.

We also examined RTs on correct responses for learned items by performing a 2 (Frequency: high vs. low) × 2 (Variability: variable vs. tight) repeated-measures ANOVA (Figure 9). There were no main effects (ps > .25), and the interaction was not significant, F(1, 16) = 1.62, p = .221.

Figure 9.

Comparison of high, low, and new across variable and linked items in the VTC ROIs (Figure 5, *p < .05). Error bars represent ±1 standard error of the mean.

fMRI Data

Data Localization

The whole-brain contrast of learned object categories > fixation revealed significant activation differences in two large clusters spanning the left and right ventral temporal and occipital cortices (see Figure 5 and Table 1) when using a voxel-wise error rate of p < .001. We corrected for multiple comparisons by using the BrainVoyager cluster threshold estimator plug-in tool, choosing a whole-brain false-positive rate of p < .05, which resulted in a cluster correction of six contiguous voxels at a voxel-wise error rate of p < .001. Thus, there is evidence that ventral temporal regions such as the fusiform gyrus and the LOC are involved in learning these novel symbols, which served as our justification for placing ROIs throughout these regions. See Figure 6 for a schematic of the ROI placement within these regions.

ROI Analyses

Familiar versus new.

We first examined the overall effect of familiarity to determine whether the neural ROIs distinguished between the implicitly learned and previously unseen objects. Data were therefore extracted from the eight ROIs for three conditions: [high variable linked], [low variable linked], and [new variable linked]. These conditions were selected to examine learning differences while equating variability and co-occurrence. Resultant data were then analyzed via a one-way ANOVA. Planned comparisons were then performed to better understand the role of familiarity.

There was a significant effect of Condition (sphericity violated, p = .022; Greenhouse–Geisser corrected: F(1.43, 22.90) = 4.47, p = .034, η2 = .217; see Figure 10). Follow-up comparisons revealed no difference between high- and low-frequency items, although activity for low-frequency items was numerically greater, t(16) = 1.53, p = .145, d = 0.383. New items demonstrated significantly greater activity than high-frequency items, t(16) = 2.33, p = .033, d = 0.565. Similarly, new items demonstrated greater activation than low-frequency items, but this only trended toward significance, t(16) = 2.06, p = .056, d = 0.477. These initial results suggested that the VTC was sensitive to the difference between the implicitly learned items and unseen items, but did not reveal differential responding to high- versus low-frequency items, implying that there were no effects of adaptation.

Figure 10.

Main effect of Variability collapsed across both hemispheres, all regions, and all other structural features, p < .01. Error bars represent ±1 standard error of the mean.

Comparison of Structural Features

Our main goal was to examine how the three structural features that were learned during the training sessions (frequency, variability, and co-occurrences) impacted the brain regions involved in processing object categories. ROIs were the same as in the previous analysis except that, here, we also analyzed the data from primary visual cortex separately from the overall model to test whether there was differential responding based on category structure in this region. In primary visual cortex, we first performed a 2 (Frequency: high vs. low) × 2 (Variability: tight vs. variable) × 2 (Co-occurrence: linked vs. unlinked) × 2 (Hemisphere: left vs. right) repeated-measures ANOVA. There were no main effects of Frequency, F(1, 16) = 0.54, ns; Variability, F(1, 16) = 0.32, ns; Co-occurrence, F(1, 16) = 0.21, ns; or Hemisphere, F(1, 16) = 0.43, ns; and no interactions among the variables (all Fs < 1.0). Thus, primary visual areas did not show sensitivity to category structure in this design, but responded with a similar amplitude to all the presented objects.

The data extracted from the remaining ROIs were then analyzed via a 2 (Frequency: high vs. low) × 2 (Variability: tight vs. variable) × 2 (Co-occurrence: linked vs. unlinked) × 4 (Region: anterior FFG vs. mid FFG vs. posterior FFG vs. LOC) × 2 (Hemisphere: left vs. right) repeated-measures ANOVA. Planned follow-up comparisons were performed for significant interactions and main effects.

First, the main effect of Frequency trended toward significance, F(1, 16) = 3.61, p = .071, η2 = .184, with low-frequency object categories having numerically greater BOLD activation than high-frequency object categories. There was a main effect of Variability, F(1, 16) = 11.97, p = .003, η2 = .428, with greater activation for variable object categories compared to tight object categories (Figure 11). There was no main effect of Co-occurrence (p > .20).

Figure 11.

Main effect of Region collapsing across hemispheres and structural features with follow-up comparisons, *p < .05. Error bars represent ±1 standard error of the mean.

In terms of brain areas, there was a significant main effect of Region (sphericity violated, p = .001; Greenhouse–Geisser corrected: F(1.92, 30.71) = 40.29, p < .001, η2 = .716; see Figure 12). We then performed planned comparisons to better understand the effect of region in processing object categories. In general, this main effect reflected the pattern that activation while processing object categories increased as the ROIs were placed more posteriorly. Specifically, the LOC showed higher activation than all other regions, ts(16) > 5.30, ps < .001, ds > 1.22. The posterior fusiform gyrus showed significantly greater activation than both the mid and anterior fusiform gyri, ts(16) > 2.70, ps < .016, ds > 0.657. Finally, the mid fusiform was greater than the anterior fusiform, t(16) = 4.66, p < .001, d = 1.18. There was also a main effect of Hemisphere, F(1, 16) = 11.54, p = .004, η2 = .419, with greater activation across the right hemisphere while processing object categories (see Figure 12).
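
The planned paired comparisons above (e.g., LOC versus each fusiform subdivision) amount to repeated-measures t tests with an accompanying effect size. The sketch below shows one such comparison on simulated per-participant means, with Cohen's d computed from the difference scores; the study's exact effect-size formula is not stated, so that choice is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
loc_mean = rng.normal(1.0, 0.3, size=17)            # simulated per-participant ROI means
posterior_ffg_mean = rng.normal(0.7, 0.3, size=17)

t, p = stats.ttest_rel(loc_mean, posterior_ffg_mean)
diff = loc_mean - posterior_ffg_mean
d = diff.mean() / diff.std(ddof=1)                  # Cohen's d from paired difference scores
print(f"t(16) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```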

Figure 12.

Main effect of Hemisphere collapsing across regions and structural features, p < .01. Error bars represent ±1 standard error of the mean.

We then examined interactions among our factors. No five-way or four-way interactions were significant. Two interactions were significant. First, there was a significant Variability × Co-occurrence interaction, F(1, 16) = 7.60, p = .014, η2 = .322 (Figure 13). Specifically, there was significantly greater activation for processing variable compared to tight object categories when the object features were also unlinked, t(16) = 4.23, p < .001, d = 1.10. No other differences were significant (ps > .075). In short, the interaction was driven by greater activity for variable-unlinked compared to variable-linked items. This suggests that variability may impact the detection of relevant co-occurrences of features.

Figure 13.

A 2 (Variability: variable vs. tight) × 2 (Co-occurrence: linked vs. unlinked) interaction collapsing across frequency, region, and hemisphere with follow-up comparisons, *p < .001. Error bars represent ±1 standard error of the mean.

There was also a significant Variability × Co-occurrence × Region interaction (sphericity violated, p = .021; corrected F(2.25, 36) = 4.11, p = .021, η2 = .204). We performed further analyses examining the Variability × Co-occurrence interaction within individual regions (see Figure 14). There was no significant interaction in the anterior fusiform gyrus (p = .403). However, there were significant Variability × Co-occurrence interactions within the middle fusiform, F(1, 16) = 4.82, p = .043, η2 = .221; posterior fusiform, F(1, 16) = 11.26, p = .004, η2 = .413; and the LOC, F(1, 16) = 4.98, p = .040, η2 = .237. Within the middle fusiform gyrus, the interaction was driven by greater activity for variable-unlinked items compared to tight-unlinked items, t(16) = 2.93, p = .010, d = 0.710. No other comparisons were significant in this region (ps > .10). Within the posterior fusiform gyrus, the pattern of the interaction was more complex. First, variability resulted in greater activation for unlinked items, t(16) = 4.46, p < .001, d = 1.09, but no differences for linked items (p > .150). Comparing across co-occurrence levels revealed greater activation for unlinked variable compared to linked variable items, t(16) = 2.37, p = .030, d = 0.466. However, tight categories showed the reverse pattern, with tight-linked being greater than tight-unlinked, t(16) = 2.332, p = .033, d = 0.565. Within the LOC, the pattern of the interaction was similar to the middle fusiform gyrus: The interaction was driven by greater activity for variable-unlinked items compared to tight-unlinked items, t(16) = 4.27, p = .001, d = 1.05. No other comparisons were significant in this region (ps > .10). In summary, variability is related to detecting the relevant co-occurrences of object features, and this detection is primarily associated with the posterior fusiform gyrus.

Figure 14.

A 2 (Variability: variable vs. tight) × 2 (Co-occurrence: linked vs. unlinked) × 4 (Region: anterior vs. middle vs. posterior vs. LOC) interaction with follow-up comparisons, *p < .05, **p < .01. Error bars represent ±1 standard error of the mean.

To better understand the role that structural aspects of categories play in forming object categories, we examined the responsiveness of the ventral temporal cortex to three previously learned structural aspects: frequency of exemplars, variability among exemplars, and co-occurrences among exemplar features. To achieve this goal, we created a set of metrically organized object categories through which we could manipulate and quantify these properties. Participants were then exposed to the object categories during an implicit learning task. Following this task, participants completed a recognition task as well as neuroimaging sessions during which they observed learned and unlearned object categories composed of different structures. Through this paradigm, we were able to demonstrate not only that the fusiform gyrus and the LOC are sensitive to variability (and, to a lesser extent, frequency), but also that some structural elements interact to impact how these brain regions process the categories. Our results can be summarized by three critical contributions: (1) Variability among category members influences the detection of co-occurrences between object features. (2) This detection is also modulated by brain region, with the posterior fusiform gyrus being especially sensitive to the variability–co-occurrence relationship. (3) Although shape frequency within a category affects overt measures of recognition and has some effect on the BOLD signal, it does not interact in the same manner as variability and co-occurrence.

Frequency

Frequency and object typicality have long been known to play a role in the perception and learning of object categories. As previously noted, frequency and typicality are mutually related: Typicality judgments increase as category members are presented at higher frequencies (Nosofsky, 1988). Frequency also appears to be an important factor throughout development, from infants' early word acquisition to adults' categorization accuracy (Clerkin et al., 2017; Nosofsky, 1988). Past neuroimaging findings have implicated the fusiform gyrus and the LOC in typicality measures of category representation. For example, Davis and Poldrack (2014) created a category stimulus space that allowed for the manipulation of exemplar features such as typicality; patterns of activation for typical members were more similar to each other throughout the ventral temporal and occipital regions than they were to patterns for atypical exemplars. In addition, Iordan et al. (2016) found that representational similarity in the LOC decreased as the typicality of exemplars decreased. Our results corroborate these findings by suggesting that “surprising” or atypical object categories may activate object-category learning systems more than well-learned categories, as evidenced by the greater activation for low-frequency and new categories compared with higher-frequency categories. This interpretation of our neural data is in line with our behavioral data, where lower-frequency items resulted in poorer accuracy, suggesting that they leave weaker memory traces. Thus, the decreased responsiveness to high-frequency (and, to a lesser extent, low-frequency) categories compared with new categories may reflect more established representations of the learned categories. In summary, typicality and frequency may be proxies for object familiarity that shape the brain systems responsible for processing object categories, their neural representations, and overt recognition.
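
As a toy illustration of how presentation frequency can yield graded typicality in exemplar-style accounts (in the spirit of Nosofsky, 1988, but not his exact formulation), the sketch below scores an item's typicality as its frequency-weighted summed similarity to stored category exemplars. The exemplar coordinates, frequencies, and scaling parameter are all hypothetical.

```python
# Toy frequency-weighted typicality score (hypothetical exemplar space).
import numpy as np

def typicality(item, exemplars, frequencies, c=1.0):
    """Summed similarity of `item` to stored exemplars, weighted by how often
    each exemplar was encountered (exponential similarity kernel)."""
    dists = np.linalg.norm(exemplars - item, axis=1)
    return float(np.sum(frequencies * np.exp(-c * dists)))

exemplars = np.array([[0.0, 0.0], [0.2, 0.1], [1.5, 1.4]])  # stored category members
high_freq = np.array([10, 10, 1])  # the two left-hand exemplars were seen often
low_freq = np.array([1, 1, 1])     # every exemplar seen equally rarely

probe = np.array([0.1, 0.05])      # a probe near the frequent exemplars
print(typicality(probe, exemplars, high_freq))  # larger: probe looks highly typical
print(typicality(probe, exemplars, low_freq))   # smaller: weaker frequency support
```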

Variability and Co-occurrences

Variability has also been established as a factor in the mechanisms supporting category representation. Behaviorally, variability is known to support object recognition, categorization, and generalization across the life span (Li & James, 2016; Perry et al., 2010; Harman & Humphrey, 1999). Our findings demonstrate that the ventral temporal cortex is involved in processing this variability and corroborate previous findings in symbol and category formation (Plebanek & James, 2021; Vinci-Booher et al., 2019; James, 2017; James & Engelhardt, 2012). We believe that this variability is most important in forming the initial representation of a category.

However, our findings point to another role for variability: identifying relevant co-occurrences within the category structure. Research has already established that the brain is primed to extract structural regularities even when the learner is not explicitly aware of them (Turk-Browne et al., 2009). Our findings suggest that variability may make feature co-occurrences a stronger component of the object category representation. Specifically, unlinked features resulted in greater activation when the categories were also variable, whereas linked features were processed similarly across variable and tight category structures.
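
To make the linked/unlinked distinction concrete, the sketch below estimates how predictive one object feature is of another across a set of exemplars using simple conditional probabilities: a perfectly linked pair co-occurs deterministically, whereas an unlinked feature does not. The discrete feature coding here is a hypothetical stand-in for our stimulus structure, not the actual stimulus set.

```python
# Minimal sketch: quantifying feature co-occurrence across exemplars (hypothetical coding).
from collections import Counter
from itertools import combinations

# Each exemplar is a set of discrete features; "spike_tip" and "curved_base"
# stand in for a linked pair, while "dot_texture" is unlinked.
exemplars = [
    {"spike_tip", "curved_base", "dot_texture"},
    {"spike_tip", "curved_base"},
    {"spike_tip", "curved_base", "wide_stance"},
    {"flat_tip", "curved_base", "dot_texture"},
]

single = Counter(f for ex in exemplars for f in ex)
pairs = Counter(frozenset(p) for ex in exemplars for p in combinations(sorted(ex), 2))

def p_cooccur(a, b):
    """Estimated P(feature b present | feature a present) over the exemplar set."""
    return pairs[frozenset((a, b))] / single[a]

print(p_cooccur("spike_tip", "curved_base"))  # 1.00: linked, fully predictive
print(p_cooccur("spike_tip", "dot_texture"))  # 0.33: unlinked, weakly predictive
```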

Previous findings in category generalization support this role. Plebanek and James (2021) found that providing adults and 8-year-olds with variability during category learning led to generalization (indexed by increased brain activation) via the feature that was invariant, whereas exposure to highly similar exemplars led to generalization based on the overall appearance of the exemplar. Taken together, these and our current findings suggest that category structure emerges from variability. Furthermore, this structure may be representative of co-occurrences: co-occurrences in time and space, between features and category membership, or between features themselves.

Categories and the Brain

The neural correlates of category learning have long been debated. At the heart of this debate is the origin of category representations in the brain. Some researchers propose functionally specialized centers such as the fusiform face area (Kanwisher, McDermott, & Chun, 1997), whereas others propose that process-driven expertise with categories drives neural specialization (Gauthier, Skudlarski, Gore, & Anderson, 2000). An additional candidate theory proposes that the brain represents category information in overlapping patterns distributed throughout the ventral temporal cortex (Haxby et al., 2001). These theories place different burdens on the role of the category. Specialization theories suggest that something inherent to the category triggers domain-specific brain systems (Kanwisher, 2017). Alternatively, these specialized regions may encode other information about nonspecialized categories (Haxby et al., 2001). Thus, these accounts conflict over how brain regions and categories interact to dictate category formation.

Our findings suggest a different pathway toward category representation: The internal structure of categories recruits different neural systems. Specifically, we identified sensitivity to variability and feature co-occurrence that increased in the posterior fusiform gyrus relative to other regions. This finding parallels past research profiling the fusiform gyrus' responsiveness to letters and letter strings. For example, James, James, Jobard, Wong, and Gauthier (2005) found that the left anterior fusiform gyrus was selective for individual letters, whereas the posterior fusiform gyrus was selective for strings of letters. Other work on the organization of the brain regions involved in processing letters has also supported a gradient-style organization, although the exact distribution of sensitivity remains debated (Vinckier et al., 2007). More broadly, the occipito-temporal cortex may also show graded sensitivity to the eccentricity of objects (Hasson, Levy, Behrmann, Hendler, & Malach, 2002). Our results suggest that a potential explanation for this heterogeneity within brain regions is the subtle statistical differences present in object categories, most likely those stemming from variability.

Thus, the recruitment of different regions throughout the ventral temporal cortices may also reflect the extraction of the internal, statistical structures of categories that guide the formation of representations. In our study, structural elements such as variability may be closely tied to perceptual features and thus guide both the patterns of similarity in the brain and the systems that process categories. Future evaluations of this account may therefore be supplemented by more distributed approaches to category representation, which are guided, in part, by perceptual elements that reflect the acquired stimulus space (Kriegeskorte, Mur, & Bandettini, 2008).
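
Because we point to representational similarity analysis (Kriegeskorte et al., 2008) as a complementary way to evaluate this account, the sketch below builds a correlation-distance representational dissimilarity matrix from condition-wise activation patterns. The pattern matrix is random placeholder data and the condition labels are hypothetical; this illustrates the general RSA computation, not an analysis reported in this paper.

```python
# Minimal RSA sketch: correlation-distance RDM over condition patterns (placeholder data).
import numpy as np

rng = np.random.default_rng(1)

# Rows = conditions (e.g., the four Variability x Co-occurrence cells),
# columns = voxels in a hypothetical ROI.
patterns = rng.normal(size=(4, 200))

# Representational dissimilarity matrix: 1 - Pearson correlation between condition patterns.
rdm = 1.0 - np.corrcoef(patterns)

labels = ["var-linked", "var-unlinked", "tight-linked", "tight-unlinked"]
for lab, row in zip(labels, rdm):
    print(lab, np.round(row, 2))
```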

Conclusions

Object categories have rich internal structures that may impact the brain systems recruited to learn novel categories. Here, we created metrically organized categories that allowed us to operationalize the variability, frequency, and co-occurrence structures of novel categories. We demonstrated that the fusiform gyrus and the LOC are sensitive to these structural elements. Moreover, sensitivity to structural properties, in particular the variability–co-occurrence relationship, increases as regions move posteriorly through the fusiform gyrus. In short, we have demonstrated that the internal statistics of object categories are critical in learning. Thus, future research should not take for granted the role of structure when exploring category learning dynamics.

This project was supported by National Institutes of Health 2 T32 grant HD 007475-21 and by the Indiana University Office of the Vice President for Research Emerging Area of Research Initiative, Learning: Brains, Machines, and Children. We thank Annie Abioye, Julia Lambert, and Lauren Wilkins for their assistance on this project. No funding sources were involved in the study design, analysis, or interpretation of the data, in the writing of this paper, or in the decision to submit this paper for publication.

The final version of this work is published after the death of the first author, Daniel Plebanek. Science has lost a bright young light in his passing.

Reprint requests should be sent to Karin H. James, Department of Psychological and Brain Sciences, Indiana University Bloomington, 1101 East 10th St., Bloomington, IN 47405-7000, or via e-mail: [email protected].

Daniel J. Plebanek: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Project administration; Software; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Karin H. James: Conceptualization; Formal analysis; Funding acquisition; Methodology; Resources; Supervision; Visualization; Writing—Review & editing.

Karin H. James, National Institutes of Health 2 T32, grant number: HD 007475-21. Karin H. James, Indiana University Office of the Vice President for Research Emerging Area of Research Initiative, grant number: Learning: Brains, Machines, and Children.

A retrospective analysis of the citations in every article published in this journal from 2010 to 2020 has revealed a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .408, W(oman)/M = .335, M/W = .108, and W/W = .149, the comparable proportions for the articles that these authorship teams cited were M/M = .579, W/M = .243, M/W = .102, and W/W = .076 (Fulvio et al., JoCN, 33:1, pp. 3–7). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.

References

Ashby, F. G., & Valentin, V. V. (2017). Multiple systems of perceptual category learning: Theory and cognitive tests. In H. Cohen & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (2nd ed., pp. 157–188). Cambridge, MA: Elsevier.

Bullmore, E. T., Brammer, M. J., Rabe-Hesketh, S., Curtis, V. A., Morris, R. G., Williams, S. C., et al. (1999). Methods for diagnosis and treatment of stimulus-correlated motion in generic brain activation studies using fMRI. Human Brain Mapping, 7, 38–48.

Clerkin, E. M., Hart, E., Rehg, J. M., Yu, C., & Smith, L. B. (2017). Real-world visual statistics and infants' first-learned object names. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 372, 20160055.

Davis, T., & Poldrack, R. A. (2014). Quantifying the internal structure of categories using a neural typicality measure. Cerebral Cortex, 24, 1720–1737.

Gauthier, I., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3, 191–197.

Grill-Spector, K., Knouf, N., & Kanwisher, N. (2004). The fusiform face area subserves face perception, not generic within-category identification. Nature Neuroscience, 7, 555–562.

Grill-Spector, K., & Weiner, K. S. (2014). The functional architecture of the ventral temporal cortex and its role in categorization. Nature Reviews Neuroscience, 15, 536–548.

Harman, K. L., & Humphrey, G. K. (1999). Encoding ‘regular’ and ‘random’ sequences of views of novel three-dimensional objects. Perception, 28, 601–615.

Hasson, U., Levy, I., Behrmann, M., Hendler, T., & Malach, R. (2002). Eccentricity bias as an organizing principle for human high-order object areas. Neuron, 34, 479–490.

Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430.

Hinds, O. P., Rajendran, N., Polimeni, J. R., Augustinack, J. C., Wiggins, G., Wald, L. L., et al. (2008). Accurate prediction of V1 location from cortical folds in a surface coordinate system. Neuroimage, 39, 1585–1599.

Iordan, M. C., Greene, M. R., Beck, D. M., & Fei-Fei, L. (2016). Typicality sharpens category representations in object-selective cortex. Neuroimage, 134, 170–179.

James, K. H. (2010). Sensori-motor experience leads to changes in visual processing in the developing brain. Developmental Science, 13, 279–288.

James, K. H. (2017). The importance of handwriting experience on the development of the literate brain. Current Directions in Psychological Science, 26, 502–508.

James, K. H., & Atwood, T. P. (2009). The role of sensorimotor learning in the perception of letter-like forms: Tracking the causes of neural specialization for letters. Cognitive Neuropsychology, 26, 91–110.

James, K. H., & Engelhardt, L. (2012). The effects of handwriting experience on functional brain development in pre-literate children. Trends in Neuroscience and Education, 1, 32–42.

James, K. H., James, T. W., Jobard, G., Wong, A. C. N., & Gauthier, I. (2005). Letter processing in the visual system: Different activation patterns for single letters and strings. Cognitive, Affective, & Behavioral Neuroscience, 5, 452–466.

James, K. H., Jones, S. S., Swain, S., Pereira, A., & Smith, L. B. (2014). Some views are better than others: Evidence for a visual bias in object views self-generated by toddlers. Developmental Science, 17, 338–351.

James, K. H., & Swain, S. N. (2011). Only self-generated actions create sensori-motor systems in the developing brain. Developmental Science, 14, 673–678.

Kanwisher, N. (2017). The quest for the FFA and where it led. Journal of Neuroscience, 37, 1056–1061.

Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.

Kloos, H., & Sloutsky, V. M. (2008). What's behind different kinds of kinds: Effects of statistical density on learning and representation of categories. Journal of Experimental Psychology: General, 137, 52–72.

Kraebel, K. S., & Gerhardstein, P. C. (2006). Three-month-old infants' object recognition across changes in viewpoint using an operant learning procedure. Infant Behavior & Development, 29, 11–23.

Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis—Connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 4.

Li, J. X., & James, K. H. (2016). Handwriting generates variable visual output to facilitate symbol learning. Journal of Experimental Psychology: General, 145, 298–313.

Nosofsky, R. M. (1988). Similarity, frequency, and category representations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 54–65.

Perry, L. K., Samuelson, L. K., Malloy, L. M., & Schiffer, R. N. (2010). Learn locally, think globally: Exemplar variability supports higher-order generalization and word learning. Psychological Science, 21, 1894–1902.

Plebanek, D. J., & James, K. H. (2021). Category structure guides the formation of neural representations. Experimental Brain Research, 239, 1667–1684.

Rosch, E., & Mervis, C. B. (1975). Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7, 573–605.

Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439.

Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926–1928.

Schapiro, A. C., Kustner, L. V., & Turk-Browne, N. B. (2012). Shaping of object representations in the human medial temporal lobe based on temporal regularities. Current Biology, 22, 1622–1627.

Sherman, B. E., Graves, K. N., & Turk-Browne, N. B. (2020). The prevalence and importance of statistical learning in human cognition and behavior. Current Opinion in Behavioral Sciences, 32, 15–20.

Slone, L. K., Smith, L. B., & Yu, C. (2019). Self-generated variability in object images predicts vocabulary growth. Developmental Science, 22, e12816.

Sloutsky, V. M. (2010). From perceptual categories to concepts: What develops? Cognitive Science, 34, 1244–1286.

Smith, L. B., Jayaraman, S., Clerkin, E., & Yu, C. (2018). The developing infant creates a curriculum for statistical learning. Trends in Cognitive Sciences, 22, 325–336.

Smith, L., & Yu, C. (2008). Infants rapidly learn word-referent mappings via cross-situational statistics. Cognition, 106, 1558–1568.

Stansbury, D. E., Naselaris, T., & Gallant, J. L. (2013). Natural scene statistics account for the representation of scene categories in human visual cortex. Neuron, 79, 1025–1034.

Talairach, J., & Tournoux, P. (1988). Co-planar stereotaxic atlas of the human brain. 3-Dimensional proportional system: An approach to cerebral imaging. Stuttgart: Thieme.

Turk-Browne, N. B., Scholl, B. J., Chun, M. M., & Johnson, M. K. (2009). Neural evidence of statistical learning: Efficient detection of visual regularities without awareness. Journal of Cognitive Neuroscience, 21, 1934–1945.

Vinberg, J., & Grill-Spector, K. (2008). Representation of shapes, edges, and surfaces across multiple cues in the human visual cortex. Journal of Neurophysiology, 99, 1380–1393.

Vinckier, F., Dehaene, S., Jobert, A., Dubus, J. P., Sigman, M., & Cohen, L. (2007). Hierarchical coding of letter strings in the ventral stream: Dissecting the inner organization of the visual word-form system. Neuron, 55, 143–156.

Vinci-Booher, S., Cheng, H., & James, K. H. (2019). An analysis of the brain systems involved with producing letters by hand. Journal of Cognitive Neuroscience, 31, 138–154.

Vinci-Booher, S., & James, K. H. (2020). Visual experiences during letter production contribute to the development of the neural systems supporting letter perception. Developmental Science, 23, e12965.

Wallis, G., & Bülthoff, H. (1999). Learning to recognize objects. Trends in Cognitive Sciences, 3, 22–31.

Wood, J. N. (2016). A smoothness constraint on the development of object recognition. Cognition, 153, 140–145.

Yu, C., & Smith, L. B. (2007). Rapid word learning under uncertainty via cross-situational statistics. Psychological Science, 18, 414–420.