Alex Clarke: 1–5 of 5 results
Subsequent Memory Effects in Cortical Pattern Similarity Differ by Semantic Class
Journal of Cognitive Neuroscience (2025) 37 (1): 155–166.
Published: 02 January 2025
Abstract
Although living and nonliving stimuli are known to rely on distinct brain regions during perception, it is largely unknown whether their episodic memory encoding mechanisms differ as well. To investigate this issue, we asked participants to encode object pictures (e.g., a picture of a tiger) and to retrieve them later in response to their names (e.g., the word “tiger”). For each of four semantic classes (living-animate, living-inanimate, nonliving-large, and nonliving-small), we examined differences in the similarity of activation patterns (neural pattern similarity [NPS]) for subsequently remembered versus forgotten items. Higher NPS for remembered items suggests an advantage for within-class item similarity, whereas lower NPS for remembered items indicates an advantage for item distinctiveness. We expected NPS within class-specific regions to be higher for remembered than for forgotten items. For example, the parahippocampal cortex has a well-known role in scene processing [Aminoff, E. M., Kveraga, K., & Bar, M. The role of the parahippocampal cortex in cognition. Trends in Cognitive Sciences, 17, 379–390, 2013], and the anterior temporal cortex and inferior frontal gyrus have well-known roles in object processing [Clarke, A., & Tyler, L. K. Object-specific semantic coding in human perirhinal cortex. Journal of Neuroscience, 34, 4766–4775, 2014]. As such, we expected higher NPS for remembered items in the scene- and object-related regions, respectively. Consistent with this hypothesis, in fusiform, parahippocampal, and retrosplenial regions, higher NPS predicted memory for subclasses of nonliving objects, whereas in the left inferior frontal and left retrosplenial regions, lower NPS predicted memory for subclasses of living objects. Taken together, the results support the idea that subsequent memory depends on a balance of similarity and distinctiveness and demonstrate that the neural mechanisms of episodic encoding differ across semantic categories.
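To make the analysis concrete, here is a minimal Python sketch of the neural pattern similarity (NPS) contrast described in the abstract: within-class pattern similarity is computed separately for subsequently remembered and forgotten items and then compared per semantic class. The data, array shapes, and class labels are simulated placeholders, not the authors' pipeline.

```python
# A minimal sketch (not the authors' code) of the NPS subsequent memory contrast.
import numpy as np

def mean_within_class_nps(patterns):
    """Mean pairwise Pearson correlation across items (items x voxels)."""
    if patterns.shape[0] < 2:
        return np.nan
    corr = np.corrcoef(patterns)                 # items x items correlation matrix
    return corr[np.triu_indices_from(corr, k=1)].mean()

def subsequent_memory_nps(patterns, remembered, classes):
    """For each semantic class, NPS(remembered) - NPS(forgotten)."""
    effects = {}
    for c in np.unique(classes):
        in_class = classes == c
        nps_rem = mean_within_class_nps(patterns[in_class & remembered])
        nps_forg = mean_within_class_nps(patterns[in_class & ~remembered])
        # > 0: similarity advantage; < 0: distinctiveness advantage
        effects[c] = nps_rem - nps_forg
    return effects

# Simulated example: 80 items x 200 voxels in one ROI
rng = np.random.default_rng(0)
patterns = rng.standard_normal((80, 200))
remembered = rng.random(80) > 0.5
classes = np.repeat(["living-animate", "living-inanimate",
                     "nonliving-large", "nonliving-small"], 20)
print(subsequent_memory_nps(patterns, remembered, classes))
```

A positive difference in a region indicates that within-class similarity benefits later memory there, while a negative difference indicates that distinctiveness does.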
Oscillatory Dynamics of Perceptual to Conceptual Transformations in the Ventral Visual Pathway
Journal of Cognitive Neuroscience (2018) 30 (11): 1590–1605.
Published: 01 November 2018
Abstract
Object recognition requires dynamic transformations of low-level visual inputs into complex semantic representations. Although this process depends on the ventral visual pathway, we lack an incremental account from low-level inputs to semantic representations and the mechanistic details of these dynamics. Here we combine computational models of vision and semantics and test the output of the incremental model against patterns of neural oscillations recorded with magnetoencephalography in humans. Representational similarity analysis showed that visual information was represented in low-frequency activity throughout the ventral visual pathway, and that semantic information was represented in theta activity. Furthermore, directed connectivity showed that visual information travels through feedforward connections, whereas visual information is transformed into semantic representations through feedforward and feedback activity, centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.
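As an illustration of the representational similarity analysis (RSA) logic described above, the following Python sketch correlates a model dissimilarity matrix (visual or semantic) with time-resolved neural dissimilarity matrices. All data are simulated and the function name is hypothetical; the authors' MEG pipeline is more involved.

```python
# A minimal sketch of time-resolved RSA, assuming precomputed dissimilarity matrices.
import numpy as np
from scipy.stats import spearmanr

def rsa_timecourse(neural_rdms, model_rdm):
    """Spearman correlation between a model RDM and the neural RDM at each time point.

    neural_rdms: (n_times, n_items, n_items) neural dissimilarity matrices
    model_rdm:   (n_items, n_items) model dissimilarity matrix
    """
    iu = np.triu_indices(model_rdm.shape[0], k=1)   # upper triangle only
    model_vec = model_rdm[iu]
    return np.array([spearmanr(rdm[iu], model_vec).correlation
                     for rdm in neural_rdms])

# Simulated example: compare a visual model and a semantic model
# against neural RDMs from one frequency band
rng = np.random.default_rng(1)
n_items, n_times = 60, 100
neural = rng.random((n_times, n_items, n_items))
visual_model = rng.random((n_items, n_items))
semantic_model = rng.random((n_items, n_items))
print(rsa_timecourse(neural, visual_model)[:5])
print(rsa_timecourse(neural, semantic_model)[:5])
```

Running the same comparison per frequency band is one way to ask which oscillatory signals carry visual versus semantic information.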
Learning Warps Object Representations in the Ventral Temporal Cortex
Journal of Cognitive Neuroscience (2016) 28 (7): 1010–1023.
Published: 01 July 2016
Abstract
The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in the representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., “made of wood,” “floats”) and spatial contextual associations (e.g., “found in gardens”) with novel objects. fMRI was used to examine VTC activity for the objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the objects' visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects and show that object representations can flexibly adapt as a consequence of learning, with the changes reflecting the specific kind of newly acquired information.
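The following is a minimal sketch of the before/after-learning pattern similarity comparison described above, assuming simulated VTC patterns and arbitrary context labels; it is not the authors' analysis code.

```python
# A minimal sketch: does pattern similarity among objects sharing a learned
# context increase from pre- to post-learning? Data are simulated placeholders.
import numpy as np

def context_similarity(patterns, contexts):
    """Mean correlation between items that share a context label."""
    corr = np.corrcoef(patterns)                          # items x items
    same_context = contexts[:, None] == contexts[None, :]
    mask = same_context & ~np.eye(len(contexts), dtype=bool)
    return corr[mask].mean()

rng = np.random.default_rng(2)
contexts = np.repeat(np.arange(6), 4)        # 24 novel objects, 6 learned contexts
pre = rng.standard_normal((24, 150))         # VTC patterns before learning
post = rng.standard_normal((24, 150))        # VTC patterns after learning
print("learning effect:",
      context_similarity(post, contexts) - context_similarity(pre, contexts))
```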
Objects and Categories: Feature Statistics and Object Processing in the Ventral Stream
Journal of Cognitive Neuroscience (2013) 25 (10): 1723–1735.
Published: 01 October 2013
Abstract
Recognizing an object involves more than just visual analyses; its meaning must also be decoded. Extensive research has shown that processing the visual properties of objects relies on a hierarchically organized stream in ventral occipitotemporal cortex, with increasingly complex visual features being coded from posterior to anterior sites, culminating in the perirhinal cortex (PRC) in the anteromedial temporal lobe (aMTL). The neurobiological principles of the conceptual analysis of objects remain more controversial. Much research has focused on two neural regions, the fusiform gyrus and the aMTL, both of which show semantic category differences, but of different types. fMRI studies show category differentiation in the fusiform gyrus, based on clusters of semantically similar objects, whereas category-specific deficits, specifically for living things, are associated with damage to the aMTL. These category-specific deficits for living things have been attributed to problems in differentiating between highly similar objects, a process that involves the PRC. To determine whether the PRC and the fusiform gyri contribute to different aspects of an object's meaning, with differentiation between confusable objects in the PRC and categorization based on object similarity in the fusiform gyri, we carried out an fMRI study of object processing based on a feature-based model that characterizes the degree of semantic similarity and difference between objects and object categories. Participants saw 388 objects for which feature statistic information was available and named the objects at the basic level while undergoing fMRI scanning. After controlling for the effects of visual information, we found that feature statistics that capture similarity between objects formed category clusters in the fusiform gyri, such that objects with many shared features (typical of living things) were associated with activity in the lateral fusiform gyri, whereas objects with fewer shared features (typical of nonliving things) were associated with activity in the medial fusiform gyri. Significantly, a feature statistic reflecting differentiation between highly similar objects, enabling object-specific representations, was associated with bilateral PRC activity. These results confirm that the statistical characteristics of conceptual object features are coded in the ventral stream, supporting a conceptual feature-based hierarchy, and integrating disparate findings of category responses in the fusiform gyri and category deficits in the aMTL into a unifying neurocognitive framework.
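The Python sketch below illustrates the general idea of concept-level feature statistics computed from a binary concept-by-feature matrix (e.g., property norms). The sharedness and distinctiveness measures here are simplified stand-ins for the statistics used in the study, and the matrix is randomly generated.

```python
# A simplified illustration of feature statistics from a concept x feature matrix.
# These measures approximate "how shared" versus "how distinctive" a concept's
# features are; they are not the exact statistics from the paper.
import numpy as np

rng = np.random.default_rng(3)
# 388 concepts x 500 binary features (1 = concept has the feature)
concept_features = (rng.random((388, 500)) < 0.05).astype(float)

# How many concepts possess each feature
feature_frequency = concept_features.sum(axis=0)

# Mean frequency of a concept's own features: high values mean many shared
# features (typical of living things in the feature-norm literature)
sharedness = ((concept_features * feature_frequency).sum(axis=1)
              / concept_features.sum(axis=1))

# Count of features unique to a single concept: a crude proxy for how easily
# a concept can be differentiated from close neighbours
distinctiveness = (concept_features * (feature_frequency == 1)).sum(axis=1)

print(sharedness[:5])
print(distinctiveness[:5])
```

Per-item values like these can then be entered as parametric regressors in an fMRI model, which is the general strategy the abstract describes.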
The Evolution of Meaning: Spatio-temporal Dynamics of Visual Object Recognition
Journal of Cognitive Neuroscience (2011) 23 (8): 1887–1899.
Published: 01 August 2011
Abstract
Research on the spatio-temporal dynamics of visual object recognition suggests a recurrent, interactive model whereby an initial feedforward sweep through the ventral stream to prefrontal cortex is followed by recurrent interactions. However, critical questions remain regarding the factors that mediate the degree of recurrent interaction necessary for meaningful object recognition. The novel prediction we test here is that recurrent interactivity is driven by increasing semantic integration demands, as defined by the complexity of the semantic information required by the task and the stimuli. To test this prediction, we recorded magnetoencephalography data while participants named living and nonliving objects during two naming tasks. We found that the spatio-temporal dynamics of neural activity were modulated by the level of semantic integration required. Specifically, source-reconstructed time courses and phase synchronization measures showed increased recurrent interactions as a function of semantic integration demands. These findings demonstrate that the cortical dynamics of object processing are modulated by the complexity of the semantic information required from the visual input.
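To illustrate one common phase synchronization measure of the kind referred to above, the sketch below computes the phase-locking value between two simulated source time courses across trials. The source labels are hypothetical, and the authors' connectivity analysis may use a different metric.

```python
# A minimal sketch of the phase-locking value (PLV) between two sources,
# computed across trials. Signals are simulated; not the authors' pipeline.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV across trials for two region time courses of shape (n_trials, n_times)."""
    phase_x = np.angle(hilbert(x, axis=-1))   # instantaneous phase of source x
    phase_y = np.angle(hilbert(y, axis=-1))   # instantaneous phase of source y
    # Consistency of the phase difference across trials, per time point
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y)), axis=0))

rng = np.random.default_rng(4)
ventral_src = rng.standard_normal((100, 300))   # e.g., anterior temporal source
frontal_src = rng.standard_normal((100, 300))   # e.g., prefrontal source
plv = phase_locking_value(ventral_src, frontal_src)
print(plv.shape, plv[:5])
```

Values near 1 indicate a stable phase relationship between the two sources across trials, which is the kind of evidence taken to reflect recurrent interaction between regions.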