1–8 of 8 results for Thomas A. Carlson
Journal of Cognitive Neuroscience (2022) 34 (2): 290–312.
Published: 05 January 2022
Abstract
Attention can be deployed in different ways: When searching for a taxi in New York City, we can decide where to attend (e.g., to the street) and what to attend to (e.g., yellow cars). Although we use the same word to describe both processes, nonhuman primate data suggest that these produce distinct effects on neural tuning. This has been challenging to assess in humans, but here we exploited the opportunity afforded by multivariate decoding of MEG data. We found that attending to an object at a particular location and attending to a particular object feature produced effects that interacted multiplicatively. The two types of attention induced distinct patterns of enhancement in occipital cortex, with feature-selective attention producing relatively more enhancement of small feature differences and spatial attention producing relatively larger effects for larger feature differences. An information flow analysis further showed that stimulus representations in occipital cortex were Granger-caused by coding in frontal cortices earlier in time and that the timing of this feedback matched the onset of attention effects. The data suggest that spatial and feature-selective attention rely on distinct neural mechanisms that arise from frontal-occipital information exchange, interacting multiplicatively to selectively enhance task-relevant information.
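The information flow analysis mentioned above rests on Granger causality: one signal "Granger-causes" another if its past improves prediction of the other beyond that signal's own past. Below is a minimal sketch of that test, assuming we already have per-region time courses (e.g., decoding performance or source activity); the array shapes, the built-in lag, and the region names are illustrative assumptions, not the authors' pipeline.

```python
# Granger-causality sketch: does a frontal time course predict a later
# occipital one? The series are simulated with a built-in 5-sample lag so
# the test has signal; all shapes and lags are illustrative assumptions.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n_times = 300
frontal = rng.standard_normal(n_times)
occipital = 0.6 * np.roll(frontal, 5) + 0.4 * rng.standard_normal(n_times)

# Column order is [effect, putative cause]: the test asks whether the second
# column helps predict the first beyond the first's own history.
results = grangercausalitytests(np.column_stack([occipital, frontal]), maxlag=10)
best_lag = min(results, key=lambda lag: results[lag][0]["ssr_ftest"][1])
print("most predictive lag:", best_lag)
```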
The Rapid Emergence of Auditory Object Representations in Cortex Reflect Central Acoustic Attributes
Journal of Cognitive Neuroscience (2020) 32 (1): 111–123.
Published: 01 January 2020
Abstract
Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
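The decoding here is time-resolved: a classifier is trained and scored independently at each time point of the MEG epochs, tracing when token information emerges. A minimal sketch of that approach with MNE-Python's sliding estimator, on simulated data; the epoch counts, channel count, and the 36-token-by-10-repeat design are assumptions for illustration.

```python
# Time-resolved decoding sketch: fit a classifier at every time point of
# simulated MEG epochs and score it with cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import SlidingEstimator, cross_val_multiscore

rng = np.random.default_rng(1)
X = rng.standard_normal((360, 160, 120))   # epochs x MEG channels x time samples
y = np.repeat(np.arange(36), 10)           # 36 sound tokens x 10 repeats each

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_multiscore(SlidingEstimator(clf, scoring="accuracy"), X, y, cv=5)
print("peak decoding at time sample:", scores.mean(axis=0).argmax())
```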
Journal of Cognitive Neuroscience (2017) 29 (12): 1995–2010.
Published: 01 December 2017
Abstract
Animacy is a robust organizing principle among object category representations in the human brain. Using multivariate pattern analysis methods, it has been shown that distance to the decision boundary of a classifier trained to discriminate neural activation patterns for animate and inanimate objects correlates with observer RTs for the same animacy categorization task [Ritchie, J. B., Tovar, D. A., & Carlson, T. A. Emerging object representations in the visual system predict reaction times for categorization. PLoS Computational Biology, 11, e1004316, 2015; Carlson, T. A., Ritchie, J. B., Kriegeskorte, N., Durvasula, S., & Ma, J. Reaction time for object categorization is predicted by representational distance. Journal of Cognitive Neuroscience, 26, 132–142, 2014]. Using MEG decoding, we tested whether the same relationship holds when a stimulus manipulation (degradation) increases task difficulty, which we predicted would systematically decrease the distance of activation patterns from the decision boundary and increase RTs. In addition, we tested whether distance to the classifier boundary correlates with drift rates in the linear ballistic accumulator [Brown, S. D., & Heathcote, A. The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57, 153–178, 2008]. We found that distance to the classifier boundary correlated with RT, accuracy, and drift rates in an animacy categorization task. Split by animacy, the correlations between brain and behavior were sustained longer over the time course for animate than for inanimate stimuli. Interestingly, when examining the distance to the classifier boundary during the peak correlation between brain and behavior, we found that degraded versions of animate, but not inanimate, objects had systematically shifted toward the classifier decision boundary as predicted. Our results support an asymmetry in the representation of animate and inanimate object categories in the human brain.
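A minimal sketch of the distance-to-boundary logic on simulated pattern vectors rather than MEG: train a linear animacy classifier, take each trial's distance from the decision boundary, and check the two predictions that degraded stimuli sit closer to the boundary and that distance tracks RT. All data, labels, and coefficients below are illustrative assumptions (in the actual analysis, distances would come from held-out cross-validated data).

```python
# Distance-to-boundary sketch: "degraded" trials get a weaker category signal,
# so they should fall closer to the classifier boundary, and distance should
# correlate negatively with simulated RTs.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_features = 400, 50
y = rng.integers(0, 2, n_trials)                  # 0 = inanimate, 1 = animate
degraded = rng.integers(0, 2, n_trials).astype(bool)
signal = np.where(degraded, 0.3, 0.8)             # weaker signal when degraded
X = rng.standard_normal((n_trials, n_features)) \
    + (2 * y - 1)[:, None] * signal[:, None]

clf = LinearSVC(max_iter=10_000).fit(X, y)
dist = np.abs(clf.decision_function(X)) / np.linalg.norm(clf.coef_)
rt = 0.6 - 0.05 * dist + 0.05 * rng.standard_normal(n_trials)  # toy RTs

print("mean distance, intact vs degraded:",
      dist[~degraded].mean(), dist[degraded].mean())
rho, p = spearmanr(dist, rt)
print(f"distance-RT correlation: rho = {rho:.2f}")
```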
Journal of Cognitive Neuroscience (2017) 29 (4): 677–697.
Published: 01 April 2017
Abstract
Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain–computer interfaces, these methods have only recently been applied to time-series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial-style review, we describe a broad set of options to inform future time-series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to “decode” different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time-series decoding experiments.
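One of the extensions the review covers, temporal generalization, trains a classifier at each time point and tests it at every other, yielding a train-time by test-time matrix whose shape reveals whether neural codes are transient or sustained. A minimal sketch assuming MNE-Python and simulated two-class epochs; the data shapes are illustrative.

```python
# Temporal-generalization sketch: a train-time x test-time decoding matrix
# over simulated two-class epochs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import GeneralizingEstimator, cross_val_multiscore

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 64, 80))  # epochs x channels x time samples
y = rng.integers(0, 2, 100)             # two hypothetical stimulus classes

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
gen = GeneralizingEstimator(clf, scoring="roc_auc")
scores = cross_val_multiscore(gen, X, y, cv=5).mean(axis=0)
print(scores.shape)                     # (n_train_times, n_test_times)
```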
Journal of Cognitive Neuroscience (2014) 26 (10): 2370–2384.
Published: 01 October 2014
Abstract
Objects occupy space. How does the brain represent the spatial location of objects? Retinotopic early visual cortex has precise location information but can only segment simple objects. On the other hand, higher visual areas can resolve complex objects but only have coarse location information. Thus, the coarse location of complex objects might be represented by either (a) feedback from higher areas to early retinotopic areas or (b) coarse position encoding in higher areas. We tested these alternatives by presenting various kinds of first- (edge-defined) and second-order (texture) objects. We applied multivariate classifiers to the pattern of EEG amplitudes across the scalp at a range of time points to trace the temporal dynamics of coarse location representation. For edge-defined objects, peak classification performance was high and early and thus attributable to the retinotopic layout of early visual cortex. For texture objects, it was low and late. Crucially, despite these differences in peak performance and timing, training a classifier on one object and testing it on others revealed that the topography at peak performance was the same for both first- and second-order objects. That is, the same location information, encoded by early visual areas, was available for both edge-defined and texture objects at different time points. These results indicate that locations of complex objects such as textures, although not represented in the bottom-up sweep, are encoded later by neural patterns resembling the bottom-up ones. We conclude that feedback mechanisms play an important role in coarse location representation of complex objects.
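The key move in this analysis is cross-decoding: fit a location classifier on one stimulus class and test it on another, so that above-chance transfer implies a shared location code. A sketch on simulated scalp patterns; the channel count, the two locations, and the shared-topography construction are illustrative assumptions.

```python
# Cross-decoding sketch: train a location classifier on simulated
# "edge-defined" trials and test it on "texture" trials that share the same
# underlying location topography. All arrays are illustrative placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n_trials, n_channels = 120, 64
loc = rng.integers(0, 2, n_trials)          # 0 = one location, 1 = the other
shared = (2 * loc - 1)[:, None] * 0.8       # common location-dependent signal
X_edge = shared + rng.standard_normal((n_trials, n_channels))
X_texture = shared + rng.standard_normal((n_trials, n_channels))

clf = LinearDiscriminantAnalysis().fit(X_edge, loc)
print("edge -> texture transfer accuracy:", clf.score(X_texture, loc))
```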
Journal of Cognitive Neuroscience (2014) 26 (1): 120–131.
Published: 01 January 2014
Abstract
In the ventral visual pathway, early visual areas encode light patterns on the retina in terms of image properties, for example, edges and color, whereas higher areas encode visual information in terms of objects and categories. At what point does semantic knowledge, as instantiated in human language, emerge? We examined this question by studying whether semantic similarity in language relates to the brain's organization of object representations in inferior temporal cortex (ITC), an area of the brain at the crux of several proposals describing how the brain might represent conceptual knowledge. Semantic relationships among words can be viewed as a geometrical structure with some pairs of words close in their meaning (e.g., man and boy) and other pairs more distant (e.g., man and tomato). ITC's representation of objects similarly can be viewed as a complex structure with some pairs of stimuli evoking similar patterns of activation (e.g., man and boy) and other pairs evoking very different patterns (e.g., man and tomato). In this study, we examined whether the geometry of visual object representations in ITC bears a correspondence to the geometry of semantic relationships between word labels used to describe the objects. We compared ITC's representation to semantic structure, evaluated by explicit ratings of semantic similarity and by five computational measures of semantic similarity. We show that the representational geometry of ITC—but not of earlier visual areas (V1)—is reflected both in explicit behavioral ratings of semantic similarity and also in measures of semantic similarity derived from word usage patterns in natural language. Our findings show that patterns of brain activity in ITC not only reflect the organization of visual information into objects but also represent objects in a format compatible with conceptual thought and language.
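The geometry comparison described here is representational similarity analysis: build a neural dissimilarity matrix over the objects, build a semantic dissimilarity matrix over the same objects, and correlate the two. A minimal sketch on simulated data; the object count, voxel count, and embedding-based semantic matrix are illustrative assumptions.

```python
# RSA sketch: correlate the dissimilarity structure of simulated ITC response
# patterns with a semantic dissimilarity structure over the same objects.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_objects = 20
patterns = rng.standard_normal((n_objects, 300))    # one pattern per object
neural_rdm = pdist(patterns, metric="correlation")  # condensed neural RDM

# Stand-in semantic dissimilarities, e.g. from ratings or word embeddings.
semantic_rdm = pdist(rng.standard_normal((n_objects, 50)), metric="cosine")

rho, p = spearmanr(neural_rdm, semantic_rdm)        # second-order correlation
print(f"RSA: rho = {rho:.2f}, p = {p:.3f}")
```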
Journal of Cognitive Neuroscience (2014) 26 (1): 132–142.
Published: 01 January 2014
Abstract
How does the brain translate an internal representation of an object into a decision about the object's category? Recent studies have uncovered the structure of object representations in inferior temporal cortex (IT) using multivariate pattern analysis methods. These studies have shown that representations of individual object exemplars in IT occupy distinct locations in a high-dimensional activation space, with object exemplar representations clustering into distinguishable regions based on category (e.g., animate vs. inanimate objects). In this study, we hypothesized that a representational boundary between category representations in this activation space also constitutes a decision boundary for categorization. We show that behavioral RTs for categorizing objects are well described by our activation space hypothesis. Interpreted in terms of classical and contemporary models of decision-making, our results suggest that the process of settling on an internal representation of a stimulus is itself partially constitutive of decision-making for object categorization.
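A sketch of the central hypothesis at the exemplar level, assuming one activation pattern per trial: average each exemplar's distance from a linear category boundary and relate it to that exemplar's mean RT, with the prediction that farther-from-boundary exemplars are categorized faster. All data and RTs below are simulated stand-ins.

```python
# Exemplar-level sketch: exemplars whose patterns sit farther from the
# category boundary should be categorized faster. Simulated data throughout.
import numpy as np
from scipy.stats import spearmanr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
n_exemplars, n_reps, n_features = 24, 15, 100
labels = rng.integers(0, 2, n_exemplars)            # animate / inanimate
y = np.repeat(labels, n_reps)                       # one label per trial
X = rng.standard_normal((n_exemplars * n_reps, n_features)) + y[:, None]

clf = LinearDiscriminantAnalysis().fit(X, y)
dist = np.abs(clf.decision_function(X)) / np.linalg.norm(clf.coef_)
per_exemplar = dist.reshape(n_exemplars, n_reps).mean(axis=1)

rt = 0.7 - 0.04 * per_exemplar + 0.02 * rng.standard_normal(n_exemplars)
rho, p = spearmanr(per_exemplar, rt)                # predicted negative rho
print(f"rho = {rho:.2f}")
```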
Journal of Cognitive Neuroscience (2003) 15 (5): 704–717.
Published: 01 May 2003
Abstract
Object perception has been a subject of extensive fMRI studies in recent years. Yet the nature of the cortical representation of objects in the human brain remains controversial. Analyses of fMRI data have traditionally focused on the activation of individual voxels associated with presentation of various stimuli. The current analysis approaches functional imaging data as collective information about the stimulus. Linking activity in the brain to a stimulus is treated as a pattern-classification problem. Linear discriminant analysis was used to reanalyze a set of data originally published by Ishai et al. (2000), available from the fMRIDC (accession no. 2-20001113D). Results of the new analysis reveal that patterns of activity that distinguish one category of objects from other categories are largely independent of one another, in terms of both activity and spatial overlap. The information used to detect objects from phase-scrambled control stimuli is not essential in distinguishing one object category from another. Furthermore, performing an object-matching task during the scan significantly improved the ability to predict objects from controls, but had minimal effect on object classification, suggesting that the task-based attentional benefit was nonspecific to object categories.
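The framing here, treating "which stimulus produced this activity?" as a pattern-classification problem solved with linear discriminant analysis, can be sketched in a few lines on simulated voxel patterns. The category labels, class counts, and cross-validation scheme below are illustrative assumptions, not the reanalyzed dataset.

```python
# Pattern-classification sketch: linear discriminant analysis over simulated
# voxel patterns for several stimulus classes, scored with cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_per_class, n_voxels = 40, 200
categories = ["faces", "houses", "chairs", "scrambled"]  # hypothetical labels
X = np.vstack([0.3 * i + rng.standard_normal((n_per_class, n_voxels))
               for i in range(len(categories))])          # class-specific shift
y = np.repeat(categories, n_per_class)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"mean cross-validated accuracy: {acc.mean():.2f} (chance = 0.25)")
```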