Maryam Vaziri-Pashkam
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2024) 36 (12): 2584–2593.
Published: 01 December 2024
Abstract
Ungerleider and Mishkin, in their influential work that relied on detailed anatomical and ablation studies, suggested that visual information is processed along two distinct pathways: the dorsal “where” pathway, primarily responsible for spatial vision, and the ventral “what” pathway, dedicated to object vision. This strict division of labor has faced challenges in light of compelling evidence revealing robust shape and object selectivity within the putative “where” pathway. This article reviews evidence that supports the presence of shape selectivity in the dorsal pathway. A comparative examination of dorsal and ventral object representations in terms of invariance, task dependency, and representational content reveals similarities and differences between the two pathways. Both exhibit some level of tolerance to image transformations and are influenced by tasks, but responses in the dorsal pathway show weaker tolerance and stronger task modulations than those in the ventral pathway. Furthermore, an examination of their representational content highlights a divergence between the responses in the two pathways, suggesting that they are sensitive to distinct features of objects. Collectively, these findings suggest that two networks exist in the human brain for processing object shapes, one in the dorsal and another in the ventral visual cortex. These studies lay the foundation for future research aimed at revealing the precise roles the two “what” networks play in our ability to understand and interact with objects.
Journal of Cognitive Neuroscience (2022) 34 (12): 2406–2435.
Published: 01 November 2022
Abstract
Previous research shows that, within human occipito-temporal cortex (OTC), we can use a general linear mapping function to link visual object responses across nonidentity feature changes, including Euclidean features (e.g., position and size) and non-Euclidean features (e.g., image statistics and spatial frequency). Although the learned mapping can predict responses to objects not included in training, these predictions are better for categories included in training than for those not included. These findings demonstrate a near-orthogonal representation of object identity and nonidentity features throughout human OTC. Here, we extended these findings to examine the mapping across both Euclidean and non-Euclidean feature changes in human posterior parietal cortex (PPC), including functionally defined regions in the inferior and superior intraparietal sulcus. We additionally examined responses in five convolutional neural networks (CNNs) pretrained on object classification, as CNNs are considered the current best model of the primate ventral visual system. We separately compared results from PPC and CNNs with those from OTC. We found that a linear mapping function could successfully link object responses across different states of a nonidentity transformation in human PPC and CNNs for both Euclidean and non-Euclidean features. Overall, we found that object identity and nonidentity features are represented in a near-orthogonal, rather than completely orthogonal, manner in PPC and CNNs, just as they are in OTC. At the same time, some differences existed among OTC, PPC, and CNNs. These results demonstrate the similarities and differences in how visual object information may be represented across an identity-preserving image transformation in OTC, PPC, and CNNs.
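The core of the linear mapping analysis described above can be sketched as follows. This is a minimal illustration with simulated response patterns, not the study's actual data or pipeline; all sizes, variable names, and noise levels are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 20 "voxels", 50 training objects, 10 held-out objects
n_voxels, n_train, n_test = 20, 50, 10

# Simulated response patterns for the same objects in two states of a
# nonidentity transformation (e.g., two sizes): state_b is a linear
# transform of state_a plus measurement noise.
state_a = rng.standard_normal((n_train + n_test, n_voxels))
state_b = (state_a @ rng.standard_normal((n_voxels, n_voxels)) * 0.1
           + 0.05 * rng.standard_normal((n_train + n_test, n_voxels)))

# Fit a linear mapping W on training objects so that state_a @ W ≈ state_b
W, *_ = np.linalg.lstsq(state_a[:n_train], state_b[:n_train], rcond=None)

# Apply the learned mapping to held-out objects and score the predicted
# patterns against the measured ones with a correlation.
pred = state_a[n_train:] @ W
r = np.corrcoef(pred.ravel(), state_b[n_train:].ravel())[0, 1]
```

A high correlation for held-out objects indicates that a single linear function generalizes across object identity, which is the signature of near-orthogonal identity and nonidentity coding discussed above.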
Journal of Cognitive Neuroscience (2019) 31 (1): 49–63.
Published: 01 January 2019
Abstract
Primate ventral and dorsal visual pathways both contain visual object representations. Dorsal regions receive more input from the magnocellular system, whereas ventral regions receive input from both the magnocellular and parvocellular systems. Because of potential differences in the spatial sensitivities of the magnocellular and parvocellular systems, object representations in ventral and dorsal regions may differ in how they represent visual input at different spatial scales. To test this prediction, we asked observers to view blocks of images from six object categories, shown in full spectrum, high spatial frequency (SF), or low SF. We found robust object category decoding in all SF conditions, as well as SF decoding, in nearly all the early visual, ventral, and dorsal regions examined. Cross-SF decoding further revealed that object category representations in all regions exhibited substantial tolerance across the SF components. No difference between ventral and dorsal regions was found in their preference for the different SF components. Further comparisons revealed that, whereas differences in the SF component separated object category representations in early visual areas, such separation was much smaller in downstream ventral and dorsal regions. In those regions, variations among the object categories played a more significant role in shaping the visual representational structures. Our findings show that ventral and dorsal regions are similar in how they represent visual input at different spatial scales, and they argue against a dissociation of these regions based on differential sensitivity to different SFs.
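The cross-SF decoding logic described above can be sketched with simulated voxel patterns. This is an illustrative toy version only, assuming category-specific patterns shared across SF bands plus an SF-specific response shift; the sizes, noise levels, and classifier choice are not from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_cats, n_per_cat, n_voxels = 6, 20, 30  # illustrative sizes

# Category "templates" shared across SF conditions; the low-SF condition
# adds a common response shift to mimic an SF-specific signal component.
templates = rng.standard_normal((n_cats, n_voxels))
sf_shift = 0.5 * rng.standard_normal(n_voxels)

def simulate(shift):
    """Simulate noisy voxel patterns for all categories in one SF condition."""
    X = (np.repeat(templates, n_per_cat, axis=0)
         + shift
         + 0.5 * rng.standard_normal((n_cats * n_per_cat, n_voxels)))
    y = np.repeat(np.arange(n_cats), n_per_cat)
    return X, y

X_high, y_high = simulate(0.0)       # "high-SF" patterns
X_low, y_low = simulate(sf_shift)    # "low-SF" patterns

# Cross-SF decoding: train a category classifier on one SF band and test
# on the other. Accuracy above chance (1/6) indicates category information
# that is tolerant to the SF change.
clf = LogisticRegression(max_iter=1000).fit(X_high, y_high)
acc = clf.score(X_low, y_low)
```

In the study's framing, substantial cross-SF accuracy in a region is the evidence that its object category representations tolerate changes in SF content.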