Rosemary A. Cowell
1-4 of 4
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2017) 29 (6): 1075–1088.
Published: 01 June 2017
Figures: 6

Abstract
Damage to the medial temporal lobe (MTL) has long been known to impair declarative memory, and recent evidence suggests that it also impairs visual perception. A theory termed the representational-hierarchical account explains such impairments by assuming that MTL stores conjunctive representations of items and events, and that individuals with MTL damage must rely upon representations of simple visual features in posterior visual cortex, which are inadequate to support memory and perception under certain circumstances. One recent study of visual discrimination behavior revealed a surprising antiperceptual learning effect in MTL-damaged individuals: With exposure to a set of visual stimuli, discrimination performance worsened rather than improved [Barense, M. D., Groen, I. I. A., Lee, A. C. H., Yeung, L. K., Brady, S. M., Gregori, M., et al. Intact memory for irrelevant information impairs perception in amnesia. Neuron, 75, 157–167, 2012]. We extend the representational-hierarchical account to explain this paradox by assuming that difficult visual discriminations are performed by comparing the relative “representational tunedness”—or familiarity—of the to-be-discriminated items. Exposure to a set of highly similar stimuli entails repeated presentation of simple visual features, eventually rendering all feature representations maximally and, thus, equally familiar; hence, they are inutile for solving the task. Discrimination performance in patients with MTL lesions is therefore impaired by stimulus exposure. Because the unique conjunctions represented in MTL do not occur repeatedly, healthy individuals are shielded from this perceptual interference. We simulate this mechanism with a neural network previously used to explain recognition memory, thereby providing a model that accounts for both mnemonic and perceptual deficits caused by MTL damage with a unified architecture and mechanism.
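The exposure mechanism described in this abstract can be caricatured in a few lines of Python. This is a toy sketch under our own simplifying assumptions, not the authors' neural network: the function names, the saturation cap, and the learning rate are illustrative choices.

```python
def feature_familiarity(history, cap=1.0, rate=0.2):
    """Familiarity of each simple visual feature; repeated exposure
    drives every presented feature toward the same saturation cap."""
    fam = {}
    for stimulus in history:
        for feature in stimulus:
            fam[feature] = min(cap, fam.get(feature, 0.0) + rate)
    return fam

def discrimination_signal(a, b, fam):
    """Feature-level signal for telling items a and b apart: the
    difference in summed familiarity of their features."""
    total = lambda s: sum(fam.get(f, 0.0) for f in s)
    return abs(total(a) - total(b))

# Two highly similar stimuli sharing two of their three features.
target, foil = ("A", "B", "C"), ("A", "B", "D")

# After a single exposure to the target, its features are more familiar
# than the foil's unique feature, so the pair is discriminable.
early = feature_familiarity([target])

# After massed exposure to the similar set, every feature has recurred
# often enough to saturate: all familiarities are equal and the
# feature-level signal vanishes -- the antiperceptual learning effect.
late = feature_familiarity([target, foil] * 10)

print(discrimination_signal(target, foil, early))  # > 0: discriminable
print(discrimination_signal(target, foil, late))   # 0.0: signal abolished
```

In the intact model, the unique conjunction ("A", "B", "C") occurs only once, so a conjunction-level familiarity signal would survive the exposure phase; the collapse above afflicts only the feature level on which patients must rely.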
Journal of Cognitive Neuroscience (2013) 25 (11): 1777–1793.
Published: 01 November 2013
Figures: 7

Abstract
We trained a neurocomputational model on six categories of photographic images that were used in a previous fMRI study of object and face processing. Multivariate pattern analyses of the activations elicited in the object-encoding layer of the model yielded results consistent with two previous, contradictory fMRI studies. Findings from one of the studies [Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430, 2001] were interpreted as evidence for the object-form topography model. Findings from the other study [Spiridon, M., & Kanwisher, N. How distributed is visual category information in human occipito-temporal cortex? An fMRI study. Neuron, 35, 1157–1165, 2002] were interpreted as evidence for neural processing mechanisms in the fusiform face area that are specialized for faces. Because the model contains no special processing mechanism or specialized architecture for faces and yet it can reproduce the fMRI findings used to support the claim that there are specialized face-processing neurons, we argue that these fMRI results do not actually support that claim. Results from our neurocomputational model therefore constitute a cautionary tale for the interpretation of fMRI data.
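The style of multivariate pattern analysis at issue can be illustrated with a split-half correlation classifier of the kind introduced by Haxby et al. Below is a minimal sketch on synthetic data; the unit count, category count, and noise level are our assumptions, not properties of the model in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_half_classify(half1, half2):
    """A category counts as identified when its activation pattern in one
    half of the data correlates best with the same category's pattern in
    the other half."""
    hits = 0
    for i, pattern in enumerate(half1):
        corrs = [np.corrcoef(pattern, other)[0, 1] for other in half2]
        hits += int(np.argmax(corrs) == i)
    return hits / len(half1)

# Synthetic "object-encoding layer" activations: six categories over
# twenty units, each half adding independent measurement noise.
signal = rng.normal(size=(6, 20))
half1 = signal + 0.3 * rng.normal(size=signal.shape)
half2 = signal + 0.3 * rng.normal(size=signal.shape)

accuracy = split_half_classify(half1, half2)
print(accuracy)  # well above the 1/6 chance level
```

The same analysis applied to a model layer with no face-specialized machinery is what lets the authors argue that above-chance category decoding does not, by itself, demonstrate specialized processing.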
Journal of Cognitive Neuroscience (2012) 24 (9): 1807–1825.
Published: 01 September 2012
Figures: 7

Abstract
One strong claim made by the representational–hierarchical account of cortical function in the ventral visual stream (VVS) is that the VVS is a functional continuum: The basic computations carried out in service of a given cognitive function, such as recognition memory or visual discrimination, might be the same at all points along the VVS. Here, we use a single-layer computational model with a fixed learning mechanism and set of parameters to simulate a variety of cognitive phenomena from different parts of the functional continuum of the VVS: recognition memory, categorization of perceptually related stimuli, perceptual learning of highly similar stimuli, and development of retinotopy and orientation selectivity. The simulation results indicate—consistent with the representational–hierarchical view—that the simple existence of different levels of representational complexity in different parts of the VVS is sufficient to drive the emergence of distinct regions that appear to be specialized for solving a particular task, when a common neurocomputational learning algorithm is assumed across all regions. Thus, our data suggest that it is not necessary to invoke computational differences to understand how different cortical regions can appear to be specialized for what are considered to be very different psychological functions.
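One way to picture the "common algorithm, different representations" claim is to apply an identical winner-take-all Hebbian update to two input sets of differing representational complexity. This is a toy sketch: the learning rule, unit counts, and stimulus codes are our assumptions, not the paper's single-layer model.

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_layer(inputs, n_units=4, lr=0.05, epochs=200):
    """One fixed learning mechanism: the winning unit moves toward the
    current input; nothing in the rule refers to what the inputs mean."""
    w = rng.random((n_units, inputs.shape[1]))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in inputs:
            winner = np.argmax(w @ x)
            w[winner] += lr * (x - w[winner])
            w[winner] /= np.linalg.norm(w[winner])
    return w

# "Posterior" region: simple features (one active element per stimulus).
features = np.eye(4)
# "Anterior" region: conjunctive stimuli built from those same features.
conjunctions = np.array([[1, 1, 0, 0], [0, 0, 1, 1],
                         [1, 0, 1, 0], [0, 1, 0, 1]], dtype=float)

w_posterior = competitive_layer(features)
w_anterior = competitive_layer(conjunctions)
# Each layer's learned weights reflect the structure of its own inputs,
# so the two "regions" can look specialized despite the identical rule.
```

The design choice mirrors the abstract's argument: nothing in `competitive_layer` differs between the two calls, so any apparent regional specialization must be inherited from the inputs' representational complexity.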
Journal of Cognitive Neuroscience (2010) 22 (11): 2460–2479.
Published: 01 November 2010
Figures: 12

Abstract
We examined the organization and function of the ventral object processing pathway. The prevailing theoretical approach in this field holds that the ventral object processing stream has a modular organization, in which visual perception is carried out in posterior regions and visual memory is carried out, independently, in the anterior temporal lobe. In contrast, recent work has argued against this modular framework, favoring instead a continuous, hierarchical account of cognitive processing in these regions. We join the latter group and illustrate our view with simulations from a computational model that extends the perceptual-mnemonic feature-conjunction model of visual discrimination proposed by Bussey and Saksida [Bussey, T. J., & Saksida, L. M. The organization of visual object representations: A connectionist model of effects of lesions in perirhinal cortex. European Journal of Neuroscience, 15, 355–364, 2002]. We use the extended model to revisit early data from Iwai and Mishkin [Iwai, E., & Mishkin, M. Two visual foci in the temporal lobe of monkeys. In N. Yoshii & N. Buchwald (Eds.), Neurophysiological basis of learning and behavior (pp. 1–11). Japan: Osaka University Press, 1968]; this seminal study was interpreted as evidence for the modularity of visual perception and visual memory. The model accounts for a double dissociation in monkeys' visual discrimination performance following lesions to different regions of the ventral visual stream. This double dissociation is frequently cited as evidence for separate systems for perception and memory. However, the model provides a parsimonious, mechanistic, single-system account of the double dissociation data. We propose that the effects of lesions in ventral visual stream on visual discrimination are due to compromised representations within a hierarchical representational continuum rather than impairment in a specific type of learning, memory, or perception. We argue that consideration of the nature of stimulus representations and their processing in cortex is a more fruitful approach than attempting to map cognition onto functional modules.
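The feature-conjunction logic that the extended model inherits from Bussey and Saksida can be caricatured in a few lines. This is a deliberately minimal sketch under our own simplifications; the layer names and stimulus codes are illustrative, not the published network.

```python
def can_discriminate(a, b, intact_layers):
    """A pair of stimuli is discriminable if any intact layer assigns
    them different representations."""
    represent = {
        "posterior": lambda s: set(s),  # unordered bag of simple features
        "anterior": lambda s: s,        # configural whole-item conjunction
    }
    return any(represent[l](a) != represent[l](b) for l in intact_layers)

easy = (("A", "B"), ("C", "D"))  # no shared features
hard = (("A", "B"), ("B", "A"))  # same features, different configuration

intact = ["posterior", "anterior"]
lesioned = ["posterior"]  # anterior (conjunctive) representations removed

print(can_discriminate(*easy, intact))    # True
print(can_discriminate(*hard, intact))    # True: conjunctions differ
print(can_discriminate(*easy, lesioned))  # True: simple features suffice
print(can_discriminate(*hard, lesioned))  # False: features alone are ambiguous
```

The complementary half of the double dissociation, in which posterior lesions impair discriminations that depend on intact simple-feature representations, follows the same logic with the roles of the layers exchanged.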