Garrison W. Cottrell
Journal of Cognitive Neuroscience (2013) 25 (11): 1777–1793.
Published: 01 November 2013
Abstract
We trained a neurocomputational model on six categories of photographic images that were used in a previous fMRI study of object and face processing. Multivariate pattern analyses of the activations elicited in the object-encoding layer of the model yielded results consistent with two previous, contradictory fMRI studies. Findings from one of the studies [Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430, 2001] were interpreted as evidence for the object-form topography model. Findings from the other study [Spiridon, M., & Kanwisher, N. How distributed is visual category information in human occipito-temporal cortex? An fMRI study. Neuron, 35, 1157–1165, 2002] were interpreted as evidence for neural processing mechanisms in the fusiform face area that are specialized for faces. Because the model contains no special processing mechanism or specialized architecture for faces, yet reproduces the fMRI findings used to support the claim that there are specialized face-processing neurons, we argue that these fMRI results do not actually support that claim. Results from our neurocomputational model therefore constitute a cautionary tale for the interpretation of fMRI data.
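An illustrative aside on the method: the analysis of Haxby et al. is a split-half correlation scheme, which this study applies to the model's object-encoding layer rather than to voxels. Here is a minimal sketch under our own assumptions; the function name, array shapes, and data are hypothetical, not the paper's code:

```python
# Hedged sketch of Haxby-style split-half correlation MVPA, applied to
# hidden-unit activations instead of voxels. All names and data are
# hypothetical; parameters are illustrative.
import numpy as np

def split_half_mvpa(acts, labels):
    """acts: (n_trials, n_units) activations; labels: (n_trials,) category ids.
    Returns the fraction of categories identified by between-half correlation."""
    cats = np.unique(labels)
    half1, half2 = [], []
    for c in cats:
        idx = np.flatnonzero(labels == c)
        mid = len(idx) // 2
        half1.append(acts[idx[:mid]].mean(axis=0))  # pattern from one half
        half2.append(acts[idx[mid:]].mean(axis=0))  # pattern from the other
    half1, half2 = np.array(half1), np.array(half2)
    # Pearson correlation between every half-1 and half-2 category pattern.
    z1 = (half1 - half1.mean(1, keepdims=True)) / half1.std(1, keepdims=True)
    z2 = (half2 - half2.mean(1, keepdims=True)) / half2.std(1, keepdims=True)
    corr = (z1 @ z2.T) / half1.shape[1]
    # A category counts as "identified" when its own pattern is its best match.
    return float(np.mean(corr.argmax(axis=1) == np.arange(len(cats))))

# Random data for six categories, as in the study; expect ~chance (1/6).
acts = np.random.randn(120, 50)
labels = np.repeat(np.arange(6), 20)
print(split_half_mvpa(acts, labels))
```

On data with real category structure, within-category correlations exceed between-category ones; that signature is what both fMRI studies interpreted, in opposite directions.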
Journal of Cognitive Neuroscience (2013) 25 (7): 998–1007.
Published: 01 July 2013
Abstract
Hemispheric asymmetry in the processing of local and global features has been argued to originate from differences in frequency filtering in the two hemispheres, with little neurophysiological support. Here we test the hypothesis that this asymmetry arises at an encoding stage beyond the sensory level, due to asymmetries in anatomical connections within each hemisphere. We use two simple encoding networks with different connection structures as models of differential encoding in the two hemispheres, based on a hypothesized generalization of neuroanatomical evidence from the auditory modality to the visual modality: the connections between columns are more distal in the language areas of the left hemisphere and more local in the homotopic regions of the right hemisphere. We show that both processing differences and differential frequency filtering can arise naturally in this neurocomputational model with neuroanatomically inspired differences in connection structure between the two model hemispheres, suggesting that hemispheric asymmetry in the processing of local and global features may be due to asymmetry in connection structure rather than in frequency tuning.
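To make the connection-structure idea concrete, here is a minimal sketch assuming a 1-D ring of cortical columns and Gaussian connection profiles; the sizes, ranges, and the exact local/distal parameters are our illustrative assumptions, not the paper's model:

```python
# Two lateral-connection structures over a ring of columns: "local" weights
# peak at distance 0; "distal" weights peak some columns away. All
# parameters are illustrative assumptions.
import numpy as np

n = 64
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
d = np.minimum(d, n - d)  # circular distance avoids edge effects

def lateral_weights(mu, sigma):
    """Connection strength peaks at columns mu units away."""
    w = np.exp(-((d - mu) ** 2) / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

W_local = lateral_weights(mu=0, sigma=2.0)   # right-hemisphere analogue
W_distal = lateral_weights(mu=8, sigma=2.0)  # left-hemisphere analogue

# Each W is circulant, so its spatial-frequency response is the FFT of one
# row: the two connection structures act as different frequency filters
# even though neither was built as a filter.
for name, W in [("local", W_local), ("distal", W_distal)]:
    resp = np.abs(np.fft.rfft(W[0]))
    print(name, "low-freq vs high-freq response:",
          resp[1:4].mean().round(3), resp[-4:].mean().round(3))
```

The point of the sketch is only that a frequency bias falls out of connection structure for free, which is the paper's central claim; which structure maps to which frequency band is the modeling question the paper addresses.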
Journal of Cognitive Neuroscience (2008) 20 (12): 2298–2307.
Published: 01 December 2008
Abstract
Anatomical evidence shows that our visual field is initially split along the vertical midline and contralaterally projected to different hemispheres. It remains unclear at which processing stage the split information converges. In the current study, we applied the Double Filtering by Frequency (DFF) theory (Ivry & Robertson, 1998) to modeling the visual field split; the theory assumes a right-hemisphere/low-frequency bias. We compared three cognitive architectures with different timings of convergence and examined how plausibly each accounts for the left-side bias effect in face perception observed in human data. We show that the early convergence model fails to produce the left-side bias effect. The modeling hence suggests that convergence takes place at an intermediate or late stage, at least after information has been extracted and encoded separately in the two hemispheres, a point often overlooked in computational modeling of cognitive processes. Comparative anatomical data suggest that this separate encoding process, which produces differential frequency biases in the two hemispheres, may operate from V1 up to the level of areas V3a and V4v, with convergence occurring after the lateral occipital region. The left-side bias effect also emerged in the model's Greeble recognition, yielding the testable prediction that the effect may also be observed in (expertise-level) object recognition.
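A minimal sketch of the split-plus-filtering pipeline and the early vs. late convergence contrast; the filter widths, image size, and the stand-in encode step are our assumptions for illustration, not the paper's implementation:

```python
# Split the visual field at the midline, project contralaterally, apply
# DFF-style frequency biases, then converge either before or after a
# separate per-hemisphere encoding. All parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def dff_split(img):
    """Left visual field -> right hemisphere (low-frequency bias);
    right visual field -> left hemisphere (high-frequency bias)."""
    h, w = img.shape
    lvf, rvf = img[:, : w // 2], img[:, w // 2 :]
    rh = gaussian_filter(lvf, sigma=2.0)   # RH: stronger low-pass
    lh = gaussian_filter(rvf, sigma=0.5)   # LH: mild low-pass
    return lh, rh

def encode(x, dim=40, seed=0):
    """Stand-in for a learned encoding (e.g., a PCA or autoencoder
    bottleneck); here just a fixed random projection."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((x.size, dim)) / np.sqrt(x.size)
    return x.ravel() @ proj

img = np.random.rand(64, 64)
lh, rh = dff_split(img)
early = encode(np.hstack([lh, rh]))               # converge before encoding
late = np.concatenate([encode(lh), encode(rh)])   # encode separately, then converge
print(early.shape, late.shape)
```

The architectural contrast under comparison is exactly the last two lines: whether a single encoder sees the recombined field, or each hemisphere encodes its filtered half before the representations meet.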
Journal of Cognitive Neuroscience (2002) 14 (8): 1158–1173.
Published: 15 November 2002
Abstract
There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of “categorical perception.” In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, “surprise” expressions lie between “happiness” and “fear” expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.
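To see how graded classifier outputs can nonetheless produce categorical-perception-like data, here is a minimal sketch along a hypothetical happy-to-fear morph continuum; the logits and the uncertainty-to-reaction-time mapping are illustrative assumptions of ours, not the paper's model:

```python
# A two-class softmax readout along a morph continuum: output probabilities
# change sharply near the category boundary, and output uncertainty gives a
# reaction-time proxy. All numbers are illustrative.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for m in np.linspace(0.0, 1.0, 11):
    # Hypothetical trained logits that cross at the boundary (morph = 0.5).
    p = softmax(np.array([8 * (0.5 - m), 8 * (m - 0.5)]))
    rt_proxy = 1 - abs(p[0] - p[1])  # more uncertainty ~ slower response
    print(f"morph={m:.1f}  P(fear)={p[1]:.2f}  RT proxy={rt_proxy:.2f}")

# Discrimination of neighboring morphs can likewise be read off as the
# distance between their output vectors, which peaks near the boundary.
```

The representation varies continuously, yet the behavioral measures derived from it show sharp boundaries and boundary-enhanced discrimination, which is how one model can speak to both theories at once.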
Journal of Cognitive Neuroscience (1992) 4 (3): 289–298.
Published: 01 July 1992
Abstract
Four models were compared on repeated explicit memory (fragment cued recall) or implicit memory (fragment completion) tasks (Hayman & Tulving, 1989a). In the experiments, when given explicit instructions to complete fragments with words from a just-studied list (the explicit condition), people showed a dependence relation between the first and the second fragment targeted at the same word. However, when subjects were simply told to complete the (primed) fragments (the implicit condition), stochastic independence between the two fragments resulted. Three distributed models (CHARM, a competitive-learning model, and a back-propagation model) produced dependence, as in the explicit memory test. In contrast, a separate-trace model, MINERVA, showed independence, as in the implicit task. It was concluded that explicit memory is based on a highly interactive network that glues or binds together the features within the items, as do the first three models. The binding accounts for the dependence relation. Implicit memory appears to be based, instead, on separate, noninteracting traces.
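The dependence/independence contrast comes down to a 2x2 contingency table over success and failure on the two fragments of each word. A minimal sketch using Yule's Q, a standard association measure in this literature; the counts below are simulated for illustration, not data from the paper:

```python
# Yule's Q over a 2x2 contingency table of fragment-completion outcomes:
# values near 1 indicate dependence, near 0 stochastic independence.
# Counts here are invented for illustration.
import numpy as np

def yules_q(table):
    """table = [[both, frag1_only], [frag2_only, neither]]."""
    (a, b), (c, d) = table
    return (a * d - b * c) / (a * d + b * c)

dependent = np.array([[40, 10], [10, 40]])    # explicit-condition pattern
independent = np.array([[25, 25], [25, 25]])  # implicit-condition pattern
print(yules_q(dependent), yules_q(independent))  # ~0.88 vs 0.0
```

Under independence the joint success count equals the product of the marginals, driving Q to zero; the binding in the three distributed models inflates the both/neither cells and drives Q toward one.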