Randall C. O'Reilly
1–6 of 6 results
Journal of Cognitive Neuroscience (2021) 33 (6): 1158–1196.
Published: 01 May 2021
Abstract
How do humans learn from raw sensory experience? Throughout life, but most obviously in infancy, we learn without explicit instruction. We propose a detailed biological mechanism for the widely embraced idea that learning is driven by the differences between predictions and actual outcomes (i.e., predictive error-driven learning). Specifically, numerous weak projections into the pulvinar nucleus of the thalamus generate top–down predictions, and sparse driver inputs from lower areas supply the actual outcome, originating in Layer 5 intrinsic bursting neurons. Thus, the outcome representation is only briefly activated, roughly every 100 msec (i.e., 10 Hz, alpha), resulting in a temporal difference error signal, which drives local synaptic changes throughout the neocortex. This results in a biologically plausible form of error backpropagation learning. We implemented these mechanisms in a large-scale model of the visual system and found that the simulated inferotemporal pathway learns to systematically categorize 3-D objects according to invariant shape properties, based solely on predictive learning from raw visual inputs. These categories match human judgments on the same stimuli and are consistent with neural representations in inferotemporal cortex in primates.
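As a concrete illustration of the proposed two-phase cycle, the following is a minimal sketch rather than the authors' published model: rate-coded units, a toy environment whose next frame is simply a shifted copy of the current one, and a plain delta-rule update standing in for the local synaptic changes described above. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                # units in the sensory / pulvinar-like layers
W = rng.normal(0.0, 0.1, (n, n))      # top-down prediction weights (illustrative)
lr = 0.05

frame = rng.random(n)                 # initial sensory frame
for step in range(2000):
    # Prediction ("minus") phase: top-down projections generate the expected next frame.
    prediction = 1.0 / (1.0 + np.exp(-(W @ frame)))
    # Outcome ("plus") phase: the actual next frame arrives briefly via driver inputs
    # (the toy environment just shifts the pattern by one unit each ~100 msec cycle).
    next_frame = np.roll(frame, 1)
    # The temporal difference between the two phases is the error signal that
    # drives a purely local, delta-rule weight change.
    error = next_frame - prediction
    W += lr * np.outer(error, frame)
    frame = next_frame
```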
Journal of Cognitive Neuroscience (2013) 25 (6): 843–851.
Published: 01 June 2013
Abstract
We can learn from the wisdom of others to maximize success. However, it is unclear how humans take advice to flexibly adapt behavior. On the basis of data from neuroanatomy, neurophysiology, and neuroimaging, a biologically plausible model is developed to illustrate the neural mechanisms of learning from instructions. The model consists of two complementary learning pathways. The slow-learning parietal pathway carries out simple or habitual stimulus–response (S-R) mappings, whereas the fast-learning hippocampal pathway implements novel S-R rules. Specifically, the hippocampus can rapidly encode arbitrary S-R associations, and stimulus-cued responses are later recalled into the basal ganglia-gated pFC to bias response selection in the premotor and motor cortices. The interactions between the two model learning pathways explain how instructions can override habits and how automaticity can be achieved through motor consolidation.
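The two-pathway idea can be sketched with a toy discrete stimulus-response task; the dictionary-based "hippocampal" store, the incrementally trained "habit" weights, and the fixed pFC bias term below are illustrative stand-ins, not the published model.

```python
import numpy as np

n_stim, n_resp = 5, 5
habit_W = np.zeros((n_resp, n_stim))   # slow-learning parietal (habit) pathway
hippocampus = {}                       # fast one-shot store of instructed S-R rules
slow_lr, bias_strength = 0.05, 2.0

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def instruct(stimulus, response):
    """One-shot encoding of an arbitrary instructed S-R association."""
    hippocampus[stimulus] = response

def act(stimulus):
    x = one_hot(stimulus, n_stim)
    drive = habit_W @ x                          # habitual response tendencies
    if stimulus in hippocampus:                  # recalled rule biases pFC / premotor
        drive = drive + bias_strength * one_hot(hippocampus[stimulus], n_resp)
    response = int(np.argmax(drive))
    # Slow incremental consolidation: repeated use gradually turns the rule into a habit.
    habit_W[response, stimulus] += slow_lr
    return response

instruct(stimulus=2, response=4)       # verbal instruction: "when you see 2, press 4"
print([act(2) for _ in range(3)])      # the instruction overrides any weak habit at once
```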
Journal of Cognitive Neuroscience (2012) 24 (2): 351–366.
Published: 01 February 2012
Abstract
Appetitive goal-directed behavior can be associated with a cue-triggered expectancy that it will lead to a particular reward, a process thought to depend on the OFC and basolateral amygdala complex. We developed a biologically informed neural network model of this system to investigate the separable and complementary roles of these areas as the main components of a flexible expectancy system. These areas of interest are part of a neural network with additional subcortical areas, including the central nucleus of the amygdala and the ventral (limbic) and dorsomedial (associative) striatum. Our simulations are consistent with the view that the amygdala maintains Pavlovian associations through incremental updating of synaptic strength and that the OFC supports flexibility by maintaining an activation-based working memory of the recent reward history. Our model provides a mechanistic explanation for electrophysiological evidence that cue-related firing in OFC neurons is nonselective early after a contingency change and for why this nonselective firing is critical for promoting plasticity in the amygdala. This ambiguous activation results from the simultaneous maintenance of recent outcomes and obsolete Pavlovian contingencies in working memory. Furthermore, at the beginning of reversal, the OFC is critical for supporting responses that are no longer inappropriate. This result is inconsistent with an exclusively inhibitory account of OFC function.
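A minimal sketch of the division of labor described above, assuming a toy reversal task: a Rescorla-Wagner-style increment stands in for the amygdala's slow synaptic updating, and a short list of recent outcomes stands in for the OFC's activation-based working memory. The weighting of the two contributions and all names are illustrative, not taken from the published model.

```python
import numpy as np

# Toy reversal task: cue A is rewarded first, then the contingency flips to cue B.
cues, alpha = ["A", "B"], 0.1
amygdala_V = {c: 0.0 for c in cues}   # slowly updated Pavlovian associations
ofc_memory = []                       # activation-based memory of recent outcomes

def trial(cue, reward):
    # Expectancy combines the slow amygdala weight with the recent reward history
    # that the OFC-like store maintains for this cue (illustrative 50/50 weighting).
    recent = [r for c, r in ofc_memory if c == cue]
    ofc_bias = np.mean(recent) if recent else 0.0
    expectancy = 0.5 * amygdala_V[cue] + 0.5 * ofc_bias
    # Incremental (Rescorla-Wagner-style) update of the amygdala association.
    amygdala_V[cue] += alpha * (reward - amygdala_V[cue])
    # OFC working memory keeps only the last few outcomes, so it adapts quickly.
    ofc_memory.append((cue, reward))
    del ofc_memory[:-5]
    return expectancy

for t in range(40):                    # acquisition: A rewarded, B not
    trial("A", 1.0)
    trial("B", 0.0)
for t in range(10):                    # reversal: the OFC store adapts fast, amygdala lags
    print(round(trial("A", 0.0), 2), round(trial("B", 1.0), 2))
```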
Journal of Cognitive Neuroscience (2006) 18 (1): 22–32.
Published: 01 January 2006
Abstract
We address the connection between conceptual knowledge and cognitive control using a neural network model. This model extends a widely held theory of cognitive control [Cohen, J. D., Dunbar, K., & McClelland, J. L. On the control of automatic processes: A parallel distributed processing model of the Stroop effect. Psychological Review, 97, 332–361, 1990] so that it can explain new empirical findings. Leveraging other computational modeling work, we hypothesize that representations used for task control are recruited from preexisting representations for categories, such as the concept of color relevant to the Stroop task we model here. This hypothesis allows the model to account for otherwise puzzling fMRI results, such as increased activity in brain regions processing to-be-ignored information. In addition, biologically motivated changes in the model's pattern of connectivity show how global competition can arise when inhibition is strictly local, as it seems to be in the cortex. We also discuss the potential for this theory to unify models of task control with other forms of attention.
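For readers unfamiliar with the Cohen et al. (1990) framework the model extends, here is a minimal sketch of its core competition dynamic: two pathways of unequal strength, a task-demand bias on the weaker one, and lateral inhibition confined to the response layer. The weights, names, and softmax readout are illustrative and greatly simplified relative to the extended model described above.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Pathway strengths: word reading is more practiced than color naming (illustrative values).
W_word, W_color, task_bias, inhibition = 2.0, 1.0, 1.5, 0.8

def stroop_trial(word, ink, task):
    """Return response probabilities over ('red', 'green')."""
    resp = np.zeros(2)                       # response units for 'red' and 'green'
    resp[word] += W_word                     # strong, automatic word-reading pathway
    resp[ink] += W_color                     # weaker color-naming pathway
    if task == "name_ink":                   # task-demand unit biases the relevant pathway
        resp[ink] += task_bias
    # Local lateral inhibition at the response layer produces the competition.
    resp = resp - inhibition * resp[::-1]
    return softmax(resp)

RED, GREEN = 0, 1
print("congruent  :", stroop_trial(RED, RED, "name_ink"))
print("incongruent:", stroop_trial(GREEN, RED, "name_ink"))
```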
Journal of Cognitive Neuroscience (2001) 13 (1): 44–58.
Published: 01 January 2001
Abstract
Visual object representation was studied in free-ranging rhesus monkeys. To facilitate comparison with humans, and to provide a new tool for neurophysiologists, we used a looking time procedure originally developed for studies of human infants. Monkeys' looking times were measured to displays with one or two distinct objects, separated or together, stationary or moving. Results indicate that rhesus monkeys used featural information to parse the displays into distinct objects, and they found events in which distinct objects moved together more novel or unnatural than events in which distinct objects moved separately. These findings show both commonalities and contrasts with those obtained from human infants. We discuss their implications for the development and neural mechanisms of higher-level vision.
Journal of Cognitive Neuroscience (1990) 2 (2): 141–155.
Published: 01 April 1990
Abstract
A subset of visually sensitive neurons in the parietal lobe apparently can encode the locations of stimuli, whereas visually sensitive neurons in the inferotemporal cortex (area IT) cannot. This finding is puzzling because both sorts of neurons have large receptive fields, and yet location can be encoded in one case, but not in the other. The experiments reported here investigated the hypothesis that a crucial difference between the IT and parietal neurons is the spatial distribution of their response profiles. In particular, IT neurons typically respond maximally when stimuli are presented at the fovea, whereas parietal neurons do not. We found that a parallel-distributed-processing network could map a point in an array to a coordinate representation more easily when a greater proportion of its input units had response peaks off the center of the input array. Furthermore, this result did not depend on potentially implausible assumptions about the regularity of the overlap in receptive fields or the homogeneity of the response profiles of different units. Finally, the internal representations formed within the network had receptive fields resembling those found in area 7a of the parietal lobe.
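A minimal sketch of the computational comparison described above, assuming Gaussian receptive fields over a one-dimensional array and a simple delta-rule readout of location in place of the original backpropagation network with hidden units; all parameter values and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, sigma, lr = 30, 0.15, 0.1

def population_response(peaks, location):
    """Gaussian receptive fields: activity of each unit for a stimulus at `location`."""
    return np.exp(-((peaks - location) ** 2) / (2 * sigma ** 2))

def coordinate_error(peaks, n_train=5000):
    """Train a delta-rule readout of stimulus location and return its mean test error."""
    w, b = np.zeros(n_units), 0.0
    for _ in range(n_train):
        loc = rng.random()                       # stimulus location in [0, 1)
        x = population_response(peaks, loc)
        err = loc - (w @ x + b)
        w += lr * err * x
        b += lr * err
    test = rng.random(1000)
    preds = np.array([w @ population_response(peaks, t) + b for t in test])
    return np.mean(np.abs(preds - test))

foveal_peaks = 0.5 + 0.05 * rng.standard_normal(n_units)   # peaks clustered at the center
spread_peaks = rng.random(n_units)                          # peaks spread off-center
print("foveal-like peaks:", coordinate_error(foveal_peaks))
print("spread peaks     :", coordinate_error(spread_peaks))
```

Running the sketch shows lower coordinate error when receptive-field peaks are spread away from the center of the array, the pattern the abstract attributes to parietal-like units.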