William H. Alexander
Journal Articles
Publisher: Journals Gateway
Neural Computation (2015) 27 (11): 2354–2410.
Published: 01 November 2015
Abstract
Anterior cingulate and dorsolateral prefrontal cortex (ACC and dlPFC, respectively) are core components of the cognitive control network. Activation of these regions is routinely observed in tasks that involve monitoring the external environment and maintaining information in order to generate appropriate responses. Despite the ubiquity of studies reporting coactivation of these two regions, a consensus on how they interact to support cognitive control has yet to emerge. In this letter, we present a new hypothesis and computational model of ACC and dlPFC. The error representation hypothesis states that multidimensional error signals generated by ACC in response to surprising outcomes are used to train representations of expected error in dlPFC, which are then associated with relevant task stimuli. Error representations maintained in dlPFC are in turn used to modulate predictive activity in ACC in order to generate better estimates of the likely outcomes of actions. We formalize the error representation hypothesis in a new computational model based on our previous model of ACC. The hierarchical error representation (HER) model of ACC/dlPFC suggests a mechanism by which hierarchically organized layers within ACC and dlPFC interact in order to solve sophisticated cognitive tasks. In a series of simulations, we demonstrate the ability of the HER model to autonomously learn to perform structured tasks in a manner comparable to human performance, and we show that the HER model outperforms current deep learning networks by an order of magnitude.
Includes: Supplementary data
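The abstract describes a loop in which outcome-driven error signals train error representations that are bound to task stimuli and then fed back to modulate predictions. The following is a minimal two-level sketch of that loop, not the published HER model: the reduction to two levels, the linear read-outs, the learning rates, the separate "context" cue at the upper level, and the additive form of the top-down modulation are all assumptions introduced here for illustration.

```python
import numpy as np

n_stim, n_out = 4, 2          # stimulus and outcome dimensionality (assumed)
alpha, beta = 0.1, 0.1        # learning rates for the two levels (assumed)

W0 = np.zeros((n_out, n_stim))            # lower level: ACC-like outcome predictions
W1 = np.zeros((n_out * n_stim, n_stim))   # upper level: dlPFC-like error-representation weights

def step(s, o, context):
    """One trial of the sketched prediction/error loop.

    s       -- current stimulus vector (one-hot, assumed)
    o       -- observed outcome vector
    context -- higher-order cue maintained by the upper level (assumed)
    """
    global W0, W1
    # Top-down: the upper level predicts an error representation for this
    # context and uses it to modulate the lower-level prediction
    # (additive combination assumed for brevity).
    r1 = W1 @ context
    modulation = r1.reshape(n_out, n_stim) @ s
    p0 = W0 @ s + modulation               # modulated outcome prediction
    # Bottom-up: the prediction error at the lower level ...
    e0 = o - p0
    # ... trains the lower-level weights,
    W0 += alpha * np.outer(e0, s)
    # ... and, bound to the eliciting stimulus, serves as the training
    # signal for the upper level's error representation.
    e1 = np.outer(e0, s).ravel()
    W1 += beta * np.outer(e1 - r1, context)
    return p0, e0

# Toy usage (assumed data): stimulus 0 in context 1 is followed by outcome 1.
s, c, o = np.eye(n_stim)[0], np.eye(n_stim)[1], np.eye(n_out)[1]
for _ in range(50):
    p0, e0 = step(s, o, c)
```

The letter itself specifies how many hierarchical levels interact, how working-memory gating selects what each level maintains, and the exact rule by which top-down error representations modulate lower-level predictions; the additive, two-level form above is only meant to make the abstract's training loop concrete.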
Journal Articles
Publisher: Journals Gateway
Neural Computation (2010) 22 (6): 1511–1527.
Published: 01 June 2010
Abstract
Hyperbolic discounting of future outcomes is widely observed to underlie choice behavior in animals. Additionally, recent studies (Kobayashi & Schultz, 2008) have reported that hyperbolic discounting is observed even in neural systems underlying choice. However, the most prevalent models of temporal discounting, such as temporal difference learning, assume that future outcomes are discounted exponentially. Exponential discounting has been preferred largely because it can be expressed recursively, whereas hyperbolic discounting has heretofore been thought not to have a recursive definition. In this letter, we define a learning algorithm, hyperbolically discounted temporal difference (HDTD) learning, which constitutes a recursive formulation of the hyperbolic model.
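The recursive property the abstract alludes to can be illustrated with the identity f(d) = f(d-1) / (1 + k·f(d-1)), f(0) = 1, which builds the hyperbolic discount 1/(1 + kd) one step at a time. The sketch below verifies that identity and then uses it in a tabular TD-style update; the value-dependent discount V'/(1 + kV'), the learning rate, and the toy chain task are assumptions made here for illustration and are not taken from the letter.

```python
import numpy as np

def hyperbolic(d, k):
    """Closed-form hyperbolic discount: 1 / (1 + k*d)."""
    return 1.0 / (1.0 + k * d)

def hyperbolic_recursive(d, k):
    """Same discount built one step at a time via f(d) = f(d-1) / (1 + k*f(d-1))."""
    f = 1.0
    for _ in range(d):
        f = f / (1.0 + k * f)
    return f

# The two agree, which is the sense in which the hyperbolic discount is recursive.
assert abs(hyperbolic(10, 0.2) - hyperbolic_recursive(10, 0.2)) < 1e-12

def hdtd_like_update(V, s, s_next, r, k=0.2, lr=0.1):
    """TD-style update with a value-dependent hyperbolic discount (assumed form).

    The successor value is discounted as V[s_next] / (1 + k * V[s_next]),
    mirroring the one-step recursion above for a unit terminal reward.
    """
    target = r + V[s_next] / (1.0 + k * V[s_next])
    V[s] += lr * (target - V[s])
    return V

# Toy task (assumed): a 6-state chain with a unit reward on leaving the last state.
V = np.zeros(7)
for _ in range(2000):
    for s in range(6):
        r = 1.0 if s == 5 else 0.0
        V = hdtd_like_update(V, s, s + 1, r)
# V[s] approaches 1 / (1 + k * (5 - s)), i.e., hyperbolic in the delay to reward.
```

For a unit reward this value-dependent discount reproduces the closed-form hyperbolic curve exactly; the letter's HDTD algorithm should be consulted for the general formulation and its treatment of reward magnitude.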