Marc W. Howard: 1-5 of 5 journal articles
Predicting the Future With a Scale-Invariant Temporal Memory for the Past
Neural Computation (2022) 34 (3): 642–685. Published: 17 February 2022
Abstract
In recent years, it has become clear that the brain maintains a temporal memory of recent events stretching far into the past. This letter presents a neurally inspired algorithm to use a scale-invariant temporal representation of the past to predict a scale-invariant future. The result is a scale-invariant estimate of future events as a function of the time at which they are expected to occur. The algorithm is time-local, with credit assigned to the present event by observing how it affects the prediction of the future. To illustrate the potential utility of this approach, we test the model on simultaneous renewal processes with different timescales. The algorithm scales well on these problems despite the fact that the number of states needed to describe them as a Markov process grows exponentially.
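A minimal sketch of the flavor of this approach, not the letter's algorithm: a bank of leaky integrators with geometrically spaced decay rates serves as a memory of the past with no preferred timescale, and a time-local delta rule learns to predict the upcoming input from the current memory state. The memory, learning rule, toy periodic stand-ins for the renewal processes, and all parameter values below are illustrative assumptions.

```python
# Illustrative sketch only: scale-invariant memory from geometrically spaced
# leaky integrators, plus a normalized delta rule that predicts the next
# input from the current memory state. Not a reimplementation of the letter.
import numpy as np

n_rates, n_events = 32, 2
decay_rates = np.geomspace(0.005, 0.5, n_rates)   # geometric spacing: no preferred scale
memory = np.zeros((n_rates, n_events))            # one temporal trace per event type
W = np.zeros((n_events, n_rates * n_events))      # prediction weights
dt = 1.0
prev_state = memory.ravel().copy()

for t in range(5000):
    # Two toy periodic processes with very different periods.
    x = np.array([float(t % 10 == 0), float(t % 37 == 0)])

    # Time-local learning: credit the previous memory state according to how
    # well it predicted the event that actually arrived (normalized LMS).
    error = x - W @ prev_state
    W += 0.1 / (prev_state @ prev_state + 1.0) * np.outer(error, prev_state)

    # Leaky integration of the current input, one decay rate per row.
    memory += dt * (-decay_rates[:, None] * memory + x[None, :])
    prev_state = memory.ravel().copy()

# W @ memory.ravel() now gives an approximate one-step-ahead estimate of each event.
```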
Generation of Scale-Invariant Sequential Activity in Linear Recurrent Networks
Neural Computation (2020) 32 (7): 1379–1407. Published: 01 July 2020
Abstract
Sequential neural activity has been observed in many parts of the brain and has been proposed as a neural mechanism for memory. The natural world expresses temporal relationships at a wide range of scales. Because we cannot know the relevant scales a priori, it is desirable that memory, and thus the generated sequences, is scale invariant. Although recurrent neural network models have been proposed as a mechanism for generating sequences, the requirements for scale-invariant sequences are not known. This letter reports the constraints that enable a linear recurrent neural network model to generate scale-invariant sequential activity. A straightforward eigendecomposition analysis results in two independent conditions that are required for scale invariance for connectivity matrices with real, distinct eigenvalues. First, the eigenvalues of the network must be geometrically spaced. Second, the eigenvectors must be related to one another via translation. These constraints are easily generalizable for matrices that have complex and distinct eigenvalues. Analogous albeit less compact constraints hold for matrices with degenerate eigenvalues. These constraints, along with considerations on initial conditions, provide a general recipe to build linear recurrent neural networks that support scale-invariant sequential activity.
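The two conditions can be checked in a small numerical construction. The template shape, spacing ratio, and initial condition below are my own illustrative choices, not the letter's recipe; the point is only that a matrix with geometrically spaced eigenvalues and translated eigenvectors yields unit activations that are time-dilated copies of one another.

```python
# Illustrative construction of a linear recurrent network satisfying the two
# stated conditions: geometrically spaced eigenvalues and eigenvectors that
# are translated copies of one template. Parameter choices are my own.
import numpy as np

n, rho = 24, 1.25
eigvals = -rho ** -np.arange(n)                 # condition 1: geometric spacing

# Condition 2: eigenvector k is the same mixed-sign profile shifted to position k.
c = np.arange(n) - n // 2
profile = np.exp(-0.5 * (c / 0.8) ** 2) - 0.8 * np.exp(-0.5 * ((c - 2) / 0.8) ** 2)
V = np.column_stack([np.roll(profile, k - n // 2) for k in range(n)])

M = V @ np.diag(eigvals) @ np.linalg.inv(V)     # recurrent connectivity matrix

# Simulate dx/dt = M x with equal weight on every eigenmode at t = 0.
dt, steps = 0.05, 1200
x = V.sum(axis=1)
trace = np.empty((steps, n))
for step in range(steps):
    trace[step] = x
    x = x + dt * (M @ x)

# Interior units peak one after another; successive peak times differ by a
# roughly constant ratio (about rho), i.e. the sequence is scale invariant.
peak_times = trace.argmax(axis=0)[6:18] * dt
print(np.round(peak_times[1:] / peak_times[:-1], 2))
```

The equal-weight initial condition is just one choice that produces a sequence here; the article treats the role of initial conditions in general.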
Estimating Scale-Invariant Future in Continuous Time
Neural Computation (2019) 31 (4): 681–709. Published: 01 April 2019
Abstract
Natural learners must compute an estimate of future outcomes that follow from a stimulus in continuous time. Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially discounted future reward using the Bellman equation (model-free algorithms). An important drawback of model-based algorithms is that computational cost grows linearly with the amount of time to be simulated. An important drawback of model-free algorithms is the need to select a timescale required for exponential discounting. We present a computational mechanism, developed based on work in psychology and neuroscience, for computing a scale-invariant timeline of future outcomes. This mechanism efficiently computes an estimate of inputs as a function of future time on a logarithmically compressed scale and can be used to generate a scale-invariant power-law-discounted estimate of expected future reward. The representation of future time retains information about what will happen when. The entire timeline can be constructed in a single parallel operation that generates concrete behavioral and neural predictions. This computational mechanism could be incorporated into future reinforcement learning algorithms.
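One piece of this picture is easy to verify directly: a power-law discount function is a weighted mixture of exponential discounts whose time constants are geometrically spaced, which is the kind of multiscale basis a logarithmically compressed timeline provides. The check below uses a standard Gamma-function identity with illustrative grid ranges and exponent; it is not the article's mechanism.

```python
# Numerical check of a standard identity consistent with the abstract's claim:
# a power-law discount t**(-alpha) equals a weighted sum of exponential
# discounts exp(-s*t) with geometrically spaced rates s.
# Identity: t**(-alpha) = (1/Gamma(alpha)) * integral of s**(alpha-1) * exp(-s*t) ds
import numpy as np
from math import gamma, log

alpha = 1.0                                    # power-law exponent (illustrative)
s = np.geomspace(1e-4, 1e4, 300)               # geometrically spaced decay rates
d_log_s = log(s[1] / s[0])
weights = d_log_s / gamma(alpha) * s ** alpha  # quadrature weights on the log grid

t = np.geomspace(0.1, 100.0, 7)
approx = np.exp(-np.outer(t, s)) @ weights
print(np.round(approx * t ** alpha, 3))        # ~1 across three decades of t
```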
Neural Mechanism to Simulate a Scale-Invariant Future
Neural Computation (2016) 28 (12): 2594–2627. Published: 01 December 2016
Abstract
Predicting the timing and order of future events is an essential feature of cognition in higher life forms. We propose a neural mechanism to nondestructively translate the current state of spatiotemporal memory into the future, so as to construct an ordered set of future predictions almost instantaneously. We hypothesize that within each cycle of hippocampal theta oscillations, the memory state is swept through a range of translations to yield an ordered set of future predictions through modulations in synaptic connections. Theoretically, we operationalize critical neurobiological findings from hippocampal physiology in terms of neural network equations representing spatiotemporal memory. Combined with constraints based on physical principles requiring scale invariance and coherence in translation across memory nodes, the proposition results in Weber-Fechner spacing for the representation of both past (memory) and future (prediction) timelines. We show that the phenomenon of phase precession of neurons in the hippocampus and ventral striatum corresponds to the cognitive act of future prediction.
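The translation step can be illustrated in isolation. The toy below, with made-up numbers and no claim to model the proposed circuit, shows one way to see the operation: for a bank of leaky integrators holding the Laplace transform of the past, translating the memory state forward by delta is a fixed diagonal rescaling, so a sweep over delta lays out an ordered set of translated memory states without waiting.

```python
# Toy demonstration (made-up numbers, not the proposed circuit): with leaky
# integrators F holding the Laplace transform of the past, advancing the
# memory by delta without new input is a diagonal rescaling by exp(-s*delta),
# so sweeping delta yields an ordered set of translated memory states.
import numpy as np

s = np.geomspace(0.02, 2.0, 30)     # geometrically (Weber-Fechner) spaced rates
dt, t_stim, t_now = 0.01, 2.0, 5.0

# Integrate dF/dt = -s*F + f(t) for a brief pulse delivered at t_stim.
F = np.zeros_like(s)
for t in np.arange(0.0, t_now, dt):
    f_t = 1.0 / dt if abs(t - t_stim) < dt / 2 else 0.0
    F += dt * (-s * F + f_t)

# "Translate" the state by delta, then compare with actually waiting delta.
delta = 1.5
F_translated = np.exp(-s * delta) * F
F_waited = F.copy()
for _ in range(int(round(delta / dt))):
    F_waited += dt * (-s * F_waited)
print(np.max(np.abs(F_translated - F_waited)))   # small discretization error
```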
A Scale-Invariant Internal Representation of Time
Neural Computation (2012) 24 (1): 134–193. Published: 01 January 2012
Abstract
We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
Includes: Supplementary data
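The two-stage scheme described in this abstract can be sketched numerically. In the sketch below, leaky integrators encode the Laplace transform of the stimulus history, and an approximate inverse in the spirit the abstract describes (here the classical Post formula, built from repeated derivatives with respect to s) recovers a logarithmically blurred estimate of when the stimulus occurred. The order k, rate grid, and stimulus timing are my own illustrative choices.

```python
# Numerical sketch with my own discretization choices: leaky integrators
# compute the Laplace transform of the stimulus history; the Post formula
# (k derivatives in s) approximately inverts it, yielding a logarithmically
# blurred estimate of when the stimulus happened.
import numpy as np
from math import factorial

k = 4
s = np.geomspace(0.01, 10.0, 400)      # decay rates of the leaky integrators
dt, t_stim, t_now = 0.001, 3.0, 10.0

# Leaky integrators dF/dt = -s*F + f(t), driven by a brief pulse at t_stim.
F = np.zeros_like(s)
for t in np.arange(0.0, t_now, dt):
    f_t = 1.0 / dt if abs(t - t_stim) < dt / 2 else 0.0
    F += dt * (-s * F + f_t)

# Post formula: T(tau*) ~ (-1)**k / k! * s**(k+1) * d^k F / d s^k at s = k / tau*.
dF = F.copy()
for _ in range(k):
    dF = np.gradient(dF, s)
timeline = (-1) ** k / factorial(k) * s ** (k + 1) * dF
tau_star = k / s                       # the past time each unit represents

# For this small k the estimate peaks a bit before the true lag
# t_now - t_stim = 7 (at about k/(k+1) * 7) and approaches 7 as k grows; its
# spread is a fixed fraction of the lag, which is the scale-invariant blur.
print(round(tau_star[np.argmax(timeline)], 2))
```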