Search results for author Karthik H. Shankar (1–2 of 2)
Journal Articles
Publisher: Journals Gateway
Neural Computation (2016) 28 (12): 2594–2627.
Published: 01 December 2016
Figures: 49
Abstract
Predicting the timing and order of future events is an essential feature of cognition in higher life forms. We propose a neural mechanism to nondestructively translate the current state of spatiotemporal memory into the future, so as to construct an ordered set of future predictions almost instantaneously. We hypothesize that within each cycle of hippocampal theta oscillations, the memory state is swept through a range of translations to yield an ordered set of future predictions through modulations in synaptic connections. Theoretically, we operationalize critical neurobiological findings from hippocampal physiology in terms of neural network equations representing spatiotemporal memory. Combined with constraints based on physical principles requiring scale invariance and coherence in translation across memory nodes, the proposition results in Weber-Fechner spacing for the representation of both past (memory) and future (prediction) timelines. We show that the phenomenon of phase precession of neurons in the hippocampus and ventral striatum corresponds to the cognitive act of future prediction.
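The "nondestructive translation" in this abstract can be illustrated with a small sketch. If each memory node holds a Laplace-domain trace F(s, t) = Σ e^{-s (t - t_i)} over past events t_i, then the state δ seconds into the future (absent new input) is obtained instantaneously by a purely multiplicative modulation, F(s, t + δ) = e^{-s δ} F(s, t); sweeping δ, as the abstract proposes happens within a theta cycle, reads out an ordered set of future memory states. All names and values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def memory_state(event_times, s_values, now):
    """Laplace-domain memory of delta-function events that occurred before `now`.

    Each node with decay rate s holds sum_i exp(-s * (now - t_i))."""
    t = np.asarray(event_times)[:, None]                  # shape (n_events, 1)
    return np.exp(-s_values[None, :] * (now - t)).sum(axis=0)

s = np.logspace(-1, 1, 50)          # bank of decay rates (illustrative spacing)
events = [1.0, 3.0]                 # two past events
F_now = memory_state(events, s, now=5.0)

delta = 2.0
# Instantaneous, nondestructive translation: a diagonal gain e^{-s*delta}
F_translated = np.exp(-s * delta) * F_now
# The same state obtained by actually waiting delta seconds with no new input
F_future = memory_state(events, s, now=5.0 + delta)

assert np.allclose(F_translated, F_future)
```

The translation is "nondestructive" in the sense that the original state `F_now` is untouched; the translated copy is a per-node rescaling, which is why the abstract can attribute it to transient modulations in synaptic connections.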
Neural Computation (2012) 24 (1): 134–193.
Published: 01 January 2012
Figures: 16
Abstract
We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
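The two-stage mechanism this abstract describes — leaky integrators that hold a Laplace transform of the stimulus history, followed by a linear operator approximating the inverse transform — can be sketched as follows. This is a hedged illustration under assumed parameters (grid of decay rates, derivative order k, Euler integration), not the paper's implementation; the inverse step uses the standard Post-style approximation in which the k-th derivative of F with respect to s recovers the stimulus at past time τ* = k/s:

```python
import math
import numpy as np

def run_integrators(stimulus, s_values, dt):
    """Evolve a bank of leaky integrators dF/dt = -s*F + f(t) (forward Euler).

    After the loop, F[j] approximates the Laplace transform of the stimulus
    history at decay rate s_values[j], evaluated at the present moment."""
    F = np.zeros(len(s_values))
    for f_t in stimulus:
        F += dt * (-s_values * F + f_t)
    return F

def approx_inverse(F, s_values, k):
    """Linear operator approximating the inverse Laplace transform:
    f_tilde(tau*) ∝ (-1)^k * s^(k+1) * d^k F / d s^k,  with tau* = k / s.

    The k-th derivative in s is taken numerically on the (nonuniform) grid."""
    dFk = F.copy()
    for _ in range(k):
        dFk = np.gradient(dFk, s_values)
    f_tilde = ((-1) ** k) * (s_values ** (k + 1)) * dFk / math.factorial(k)
    tau_star = k / s_values
    return tau_star, f_tilde

# Illustrative demo: a brief pulse 5 time units before the present should
# reappear in the reconstruction as a bump near tau* ≈ 5, with a spread that
# grows in proportion to tau* (the scale-invariant "time cell" profile).
dt = 0.01
stimulus = np.zeros(int(5.0 / dt))
stimulus[0] = 1.0 / dt                        # approximate delta pulse
s = np.logspace(-1, 1, 100)                   # Weber-Fechner-like spacing
F = run_integrators(stimulus, s, dt)
tau_star, f_tilde = approx_inverse(F, s, k=4)
```

Note the built-in bias of the approximation: for a delta input an elapsed time τ in the past peaks near τ* = kτ/(k+1), approaching τ as the derivative order k grows, and the edges of the s grid are unreliable after repeated numerical differentiation.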
Includes: Supplementary data