Terence D. Sanger
1-6 of 6
Journal Articles
Publisher: Journals Gateway
Neural Computation (2020) 32 (11): 2069–2084.
Published: 01 November 2020
Abstract
The cerebellum is known to have an important role in sensing and execution of precise time intervals, but the mechanism by which arbitrary time intervals can be recognized and replicated with high precision is unknown. We propose a computational model in which precise time intervals can be identified from the pattern of individual spike activity in a population of parallel fibers in the cerebellar cortex. The model depends on the presence of repeatable sequences of spikes in response to conditioned stimulus input. We emulate granule cells using a population of Izhikevich neuron approximations driven by random but repeatable mossy fiber input. We emulate long-term depression (LTD) and long-term potentiation (LTP) synaptic plasticity at the parallel fiber to Purkinje cell synapse. We simulate a delay conditioning paradigm with a conditioned stimulus (CS) presented to the mossy fibers and an unconditioned stimulus (US) some time later issued to the Purkinje cells as a teaching signal. We show that Purkinje cells rapidly adapt to decrease firing probability following onset of the CS only at the interval for which the US had occurred. We suggest that detection of replicable spike patterns provides an accurate and easily learned timing structure that could be an important mechanism for behaviors that require identification and production of precise time intervals.
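The granule-cell emulation rests on the Izhikevich neuron model. A minimal sketch of that model follows, using the standard regular-spiking parameters from Izhikevich (2003); the drive current, duration, and step size are illustrative choices, not the paper's simulation settings:

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=200.0, dt=0.5):
    """Single Izhikevich neuron under a constant input current I (mV, ms).

    a, b, c, d are the standard regular-spiking values; I, T, and dt
    are illustrative, not taken from the article.
    """
    v, u = c, b * c                  # membrane potential and recovery variable
    spike_times = []
    for step in range(int(T / dt)):
        # Izhikevich dynamics: v' = 0.04 v^2 + 5 v + 140 - u + I ; u' = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike: reset v, bump the recovery term
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

spikes = izhikevich()
print(len(spikes))                   # tonic firing under constant drive
```

In the model described above, a population of such units driven by repeatable mossy-fiber input yields the repeatable spike sequences that the timing mechanism relies on.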
Neural Computation (2014) 26 (12): 2669–2691.
Published: 01 December 2014
Abstract
Human movement differs from robot control because of its flexibility in unknown environments, robustness to perturbation, and tolerance of unknown parameters and unpredictable variability. We propose a new theory, risk-aware control, in which movement is governed by estimates of risk based on uncertainty about the current state and knowledge of the cost of errors. We demonstrate the existence of a feedback control law that implements risk-aware control and show that this control law can be directly implemented by populations of spiking neurons. Simulated examples of risk-aware control for time-varying cost functions as well as learning of unknown dynamics in a stochastic risky environment are provided.
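One way to make the core idea concrete (a toy construction of mine, not the paper's control law): choose the command that minimizes the *expected* cost under uncertainty about the current state, with an asymmetric cost so that errors on one side are much more expensive. The cost function, noise level, and grid search below are all illustrative:

```python
import numpy as np

def cost(x):
    # Asymmetric cost: overshooting past the target at x = 1 is far more
    # expensive than undershooting (a "cliff" just beyond the target).
    return (x - 1.0) ** 2 + 50.0 * np.maximum(x - 1.0, 0.0) ** 2

def expected_cost(u, mu, sigma, n=20_000, seed=0):
    # Monte Carlo estimate; common random numbers (fixed seed) keep the
    # estimated cost curve smooth as a function of u.
    x = np.random.default_rng(seed).normal(mu, sigma, n)
    return cost(x + u).mean()

mu = 0.0                                  # current state estimate
us = np.linspace(0.0, 1.5, 301)           # candidate commands

u_certain = us[np.argmin([expected_cost(u, mu, 1e-6) for u in us])]
u_risky = us[np.argmin([expected_cost(u, mu, 0.3) for u in us])]
print(u_certain, u_risky)
```

With negligible uncertainty the chosen command aims straight at the target; with substantial state uncertainty the risk-aware command deliberately stops short, because the expected penalty of overshooting dominates.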
Neural Computation (2011) 23 (8): 1911–1934.
Published: 01 August 2011
Abstract
Control in the natural environment is difficult in part because of uncertainty in the effect of actions. Uncertainty can be due to added motor or sensory noise, unmodeled dynamics, or quantization of sensory feedback. Biological systems are faced with further difficulties, since control must be performed by networks of cooperating neurons and neural subsystems. Here, we propose a new mathematical framework for modeling and simulation of distributed control systems operating in an uncertain environment. Stochastic differential operators can be derived from the stochastic differential equation describing a system, and they map the current state density into the differential of the state density. Unlike discrete-time Markov update operators, stochastic differential operators combine linearly for a large class of linear and nonlinear systems, and therefore the combined effects of multiple controllable and uncontrollable subsystems can be predicted. Design using these operators yields systems whose statistical behavior can be specified throughout state-space. The relationship to Bayesian estimation and discrete-time Markov processes is described.
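The linearity claim can be illustrated in the simplest scalar case. For dx = f(x) dt + sigma dW, the operator mapping the state density p to its time differential has the Fokker-Planck form L p = -d/dx (f p) + (sigma^2/2) d^2p/dx^2. The finite-difference discretization below is my own illustrative construction (crude boundary handling, one shared noise source), not the paper's formulation:

```python
import numpy as np

N = 200
x = np.linspace(-4.0, 4.0, N)
h = x[1] - x[0]

def sd_operator(f, sigma):
    # Central-difference matrices for d/dx and d^2/dx^2; boundary rows
    # are left crude since only the linearity property is illustrated.
    D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
    D2 = (np.diag(np.ones(N - 1), 1) - 2.0 * np.eye(N)
          + np.diag(np.ones(N - 1), -1)) / h ** 2
    return -D1 @ np.diag(f(x)) + 0.5 * sigma ** 2 * D2

# Two subsystems with drifts f1, f2 sharing one noise source: the operator
# of the combined system equals the sum of the subsystem operators once
# the shared diffusion term is counted only once.
A1 = sd_operator(lambda z: -z, 0.5)
A2 = sd_operator(np.tanh, 0.5)
A0 = sd_operator(lambda z: 0.0 * z, 0.5)        # diffusion-only operator
A12 = sd_operator(lambda z: -z + np.tanh(z), 0.5)

diff = np.abs(A12 - (A1 + A2 - A0)).max()
print(diff)  # ~0: the operators combine linearly
```

A discrete-time Markov update operator (the matrix exponential of L times a step) would not add this way, which is the contrast the abstract draws.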
Neural Computation (2004) 16 (9): 1873–1886.
Published: 01 September 2004
Abstract
For certain complex motor tasks, humans may experience the frustration of a lack of improvement despite repeated practice. We investigate a computational basis for failure of motor learning when there is no prior information about the system to be controlled and when it is not practical to perform a thorough random exploration of the set of possible commands. In this case, if the desired movement has never yet been performed, then it may not be possible to learn the correct motor commands since there will be no appropriate training examples. We derive the mathematical basis for this phenomenon when the controller can be modeled as a linear combination of nonlinear basis functions trained using a gradient descent learning rule on the observed commands and their results. We show that there are two failure modes for which continued training examples will never lead to improvement in performance. We suggest that this may provide a model for the lack of improvement in human skills that can occur despite repeated practice of a complex task.
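A toy instance of how such a failure can arise (my illustrative construction, not the paper's two derived failure modes): a controller that is a linear combination of narrow Gaussian basis functions, trained by gradient descent only on the commands actually experienced, never improves at a command whose basis functions no training example ever activates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Controller = linear combination of narrow Gaussian basis functions on
# [0, 1]; all numbers below are illustrative.
centers = np.linspace(0.0, 1.0, 21)

def phi(x, width=0.02):
    return np.exp(-(x - centers) ** 2 / (2.0 * width ** 2))

f = lambda x: np.sin(2.0 * np.pi * x)   # toy input-output map to be learned
w = np.zeros_like(centers)

def sq_error_at(x):
    return (w @ phi(x) - f(x)) ** 2

before = sq_error_at(0.9)               # error at the never-practiced command
for _ in range(5000):
    # Practice only ever samples commands near x = 0.2, so the basis
    # functions near x = 0.9 are never activated by any training example
    # and their weights receive zero gradient.
    x = 0.2 + 0.02 * rng.standard_normal()
    err = w @ phi(x) - f(x)
    w -= 0.5 * err * phi(x)             # gradient step on the squared error
after = sq_error_at(0.9)

print(before, after)  # no improvement at x = 0.9, however long we train
```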
Neural Computation (1994) 6 (1): 29–37.
Published: 01 January 1994
Abstract
Recent evidence of population coding in motor cortex has led some researchers to claim that certain variables such as hand direction or force may be coded within a Cartesian coordinate system with respect to extrapersonal space. These claims are based on the ability to predict the rectangular coordinates of hand movement direction using a “population vector” computed from multiple cells' firing rates. I show here that such a population vector can always be found given a very general set of assumptions. Therefore the existence of a population vector constitutes only weak support for the explicit use of a particular coordinate representation by motor cortex.
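The readout in question can be sketched as follows, assuming cosine-tuned cells with random preferred directions (the cell count, baseline, and gain are illustrative, not from the article): the population vector sums each cell's preferred direction weighted by its rate above baseline, and recovers the movement direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 200 cosine-tuned cells with random unit-length
# preferred directions in the plane.
n, baseline, gain = 200, 10.0, 5.0
pref = rng.normal(size=(n, 2))
pref /= np.linalg.norm(pref, axis=1, keepdims=True)

def rates(d):
    # Cosine tuning: r_i = baseline + gain * (preferred_i . direction)
    return baseline + gain * pref @ d

def population_vector(r):
    # Sum preferred directions weighted by each cell's rate above baseline.
    return (r - baseline) @ pref

d = np.array([np.cos(0.7), np.sin(0.7)])      # true movement direction
p = population_vector(rates(d))
p /= np.linalg.norm(p)
print(p @ d)  # near 1: the population vector recovers the direction
```

The point of the article is that a successful readout of this kind follows from very general assumptions, so it does not by itself pin down the cortical coordinate representation.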
Neural Computation (1991) 3 (1): 67–78.
Published: 01 March 1991
Abstract
I describe a new algorithm for approximating continuous functions in high-dimensional input spaces. The algorithm builds a tree-structured network of variable size, which is determined both by the distribution of the input data and by the function to be approximated. Unlike other tree-structured algorithms, learning occurs through completely local mechanisms and the weights and structure are modified incrementally as data arrives. Efficient computation in the tree structure takes advantage of the potential for low-order dependencies between the output and the individual dimensions of the input. This algorithm is related to the ideas behind k-d trees (Bentley 1975), CART (Breiman et al. 1984), and MARS (Friedman 1988). I present an example that predicts future values of the Mackey-Glass differential delay equation.
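The benchmark series itself is easy to generate. A sketch using forward Euler integration with the standard chaotic parameters (the step size and constant initial history are my illustrative choices, not the article's):

```python
import numpy as np

def mackey_glass(beta=0.2, gamma=0.1, n=10, tau=17, dt=1.0, steps=2000):
    """Mackey-Glass delay series dx/dt = beta*x_d/(1 + x_d^n) - gamma*x,
    with x_d = x(t - tau), integrated by forward Euler."""
    delay = int(tau / dt)
    x = np.full(steps + delay, 1.2)            # constant initial history
    for t in range(delay, steps + delay - 1):
        xd = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * xd / (1.0 + xd ** n) - gamma * x[t])
    return x[delay:]

series = mackey_glass()
print(series.min(), series.max())  # bounded, aperiodic oscillation
```

Prediction benchmarks of the kind the article reports typically train on pairs of past samples and a future value of such a series.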