L. F. Abbott
1–9 of 9 results
Journal Articles
Publisher: Journals Gateway
Neural Computation (2014) 26 (10): 2163–2193.
Published: 01 October 2014
Abstract
We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may be able to solve only fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are used only during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable subtasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks or to be extended for more complex tasks without retraining lower levels.
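A toy sketch of the two-level scheme, under heavy simplifying assumptions (a one-dimensional plant, no obstacles, and hand-written rather than trained controllers — all names and gains are hypothetical, not the paper's networks): a high-level controller issues intermediate waypoints toward the goal, and a low-level controller converts each waypoint into a simple plant command.

```python
# Hypothetical two-level controller sketch; not the paper's trained system.

def low_level(pos, waypoint, gain=0.5):
    """Low-level controller: command the plant a fraction of the way to a waypoint."""
    return gain * (waypoint - pos)

def high_level(pos, goal, step=2.0):
    """High-level controller: break the overall task into nearby subgoals."""
    if abs(goal - pos) <= step:
        return goal
    return pos + step * (1 if goal > pos else -1)

def run(pos, goal, steps=50):
    for _ in range(steps):
        wp = high_level(pos, goal)   # upper level commands the lower level
        pos += low_level(pos, wp)    # lower level commands the plant
    return pos

final = run(pos=0.0, goal=10.0)      # plant converges to the goal
```

The division of labor mirrors the abstract: the low level solves only the simple "reach a nearby waypoint" problem, while the high level sequences waypoints to accomplish the larger task.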
Neural Computation (2005) 17 (3): 609–631.
Published: 01 March 2005
Abstract
Neural networks that are trained to perform specific tasks must be developed through a supervised learning procedure. This normally takes the form of direct supervision of synaptic plasticity. We explore the idea that supervision takes place instead through the modulation of neuronal excitability. Such supervision can be done using conventional synaptic feedback pathways rather than requiring the hypothetical actions of unknown modulatory agents. During task learning, supervised response modulation guides Hebbian synaptic plasticity indirectly by establishing appropriate patterns of correlated network activity. This results in robust learning of function approximation tasks even when multiple output units representing different functions share large amounts of common input. Reward-based supervision is also studied, and a number of potential advantages of neuronal response modulation are identified.
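A heavily simplified caricature of the idea, assuming supervision fully clamps the output response to a teacher value and plasticity is a purely Hebbian correlation of response with input (plus decay); the target weights and all parameters are illustrative, not the paper's model:

```python
import numpy as np

# Hypothetical sketch: supervised response modulation guides Hebbian plasticity.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])  # assumed target linear mapping
w = np.zeros(3)
eta = 0.01

for _ in range(20000):
    x = rng.standard_normal(3)       # input pattern (identity covariance)
    t = w_true @ x                   # teacher signal
    r = t                            # supervision clamps the response toward t
    w += eta * (r * x - w)           # Hebbian correlation with weight decay
```

Because the inputs have identity covariance, the fixed point of the decay-regularized Hebbian update is `E[r x] = w_true`: the modulated response establishes the correlations that the synaptic rule then absorbs, with no direct supervision of the weights themselves.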
Neural Computation (2000) 12 (2): 313–335.
Published: 01 February 2000
Abstract
Sets of neuronal tuning curves, which describe the responses of neurons as functions of a stimulus, can serve as a basis for approximating other functions of stimulus parameters. In a function-approximating network, synaptic weights determined by a correlation-based Hebbian rule are closely related to the coefficients that result when a function is expanded in an orthogonal basis. Although neuronal tuning curves typically are not orthogonal functions, the relationship between function approximation and correlation-based synaptic weights can be retained if the tuning curves satisfy the conditions of a tight frame. We examine whether the spatial receptive fields of simple cells in cat and monkey primary visual cortex (V1) form a tight frame, allowing them to serve as a basis for constructing more complicated extrastriate receptive fields using correlation-based synaptic weights. Our calculations show that the set of V1 simple cell receptive fields is not tight enough to account for the acuity observed psychophysically.
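The tight-frame condition can be illustrated with a standard small example rather than V1 receptive fields: three unit vectors at 120° spacing (the "Mercedes-Benz" frame) form a tight frame for the plane with frame bound A = 3/2, so correlation-based coefficients suffice for exact reconstruction.

```python
import numpy as np

# Three equally spaced unit vectors: a tight frame for R^2 with bound A = 3/2.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
phi = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows are frame vectors

A = 1.5
frame_op = phi.T @ phi              # equals A * I exactly when the frame is tight

f = np.array([0.3, -1.2])           # arbitrary "stimulus function"
coeffs = phi @ f                    # correlation-based coefficients <f, phi_i>
f_rec = phi.T @ coeffs / A          # reconstruction from the same coefficients
```

The nonorthogonality of the frame vectors is no obstacle: tightness alone guarantees that expansion coefficients obtained by simple correlation recover any vector, which is the property the paper tests for V1 simple-cell receptive fields.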
Neural Computation (1999) 11 (5): 1079–1096.
Published: 01 July 1999
Abstract
Activity-dependent plasticity appears to play an important role in the modification of neurons and neural circuits that occurs during development and learning. Plasticity is also essential for the maintenance of stable patterns of activity in the face of variable environmental and internal conditions. Previous theoretical and experimental results suggest that neurons stabilize their activity by altering the number or characteristics of ion channels to regulate their intrinsic electrical properties. We present both experimental and modeling evidence to show that activity-dependent regulation of conductances, operating at the level of individual neurons, can also stabilize network activity. These results indicate that the stomatogastric ganglion of the crab can generate a characteristic rhythmic pattern of activity in two fundamentally different modes of operation. In one mode, the rhythm is strictly conditional on the presence of neuromodulatory afferents from adjacent ganglia. In the other, it is independent of neuromodulatory input but relies on newly developed intrinsic properties of the component neurons.
Neural Computation (1999) 11 (1): 91–101.
Published: 01 January 1999
Abstract
We study the impact of correlated neuronal firing rate variability on the accuracy with which an encoded quantity can be extracted from a population of neurons. Contrary to widespread belief, correlations in the variabilities of neuronal firing rates do not, in general, limit the increase in coding accuracy provided by using large populations of encoding neurons. Furthermore, in some cases, but not all, correlations improve the accuracy of a population code.
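One way correlations can help, sketched with hypothetical numbers: for a pair of neurons with opposite tuning slopes, positively correlated noise is shared noise that a difference readout cancels, so the correlated population decodes *more* accurately than an independent one.

```python
import numpy as np

rng = np.random.default_rng(1)
s = 2.0                                  # hypothetical encoded stimulus value
n_trials = 200_000
c = 0.8                                  # noise correlation between the pair

# Opposite tuning: mean rates +s and -s, with correlated unit-variance noise.
cov = np.array([[1.0, c], [c, 1.0]])
noise = rng.multivariate_normal([0.0, 0.0], cov, size=n_trials)
r1 = s + noise[:, 0]
r2 = -s + noise[:, 1]

est = (r1 - r2) / 2                      # difference readout cancels shared noise
var_corr = est.var()                     # -> (2 - 2c)/4 = 0.1 for c = 0.8
var_indep = (1.0 + 1.0) / 4              # same readout with independent noise: 0.5
```

Whether correlations hurt or help thus depends on how the correlation structure aligns with the tuning of the readout, which is the point of the abstract's "in some cases, but not all."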
Neural Computation (1996) 8 (1): 85–93.
Published: 01 January 1996
Abstract
Using experimental facts about long-term potentiation (LTP) and hippocampal place cells, we model how a spatial map of the environment can be created in the rat hippocampus. Sequential firing of place cells during exploration induces, in the model, a pattern of LTP between place cells that shifts the location coded by their ensemble activity away from the actual location of the animal. These shifts provide a navigational map that, in a simulation of the Morris maze, can guide the animal toward its goal. The model demonstrates how behaviorally generated modifications of synaptic strengths can be read out to affect subsequent behavior. Our results also suggest a way that navigational maps can be constructed from experimental recordings of hippocampal place cells.
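A minimal sketch of the decoding effect (the field shape, shift size, and sign convention are assumptions, not the paper's LTP model): with Gaussian place fields on a 1-D track decoded by an activity-weighted average of field labels, an LTP-induced shift of the fields makes the location coded by the ensemble differ from the animal's actual position.

```python
import numpy as np

centers = np.linspace(0.0, 10.0, 101)   # each cell's original field center (its "label")
sigma = 0.5                             # assumed place-field width
shift = 0.3                             # assumed LTP-induced field shift

def decode(pos, field_centers):
    """Activity-weighted average of cell labels; activity set by field_centers."""
    act = np.exp(-(pos - field_centers) ** 2 / (2 * sigma ** 2))
    return (act * centers).sum() / act.sum()

pos = 5.0
before = decode(pos, centers)           # unshifted fields: decoded location = pos
after = decode(pos, centers - shift)    # shifted fields: decoded location leads pos
```

The discrepancy between decoded and actual position is the quantity the model exploits: read out across the environment, these shifts define a vector field that can guide navigation toward the goal.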
Neural Computation (1995) 7 (3): 507–517.
Published: 01 May 1995
Abstract
Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
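At the single-unit level, the optimal correlation-based reduction can be illustrated with Oja's rule, a simple Hebbian update with multiplicative normalization that converges to the first principal component; the data distribution below is an illustrative assumption, and the restricted-receptive-field architecture of the paper is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 2-D data with a dominant direction along [1, 1] / sqrt(2).
cov = np.array([[2.0, 1.8], [1.8, 2.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=20_000)

w = np.array([1.0, 0.0])
eta = 0.005
for x in X:
    y = w @ x                         # unit's response
    w += eta * y * (x - y * w)        # Oja's rule: Hebb term plus normalization

pc1 = np.array([1.0, 1.0]) / np.sqrt(2)
align = abs(w @ pc1)                  # |cosine| between w and the true first PC
```

A network of such units with full input access would redundantly extract the same component; the paper's observation is that restricting each unit's receptive field yields efficient reduced representations without resorting to biologically implausible decorrelating rules.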
Neural Computation (1993) 5 (6): 823–842.
Published: 01 November 1993
Abstract
We analyze neuron models in which the maximal conductances of membrane currents are slowly varying dynamic variables regulated by the intracellular calcium concentration. These models allow us to study possible activity-dependent effects arising from processes that maintain and modify membrane channels in real neurons. Regulated model neurons maintain a constant average level of activity over a wide range of conditions by appropriately adjusting their conductances. The intracellular calcium concentration acts as a feedback element linking maximal conductances to electrical activity. The resulting plasticity of intrinsic characteristics has important implications for network behavior. We first study a simple two-conductance model, then introduce techniques that allow us to analyze dynamic regulation with an arbitrary number of conductances, and finally illustrate this method by studying a seven-conductance model. We conclude with an analysis of spontaneous differentiation of identical model neurons in a two-cell network.
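The feedback loop can be caricatured in two variables (this abstract sketch replaces the paper's conductance-based neuron with an assumed monotone link between conductance and activity): calcium slowly tracks activity, and the maximal conductance is adjusted by the calcium error, driving activity to a set point from any starting conductance.

```python
# Minimal dynamical sketch of calcium-mediated conductance regulation.
ca_target = 1.0                 # calcium set point
g, ca = 0.2, 0.0                # initial maximal conductance and calcium level
dt, tau_ca, tau_g = 0.01, 1.0, 20.0   # regulation is slow: tau_g >> tau_ca

for _ in range(200_000):
    activity = 2.0 * g                      # assumed monotone activity-conductance link
    ca += dt * (activity - ca) / tau_ca     # calcium integrates recent activity
    g += dt * (ca_target - ca) / tau_g      # conductance regulated by calcium error

steady_activity = 2.0 * g       # converges to ca_target regardless of initial g
```

The slow conductance dynamics act as the integral feedback described in the abstract: calcium links electrical activity back to the maximal conductances, so the neuron settles at a constant average activity level over a range of conditions.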
Neural Computation (1991) 3 (4): 487–497.
Published: 01 December 1991
Abstract
The pyloric network of the stomatogastric ganglion in crustacea is a central pattern generator that can produce the same basic rhythm over a wide frequency range. Three electrically coupled neurons, the anterior burster (AB) neuron and two pyloric dilator (PD) neurons, act as a pacemaker unit for the pyloric network. The functional characteristics of the pacemaker network are the result of electrical coupling between neurons with quite different intrinsic properties, each contributing a basic feature to the complete circuit. The AB neuron, a conditional oscillator, plays a dominant role in rhythm generation. In the work described here, we manipulate the frequency of the AB neuron both isolated and electrically coupled to the PD neurons. Physiological and modeling studies indicate that the PD neurons play an important role in regulating the duration of the bursts produced by the pacemaker unit.