Peter E. Latham
Neural Computation (2013) 25 (6): 1408–1439.
Published: 01 June 2013
Abstract
The brain is easily able to process and categorize complex time-varying signals. For example, the two sentences, "It is cold in London this time of year" and "It is hot in London this time of year," have different meanings, even though the words "hot" and "cold" appear several seconds before the ends of the two sentences. Any network that can tell these sentences apart must therefore have a long temporal memory. In other words, the current state of the network must depend on events that happened several seconds ago. This is a difficult task, as neurons are dominated by relatively short time constants, tens to hundreds of milliseconds. Nevertheless, it was recently proposed that randomly connected networks could exhibit the long memories necessary for complex temporal processing. This is an attractive idea, both for its simplicity and because little tuning of recurrent synaptic weights is required. However, we show that when connectivity is high, as it is in the mammalian brain, randomly connected networks cannot exhibit temporal memory much longer than the time constants of their constituent neurons.
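This scaling claim lends itself to a quick numerical check. Below is a minimal sketch, assuming a linear rate network tau dx/dt = -x + Jx with dense Gaussian random connectivity of spectral radius g < 1 (an illustrative stand-in, not the paper's exact model); it applies a random perturbation and fits its decay time, which for moderate g stays within a small multiple of the single-neuron time constant. All parameter values are assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's model): a linear rate
# network, tau * dx/dt = -x + J @ x, with dense Gaussian connectivity J
# scaled so its spectral radius is g < 1. We perturb the network and fit
# how quickly the perturbation decays relative to tau.

rng = np.random.default_rng(0)
N = 500                     # number of neurons
tau = 0.020                 # neuronal time constant: 20 ms
g = 0.9                     # coupling strength (spectral radius of J)
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

dt, steps = 0.001, 1000
x = rng.normal(size=N)      # random initial perturbation
norms = np.empty(steps)
for k in range(steps):
    x += (dt / tau) * (-x + J @ x)      # forward-Euler step
    norms[k] = np.linalg.norm(x)

# The slowest mode decays roughly like tau / (1 - g): longer than tau,
# but only by a modest factor unless g is tuned very close to 1.
t = dt * (1 + np.arange(steps))
slope = np.polyfit(t[steps // 2:], np.log(norms[steps // 2:]), 1)[0]
print(f"fitted memory time ~ {-1e3 / slope:.0f} ms vs tau = {tau * 1e3:.0f} ms")
```

Pushing g toward 1 lengthens the fitted decay time, but only through fine tuning, consistent with the abstract's point that generic random connectivity does not buy memory much longer than the neuronal time constant.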
Neural Computation (2004) 16 (7): 1385–1412.
Published: 01 July 2004
Abstract
Cortical neurons are predominantly excitatory and highly interconnected. In spite of this, the cortex is remarkably stable: normal brains do not exhibit the kind of runaway excitation one might expect of such a system. How does the cortex maintain stability in the face of this massive excitatory feedback? More importantly, how does it do so during computations, which necessarily involve elevated firing rates? Here we address these questions in the context of attractor networks—networks that exhibit multiple stable states, or memories. We find that such networks can be stabilized at the relatively low firing rates observed in vivo if two conditions are met: (1) the background state, where all neurons are firing at low rates, is inhibition dominated, and (2) the fraction of neurons involved in a memory is above some threshold, so that there is sufficient coupling between the memory neurons and the background. This allows “dynamical stabilization” of the attractors, meaning feedback from the pool of background neurons stabilizes what would otherwise be an unstable state. We suggest that dynamical stabilization may be a strategy used for a broad range of computations, not just those involving attractors.
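As a toy illustration of how a low-rate background state and a higher-rate memory state can coexist, the sketch below scans for fixed points of a single-population mean-field model, r = phi(w_net r + I0), with a sigmoidal rate function. This reduced caricature omits the paper's actual mechanism (dynamical stabilization by feedback from the background pool); the rate function and all constants are assumptions.

```python
import numpy as np

# Toy mean-field sketch: one memory population with net recurrent
# coupling w_net = w_E - w_I and a sigmoidal rate function. We look for
# multiple stable fixed points of r = phi(w_net * r + I0).

def phi(x, r_max=25.0, gain=0.5, thresh=12.0):
    """Sigmoidal f-I curve, rates in Hz (all parameters are assumptions)."""
    return r_max / (1.0 + np.exp(-gain * (x - thresh)))

w_E, w_I, I0 = 1.8, 0.8, 4.0    # inhibition keeps the net coupling moderate

r = np.linspace(0.0, 30.0, 30001)
f = phi((w_E - w_I) * r + I0)
crossings = np.where(np.diff(np.sign(f - r)) != 0)[0]
for i in crossings:
    stable = (f - r)[i] > 0      # f crosses r from above => slope < 1
    print(f"fixed point near r = {r[i]:5.2f} Hz "
          f"({'stable' if stable else 'unstable'})")
```

With the assumed values the scan reports a stable background state below 1 Hz, an unstable threshold near 5 Hz, and a stable memory state near 25 Hz, a cartoon of the background/memory coexistence the abstract describes.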
Neural Computation (2003) 15 (10): 2281–2306.
Published: 01 October 2003
Abstract
We calculate the firing rate of the quadratic integrate-and-fire neuron in response to a colored noise input current. Such an input current is a good approximation to the noise due to the random bombardment of spikes, with the correlation time of the noise corresponding to the decay time of the synapses. The key parameter that determines the firing rate is the ratio of the correlation time of the colored noise, τ_s, to the neuronal time constant, τ_m. We calculate the firing rate exactly in two limits: when the ratio, τ_s/τ_m, goes to zero (white noise) and when it goes to infinity. The correction to the short correlation time limit is O(τ_s/τ_m), which is qualitatively different from that of the leaky integrate-and-fire neuron, where the correction is O(√(τ_s/τ_m)). The difference is due to the different boundary conditions of the probability density function of the membrane potential of the neuron at firing threshold. The correction to the long correlation time limit is O(τ_m/τ_s). By combining the short and long correlation time limits, we derive an expression that provides a good approximation to the firing rate over the whole range of τ_s/τ_m in the suprathreshold regime, that is, the regime in which the average current is sufficient to make the cell fire. In the subthreshold regime, the expression breaks down somewhat when τ_s becomes large compared to τ_m.
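The two limits quoted above can be probed by direct simulation. The sketch below integrates a QIF neuron, τ_m dV/dt = V² + μ + η(t), where η is Ornstein-Uhlenbeck (colored) noise with correlation time τ_s, and compares the Monte Carlo rate against the deterministic suprathreshold limit √μ/(π τ_m); all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

# Monte Carlo sketch of a quadratic integrate-and-fire (QIF) neuron
# driven by Ornstein-Uhlenbeck (colored) noise:
#   tau_m dV/dt = V^2 + mu + eta(t)
#   tau_s d(eta)/dt = -eta + sigma * sqrt(2 * tau_s) * xi(t)
# Spike when V crosses +V_max; reset to -V_max (approximating +/- infinity).

rng = np.random.default_rng(1)
tau_m, tau_s = 0.010, 0.002      # 10 ms membrane, 2 ms noise correlation time
mu, sigma = 1.0, 0.5             # suprathreshold mean drive (mu > 0) plus noise
V_max = 50.0                     # numerical stand-in for +infinity
dt, T = 1e-5, 5.0                # time step and total simulated time (s)

V, eta, n_spikes = -V_max, 0.0, 0
for _ in range(int(T / dt)):
    xi = rng.normal() * np.sqrt(dt)
    eta += (-eta * dt + sigma * np.sqrt(2.0 * tau_s) * xi) / tau_s
    V += (V * V + mu + eta) * dt / tau_m
    if V >= V_max:
        n_spikes += 1
        V = -V_max

# Deterministic (zero-noise) suprathreshold QIF rate: sqrt(mu) / (pi * tau_m).
print(f"simulated rate = {n_spikes / T:.1f} Hz, deterministic limit = "
      f"{np.sqrt(mu) / (np.pi * tau_m):.1f} Hz")
```

Sweeping τ_s at fixed τ_m in this simulation is one way to trace the crossover between the short and long correlation time regimes the abstract analyzes.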
Neural Computation (1999) 11 (1): 85–90.
Published: 01 January 1999
Abstract
Neurophysiologists are often faced with the problem of evaluating the quality of a code for a sensory or motor variable, either to relate it to the performance of the animal in a simple discrimination task or to compare the codes at various stages along the neuronal pathway. One common belief that has emerged from such studies is that sharpening of tuning curves improves the quality of the code, although only to a certain point; sharpening beyond that is believed to be harmful. We show that this belief relies on either problematic technical analysis or improper assumptions about the noise. We conclude that one cannot tell, in the general case, whether narrow tuning curves are better than wide ones; the answer depends critically on the covariance of the noise. The same conclusion applies to other manipulations of the tuning curve profiles such as gain increase.
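The quantity at issue, how much information the population carries as a function of tuning width and noise covariance, can be made concrete with the linear Fisher information J(s) = f'(s)ᵀ C⁻¹ f'(s). The sketch below evaluates J for Gaussian tuning curves of several widths under independent Poisson-like noise and under limited-range correlations; the tuning model and both covariance structures are assumptions chosen for illustration, not the paper's specific analysis.

```python
import numpy as np

# Linear Fisher information J(s) = f'(s)^T C^{-1} f'(s) for a population
# of Gaussian tuning curves, under two noise covariance models.

def fisher_info(width, c_max=0.0, L_corr=1.0, N=100, s=0.0, fmax=20.0):
    centers = np.linspace(-10.0, 10.0, N)
    # Gaussian tuning curves and their derivative at stimulus s
    f = fmax * np.exp(-(s - centers) ** 2 / (2.0 * width ** 2))
    df = f * (centers - s) / width ** 2
    std = np.sqrt(f + 1e-6)                  # Poisson-like: variance = mean
    # Limited-range correlations: rho_ij = c_max * exp(-|d_ij| / L_corr)
    d = np.abs(centers[:, None] - centers[None, :])
    rho = c_max * np.exp(-d / L_corr)
    np.fill_diagonal(rho, 1.0)
    C = rho * np.outer(std, std)
    return df @ np.linalg.solve(C, df)

for width in (0.5, 1.0, 2.0, 4.0):
    J_ind = fisher_info(width)                         # independent noise
    J_cor = fisher_info(width, c_max=0.3, L_corr=2.0)  # correlated noise
    print(f"width {width:3.1f}: J_independent = {J_ind:9.1f}   "
          f"J_correlated = {J_cor:9.1f}")
```

Comparing how the two columns rank the widths, and how that ranking moves as the covariance parameters change, is precisely the exercise the abstract argues has no general answer.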
Neural Computation (1998) 10 (2): 373–401.
Published: 15 February 1998
Abstract
Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, in contrast to these methods, they typically represent the variable with another population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
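A minimal sketch in the spirit of this scheme (all constants are assumptions): noisy Poisson responses from Gaussian tuning curves on a ring are passed through a few iterations of linear pooling, a squaring nonlinearity, and divisive normalization, which relax the activity to a smooth bump while keeping it in coarse-code format; a population-vector readout is applied before and after.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

def bump(s, width=0.6, fmax=10.0):
    """Gaussian-bump tuning on the ring (parameters are assumptions)."""
    d = np.angle(np.exp(1j * (theta - s)))    # wrapped angular distance
    return fmax * np.exp(-d ** 2 / (2.0 * width ** 2))

# Translation-invariant lateral weights matching the tuning profile
W = np.array([bump(t, width=0.6, fmax=1.0) for t in theta])
W /= W.sum(axis=1, keepdims=True)

def popvec(r):
    """Population-vector readout of the encoded angle."""
    return np.angle(np.sum(r * np.exp(1j * theta)))

s_true = 0.3
r = rng.poisson(bump(s_true)).astype(float)   # noisy coarse code

# Recurrent clean-up: linear pooling, squaring, divisive normalization,
# iterated until the activity settles onto a smooth stable bump.
o = r.copy()
for _ in range(20):
    u = W @ o
    o = u ** 2 / (1.0 + 0.002 * np.sum(u ** 2))

print(f"true s = {s_true:.3f}")
print(f"popvec on raw activity     : {popvec(r):.3f}")
print(f"popvec after recurrent pass: {popvec(o):.3f}")
```

The intent, following the paper, is that the relaxed bump supports a near-optimal estimate while the estimate itself remains a population pattern rather than a scalar.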