Search results for Walter Senn (1-7 of 7)
Neural Computation (2010) 22 (7): 1698–1717.
Published: 01 July 2010
Abstract
We investigate a recently proposed model for decision learning in a population of spiking neurons where synaptic plasticity is modulated by a population signal in addition to reward feedback. For the basic model (binary population decision making based on spike/no-spike coding), we give a detailed computational analysis of how learning performance depends on population size and task complexity. Next, we extend the basic model to N-ary decision making and show that it can also be used in conjunction with other population codes such as rate or even latency coding.
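A minimal Python sketch of a learning step of this general kind, in which reward feedback and a population signal jointly modulate the weight update. The logistic spike probability, the use of the mean population activity as the population signal, all parameter values, and the toy task are illustrative assumptions, not the paper's model:

import numpy as np

rng = np.random.default_rng(0)
N_IN, N_POP, ETA = 50, 100, 0.05          # input size, population size, learning rate (assumed)
W = rng.normal(0.0, 0.1, (N_POP, N_IN))   # plastic synapses onto the decision population

def trial(x, target, W):
    """One trial: the population votes spike/no-spike, receives binary reward,
    and each synapse is updated by reward times the deviation of its neuron's
    output from the population signal (here, the mean population activity)."""
    p = 1.0 / (1.0 + np.exp(-W @ x))               # per-neuron spike probability
    s = (rng.random(N_POP) < p).astype(float)      # spike / no-spike code
    decision = int(s.mean() > 0.5)                 # binary population decision
    reward = 1.0 if decision == target else -1.0   # reward feedback
    pop_signal = s.mean()                          # population signal
    W += ETA * reward * np.outer(s - pop_signal, x)
    return decision

# toy usage: learn to report the sign of a fixed random projection of the input
w_true = rng.normal(size=N_IN)
for _ in range(2000):
    x = rng.normal(size=N_IN)
    trial(x, int(w_true @ x > 0), W)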
Neural Computation (2009) 21 (2): 340–352.
Published: 01 February 2009
Abstract
We introduce a new supervised learning rule for the tempotron task: the binary classification of input spike trains by an integrate-and-fire neuron that encodes its decision by firing or not firing. The rule is based on the gradient of a cost function, shows improved performance, and does not rely on a specific reset mechanism in the integrate-and-fire neuron.
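A minimal Python sketch of a tempotron-style voltage-gradient update for this task. It uses the classic update at the time of maximal voltage rather than the specific cost-function gradient derived in the paper, and the exponential PSP kernel and all parameter values are assumptions:

import numpy as np

rng = np.random.default_rng(1)
T, DT, TAU = 0.5, 0.001, 0.02        # trial length (s), time step, PSP decay (assumed)
N_SYN, THETA, ETA = 100, 1.0, 0.01   # synapses, firing threshold, learning rate (assumed)
t = np.arange(0.0, T, DT)

def psp_traces(spike_times):
    """Per-synapse exponential PSP traces (the kernel choice is an assumption)."""
    K = np.zeros((N_SYN, t.size))
    for i, times in enumerate(spike_times):
        for ts in times:
            K[i] += (t >= ts) * np.exp(-(t - ts) / TAU)
    return K

def train_step(w, spike_times, label):
    """Update the weights along the gradient of the voltage at its maximum,
    only when the fire/no-fire decision disagrees with the binary label."""
    K = psp_traces(spike_times)
    v = w @ K                          # membrane potential (no reset needed)
    t_max = int(np.argmax(v))
    fired = v[t_max] > THETA
    if bool(fired) != bool(label):
        sign = 1.0 if label else -1.0
        w += ETA * sign * K[:, t_max]  # dV(t_max)/dw_i = K_i(t_max)
    return w

# toy usage: one update on random input spike trains with a random target label
spikes = [list(rng.uniform(0, T, rng.poisson(3))) for _ in range(N_SYN)]
w = rng.normal(0.0, 0.05, N_SYN)
w = train_step(w, spikes, label=1)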
Neural Computation (2007) 19 (11): 2881–2912.
Published: 01 November 2007
Abstract
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with a constant leak and a floor (lower bound) on the membrane potential. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to correctly classify a large number (thousands) of highly overlapping patterns (300 classes of preprocessed LaTeX characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performance better than or comparable to that of artificial neural networks. Finally, we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
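A minimal Python sketch of a spike-driven update for a bistable synapse of the kind described: on each presynaptic spike the internal synaptic variable jumps up or down depending on the postsynaptic depolarization and on a variable integrating postsynaptic spikes, and between spikes it drifts toward one of two stable states. The thresholds, windows, and jump sizes below are illustrative assumptions, not the paper's parameters:

import numpy as np

A_UP, B_DOWN = 0.1, 0.1             # jump sizes on a presynaptic spike (assumed)
X_MIN, X_MAX, X_TH = 0.0, 1.0, 0.5  # bounds and bistability threshold of the internal variable
ALPHA, BETA = 0.01, 0.01            # drift rates toward the two stable states (assumed)
V_TH = 0.8                          # depolarization threshold (assumed)
C_UP = (0.2, 1.0)                   # window of the postsynaptic-spike variable allowing potentiation
C_DOWN = (0.1, 0.6)                 # window allowing depression

def on_presynaptic_spike(x, v_post, c_post):
    """Spike-driven update of the internal synaptic variable x: the sign of the
    change depends on the depolarization v_post and on c_post, a variable that
    integrates the postsynaptic action potentials."""
    if v_post > V_TH and C_UP[0] < c_post < C_UP[1]:
        x += A_UP
    elif v_post <= V_TH and C_DOWN[0] < c_post < C_DOWN[1]:
        x -= B_DOWN
    return float(np.clip(x, X_MIN, X_MAX))

def drift(x, dt):
    """Between presynaptic spikes, x drifts toward the nearer stable state, so
    the synapse preserves its internal state indefinitely without stimulation."""
    x += (ALPHA if x > X_TH else -BETA) * dt
    return float(np.clip(x, X_MIN, X_MAX))

def efficacy(x, w_low=0.0, w_high=1.0):
    """The transmitted weight takes one of two values (bistable synapse)."""
    return w_high if x > X_TH else w_low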
Neural Computation (2005) 17 (10): 2106–2138.
Published: 01 October 2005
Abstract
Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses. A synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the original memory performance obtained with unbounded weights is regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove, in the form of a generalized perceptron convergence theorem, that under these constraints a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response and therefore silence neurons that cannot provide any information.
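A minimal Python sketch of learning with bounded excitatory synapses under the stop-learning condition described above: weights are clipped to [0, w_max], a synapse changes only when the response does not match the desired output, and the response is read out as a small difference between excitation and unspecific global inhibition. Parameter values and the toy task are assumptions:

import numpy as np

rng = np.random.default_rng(2)
N, W_MAX, ETA = 200, 1.0, 0.01   # synapses, weight bound, small learning rate (assumed)
G_INH, THETA = 0.5, 0.0          # global inhibition strength, small threshold (assumed)

def response(w, x):
    """Excitation minus unspecific global inhibition; classification relies on
    small differences around the excitation/inhibition balance."""
    return w @ x - G_INH * x.sum() - THETA

def learn_step(w, x, target):
    """Hebbian update of bounded excitatory synapses, applied only when the
    postsynaptic response does not match the desired output."""
    y = response(w, x) > 0
    if bool(y) != bool(target):
        w = np.clip(w + ETA * (1.0 if target else -1.0) * x, 0.0, W_MAX)
    return w

# toy usage: learn a random classification of sparse binary patterns
patterns = (rng.random((50, N)) < 0.2).astype(float)
labels = rng.integers(0, 2, 50)
w = rng.uniform(0.0, 0.1, N)
for _ in range(500):
    for x, lab in zip(patterns, labels):
        w = learn_step(w, x, lab)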
Neural Computation (2004) 16 (10): 2101–2124.
Published: 01 October 2004
Abstract
Rate models are often used to study the behavior of large networks of spiking neurons. Here we propose a procedure to derive rate models that take into account the fluctuations of the input current and firing-rate adaptation, two ubiquitous features in the central nervous system that have previously been overlooked in constructing rate models. The procedure is general and applies to any model of a firing unit. As examples, we apply it to the leaky integrate-and-fire (IF) neuron, the leaky IF neuron with reversal potentials, and the quadratic IF neuron. Two mechanisms of adaptation are considered, one due to an afterhyperpolarization current and the other to an adapting threshold for spike emission. The parameters of these simple models can be tuned to match experimental data obtained from neocortical pyramidal neurons. Finally, we show how the stationary model can be used to predict the time-varying activity of a large population of adapting neurons.
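A minimal Python sketch of an adapting rate model of this form: the instantaneous rate is given by a stationary transfer function of the mean input and its fluctuations, with an adaptation variable (driven by the rate itself) subtracted from the input. The sigmoidal transfer function below is only a placeholder for the one the procedure derives from the underlying spiking neuron (e.g., the leaky IF mean first-passage-time formula), and all parameter values are assumptions:

import numpy as np

TAU_A, G_A, DT = 0.1, 0.5, 0.001   # adaptation time constant, strength, time step (assumed)

def phi(mu, sigma):
    """Placeholder stationary transfer function of the mean input mu and the
    fluctuation amplitude sigma (stands in for the derived spiking-neuron one)."""
    return 50.0 / (1.0 + np.exp(-(mu - 1.0) / max(sigma, 1e-6)))

def simulate(mu_t, sigma_t):
    """Predict the time-varying activity of an adapting population: an
    AHP-like adaptation current g_a * a is subtracted from the mean input,
    and a relaxes toward the current rate with time constant tau_a."""
    a, rates = 0.0, []
    for mu, sigma in zip(mu_t, sigma_t):
        r = phi(mu - G_A * a, sigma)
        a += DT * (r - a) / TAU_A
        rates.append(r)
    return np.array(rates)

# toy usage: a step increase in the mean input with constant fluctuations
STEPS = 2000
mu_t = np.concatenate([np.full(STEPS // 2, 1.0), np.full(STEPS // 2, 2.0)])
rates = simulate(mu_t, np.full(STEPS, 0.5))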
Neural Computation (2002) 14 (3): 583–619.
Published: 01 March 2002
Abstract
Systematic temporal relations between single neuronal activities or population activities are ubiquitous in the brain. No experimental evidence, however, exists for a direct modification of neuronal delays during Hebbian-type stimulation protocols. We show that in fact an explicit delay adaptation is not needed if one assumes that the synaptic strengths are modified according to the recently observed temporally asymmetric learning rule with the downregulating branch dominating the upregulating branch. During development, slow, unbiased fluctuations in the transmission time, together with temporally correlated network activity, may control neural growth and implicitly induce drifts in the axonal delays and dendritic latencies. These delays and latencies become optimally tuned in the sense that the synaptic response tends to peak in the soma of the postsynaptic cell if this is most likely to fire. The nature of the selection process requires unreliable synapses in order to give successful synapses an evolutionary advantage over the others. The width of the learning function also determines the preferred dendritic delay and the preferred width of the postsynaptic response. Hence, it may implicitly determine whether a synaptic connection provides a precisely timed or a broadly tuned “contextual” signal.
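A minimal Python sketch of the kind of selection mechanism described: a temporally asymmetric STDP window whose depressing branch dominates is applied to synapses whose presynaptic spikes arrive with different axonal delays, so that only synapses whose delayed input arrives shortly before the postsynaptic spike are strengthened. Window shapes, amplitudes, and the toy usage are assumptions:

import numpy as np

A_PLUS, A_MINUS = 0.005, 0.007       # depressing branch dominates (A_MINUS > A_PLUS)
TAU_PLUS, TAU_MINUS = 0.015, 0.030   # time constants of the two branches (s, assumed)

def stdp(dt):
    """Weight change for dt = t_post - t_arrival, where t_arrival is the
    presynaptic spike time plus the axonal delay and dendritic latency."""
    if dt >= 0.0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)    # pre-before-post: potentiation
    return -A_MINUS * np.exp(dt / TAU_MINUS)      # post-before-pre: dominant depression

def update_delayed_synapses(delays, weights, t_pre, t_post):
    """Synapses whose delayed arrival falls just before the postsynaptic spike
    are strengthened; the others are weakened, implicitly selecting delays."""
    for i, d in enumerate(delays):
        weights[i] = float(np.clip(weights[i] + stdp(t_post - (t_pre + d)), 0.0, 1.0))
    return weights

# toy usage: a postsynaptic spike at 25 ms favors delays just below 25 ms
delays = np.linspace(0.0, 0.05, 11)
weights = np.full(11, 0.5)
weights = update_delayed_synapses(delays, weights, t_pre=0.0, t_post=0.025)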
Neural Computation (2001) 13 (1): 35–67.
Published: 01 January 2001
Abstract
The precise times of occurrence of individual pre- and postsynaptic action potentials are known to play a key role in the modification of synaptic efficacy. Based on stimulation protocols of two synaptically connected neurons, we infer an algorithm that reproduces the experimental data by modifying the probability of vesicle discharge as a function of the relative timing of spikes in the pre- and postsynaptic neurons. The primary feature of this algorithm is an asymmetry with respect to the direction of synaptic modification depending on whether the presynaptic spikes precede or follow the postsynaptic spike. Specifically, if the presynaptic spike occurs up to 50 ms before the postsynaptic spike, the probability of vesicle discharge is upregulated, while the probability of vesicle discharge is downregulated if the presynaptic spike occurs up to 50 ms after the postsynaptic spike. When neurons fire irregularly with Poisson spike trains at constant mean firing rates, the probability of vesicle discharge converges toward a characteristic value determined by the pre- and postsynaptic firing rates. On the other hand, if the mean rates of the Poisson spike trains slowly change with time, our algorithm predicts modifications in the probability of release that generalize Hebbian and Bienenstock-Cooper-Munro rules. We conclude that the proposed spike-based synaptic learning algorithm provides a general framework for regulating neurotransmitter release probability.
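A minimal Python sketch of the spike-pair part of such a rule: the vesicle discharge probability is increased when the presynaptic spike precedes the postsynaptic spike by up to 50 ms and decreased when it follows within 50 ms. The step sizes and the soft bounds keeping the probability in [0, 1] are illustrative assumptions:

import numpy as np

DP_UP, DP_DOWN, WINDOW = 0.05, 0.05, 0.050   # step sizes (assumed) and 50 ms timing window

def update_release_prob(p, t_pre, t_post):
    """Spike-timing-dependent change of the vesicle discharge probability p."""
    dt = t_post - t_pre
    if 0.0 < dt <= WINDOW:
        p += DP_UP * (1.0 - p)    # pre up to 50 ms before post: upregulation
    elif -WINDOW <= dt < 0.0:
        p -= DP_DOWN * p          # pre up to 50 ms after post: downregulation
    return float(np.clip(p, 0.0, 1.0))

# toy usage: a causal pre -> post pairing at 10 ms increases the release probability
p = update_release_prob(0.3, t_pre=0.000, t_post=0.010)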