Search results: Wulfram Gerstner (1-16 of 16)
Journal Articles
Neural Computation (2021) 33 (2): 269–340.
Published: 01 February 2021
Surprise-based learning allows agents to rapidly adapt to nonstationary stochastic environments characterized by sudden changes. We show that exact Bayesian inference in a hierarchical model gives rise to a surprise-modulated trade-off between forgetting old observations and integrating them with the new ones. The modulation depends on a probability ratio, which we call the Bayes Factor Surprise, that tests the prior belief against the current belief. We demonstrate that in several existing approximate algorithms, the Bayes Factor Surprise modulates the rate of adaptation to new observations. We derive three novel surprise-based algorithms, one in the family of particle filters, one in the family of variational learning, and one in the family of message passing, that have constant scaling in observation sequence length and particularly simple update dynamics for any distribution in the exponential family. Empirical results show that these surprise-based algorithms estimate parameters better than alternative approximate approaches and reach levels of performance comparable to computationally more expensive algorithms. The Bayes Factor Surprise is related to but different from the Shannon Surprise. In two hypothetical experiments, we make testable predictions for physiological indicators that dissociate the Bayes Factor Surprise from the Shannon Surprise. The theoretical insight of casting various approaches as surprise-based learning, as well as the proposed online algorithms, may be applied to the analysis of animal and human behavior and to reinforcement learning in nonstationary environments.
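The sketch below is a minimal illustration of the surprise-modulated trade-off described in this abstract, written for the simplest case of estimating the mean of a Gaussian with known variance; it is not the authors' particle-filter, variational, or message-passing algorithm. The modulation function gamma = m*S/(1 + m*S), the prior, and all numerical values are assumptions made for illustration.

    import numpy as np

    # Minimal sketch: online estimation of the mean of a Gaussian with known variance in
    # an environment whose mean may jump abruptly. The Bayes Factor Surprise is the ratio
    # of the probability of the new observation under the prior belief to its probability
    # under the current belief; a surprise-modulated weight gamma interpolates between
    # integrating the observation into the current belief and restarting from the prior.
    # The form gamma = m*S/(1 + m*S) and all numerical values are illustrative assumptions.

    def gauss_pdf(y, mu, var):
        return np.exp(-(y - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    def surprise_modulated_estimate(observations, sigma2=1.0, mu0=0.0, var0=4.0, m=0.5):
        mu, var = mu0, var0                       # current belief about the hidden mean
        estimates = []
        for y in observations:
            # predictive probability of y under the current belief and under the prior
            p_current = gauss_pdf(y, mu, var + sigma2)
            p_prior = gauss_pdf(y, mu0, var0 + sigma2)
            s_bf = p_prior / p_current            # Bayes Factor Surprise
            gamma = m * s_bf / (1.0 + m * s_bf)   # surprise-modulated adaptation rate

            # Bayesian update of the current belief by the new observation ...
            k = var / (var + sigma2)
            mu_int, var_int = mu + k * (y - mu), (1 - k) * var
            # ... and of the prior belief by the same observation (a "restart")
            k0 = var0 / (var0 + sigma2)
            mu_new, var_new = mu0 + k0 * (y - mu0), (1 - k0) * var0

            # weighted combination: the more surprising y is, the more the past is forgotten
            mu = (1 - gamma) * mu_int + gamma * mu_new
            var = (1 - gamma) * var_int + gamma * var_new
            estimates.append(mu)
        return np.array(estimates)

    rng = np.random.default_rng(0)
    true_mean = np.concatenate([np.full(200, 0.0), np.full(200, 3.0)])   # abrupt change
    obs = true_mean + rng.standard_normal(true_mean.size)
    est = surprise_modulated_estimate(obs)
    print("estimate shortly before / after the change point:", est[190], est[230])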
Journal Articles
Neural Computation (2018) 30 (1): 34–83.
Published: 01 January 2018
Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without requiring knowledge of the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. Our surprise-minimizing framework is suitable for learning in complex environments, even if the environment undergoes gradual or sudden changes, and it could eventually provide a framework to study the behavior of humans and animals as they encounter surprising events.
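As an illustration only, the sketch below combines the two ingredients named in this abstract, the likelihood of the data under the current belief and the commitment to that belief measured by its entropy, into a scalar surprise that scales how strongly the learner falls back on its prior. The combination rule, the learning-rate mapping, and all constants are assumptions and do not reproduce the paper's definition.

    import numpy as np

    # Illustrative sketch: a belief over a Bernoulli rate on a grid; for each observation,
    # compute its marginal likelihood and the commitment (normalized entropy deficit) of the
    # current belief, combine them into a surprise value, and use that value to set how much
    # of the posterior is replaced by the prior. All constants are assumptions.

    theta_grid = np.linspace(0.01, 0.99, 99)                  # candidate Bernoulli rates
    prior = np.full(theta_grid.size, 1.0 / theta_grid.size)   # flat prior belief
    belief = prior.copy()

    def entropy(p):
        return -np.sum(p * np.log(p + 1e-12))

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.binomial(1, 0.9, 300), rng.binomial(1, 0.1, 300)])

    h_max = entropy(prior)
    for y in data:
        lik = theta_grid if y == 1 else 1.0 - theta_grid
        p_data = np.sum(belief * lik)                    # marginal likelihood of y
        commitment = (h_max - entropy(belief)) / h_max   # 0 = uncommitted, 1 = certain
        surprise = -np.log(p_data) * (1.0 + commitment)  # illustrative combination
        alpha = surprise / (1.0 + surprise)              # surprise-scaled forgetting

        posterior = belief * lik
        posterior /= posterior.sum()
        belief = (1 - alpha) * posterior + alpha * prior  # balance new against old information

    print("posterior mean after the switch to rate 0.1:", np.sum(belief * theta_grid))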
Journal Articles
Neural Computation (2017) 29 (2): 458–484.
Published: 01 February 2017
We show that Hopfield neural networks with synchronous dynamics and asymmetric weights admit stable orbits that form sequences of maximal length. For N units, these sequences have length 2^N; that is, they cover the full state space. We present a mathematical proof that maximal-length orbits exist for all N, and we provide a method to construct both the sequence and the weight matrix that allow its production. The orbit is relatively robust to dynamical noise, and perturbations of the optimal weights reveal other periodic orbits that are not maximal but typically still very long. We discuss how the resulting dynamics on slow time-scales can be used to generate desired output sequences.
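A small sketch of the setting, assuming +/-1 units with synchronous updates: it runs the network from a random state and measures the period of the orbit it falls into. The weight construction that provably yields maximal-length (2^N) orbits is the subject of the paper and is not reproduced here; a random asymmetric matrix is used instead, so the measured periods are typically far below 2^N.

    import numpy as np

    # Sketch: synchronous dynamics of a binary Hopfield network with an asymmetric weight
    # matrix, and a routine that measures the period of the orbit the state falls into.

    def synchronous_step(state, w):
        # units take value +1 or -1; a unit with zero drive is set to +1 by convention
        return np.where(w @ state >= 0, 1, -1)

    def orbit_length(w, state, max_steps=100000):
        seen = {}
        for step in range(max_steps):
            key = tuple(state)
            if key in seen:
                return step - seen[key]        # period of the cycle that was entered
            seen[key] = step
            state = synchronous_step(state, w)
        return None                            # no cycle found within max_steps

    rng = np.random.default_rng(2)
    n = 8
    w = rng.standard_normal((n, n))            # asymmetric: w[i, j] != w[j, i] in general
    state = rng.choice([-1, 1], size=n)
    print("orbit length:", orbit_length(w, state), "out of", 2 ** n, "possible states")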
Journal Articles
Neural Computation (2011) 23 (12): 3016–3069.
Published: 01 December 2011
Multiple measures have been developed to quantify the similarity between two spike trains. These measures have been used for the quantification of the mismatch between neuron models and experiments as well as for the classification of neuronal responses in neuroprosthetic devices and electrophysiological experiments. Frequently only a few spike trains are available in each class. We derive analytical expressions for the small-sample bias present when comparing estimators of the time-dependent firing intensity. We then exploit analogies between the comparison of firing intensities and previously used spike train metrics and show that improved spike train measures can be successfully used for fitting neuron models to experimental data, for comparisons of spike trains, and for the classification of spike train data. In classification tasks, the improved similarity measures can increase the recovered information. We demonstrate that when similarity measures are used for fitting mathematical models, all previous methods systematically underestimate the noise. Finally, we show a striking implication of this deterministic bias by reevaluating the results of the single-neuron prediction challenge.
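A minimal sketch of the small-sample effect discussed in this abstract: two groups of trials are drawn from the same time-dependent Poisson intensity, and a naive normalized inner product between the two kernel-smoothed intensity estimates is computed. With few trials the similarity is biased below 1 even though the underlying intensities are identical. The kernel, the similarity measure, and the rates are illustrative choices, not the corrected estimators derived in the paper.

    import numpy as np

    # Sketch of the small-sample bias of a naive similarity between firing-intensity
    # estimates. Both groups of trials come from the SAME inhomogeneous Poisson intensity,
    # so the similarity should ideally be 1; with few trials it falls short.

    dt = 1e-3
    t = np.arange(0.0, 2.0, dt)
    rate = 20.0 + 15.0 * np.sin(2 * np.pi * 2.0 * t)         # Hz, time-dependent intensity

    def simulate_trials(n_trials, rng):
        return rng.random((n_trials, t.size)) < rate * dt    # Poisson spikes per time bin

    def smoothed_intensity(spikes, width=0.05):
        kernel_t = np.arange(-3 * width, 3 * width, dt)
        kernel = np.exp(-kernel_t ** 2 / (2 * width ** 2))
        kernel /= kernel.sum() * dt                          # kernel integrates to one
        mean_rate = spikes.mean(axis=0) / dt                 # trial-averaged rate (spikes/s)
        return np.convolve(mean_rate, kernel * dt, mode="same")

    def similarity(a, b):
        return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

    rng = np.random.default_rng(3)
    for n_trials in (2, 5, 20, 200):
        sims = [similarity(smoothed_intensity(simulate_trials(n_trials, rng)),
                           smoothed_intensity(simulate_trials(n_trials, rng)))
                for _ in range(20)]
        print(f"{n_trials:4d} trials per group: mean similarity {np.mean(sims):.3f}")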
Journal Articles
Neural Computation (2008) 20 (12): 2973–3002.
Published: 01 December 2008
Fast oscillations, and in particular gamma-band oscillations (20–80 Hz), are commonly observed during brain function and are at the center of several neural processing theories. In many cases, mathematical analysis of fast oscillations in neural networks has been focused on the transition between irregular and oscillatory firing viewed as an instability of the asynchronous activity. But in fact, brain slice experiments as well as detailed simulations of biological neural networks have produced a large corpus of results concerning the properties of fully developed oscillations that are far from this transition point. We propose here a mathematical approach to deal with nonlinear oscillations in a network of heterogeneous or noisy integrate-and-fire neurons connected by strong inhibition. This approach involves limited mathematical complexity and gives a good sense of the oscillation mechanism, making it an interesting tool to understand fast rhythmic activity in simulated or biological neural networks. A surprising result of our approach is that under some conditions, a change of the strength of inhibition only weakly influences the period of the oscillation. This is in contrast to standard theoretical and experimental models of interneuron network gamma oscillations (ING), where frequency tightly depends on inhibition strength, but it is similar to observations made in some in vitro preparations in the hippocampus and the olfactory bulb and in some detailed network models. This result is explained by the phenomenon of suppression that is known to occur in strongly coupled oscillating inhibitory networks but had not yet been related to the behavior of oscillation frequency.
Journal Articles
Neural Computation (2007) 19 (3): 639–671.
Published: 01 March 2007
We studied the hypothesis that synaptic dynamics is controlled by three basic principles: (1) synapses adapt their weights so that neurons can effectively transmit information, (2) homeostatic processes stabilize the mean firing rate of the postsynaptic neuron, and (3) weak synapses adapt more slowly than strong ones, while maintenance of strong synapses is costly. Our results show that a synaptic update rule derived from these principles shares features with spike-timing-dependent plasticity, is sensitive to correlations in the input, and is useful for synaptic memory. Moreover, input selectivity (sharply tuned receptive fields) of postsynaptic neurons develops only if stimuli with strong features are presented. Sharply tuned neurons can coexist with unselective ones, and the distribution of synaptic weights can be unimodal or bimodal. The formulation of synaptic dynamics through an optimality criterion provides a simple graphical argument for the stability of synapses, necessary for synaptic memory.
Journal Articles
Optimal Spike-Timing-Dependent Plasticity for Precise Action Potential Firing in Supervised Learning
Neural Computation (2006) 18 (6): 1318–1348.
Published: 01 June 2006
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed.
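A minimal sketch of the supervised setting, assuming a spike-response neuron with an exponential escape rate rho(u) = rho0*exp((u - theta)/du) and one presynaptic spike per synapse: gradient ascent on the log-likelihood of firing at the desired times then has a positive term given by the EPSP evaluated at the desired firing times and a negative, rate-weighted term, so potentiation follows the EPSP time course as stated in the abstract. Kernel shapes and all parameter values are assumptions.

    import numpy as np

    # Sketch: gradient ascent on the likelihood of firing at desired times, for a
    # spike-response neuron with exponential escape rate (no reset, for simplicity).

    dt = 1e-3
    t = np.arange(0.0, 0.5, dt)
    tau_m, tau_s = 10e-3, 2e-3

    def epsp(t_rel):
        t_rel = np.maximum(t_rel, 0.0)          # kernel is zero before the presynaptic spike
        return (np.exp(-t_rel / tau_m) - np.exp(-t_rel / tau_s)) / 0.535   # peak ~ 1

    rng = np.random.default_rng(4)
    n_syn = 50
    pre_times = rng.uniform(0.0, 0.5, n_syn)                 # one spike per synapse
    psp = np.stack([epsp(t - tp) for tp in pre_times])       # shape (n_syn, time)
    w = np.full(n_syn, 0.1)
    desired = np.array([0.2, 0.35])                          # desired postsynaptic spike times
    desired_idx = (desired / dt).astype(int)

    rho0, theta, du, eta = 5.0, 1.0, 0.2, 0.002
    for epoch in range(1000):
        u = w @ psp                                          # membrane potential
        rho = rho0 * np.exp(np.minimum((u - theta) / du, 20.0))   # escape rate (capped)
        # d log L / dw_i = sum_f eps_i(t_f)/du  -  (1/du) * integral rho(t) eps_i(t) dt
        grad = psp[:, desired_idx].sum(axis=1) / du - (psp * rho).sum(axis=1) * dt / du
        w += eta * grad

    top = np.argsort(w)[-5:]
    print("presynaptic spike times of the five strongest synapses (s):",
          np.round(np.sort(pre_times[top]), 3))

After training, the strongest weights belong to synapses whose presynaptic spikes arrive shortly before one of the desired firing times, with a timing dependence set by the EPSP shape.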
Journal Articles
Synaptic Shot Noise and Conductance Fluctuations Affect the Membrane Voltage with Equal Significance
Neural Computation (2005) 17 (4): 923–947.
Published: 01 April 2005
The subthreshold membrane voltage of a neuron in active cortical tissue is a fluctuating quantity with a distribution that reflects the firing statistics of the presynaptic population. It was recently found that conductance-based synaptic drive can lead to distributions with a significant skew. Here it is demonstrated that the underlying shot noise caused by Poissonian spike arrival also skews the membrane distribution, but in the opposite sense. Using a perturbative method, we analyze the effects of shot noise on the distribution of synaptic conductances and calculate the consequent voltage distribution. To first order in the perturbation theory, the voltage distribution is a gaussian modulated by a prefactor that captures the skew. The gaussian component is identical to distributions derived using current-based models with an effective membrane time constant. The well-known effective-time-constant approximation can therefore be identified as the leading-order solution to the full conductance-based model. The higher-order modulatory prefactor containing the skew comprises terms due to both shot noise and conductance fluctuations. The diffusion approximation misses these shot-noise effects, implying that analytical approaches such as the Fokker-Planck equation or simulation with filtered white noise cannot be used to improve on the gaussian approximation. It is further demonstrated that quantities used for fitting theory to experiment, such as the voltage mean and variance, are robust against these non-gaussian effects. The effective-time-constant approximation is therefore relevant to experiment and provides a simple analytic base on which other pertinent biological details may be added.
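A simulation sketch of the setting, assuming exponentially decaying conductances that jump at Poissonian spike arrivals: the subthreshold voltage is integrated without a spiking threshold, its mean is compared with the effective-time-constant (gaussian) description built from the mean conductances, and the empirical skewness indicates the non-gaussian shot-noise contribution. All parameter values are illustrative assumptions.

    import numpy as np

    # Sketch: conductance-based subthreshold membrane driven by Poissonian excitatory and
    # inhibitory input (synaptic shot noise), compared with the effective-time-constant
    # description. Units: time in ms, conductance in microsiemens, capacitance in nF, mV.

    dt, t_max = 0.05, 5000.0                  # ms
    c_m, g_l, e_l = 1.0, 0.05, -70.0          # nF, uS, mV (leak time constant 20 ms)
    e_e, e_i = 0.0, -80.0                     # synaptic reversal potentials (mV)
    tau_e, tau_i = 3.0, 10.0                  # synaptic time constants (ms)
    rate_e, rate_i = 2.0, 1.0                 # total presynaptic rates (spikes/ms)
    a_e, a_i = 0.005, 0.01                    # conductance jump per presynaptic spike (uS)

    rng = np.random.default_rng(5)
    steps = int(t_max / dt)
    v, g_e, g_i = e_l, 0.0, 0.0
    voltage = np.empty(steps)
    for k in range(steps):
        g_e += -dt * g_e / tau_e + a_e * rng.poisson(rate_e * dt)
        g_i += -dt * g_i / tau_i + a_i * rng.poisson(rate_i * dt)
        v += dt / c_m * (g_l * (e_l - v) + g_e * (e_e - v) + g_i * (e_i - v))
        voltage[k] = v
    voltage = voltage[int(200.0 / dt):]       # discard the initial transient

    mean_g_e, mean_g_i = a_e * rate_e * tau_e, a_i * rate_i * tau_i
    g_tot = g_l + mean_g_e + mean_g_i
    tau_eff = c_m / g_tot                     # effective membrane time constant (ms)
    v_eff = (g_l * e_l + mean_g_e * e_e + mean_g_i * e_i) / g_tot
    skew = np.mean((voltage - voltage.mean()) ** 3) / voltage.std() ** 3
    print(f"simulated mean {voltage.mean():.2f} mV, effective-time-constant mean {v_eff:.2f} mV")
    print(f"effective time constant {tau_eff:.2f} ms, voltage skewness {skew:.3f}")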
Journal Articles
Neural Computation (2002) 14 (5): 987–997.
Published: 01 May 2002
We investigate the propagation of pulses of spike activity in a neuronal network with feedforward couplings. The neurons are of the spike-response type with a firing probability that depends linearly on the membrane potential. After firing, neurons enter a phase of refractoriness. Spike packets are described in terms of the moments of the firing-time distribution so as to allow for an analytical treatment of the evolution of the spike packet as it propagates from one layer to the next. Analytical results and simulations show that depending on the synaptic coupling strength, a stable propagation of the packet with constant waveform is possible. Crucial for this observation is neither the existence of a firing threshold nor a sigmoidal gain function—both are absent in our model—but the refractory behavior of the neurons.
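A simulation sketch of packet propagation, with simplifications: each neuron fires at most once, and its firing probability per time bin is proportional to the rectified membrane potential, a stand-in for the linear firing probability of the spike-response model used in the paper. Kernel shape, coupling strength, and layer size are assumptions; depending on the coupling, the packet either stabilizes or dies out.

    import numpy as np

    # Sketch: a packet of spikes propagating through feedforward layers of simple
    # probabilistic neurons; the spike count and firing-time jitter are tracked per layer.

    dt = 0.1e-3
    t = np.arange(0.0, 0.1, dt)
    tau_m, tau_s = 10e-3, 2e-3

    def epsp(t_rel):
        t_rel = np.maximum(t_rel, 0.0)           # zero before the presynaptic spike
        return (np.exp(-t_rel / tau_m) - np.exp(-t_rel / tau_s)) / 0.535   # peak ~ 1

    def propagate(first_layer_times, n_layers=8, n_per_layer=100, coupling=0.4, seed=6):
        rng = np.random.default_rng(seed)
        spike_times = list(first_layer_times)
        for layer in range(n_layers):
            # membrane potential in the next layer: average EPSP from this layer's spikes
            u = coupling * sum(epsp(t - ts) for ts in spike_times) / len(spike_times)
            p = np.clip(u * dt / 1e-3, 0.0, 1.0)  # firing probability per bin, linear in u
            new_times = []
            for _ in range(n_per_layer):
                fired = np.nonzero(rng.random(t.size) < p)[0]
                if fired.size:                    # keep only the first spike of each neuron
                    new_times.append(t[fired[0]])
            if not new_times:
                print(f"layer {layer + 1}: packet died out")
                break
            print(f"layer {layer + 1}: {len(new_times):3d} spikes, "
                  f"jitter {np.std(new_times) * 1e3:.2f} ms")
            spike_times = new_times

    rng = np.random.default_rng(7)
    propagate(0.005 + 0.002 * rng.standard_normal(100))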
Journal Articles
Neural Computation (2001) 13 (12): 2709–2741.
Published: 01 December 2001
We study analytically a model of long-term synaptic plasticity where synaptic changes are triggered by presynaptic spikes, postsynaptic spikes, and the time differences between presynaptic and postsynaptic spikes. The changes due to correlated input and output spikes are quantified by means of a learning window. We show that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron. Subtractive normalization of the synaptic weights (summed over all presynaptic inputs converging on a postsynaptic neuron) follows if, in addition, the mean input rates and the mean input correlations are identical at all synapses. If the integral over the learning window is positive, firing-rate stabilization requires a non-Hebbian component, whereas such a component is not needed if the integral of the learning window is negative. A negative integral corresponds to anti-Hebbian learning in a model with slowly varying firing rates. For spike-based learning, a strict distinction between Hebbian and anti-Hebbian rules is questionable since learning is driven by correlations on the timescale of the learning window. The correlations between presynaptic and postsynaptic firing are evaluated for a piecewise-linear Poisson model and for a noisy spiking neuron model with refractoriness. While a negative integral over the learning window leads to intrinsic rate stabilization, the positive part of the learning window picks up spatial and temporal correlations in the input.
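A mean-field cartoon of the rate-stabilization result, assuming a linear Poisson neuron and stationary, uncorrelated inputs: the expected weight drift then reduces to a presynaptic term, a postsynaptic term, and a term proportional to the integral of the learning window times the product of the input and output rates. With the illustrative coefficients below, a negative window integral stabilizes the output rate while a positive one drives the weights to their upper bound; none of the numbers are taken from the paper.

    import numpy as np

    # Cartoon of the expected weight drift for uncorrelated Poisson inputs and a linear
    # Poisson neuron. w_bar stands for the integral of the learning window; a_pre and
    # a_post are non-Hebbian terms triggered by single pre- or postsynaptic spikes.

    def simulate(w_bar, a_pre=1e-3, a_post=-1e-3, nu_in=10.0, gain=5.0,
                 n_syn=100, eta=2e-3, steps=8000):
        w = np.full(n_syn, 0.5)
        rates = []
        for _ in range(steps):
            nu_out = gain * np.mean(w) * nu_in        # linear Poisson output rate (Hz)
            drift = a_pre * nu_in + a_post * nu_out + w_bar * nu_in * nu_out
            w = np.clip(w + eta * drift, 0.0, 1.0)    # hard weight bounds
            rates.append(nu_out)
        return rates

    for w_bar in (-5e-4, +5e-4):                      # sign of the learning-window integral
        r = simulate(w_bar)
        print(f"window integral {w_bar:+.1e}: output rate {r[0]:.1f} Hz -> {r[-1]:.1f} Hz")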
Journal Articles
Neural Computation (2000) 12 (2): 367–384.
Published: 01 February 2000
We analyze the effect of noise in integrate-and-fire neurons driven by time-dependent input and compare the diffusion approximation for the membrane potential to escape noise. It is shown that for time-dependent subthreshold input, diffusive noise can be replaced by escape noise with a hazard function that has a gaussian dependence on the distance between the (noise-free) membrane voltage and threshold. The approximation is improved if we add to the hazard function a probability current proportional to the derivative of the voltage. Stochastic resonance in response to periodic input occurs in both noise models and exhibits similar characteristics.
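A small sketch of the escape-noise construction, assuming a leaky integrator driven by a sinusoidal current: the hazard is taken as a gaussian function of the distance between the noise-free voltage and threshold, with an additional term proportional to the rectified voltage derivative, and is converted into a firing density through the survivor function. Prefactors and the noise amplitude are illustrative assumptions.

    import numpy as np

    # Sketch: escape noise built from a noise-free membrane trajectory. The hazard depends
    # on the distance to threshold through a gaussian and on the rectified voltage derivative.

    dt = 0.1e-3
    t = np.arange(0.0, 0.2, dt)
    tau_m, theta, sigma = 10e-3, 1.0, 0.15

    i_ext = 0.8 + 0.15 * np.sin(2 * np.pi * 20.0 * t)      # time-dependent subthreshold input
    u0 = np.zeros_like(t)                                  # noise-free membrane trajectory
    for k in range(1, t.size):
        u0[k] = u0[k - 1] + dt / tau_m * (-u0[k - 1] + i_ext[k - 1])
    du0 = np.gradient(u0, dt)

    c1, c2 = 1.0 / tau_m, 10.0                             # illustrative prefactors
    gauss = np.exp(-(theta - u0) ** 2 / (2 * sigma ** 2))  # gaussian dependence on distance
    hazard = (c1 + c2 * np.maximum(du0, 0.0)) * gauss      # derivative term improves the fit
    survivor = np.exp(-np.cumsum(hazard) * dt)             # probability of not yet having fired
    spike_density = hazard * survivor
    print("most likely first-spike time: %.1f ms" % (t[np.argmax(spike_density)] * 1e3))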
Journal Articles
Neural Computation (2000) 12 (1): 43–89.
Published: 01 January 2000
An integral equation describing the time evolution of the population activity in a homogeneous pool of spiking neurons of the integrate-and-fire type is discussed. It is analytically shown that transients from a state of incoherent firing can be immediate. The stability of incoherent firing is analyzed in terms of the noise level and transmission delay, and a bifurcation diagram is derived. The response of a population of noisy integrate-and-fire neurons to an input current of small amplitude is calculated and characterized by a linear filter L. The stability of perfectly synchronized “locked” solutions is analyzed.
Journal Articles
Neural Computation (1998) 10 (8): 1987–2017.
Published: 15 November 1998
How does a neuron vary its mean output firing rate if the input changes from random to oscillatory coherent but noisy activity? What are the critical parameters of the neuronal dynamics and input statistics? To answer these questions, we investigate the coincidence-detection properties of an integrate-and-fire neuron. We derive an expression indicating how coincidence detection depends on neuronal parameters. Specifically, we show how coincidence detection depends on the shape of the postsynaptic response function, the number of synapses, and the input statistics, and we demonstrate that there is an optimal threshold. Our considerations can be used to predict from neuronal parameters whether and to what extent a neuron can act as a coincidence detector and thus can convert a temporal code into a rate code.
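A simulation sketch of the comparison posed in the abstract's opening question: the same total presynaptic rate is delivered either as independent Poisson spikes or with a shared sinusoidal modulation (coherent but noisy), and the output rate of a leaky integrate-and-fire neuron is measured for several thresholds; a low threshold responds to both regimes, a very high one to neither, and an intermediate threshold discriminates best. Synapse count, rates, weights, and time constants are assumptions.

    import numpy as np

    # Sketch: coincidence detection by a leaky integrate-and-fire neuron, comparing output
    # rates for random versus coherently modulated Poisson input at equal mean input rate.

    dt = 0.1e-3
    t = np.arange(0.0, 5.0, dt)
    tau_m, tau_s = 10e-3, 2e-3
    n_syn, mean_rate, w = 100, 20.0, 0.1

    def output_rate(rate_t, theta, seed):
        rng = np.random.default_rng(seed)
        spikes = rng.random((n_syn, t.size)) < rate_t * dt   # Poisson input spikes per bin
        drive = w * spikes.sum(axis=0)
        v, i_syn, count = 0.0, 0.0, 0
        for k in range(t.size):
            i_syn += dt * (-i_syn / tau_s) + drive[k]        # synaptic current
            v += dt / tau_m * (i_syn - v)                    # membrane potential
            if v >= theta:
                v, count = 0.0, count + 1                    # fire and reset
        return count / t[-1]

    coherent = mean_rate * (1.0 + 0.9 * np.sin(2 * np.pi * 10.0 * t))  # shared modulation
    random_in = np.full(t.size, mean_rate)
    for theta in (0.45, 0.55, 0.65, 0.75):
        r_coh = output_rate(coherent, theta, seed=8)
        r_rnd = output_rate(random_in, theta, seed=9)
        print(f"threshold {theta:.2f}: coherent {r_coh:5.1f} Hz, random {r_rnd:5.1f} Hz")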
Journal Articles
Neural Computation (1997) 9 (5): 1015–1045.
Published: 01 July 1997
It is generally believed that a neuron is a threshold element that fires when some variable u reaches a threshold. Here we pursue the question of whether this picture can be justified and study the four-dimensional neuron model of Hodgkin and Huxley as a concrete example. The model is approximated by a response kernel expansion in terms of a single variable, the membrane voltage. The first-order term is linear in the input and its kernel has the typical form of an elementary postsynaptic potential. Higher-order kernels take care of nonlinear interactions between input spikes. In contrast to the standard Volterra expansion, the kernels depend on the firing time of the most recent output spike. In particular, a zero-order kernel that describes the shape of the spike and the typical after-potential is included. Our model neuron fires if the membrane voltage, given by the truncated response kernel expansion, crosses a threshold. The threshold model is tested on a spike train generated by the Hodgkin-Huxley model with a stochastic input current. We find that the threshold model predicts 90 percent of the spikes correctly. Our results show that, to good approximation, the description of a neuron as a threshold element can indeed be justified.
Journal Articles
Neural Computation (1996) 8 (8): 1653–1676.
Published: 01 November 1996
Exploiting local stability, we show what neuronal characteristics are essential to ensure that coherent oscillations are asymptotically stable in a spatially homogeneous network of spiking neurons. Under standard conditions, a necessary and, in the limit of a large number of interacting neighbors, also sufficient condition is that the postsynaptic potential is increasing in time as the neurons fire. If the postsynaptic potential is decreasing, oscillations are bound to be unstable. This is a kind of locking theorem and boils down to a subtle interplay of axonal delays, postsynaptic potentials, and refractory behavior. The theorem also allows for mixtures of excitatory and inhibitory interactions. On the basis of the locking theorem, we present a simple geometric method to verify the existence and local stability of a coherent oscillation.
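A small numerical version of the geometric check, assuming an excitatory double-exponential postsynaptic potential and a fixed axonal delay: for a candidate period T, the potentials arriving from the previous volleys are summed at the firing time, and the locked state is classified as stable only if this summed potential is still rising there, as the locking argument requires. For positive coupling, the coupling strength does not change the sign of the slope and is therefore omitted; the verdicts depend entirely on the assumed kernel and delay.

    import numpy as np

    # Sketch: in a network firing in synchronized volleys with period T, the volley at
    # time T receives postsynaptic potentials from the volleys at 0, -T, -2T, ... (shifted
    # by the axonal delay). Stability check: is the summed potential rising at firing time?

    tau_m, tau_s, delay = 10e-3, 2e-3, 1e-3

    def psp(t_rel):
        t_rel = np.maximum(t_rel, 0.0)          # zero before the spike arrives
        return (np.exp(-t_rel / tau_m) - np.exp(-t_rel / tau_s)) / 0.535   # peak ~ 1

    def psp_slope(t_rel, h=1e-6):
        return (psp(t_rel + h) - psp(t_rel - h)) / (2 * h)

    print(" T (ms)   summed-PSP slope at firing   verdict")
    for T in np.arange(2e-3, 31e-3, 2e-3):
        lags = T - delay + T * np.arange(10)    # time since each of the last ten volleys
        slope = psp_slope(lags).sum()
        verdict = "stable (potential rising)" if slope > 0 else "unstable (potential falling)"
        print(f"{T * 1e3:6.1f}   {slope:12.1f}                 {verdict}")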
Journal Articles
Neural Computation (1995) 7 (5): 905–914.
Published: 01 September 1995
As a simple model of the cortical sheet, we study a locally connected net of spiking neurons. Refractoriness, noise, axonal delays, and the time course of excitatory and inhibitory postsynaptic potentials are taken into account explicitly. In addition to a low-activity state and depending on the synaptic efficacy, four different scenarios evolve spontaneously, viz., stripes, spirals, rings, and collective bursts. Our results can be related to experimental observations of drug-induced epilepsy and hallucinations.