Ernst Niebur
1-11 of 11
Journal Articles
Publisher: Journals Gateway
Neural Computation (2011) 23 (11): 2833–2867.
Published: 01 November 2011
When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.
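The maximum likelihood recipe in this abstract (pick the parameters most likely to have generated the observed train) can be illustrated with a deliberately simplified toy: fitting the rate of a homogeneous Poisson neuron by minimizing the negative log-likelihood over a grid. This is a hedged sketch, not the paper's integrate-and-fire estimator; the spike count, duration, and grid below are invented.

```python
import math

def neg_log_likelihood(rate, n_spikes, duration):
    # NLL of a homogeneous Poisson spike train as a function of the rate,
    # dropping additive terms that do not depend on the rate.
    return rate * duration - n_spikes * math.log(rate * duration)

def fit_rate(n_spikes, duration, grid):
    # Maximum likelihood = minimum of the negative log-likelihood.
    return min(grid, key=lambda r: neg_log_likelihood(r, n_spikes, duration))

grid = [0.5 * k for k in range(1, 41)]   # candidate rates, 0.5 to 20.0 Hz
best = fit_rate(50, 10.0, grid)          # 50 spikes observed over 10 s
```

For this convex one-parameter case the minimum sits at n/T = 5 Hz; the paper's point is precisely that such convexity is no longer guaranteed once richer dynamics are added, which is why a global method like the r-algorithm is brought in.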
Neural Computation (2011) 23 (2): 421–434.
Published: 01 February 2011
An accurate calculation of the first passage time probability density (FPTPD) is essential for computing the likelihood of solutions of the stochastic leaky integrate-and-fire model. The previously proposed numerical calculation of the FPTPD based on the integral equation method discretizes the probability current of the voltage crossing the threshold. While the method is accurate for high noise levels, we show that it results in large numerical errors for small noise. The problem is solved by analytically computing, in each time bin, the mean probability current. Efficiency is further improved by identifying and ignoring time bins with negligible mean probability current.
Neural Computation (2009) 21 (3): 704–718.
Published: 01 March 2009
For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing afterpotentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
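The structure described here (linear differential equations for potential, threshold, and firing-induced currents, plus update rules at threshold) can be sketched with a forward-Euler toy simulation. All parameter values below are invented for illustration, and the paper's exact event-driven solution between firings is replaced by naive time stepping.

```python
def simulate(I_ext, dt=1e-4, T=0.2):
    # Forward-Euler sketch of a Mihalas-Niebur-style neuron: linear
    # dynamics between spikes, discrete update rules at threshold.
    # All parameter values are illustrative, not taken from the paper.
    C, G, E_L = 1.0, 50.0, 0.0     # capacitance, leak, resting potential
    b, theta_inf = 10.0, 1.0       # threshold relaxes toward theta_inf
    k1, A1 = 200.0, -5.0           # one spike-induced adaptation current
    V_r, theta_r = 0.0, 1.0        # reset values
    V, theta, I1 = E_L, theta_inf, 0.0
    spikes, t = [], 0.0
    while t < T:
        V += dt * (I_ext + I1 - G * (V - E_L)) / C
        theta += dt * (-b * (theta - theta_inf))
        I1 += dt * (-k1 * I1)
        if V >= theta:             # threshold crossing: apply update rules
            spikes.append(t)
            V = V_r
            theta = max(theta_r, theta)
            I1 = I1 + A1           # adaptation current accumulates
        t += dt
    return spikes

sp = simulate(100.0)   # strong constant drive: repetitive, adapting firing
```

Because the between-spike dynamics are linear, the Euler loop above could be replaced by the closed-form solution the abstract mentions; the richness comes entirely from the update rules in the `if` branch.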
Neural Computation (2008) 20 (11): 2637–2661.
Published: 01 November 2008
We provide analytical solutions for mean firing rates and cross-correlations of coincidence detector neurons in recurrent networks with excitatory or inhibitory connectivity, with rate-modulated steady-state spiking inputs. We use discrete-time finite-state Markov chains to represent network state transition probabilities, which are subsequently used to derive exact analytical solutions for mean firing rates and cross-correlations. As illustrated in several examples, the method can be used for modeling cortical microcircuits and clarifying single-neuron and population coding mechanisms. We also demonstrate that increasing firing rates do not necessarily translate into increasing cross-correlations, though our results do support the contention that firing rates and cross-correlations are likely to be coupled. Our analytical solutions underscore the complexity of the relationship between firing rates and cross-correlations.
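A minimal sketch of the discrete-time Markov chain machinery mentioned here: power iteration on a row-stochastic transition matrix yields the stationary distribution, from which a mean firing rate can be read off. The two-state chain below is an invented toy, not one of the paper's network examples.

```python
def stationary(P, n_iter=1000):
    # Power iteration: repeatedly apply the row-stochastic transition
    # matrix P (list of rows) until the state distribution converges.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Invented two-state toy: state 1 means "the neuron fired in this bin".
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)
mean_rate = pi[1]   # long-run probability of the firing state per bin
```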
Neural Computation (2007) 19 (7): 1720–1738.
Published: 01 July 2007
Recent technological advances as well as progress in theoretical understanding of neural systems have created a need for synthetic spike trains with controlled mean rate and pairwise cross-correlation. This report introduces and analyzes a novel algorithm for the generation of discretized spike trains with arbitrary mean rates and controlled cross-correlation. Pairs of spike trains with any pairwise correlation can be generated, and higher-order correlations are compatible with common synaptic input. Relations between allowable mean rates and correlations within a population are discussed. The algorithm is highly efficient, its complexity increasing linearly with the number of spike trains generated and therefore inversely with the number of cross-correlated pairs.
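One simple construction in the spirit of this abstract (not necessarily the paper's algorithm) controls pairwise correlation through a shared "mother" process: in each bin a train copies the common spike with probability c or draws independently otherwise. Under this mixture the marginal rate is p and the pairwise correlation works out to c**2; all names and parameter values are assumptions for illustration.

```python
import random

def correlated_trains(n_trains, n_bins, p, c, seed=0):
    # Mixture construction: in each time bin, each train copies a shared
    # "mother" spike with probability c, else draws independently.
    # Marginal rate is p; pairwise correlation comes out as c**2.
    rng = random.Random(seed)
    trains = [[0] * n_bins for _ in range(n_trains)]
    for t in range(n_bins):
        mother = 1 if rng.random() < p else 0
        for i in range(n_trains):
            if rng.random() < c:
                trains[i][t] = mother
            else:
                trains[i][t] = 1 if rng.random() < p else 0
    return trains

trains = correlated_trains(n_trains=2, n_bins=100_000, p=0.05, c=0.8)
```

The cost is linear in the number of trains, and higher-order correlations arise automatically from the shared source, loosely mirroring the common-synaptic-input compatibility the abstract mentions.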
Neural Computation (2005) 17 (4): 881–902.
Published: 01 April 2005
We provide an analytical recurrent solution for the firing rates and cross-correlations of feedforward networks with arbitrary connectivity, excitatory or inhibitory, in response to steady-state spiking input to all neurons in the first network layer. Connections can go between any two layers as long as no loops are produced. Mean firing rates and pairwise cross-correlations of all input neurons can be chosen individually. We apply this method to study the propagation of rate and synchrony information through sample networks to address the current debate regarding the efficacy of rate codes versus temporal codes. Our results from applying the network solution to several examples support the following conclusions: (1) differential propagation efficacy of rate and synchrony to higher layers of a feedforward network is dependent on both network and input parameters, and (2) previous modeling and simulation studies exclusively supporting either rate or temporal coding must be reconsidered within the limited range of network and input parameters used. Our exact, analytical solution for feedforward networks of coincidence detectors should prove useful for further elucidating the efficacy and differential roles of rate and temporal codes in terms of different network and input parameter ranges.
Neural Computation (2003) 15 (10): 2339–2358.
Published: 01 October 2003
In this letter, we extend our previous analytical results (Mikula & Niebur, 2003) for the coincidence detector by taking into account probabilistic frequency-dependent synaptic depression. We present a solution for the steady-state output rate of an ideal coincidence detector receiving an arbitrary number of input spike trains with identical binomial count distributions (which includes Poisson statistics as a special case) and identical arbitrary pairwise cross-correlations, from zero correlation (independent processes) to perfect correlation (identical processes). Synapses vary their efficacy probabilistically according to the observed depression mechanisms. Our results show that synaptic depression, if made sufficiently strong, will result in an inverted U-shaped curve for the output rate of a coincidence detector as a function of input rate. This leads to the counterintuitive prediction that higher presynaptic (input) rates may lead to lower postsynaptic (output) rates where the output rate may fall faster than the inverse of the input rate.
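The inverted-U prediction can be reproduced qualitatively with an assumed depression law. The sketch below uses a made-up effective transmission probability q(p) = 1/(1 + beta*p)**2 per synapse, not the paper's probabilistic depression mechanism, feeding an ideal two-input coincidence detector.

```python
def detector_rate(p_in, n_inputs=2, beta=10.0):
    # Assumed depression law (illustrative only): effective transmission
    # probability per synapse falls as q(p) = 1 / (1 + beta * p)**2.
    q = 1.0 / (1.0 + beta * p_in) ** 2
    # Ideal coincidence detector: every input must transmit in the bin.
    return (p_in * q) ** n_inputs

# Output rate rises, peaks, then falls as the input rate keeps growing.
rates = {p: detector_rate(p) for p in (0.05, 0.1, 0.5)}
```

With beta = 10 the output peaks near p = 0.1 and then drops, matching the counterintuitive prediction that higher presynaptic rates can yield lower postsynaptic rates when depression is strong enough.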
Neural Computation (2003) 15 (3): 539–547.
Published: 01 March 2003
We derive analytically the solution for the output rate of the ideal coincidence detector. The solution is for an arbitrary number of input spike trains with identical binomial count distributions (which includes Poisson statistics as a special case) and identical arbitrary pairwise cross-correlations, from zero correlation (independent processes) to complete correlation (identical processes).
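In the zero-correlation special case (independent input processes), the ideal coincidence detector's per-bin output rate reduces to p**n, which a quick Monte Carlo check can confirm. This sketch covers only that independent case, not the correlated binomial inputs treated analytically in the paper; the parameters are invented.

```python
import random

def coincidence_rate(n_inputs, p, n_bins, seed=1):
    # Monte Carlo estimate: the detector fires in a bin only when
    # every input train spikes in that bin (independent inputs here).
    rng = random.Random(seed)
    fired = sum(
        1 for _ in range(n_bins)
        if all(rng.random() < p for _ in range(n_inputs))
    )
    return fired / n_bins

est = coincidence_rate(n_inputs=2, p=0.2, n_bins=100_000)
exact = 0.2 ** 2   # independent-input special case: p ** n_inputs
```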
Neural Computation (1994) 6 (4): 602–614.
Published: 01 July 1994
Visual space is represented by cortical cells in an orderly manner. Only little variation in the cell behavior is found with changing depth below the cortical surface, that is, all cells in a column with axis perpendicular to the cortical plane have approximately the same properties (Hubel and Wiesel 1962, 1963, 1968). Therefore, the multiple features of the visual space (e.g., position in visual space, preferred orientation, and orientation tuning strength) are mapped on a two-dimensional space, the cortical plane. Such a dimension reduction leads to complex maps (Durbin and Mitchison 1990) that so far have evaded an intuitive understanding. Analyzing optical imaging data (Blasdel 1992a, b; Blasdel and Salama 1986; Grinvald et al. 1986) using a theoretical approach, we will show that the most salient features of these maps can be understood from a few basic design principles: local correlation, modularity, isotropy, and homogeneity. These principles can be defined in a mathematically exact sense in the Fourier domain by a rather simple annulus-like spectral structure. Many of the models that have been developed to explain the mapping of the preferred orientations (Cooper et al. 1979; Legendy 1978; Linsker 1986a, b; Miller 1992; Nass and Cooper 1975; Obermayer et al. 1990, 1992; Soodak 1987; Swindale 1982, 1985, 1992; von der Malsburg 1973; von der Malsburg and Cowan 1982) are quite successful in generating maps that are close to experimental maps. We suggest that this success is due to these principles, which are common properties of the models and of biological maps.
Neural Computation (1993) 5 (4): 570–586.
Published: 01 July 1993
We study the dynamics of completely connected populations of refractory integrate-and-fire neurons in the presence of noise. Solving the master equation based on a mean-field approach, and by computer simulations, we find sustained states of activity that correspond to fixed points and show that for the same value of external input, the system has one or two attractors. The dynamic behavior of the population under the influence of external input and noise manifests hysteresis effects that might have a functional role for memory. The temporal dynamics at higher temporal resolution, finer than the transmission delay times and the refractory period, are characterized by synchronized activity of subpopulations. The global activity of the population shows aperiodic oscillations analogous to experimentally found field potentials.
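The one-or-two-attractor behavior can be illustrated with a generic mean-field rate map (a logistic gain with recurrent weight w and external input I, both invented here, not the paper's master-equation solution): for strong recurrence the same input admits a low-rate and a high-rate fixed point, selected by the initial condition, which is the substrate of the hysteresis the abstract describes.

```python
import math

def fixed_point(w, I, r0, n_iter=500):
    # Iterate the mean-field rate map r -> f(w*r + I) to convergence;
    # f is a logistic gain. All parameters are invented for illustration.
    r = r0
    for _ in range(n_iter):
        r = 1.0 / (1.0 + math.exp(-(w * r + I)))
    return r

# Strong recurrent excitation: the same external input I supports two
# stable fixed points, depending on where the population starts.
low = fixed_point(w=10.0, I=-5.0, r0=0.0)
high = fixed_point(w=10.0, I=-5.0, r0=1.0)
```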
Neural Computation (1992) 4 (3): 332–340.
Published: 01 May 1992
To what extent do the mechanisms generating different receptive field properties of neurons depend on each other? We investigated this question theoretically within the context of orientation and direction tuning of simple cells in the mammalian visual cortex. In our model a cortical cell of the "simple" type receives its orientation tuning by afferent convergence of aligned receptive fields of the lateral geniculate nucleus (Hubel and Wiesel 1962). We sharpen this orientation bias by postulating a special type of radially symmetric long-range lateral inhibition called circular inhibition. Surprisingly, this isotropic mechanism leads to the emergence of a strong bias for the direction of motion of a bar. We show that this directional anisotropy is neither caused by the probabilistic nature of the connections nor is it a consequence of the specific columnar structure chosen but that it is an inherent feature of the architecture of visual cortex.