Néstor Parga
Neural Computation (2010) 22 (6): 1528–1572.
Published: 01 June 2010
Abstract
Delivery of neurotransmitter at a synapse produces a current that flows through the membrane and is transmitted to the soma of the neuron, where it is integrated. The decay time of the current depends on the type of synaptic receptor and ranges from a few milliseconds (e.g., AMPA receptors) to a few hundred milliseconds (e.g., NMDA receptors). The role of this variety of synaptic timescales, several of which coexist in the same neuron, is at present not understood. A prime question to answer is what effect temporal filtering of the incoming spike trains at different timescales has on the neuron's response. Here, based on our previous work on linear synaptic filtering, we build a general theory for the stationary firing response of integrate-and-fire (IF) neurons receiving stochastic inputs filtered by one, two, or multiple synaptic channels, each characterized by an arbitrary timescale. The formalism applies to arbitrary IF model neurons and arbitrary forms of input noise (i.e., the noise is not required to be gaussian or to have small amplitude), as well as to any form of synaptic filtering (linear or nonlinear). The theory determines, with exact analytical expressions, the firing rate of an IF neuron for long synaptic time constants using the adiabatic approach. The correlated spiking (cross-correlation function) of two neurons receiving common as well as independent sources of noise is also described. The theory is illustrated using leaky, quadratic, and noise-thresholded IF neurons. Although the adiabatic approach is exact when at least one of the synaptic timescales is long, it provides a good prediction of the firing rate even when the timescales of the synapses are comparable to that of the leak of the neuron; it is not required that the synaptic time constants be longer than the mean interspike interval or that the noise have small variance. The distribution of the potential for general IF neurons is also characterized. Our results provide powerful analytical tools that allow a quantitative description of the dynamics of neuronal networks with realistic synaptic dynamics.
Includes: Supplementary data
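To make the adiabatic idea in this abstract concrete: when a synaptic timescale is much longer than the membrane time constant, the filtered current is quasi-static, and the output rate is the deterministic IF rate averaged over the stationary distribution of that current. The Python sketch below illustrates this for a leaky IF neuron; the parameters and the gaussian form of the slow current are illustrative assumptions, not the paper's formalism.

```python
import numpy as np

# Assumed LIF parameters (illustrative only): membrane time constant (s),
# threshold, and reset potential in arbitrary units.
tau_m, V_th, V_r = 0.02, 1.0, 0.0

def lif_rate_deterministic(I):
    """Firing rate of a noiseless LIF with constant input I.
    Dynamics: dV/dt = (-V + I) / tau_m; spike and reset at V_th."""
    if I <= V_th:
        return 0.0  # subthreshold: no firing without fluctuations
    isi = tau_m * np.log((I - V_r) / (I - V_th))  # time to reach threshold
    return 1.0 / isi

# Adiabatic average: sample the (assumed gaussian) stationary distribution of
# the slowly filtered synaptic current and average the static rate over it.
rng = np.random.default_rng(0)
mu_I, sigma_I = 1.2, 0.3  # assumed mean and spread of the slow current
I_samples = rng.normal(mu_I, sigma_I, 50_000)
nu = np.mean([lif_rate_deterministic(I) for I in I_samples])
print(f"adiabatic estimate of the firing rate: {nu:.1f} Hz")
```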
Neural Computation (2008) 20 (7): 1651–1705.
Published: 01 July 2008
Abstract
Spike correlations between neurons are ubiquitous in the cortex, but their role is not understood. Here we describe the firing response of a leaky integrate-and-fire (LIF) neuron when it receives a temporally correlated input generated by presynaptic correlated neuronal populations. Input correlations are characterized in terms of the firing rates, Fano factors, correlation coefficients, and correlation timescale of the neurons driving the target neuron. We show that the sum of the presynaptic spike trains cannot be well described by a Poisson process. In fact, the total input current has a nontrivial two-point correlation function described by two main parameters: the correlation timescale (how precise the input correlations are in time) and the correlation magnitude (how strong they are). Therefore, the total current generated by the input spike trains is not well described by a white noise gaussian process. Instead, we model the total current as a colored gaussian process with the same mean and two-point correlation function, leading to a formulation of the problem in terms of a Fokker-Planck equation. Solutions for the output firing rate are found in the limits of short and long correlation timescales. The solutions described here expand and improve on our previous results (Moreno, de la Rocha, Renart, & Parga, 2002) by presenting new analytical expressions for the output firing rate of general IF neurons, by extending the validity of the results to arbitrarily large correlation magnitude, and by describing the differential effect of correlations in the mean-driven and noise-dominated firing regimes. The details of this formalism are also given here for the first time. We employ numerical simulations to confirm the analytical solutions and to study the firing response to sudden changes in the input correlations. We expect this formalism to be useful for the study of correlations in neuronal networks and of their role in neural processing and information transmission.
Includes: Supplementary data
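A minimal numerical companion to this abstract: the snippet below drives an LIF neuron with a colored gaussian (Ornstein-Uhlenbeck) current whose mean, magnitude, and correlation timescale play the roles of the parameters described above. The exact OU update and all numbers are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 20.0                  # time step and duration (s)
tau_m, V_th, V_r = 0.02, 1.0, 0.0   # assumed LIF parameters
mu, sigma, tau_c = 0.9, 0.4, 0.01   # input mean, correlation magnitude, timescale (s)

# Exact one-step update of an Ornstein-Uhlenbeck process: a colored gaussian
# current standing in for the summed, temporally correlated presynaptic input.
a = np.exp(-dt / tau_c)
b = sigma * np.sqrt(1.0 - a * a)

V, I, n_spikes = V_r, mu, 0
for _ in range(int(T / dt)):
    I = mu + (I - mu) * a + b * rng.standard_normal()
    V += dt * (-V + I) / tau_m      # Euler step of the membrane equation
    if V >= V_th:                   # threshold crossing: spike and reset
        V = V_r
        n_spikes += 1
print(f"output firing rate: {n_spikes / T:.1f} Hz")
```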
Neural Computation (2007) 19 (1): 1–46.
Published: 01 January 2007
Abstract
Spike trains from cortical neurons show a high degree of irregularity, with coefficients of variation (CV) of their interspike interval (ISI) distribution close to or higher than one. It has been suggested that this irregularity might be a reflection of a particular dynamical state of the local cortical circuit in which excitation and inhibition balance each other. In this “balanced” state, the mean current to the neurons is below threshold, and firing is driven by current fluctuations, resulting in irregular Poisson-like spike trains. Recent data show that the degree of irregularity in neuronal spike trains recorded during the delay period of working memory experiments is the same for both low-activity states of a few Hz and for elevated, persistent activity states of a few tens of Hz. Since the difference between these persistent activity states cannot be due to external factors coming from sensory inputs, this suggests that the underlying network dynamics might support coexisting balanced states at different firing rates. We use mean-field techniques to study the possible existence of multiple balanced steady states in recurrent networks of current-based leaky integrate-and-fire (LIF) neurons. To assess the degree of balance of a steady state, we extend existing mean-field theories so that not only the firing rate but also the coefficient of variation of the interspike interval distribution is determined self-consistently. Depending on the connectivity parameters of the network, we find bistable solutions of different types. If the local recurrent connectivity is mainly excitatory, the two stable steady states differ mainly in the mean current to the neurons. In this case, the mean drive in the elevated persistent activity state is suprathreshold and typically characterized by low spiking irregularity. If the local recurrent excitatory and inhibitory drives are both large and nearly balanced, or even dominated by inhibition, two stable states coexist, both with subthreshold current drive. In this case, the spiking variability in both the resting state and the mnemonic persistent state is large, but the balance condition implies parameter fine-tuning. Since the degree of fine-tuning required increases with network size, while the fluctuations in the afferent current to the cells grow as the network becomes small, we find that, overall, fluctuation-driven persistent activity in the very simplified type of models we analyze is not a robust phenomenon. Possible implications of considering more realistic models are discussed.
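The self-consistency idea in this abstract can be caricatured in a few lines: in the diffusion approximation, the stationary rate of an LIF receiving gaussian current noise is given by the classic Siegert first-passage formula, and in a recurrent network the mean and variance of that input themselves depend on the rate. The sketch below iterates this loop to a fixed point for the rate only (the paper additionally closes the loop on the CV); all network parameters are invented for illustration.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

# Assumed single-neuron and network parameters (not the paper's values).
tau_m, tau_ref, V_th, V_r = 0.02, 0.002, 20.0, 10.0   # s, s, mV, mV
K_E, K_I, J_E, J_I = 800, 200, 0.1, 0.4               # in-degrees, weights (mV)
mu_ext, sigma_ext2 = 18.0, 4.0                        # external drive (mV, mV^2)

def transfer(mu, sigma):
    """Siegert first-passage-time rate of an LIF in the diffusion limit."""
    lo, hi = (V_r - mu) / sigma, (V_th - mu) / sigma
    integral, _ = quad(lambda u: np.exp(u**2) * (1 + erf(u)), lo, hi)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

nu = 5.0  # initial guess (Hz)
for _ in range(200):
    # Recurrent contribution to the input statistics depends on nu itself.
    mu = tau_m * (K_E * J_E - K_I * J_I) * nu + mu_ext
    sigma = np.sqrt(tau_m * (K_E * J_E**2 + K_I * J_I**2) * nu + sigma_ext2)
    nu = 0.5 * nu + 0.5 * transfer(mu, sigma)  # damped fixed-point iteration
print(f"self-consistent rate: {nu:.2f} Hz")
```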
Neural Computation (2000) 12 (4): 763–793.
Published: 01 April 2000
Abstract
We present a formalism that leads naturally to a hierarchical description of the different contrast structures in images, providing precise definitions of sharp edges and other texture components. Within this formalism, we achieve a decomposition of the pixels of the image into sets, the fractal components of the image, such that each set contains only points characterized by a fixed strength of the singularity of the contrast gradient in its neighborhood. A crucial role in this description of images is played by the behavior of contrast differences under changes in scale. Contrary to naive scaling ideas, in which the image is assumed to have uniform transformation properties (Field, 1987), each of these fractal components has its own transformation law and scaling exponents. A conjecture on their biological relevance is also given.
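A crude numerical caricature of the decomposition described here: estimate, pixel by pixel, how a local measure of contrast (the gradient magnitude summed over a window of radius r) scales with r, read off a singularity exponent from the log-log slope, and group pixels by exponent. The scaling form and every detail below are assumptions for illustration; the paper's formalism is far more careful.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Toy image (white noise stands in for a natural image here).
rng = np.random.default_rng(2)
img = rng.random((128, 128))
gy, gx = np.gradient(img)
grad = np.hypot(gx, gy)                     # local contrast gradient magnitude

# Local measure mu_r = sum of |grad| over a (2r+1)^2 window; the assumed
# scaling mu_r ~ r^(2 + h) defines a singularity exponent h at each pixel.
radii = np.array([1, 2, 4, 8])
sums = [uniform_filter(grad, size=2 * r + 1) * (2 * r + 1) ** 2 for r in radii]
log_r = np.log(2 * radii + 1)
log_mu = np.log(np.stack(sums).reshape(len(radii), -1) + 1e-12)

# Per-pixel least-squares slope of log mu_r against log r gives 2 + h.
slope = np.polyfit(log_r, log_mu, 1)[0].reshape(img.shape)
h = slope - 2.0

# Binning pixels by exponent gives a rough stand-in for fractal components.
counts, edges = np.histogram(h, bins=8)
print(np.round(edges, 2), counts)
```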
Neural Computation (1999) 11 (6): 1349–1388.
Published: 15 August 1999
Abstract
Cortical areas are characterized by forward and backward connections between adjacent cortical areas in a processing stream. Within each area there are recurrent collateral connections between the pyramidal cells. We analyze the properties of this architecture for memory storage and processing. Hebb-like synaptic modifiability in the connections and attractor states are incorporated. We show the following: (1) The number of memories that can be stored in the connected modules is of the same order of magnitude as the number that can be stored in any one module using the recurrent collateral connections, and is proportional to the number of effective connections per neuron. (2) Cooperation between modules leads to a small increase in memory capacity. (3) Cooperation can also help retrieval in a module that is cued with a noisy or incomplete pattern. (4) If the connection strength between modules is strong, then global memory states that reflect the pairs of patterns on which the modules were trained together are found. (5) If the intermodule connection strengths are weaker, then separate, local memory states can exist in each module. (6) The boundaries between the global and local retrieval states, and the nonretrieval state, are delimited. All of these properties are analyzed quantitatively with the techniques of statistical physics.
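The flavor of the architecture analyzed here can be seen in a small simulation: two Hopfield-style modules, each storing patterns through Hebbian recurrent collaterals, coupled by weaker Hebbian connections between paired patterns. Everything below (sizes, the coupling strength g, synchronous sign dynamics) is an invented toy, not the paper's statistical-physics calculation; it only illustrates point 3, retrieval in one module helped by the other.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P, g = 200, 5, 0.3        # neurons per module, pattern pairs, inter-module strength
sgn = lambda x: np.where(x >= 0, 1, -1)

xi_a = rng.choice([-1, 1], size=(P, N))      # patterns stored in module A
xi_b = rng.choice([-1, 1], size=(P, N))      # the paired patterns in module B
W_aa = (xi_a.T @ xi_a) / N                   # recurrent collaterals within A
W_bb = (xi_b.T @ xi_b) / N                   # and within B
W_ab = g * (xi_a.T @ xi_b) / N               # forward/backward inter-module couplings
np.fill_diagonal(W_aa, 0); np.fill_diagonal(W_bb, 0)

# Cue module A with a 30%-corrupted version of pattern 0; B starts at random.
s_a = np.where(rng.random(N) < 0.3, -xi_a[0], xi_a[0])
s_b = rng.choice([-1, 1], size=N)
for _ in range(20):                          # synchronous updates of both modules
    s_a, s_b = sgn(W_aa @ s_a + W_ab @ s_b), sgn(W_bb @ s_b + W_ab.T @ s_a)
print("overlap of A with cued pattern:", abs(s_a @ xi_a[0]) / N)
print("overlap of B with paired pattern:", abs(s_b @ xi_b[0]) / N)
```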
Neural Computation (1998) 10 (6): 1507–1525.
Published: 15 August 1998
Abstract
Objects can be recognized independently of the view they present, of their position on the retina, and of their scale. It has been suggested that one basic mechanism making this possible is a memory effect, or trace, that allows associations to be formed between consecutive views of one object. In this work, we explore the possibility that this memory trace is provided by the sustained activity of neurons in layers of the visual pathway, produced by extensive recurrent connectivity. We describe a model, simple enough to be treated analytically, with high recurrent connectivity and synaptic efficacies built from contributions of associations between pairs of views. The main result is that the behavior changes as the strength of the association between views of the same object, relative to the strength of the association within each view, increases. When this value is small, sustained activity in the network is produced by the views themselves. As it increases above a threshold value, the network always reaches a particular state (which represents the object), independent of the particular view that was presented as a stimulus. In this regime, the network can still store an extensive number of objects, each defined by a finite (although possibly large) number of views.
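A toy version of the transition described in this abstract: store several objects, each as a set of random views, with Hebbian couplings of strength 1 within a view and strength b between views of the same object. For small b, a view cue retrieves essentially only that view; for large b, the attractor acquires macroscopic overlap with every view of the cued object. All details below are assumptions for illustration, not the paper's model or analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n_objects, n_views, b = 400, 3, 4, 0.8   # try b = 0.1 for the view-dominated regime
sgn = lambda x: np.where(x >= 0, 1, -1)

views = rng.choice([-1, 1], size=(n_objects, n_views, N))
J = np.zeros((N, N))
for obj in views:                            # Hebbian couplings: within-view terms
    for v in range(n_views):                 # plus cross-view terms of strength b
        for w in range(n_views):
            J += (1.0 if v == w else b) * np.outer(obj[v], obj[w]) / N
np.fill_diagonal(J, 0)

s = views[0, 0].copy()                       # stimulate with one view of object 0
for _ in range(30):
    s = sgn(J @ s)
print(np.round(views[0] @ s / N, 2))         # overlap with each view of object 0
```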
Neural Computation (1997) 9 (7): 1421–1456.
Published: 10 July 1997
Abstract
In the context of both sensory coding and signal processing, building factorized codes has been shown to be an efficient strategy. In a wide variety of situations, the signal to be processed is a linear mixture of statistically independent sources. Building a factorized code is then equivalent to performing blind source separation. Thanks to the linear structure of the data, this can be done, in the language of signal processing, by finding an appropriate linear filter or, equivalently, in the language of neural modeling, by using a simple feedforward neural network. In this article, we discuss several aspects of the source separation problem. We give simple conditions on the network output that, if satisfied, guarantee that source separation has been achieved. We then study adaptive approaches, in particular those based on redundancy reduction and maximization of mutual information, and show how the resulting updating rules are related to the BCM theory of synaptic plasticity. Finally, we briefly discuss extensions to the case of nonlinear mixtures. Throughout this article, we take care to put our work into perspective with other studies on source separation and redundancy reduction. In particular, we review algebraic solutions, pointing out their simplicity but also their drawbacks.
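As a concrete instance of the adaptive, mutual-information-based approach discussed here, the sketch below runs a natural-gradient "infomax" ICA update on a two-channel linear mixture. This is one standard rule from the family the article surveys, not a reproduction of the article's own derivations; the mixing matrix and learning parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 20_000
S = rng.laplace(size=(2, T))             # two independent super-gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
X = A @ S                                # observed linear mixtures

W = np.eye(2)                            # unmixing filter, learned adaptively
lr = 0.01
for _ in range(50):                      # passes over the data in mini-batches
    for t in range(0, T, 100):
        u = W @ X[:, t:t + 100]
        # Natural-gradient infomax update for super-gaussian sources:
        # stationary when E[tanh(u) u^T] = I, i.e., outputs are decorrelated
        # through the tanh score function.
        W += lr * (np.eye(2) - np.tanh(u) @ u.T / u.shape[1]) @ W
print(np.round(W @ A, 2))  # approx. a scaled permutation if separation worked
```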