Emmanuel Guigon
Neural Computation (2005) 17 (9): 2060–2076.
Published: 01 September 2005
Supervised Learning in a Recurrent Network of Rate-Model Neurons Exhibiting Frequency Adaptation
Abstract
For gradient descent learning to yield connectivity consistent with real biological networks, the simulated neurons would have to include more realistic intrinsic properties such as frequency adaptation. However, gradient descent learning cannot be used straightforwardly with adapting rate-model neurons because the derivative of the activation function depends on the activation history. The objectives of this study were to (1) develop a simple computational approach to reproduce mathematical gradient descent and (2) use this computational approach to provide supervised learning in a network formed of rate-model neurons that exhibit frequency adaptation. The results of mathematical gradient descent were used as a reference in evaluating the performance of the computational approach. For this comparison, standard (nonadapting) rate-model neurons were used for both approaches. The only difference was the gradient calculation: the mathematical approach used the derivative at a point in weight space, while the computational approach used the slope for a step change in weight space. Theoretically, the results of the computational approach should match those of the mathematical approach as the step size is reduced, but floating-point accuracy set a lower limit on usable step sizes. A systematic search for an optimal step size yielded a computational approach that faithfully reproduced the results of mathematical gradient descent. The computational approach was then used for supervised learning of both connection weights and intrinsic properties of rate-model neurons to convert a tonic input into a phasic-tonic output pattern. Learning produced biologically realistic connectivity, essentially a monosynaptic connection from the tonic input neuron to an output neuron with strong frequency adaptation, in contrast to the complex network obtained with nonadapting neurons. Thus, more biologically realistic connectivity was achieved by implementing rate-model neurons with more realistic intrinsic properties. Our computational approach could be applied to learning of other neuron properties.
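The distinction drawn above, an analytic derivative at a point versus the slope for a step change in weight space, is the standard finite-difference idea. Below is a minimal sketch of that idea in NumPy, assuming a generic scalar `loss` over a weight vector; the forward-difference scheme, the function names, and the toy fitting problem are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def numerical_gradient(loss, w, step=1e-5):
    """Estimate the gradient of `loss` at `w` by forward differences.

    Where the analytic derivative is unavailable (e.g., when the activation
    function's derivative depends on activation history, as with adapting
    neurons), use the slope of the loss over a small step along each weight.
    """
    grad = np.zeros_like(w)
    base = loss(w)
    for i in range(w.size):
        w_step = w.copy()
        w_step[i] += step               # perturb one weight at a time
        grad[i] = (loss(w_step) - base) / step
    return grad

def train(loss, w, lr=0.1, step=1e-5, n_iters=1000):
    """Plain gradient descent driven by the numerical gradient."""
    for _ in range(n_iters):
        w = w - lr * numerical_gradient(loss, w, step)
    return w

# Toy usage: recover the weights of a linear readout from data.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
y = x @ np.array([1.0, -2.0, 0.5])
mse = lambda w: np.mean((x @ w - y) ** 2)
print(train(mse, np.zeros(3)))  # approaches [1.0, -2.0, 0.5]
```

The `step` parameter reflects the trade-off the abstract describes: the estimate converges to the analytic gradient as `step` shrinks, but below some size floating-point rounding dominates the difference `loss(w_step) - base`, which is why a systematic search for a usable step size was needed.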
Neural Computation (2003) 15 (9): 2115–2127.
Published: 01 September 2003
Computing with Populations of Monotonically Tuned Neurons
Abstract
The parametric variation in neuronal discharge according to the values of sensory or motor variables strongly influences the collective behavior of neuronal populations. A multitude of studies on populations of broadly tuned neurons (e.g., cosine tuning) have led to such well-known computational principles as population coding, noise suppression, and line attractors. Much less is known about the properties of populations of monotonically tuned neurons. In this letter, we show that there exists an efficient, weakly biased linear estimator for monotonic populations and that neural processing based on linear collective computation and least-square-error learning in populations of intensity-coded neurons has specific generalization capacities.
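As a rough illustration of a linear least-square-error readout from an intensity-coded population, here is a minimal sketch assuming rectified-linear monotonic tuning with random thresholds and gains; the tuning model, noise level, and all parameter values are illustrative assumptions rather than the letter's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# A population of monotonically tuned units: rectified-linear responses
# with random thresholds and gains (an assumed, illustrative tuning model).
n_units = 50
thresholds = rng.uniform(-1.0, 1.0, n_units)
gains = rng.uniform(0.5, 2.0, n_units)

def population_response(s, noise=0.05):
    """Noisy monotonic (rectified-linear) tuning to the scalar variable s."""
    r = gains * np.maximum(np.subtract.outer(s, thresholds), 0.0)
    return r + noise * rng.normal(size=r.shape)

# Least-square-error learning of a linear estimator: weights w with r @ w ~= s.
s_train = rng.uniform(-1.0, 1.0, 500)
r_train = population_response(s_train)
w, *_ = np.linalg.lstsq(r_train, s_train, rcond=None)

# Decode unseen stimuli with the learned linear readout.
s_test = np.linspace(-0.9, 0.9, 7)
print(np.round(population_response(s_test) @ w, 2))  # close to s_test
```

Averaging over many noisy units gives the noise suppression mentioned above; a small residual bias of the readout is to be expected, in line with the "weakly biased" qualifier in the abstract.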
Neural Computation (2003) 15 (2): 279–308.
Published: 01 February 2003
Reliability of Spike Timing Is a General Property of Spiking Model Neurons
Abstract
The responses of neurons to time-varying injected currents are reproducible on a trial-by-trial basis in vitro, but when a constant current is injected, small variances in interspike intervals across trials add up, eventually leading to a high variance in spike timing. It is unclear whether this difference is due to the nature of the input currents or to the intrinsic properties of the neurons. Neuron responses can fail to be reproducible in two ways: dynamical noise can accumulate over time and lead to desynchronization across trials, or several stable responses can exist, depending on the initial condition. Here we show, through simulations and theoretical considerations, that for a general class of spiking neuron models, which includes, in particular, the leaky integrate-and-fire model as well as nonlinear spiking models, aperiodic currents, contrary to periodic currents, induce reproducible responses that are stable under noise, changes in initial conditions, and deterministic perturbations of the input. We provide a theoretical explanation for the case of aperiodic currents that cross the threshold.
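The two input regimes contrasted in the abstract can be sketched with a leaky integrate-and-fire simulation. The sketch below is a minimal illustration, assuming Euler integration, a frozen filtered-noise trace as the aperiodic current, a constant current as the limiting periodic case, and a crude alignment of spikes by index to measure timing jitter; every parameter value is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def lif_trial(I, v0, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0, noise=0.02):
    """One trial of a leaky integrate-and-fire neuron driven by current I(t)."""
    v, spikes = v0, []
    for k, i_k in enumerate(I):
        v += dt / tau * (-v + i_k) + noise * np.sqrt(dt) * rng.normal()
        if v >= v_th:                 # threshold crossing: spike and reset
            spikes.append(k * dt)
            v = v_reset
    return np.array(spikes)

T = 5000  # 500 ms at dt = 0.1 ms
# Frozen aperiodic input: the same fluctuating trace is replayed on every trial.
smooth = np.convolve(rng.normal(size=T), np.ones(50) / 50, "same")
aperiodic = 1.2 + 0.5 * smooth / smooth.std()
constant = np.full(T, 1.2)  # constant drive with the same mean

for I, name in [(aperiodic, "aperiodic"), (constant, "constant")]:
    # Different initial conditions and independent noise on each trial.
    trials = [lif_trial(I, v0=rng.uniform(0.0, 1.0)) for _ in range(10)]
    n = min(len(s) for s in trials)
    jitter = np.std([s[:n] for s in trials], axis=0).mean()
    print(f"{name}: mean spike-time jitter across trials = {jitter:.2f} ms")
```

With the fluctuating trace, spike times lock to the input and the jitter stays small across trials; with the constant drive, interval variances accumulate and later spikes drift apart, the behavior the abstract describes.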
Neural Computation (2002) 14 (4): 845–871.
Published: 01 April 2002
Population Computation of Vectorial Transformations
Abstract
Many neurons of the central nervous system are broadly tuned to some sensory or motor variables. This property allows one to assign to each neuron a preferred attribute (PA). The width of the tuning curves and the distribution of PAs in a population of neurons tuned to a given variable define the collective behavior of the population. In this article, we study the relationship among the nature of the tuning curves, the distribution of PAs, and the computational properties of linear neuronal populations. We show that noise-resistant distributed linear algebraic processing and learning can be implemented by a population of cosine-tuned neurons, assuming a nonuniform but regular distribution of PAs. We extend these results analytically to the case of noncosine tuning with a uniform distribution of PAs and show with a numerical simulation that the results remain valid for a nonuniform but regular distribution of PAs with broad noncosine tuning curves. These observations provide a theoretical basis for modeling general nonlinear sensorimotor transformations as sets of local linearized representations.
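The linear algebraic processing described above can be made concrete with a small population-vector sketch. For simplicity it assumes cosine tuning with uniformly distributed PAs on the circle (the article's analysis also covers nonuniform but regular distributions); the readout scaling and the choice of a rotation as the vectorial transformation are illustrative assumptions.

```python
import numpy as np

# Cosine-tuned population: each unit has a preferred attribute (PA), a unit
# vector p_i, and responds to a stimulus x with rate r_i = x . p_i.
n_units = 64
angles = np.linspace(0.0, 2.0 * np.pi, n_units, endpoint=False)  # uniform PAs
P = np.stack([np.cos(angles), np.sin(angles)], axis=1)           # (n_units, 2)

def encode(x):
    return P @ x  # cosine tuning: r_i = |x| cos(angle between x and p_i)

def decode(r):
    # Population-vector readout. For uniform PAs, sum_i p_i p_i^T = (n/2) I,
    # so scaling by 2/n exactly inverts the encoding.
    return (2.0 / n_units) * (P.T @ r)

# A linear vectorial transformation (here a 90-degree rotation R) can be
# carried out entirely in rate space by a fixed weight matrix W.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
W = (2.0 / n_units) * (P @ R @ P.T)  # rate-space implementation of R

x = np.array([1.0, 0.5])
print(decode(encode(x)))      # recovers x
print(decode(W @ encode(x)))  # recovers R @ x
```

Because the readout sums over many units, independent noise on individual rates averages out, which is the noise resistance referred to in the abstract; composing such fixed weight matrices locally is one way to read the "sets of local linearized representations" idea.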