Search results for author G. Mato: 1-7 of 7 journal articles.
Journal Articles
Publisher: Journals Gateway
Neural Computation (2011) 23 (7): 1768–1789.
Published: 01 July 2011
Abstract
Plastic changes in synaptic efficacy can depend on the time ordering of presynaptic and postsynaptic spikes. This phenomenon is called spike-timing-dependent plasticity (STDP). One of the most striking aspects of this plasticity mechanism is that the STDP windows display a great variety of forms in different parts of the nervous system. We explore this issue from a theoretical point of view. We choose as the optimization principle the minimization of conditional entropy, or maximization of reliability, in the transmission of information. We apply this principle to two types of postsynaptic dynamics, designated type I and type II. The first is characterized as being an integrator, while the second is a resonator. We find that, depending on the parameters of the models, the optimization principle can give rise to a wide variety of STDP windows, such as antisymmetric Hebbian, predominantly depressing, or symmetric with one positive region and two lateral negative regions. We can relate each of these forms to the dynamical behavior of the different models. We also propose experimental tests to assess the validity of the optimization principle.
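The window shapes named in this abstract can be illustrated with standard textbook parameterizations. The sketch below shows the three forms as functions of the spike-time difference; the function name, parameter values, and exponential/Gaussian forms are illustrative assumptions, not the windows the paper derives from entropy minimization.

```python
import numpy as np

def stdp_window(dt, form="antisymmetric", a_plus=1.0, a_minus=0.5, tau=20.0):
    """Illustrative STDP windows dw(dt), with dt = t_post - t_pre in ms.

    Textbook shapes for the three forms named in the abstract:
    antisymmetric Hebbian, predominantly depressing, and symmetric
    with a central positive region and lateral negative regions.
    """
    dt = np.asarray(dt, dtype=float)
    if form == "antisymmetric":
        # pre-before-post (dt > 0) potentiates, post-before-pre depresses
        return np.where(dt > 0,
                        a_plus * np.exp(-dt / tau),
                        -a_minus * np.exp(dt / tau))
    if form == "depressing":
        # depression regardless of spike ordering
        return -a_minus * np.exp(-np.abs(dt) / tau)
    if form == "symmetric":
        # difference of Gaussians: positive near coincidence,
        # negative for larger |dt| (two lateral negative regions)
        return a_plus * np.exp(-dt**2 / (2 * tau**2)) \
               - a_minus * np.exp(-dt**2 / (2 * (2 * tau)**2))
    raise ValueError(f"unknown form: {form}")
```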
Neural Computation (2010) 22 (7): 1837–1859.
Published: 01 July 2010
Abstract
Neurons in the nervous system display a wide variety of plasticity processes. Among them are covariance-based rules and homeostatic plasticity. By themselves, covariance-based rules tend to generate instabilities because of the unbounded potentiation of synapses, while homeostatic plasticity tends to stabilize the system by setting a target for the postsynaptic firing rate. In this work, we analyze the combined effect of these two mechanisms in a simple model of a hypercolumn of the visual cortex. We find that the presence of homeostatic plasticity together with nonplastic uniform inhibition stabilizes the effect of Hebbian plasticity. The system can reach nontrivial solutions, where the recurrent intracortical connections are strongly modulated. The modulation is strong enough to generate contrast invariance. Moreover, this state can be reached even when starting from a weakly modulated initial condition.
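A minimal sketch of how a covariance term and a homeostatic term can be combined in a single weight update. The function name, learning rates, and the multiplicative form of the scaling term are assumptions for illustration, not the hypercolumn model of the paper.

```python
import numpy as np

def update_weights(w, pre_dev, post_dev, r_post, r_target,
                   eta_cov=1e-2, eta_home=1e-1):
    """One update combining a covariance (Hebbian) rule with homeostatic
    synaptic scaling.

    pre_dev, post_dev : deviations of pre-/postsynaptic rates from their means
    r_post, r_target  : current and target postsynaptic firing rates
    """
    # covariance rule: correlated rate deviations potentiate the synapse
    dw_cov = eta_cov * pre_dev * post_dev
    # homeostatic scaling: shrink (grow) all weights when the postsynaptic
    # rate is above (below) its target, bounding the Hebbian runaway
    dw_home = eta_home * (r_target - r_post) * w
    return np.clip(w + dw_cov + dw_home, 0.0, None)  # keep weights nonnegative
```

With a postsynaptic rate above target, the scaling term reduces all weights while the covariance term still differentiates correlated from anticorrelated inputs.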
Neural Computation (2003) 15 (1): 1–56.
Published: 01 January 2003
Abstract
We investigate theoretically the conditions for the emergence of synchronous activity in large networks, consisting of two populations of extensively connected neurons, one excitatory and one inhibitory. The neurons are modeled with quadratic integrate-and-fire dynamics, which provide a very good approximation for the subthreshold behavior of a large class of neurons. In addition to their synaptic recurrent inputs, the neurons receive a tonic external input that varies from neuron to neuron. Because of its relative simplicity, this model can be studied analytically. We investigate the stability of the asynchronous state (AS) of the network with given average firing rates of the two populations. First, we show that the AS can remain stable even if the synaptic couplings are strong. Then we investigate the conditions under which this state can be destabilized. We show that this can happen in four generic ways. The first is a saddle-node bifurcation, which leads to another state with different average firing rates. This bifurcation, which occurs for strong enough recurrent excitation, does not correspond to the emergence of synchrony. In contrast, in the three other instability mechanisms, Hopf bifurcations, which correspond to the emergence of oscillatory synchronous activity, occur. We show that these mechanisms can be differentiated by the firing patterns they generate and their dependence on the mutual interactions of the inhibitory neurons and cross talk between the two populations. We also show that besides these codimension 1 bifurcations, the system can display several codimension 2 bifurcations: Takens-Bogdanov, Gavrilov-Guckenheimer, and double Hopf bifurcations.
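The quadratic integrate-and-fire dynamics mentioned above can be sketched for a single neuron: dV/dt = V² + I, with a spike emitted when V diverges and the voltage reset to minus infinity (approximated here by finite bounds). This single-neuron sketch with illustrative parameters is not the two-population network the paper analyzes.

```python
def simulate_qif(i_ext, t_max=100.0, dt=0.001, v_peak=10.0, v_reset=-10.0):
    """Euler simulation of a quadratic integrate-and-fire neuron,
        dV/dt = V^2 + I   (dimensionless units),
    resetting V from v_peak to v_reset at each spike. The finite bounds
    approximate the +/- infinity of the exact model. Returns spike times."""
    v, spikes = v_reset, []
    n_steps = int(round(t_max / dt))
    for step in range(n_steps):
        v += dt * (v * v + i_ext)
        if v >= v_peak:
            spikes.append((step + 1) * dt)
            v = v_reset
    return spikes
```

For I > 0 the neuron fires periodically (period approximately 2·arctan(v_peak/√I)/√I with these bounds); for I < 0 and a start below the stable fixed point at -√(-I), it is quiescent.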
Neural Computation (2000) 12 (7): 1607–1641.
Published: 01 July 2000
Abstract
The emergence of synchrony in the activity of large, heterogeneous networks of spiking neurons is investigated. We define the robustness of synchrony by the critical disorder at which the asynchronous state becomes linearly unstable. We show that at low firing rates, synchrony is more robust in excitatory networks than in inhibitory networks, but excitatory networks cannot display any synchrony when the average firing rate becomes too high. We introduce a new regime where all inputs, external and internal, are strong and have opposite effects that cancel each other when averaged. In this regime, the robustness of synchrony is strongly enhanced, and robust synchrony can be achieved at a high firing rate in inhibitory networks. On the other hand, in excitatory networks, synchrony remains limited in frequency due to the intrinsic instability of strong recurrent excitation.
Neural Computation (1998) 10 (2): 467–483.
Published: 15 February 1998
Abstract
It is shown that very small time steps are required to reproduce correctly the synchronization properties of large networks of integrate-and-fire neurons when the differential system describing their dynamics is integrated with the standard Euler or second-order Runge-Kutta algorithms. The reason for that behavior is analyzed, and a simple improvement of these algorithms is proposed.
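The kind of improvement described can be illustrated on a leaky integrate-and-fire neuron: instead of registering the spike at the end of the Euler step that crosses threshold, the crossing time is located by linear interpolation within the step. The function below is a sketch in that spirit, with assumed parameter names and model details, not the paper's exact scheme.

```python
def lif_step_interpolated(v, i_ext, dt, tau=1.0, theta=1.0, v_reset=0.0):
    """One Euler step for the leaky integrate-and-fire neuron
        dV/dt = (-V + I) / tau,
    with linear interpolation of the threshold-crossing time inside the
    step. Returns (new voltage, spike-time offset within the step or None)."""
    v_new = v + dt * (-v + i_ext) / tau
    if v_new < theta:
        return v_new, None
    # fraction of the step at which V crossed theta (linear interpolation)
    frac = (theta - v) / (v_new - v)
    t_spike_offset = frac * dt
    # restart from reset and integrate the remaining part of the step,
    # instead of losing the sub-step timing information
    v_after = v_reset + (dt - t_spike_offset) * (-v_reset + i_ext) / tau
    return v_after, t_spike_offset
```

Keeping the sub-step spike time matters because network synchronization is sensitive to relative spike timing at a resolution finer than practical time steps.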
Neural Computation (1996) 8 (2): 270–299.
Published: 15 February 1996
Abstract
We study neural network models of discriminating between stimuli with two similar angles, using the two-alternative forced choice (2AFC) paradigm. Two network architectures are investigated: a two-layer perceptron network and a gating network. In the two-layer network all hidden units contribute to the decision at all angles, while in the other architecture the gating units select, for each stimulus, the appropriate hidden units that will dominate the decision. We find that both architectures can perform the task reasonably well for all angles. Perceptual learning has been modeled by training the networks to perform the task, using unsupervised Hebb learning algorithms with pairs of stimuli at fixed angles θ and θ + δθ. Perceptual transfer is studied by measuring the performance of the network on stimuli with θ′ ≠ θ. The two-layer perceptron shows a partial transfer for angles that are within a distance a from θ, where a is the angular width of the input tuning curves. The change in performance due to learning is positive for angles close to θ, but for |θ − θ′| ≈ a it is negative, i.e., its performance after training is worse than before. In contrast, negative transfer can be avoided in the gating network by limiting the effects of learning to hidden units that are optimized for angles that are close to the trained angle.
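A minimal sketch of the kind of input layer such models use: a population of units with Gaussian tuning curves of angular width a, the same width that sets the transfer range in the abstract. The function name, unit count, and parameter values are illustrative assumptions.

```python
import numpy as np

def population_input(theta, n_units=50, a=10.0):
    """Responses of a population of orientation-tuned units to angle theta
    (degrees), with Gaussian tuning curves of angular width `a` and
    preferred angles evenly spaced over [0, 180)."""
    preferred = np.linspace(0.0, 180.0, n_units, endpoint=False)
    d = np.abs(preferred - theta)
    d = np.minimum(d, 180.0 - d)          # wrap-around for orientation
    return np.exp(-d**2 / (2 * a**2))
```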
Neural Computation (1995) 7 (2): 307–337.
Published: 01 March 1995
Abstract
Synchronization properties of fully connected networks of identical oscillatory neurons are studied, assuming purely excitatory interactions. We analyze their dependence on the time course of the synaptic interaction and on the response of the neurons to small depolarizations. Two types of responses are distinguished. In the first type, neurons always respond to small depolarization by advancing the next spike. In the second type, an excitatory postsynaptic potential (EPSP) received after the refractory period delays the firing of the next spike, while an EPSP received at a later time advances the firing. For these two types of responses we derive general conditions under which excitation destabilizes in-phase synchrony. We show that excitation is generally desynchronizing for neurons with a response of type I but can be synchronizing for responses of type II when the synaptic interactions are fast. These results are illustrated on three models of neurons: the Lapicque integrate-and-fire model, the model of Connor et al., and the Hodgkin-Huxley model. The latter exhibits a type II response, at variance with the first two models, which have type I responses. We then examine the consequences of these results for large networks, focusing on the states of partial coherence that emerge. Finally, we study the Lapicque model and the model of Connor et al. at large coupling and show that excitation can be desynchronizing even beyond the weak coupling regime.
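The two response types correspond to the canonical phase-response-curve (PRC) shapes of oscillator theory. The forms below are standard textbook choices (positive values meaning spike advance), not fits to the three models cited.

```python
import numpy as np

# Canonical PRC shapes for the two response types in the abstract,
# with phi in [0, 1) the phase elapsed since the last spike:
#   type I : a small depolarization always advances the next spike,
#   type II: it delays the spike early in the cycle and advances it later.

def prc_type1(phi):
    return 1.0 - np.cos(2 * np.pi * phi)   # >= 0 everywhere: pure advance

def prc_type2(phi):
    return -np.sin(2 * np.pi * phi)        # < 0 early (delay), > 0 late (advance)
```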