Abstract

It has been established that homeostatic synaptic scaling plasticity can maintain neural network activity in a stable regime. However, the underlying learning rule for this mechanism is still unclear. Whether it is dependent on the presynaptic site remains a topic of debate. Here we focus on two forms of learning rules: traditional synaptic scaling (SS) without presynaptic effect and presynaptic-dependent synaptic scaling (PSD). Analysis of the synaptic matrices reveals that transition matrices between consecutive synaptic matrices are distinct: they are diagonal and linear to neural activity under SS, but become nondiagonal and nonlinear under PSD. These differences produce different dynamics in recurrent neural networks. Numerical simulations show that network dynamics are stable under PSD but not SS, which suggests that PSD is a better form to describe homeostatic synaptic scaling plasticity. Matrix analysis used in the study may provide a novel way to examine the stability of learning dynamics.

1.  Introduction

It is well established that neurons undergo homeostatic forms of plasticity (Ramakers, Corner, & Habets, 1990; Karmarkar & Dan, 2006; Nelson & Turrigiano, 2008; Pozo & Goda, 2010), in which neurons up- or downregulate different neuronal or synaptic properties in response to changes in overall levels of neural activity. In recent years, one form of homeostatic plasticity, called homeostatic synaptic scaling, has been of particular interest to both experimental and theoretical neuroscientists, since it has been proposed to stabilize the positive feedback of the Hebbian rule (van Rossum, Bi, & Turrigiano, 2000; Houweling, Bazhenov, Timofeev, Steriade, & Sejnowski, 2005; Renart, Song, & Wang, 2003; Frohlich, Bazhenov, & Sejnowski, 2008; Turrigiano, 2007).

The original study revealed uniformly multiplicative synaptic scaling (Turrigiano, Leslie, Desai, Rutherford, & Nelson, 1998) in which all the synapses onto a postsynaptic neuron are scaled by the same factor according to the difference between desired and actual average activity levels in neurons. However, recent experimental results show that synaptic scaling is observed in some situations but not in others (Goel & Lee, 2007; Turrigiano, 2007; Kim & Tsien, 2008). These results suggest that synaptic scaling is not always uniform and that changes in network activity do not equally affect all presynaptic inputs onto a given neuron (Pozo & Goda, 2010). Therefore, the traditional synaptic scaling (SS) learning rule formulated with one uniform scale, which is dependent only on postsynaptic neural activity (van Rossum et al., 2000), may not be correct. Indeed, it has been shown that in recurrent networks, SS is unstable (Buonomano, 2005; Houweling et al., 2005; Frohlich et al., 2008; Liu & Buonomano, 2009; Liu & She, 2009). Another type of learning rule, termed presynaptic-dependent synaptic scaling (PSD), has been proposed (Buonomano, 2005; Liu & Buonomano, 2009). PSD is a variation of synaptic scaling that takes into account the average levels of activity of the presynaptic neurons. Computer simulations have established that PSD can stabilize multiple network-wide trajectories in recurrent networks (Liu & Buonomano, 2009).

In this study, we mathematically analyze the learning rules in both SS and PSD forms. We show that the mathematical structures of these learning rules are dramatically different. The transition matrix, which is defined as the synaptic scaling factor between synaptic matrices of two consecutive time steps, is diagonal and linearly proportional to neural activity under usual matrix multiplication under SS, while under PSD, it is nondiagonal and nonlinear under the Hadamard (entrywise) product. Through simulating recurrent networks with these two rules, we find that SS produces unstable dynamics with excitation explosion—runaway activity of high firing rates in all excitatory neurons of the network. However, PSD provides stable dynamics with self-organized neural trajectories that resemble behaviorally relevant spatiotemporal patterns of activity for sensory inputs (Broome, Jayaraman, & Laurent, 2006), motor behaviors (Hahnloser, Kozhevnikov, & Fee, 2002), as well as memory and planning (Pastalkova, Itskov, Amarasingham, & Buzsaki, 2008). Furthermore, the maximal eigenvalue of the synaptic matrix converges to a steady and larger-than-one value under PSD, but it does not converge under SS. Although keeping all eigenvalues of the synaptic matrix below 1 has been shown to be sufficient for network stability (Rajan & Abbott, 2006; Siri, Quoy, Delord, Cessac, & Berry, 2007; Siri, Berry, Cessac, Delord, & Quoy, 2008; Goldman, 2009), our results indicate that this condition is not necessary; the maximal eigenvalue of the synaptic matrix under stable learning dynamics can be larger than 1, which is consistent with a recent study (Sussillo & Abbott, 2009). Together, these results provide a new way to investigate the stability of learning rules and extend the previous view on the relationship between stable learning dynamics and synaptic matrix eigenvalues.
Our results suggest that the learning rule of homeostatic synaptic scaling depends not only on postsynaptic but also on presynaptic neural activity.

2.  Formulation and Simulation of Synaptic Scaling

2.1.  Traditional Synaptic Scaling.

The traditional synaptic scaling (SS) was observed to globally and uniformly scale all synapses connected to a postsynaptic neuron up or down in strength to stabilize neural firing (Turrigiano et al., 1998). As a result, the proposed learning rule is dependent on only postsynaptic neural activity and has a form (van Rossum et al., 2000; Renart et al., 2003; Goldman, 2009)
τw dwij/dt = (νgoal − ν̄i)wij
2.1
where wij is the synaptic strength from presynaptic neuron j to postsynaptic neuron i, and νgoal is the target firing rate. ν̄i is the average activity (firing rate) of neuron i, given by
ν̄i = (1/tmax) ∫0^tmax Σk δ(t − tk) dt
2.2
where δ is the Dirac function, tmax is the time window for the spike count, and tk, k=1, 2, …, are the spike times relative to the onset of the stimulus. The characteristic timescale of synaptic scaling is long, and τw and τν are large (Turrigiano, 2007). As in the previous study (Frohlich et al., 2008), we separate the neural network dynamics into two timescales and approximate the slow synaptic regulation with a discrete time update scheme.
In a discrete time neural network, the synaptic weight at trial τ can be denoted as w(τ)ij. Then equations 2.1 and 2.2 can be written as
w(τ+1)ij = w(τ)ij + αw(νgoal − ν̄(τ)i)w(τ)ij
2.3
ν̄(τ+1)i = ν̄(τ)i + αν(ν(τ)i − ν̄(τ)i)
2.4
where αν=1/τν and αw=1/τw. Learning dynamics and neural activity are coupled via ν(τ)i, that is, the firing rate of neuron i at the τth trial. The mismatch between the instantaneous and the average firing rates adjusts the interaction between neural activity and learning dynamics until the network reaches a stable state with ν̄i = νgoal.
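As an illustration, the discrete SS update of equations 2.3 and 2.4 can be sketched in NumPy (function and variable names are ours, not from the text):

```python
import numpy as np

def ss_update(W, nu_bar, nu, nu_goal, alpha_w=0.01, alpha_nu=0.1):
    """One trial of traditional synaptic scaling (SS).

    Every synapse onto postsynaptic neuron i is scaled by the same
    factor 1 + alpha_w * (nu_goal - nu_bar[i]); the presynaptic
    index j plays no role (equation 2.3).
    """
    W_next = W + alpha_w * (nu_goal - nu_bar)[:, None] * W
    # Running average of the firing rate (equation 2.4).
    nu_bar_next = nu_bar + alpha_nu * (nu - nu_bar)
    return W_next, nu_bar_next
```

Because the scale factor depends only on the row index i, the ratios among the weights onto a given postsynaptic neuron are preserved, a point that becomes important below.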

2.2.  Presynaptic-Dependent Synaptic Scaling.

Previous studies found that SS generates unstable activity in a recurrent network in response to a brief impulse of stimulus (Buonomano, 2005; Liu & Buonomano, 2009) and proposed an alternative learning rule, termed presynaptic-dependent synaptic scaling (PSD). With the assumption that the change of weights is dependent on both pre- and postsynaptic neural activity, PSD has a different equation for wij:
w(τ+1)ij = w(τ)ij + αw(νgoal − ν̄(τ)i)ν̄(τ)j w(τ)ij
2.5
Under this PSD rule, a postsynaptic neuron will preferentially potentiate the synapses from more active neurons.
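A minimal NumPy sketch of the PSD update, equation 2.5 (names are illustrative):

```python
import numpy as np

def psd_update(W, nu_bar, nu_goal, alpha_w=0.01):
    """One trial of presynaptic-dependent synaptic scaling (PSD).

    The weight change is gated by the presynaptic average rate
    nu_bar[j] (equation 2.5), so synapses from more active
    presynaptic neurons change more.
    """
    return W + alpha_w * np.outer(nu_goal - nu_bar, nu_bar) * W
```

Unlike SS, the scale factor now differs across columns j, which introduces competition among the inputs converging onto a postsynaptic neuron.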

Although the learning dynamics, equations 2.1 and 2.2, and the neural and synaptic dynamics form a closed system, this system is difficult to analyze mathematically. Instead, we focus on the learning dynamics, equations 2.3 and 2.5, and show that network dynamics depend only on the learning rule chosen, not on the specific neural dynamics and synaptic kinetics.

2.3.  Simulated Network Dynamics Is Stable with PSD, Not SS.

We conducted a series of numerical simulations to study learning dynamics with a recurrent neural network consisting of 500 (400 excitatory and 100 inhibitory) integrate-and-fire neurons and excitatory AMPA and NMDA and inhibitory GABAA kinetic synapses. (Neural network model and simulation details can be found in the appendix.)

Neural activity patterns under SS are presented in Figure 1A. At τ=1, only the stimulated neurons are active. As learning proceeds, more neurons fire, and by τ=169 nearly all neurons fire. Only one trial later, at τ=170, excitation explodes. Such a sharp transition indicates excitation explosion, or what has been termed a synfire explosion (Mehring, Hehl, Kubo, Diesmann, & Aertsen, 2003; Vogels, Rajan, & Abbott, 2005; Destexhe & Contreras, 2006). At τ=173, activity decreases quickly. In contrast, activity patterns under PSD are significantly different (see Figure 1B): neurons fire in a stable manner as learning proceeds. Network activity at τ=200, 300, and 500 represents typical patterns of different learning phases. In contrast with SS, the final neural activity is stable, and no excitation explosion is observed at any point during the simulation.

Figure 1:

Network dynamics are unstable under SS and stable under PSD. (A) Raster patterns under SS at τ=1, 169, 170, and 173. (B) Raster patterns under PSD at τ=1, 200, 300, and 500. Black dots are spikes. (C) Mean firing rate averaged over all neurons exhibits large oscillations under SS and stably converges to the target under PSD. (D) The discrete derivative of the mean firing rate indicates the degree of jump discontinuity: excitation explosion is exhibited under SS but suppressed under PSD. In panels A and B, neuron indexes (y-axis) are sorted according to their spiking time after learning. Data under SS are gray and under PSD black in all figures.


Pathological excitation explosion can be formalized as a sharp discontinuity, in a mathematical sense, since it exhibits a large jump of neural activity between two consecutive trials or within a short time period (Mehring et al., 2003), which indicates the instability of the learning rule. Stability can therefore be defined as (1/N) Σi |ν̄(τ+1)i − ν̄(τ)i| < ε, that is, the discrete derivative of the average firing rate over the whole network is less than a small number (ε≪1). Using equation 2.4, this condition becomes (αν/N) Σi |ν(τ)i − ν̄(τ)i| < ε. Given that αν is small and N is large, when network dynamics are stable, one expects (1/N) Σi |ν(τ)i − ν̄(τ)i| to be small, with ε≪1.
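This stability criterion is easy to check numerically; a sketch, assuming the per-neuron average rates have been recorded over trials (names are illustrative):

```python
import numpy as np

def stability_epsilon(nu_bar_seq):
    """Discrete derivative of the network-averaged firing rate.

    nu_bar_seq has shape (trials, N): the average rate of each
    neuron at each trial.  Returns the jump of the network mean
    between consecutive trials; learning is stable when all
    jumps are << 1.
    """
    mean_rate = nu_bar_seq.mean(axis=1)   # <nu_bar>(tau)
    return np.abs(np.diff(mean_rate))     # jump between trials
```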

Stability is visualized in the plot of the mean firing rate ⟨ν̄⟩, averaged over all neurons, as a function of learning trial τ. As shown in Figure 1C, ⟨ν̄⟩ under SS oscillates with jumps, whereas under PSD, ⟨ν̄⟩ develops stably and converges gradually to the target firing rate, νgoal=1. Learning dynamics can also be described by the derivative of this curve, Δ⟨ν̄⟩/Δτ. Figure 1D shows a clear jump discontinuity under SS, with a larger bound ε<0.5, but this discontinuity is prevented by PSD, with a small bound ε<0.01.

Note that in Figure 1C, the activity goes up quickly and then goes down slowly. Such a sharp upstroke is a signature of SS. As shown in equation 2.3, the change of synaptic weights under SS depends only on postsynaptic neural activity. Therefore, under SS, all synaptic weights onto a postsynaptic neuron are scaled up by the same amount, each contributing an excitatory postsynaptic potential (EPSP, denoted Vss) induced by the increased weight. A large number of synapses (denoted Nss) converge on the same postsynaptic neuron, and each induces an EPSP of size Vss. There is no competition among these synapses, and the ratios between their weights remain unchanged. Thus, the overall increase in drive to this postsynaptic neuron is Nss×Vss, which is large enough to make the neuron fire much faster. At the network level, activity therefore increases quickly, resulting in a sharp upstroke or excitation explosion. Changing parameters alters only the magnitude and period of the oscillations; the excitation explosion remains. The instability of SS also persists when noise is considered, as shown in Liu and Buonomano (2009).

2.4.  Homeostatic Control Realized by PSD, Not SS.

The reason for this instability of SS is that the presynaptic index j does not enter the weight change: the contribution of Δw depends only on postsynaptic neural activity. Therefore, if the synaptic matrix is summed over all presynaptic elements for each postsynaptic index, a vector can be defined as sw(τ)i≔∑jw(τ)ij, denoted the prestrength vector. One expects sw(τ)i to change uniformly over learning trials by a common scale, without changing the inner ratio of presynaptic strengths converging onto a postsynaptic neuron i. Essentially, synaptic competition is absent under SS. Similarly, one can define a vector sw(τ)j≔∑iw(τ)ij as the poststrength vector.
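The pre- and poststrength vectors are straightforward to compute; a sketch with the index convention W[i, j] (presynaptic j to postsynaptic i):

```python
import numpy as np

def strength_vectors(W):
    """Prestrength sw_i = sum_j W[i, j] (total input onto
    postsynaptic neuron i) and poststrength sw_j = sum_i W[i, j]
    (total output of presynaptic neuron j)."""
    return W.sum(axis=1), W.sum(axis=0)
```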

Figure 2A shows that the elements of the prestrength vector swi under SS are distributed uniformly within one trial, except for the first 24 neurons, which are stimulated as an input. The same panel also shows that swi are scaled globally across the trials τ=1, 100, and 300. This global scaling is a typical signature of SS, reflecting the absence of synaptic competition among the synapses projecting onto a postsynaptic neuron. In Figure 2B, however, swi under PSD are distributed and scaled heterogeneously, particularly at trial τ=300. The absence of a single global scaling across trials under PSD stems from the strong synaptic competition among all synapses of the network. Similarly, in Figure 2C, poststrengths swj under SS are scaled globally across trials, even though they are distributed less uniformly within one trial. In Figure 2D, swj under PSD are heterogeneous both within one trial and across trials. Note that stimulated neurons have larger values of swj because they fire all the time during learning, and synapses from them are preferentially potentiated under PSD. Further examination of the standard deviations σ of the distributions, in Figure 2E, shows that they are significantly different and separated under PSD and nearly overlapping under SS, which indicates that synaptic competition is missing under SS and present under PSD.

Figure 2:

Synaptic competition is realized by PSD, not SS. (A) Prestrengths swi under SS are distributed uniformly within one trial and scaled globally across different trials τ=1, 100, and 300. (B) swi under PSD are distributed and changed heterogeneously, particularly at τ=300. (C) Poststrengths swj under SS are scaled globally across trials, even though they are distributed less uniformly within one trial. (D) swj under PSD are heterogeneous both within one trial and across trials. (E) Standard deviations σsw of SSpre in A, PSDpre in B, SSpost in C, and PSDpost in D are significantly different and separated under PSD and nearly overlapping under SS, which indicates that synaptic competition is missing under SS but exhibited under PSD.


3.  Matrix Analysis of Learning Dynamics

3.1.  Matrix Form of SS.

When the SS rule is applied, we obtain the following equation:
W(τ+1) = D(τ)W(τ)
3.1
where W(τ+1) is the synaptic matrix. The matrix D(τ), with diagonal elements D(τ)ii = 1 + αw(νgoal − ν̄(τ)i), describes the change of synaptic weights due to learning and can be defined as a one-step transition matrix between two consecutive time steps. Note that this matrix has a simple diagonal structure, with each element linearly proportional to the average activity of the postsynaptic neuron. It is easy to show by recurrence that equation 3.1 becomes
W(τ+1) = D̃(τ)W(1), D̃(τ) = D(τ)D(τ−1)⋯D(1)
3.2
where D̃(τ) is diagonal with elements ∏s=1,…,τ [1 + αw(νgoal − ν̄(s)i)] and can be defined as an all-step transition matrix, since it includes the whole history of the learning process. Thus, SS is essentially governed by equations 3.1 and 3.2, which satisfy
‖W(τ+1)‖2 ≤ ‖D(τ)‖2‖W(τ)‖2 and ‖W(τ+1)‖2 ≤ ‖D̃(τ)‖2‖W(1)‖2
3.3
where the spectral norm ‖·‖2 is used since all matrices are square. By definition, the spectral norm of D(τ) is the largest singular value of the matrix:
‖D(τ)‖2 = σ1(D(τ)) = ρ(D(τ)) = maxi |1 + αw(νgoal − ν̄(τ)i)|
3.4
where σ1(·) is the largest singular value, and ρ(·) is the largest eigenvalue (the spectral radius). These two are equal since D(τ) is diagonal. Equation 3.3 can be tested by calculating the ratios
r1 = ‖W(τ+1)‖2/(‖D(τ)‖2‖W(τ)‖2), r2 = ‖W(τ+1)‖2/(‖D̃(τ)‖2‖W(1)‖2)
3.5
which can be used to examine numerical results and find the specific values of r1 and r2. By equation 3.3, both ratios are bounded above by 1.
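The diagonal transition matrix and the ratio r1 of equation 3.5 can be checked on random data; a sketch (inputs are arbitrary test values):

```python
import numpy as np

def ss_transition(nu_bar, nu_goal, alpha_w=0.01):
    """One-step SS transition matrix D of equation 3.1: diagonal
    and linear in the postsynaptic average rates."""
    return np.diag(1.0 + alpha_w * (nu_goal - nu_bar))

rng = np.random.default_rng(0)
W = rng.random((4, 4))        # arbitrary synaptic matrix
nu_bar = rng.random(4)        # arbitrary average rates
D = ss_transition(nu_bar, nu_goal=1.0)
W_next = D @ W                # equation 3.1
# r1 of equation 3.5; submultiplicativity of the spectral norm
# (equation 3.3) guarantees r1 <= 1.
r1 = np.linalg.norm(W_next, 2) / (np.linalg.norm(D, 2) * np.linalg.norm(W, 2))
```

For a diagonal matrix the spectral norm is simply the largest absolute diagonal entry, which is why σ1(D) = ρ(D) in equation 3.4.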

3.2.  Matrix Form of PSD.

Similarly, when the PSD rule is applied, the only difference from SS is the synaptic weight equation, where Δwij depends on both pre- and postsynaptic neural activity. As a result, PSD is more complicated than SS due to the nonlinear interaction (νgoal − ν̄(τ)i)ν̄(τ)j, which induces a more complex mathematical structure. To see this, one can write the matrix equation as
W(τ+1) = T(τ) ∘ W(τ)
3.6
where ∘ denotes the Hadamard product, a special product of matrices with entrywise instead of row- and column-wise multiplication, and T(τ) = J + αwΓ(τ), in which J is the identity matrix under the Hadamard product, with all entries equal to 1, and Γ(τ) denotes the activity matrix with elements Γ(τ)ij = (νgoal − ν̄(τ)i)ν̄(τ)j. Rewriting this equation by recurrence, we get
W(τ+1) = T̃(τ) ∘ W(1), T̃(τ) = T(τ) ∘ T(τ−1) ∘ ⋯ ∘ T(1)
3.7
where T̃(τ) has elements ∏s=1,…,τ [1 + αw(νgoal − ν̄(s)i)ν̄(s)j]. Note that T(τ) is nonlinear with respect to neural activity, and the Hadamard product is an unusual matrix product. Despite these difficulties, and although T(τ) is not symmetric since synapses are directional with respect to neurons, one useful property of the Hadamard product, σ1(A ∘ B) ≤ σ1(A)σ1(B) (Horn & Johnson, 1994), yields
‖W(τ+1)‖2 ≤ σ1(T(τ))σ1(W(τ)) and ‖W(τ+1)‖2 ≤ σ1(T̃(τ))σ1(W(1))
3.8
To compare with SS, we write the ratios r1 and r2 as
r1 = ‖W(τ+1)‖2/(σ1(T(τ))σ1(W(τ))), r2 = ‖W(τ+1)‖2/(σ1(T̃(τ))σ1(W(1)))
3.9
Note that under PSD, the largest singular value σ1(·) is not the largest eigenvalue ρ(·). The behavior of r2 is related to r1, but it is not a simple linear relationship, since r2 is from the multiplication of all transition matrices with the whole history of learning trials. As a result, r2 reflects the mutual coupling of neural and learning dynamics. Since matrix structures are different between two learning rules, one expects the behaviors of r1 and r2 to be different as well.
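The Hadamard-product structure of equations 3.6 and 3.8 can likewise be checked numerically (arbitrary test inputs):

```python
import numpy as np

def psd_transition(nu_bar, nu_goal, alpha_w=0.01):
    """One-step PSD transition matrix T = J + alpha_w * Gamma of
    equation 3.6, with Gamma_ij = (nu_goal - nu_bar_i) * nu_bar_j:
    nondiagonal and nonlinear in the average rates."""
    gamma = np.outer(nu_goal - nu_bar, nu_bar)
    return np.ones_like(gamma) + alpha_w * gamma

rng = np.random.default_rng(1)
W = rng.random((4, 4))
nu_bar = rng.random(4)
T = psd_transition(nu_bar, nu_goal=1.0)
W_next = T * W   # Hadamard (entrywise) product, equation 3.6
# Horn & Johnson: sigma_1(A o B) <= sigma_1(A) * sigma_1(B) (eq. 3.8).
s = np.linalg.svd(W_next, compute_uv=False)[0]
bound = (np.linalg.svd(T, compute_uv=False)[0]
         * np.linalg.svd(W, compute_uv=False)[0])
```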
Figure 3:

The synaptic matrix is convergent under PSD, not SS. (A) r1 (solid line, gray) and r2 (dashed line, gray) are close to the theoretical bound 1 under SS. r1 (solid line, black) and r2 (dashed line, black) are 100-fold less under PSD. (B) The spectral norm of the all-step transition matrix, (C) the largest eigenvalue of synaptic matrix, and (D) the spectral norm of one-step transition matrix are always larger than 1, and convergent under PSD but not convergent under SS.


3.3.  Synaptic Matrix Converges Under PSD, Not SS.

Once the matrix structures of the learning rules are obtained, we reexamine the idea that the stability of network dynamics can be achieved by controlling the synaptic matrix so that the largest eigenvalue is less than 1. In our case, this amounts to requiring that the maximal eigenvalue of the one-step transition matrices D(τ) of SS be less than 1. Given that under SS ρ(D(τ)) = maxi |1 + αw(νgoal − ν̄(τ)i)|, and supposing that the maximum is reached at the index m, we have, after simple calculations,
ρ(D(τ)) < 1  ⟺  νgoal < ν̄(τ)m < νgoal + 2/αw
3.10
It is a sufficient condition to achieve stability for the synaptic weight equation, as discussed previously (Rajan & Abbott, 2006; Siri et al., 2007, 2008; Goldman, 2009). However, ν̄(τ)m is an average firing rate that starts from 0 and converges to the equilibrium firing rate νgoal. Ideally, we have ν̄(τ)m ≤ νgoal without oscillations, or ν̄(τ)m → νgoal with decaying oscillations. Therefore, there is no index m satisfying the sufficient condition, and this condition cannot hold in our case. This rules out any constraint on the synaptic matrix such that ρ(W(τ)) < 1 during and after learning; the condition ρ(W(τ)) < 1 may turn out to be unrealistic, or simply not necessary. Instead, one expects the ratios r1 and r2 to carry useful information about stability, although their actual values may vary from case to case.

Now we analyze the synaptic matrices of SS and PSD. In Figure 3A (gray lines), the ratios r1 and r2 shaped by SS are close to 1, the theoretical upper bound. We find that the largest eigenvalue of the synaptic matrix, ρ(W(τ)), in Figure 3C (gray lines), the all-step transition matrix in Figure 3B (gray lines), and the one-step transition matrix, ρ(D(τ)), in Figure 3D (gray lines) are always larger than 1. In particular, ρ(W(τ)) and the all-step norm increase without an upper bound under SS. In contrast, the matrices shaped by PSD (see Figure 3, black lines) are different from those shaped by SS (gray lines). The ratios are significantly less than 1: r1≪1 and r2≪1. The largest eigenvalue of the synaptic matrix under PSD (see Figure 3C, black lines) is also larger than 1, which violates the sufficient condition that it be less than 1. Because the singular values of the all-step and one-step transition matrices under PSD are not their largest eigenvalues, they are much larger than those of SS, as in Figures 3B and 3D (black lines). Under both SS and PSD, the finding that the largest eigenvalue of the synaptic matrix W exceeds 1 contrasts with the traditional viewpoint.

4.  Discussion

In this work, we have studied analytically and numerically two types of homeostatic synaptic scaling learning rules in recurrent neural networks. In particular, the underlying mathematical structures of learning rules are identified. The difference is captured by the transition matrix between synaptic matrices, which is diagonal and linear under SS but nondiagonal and nonlinear under the Hadamard product under PSD. Through numerical simulations, we have confirmed that SS generates an unstable excitation explosion, and PSD gives stable network dynamics. Furthermore, the stable PSD learning produces a synaptic matrix in which the largest eigenvalue is larger than 1. These results, together with recent experiments (Goel & Lee, 2007; Kim & Tsien, 2008), suggest that homeostatic synaptic scaling is dependent on both pre- and postsynaptic neural activity.

Note that the above analysis of learning rules is independent of the underlying neural dynamics. To further confirm that our results are unrelated to specific dynamics of neurons and synapses, we simulated a neural network with binary excitatory neurons without synaptic decaying dynamics and obtained qualitatively similar results (see the supplemental material available online at http://www.mitpressjournals.org/doi/suppl/10.1162/NECO_a_00210).

4.1.  Stability of Learning Dynamics.

Biologically, the question of how recurrent networks develop functional dynamics and avoid excitation explosion is critical to understanding cortical function. A stable learning rule should generate convergent dynamics within a neural network without pathological activity (Vogels et al., 2005; Destexhe & Contreras, 2006; Frohlich et al., 2008). The most straightforward way to describe excitation explosion over the course of learning is to use the derivative of the average firing rate curve, Δ⟨ν̄⟩/Δτ. The different values of ε characterize the degree of excitation explosion. Therefore, stability is obtained when ε≪1, which occurs under PSD learning. Instability, with a sharp discontinuity and excitation explosion, is observed under SS learning.

Stability analysis has been intensively studied in the literature of artificial neural networks and machine learning (Hertz, Krogh, & Palmer, 1991) with a goal of controlling the stability of the synaptic matrix. Here we have focused on what would correspond to the development of a cortical network. In our simulation, the network develops from an initial state, in which all synapses are weak and activity does not propagate, to the one in which stimuli elicit network-wide activity. This scenario is observed in cortical networks in vitro (Johnson & Buonomano, 2007), where the underlying synaptic matrix may be shaped by learning dynamics to avoid excitation explosion.

When a stimulus is presented, it is the learning dynamics that make the activity develop in a stable manner and the synaptic matrix converge to a stable state. The stability condition of learning rules may be more complicated than what is expected by controlling the largest eigenvalue. We find that the ratio r1 is close to the theoretical upper bound 1 under SS but is much less than 1 under PSD. The classical analysis of the Hebbian rule requires the synaptic matrix to be controlled with ρ(W)<1, which is a sufficient condition. However, this condition fails in our results. The largest eigenvalues of all matrices under SS and PSD are larger than 1. We speculate that r1≪1 is a necessary and sufficient condition for the stability of homeostatic synaptic scaling, and we suggest that r1 is an important diagnostic variable for the stability. r1 may play the role of order parameter as in a phase transition in statistical physics. It is likely that calculating r1 for a number of learning rules and plotting them all together can generate a phase diagram of stability of learning rules, in which stable rules have r1≪1 and unstable rules have larger r1. Then in this phase diagram with r1 as an order parameter, PSD resides nearly at the boundary point 0, and SS is close to the boundary point 1. In this way, each learning rule has a unique r1 associated with its stability property.

Interestingly, and consistent with our results, a recent study shows that chaotic neural networks can generate coherent patterns of activity even though the real parts of many eigenvalues are greater than 1, both before and after training (Sussillo & Abbott, 2009). Even when the trained network is not chaotic, eigenvalues with real parts greater than 1 remain after training. These results suggest that different learning rules can generate stable network dynamics through different solutions, in which the synaptic matrix, with or without larger-than-1 eigenvalues, is shaped by learning dynamics in a stable manner.

4.2.  Generalizing to Other Learning Rules.

In the classical models of Hebbian learning, Hebb's postulate is rephrased as modifications of synaptic weights driven by correlations in the firing activity of pre- and postsynaptic neurons, which is often taken as an additive form without the scaling factor, Δwij ∝ F(νi × νj). Most classic theoretical studies represent the activity of pre- and postsynaptic neurons in terms of firing rates with different functional forms F(·) (Gerstner & Kistler, 2002). However, the unique feature of homeostatic synaptic scaling is that the change of weights has a multiplicative form, Δwij ∝ F(νi × νj)wij.

In general, synaptic learning rules can be classified along two dimensions: multiplicative versus additive rules, depending on whether the scale factor wij appears in the weight change, and firing-rate versus spike-timing rules, depending on which type of F is used. Combinations of these two categories give four specific types of learning rules. Synaptic scaling is a multiplicative firing-rate rule, in which F of the SS rule depends only on the postsynaptic firing rate, whereas F of the PSD rule depends on both pre- and postsynaptic firing rates. The matrix analysis conducted in this study can be applied to other learning rules, even when spike timing is considered.

As long as presynaptic activity (firing rate or spike timing) enters a particular learning rule, the matrix structure of that rule also uses the Hadamard product. In recent years, spike-timing-dependent plasticity (STDP) has been identified experimentally and studied intensively (Bi & Poo, 2001; Morrison, Diesmann, & Gerstner, 2008). Since STDP needs information from both pre- and postsynaptic spike times, the matrix analysis we have explored can be applied. Other presynaptic-dependent rules have been proposed in the literature, such as heterosynaptic depression among all input synapses, which has been shown to generate stable activity sequences within recurrent networks (Fiete, Senn, Wang, & Hahnloser, 2010). However, there is no systematic way to study the stability of these rules. Furthermore, it will be interesting to study the case where two or more learning rules are used together (Liu & Buonomano, 2009; Fiete et al., 2010; Clopath, Büsing, Vasilaki, & Gerstner, 2010); in this case the order parameter r1 may take different values, and the stability of combined learning rules may have a different stable phase. Our analysis may offer a clue to this issue, although further studies are needed.

Appendix:  Simulation of Spiking Neural Network

We used the same neural network model as described by Liu and Buonomano (2009), with simulations performed in NEURON (Hines & Carnevale, 1997) with a time step Δt=0.1 ms. Code written in C++ generated similar results (Liu & She, 2009). The code can be downloaded from the author's home page.

A.1.  Neural Dynamics.

The single neuron is modeled as an integrate-and-fire neuron, in which the membrane potential V, when V<Vthr, is described as
C dV/dt = −gL(V − Vrest) − gAHP(V − EAHP) + Isyn
where Vrest is the resting potential, EAHP is the afterhyperpolarization reversal potential, and membrane time constants are 30 ms for all excitatory (E) (gL=0.1 μS/cm2; C=3 μF/cm2) and inhibitory (I) neurons (gL=0.1 μS/cm2; C=1 μF/cm2). Neurons are heterogeneous in the sense that firing thresholds Vthr are drawn from a normal distribution (σ2=5% of the mean) with the mean for the E(I) cells at −40(−45) mV. When Vthr is reached at the spiking time tspk, V is set to Vpeak=40 mV for the duration of the spike (τdur=1 ms). After the spike, V is reset to the repolarizing potential Vreset=−60(−65) mV for the E(I) cells; at the same time, the afterhyperpolarization conductance gAHP is turned on and decays as τAHP dgAHP/dt = −gAHP, where τAHP=10(2) ms for the E(I) cells. The Dirac function δ is used to apply a stepwise increment to gAHP for the E(I) cells whenever a spike occurs. The refractory period is τref=2 ms for all neurons.
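A minimal Euler-integration sketch of these dynamics (Vrest, EAHP, and the AHP increment dg_ahp are illustrative assumptions, not values from the text; spike duration and refractoriness are omitted):

```python
import numpy as np

def lif_step(V, g_ahp, I_syn, dt=0.1, C=3.0, g_L=0.1, V_rest=-60.0,
             V_thr=-40.0, V_reset=-60.0, E_ahp=-70.0, tau_ahp=10.0,
             dg_ahp=0.05):
    """One Euler step of a leaky integrate-and-fire neuron in the
    spirit of A.1 (excitatory-cell constants; V_rest, E_ahp, and
    dg_ahp are assumed values for illustration)."""
    spiked = V >= V_thr
    V = np.where(spiked, V_reset, V)     # reset after a spike
    g_ahp = g_ahp + dg_ahp * spiked      # stepwise AHP increment
    # Leak, afterhyperpolarization, and synaptic current drive V.
    dV = (-g_L * (V - V_rest) - g_ahp * (V - E_ahp) + I_syn) / C
    g_ahp = g_ahp * np.exp(-dt / tau_ahp)  # AHP decay
    return V + dt * dV, g_ahp, spiked
```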

A.2.  Synaptic Dynamics.

Short-term plasticity is incorporated in all synapses and modeled as (Markram, Wang, & Tsodyks, 1998; Izhikevich, 2003):
dR/dt = (1 − R)/τrec − uRδ(t − tspk)
du/dt = (U − u)/τfac + U(1 − u)δ(t − tspk)
where R (u) is the short-term depression (facilitation) variable with time constant τrec (τfac), subject to a pulsed decrease uRδ(t−tspk) (increase U(1−u)δ(t−tspk)) at each spike time tspk. The cumulative synaptic efficacy at any time is the product Ru, which is incorporated into the single-synapse dynamics below. Specifically, E→E synapses exhibit depression: U=0.5, τrec=500 ms, τfac=10 ms; E→I synapses exhibit facilitation: U=0.2, τrec=125 ms, τfac=500 ms. All inhibitory synapses exhibit depression, as basket cell synapses do (Gupta, Wang, & Markram, 2000): U=0.25, τrec=700 ms, τfac=20 ms.
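The short-term plasticity updates can be sketched per presynaptic spike; the ordering of the facilitation and depression updates at the spike is one common convention and is an assumption here:

```python
import numpy as np

def stp_spike(R, u, dt_isi, U=0.5, tau_rec=500.0, tau_fac=10.0):
    """Update the depression (R) and facilitation (u) variables of
    the Markram-Tsodyks model at a presynaptic spike arriving
    dt_isi ms after the previous one (E->E parameters from A.2).

    Returns the updated (R, u) and the efficacy R*u that scales
    the synaptic weight for this spike.
    """
    # Recovery of R and decay of u between spikes.
    R = 1.0 - (1.0 - R) * np.exp(-dt_isi / tau_rec)
    u = U + (u - U) * np.exp(-dt_isi / tau_fac)
    # Pulsed facilitation, then use-dependent depression at the spike.
    u = u + U * (1.0 - u)
    efficacy = R * u
    R = R - u * R
    return R, u, efficacy
```

With the E→E parameters, closely spaced spikes deplete R faster than u recovers, so repeated activation depresses the synapse.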
Each neuron receives four possible synaptic currents, summed over its afferent synapses:

Isyn(t) = Σx gx Ru rx(t − τd)(Ex − V),

where x runs over the receptor types of the afferent synapses, Ex is the corresponding reversal potential, and synaptic delays are uniformly distributed, τd ∈ [0, 2] ms. The receptor activation r(t) for the fast AMPA and GABAA dynamics follows the two-state kinetic model (Destexhe, Mainen, & Sejnowski, 1994):

dr/dt = αT(1 − r) − βr,
where α=1.5 ms−1nM−1 and β=0.75 ms−1 for AMPA; α=0.5 ms−1nM−1 and β=0.25 ms−1 for GABAA. The presynaptic transmitter concentration is T=1 nM during each presynaptic spike and zero otherwise. NMDA receptor activation is modeled with an additional slow gating variable x (Golomb, Wang, & Rinzel, 1994; Buonomano, 2000):

dr/dt = α σ∞(x)(1 − r) − βr,
dx/dt = −x/τs + γ δ(t − tspk),
σ∞(x) = 1/(1 + exp(−(x − θ)/σ)),

where for NMDA, α=0.06 ms−1, β=0.01 ms−1, τs=50 ms, γ=0.5, θ=0.3, σ=0.5. In all synapses, the factor Ru implements short-term plasticity. The ratio of NMDA to AMPA synaptic weights is fixed as gNMDA=0.6gAMPA for all E-cells.
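For the fast receptors, the two-state scheme can be simulated directly. The sketch below drives dr/dt = αT(1 − r) − βr with a 1 nM transmitter pulse per presynaptic spike; tying the pulse length to the 1 ms spike duration is an assumption of this sketch.

```python
import numpy as np

def receptor_activation(spike_times, alpha=1.5, beta=0.75,
                        T_amp=1.0, pulse=1.0, dt=0.1, t_end=50.0):
    """Two-state kinetic receptor (AMPA defaults): dr/dt = aT(1-r) - br."""
    n = int(t_end / dt)
    T = np.zeros(n)                       # transmitter concentration (nM)
    for t in spike_times:
        i0 = int(round(t / dt))
        T[i0:i0 + int(round(pulse / dt))] = T_amp   # 1 ms square pulse
    r = np.zeros(n)
    for i in range(1, n):                 # forward Euler
        r[i] = r[i-1] + dt * (alpha * T[i-1] * (1.0 - r[i-1]) - beta * r[i-1])
    return r

r = receptor_activation([10.0])  # single AMPA event at t = 10 ms
```

During the pulse, r relaxes toward αT/(αT + β) = 2/3 with time constant 1/(αT + β) ≈ 0.44 ms; afterward it decays with 1/β ≈ 1.3 ms, so a single event produces a brief conductance transient.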

A.3.  Network Topology and Parameters.

All simulations are performed on a network of 400 excitatory (E) and 100 inhibitory (I) neurons, connected with probability 0.12 for E→E and 0.2 for both E→I and I→E. As a result, each postsynaptic E-neuron receives on average 48 inputs from other E-neurons and 20 inputs from I-neurons, and each postsynaptic I-neuron receives 80 inputs from E-neurons. Initial synaptic weights are drawn from normal distributions with means WEE=2/48 nS, WEI=1/80 nS, and WIE=2/20 nS and standard deviations σEE=2WEE, σEI=8WEI, and σIE=8WIE. Nonpositive draws are replaced by values drawn uniformly between 0 and twice the mean. To avoid the unphysiological state in which a single presynaptic neuron can fire a postsynaptic neuron, the maximal E→E AMPA synaptic weight is WEEmax=1.5 nS, and the maximal E→I AMPA synaptic weight is WEImax=0.4 nS. All inhibitory synaptic weights are fixed. For the learning rules, αw=0.01, and the target activity νgoal is set to 1(2) Hz for the E(I)-cells. A stimulus comprises 24 randomly selected E-cells and 12 I-cells firing at 1 Hz. Input spike times are drawn from a normal distribution of 10±1 ms (mean ± SD) relative to the onset of each 1 s period, so that one subset of cells fires at the beginning of each period. The selected input cells are activated by an excitatory postsynaptic current at 1 Hz.
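The connectivity and initial weights can be generated as follows. This is a sketch of the construction described above (it does not exclude autapses or implement the learning rules), and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_weights(n_pre, n_post, p, mean, sd, w_max=None):
    """Random connectivity with probability p and normal initial weights.

    Nonpositive draws are redrawn uniformly on (0, 2*mean); weights are
    optionally capped at w_max, as for the E->E and E->I AMPA synapses.
    """
    conn = rng.random((n_post, n_pre)) < p              # adjacency mask
    w = rng.normal(mean, sd, size=(n_post, n_pre))
    bad = w <= 0.0
    w[bad] = rng.uniform(0.0, 2.0 * mean, bad.sum())    # redraw nonpositives
    if w_max is not None:
        w = np.minimum(w, w_max)                        # hard weight cap
    return conn * w

W_EE = build_weights(400, 400, 0.12, 2.0 / 48, 2 * (2.0 / 48), w_max=1.5)
W_EI = build_weights(400, 100, 0.20, 1.0 / 80, 8 * (1.0 / 80), w_max=0.4)
W_IE = build_weights(100, 400, 0.20, 2.0 / 20, 8 * (2.0 / 20))
```

With these probabilities the expected in-degrees match the text: 0.12 × 400 = 48 E inputs and 0.2 × 100 = 20 I inputs per E-neuron, and 0.2 × 400 = 80 E inputs per I-neuron.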

Acknowledgments

We thank Dean Buonomano, Tiago Carvacrol, and Tyler Lee for helpful discussions, and Nicolas Brunel, Claudia Clopath, Omri Harish, and David Higgins for comments and careful reading of the manuscript. This work was partially supported by the ANR-BBSRC Grant VESTICODE.

References

Bi, G., & Poo, M. (2001). Synaptic modification by correlated activity: Hebb's postulate revisited. Annu. Rev. Neurosci., 24, 139–166.

Broome, B. M., Jayaraman, V., & Laurent, G. (2006). Encoding and decoding of overlapping odor sequences. Neuron, 51(4), 467–482.

Buonomano, D. V. (2000). Decoding temporal information: A model based on short-term synaptic plasticity. J. Neurosci., 20(3), 1129–1141.

Buonomano, D. V. (2005). A learning rule for the emergence of stable dynamics and timing in recurrent networks. J. Neurophysiol., 94(4), 2275–2283.

Clopath, C., Büsing, L., Vasilaki, E., & Gerstner, W. (2010). Connectivity reflects coding: A model of voltage-based STDP with homeostasis. Nat. Neurosci., 13(3), 344–352.

Destexhe, A., & Contreras, D. (2006). Neuronal computations with stochastic network states. Science, 314(5796), 85–90.

Destexhe, A., Mainen, Z. F., & Sejnowski, T. J. (1994). An efficient method for computing synaptic conductances based on a kinetic model of receptor binding. Neural Comput., 6(1), 14–18.

Fiete, I. R., Senn, W., Wang, C. Z., & Hahnloser, R. H. (2010). Spike-time-dependent plasticity and heterosynaptic competition organize networks to produce long scale-free sequences of neural activity. Neuron, 65(4), 563–576.

Frohlich, F., Bazhenov, M., & Sejnowski, T. J. (2008). Pathological effect of homeostatic synaptic scaling on network dynamics in diseases of the cortex. J. Neurosci., 28, 1709–1720.

Gerstner, W., & Kistler, W. (2002). Spiking neuron models: Single neurons, populations, plasticity. Cambridge: Cambridge University Press.

Goel, A., & Lee, H. K. (2007). Persistence of experience-induced homeostatic synaptic plasticity through adulthood in superficial layers of mouse visual cortex. J. Neurosci., 27(25), 6692–6700.

Goldman, M. S. (2009). Memory without feedback in a neural network. Neuron, 61(4), 621–634.

Golomb, D., Wang, X. J., & Rinzel, J. (1994). Synchronization properties of spindle oscillations in a thalamic reticular nucleus model. J. Neurophysiol., 72(3), 1109–1126.

Gupta, A., Wang, Y., & Markram, H. (2000). Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science, 287(5451), 273–278.

Hahnloser, R. H. R., Kozhevnikov, A. A., & Fee, M. S. (2002). An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature, 419(6902), 65–70.

Hertz, J., Krogh, A., & Palmer, R. (1991). Introduction to the theory of neural computation. Reading, MA: Addison-Wesley.

Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Comput., 9(6), 1179–1209.

Horn, R., & Johnson, C. (1994). Topics in matrix analysis. Cambridge: Cambridge University Press.

Houweling, A. R., Bazhenov, M., Timofeev, I., Steriade, M., & Sejnowski, T. J. (2005). Homeostatic synaptic plasticity can explain post-traumatic epileptogenesis in chronically isolated neocortex. Cereb. Cortex, 15(6), 834–845.

Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Trans. Neural Netw., 14(6), 1569–1572.

Johnson, H. A., & Buonomano, D. V. (2007). Development and plasticity of spontaneous activity and up states in cortical organotypic slices. J. Neurosci., 27(22), 5915–5925.

Karmarkar, U. R., & Dan, Y. (2006). Experience-dependent plasticity in adult visual cortex. Neuron, 52, 577–585.

Kim, J., & Tsien, R. W. (2008). Synapse-specific adaptations to inactivity in hippocampal circuits achieve homeostatic gain control while dampening network reverberation. Neuron, 58(6), 925–937.

Liu, J. K., & Buonomano, D. V. (2009). Embedding multiple trajectories in simulated recurrent neural networks in a self-organizing manner. J. Neurosci., 29(42), 13172–13181.

Liu, J. K., & She, Z. S. (2009). A spike-timing pattern based neural network model for the study of memory dynamics. PLoS ONE, 4(7), e6247.

Markram, H., Wang, Y., & Tsodyks, M. (1998). Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci. USA, 95(9), 5323–5328.

Mehring, C., Hehl, U., Kubo, M., Diesmann, M., & Aertsen, A. (2003). Activity dynamics and propagation of synchronous spiking in locally connected random networks. Biol. Cybern., 88(5), 395–408.

Morrison, A., Diesmann, M., & Gerstner, W. (2008). Phenomenological models of synaptic plasticity based on spike timing. Biol. Cybern., 98(6), 459–478.

Nelson, S. B., & Turrigiano, G. G. (2008). Strength through diversity. Neuron, 60(3), 477–482.

Pastalkova, E., Itskov, V., Amarasingham, A., & Buzsaki, G. (2008). Internally generated cell assembly sequences in the rat hippocampus. Science, 321(5894), 1322–1327.

Pozo, K., & Goda, Y. (2010). Unraveling mechanisms of homeostatic synaptic plasticity. Neuron, 66(3), 337–351.

Rajan, K., & Abbott, L. F. (2006). Eigenvalue spectra of random matrices for neural networks. Phys. Rev. Lett., 97(18).

Ramakers, G. J., Corner, M. A., & Habets, A. M. (1990). Development in the absence of spontaneous bioelectric activity results in increased stereotyped burst firing in cultures of dissociated cerebral cortex. Exp. Brain Res., 79(1), 157–166.

Renart, A., Song, P. C., & Wang, X. J. (2003). Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks. Neuron, 38(3), 473–485.

Siri, B., Berry, H., Cessac, B., Delord, B., & Quoy, M. (2008). A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks. Neural Comput., 20(12), 2937–2966.

Siri, B., Quoy, M., Delord, B., Cessac, B., & Berry, H. (2007). Effects of Hebbian learning on the dynamics and structure of random networks with inhibitory and excitatory neurons. J. Physiol. Paris, 101(1–3), 136–148.

Sussillo, D., & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4), 544–557.

Turrigiano, G. (2007). Homeostatic signaling: The positive side of negative feedback. Curr. Opin. Neurobiol., 17(3), 318–324.

Turrigiano, G. G., Leslie, K. R., Desai, N. S., Rutherford, L. C., & Nelson, S. B. (1998). Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature, 391(6670), 892–896.

van Rossum, M. C. W., Bi, G. Q., & Turrigiano, G. G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. J. Neurosci., 20(23), 8812–8821.

Vogels, T. P., Rajan, K., & Abbott, L. F. (2005). Neural network dynamics. Annu. Rev. Neurosci., 28, 357–376.