In recurrent networks of leaky integrate-and-fire neurons, mean-field theory has been instrumental in capturing the statistical properties of neuronal activity, such as firing rate distributions. This theory has been applied to networks with either homogeneous synaptic weights and heterogeneous connections per neuron or vice versa. Our work expands mean-field models to include networks with both types of structural heterogeneity simultaneously, particularly focusing on those with synapses that undergo plastic changes. The model introduces a spike trace for each neuron, a variable that rises with each spike and decays in the absence of activity, governed by a degradation rate rp and the neuron's firing rate ν. When the ratio α = ν/rp is sufficiently large, this trace effectively estimates the neuron's firing rate, allowing synaptic weights at equilibrium to be determined by the firing rates of connected neurons. This relationship is incorporated into our mean-field formalism, providing exact solutions for firing rate and synaptic weight distributions at equilibrium in the high-α regime. However, the model remains accurate within a practical range of degradation rates, as demonstrated through simulations with networks of excitatory and inhibitory neurons. This approach sheds light on how plasticity modulates both activity and structure within neuronal networks, offering insights into their complex behavior.

Networks of spiking neurons are complex systems where the structure of connections and the activity patterns generated are deeply intertwined, a relationship often studied using mathematical approaches like the mean-field theory. However, previous studies have primarily focused on networks with limited structural variability, where either the connection strength is nearly identical across the network or the number of connections varies little from one neuron to another. This work takes a step forward by combining both types of structural variability and allowing connection strengths to adapt over time, thereby providing an extended mean-field theory. We derive exact solutions for the distribution of spiking rates and connection strengths at equilibrium and demonstrate their accuracy through numerical simulations, even beyond the defining parameter ranges, offering a more comprehensive and realistic perspective on the interplay between activity and structure in neuronal networks.

Synthetic networks of spiking neurons have been widely used to model the spontaneous activity of neuronal assemblies (Akil, Rosenbaum, & Josić, 2021; Amit & Brunel, 1997a, 1997b; Angulo-Garcia, Luccioli, Olmi, & Torcini, 2017; Cimeša, Ciric, & Ostojic, 2023; Duchet, Bick, & Byrne, 2023; Fusi & Mattia, 1999; Galán, 2008; Gast, Knösche, & Schmidt, 2021; Hartmann, Lazar, Nessler, & Triesch, 2015; Hennequin, Vogels, & Gerstner, 2012; Hutt, Mierau, & Lefebvre, 2016; Koch Ocker, 2023; La Camera, 2022; Lonardoni et al., 2017; Luccioli, Angulo-Garcia, & Torcini, 2019; Ostojic, 2014; Painchaud, Doyon, & Desrosiers, 2022; Pena, Zaks, & Roque, 2018; Sanzeni, Histed, & Brunel, 2022; Tartaglia & Brunel, 2017; Vogels & Abbott, 2005; Zillmer, Brunel, & Hansel, 2009). A common way to model the spiking activity of individual neurons is by means of the so-called leaky integrate-and-fire (LIF) model (Brunel & van Rossum, 2007; Gerstner, 2002). Despite being simple compared with more detailed spiking models (Izhikevich, 2004), the LIF model can reproduce some of the features observed in real neuronal assemblies when implemented on synthetic networks. For example, LIF neurons that receive both excitatory (E) and inhibitory (I) inputs that approximately compensate each other exhibit spike trains that are irregular and compatible with Poisson statistics, in agreement with the spontaneous activity measured experimentally (Shadlen & Newsome, 1994). In network models in which the average E input is compensated by the average I input for all neurons, such a balance can be maintained by the network dynamics because a temporary increase in the E activity rapidly induces an increase in the I activity (and vice versa) due to the recurrent connectivity (Renart et al., 2010; van Vreeswijk & Sompolinsky, 1996).

A clear advantage of the LIF model is that it is amenable to an analytical treatment: The mean-field theory of LIF and balanced neuronal networks can be used to describe the spiking statistics in the stationary state (Brunel, 2000). This theory takes into account not only the mean synaptic input received by individual neurons but also the input fluctuations caused by the irregularity of the spiking process in the presynaptic neurons. In a balanced state, in which the total mean synaptic input is close to zero, the spike emission is mainly driven by these input fluctuations. From the pioneering work of Amit and Brunel (1997a, 1997b), such mean-field formalisms have been used to predict the mean firing rate and firing rate distributions in networks of LIF neurons with various degrees of structural heterogeneity, including networks whose connectivity structure does not change in time (nonplastic) and networks whose synaptic efficacies have been modified by some plasticity mechanism. The term structural heterogeneity here refers either to a neuron-to-neuron variability in the number of inputs received from the network (i.e., the in-degree) or to a synapse-to-synapse variability in the synaptic efficacy (i.e., the synaptic weight).

Previous work has dealt with these two sources of structural heterogeneity separately. On the one hand, mean-field theory has been applied to nonplastic networks with homogeneous E/I synaptic weights and whose in-degrees are either homogeneous over different neurons (Brunel, 2000), slightly heterogeneous (as in networks with Erdős-Rényi connectivity) (Roxin, Brunel, Hansel, Mongillo, & van Vreeswijk, 2011), or determined by an arbitrary joint in-/out-degree distribution (Vegué & Roxin, 2019). In the latter case, the joint degree distribution can include correlations between individual in- and out-degrees, and this was shown to have an important influence on the resulting stationary distribution of firing rates (Vegué & Roxin, 2019). On the other hand, Amit and Brunel studied the case of networks with fixed in-degrees in which E/I synaptic weights are independently drawn from prescribed distributions, including networks whose weight distributions have been previously shaped by a learning process in which different subsets of neurons have been selected to store a set of activity patterns (Amit & Brunel, 1997b).

Together, these contributions (among several others) highlight the fact that any structural heterogeneity (be it a heterogeneity of in-degrees or of synaptic weights) causes a heterogeneity of firing rates in the stationary state and that this should be taken into account by the mean-field equations (Roxin et al., 2011; Vegué & Roxin, 2019). However, when the in-degrees are the same for all neurons, the heterogeneity of synaptic weights can be neglected under some conditions, and this greatly simplifies the mean-field equations and their steady-state solution (Brunel, 2000).

The aim of the present work is to study a rather general scenario in which both sources of structural heterogeneity (the one relative to the degree distribution and that of the weight distribution) are simultaneously taken into account. We assume that the connectivity structure (“who connects to whom”) is determined by a time-invariant “scaffold” on top of which the actual efficacy of every particular connection is defined by the synaptic weight, which can be stable in time or plastic. Hence, we consider networks without structural plasticity that can, however, exhibit synaptic plasticity (Caroni, Donato, & Muller, 2012; Knoblauch, Palm, & Sommer, 2010). The connectivity scaffold is determined by a joint in-/out-degree distribution but is fully random otherwise (i.e., configuration model; Cooper & Frieze, 2004; Newman, 2003). We examine synaptic weights under two distinct conditions. In the first scenario, the weights remain unchanged over time yet exhibit variability among the different synaptic connections. Each weight is independent and conforms to a predetermined distribution (model A). In the second scenario, we explore a dynamic setting where synaptic weights evolve over time, adjusting in response to neuronal activity according to a specific plasticity rule (model B).

As it is common in models of spike-timing-dependent plasticity (STDP), the plasticity in our model is mediated by the introduction of spike traces (Akil et al., 2021; Chen & Jasnow, 2010; di Volo, Burioni, Casartelli, Livi, & Vezzani, 2014; Gerstner, Kistler, Naud, & Paninski, 2014; Morrison, Diesmann, & Gerstner, 2008; Pfister & Gerstner, 2006; Wang & Aljadeff, 2022). A trace associated to one neuron is a variable that increases by a fixed amount every time the neuron emits a spike and decays over time in the absence of neuronal activity. It may represent a chemical signal that is released in response to firing (Pfister & Gerstner, 2006). Its characteristic degradation rate is a measure of how fast the “memory” about the neuron’s spiking history is lost. In this work, we consider a single trace per neuron, in contrast with the usual implementation of plasticity rules based on spike traces where every neuron’s spikes contribute to different traces that differ in their degradation rate (i.e., two traces per neuron in pair-based STDP rules; Morrison et al., 2008; and three or more in triplet models; Pfister & Gerstner, 2006).

If the firing process is stochastic and controlled by an intrinsic firing rate ν, the trace itself is a stochastic variable whose temporal evolution depends on ν and on the trace’s degradation rate, rp. We perform a mathematical analysis of the trace, assuming that the spiking process is Poisson and the firing rate is constant in time. We show that the trace’s stationary probability density can be analytically computed and, in particular, (a) the trace can be rescaled so that it equals the firing rate on average and (b) the fluctuations around this mean vanish in the limit in which the ratio α = ν/rp goes to infinity. Close to this limit, the trace can be used to estimate the underlying firing rate with high accuracy (we say that the trace is reliable).

In our plasticity model, the temporal evolution of each synaptic weight follows an ordinary differential equation (ODE) that depends on the weight itself and on the pre- and postsynaptic traces. Close to the limit of reliable traces, the synaptic weight at equilibrium can be thus expressed as a function of the firing rates underlying the pre- and postsynaptic spiking processes. This is the key ingredient that allows us to link the microscopic description of the neuronal activity (in terms of voltages, spikes, and traces) with its macroscopic mean-field description at equilibrium (in terms of firing rates).

We extend previous mean-field formalisms (Amit & Brunel, 1997a, 1997b; Brunel, 2000; Roxin et al., 2011; Vegué & Roxin, 2019) to networks with the two sources of structural heterogeneity described earlier, without and with plasticity (Models A and B, respectively). The solution to the mean-field equations allows the reconstruction of the firing rate distribution (and the synaptic weight distribution in Model B) at equilibrium. This is done by invoking the central limit theorem (CLT), which allows us to jointly regard the mean and the variance of the input to a given neuron, which a priori depend upon a whole unknown rate (and, in Model B, weight) probability distribution, as a Gaussian random vector that depends on a limited number of unknown statistics. These unknowns are computed by solving the mean-field equations, which specify the dependence of the unknowns on the unknowns themselves because the network is recurrent and, thus, the input and the output statistics are linked. The equations are exact in the limit of reliable traces, but they already provide accurate results when the degradation time constant of the trace, τp = 1/rp, is of the order of seconds. This is shown by comparing the analytically computed rate/weight distributions with those obtained from simulating the whole process on a network.

The Neuronal Dynamics

We consider a network of N LIF neurons. The membrane voltage Vi of a neuron i in the network evolves in time according to
(1)  τ dVi/dt = −Vi(t) + τ Ii(t),
where τ is a time constant and Ii(t) is the input current received from other neurons. Whenever Vi reaches a threshold Vθ, the neuron fires a spike and the voltage is reset to Vr. During a period τr immediately after firing, the neuron is refractory: Its voltage is fixed at Vr and the neuron cannot respond to the stimulation received from other neurons.
The input Ii is modeled as a sum of Dirac delta functions centered at the spike times of the neurons presynaptic to neuron i (plus a synaptic delay). We split this input into a recurrent input coming from the network itself (Ii^rec) and an input coming from a pool of external neurons (Ii^ext):
(2a)  Ii(t) = Ii^rec(t) + Ii^ext(t),
(2b)  Ii^rec(t) = Σ_{j=1}^{N} aij wij(t) Σ_k δ(t − tj^k − dj),
(2c)  Ii^ext(t) = wext Σ_{j=1}^{Kext} Σ_k δ(t − tij^k).
The first sum in Equation 2b runs over the neurons' indices. The second sum runs over the spikes emitted in the past by neuron j, and tj^k denotes the kth spike time of neuron j. The delay in spike transmission is a parameter associated to neuron j, and it is given by dj. The binary matrix A = (aij)_{i,j=1,…,N} and the weighted matrix W(t) = (wij(t))_{i,j=1,…,N} specify the connectivity in the network. The term aij is 1 whenever a connection from neuron j to neuron i exists, and is 0 otherwise. When aij = 1, wij(t) gives the synaptic weight of the connection from j to i at time t. Equations 1, 2a, and 2b thus state that, whenever aij = 1, a spike emitted by the presynaptic neuron j at time tj^k will have an effect on neuron i at time tj^k + dj, and the effect is to make the postsynaptic voltage Vi jump by a magnitude equal to the synaptic weight at this time, wij(tj^k + dj).

The external input defined by Equation 2c has the same structure, but we assume that it is generated from a pool of Kext external neurons that is unique for each postsynaptic neuron i. The time tij^k is the arrival time of the kth spike emitted by the jth external neuron to neuron i. The synaptic weights from the external pool are all assumed to equal a common value wext for the sake of simplicity. While the spikes within the neuronal network are generated from the neurons' voltages crossing the threshold, the external spikes are assumed to come from independent Poisson processes with a fixed rate νext.
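To make the dynamics concrete, the following minimal sketch integrates Equations 1 and 2a–2c with the Euler method on a static inhibitory network (Model A). All parameter values, the wiring routine, and the folding of synaptic delays into a single time step are illustrative simplifications, not the exact simulation setup used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (ms, mV); delays are folded into one time step
N, K = 1000, 25                           # network size, fixed in-degree
tau, dt = 20.0, 0.1                       # membrane time constant, step
V_theta, V_r, tau_r = 20.0, 10.0, 2.0     # threshold, reset, refractory
K_ext, w_ext, nu_ext = 1000, 0.14, 8e-3   # external pool (rate per ms)

# Fixed scaffold A (a_ij = 1 iff j -> i) and static inhibitory weights
A = np.zeros((N, N), dtype=bool)
for i in range(N):
    A[i, rng.choice(N, size=K, replace=False)] = True
W = -rng.gamma(shape=0.45, scale=0.67, size=(N, N)) * A   # E[w] ~ -0.3 mV

V = np.full(N, V_r)
refr = np.zeros(N)                        # remaining refractory time
for _ in range(20_000):
    spiked = V >= V_theta                 # threshold crossings
    V[spiked], refr[spiked] = V_r, tau_r
    active = refr <= 0                    # refractory neurons are frozen
    # leak + recurrent delta inputs + external Poisson delta inputs
    ext = w_ext * rng.poisson(K_ext * nu_ext * dt, size=N)
    rec = W @ spiked.astype(float)
    V[active] += -V[active] / tau * dt + rec[active] + ext[active]
    refr[~active] -= dt
```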

The Connectivity Structure

We call A the binary adjacency matrix and W(t) the weight matrix. Matrix A is fixed (i.e., time independent) so it acts as a structural scaffold that determines which neurons can be connected. It also determines the (nonweighted) in- and out-degree of every neuron i via
(3)  Ki^in = Σ_{j=1}^{N} aij,   Ki^out = Σ_{j=1}^{N} aji.

We assume that A is a random instantiation from an ensemble of possible binary adjacency matrices. The ensemble is characterized by a joint in-/out-degree probability density function (p.d.f.) ρin,out in the following sense: The set of degree pairs {(Ki^in, Ki^out)}_{i=1,…,N} is a sample of N independent instantiations of a two-dimensional random vector that is distributed according to ρin,out.
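A scaffold of this kind can be generated with a directed configuration model. The sketch below is one hypothetical way to do it (the names and the degree distribution are illustrative): it samples a joint degree sequence, balances the total numbers of in- and out-stubs, and matches stubs uniformly at random.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000

# Joint degree sample (here: independent, roughly normal degrees)
k_in = rng.normal(25, 7, size=N).round().clip(min=0).astype(int)
k_out = rng.normal(25, 7, size=N).round().clip(min=0).astype(int)

# Balance the stub counts by discarding surplus stubs at random
diff = k_in.sum() - k_out.sum()
while diff != 0:
    i = rng.integers(N)
    if diff > 0 and k_in[i] > 0:
        k_in[i] -= 1; diff -= 1
    elif diff < 0 and k_out[i] > 0:
        k_out[i] -= 1; diff += 1

# Match in-stubs to out-stubs uniformly at random; rare self-loops and
# multi-edges are simply dropped (the network is sparse)
in_stubs = np.repeat(np.arange(N), k_in)     # receiving neuron i
out_stubs = np.repeat(np.arange(N), k_out)   # sending neuron j
rng.shuffle(out_stubs)
A = np.zeros((N, N), dtype=bool)             # A[i, j] = 1 iff j -> i
A[in_stubs, out_stubs] = True
np.fill_diagonal(A, False)
```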

Contrary to A, matrix W(t) may change in time depending on pre- and postsynaptic neuronal activity. We explore the following two models for W(t) (see Figure 1):

  1. Model A. The structure of W is fixed in time: W(t) = W. Each synaptic weight wij is generated independently of the others from a common arbitrary weight probability distribution.

  2. Model B. The matrix W(t) is plastic: Each weight wij(t) changes in time as a function of the activity of the pre- and postsynaptic neurons j and i. The details of the plasticity mechanism are given in the next section.

Figure 1.

Schematics of the network structure. In both models, the network is built on a scaffold that is fixed and determines which neurons can be connected. The scaffold is such that individual in- and out-degrees follow a prescribed joint distribution defined by a density ρin,out. On top of this scaffold, the synaptic weights vary from one synapse to the next. In model A, weights are fixed in time, and every weight is an independent realization of a common random variable following a prescribed distribution with density ρw. In model B, every weight is plastic, and it varies in time as a function of the spiking traces of the pre- and postsynaptic neurons involved in the connection.


These models can be easily extended to networks with more than a single neuronal type or population (e.g., E and I neurons). For the sake of clarity, the mathematical analysis presented in the main text corresponds to a network with a single population, but we show and discuss results on networks with two populations in some of the main text’s figures. We provide the full mathematical analysis of the extended models in Supporting Information Sections S3 and S5.

The Plasticity Rule

In Model B, for every pair (i, j) of connected neurons, the weight wij from j to i evolves in time as a function of the activities of neurons i and j. As is standard in models of STDP (Gerstner et al., 2014; Morrison et al., 2008), besides its membrane voltage, each neuron has an associated spike trace (also called a local variable in the literature). This trace is a time-dependent variable that keeps a record of the neuron's spiking activity. In particular, the trace Ri of neuron i decays exponentially with a characteristic time constant τp and jumps by a magnitude of 1 every time the neuron fires a spike:
(4)  dRi/dt = −Ri/τp + Σ_k δ(t − ti^k),
where, as in Equation 2b, δ is the Dirac delta function and {ti^k}k is the collection of spike times of neuron i.

The trace Ri represents the concentration of a chemical signal that is released every time neuron i fires and that is degraded over time at a rate rp = 1/τp. This chemical signal could correspond to glutamate bound to its receptors, intracellular calcium, and second messengers, among many others (Pfister & Gerstner, 2006). The constant τp acts as a “memory” parameter: It determines how much the spikes emitted in the past still have an impact on the trace at the present moment. A large τp implies that the trace degradation is slow so the spiking memory is large, and vice versa. In the limit τp → ∞, there is no trace degradation and Ri(t) simply counts the total number of spikes emitted up to time t (assuming that Ri was 0 at time 0). Note that while many models of STDP assume the presence of two or more traces per neuron with distinct characteristic time constants (Morrison et al., 2008; Pfister & Gerstner, 2006), here, we assume a single trace per neuron.

As we show in detail in Supporting Information Section S1, if neuron i fires approximately as a Poisson process with a characteristic firing rate νi, then its trace is a random process that can be used to approximate νi. For this, the trace has to be normalized by τp,
(5)  ν̂i(t) ≔ Ri(t)/τp,
so that, at equilibrium (i.e., when the probability distribution of Ri(t) is independent of t), ν̂i(t) equals the firing rate on average:
(6)  ⟨ν̂i(t)⟩ = νi.
We call ν̂i(t) the normalized trace or the approximate firing rate of neuron i. We will analyze the statistical properties of the normalized trace in The Spike Trace section.
In our plasticity model (Model B), the variation of the synaptic weight at time t depends on the value of the weight at time t and also on the value of the traces associated to the pre- and postsynaptic neurons involved in the connection at time t. Note that this makes our rule simpler than typical STDP rules, in which the instantaneous weight variation at time t depends not only on the value of the traces at t but also on the presence or absence of a pre/postspike at time t (see, e.g., Gerstner et al., 2014). Using the normalized trace defined by Equation 5, we express the instantaneous rate of change of a weight as a function of the weight itself and the pre- and postsynaptic normalized traces:
(7)  dwij/dt = g(ν̂j(t), ν̂i(t), wij(t)).
This relationship mirrors the classical equations for modeling synaptic plasticity based on neuronal firing rates while adhering to the principle of locality (Gerstner & Kistler, 2002).
For the derivation of our mean-field equations, we assume the function g to be such that, in the steady-state solution of Equation 7, the weight wij is expressed as a sum of n ≥ 1 multiplicative functions of the traces, that is,
(8)  wij* = Σ_{k=1}^{n} gk^pre(ν̂j*) gk^post(ν̂i*),
where (ν̂i*, ν̂j*, wij*) are such that g(ν̂j*, ν̂i*, wij*) = 0, and gk^pre and gk^post are arbitrary functions for all k. Notice that if the relation between the weight and the traces at equilibrium takes the general form
(9)  wij* = g*(ν̂j*, ν̂i*),
and g* is of class 𝒞^n, then we can use Taylor's theorem to approximate g* by a sum of n multiplicative functions of ν̂j* and ν̂i*.
For simplicity, we will restrict ourselves to the n = 1 case. In all our results, we take the function g to be of the form
(10)  g(ν̂j, ν̂i, w) = g0 ν̂j ν̂i − (ν̂i² + ε) w,
so that the steady-state solution of Equation 7 is
(11)  wij* = g0 ν̂j* ν̂i* / ((ν̂i*)² + ε),
with
(12)  g^pre(ν̂j*) = g0 ν̂j*,   g^post(ν̂i*) = ν̂i* / ((ν̂i*)² + ε).
The parameter g0 is taken to be positive so that the first term of Equation 10 defines a pure Hebbian rule because the weight increase is larger when the pre- and the postsynaptic rates are simultaneously high (Gerstner & Kistler, 2002; Zenke & Gerstner, 2017). The second parameter, ε, is assumed to be positive and close to zero, ensuring the contribution of −εw to the homeostasis of the plasticity (Zenke & Gerstner, 2017). When g0 = 1 and ε = 0, this rule corresponds to Oja’s (1982) plasticity rule if we replace the normalized traces by the corresponding firing rates.
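As a sanity check on Equations 10 and 11, the following sketch integrates the trace and weight dynamics for a single synapse driven by two independent Poisson spike trains and compares the resulting weight with the predicted fixed point. Parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

dt, n_steps = 1.0, 1_000_000        # ms; a total of 1,000 s
tau_p = 50_000.0                    # trace time constant (50 s)
g0, eps = 1.0, 1e-3                 # plasticity parameters (eps in ms^-2)
nu_pre, nu_post = 5e-3, 10e-3       # true rates: 5 Hz and 10 Hz

R_pre, R_post, w = 0.0, 0.0, 0.0
for _ in range(n_steps):
    # Equation 4: leaky traces driven by Poisson spikes
    R_pre += -R_pre / tau_p * dt + rng.poisson(nu_pre * dt)
    R_post += -R_post / tau_p * dt + rng.poisson(nu_post * dt)
    nu_hat_pre, nu_hat_post = R_pre / tau_p, R_post / tau_p   # Equation 5
    # Equations 7 and 10: dw/dt = g0*nu_pre*nu_post - (nu_post^2 + eps)*w
    w += (g0 * nu_hat_pre * nu_hat_post - (nu_hat_post**2 + eps) * w) * dt

w_star = g0 * nu_pre * nu_post / (nu_post**2 + eps)   # Equation 11
print(w, w_star)   # close when tau_p * nu is large (reliable traces)
```

With τp = 50 s, the products τpν are in the hundreds, so the simulated weight settles close to w*.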

As we pointed out before, the trace and its normalized counterpart are stochastic variables due to the stochastic nature of the spiking process. As we show later, there is, nonetheless, a parameter regime in which the fluctuations of the normalized trace around its average can be neglected. This is the regime for which we develop the mean-field formalism when the weights are plastic.

We are interested in studying macroscopic properties of the system defined by Equations 1 and 2a–2c when the structure of the synaptic weights is heterogeneous and possibly plastic (i.e., given by Models A and B). We use the term macroscopic property to denote a feature that statistically characterizes the neuronal ensemble, regardless of its microscopic details. For example, a microscopic description of the activity of a single neuron in the network is provided by its spike train, that is, the collection of times at which the neuron has spiked. However, a simpler and probably more meaningful measure of the neuron's activity is provided by the average number of spikes the neuron has emitted per unit time, that is, its firing rate. At the network level, the distribution of firing rates in the stationary state is a macroscopic property of the neuronal ensemble that is informative of the overall activity level in the network as well as of how heterogeneous this activity is. We notice that the term stationary here refers to the fact that the firing rates are stable in time. Despite this statistical stability, the underlying stochastic neuronal network remains highly active, with individual neurons still firing irregularly and exhibiting fluctuations. In the case of plastic synaptic weights, we are also interested in determining what the distribution of these weights will be in the stationary state.

The mean-field theory of networks of LIF neurons with homogeneous degrees or homogeneous synaptic weights has been extensively studied (Amit & Brunel, 1997a, 1997b; Brunel, 2000; Roxin et al., 2011; Vegué & Roxin, 2019). The formalism that we present here is an extension of this theory in networks that are heterogeneous in terms of both degrees and synaptic weights. The assumptions for the system to be well described macroscopically by the mean-field theory prevail, namely:

  1. individual neurons fire as Poisson processes, so that Equation 2b is treated as a nondeterministic equation and Equation 1 becomes a stochastic differential equation;

  2. each of these Poisson processes is defined by its characteristic firing rate and they are independent once the firing rates are known;

  3. the absolute value of every synaptic weight is small compared with the threshold Vθ, and the total number of inputs received by each neuron is large.

For Condition 1 to be approximately fulfilled, it is enough that the expectation of the total input current's integral over a time window of length τ (i.e., the quantity μi defined later by Equation 25; see Supporting Information Section S2) be below threshold (Feng, 2003, Chapter 15). In this case, the spiking process is mainly driven by the input fluctuations and it is thus irregular. Condition 2 is fulfilled when the set of presynaptic neurons to a given neuron has a small overlap from one neuron to another. This is accomplished when the network structure is random and sparse (that is, when the in-degrees are small compared with the network's size, Ki^in ≪ N for all i). Condition 3 depends on the degree distribution, on wext, and on the weight distribution chosen or the plasticity rule in place. Note that, according to Conditions 2 and 3, the in-degrees are large in absolute terms but small compared with the network's size. We assume that all these conditions are approximately fulfilled in our networks.

In the following sections, we derive the mean-field equations that allow to analytically predict the firing rate and synaptic weight distributions in the stationary state. We first provide a brief mathematical analysis of the spike trace and its normalized counterpart. To make it clearer and easier to follow, we start by analyzing the much simpler case of a network with statistically equivalent neurons. We move afterward to the heterogeneous scenarios defined by Models A and B.

The Spike Trace

Let us focus on the spike trace of a given random neuron i. If the spike times were known, Equation 4 could be solved analytically, yielding the solution
(13)  R(t) = Σ_k θ(t − t^k) e^{−(t − t^k)/τp},
where θ denotes the Heaviside step function and where we omitted the subscript i to simplify the notation in what follows. In the mean-field formalism, however, we treat the spike train as a stochastic process that is well approximated by a Poisson process. This transforms the trace Equation 4 into a stochastic differential equation whose solution is given by a time-dependent p.d.f., ρ(r, t). This function allows us to compute the probability that, at time t, the trace lies within a given interval [r, r + dr]:
(14)  P{R(t) ∈ [r, r + dr]} = ρ(r, t) dr.
If the firing process that determines the trace jumps is a Poisson process of rate ν, the function ρ(r, t) obeys the partial differential equation
(15)  ∂ρ(r, t)/∂t = (1/τp) ∂[r ρ(r, t)]/∂r + ν [ρ(r − 1, t) − ρ(r, t)],
with
(16)
(see Supporting Information Section S1.1 for details). In particular, the stationary distribution of r, ρ(r), fulfills
(17)  (1/τp) d[r ρ(r)]/dr + ν [ρ(r − 1) − ρ(r)] = 0.
From Equation 15, we can obtain a system of ODEs for the moments of R(t). We denote by ⟨R⟩(t) the expectation of R(t) and by ⟨R̄^n⟩(t) the centered moment of order n ≥ 0 of R(t):
(18)  ⟨R̄^n⟩(t) ≔ ⟨[R(t) − ⟨R⟩(t)]^n⟩.
Clearly, ⟨R̄^0⟩(t) = 1 and ⟨R̄^1⟩(t) = 0 for all t. The equations for the first nontrivial moments (see Supporting Information Section S1.2) are
(19)  d⟨R⟩(t)/dt = −⟨R⟩(t)/τp + ν,   d⟨R̄²⟩(t)/dt = −2⟨R̄²⟩(t)/τp + ν.
Note the similarity between the ODE for ⟨R⟩ and that of R itself, Equation 4: In the averaged version, the spike train Σ_k δ(t − ti^k) has been replaced by the firing rate ν = α/τp. We see that, on average, the variable R approaches α = τpν exponentially, meaning that the rescaled stochastic variable
(20)  ν̂(t) ≔ R(t)/τp
allows us to approximately recover the firing rate ν from R. The equations for the expectation ⟨ν̂⟩(t) and the variance Var[ν̂](t) of ν̂(t) are
(21)  d⟨ν̂⟩(t)/dt = (ν − ⟨ν̂⟩(t))/τp,   dVar[ν̂](t)/dt = −2Var[ν̂](t)/τp + ν/τp².
From this, we derive several conclusions. First, the larger the memory constant τp, the slower the convergence to the stationary distribution. Second, the coefficient of variation (standard deviation-to-mean ratio) for both R and ν̂ at equilibrium is
CV = 1/√(2α),
so the estimation of ν through ν̂ becomes more accurate as the product of the memory constant and the true firing rate increases. In fact, in the limit α → ∞, R and ν̂ are normally distributed at equilibrium and CV → 0 (see Supporting Information Section S1.4). Conversely, for small α, the stationary distributions of R and ν̂ are highly non-Gaussian and they exhibit a large CV. Figure 2 shows typical trajectories and the stationary distribution of ν̂ for different values of ν and τp.
Figure 2.

Trace-dependent estimated firing rate ν̂ as defined by Equations 4 and 20 for different values of the true firing rate ν and the memory constant τp. In each plot, we show three examples of temporal trajectories for ν̂(t) (ν = 1 Hz, ν = 5 Hz, and ν = 10 Hz), all of them starting at t = 0. The dashed lines indicate the predicted mean value at equilibrium, ⟨ν̂⟩ = ν. To the right of each plot, we show the p.d.f. of ν̂ at equilibrium, both analytical (obtained by numerically solving Equation 17 for the density of R and appropriately rescaling) and empirical (obtained by simulating an ensemble of 50,000 independent trajectories and computing the resulting histogram at time t = 5τp).

As shown in this figure, the approximate rate ν̂ can be highly noisy, especially when the product τpν is small. When used to implement a plasticity rule, this variable can thus lead to highly varying synaptic weights, and this could make the firing rates vary accordingly. Thus, a true stationary state of the system, that is, a state in which both weights and rates remain stable in time, can only be reached in the limit of stable traces, that is, in the limit τpν → ∞. For finite values of τpν, the stationary state is only reached approximately. In what follows, we fix τp to be large enough. The implication is that we can reasonably assume that the normalized trace ν̂ is a good approximation of the firing rate ν in the stationary state, that is,
(22)  ν̂i* ≈ νi*,
so that we can rewrite the steady-state solution to the plasticity rule (Equation 11) as
(23)  wij* = g0 νj* νi* / ((νi*)² + ε).

The biological interpretation of this assumption is that the trace is slowly degraded, so the memory of the trace on the spiking activity is large. This allows the synaptic weights to respond only to slow temporal variations of the firing rates.
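The reliability claim can be checked numerically. The sketch below (illustrative parameters, brute-force Monte Carlo) propagates an ensemble of independent traces to equilibrium and compares the empirical mean and coefficient of variation of ν̂ with ν and 1/√(2α):

```python
import numpy as np

rng = np.random.default_rng(3)

nu, tau_p, dt = 5e-3, 2_000.0, 1.0    # 5 Hz, tau_p = 2 s, 1 ms steps
alpha = nu * tau_p                    # here alpha = 10

n_traj = 10_000
R = np.zeros(n_traj)
for _ in range(int(5 * tau_p / dt)):  # ~5 time constants to equilibrate
    R += -R / tau_p * dt + rng.poisson(nu * dt, size=n_traj)
nu_hat = R / tau_p                    # Equation 20

print(nu_hat.mean(), nu)                                     # Equation 6
print(nu_hat.std() / nu_hat.mean(), 1 / np.sqrt(2 * alpha))  # CV check
```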

Introduction to the Mean-Field Formalism

From now on, we assume that our system is in a state characterized by a stationary distribution of firing rates, so in order to simplify the notation, we remove the asterisks (*) on the stationary weights and firing rates.

Under Conditions 1, 2, and 3, the stationary firing rate of a neuron i in the network is given by the equations (Brunel, 2000):
(24)  νi = ϕ(μi, σi),  with ϕ(μ, σ) ≔ [τr + τ√π ∫_{(Vr−μ)/σ}^{(Vθ−μ)/σ} e^{u²} (1 + erf(u)) du]^{−1},
where μi and σi are, respectively, the mean and the standard deviation of the integral of the total input current Ii(t) (Equation 2a) over a time window of length τ, that is,
(25)  μi = τ (Σ_{j=1}^{N} aij wij νj + Kext wext νext),   σi² = τ (Σ_{j=1}^{N} aij wij² νj + Kext wext² νext),
where ν1, …, νN are the stationary firing rates of the neurons in the network (see Supporting Information Section 2 for details). Evaluating Equations 24 and 25 requires knowing what all the firing rates and all the synaptic weights are at equilibrium. However, the firing rate distribution (and the synaptic weight distribution in Model B) are not known a priori: They are the unknowns of our problem. The strategy to solve the problem is based on assuming that, whatever the rate and weight distributions are, the sums over j in Equation 25 are sums taken from independent realizations of a common random vector or realizations of weakly correlated random variables as in Rio (2017) and Rosenblatt (1956). This allows us to apply the CLT to these sums to conclude that they approximately follow a Gaussian distribution that is determined by a few statistical parameters. This step is key because it greatly reduces the space of the system’s unknowns from a whole distribution to a few parameters. The goal is then to compute these parameters. We will soon make these ideas more precise for the different model variations presented earlier.

Network with Equivalent Neurons

We start by studying the simplest scenario: when all the neurons in the network are statistically identical. This occurs when the in-degrees are the same for all the neurons, and the synaptic weights are either the same for all the synapses (Model A) or evolve in time according to the same form of plasticity rule (Model B). This homogeneity results in a homogeneity of firing rates and synaptic weights in both models, so the problem’s unknown is the stationary firing rate ν (together with the stationary weight w in Model B).

We denote the in-degree of every neuron by K. The quantities μi and σi of Equation 25 in this case do not depend on i and reduce to
(26)  μ = τ (K w ν + Kext wext νext),   σ² = τ (K w² ν + Kext wext² νext).
If there is no plasticity in the network (Model A), meaning that all the synaptic weights are set to a known value w, then, according to Equation 24, the stationary firing rate ν is found by solving
(27)  ν = F(ν),
with
(28)  F(ν) ≔ ϕ(μ(ν), σ(ν)), with μ(ν) and σ(ν) given by Equation 26 at the fixed weight w.
If there is a plasticity mechanism of the form described earlier (Model B), the value of all the synaptic weights in the stationary state is specified by the stationary firing rate through w = w(ν), so the equation to solve is Equation 27 with
(29)  F(ν) ≔ ϕ(μ(ν), σ(ν)), where μ(ν) and σ(ν) are given by Equation 26 with w = w(ν) = g0 ν² / (ν² + ε).
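As an illustration, the self-consistency problem of Equations 26–28 can be solved with a one-dimensional root finder. The sketch below (illustrative parameters; SciPy is assumed available) uses the numerically stable identity erfcx(−u) = e^{u²}(1 + erf(u)) for the transfer function of Equation 24:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import erfcx   # erfcx(-u) = exp(u**2) * (1 + erf(u))

# Illustrative parameters (ms, mV); rates are in spikes/ms
tau, tau_r, V_theta, V_r = 20.0, 2.0, 20.0, 10.0
K, K_ext, w_ext, nu_ext = 25, 1000, 0.14, 8e-3

def phi(mu, sigma):
    """Stationary LIF rate (Equation 24) for input moments (mu, sigma)."""
    integral, _ = quad(lambda u: erfcx(-u),
                       (V_r - mu) / sigma, (V_theta - mu) / sigma)
    return 1.0 / (tau_r + tau * np.sqrt(np.pi) * integral)

def residual(nu, w):
    # Equation 26: input moments for fixed in-degree K and weight w
    mu = tau * (K * w * nu + K_ext * w_ext * nu_ext)
    sigma2 = tau * (K * w**2 * nu + K_ext * w_ext**2 * nu_ext)
    return phi(mu, np.sqrt(sigma2)) - nu          # zero at Equation 27

# Model A with a homogeneous inhibitory weight w = -0.3 mV
nu_star = brentq(residual, 1e-6, 0.1, args=(-0.3,))
print(f"stationary rate: {1e3 * nu_star:.2f} Hz")
```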

Heterogeneous Network with No Plasticity (Model A)

We move now to the case of heterogeneous connectivity with synaptic weights that are constant in time. The binary connectivity structure is defined by a joint distribution of in-/out-degrees. The synaptic weights are generated independently from a known weight distribution.

As stated before, in the stationary state, the firing rate of a random neuron i can be computed through Equations 24 and 25. We can rewrite Equation 25 as
(30)  μi = τ (Sμ^i + Kext wext νext),   σi² = τ (Sσ^i + Kext wext² νext),
with
(31)  Sμ^i ≔ Σ_{j=1}^{N} aij wij νj,   Sσ^i ≔ Σ_{j=1}^{N} aij wij² νj.
The key step is to deal with the sums of Equation 31. To simplify the notation, let us reorder the indices of the presynaptic neurons to neuron i so that these indices are now 1, …, Ki, where Ki is the in-degree of neuron i, Ki^in = Ki. This allows us to rewrite Equation 31 as
(32)  Sμ^i = Σ_{j=1}^{Ki} wij νj,   Sσ^i = Σ_{j=1}^{Ki} wij² νj.
In the mean-field formulation, we treat the neurons statistically, so the in-degree Ki is a random variable taken from the in-degree distribution imposed in the network, and once Ki is known, the sums in Equation 32 can be assumed to be sums over Ki independent and identically distributed (i.i.d.) random vectors v1^i, …, v_{Ki}^i,
(33)  S^i ≔ (Sμ^i, Sσ^i) = Σ_{j=1}^{Ki} vj^i,   with vj^i ≔ (wij νj, wij² νj).
The distribution of a presynaptic rate νj does not depend on Ki. This is ensured by the random connectivity structure in the network (beyond the degree distribution imposed), which makes the in-degrees of connected neurons independent random variables (see Supporting Information Sections S3.3 and S3.4 for a proper justification). This would not be the case in an assortative network in which neurons with large in-degrees tend to be connected to neurons with large in-degrees. The independence between in-degrees of connected neurons implies that the firing rate of a presynaptic neuron (which is a function of its in-degree) is also independent of the postsynaptic in-degree.

Also, the postsynaptic firing rate can be assumed to be independent of a presynaptic rate νj when the synaptic weight wij is small enough so that the influence of a single presynaptic neuron on a postsynaptic neuron is negligible. Because of all these reasons, the presynaptic rates ν1, …, νKi can be regarded as i.i.d. random variables whose distribution is independent of i.

Since the weights are also chosen independently from a common weight distribution, the result is that the vectors v1^i, …, v_{Ki}^i are i.i.d. and the distribution that characterizes them is independent of the postsynaptic neuron i. This "universality" feature of the set of vectors {vj^i}i,j is key in our mean-field calculation. Let
(34)  m ≔ (mμ, mσ),   Σ ≔ [sμ², cμσ; cμσ, sσ²]
be the mean vector and the covariance matrix of vj^i, that is,
(35)  mμ = E[wij νj | j→i],   mσ = E[wij² νj | j→i],   sμ² = Var[wij νj | j→i],   sσ² = Var[wij² νj | j→i],   cμσ = Cov[wij νj, wij² νj | j→i],
where j→i indicates that j is a presynaptic neuron of i, that is, aij = 1. As was pointed out in Vegué and Roxin (2019) (and explained in detail in Supporting Information Section S3.3), this conditioning cannot be neglected. A neuron that is presynaptic to another neuron tends to have a larger out-degree than a neuron picked at random (being presynaptic, in particular, means that its out-degree is at least 1). If individual in- and out-degrees in the network are correlated, the distribution of in-degrees among the presynaptic neurons is going to be biased compared with the distribution of in-degrees in the network. Since the firing rate depends on the in-degree, this, in turn, will bias the distribution of firing rates among the presynaptic neurons, and the statistical parameters in Equation 35 will be biased too. This bias can be precisely quantified, as we will show later.
Once Ki is known, and if it is large enough, the multidimensional version of the CLT ensures that the vector S^i will be approximately distributed as a bivariate normal vector with mean vector Ki m and covariance matrix Ki Σ:
(36)  S^i ≈ N(Ki m, Ki Σ),
where
(37)  S^i = Ki m + √Ki (Yi, Zi),   with (Yi, Zi) ∼ N(0, Σ).
We denote by m and s2 the mean and variance of the rate of a presynaptic neuron, respectively:
(38)  m ≔ E[νj | j→i],   s² ≔ Var[νj | j→i].
Let θ = (m, s2). Since any synaptic weight is independent of the firing rate of its presynaptic neuron, the moments defined in Equation 35 are expressed as a function of the moments of the weight distribution and the pair of rate statistics θ by
(39)  mμ = E[w] m,   mσ = E[w²] m,   sμ² = E[w²](s² + m²) − E[w]² m²,   sσ² = E[w⁴](s² + m²) − E[w²]² m²,   cμσ = E[w³](s² + m²) − E[w] E[w²] m².
Thus, to compute the statistics mμ,sμ2,mσ,sσ2 and cμσ, it is enough to know the mean and the variance of the rate distribution of presynaptic neurons, m and s2. Once these two parameters are known, the distribution of the vector Si is determined through Equations 34, 36, 37, and 39. The firing rate νi of a neuron i in the network is therefore specified by the pair of mean-field parameters, θ = (m, s2), and by a triplet of identity variables associated to that neuron, Xi = (Ki, Yi, Zi) (whose distribution, in turn, depends on the mean-field parameters):
(40a)  μi = τ (Ki mμ + √Ki Yi + Kext wext νext),   σi² = τ (Ki mσ + √Ki Zi + Kext wext² νext),
(40b)  νi = ϕ(μi, σi).

If the neuron is randomly chosen in the network, its identity variables Ki, Yi, Zi are random variables: Ki is distributed according to the in-degree distribution imposed in the network, and (Yi, Zi) is a normal bivariate vector with zero mean and covariance matrix Σ = Σ(θ), as stated by Equation 37. The vector (Yi, Zi) is independent of Ki, and the identity vectors of all the neurons, X1, …, XN, are i.i.d. In sum, the whole rate distribution in the network can be reconstructed from only two rate statistics: m and s2.

The problem, then, reduces to computing these two statistics. This can be done by using their definitions as the mean and the variance of the rate of a random presynaptic neuron: They should fulfill
(41)  m = Fm(θ) ≔ ∫ ν(θ, x) ρX^θ(x) dx,   s² = Fs²(θ) ≔ ∫ [ν(θ, x) − Fm(θ)]² ρX^θ(x) dx,
where x = (k, y, z), ν(θ, x) is the firing rate obtained from Equation 40(a and b), and ρX^θ(x) is the joint p.d.f. of the triplet Xi = (Ki, Yi, Zi) for a presynaptic neuron:
(42)  ρX^θ(x) = ρK^pre(k) ρY,Z^θ(y, z),
with ρK^pre being the p.d.f. of the in-degree of a presynaptic neuron (see Supporting Information Section S3.3 on how to compute it) and ρY,Z^θ being the p.d.f. of a normal bivariate vector with mean 0 and covariance matrix Σ(θ).
For the system to be consistent, Equation 41 must be fulfilled. One should thus find the pair of mean-field parameters θ = (m, s2) by solving the system of two unknowns and two equations
(43)  θ = F(θ),
with F(θ) ≔ (Fm, Fs²)(θ) and Fm, Fs² being the functions defined in Equation 41.

Once θ is known, the network’s firing rate distribution can be numerically reconstructed by creating a large sample of triplets {Xi}i and then using it to compute the corresponding sample of firing rates by applying Equation 40(a and b). Note that to create the triplet sample, we must use the in-degree distribution in the network, not the in-degree distribution among presynaptic neurons as in Equations 41 and 42.
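A minimal numerical strategy for Equation 43 is a damped fixed-point iteration in which the integrals of Equation 41 are estimated by Monte Carlo over the identity variables X = (K, Y, Z). The sketch below reuses phi() from the earlier sketch; the weight moments Ew, Ew2, Ew3, Ew4 and the biased presynaptic in-degree sampler sample_K_pre are assumed to be provided:

```python
import numpy as np

rng = np.random.default_rng(4)
n_mc = 20_000   # Monte Carlo sample size per iteration

def rates_given_theta(theta, K, Y, Z):
    m, s2 = theta
    # Equation 39: moments of v = (w*nu, w^2*nu), weight independent of rate
    m_mu, m_sg = Ew * m, Ew2 * m
    s2_mu = Ew2 * (s2 + m**2) - Ew**2 * m**2
    s2_sg = Ew4 * (s2 + m**2) - Ew2**2 * m**2
    c = Ew3 * (s2 + m**2) - Ew * Ew2 * m**2
    # Equation 37: correlate (Y, Z) with covariance Sigma(theta)
    Sig = np.array([[s2_mu, c], [c, s2_sg]]) + 1e-12 * np.eye(2)
    y, z = np.linalg.cholesky(Sig) @ np.vstack([Y, Z])
    # Equations 40a and 40b
    mu = tau * (K * m_mu + np.sqrt(K) * y + K_ext * w_ext * nu_ext)
    sig2 = tau * (K * m_sg + np.sqrt(K) * z + K_ext * w_ext**2 * nu_ext)
    return np.array([phi(a, np.sqrt(max(b, 1e-12))) for a, b in zip(mu, sig2)])

theta = np.array([0.01, 1e-5])      # initial guess for (m, s2)
for _ in range(50):                 # damped iteration on Equation 43
    K = sample_K_pre(n_mc)          # presynaptic in-degrees (Equation 42)
    Y, Z = rng.standard_normal((2, n_mc))
    nu = rates_given_theta(theta, K, Y, Z)
    theta = 0.5 * theta + 0.5 * np.array([nu.mean(), nu.var()])
```

Here sample_K_pre must draw from the presynaptic in-degree distribution ρK^pre while solving Equation 43; the final reconstruction of the rate distribution then uses the network's in-degree distribution instead, as noted above.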

To compare this result with the stationary rate distribution obtained from simulating the whole network, we took a network composed of N = 1,000 I neurons with fixed in-degree K = 25. The incoming neighbors were chosen randomly, resulting in normally distributed out-degrees (Figure 3A). The synaptic weights were taken from a gamma distribution. Figure 3 shows the comparison as we vary the weight expectation Ew. The mean and standard deviation of the rate distribution (Figure 3B) and the rate distribution itself (Figure 3C) are well predicted by the theory, for different values of the external firing rate νext.

Figure 3.

Firing rate distributions in three inhibitory networks with heterogeneous synaptic weights. (A) Details of the connectivity structure. Left: in-/out-degree histogram (the same in the three networks), with fixed in-degree and variable out-degree; right: synaptic weight histograms. The synaptic weights have been created independently with wij ∼ −Gamma(κ, θ) with mean |E[w]| ∈ {0.1, 0.3, 0.5} and variance Var[w] = 0.2 (units in millivolts), the parameters being such that |E[w]| = κθ, Var[w] = κθ² (the three networks only differ in the value of the mean weight). (B) Mean ± standard deviation of the firing rate distribution at equilibrium as a function of the external firing rate νext. (C) Complete firing rate histogram in the three networks for three values of the external firing rate: 7.0 Hz (left), 7.5 Hz (middle), and 8.5 Hz (right). In all the plots, the empirical results come from integrating the full neuronal dynamics on networks with N = 1,000 neurons. The analytical results are obtained by numerically solving Equation 43 on θ = (m, s²). The neuronal parameters are: K = 25 (in-degree), τ = 20 ms, Vθ = 20 mV, Vr = 10 mV, τr = 2 ms, Kext = 1,000, and wext = 0.14 mV.


Heterogeneous Network with Plastic Synaptic Weights (Model B)

We now consider a more complex scenario in which the network structure is determined not only by a heterogeneous connectivity defined by a joint distribution of in-/out-degrees but also by a plasticity mechanism that shapes individual synaptic weights. As stated in previous sections, the plasticity rule is such that, once the network reaches a stationary state, every synaptic weight wij is related to the pre- and postsynaptic firing rates νj, νi through
(44)  wij = g^pre(νj) g^post(νi),
for arbitrary functions g^pre and g^post. This relationship is an approximation that becomes more accurate as the products τpνi and τpνj increase.
As in the previous cases, the stationary firing rate of a random neuron i is given by Equations 24 and 25. Again, we redefine the indices of the presynaptic neurons to neuron i to be 1, …, Ki, with Ki = Ki^in the in-degree of i, and we use Equation 44 to rewrite Equation 25 as
(45)  μi = τ [g^post(νi) Sμ^i + Kext wext νext],   σi² = τ [g^post(νi)² Sσ^i + Kext wext² νext],
where
(46)  Sμ^i ≔ Σ_{j=1}^{Ki} g^pre(νj) νj,   Sσ^i ≔ Σ_{j=1}^{Ki} g^pre(νj)² νj.
The specific form of the weight-rate relationship defined by Equation 44 allows us to separate the pre- and postsynaptic components of each synaptic weight so that the sums Sμ^i and Sσ^i do not depend on the postsynaptic rate νi. As before, the vector
(47)  S^i ≔ (Sμ^i, Sσ^i)
can be assumed to be a sum over Ki i.i.d. random vectors v1^i, …, v_{Ki}^i,
(48)  S^i = Σ_{j=1}^{Ki} vj^i,   with vj^i ≔ (g^pre(νj) νj, g^pre(νj)² νj),
and the distribution of vj^i is independent of the postsynaptic neuron i (and, thus, a network feature). We now define
(49)  m ≔ (mμ, mσ),   Σ ≔ [sμ², cμσ; cμσ, sσ²]
as the mean vector and the covariance matrix of vj^i:
(50)  mμ = E[g^pre(νj) νj | j→i],   mσ = E[g^pre(νj)² νj | j→i],   sμ² = Var[g^pre(νj) νj | j→i],   sσ² = Var[g^pre(νj)² νj | j→i],   cμσ = Cov[g^pre(νj) νj, g^pre(νj)² νj | j→i].
When Ki is known and large enough, the multidimensional version of the CLT ensures that the vector S^i will be approximately distributed as a bivariate normal vector with mean vector Ki m and covariance matrix Ki Σ:
(51)  S^i ≈ N(Ki m, Ki Σ),
(52)  S^i = Ki m + √Ki (Yi, Zi),   with (Yi, Zi) ∼ N(0, Σ).
Now, the set of mean-field parameters is θ ≔ (mμ, sμ², mσ, sσ², cμσ). Once θ is known, the distribution of S^i is determined through Equations 49, 51, and 52. The firing rate νi of a neuron i is again determined by θ and by a triplet of identity variables associated to that neuron, Xi = (Ki, Yi, Zi) (whose distribution depends on the mean-field parameters). Due to the dependence of the synaptic weight wij on the postsynaptic rate νi, the way to compute νi from θ and Xi is no longer based on evaluating a function. Instead, one must solve a one-dimensional equation on νi:
(53a)  μi = τ [g^post(νi)(Ki mμ + √Ki Yi) + Kext wext νext],   σi² = τ [g^post(νi)²(Ki mσ + √Ki Zi) + Kext wext² νext],
(53b)  νi = ϕ(μi, σi).
We denote by Φ = Φ(θ, Xi) a mapping that, given θ and Xi, returns a solution to Equation 53 (a and b) on the unknown νi.
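A sketch of one such mapping Φ is given below: For a given θ and identity triplet (K, Y, Z), the stationary rate is the root of a scalar residual. It reuses phi() and the parameters from the earlier sketches, together with a g_post implementing Equation 12; the bracketing interval and the clipping of rare negative Monte Carlo fluctuations are illustrative choices:

```python
import numpy as np
from scipy.optimize import brentq

def g_post(nu, eps=1e-3):
    # Postsynaptic factor of the Oja-like rule (Equation 12)
    return nu / (nu**2 + eps)

def Phi(K, y, z, m_mu, m_sg):
    """Solve Equations 53a-53b for nu_i given theta-derived inputs."""
    def residual(nu):
        gp = g_post(nu)
        mu = tau * (gp * (K * m_mu + np.sqrt(K) * y)
                    + K_ext * w_ext * nu_ext)
        # clip rare negative fluctuations of the Gaussianized sum
        sig2 = tau * (gp**2 * max(K * m_sg + np.sqrt(K) * z, 0.0)
                      + K_ext * w_ext**2 * nu_ext)
        return phi(mu, np.sqrt(sig2)) - nu
    # upper bracket exceeds the maximal rate 1/tau_r, so a sign change exists
    return brentq(residual, 1e-9, 0.6)   # rates in spikes/ms
```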

As in the previous case, the identity variables Ki, Yi, Zi are random: Ki is distributed according to the in-degree distribution imposed in the network and (Yi, Zi) is a normal bivariate vector with zero mean and covariance matrix Σ = Σ(θ) (see Equation 52). The vector (Yi, Zi) is independent of Ki, and the identity vectors of all the neurons, X1, …, XN, are i.i.d.

The firing rate distribution can thus be computed once the mean-field parameter set θ = (mμ, sμ², mσ, sσ², cμσ) is known. By definition, the parameters in θ fulfill
(54)  mμ = Gmμ(θ) ≔ ∫ g^pre(Φ(θ, x)) Φ(θ, x) ρX^θ(x) dx, and analogously for the remaining statistics: mσ is the mean of g^pre(Φ(θ, x))² Φ(θ, x), sμ² and sσ² are the corresponding variances, and cμσ is the covariance between the two quantities,
where x = (k, y, z) and ρX^θ(x) is the joint p.d.f. of the triplet Xi = (Ki, Yi, Zi) for a presynaptic neuron:
(55)  ρX^θ(x) = ρK^pre(k) ρY,Z^θ(y, z),
with ρK^pre being the p.d.f. of the in-degree of a presynaptic neuron (see Supporting Information Section S3.3) and ρY,Z^θ being the p.d.f. of a normal bivariate vector with mean 0 and covariance matrix Σ(θ).
The mean-field parameter set θ is thus found by solving the system of five unknowns and five equations
(56)  θ = G(θ),
where G(θ) ≔ (Gmμ, Gsμ², Gmσ, Gsσ², Gcμσ)(θ) and the component functions are defined in Equation 54.
The firing rate distribution can be reconstructed analogously to what we did for Model A. In this scenario, we are also interested in anticipating the distribution of synaptic weights. Once the system of Equation 56 is solved and we know the value of θ, the synaptic weight of a randomly chosen connection is computed as follows. Calling i and j the post- and presynaptic neurons involved in the connection, respectively, with identity variables Xi = (Ki, Yi, Zi) and Xj = (Kj, Yj, Zj), the firing rates of i and j are
(57)  νi = Φ(θ, Xi),   νj = Φ(θ, Xj).
The synaptic weight of the connection ij is, then, given by Equation 44. Note, however, that the in-degrees Ki and Kj do not necessarily follow the in-degree distribution imposed in the network: The fact that a connection exists from j to i always biases the in-degree distribution of i and can bias the in-degree distribution of j (if individual in-/out-degrees are correlated). As detailed in Supporting Information Section S3, these distributions are specified by
(58)  ρK^post(m) = m ρK^in(m) / ⟨K⟩,   ρK^pre(m) = ⟨K^out | K^in = m⟩ ρK^in(m) / ⟨K⟩,
where ⟨K⟩ and ⟨K^out | K^in = m⟩ are, respectively, the expected (in- and out-) degree and the expected out-degree of a neuron conditioned on its in-degree being m. Equation 58 shows that the in-degree distribution for a postsynaptic neuron is always biased with respect to the network in-degree distribution. The in-degree distribution for a presynaptic neuron is only biased when individual in- and out-degrees in the network are correlated. To numerically reconstruct the weight distribution, we can create a large sample of pre- and postsynaptic triplets Xj and Xi, taking into account the pre- and postsynaptic degree distributions given in Equation 58, and then use it to create a sample of synaptic weights through Equations 57 and 44.
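In practice, the biased in-degree laws of Equation 58 can be sampled by reweighting the scaffold's empirical degree sequence, as in this short sketch (it assumes the arrays k_in and k_out from the scaffold-construction sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
n_pairs = 100_000

# Equation 58: pick a neuron with probability proportional to its
# in-degree (postsynaptic side) or its out-degree (presynaptic side)
idx_post = rng.choice(len(k_in), size=n_pairs, p=k_in / k_in.sum())
idx_pre = rng.choice(len(k_out), size=n_pairs, p=k_out / k_out.sum())
K_post, K_pre = k_in[idx_post], k_in[idx_pre]   # biased in-degree samples
```

Sampling neurons proportionally to their out-degree and then reading off their in-degree reproduces ρK^pre, including the bias induced by in-/out-degree correlations.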

This formalism can be extended to networks composed of E and I neurons as we detail in Supporting Information Sections S4 and S5.

We verified that the described equations can predict the weight and firing rate distributions in the stationary state. For this, we first simulated the microscopic dynamics of a network composed of NE excitatory and NI inhibitory neurons with NE = NI = 1,000 in which all the synaptic weights were plastic. The plasticity rule for E synapses was inspired by Oja’s rule (Oja, 1982), see Equations 7, 10, and 12. The I rule was taken to be analogous but with the opposite sign to simplify the resulting mean-field equations.

In our example network, degrees from/to the E population were normally distributed and independent, whereas the in-degrees from the I population were fixed (the I incoming neighbors were chosen randomly, resulting in normally distributed I out-degrees); see Figure 4A, B. The reason to include I neurons in the network of E neurons is that the network should be approximately balanced for it to reach a stationary state with irregular (and, hence, close to Poisson) firing and low firing rates. The raster plots in Figure 4C show this irregular firing. Figure 4D shows the mean and standard deviation of the rate and weight distributions as the external firing rate is increased, for three choices of the plasticity parameter g0 (see Equation 12). A sample of the corresponding distributions is given in Figure 5, showing a very good agreement between theory and simulations.

Figure 4.

Firing rate and synaptic weight statistics in E-I networks with plastic synaptic weights. (A) Each neuron in the network has four characteristic degrees: the in-/out-degrees coming from/going to the E and I populations (K^{E,in}, K^{E,out}, K^{I,in}, K^{I,out}). (B) E and I in-/out-degree histogram (statistically identical in all the networks), with normally distributed E in-degrees, E out-degrees, and I out-degrees and a fixed I in-degree. The four degrees associated to each neuron are independent random variables. (C) Spike times (top) and instantaneous population firing rates (bottom) when νext = 8.0 Hz for three choices of the plasticity parameter g0 (see Equations 10, 12). (D) Mean ± standard deviation of the E/I firing rate (top) and E synaptic weight (bottom) distributions at equilibrium as a function of the external firing rate νext for the same three networks of (C). In all the plots, the empirical results come from integrating the full neuronal dynamics on networks with NE = NI = 1,000 neurons. The analytical results are obtained by numerically solving Equation 56 on θ = (mμ, sμ², mσ, sσ², cμσ). The E in-/out-degrees are Ki^{E,in}, Ki^{E,out} ∼ N(μK, σK) with μK = 25, σK = 7. The I in-degree is the same for all the neurons, Ki^{I,in} = 25. The neuronal parameters are τ = 20 ms, Vθ = 20 mV, Vr = 10 mV, τr = 2 ms, Kext = 1,000, and wext = 0.14 mV. The remaining plasticity parameters are ε = 0.001 ms−2 and τp = 50 s.

Figure 5.

E/I firing rate and E synaptic weight histograms in the three networks with plastic synaptic weights of Figure 4. The histograms are shown for the three choices of the external firing rate.


We also investigated to what extent this agreement extends to plastic networks composed solely of E neurons. In a network of this kind, if the external firing rate is large enough, the hypothesis of Poissonian firing cannot be guaranteed, and this can push the network outside the parameter range in which the analytical solution is correct. Surprisingly, we found that for many choices of the external rate, the analytical prediction matches the simulations quite well; see Figures 6B, C and 7. However, there seems to be a restricted range of external rates for which the network activity has some degree of synchrony and regular firing, and in this case, the analytical prediction does not match the empirical results. This is the case for the network with g0 = 0.4 mV and νext = 7 Hz in Figure 6B, C. This range coincides with the range in which the network activity shifts from a low-firing to a high-firing state (Figure 6B).

Figure 6.

Firing rate and synaptic weight statistics in five excitatory networks with plastic synaptic weights. (A) In-/out-degree histogram (the same in all the networks), with normally distributed in-degree and out-degree and no correlation between individual degrees. (B) Mean ± standard deviation of the firing rate (left) and synaptic weight (right) distributions at equilibrium as a function of the external firing rate νext. The five networks only differ in the parameter g0 of the plasticity rule (see Equations 10, 12). (C) Spike times (top) and instantaneous population firing rate (bottom) for the network with g0 = 0.4 mV for the three choices of νext. In all the plots, the empirical results come from integrating the full neuronal dynamics on networks with N = 1,000 neurons. The analytical results are obtained by numerically solving Equation 56 on θ = (mμ, sμ², mσ, sσ², cμσ). The in-/out-degrees are Ki^in, Ki^out ∼ N(μK, σK) with μK = 25, σK = 7. The neuronal parameters are τ = 20 ms, Vθ = 20 mV, Vr = 10 mV, τr = 2 ms, Kext = 1,000, and wext = 0.14 mV. The remaining plasticity parameters are ε = 0.001 ms−2 and τp = 50 s.

Figure 7. Firing rate and synaptic weight histograms in the five excitatory networks with plastic synaptic weights. The networks are those of Figure 6, and the histograms are shown for the four values of the external firing rate: 6.5, 7.5, 8.0, and 9.5 Hz.

We have derived a set of mean-field equations that bridge the gap between a microscopic and a macroscopic description of the neuronal activity in a heterogeneous network of LIF spiking neurons at equilibrium. Whereas the microscopic description is given in terms of membrane voltages and spike times, in the macroscopic description, the neuronal activity is represented by the neurons’ firing rates (i.e., the average number of spikes emitted per unit time). Although this kind of mean-field formalism has been widely used before, the main contribution of the present work is to generalize it so that it applies to networks in which two sources of structural heterogeneity are present at the same time: heterogeneity in the in- and out-degrees and heterogeneity in the synaptic weights, including weights that have been shaped by an activity-dependent plasticity mechanism.

In the nonplastic scenario, the synaptic weights were assumed to be independent variables drawn from a common probability distribution. In the model with plasticity, every neuron was assigned a spike trace (which could represent the concentration of a chemical that increases every time the neuron emits a spike and is degraded over time; see Pfister & Gerstner, 2006 for possible interpretations), and the instantaneous variation of every synaptic weight was a function of the pre- and postsynaptic traces. We assumed that the network operates in a regime in which, at equilibrium, the traces’ fluctuations around their means are small, so that the traces can be used to approximate the neurons’ firing rates. This is the key step for incorporating the plasticity mechanism into the mean-field equations because, at equilibrium, every synaptic weight can then be treated as a known function of the pre- and postsynaptic firing rates.
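To make this regime concrete, the following minimal sketch (ours, not code from the paper; the firing rate, the time step, and the unit jump per spike are illustrative assumptions) simulates a spike trace driven by a Poisson spike train and checks that the normalized trace recovers the firing rate with small relative fluctuations:

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 0.005           # "true" firing rate in spikes/ms (5 Hz) -- illustrative
tau_p = 50_000.0     # trace degradation time constant in ms (50 s, as in Figure 6)
dt = 1.0             # integration step in ms
n_steps = 1_000_000  # 1,000 s of simulated activity

# Trace dynamics: exponential decay with time constant tau_p,
# plus a unit jump at every (Poisson) spike.
x, trace = 0.0, np.empty(n_steps)
for t in range(n_steps):
    spike = rng.random() < nu * dt
    x += -x * dt / tau_p + spike
    trace[t] = x

trace = trace[n_steps // 5:]  # discard the initial transient (~4 tau_p)
print(f"true rate       : {1000 * nu:.2f} Hz")
print(f"trace / tau_p   : {1000 * trace.mean() / tau_p:.2f} Hz")
print(f"relative SD (CV): {trace.std() / trace.mean():.3f}")
```

Here ντp = 250, and the trace fluctuates by only a few percent around its mean ντp; dividing by τp therefore yields a reliable rate estimate, which is exactly the approximation used above.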

Given a (postsynaptic) LIF neuron, its firing rate at equilibrium is a well-defined function of its presynaptic neighbors’ rates and the corresponding synaptic weights. More precisely, it is a function of two important quantities: the sum (over the presynaptic neighbors) of the presynaptic rates times the weights, Sμ, and the sum of the presynaptic rates times the squares of the weights, Sσ. The firing rates (and, in the plastic model, the synaptic weights) are, however, not known a priori: the purpose is precisely to compute them analytically. For this, another key step is necessary: under reasonable hypotheses, the aforementioned sums can be treated as sums of identically distributed random variables, which allows us to apply the central limit theorem (CLT) to deduce that they are jointly normally distributed. This step reduces the complexity of the problem from computing the whole rate/weight distribution to computing just a few statistical parameters that characterize the normal vector (Sμ, Sσ). These parameters are computed by invoking their definitions as statistics of the firing rate distribution, which yields a set of equations on the parameters themselves that can be solved numerically. The results match direct simulations of the microscopic dynamics well, both on networks of I LIF neurons and on networks of mixed E-I LIF neurons.
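The self-consistent structure of these equations can be illustrated with a deliberately simplified sketch: fixed (nonplastic) weights w0 standing in for the plastic equilibrium values, heterogeneous in-degrees, and a single unknown (the population mean rate) rather than the full parameter vector θ = (mμ, sμ², mσ, sσ², cμσ) of Equation 56. The LIF transfer function below is the standard diffusion-approximation formula (Brunel, 2000); the neuron and external-input parameters follow the caption of Figure 6 (with νext = 6.5 Hz, one of the values of Figure 7), whereas w0 and the sample size are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def lif_rate(mu, sigma, tau=20.0, tau_r=2.0, v_th=20.0, v_r=10.0):
    """Stationary LIF firing rate (spikes/ms) under white-noise input with
    mean mu and standard deviation sigma (diffusion approximation; Brunel, 2000)."""
    f = lambda u: np.exp(u**2) * (1.0 + erf(u))
    integral, _ = quad(f, (v_r - mu) / sigma, (v_th - mu) / sigma)
    return 1.0 / (tau_r + tau * np.sqrt(np.pi) * integral)

tau = 20.0                                 # membrane time constant (ms)
k_ext, w_ext, nu_ext = 1000, 0.14, 0.0065  # external drive (6.5 Hz)
w0, mu_k, sigma_k = 0.1, 25, 7             # fixed recurrent weight (illustrative)
rng = np.random.default_rng(0)
k_in = np.clip(rng.normal(mu_k, sigma_k, 500), 1, None)  # sampled in-degrees

nu_bar = 0.001                             # initial guess for the mean rate
for _ in range(50):
    # Per-neuron input statistics: S_mu sets the mean drive, S_sigma its variance.
    mu_i = tau * (w0 * k_in * nu_bar + w_ext * k_ext * nu_ext)
    sig_i = np.sqrt(tau * (w0**2 * k_in * nu_bar + w_ext**2 * k_ext * nu_ext))
    rates = np.array([lif_rate(m, s) for m, s in zip(mu_i, sig_i)])
    nu_new = rates.mean()                  # self-consistency: output rates feed back
    if abs(nu_new - nu_bar) < 1e-10:
        break
    nu_bar = 0.5 * (nu_bar + nu_new)       # damped fixed-point update

print(f"self-consistent mean rate ~ {1000 * nu_bar:.2f} Hz "
      f"(across-neuron SD ~ {1000 * rates.std():.2f} Hz)")
```

In the full formalism, the same fixed-point logic runs over the five statistical parameters of the jointly normal pair (Sμ, Sσ) rather than over a single mean rate, and the weights themselves are functions of the pre- and postsynaptic rates.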

This work is, to our knowledge, the first to tackle the problem of extending previous mean-field formalisms to networks in which both degree heterogeneity and synaptic weight heterogeneity are present simultaneously, including weights that are plastic. It thus offers a step forward in the ongoing effort to understand and predict the collective behavior of networks of LIF neurons in these scenarios.

Our work has, however, several limitations that should be pointed out. We considered networks composed of LIF neurons because the LIF model is simpler and more amenable to analytical treatment than more realistic models. However, the LIF model cannot reproduce some of the electrophysiological properties found in real neurons. For example, the effective firing threshold of real neurons appears not to be fixed but to depend on the stimulation protocol (Mensi, Hagens, Gerstner, & Pozzorini, 2016), a feature that can be reproduced by nonlinear IF models like the quadratic (Latham, Richmond, Nelson, & Nirenberg, 2000) or the exponential (Fourcaud-Trocmé, Hansel, van Vreeswijk, & Brunel, 2003) models. Another example is spike-triggered adaptation, a process by which the spike frequency decreases upon sustained firing. Models of IF neurons that include spike-triggered and subthreshold adaptation by means of an additional dynamic variable have been shown to be notably more realistic (Brette & Gerstner, 2005; Hertäg, Hass, Golovko, & Durstewitz, 2012; Izhikevich, 2004; Naud, Marcille, Clopath, & Gerstner, 2008) while still being simple compared with detailed biophysical models like the Hodgkin-Huxley model (Hodgkin & Huxley, 1952). Despite these added complexities, nonlinear and adaptive models have been successfully studied under the lens of mean-field techniques (Brunel, Hakim, & Richardson, 2003; Brunel & Latham, 2003; Fourcaud-Trocmé et al., 2003; Hertäg, Durstewitz, & Brunel, 2014; Montbrió, Pazó, & Roxin, 2015); to what extent the analysis performed here could be extended to them as well remains an open question.
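As a small illustration of the adaptation mechanism mentioned above, here is a minimal adaptive exponential IF (AdEx) sketch in the spirit of Brette and Gerstner (2005); the parameter values are illustrative rather than fitted to data, and the Euler scheme is kept deliberately simple:

```python
import numpy as np

# AdEx neuron under a sustained step current: the adaptation current w
# grows with each spike, progressively lengthening the interspike intervals.
C, g_L, E_L = 281.0, 30.0, -70.6     # capacitance (pF), leak (nS), rest (mV)
V_T, Delta_T = -50.4, 2.0            # threshold and slope factor (mV)
tau_w, a, b = 144.0, 4.0, 80.5       # adaptation time constant (ms), nS, pA
V_r, V_peak, I = -70.6, 0.0, 1000.0  # reset (mV), spike cutoff (mV), input (pA)
dt, T = 0.05, 500.0                  # time step and duration (ms)

V, w, spikes = E_L, 0.0, []
for step in range(int(T / dt)):
    dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T) - w + I) / C
    dw = (a * (V - E_L) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw
    if V >= V_peak:                  # spike: reset voltage, increment adaptation
        V = V_r
        w += b
        spikes.append(step * dt)

isis = np.diff(spikes)
print(f"{len(spikes)} spikes; first ISI {isis[0]:.1f} ms, last ISI {isis[-1]:.1f} ms")
```

The printed interspike intervals lengthen over the train, which is the spike-frequency adaptation that the plain LIF model cannot capture.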

Another important limitation of our work concerns the plasticity rule. Since the neuronal activity in the microscopic model is given by the spike train, any plasticity rule implemented on such a network should be a spike-dependent rule or, at a slightly more coarse-grained scale, a burst-dependent rule (Payeur, Guerguiev, Zenke, Richards, & Naud, 2021). The mean-field description is, however, given in terms of firing rates, which is why going from one description to the other requires rewriting the plasticity rule at equilibrium in terms of firing rates. A natural way to do so is by considering spike traces: stochastic variables reflecting the spiking activity whose statistics, as we showed, can be directly linked to the underlying firing rates. However, our mean-field formalism assumes that the synaptic weight at equilibrium is fixed once the firing rates are known, which leaves no room for weight fluctuations caused by the traces’ fluctuations. The parameter regime in which the spike trace is a reliable estimator of the firing rate (i.e., the regime in which the trace’s fluctuations are small compared with its average) is precisely the regime in which the product of the firing rate and the trace degradation time constant τp is large. This limits the applicability range of our mean-field formulation and, in particular, makes it inapplicable when the plasticity rule in place is a function of both the traces and the spike trains themselves, or a function of several traces per neuron with different characteristic degradation constants, as in pair-based and triplet STDP rules (Morrison et al., 2008; Pfister & Gerstner, 2006). A further step would be to study whether not only the trace averages but also their fluctuations could be taken into account in the mean-field formalism. In this case, every synaptic weight at a given time would be a stochastic variable whose statistics at equilibrium should be introduced in the mean-field formulation.
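The dependence on ντp can be made explicit with a standard shot-noise (Campbell’s theorem) computation, sketched here under the assumptions that the spike train is Poisson with rate ν and that each spike increments the trace x by 1:

```latex
% Trace driven by a Poisson spike train: x(t) = \sum_k e^{-(t - t_k)/\tau_p}.
\begin{aligned}
\mathbb{E}[x] &= \nu \int_0^{\infty} e^{-t/\tau_p}\,\mathrm{d}t = \nu\,\tau_p, \\
\operatorname{Var}[x] &= \nu \int_0^{\infty} e^{-2t/\tau_p}\,\mathrm{d}t = \frac{\nu\,\tau_p}{2}, \\
\mathrm{CV}[x] &= \frac{\sqrt{\operatorname{Var}[x]}}{\mathbb{E}[x]} = \frac{1}{\sqrt{2\,\nu\,\tau_p}}.
\end{aligned}
```

With τp = 50 s and rates of a few hertz, ντp is of order 10² and the trace’s relative fluctuations are at the percent level; conversely, rules that mix in traces with fast degradation timescales leave this regime, which is precisely where our formalism stops applying.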

A central hypothesis in our mean-field equations is that the network’s structure is such that the in-degrees of two connected neurons are independent variables. This implies that the distribution of in-degrees among the presynaptic neighbors of a given postsynaptic neuron is the same for all postsynaptic neurons: it is a network property. This ingredient is central to reducing the space of unknowns to a small set of parameters, because we use the fact that the statistics of every input to a neuron are independent of the identity of that neuron. It would be interesting to study how our theory should be modified to include assortative or disassortative networks, as was done for assortative networks with homogeneous synaptic weights in Schmeltzer, Kihara, Sokolov, and Rüdiger (2015).
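To see what this independence hypothesis means generatively, the following sketch (an illustration, not the paper’s network-generation procedure) samples in- and out-degrees independently from the normal distribution of Figure 6 and wires the network by uniform stub matching, which leaves the in-degrees at the two ends of an edge uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu_k, sigma_k = 1000, 25, 7

# Independent, normally distributed in-/out-degrees (rounded, non-negative).
k_in = np.clip(np.round(rng.normal(mu_k, sigma_k, n)).astype(int), 0, None)
k_out = np.clip(np.round(rng.normal(mu_k, sigma_k, n)).astype(int), 0, None)

# Add stubs at random so the two degree sums match exactly.
diff = k_in.sum() - k_out.sum()
if diff > 0:
    np.add.at(k_out, rng.integers(0, n, diff), 1)
elif diff < 0:
    np.add.at(k_in, rng.integers(0, n, -diff), 1)

# Uniform stub matching: each out-stub is paired with a random in-stub.
sources = np.repeat(np.arange(n), k_out)
targets = np.repeat(np.arange(n), k_in)
rng.shuffle(sources)
edges = np.stack([sources, targets], axis=1)  # may contain self-loops/multi-edges

# Under uniform matching, the in-degrees at the two ends of an edge are
# uncorrelated: the network is neither assortative nor disassortative.
r = np.corrcoef(k_in[edges[:, 0]], k_in[edges[:, 1]])[0, 1]
print(f"{edges.shape[0]} edges; in-degree correlation across edges ~ {r:.3f}")
```

An assortative or disassortative network would instead show a clearly nonzero correlation in the last line and would require the modifications of the theory discussed above.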

Finally, we did not analyze the stability of the stationary distribution of firing rates predicted by the theory, or whether more than one stationary distribution exists depending on the model’s parameters. We leave such questions for future work.

We acknowledge Calcul Québec and the Digital Research Alliance of Canada for their technical support and computing infrastructures.

Supporting information for this article is available at https://doi.org/10.1162/netn_a_00442.

Marina Vegué: Conceptualization; Formal analysis; Investigation; Methodology; Software; Writing – original draft; Writing – review & editing. Antoine Allard: Funding acquisition; Project administration; Writing – review & editing. Patrick Desrosiers: Conceptualization; Funding acquisition; Project administration; Writing – review & editing.

Antoine Allard, Natural Sciences and Engineering Research Council of Canada (https://dx.doi.org/10.13039/501100000038). Patrick Desrosiers, Sentinelle Nord, Université Laval (https://dx.doi.org/10.13039/100020862). Marina Vegué, Ministerio de Universidades (https://dx.doi.org/10.13039/501100023561). Patrick Desrosiers, Natural Sciences and Engineering Research Council of Canada (https://dx.doi.org/10.13039/501100000038). Antoine Allard, Sentinelle Nord, Université Laval (https://dx.doi.org/10.13039/100020862).

Integrate-and-fire model:

Model for the activity of a neuron that is formulated through (a) a differential equation describing the temporal evolution of the membrane potential V and (b) a mechanism for spike generation that depends on V.

Spike train:

Collection of spike times for a given neuron.

Mean-field theory:

Mathematical theory for studying stochastic interacting systems composed of statistically identical units that makes it possible to describe the evolution of the ensemble’s statistics using a simpler model.

Stationary state:

State that does not change in time.

Firing rate:

The average number of spikes emitted by a neuron per unit time.

Joint in-/out-degree distribution:

Distribution of the degree pair (in-degree, out-degree) associated with a neuron in a random network model.

Structural plasticity:

Modification of the connectivity structure (“who connects to whom”) in an interacting network.

Synaptic plasticity:

Modification of the synaptic weights in a neuronal network.

Spike-timing-dependent plasticity (STDP):

Model of synaptic plasticity in which the evolution of every synaptic weight depends on the specific timing of pre- and postsynaptic spikes emitted by the two neurons involved in the connection.

Akil, A. E., Rosenbaum, R., & Josić, K. (2021). Balanced networks under spike-time dependent plasticity. PLOS Computational Biology, 17(5), e1008958.
Amit, D. J., & Brunel, N. (1997a). Dynamics of a recurrent network of spiking neurons before and following learning. Network: Computation in Neural Systems, 8(4), 373–404.
Amit, D. J., & Brunel, N. (1997b). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex, 7(3), 237–252.
Angulo-Garcia, D., Luccioli, S., Olmi, S., & Torcini, A. (2017). Death and rebirth of neural activity in sparse inhibitory networks. New Journal of Physics, 19(5), 053011.
Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94(5), 3637–3642.
Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8(3), 183–208.
Brunel, N., Hakim, V., & Richardson, M. J. E. (2003). Firing-rate resonance in a generalized integrate-and-fire neuron with subthreshold resonance. Physical Review E, 67(5), 051916.
Brunel, N., & Latham, P. E. (2003). Firing rate of the noisy quadratic integrate-and-fire neuron. Neural Computation, 15(10), 2281–2306.
Brunel, N., & van Rossum, M. C. W. (2007). Lapicque’s 1907 paper: From frogs to integrate-and-fire. Biological Cybernetics, 97(5–6), 337–339.
Caroni, P., Donato, F., & Muller, D. (2012). Structural plasticity upon learning: Regulation and functions. Nature Reviews Neuroscience, 13(7), 478–490.
Chen, C.-C., & Jasnow, D. (2010). Mean-field theory of a plastic network of integrate-and-fire neurons. Physical Review E, 81(1), 011907.
Cimeša, L., Ciric, L., & Ostojic, S. (2023). Geometry of population activity in spiking networks with low-rank structure. PLOS Computational Biology, 19(8), e1011315.
Cooper, C., & Frieze, A. (2004). The size of the largest strongly connected component of a random digraph with a given degree sequence. Combinatorics, Probability and Computing, 13(3), 319–337.
di Volo, M., Burioni, R., Casartelli, M., Livi, R., & Vezzani, A. (2014). Heterogeneous mean field for neural networks with short-term plasticity. Physical Review E, 90(2), 022811.
Duchet, B., Bick, C., & Byrne, Á. (2023). Mean-field approximations with adaptive coupling for networks with spike-timing-dependent plasticity. Neural Computation, 35(9), 1481–1528.
Feng, J. (Ed.). (2003). Computational neuroscience: A comprehensive approach. Chapman & Hall/CRC.
Fourcaud-Trocmé, N., Hansel, D., van Vreeswijk, C., & Brunel, N. (2003). How spike generation mechanisms determine the neuronal response to fluctuating inputs. Journal of Neuroscience, 23(37), 11628–11640.
Fusi, S., & Mattia, M. (1999). Collective behavior of networks with linear (VLSI) integrate-and-fire neurons. Neural Computation, 11(3), 633–652.
Galán, R. F. (2008). On how network architecture determines the dominant patterns of spontaneous neural activity. PLOS One, 3(5), e2148.
Gast, R., Knösche, T. R., & Schmidt, H. (2021). Mean-field approximations of networks of spiking neurons with short-term synaptic plasticity. Physical Review E, 104(4), 044310.
Gerstner, W. (2002). Integrate-and-fire neurons and networks. In M. A. Arbib (Ed.), The handbook of brain theory and neural networks (2nd ed., pp. 577–581). Cambridge, MA: MIT Press.
Gerstner, W., & Kistler, W. M. (2002). Mathematical formulations of Hebbian learning. Biological Cybernetics, 87(5), 404–415.
Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014). Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press.
Hartmann, C., Lazar, A., Nessler, B., & Triesch, J. (2015). Where’s the noise? Key features of spontaneous activity and neural variability arise through learning in a deterministic network. PLOS Computational Biology, 11(12), e1004640.
Hennequin, G., Vogels, T. P., & Gerstner, W. (2012). Non-normal amplification in random balanced neuronal networks. Physical Review E, 86(1), 011909.
Hertäg, L., Durstewitz, D., & Brunel, N. (2014). Analytical approximations of the firing rate of an adaptive exponential integrate-and-fire neuron in the presence of synaptic noise. Frontiers in Computational Neuroscience, 8, 116.
Hertäg, L., Hass, J., Golovko, T., & Durstewitz, D. (2012). An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data. Frontiers in Computational Neuroscience, 6, 62.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117(4), 500–544.
Hutt, A., Mierau, A., & Lefebvre, J. (2016). Dynamic control of synchronous activity in networks of spiking neurons. PLOS One, 11(9), e0161488.
Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15(5), 1063–1070.
Knoblauch, A., Palm, G., & Sommer, F. T. (2010). Memory capacities for synaptic and structural plasticity. Neural Computation, 22(2), 289–341.
Koch Ocker, G. (2023). Dynamics of stochastic integrate-and-fire networks. Physical Review X, 13, 041047.
La Camera, G. (2022). The mean field approach for populations of spiking neurons. In M. Giugliano, M. Negrello, & D. Linaro (Eds.), Computational modelling of the brain: Modelling approaches to cells, circuits and networks (pp. 125–157). Springer International Publishing.
Latham, P. E., Richmond, B. J., Nelson, P. G., & Nirenberg, S. (2000). Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology, 83(2), 808–827.
Lonardoni, D., Amin, H., Di Marco, S., Maccione, A., Berdondini, L., & Nieus, T. (2017). Recurrently connected and localized neuronal communities initiate coordinated spontaneous activity in neuronal networks. PLOS Computational Biology, 13(7), e1005672.
Luccioli, S., Angulo-Garcia, D., & Torcini, A. (2019). Neural activity of heterogeneous inhibitory spiking networks with delay. Physical Review E, 99(5), 052412.
Mensi, S., Hagens, O., Gerstner, W., & Pozzorini, C. (2016). Enhanced sensitivity to rapid input fluctuations by nonlinear threshold dynamics in neocortical pyramidal neurons. PLOS Computational Biology, 12(2), e1004761.
Montbrió, E., Pazó, D., & Roxin, A. (2015). Macroscopic description for networks of spiking neurons. Physical Review X, 5(2), 021028.
Morrison, A., Diesmann, M., & Gerstner, W. (2008). Phenomenological models of synaptic plasticity based on spike timing. Biological Cybernetics, 98(6), 459–478.
Naud, R., Marcille, N., Clopath, C., & Gerstner, W. (2008). Firing patterns in the adaptive exponential integrate-and-fire model. Biological Cybernetics, 99(4–5), 335–347.
Newman, M. E. J. (2003). The structure and function of complex networks. SIAM Review, 45(2), 167–256.
Oja, E. (1982). A simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15(3), 267–273.
Ostojic, S. (2014). Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature Neuroscience, 17(4), 594–600.
Painchaud, V., Doyon, N., & Desrosiers, P. (2022). Beyond Wilson–Cowan dynamics: Oscillations and chaos without inhibition. Biological Cybernetics, 116(5–6), 527–543.
Payeur, A., Guerguiev, J., Zenke, F., Richards, B. A., & Naud, R. (2021). Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits. Nature Neuroscience, 24(7), 1010–1019.
Pena, R. F. O., Zaks, M. A., & Roque, A. C. (2018). Dynamics of spontaneous activity in random networks with multiple neuron subtypes and synaptic noise: Spontaneous activity in networks with synaptic noise. Journal of Computational Neuroscience, 45(1), 1–28.
Pfister, J.-P., & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. Journal of Neuroscience, 26(38), 9673–9682.
Renart, A., de la Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., & Harris, K. D. (2010). The asynchronous state in cortical circuits. Science, 327(5965), 587–590.
Rio, E. (2017). Asymptotic theory of weakly dependent random processes (Vol. 80). Springer.
Rosenblatt, M. (1956). A central limit theorem and a strong mixing condition. Proceedings of the National Academy of Sciences of the United States of America, 42(1), 43–47.
Roxin, A., Brunel, N., Hansel, D., Mongillo, G., & van Vreeswijk, C. (2011). On the distribution of firing rates in networks of cortical neurons. Journal of Neuroscience, 31(45), 16217–16226.
Sanzeni, A., Histed, M. H., & Brunel, N. (2022). Emergence of irregular activity in networks of strongly coupled conductance-based neurons. Physical Review X, 12(1), 011044.
Schmeltzer, C., Kihara, A. H., Sokolov, I. M., & Rüdiger, S. (2015). Degree correlations optimize neuronal network sensitivity to sub-threshold stimuli. PLOS One, 10(6), e0121794.
Shadlen, M. N., & Newsome, W. T. (1994). Noise, neural codes and cortical organization. Current Opinion in Neurobiology, 4(4), 569–579.
Tartaglia, E. M., & Brunel, N. (2017). Bistability and up/down state alternations in inhibition-dominated randomly connected networks of LIF neurons. Scientific Reports, 7(1), 11916.
van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293), 1724–1726.
Vegué, M., & Roxin, A. (2019). Firing rate distributions in spiking networks with heterogeneous connectivity. Physical Review E, 100(2), 022208.
Vogels, T. P., & Abbott, L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25(46), 10786–10795.
Wang, B., & Aljadeff, J. (2022). Multiplicative shot-noise: A new route to stability of plastic networks. Physical Review Letters, 129(6), 068101.
Zenke, F., & Gerstner, W. (2017). Hebbian plasticity requires compensatory processes on multiple timescales. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 372(1715), 20160259.
Zillmer, R., Brunel, N., & Hansel, D. (2009). Very long transients, irregular firing, and chaotic dynamics in networks of randomly connected inhibitory integrate-and-fire neurons. Physical Review E, 79(3), 031909.

Competing Interests

The authors have declared that no competing interests exist.

Author notes

Handling Editor: Marcus Kaiser

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
