Abstract
In recurrent networks of leaky integrate-and-fire neurons, mean-field theory has been instrumental in capturing the statistical properties of neuronal activity, such as firing rate distributions. This theory has been applied to networks with either homogeneous synaptic weights and heterogeneous connections per neuron or vice versa. Our work expands mean-field models to include networks with both types of structural heterogeneity simultaneously, particularly focusing on those with synapses that undergo plastic changes. The model introduces a spike trace for each neuron, a variable that rises with neuron spikes and decays in the absence of activity, governed by a degradation rate rp and the neuron’s firing rate ν. When the ratio α = ν/rp is sufficiently large, this trace effectively estimates the neuron’s firing rate, allowing synaptic weights at equilibrium to be determined by the firing rates of connected neurons. This relationship is incorporated into our mean-field formalism, providing exact solutions for firing rate and synaptic weight distributions at equilibrium in the high-α regime. Beyond this limit, the model remains accurate within a practical range of degradation rates, as demonstrated through simulations with networks of excitatory and inhibitory neurons. This approach sheds light on how plasticity modulates both activity and structure within neuronal networks, offering insights into their complex behavior.
Author Summary
Networks of spiking neurons are complex systems where the structure of connections and the activity patterns generated are deeply intertwined, a relationship often studied using mathematical approaches like the mean-field theory. However, previous studies have primarily focused on networks with limited structural variability, where either the connection strength is nearly identical across the network or the number of connections varies little from one neuron to another. This work takes a step forward by combining both types of structural variability and allowing connection strengths to adapt over time, thereby providing an extended mean-field theory. We derive exact solutions for the distribution of spiking rates and connection strengths at equilibrium and demonstrate their accuracy through numerical simulations, even beyond the defining parameter ranges, offering a more comprehensive and realistic perspective on the interplay between activity and structure in neuronal networks.
INTRODUCTION
Synthetic networks of spiking neurons have been widely used to model the spontaneous activity of neuronal assemblies (Akil, Rosenbaum, & Josić, 2021; Amit & Brunel, 1997a, 1997b; Angulo-Garcia, Luccioli, Olmi, & Torcini, 2017; Cimeša, Ciric, & Ostojic, 2023; Duchet, Bick, & Byrne, 2023; Fusi & Mattia, 1999; Galán, 2008; Gast, Knösche, & Schmidt, 2021; Hartmann, Lazar, Nessler, & Triesch, 2015; Hennequin, Vogels, & Gerstner, 2012; Hutt, Mierau, & Lefebvre, 2016; Koch Ocker, 2023; La Camera, 2022; Lonardoni et al., 2017; Luccioli, Angulo-Garcia, & Torcini, 2019; Ostojic, 2014; Painchaud, Doyon, & Desrosiers, 2022; Pena, Zaks, & Roque, 2018; Sanzeni, Histed, & Brunel, 2022; Tartaglia & Brunel, 2017; Vogels & Abbott, 2005; Zillmer, Brunel, & Hansel, 2009). A common way to model the spiking activity of individual neurons is by means of the so-called leaky integrate-and-fire (LIF) model (Brunel & van Rossum, 2007; Gerstner, 2002). Despite being simple compared with more detailed spiking models (Izhikevich, 2004), the LIF model can reproduce some of the features observed in real neuronal assemblies when implemented on synthetic networks. For example, LIF neurons that receive both excitatory (E) and inhibitory (I) inputs that approximately compensate each other exhibit spike trains that are irregular and compatible with Poisson statistics, in agreement with the spontaneous activity measured experimentally (Shadlen & Newsome, 1994). In network models in which the average E input is compensated by the average I input for all neurons, such a balance can be maintained by the network dynamics because a temporary increase in the E activity rapidly induces an increase in the I activity (and vice versa) due to the recurrent connectivity (Renart et al., 2010; van Vreeswijk & Sompolinsky, 1996).
A clear advantage of the LIF model is that it is amenable to an analytical treatment: The mean-field theory of LIF and balanced neuronal networks can be used to describe the spiking statistics in the stationary state (Brunel, 2000). This theory takes into account not only the mean synaptic input received by individual neurons but also the input fluctuations caused by the irregularity of the spiking process in the presynaptic neurons. In a balanced state, in which the total mean synaptic input is close to zero, the spike emission is mainly driven by these input fluctuations. From the pioneering work of Amit and Brunel (1997a, 1997b), such mean-field formalisms have been used to predict the mean firing rate and firing rate distributions in networks of LIF neurons with various degrees of structural heterogeneity, including networks whose connectivity structure does not change in time (nonplastic) and networks whose synaptic efficacies have been modified by some plasticity mechanism. The term structural heterogeneity here refers either to a neuron-to-neuron variability in the number of inputs received from the network (i.e., the in-degree) or to a synapse-to-synapse variability in the synaptic efficacy (i.e., the synaptic weight).
Previous work has dealt with these two sources of structural heterogeneity separately. On the one hand, mean-field theory has been applied to nonplastic networks with homogeneous E/I synaptic weights and whose in-degrees are either homogeneous over different neurons (Brunel, 2000), slightly heterogeneous (as in networks with Erdős-Rényi connectivity) (Roxin, Brunel, Hansel, Mongillo, & van Vreeswijk, 2011), or determined by an arbitrary joint in-/out-degree distribution (Vegué & Roxin, 2019). In the latter case, the joint degree distribution can include correlations between individual in- and out-degrees, and this was shown to have an important influence on the resulting stationary distribution of firing rates (Vegué & Roxin, 2019). On the other hand, Amit and Brunel studied the case of networks with fixed in-degrees in which E/I synaptic weights are independently drawn from prescribed distributions, including networks whose weight distributions have been previously shaped by a learning process in which different subsets of neurons have been selected to store a set of activity patterns (Amit & Brunel, 1997b).
Together, these contributions (among several others) highlight the fact that any structural heterogeneity (be it a heterogeneity of in-degrees or of synaptic weights) causes a heterogeneity of firing rates in the stationary state and that this should be taken into account by the mean-field equations (Roxin et al., 2011; Vegué & Roxin, 2019). However, when the in-degrees are the same for all neurons, the heterogeneity of synaptic weights can be neglected under some conditions, and this greatly simplifies the mean-field equations and their steady-state solution (Brunel, 2000).
The aim of the present work is to study a rather general scenario in which both sources of structural heterogeneity (the one relative to the degree distribution and that of the weight distribution) are simultaneously taken into account. We assume that the connectivity structure (“who connects to whom”) is determined by a time-invariant “scaffold” on top of which the actual efficacy of every particular connection is defined by the synaptic weight, which can be stable in time or plastic. Hence, we consider networks without structural plasticity that can, however, exhibit synaptic plasticity (Caroni, Donato, & Muller, 2012; Knoblauch, Palm, & Sommer, 2010). The connectivity scaffold is determined by a joint in-/out-degree distribution but is fully random otherwise (i.e., configuration model; Cooper & Frieze, 2004; Newman, 2003). We examine synaptic weights under two distinct conditions. In the first scenario, the weights remain unchanged over time yet exhibit variability among the different synaptic connections. Each weight is independent and conforms to a predetermined distribution (model A). In the second scenario, we explore a dynamic setting where synaptic weights evolve over time, adjusting in response to neuronal activity according to a specific plasticity rule (model B).
As is common in models of spike-timing-dependent plasticity (STDP), the plasticity in our model is mediated by the introduction of spike traces (Akil et al., 2021; Chen & Jasnow, 2010; di Volo, Burioni, Casartelli, Livi, & Vezzani, 2014; Gerstner, Kistler, Naud, & Paninski, 2014; Morrison, Diesmann, & Gerstner, 2008; Pfister & Gerstner, 2006; Wang & Aljadeff, 2022). A trace associated with one neuron is a variable that increases by a fixed amount every time the neuron emits a spike and decays over time in the absence of neuronal activity. It may represent a chemical signal that is released in response to firing (Pfister & Gerstner, 2006). Its characteristic degradation rate is a measure of how fast the “memory” about the neuron’s spiking history is lost. In this work, we consider a single trace per neuron, in contrast with the usual implementation of plasticity rules based on spike traces where every neuron’s spikes contribute to different traces that differ in their degradation rate (i.e., two traces per neuron in pair-based STDP rules; Morrison et al., 2008; and three or more in triplet models; Pfister & Gerstner, 2006).
If the firing process is stochastic and controlled by an intrinsic firing rate ν, the trace itself is a stochastic variable whose temporal evolution depends on ν and on the trace’s degradation rate, rp. We perform a mathematical analysis of the trace, assuming that the spiking process is Poisson and the firing rate is constant in time. We show that the trace’s stationary probability density can be analytically computed and, in particular, (a) the trace can be rescaled so that it equals the firing rate on average and (b) the fluctuations around this mean vanish in the limit in which the ratio α = ν/rp goes to infinity. Close to this limit, the trace can be used to estimate the underlying firing rate with high accuracy (we say that the trace is reliable).
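As a simple numerical check (a sketch of ours, not the paper's code; the spike increment c and the rescaling rpR/c are the natural choices given the definitions above), one can simulate the trace of a Poisson neuron and verify that the rescaled trace concentrates around ν as α = ν/rp grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_stats(nu, r_p, c=1.0, T=1000.0, n_trials=100):
    """Simulate the spike trace of a Poisson neuron with rate nu (Hz).
    Between spikes the trace decays at rate r_p; each spike adds c.
    Returns mean and std of the rescaled trace r_p*R/c at time T."""
    finals = []
    for _ in range(n_trials):
        t, R = 0.0, 0.0
        while True:
            dt = rng.exponential(1.0 / nu)   # next inter-spike interval
            if t + dt > T:
                R *= np.exp(-r_p * (T - t))  # decay until the end time
                break
            t += dt
            R = R * np.exp(-r_p * dt) + c    # decay, then spike increment
        finals.append(r_p * R / c)           # rescaled trace, mean -> nu
    return np.mean(finals), np.std(finals)

# Fluctuations shrink as alpha = nu/r_p grows (Campbell's theorem gives a
# coefficient of variation of 1/sqrt(2*alpha) at equilibrium)
for r_p in (1.0, 0.1, 0.01):                 # alpha = 5, 50, 500 at nu = 5 Hz
    mean, std = trace_stats(nu=5.0, r_p=r_p)
    print(f"alpha = {5.0/r_p:5.0f}: mean = {mean:4.2f} Hz, std = {std:4.2f} Hz")
```

The shrinking standard deviation reflects the stated limit: the mean of the rescaled trace equals ν, while its relative fluctuations vanish as α → ∞, making the trace a reliable rate estimator.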
In our plasticity model, the temporal evolution of each synaptic weight follows an ordinary differential equation (ODE) that depends on the weight itself and on the pre- and postsynaptic traces. Close to the limit of reliable traces, the synaptic weight at equilibrium can be thus expressed as a function of the firing rates underlying the pre- and postsynaptic spiking processes. This is the key ingredient that allows us to link the microscopic description of the neuronal activity (in terms of voltages, spikes, and traces) with its macroscopic mean-field description at equilibrium (in terms of firing rates).
We extend previous mean-field formalisms (Amit & Brunel, 1997a, 1997b; Brunel, 2000; Roxin et al., 2011; Vegué & Roxin, 2019) to networks with the two sources of structural heterogeneity described earlier, without and with plasticity (Models A and B, respectively). The solution to the mean-field equations allows the reconstruction of the firing rate distribution (and the synaptic weight distribution in model B) at equilibrium. This is done by invoking the central limit theorem (CLT), which allows us to jointly regard the mean and the variance of the input to a given neuron, which a priori depend upon a whole unknown rate (and, in Model B, weight) probability distribution, as a Gaussian random vector that depends on a limited number of unknown statistics. These unknowns are computed by solving the mean-field equations, which are self-consistent because the network is recurrent and, thus, the input and the output statistics are linked. The equations are exact in the limit of reliable traces, but they already provide accurate results when the degradation time constant of the trace, τp = 1/rp, is of the order of seconds. This is shown by comparing the analytically computed rate/weight distributions with those obtained from simulating the whole process on a network.
THE NEURONAL NETWORK MODEL
The Neuronal Dynamics
The external input defined by Equation 2c has the same structure, but we assume that it is generated from a pool of Kext external neurons that is unique for each postsynaptic neuron i. Each time appearing in Equation 2c is the arrival time of the kth spike emitted by the jth external neuron to neuron i. The synaptic weights from the external pool are assumed to be all the same for the sake of simplicity. While the spikes within the neuronal network are generated from the neurons’ voltages crossing the threshold, the external spikes are assumed to come from independent Poisson processes with a fixed rate νext.
The Connectivity Structure
We assume that A is a random instantiation from an ensemble of possible binary adjacency matrices. The ensemble is characterized by a joint in-/out-degree probability density function (p.d.f.) ρin, out in the following sense: The set of degree pairs is a sample of N independent instantiations of a two-dimensional random vector that is distributed according to ρin, out.
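Such a configuration-model scaffold can be sampled in a few lines (a minimal sketch of ours; the example sampler assumes independent in-/out-degrees, and the function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def directed_configuration_model(n, sample_degrees):
    """Wire a directed network from a joint in-/out-degree sampler.
    sample_degrees(n) must return two integer arrays (k_in, k_out)."""
    k_in, k_out = sample_degrees(n)
    # The total number of in-stubs must equal the total number of out-stubs;
    # one common fix is to trim the surplus at random.
    while k_in.sum() > k_out.sum():
        k_in[rng.choice(np.flatnonzero(k_in > 0))] -= 1
    while k_out.sum() > k_in.sum():
        k_out[rng.choice(np.flatnonzero(k_out > 0))] -= 1
    targets = np.repeat(np.arange(n), k_in)   # one entry per incoming edge
    sources = np.repeat(np.arange(n), k_out)  # one entry per outgoing edge
    rng.shuffle(sources)                      # random stub matching
    A = np.zeros((n, n), dtype=int)
    A[targets, sources] = 1   # A[i, j] = 1 means j -> i (multi-edges collapsed)
    return A

# Example: independent in-/out-degrees, both with mean 25
A = directed_configuration_model(
    1000, lambda n: (rng.poisson(25, n), rng.poisson(25, n)))
print(A.sum(axis=1).mean())   # empirical mean in-degree, close to 25
```

To sample correlated in-/out-degrees (as allowed by ρin,out), the sampler would draw the two arrays jointly instead of independently.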
Contrary to A, matrix W(t) may change in time depending on pre- and postsynaptic neuronal activity. We explore the following two models for W(t) (see Figure 1):
Model A. The structure of W is fixed in time: W(t) = W. Each synaptic weight wij is generated independently of the others from a common arbitrary weight probability distribution.
Model B. The matrix W(t) is plastic: Each weight wij(t) changes in time as a function of the activity of the pre- and postsynaptic neurons j and i. The details of the plasticity mechanism are given in the next section.
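For Model A, the weight matrix can be populated on top of the scaffold in a few lines (a sketch using the gamma-distributed weights that appear later in the figures; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def model_A_weights(A, mean_w, var_w):
    """Model A sketch: draw one i.i.d. weight per existing synapse from a
    gamma distribution with the prescribed mean and variance
    (shape kappa, scale theta: mean = kappa*theta, var = kappa*theta**2)."""
    theta = var_w / mean_w
    kappa = mean_w / theta
    W = np.zeros(A.shape)
    mask = A > 0
    W[mask] = rng.gamma(kappa, theta, size=mask.sum())
    return W

# Usage on an arbitrary binary scaffold (for inhibitory synapses, use -W)
A = (rng.random((300, 300)) < 0.05).astype(int)
W = model_A_weights(A, mean_w=0.3, var_w=0.2)
print(W[A > 0].mean())   # close to the prescribed mean, 0.3
```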
Schematics of the network structure. In both models, the network is built on a scaffold that is fixed and determines which neurons can be connected. The scaffold is such that individual in- and out-degrees follow a prescribed joint distribution defined by a density ρin,out. On top of this scaffold, the synaptic weights vary from one synapse to the next. In model A, weights are fixed in time, and every weight is an independent realization of a common random variable following a prescribed distribution with density ρw. In model B, every weight is plastic, and it varies in time as a function of the spiking traces of the pre- and postsynaptic neurons involved in the connection.
These models can be easily extended to networks with more than a single neuronal type or population (e.g., E and I neurons). For the sake of clarity, the mathematical analysis presented in the main text corresponds to a network with a single population, but we show and discuss results on networks with two populations in some of the main text’s figures. We provide the full mathematical analysis of the extended models in Supporting Information Sections S3 and S5.
The Plasticity Rule
The trace Ri represents the concentration of a chemical signal that is released every time neuron i fires and that is degraded over time at a rate rp = 1/τp. This chemical signal could correspond to glutamate bound to its receptors, intracellular calcium, and second messengers, among many others (Pfister & Gerstner, 2006). The constant τp acts as a “memory” parameter: It determines how much the spikes emitted in the past still have an impact on the trace at the present moment. A large τp implies that the trace degradation is slow so the spiking memory is large, and vice versa. In the limit τp → ∞, there is no trace degradation and Ri(t) simply counts the total number of spikes emitted up to time t (assuming that Ri was 0 at time 0). Note that while many models of STDP assume the presence of two or more traces per neuron with distinct characteristic time constants (Morrison et al., 2008; Pfister & Gerstner, 2006), here, we assume a single trace per neuron.
As we pointed out before, the trace and its normalized counterpart are stochastic variables due to the stochastic nature of the spiking process. As we show later, there is, nonetheless, a parameter regime in which the fluctuations of the normalized trace around its average can be neglected. This is the regime for which we develop the mean-field formalism when the weights are plastic.
MEAN-FIELD FORMALISM
We are interested in studying macroscopic properties of the system defined by Equations 1–2c when the structure of the synaptic weights is heterogeneous and possibly plastic (i.e., given by Models A and B). We use the term macroscopic property to denote a feature that statistically characterizes the neuronal ensemble, regardless of its microscopic details. For example, a microscopic description of the activity of a single neuron in the network is provided by its spike train, that is, the collection of times at which the neuron has spiked. However, a simpler and probably more meaningful measure of the neuron’s activity is provided by the average number of spikes the neuron has emitted per unit time, that is, its firing rate. At the network level, the distribution of firing rates in the stationary state is a macroscopic property of the neuronal ensemble that is informative of the overall activity level in the network as well as of how heterogeneous this activity is. We notice that the term stationary here refers to the fact that the firing rates are stable in time. Despite this statistical stability, the underlying stochastic neuronal network remains highly active, with individual neurons still firing irregularly and exhibiting fluctuations. In the case of plastic synaptic weights, we are also interested in determining what the distribution of these weights will be in the stationary state.
The mean-field theory of networks of LIF neurons with homogeneous degrees or homogeneous synaptic weights has been extensively studied (Amit & Brunel, 1997a, 1997b; Brunel, 2000; Roxin et al., 2011; Vegué & Roxin, 2019). The formalism that we present here is an extension of this theory to networks that are heterogeneous in terms of both degrees and synaptic weights. The assumptions under which the system is well described macroscopically by the mean-field theory carry over, namely:
individual neurons fire as Poisson processes, so that Equation 2b is treated as a nondeterministic equation and Equation 1 becomes a stochastic differential equation;
each of these Poisson processes is defined by its characteristic firing rate and they are independent once the firing rates are known;
the absolute value of every synaptic weight is small compared with the threshold Vθ, and the total number of inputs received by each neuron is large.
For Condition 1 to be approximately fulfilled, it is enough that the expectation of the total input current’s integral over a time window of length τ (i.e., the quantity μi defined later by Equation 25; see Supporting Information Section S2) be below threshold (Feng, 2003, Chapter 15). In this case, the spiking process is mainly driven by the input fluctuations and it is thus irregular. Condition 2 is fulfilled when the set of presynaptic neurons to a given neuron has a small overlap from one neuron to another. This is accomplished when the network structure is random and sparse (that is, when the in-degrees are small compared with the network’s size, for all i). Condition 3 depends on the degree distribution, on wext and on the weight distribution chosen or the plasticity rule in place. Note that, according to Conditions 2 and 3, the in-degrees are large in absolute terms but small compared with the network’s size. We assume that all these conditions are approximately fulfilled in our networks.
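As a back-of-the-envelope illustration of Condition 1, assuming the input integral μi takes the usual form τ(Kwν + Kextwextνext), the external contribution alone, with the parameter values used later in the figures, is:

```python
# Back-of-the-envelope check of Condition 1 for the external drive alone,
# with the parameter values used in the figures: tau = 20 ms, K_ext = 1000,
# w_ext = 0.14 mV, nu_ext = 8 Hz.
tau, K_ext, w_ext, nu_ext = 0.020, 1000, 0.14, 8.0
mu_ext = tau * K_ext * w_ext * nu_ext
print(f"external mean input: {mu_ext:.1f} mV")   # 22.4 mV > V_theta = 20 mV
# The external drive alone is suprathreshold, so the (inhibitory) recurrent
# input must pull the total mean input back below threshold for the firing
# to stay fluctuation-driven and irregular.
```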
In the following sections, we derive the mean-field equations that allow us to analytically predict the firing rate and synaptic weight distributions in the stationary state. We first provide a brief mathematical analysis of the spike trace and its normalized counterpart. To make the derivation clearer and easier to follow, we start by analyzing the much simpler case of a network with statistically equivalent neurons. We then move to the heterogeneous scenarios defined by Models A and B.
The Spike Trace
Trace-dependent estimated firing rate as defined by Equations 4 and 20 for different values of the true firing rate ν and the memory constant τp. In each plot, we show three examples of temporal trajectories for ν = 1 Hz, ν = 5 Hz, and ν = 10 Hz, all of them starting at t = 0. The dashed lines indicate the predicted mean value at equilibrium. To the right of each plot, we show the p.d.f. of the estimated firing rate at equilibrium, both analytical (obtained by numerically solving Equation 17 for the density of R and appropriately rescaling) and empirical (obtained by simulating an ensemble of 50,000 independent trajectories and computing the resulting histogram at time t = 5τp).
The biological interpretation of this assumption is that the trace is slowly degraded, so the memory of the trace on the spiking activity is large. This allows the synaptic weights to respond only to slow temporal variations of the firing rates.
Introduction to the Mean-Field Formalism
From now on, we assume that our system is in a state characterized by a stationary distribution of firing rates, so in order to simplify the notation, we remove the asterisks (*) on the stationary weights and firing rates.
Network with Equivalent Neurons
We start by studying the simplest scenario: when all the neurons in the network are statistically identical. This occurs when the in-degrees are the same for all the neurons, and the synaptic weights are either the same for all the synapses (Model A) or evolve in time according to the same form of plasticity rule (Model B). This homogeneity results in a homogeneity of firing rates and synaptic weights in both models, so the problem’s unknown is the stationary firing rate ν (together with the stationary weight w in Model B).
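In this homogeneous case, the stationary rate can be found by damped fixed-point iteration. The sketch below assumes the standard diffusion-approximation transfer function for the LIF neuron (Brunel, 2000); the recurrent weight value is an illustrative choice of ours, while the remaining parameters follow the paper's figures:

```python
import numpy as np
from scipy.special import erfcx  # exp(x**2) * erfc(x), numerically stable

# Neuron/network parameters (from the paper's examples where given;
# the recurrent weight w is an illustrative inhibitory choice)
tau, tau_r = 0.020, 0.002            # membrane and refractory time (s)
V_th, V_r = 20.0, 10.0               # threshold and reset (mV)
K, w = 25, -0.3                      # in-degree and recurrent weight (mV)
K_ext, w_ext, nu_ext = 1000, 0.14, 8.0

def transfer(mu, sigma):
    """Stationary rate of an LIF neuron with white-noise input (mu, sigma),
    via the standard diffusion-approximation formula (Brunel, 2000)."""
    u = np.linspace((V_r - mu) / sigma, (V_th - mu) / sigma, 2000)
    f = erfcx(-u)                    # integrand exp(u**2) * (1 + erf(u))
    integral = np.sum((f[:-1] + f[1:]) * np.diff(u)) / 2.0
    return 1.0 / (tau_r + tau * np.sqrt(np.pi) * integral)

# Self-consistency nu = transfer(mu(nu), sigma(nu)), solved by damped iteration
nu = 5.0
for _ in range(200):
    mu = tau * (K * w * nu + K_ext * w_ext * nu_ext)
    sigma = np.sqrt(tau * (K * w**2 * nu + K_ext * w_ext**2 * nu_ext))
    nu = 0.9 * nu + 0.1 * transfer(mu, sigma)
print(f"self-consistent rate: {nu:.2f} Hz")
```

The damping factor is not essential here but keeps the iteration stable when the transfer function is steep near threshold.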
Heterogeneous Network with No Plasticity (Model A)
We move now to the case of heterogeneous connectivity with synaptic weights that are constant in time. The binary connectivity structure is defined by a joint distribution of in-/out-degrees. The synaptic weights are generated independently from a known weight distribution.
Also, the postsynaptic firing rate can be assumed to be independent of a presynaptic rate νj when the synaptic weight wij is small enough that the influence of a single presynaptic neuron on a postsynaptic neuron is negligible. For all these reasons, the presynaptic rates ν1, …, νKi can be regarded as i.i.d. random variables whose distribution is independent of i.
If the neuron is randomly chosen in the network, its identity variables Ki, Yi, Zi are random variables: Ki is distributed according to the in-degree distribution imposed in the network, and (Yi, Zi) is a normal bivariate vector with zero mean and covariance matrix Σ = Σ(θ), as stated by Equation 37. The vector (Yi, Zi) is independent of Ki, and the identity vectors of all the neurons, X1, …, XN, are i.i.d. In sum, the whole rate distribution in the network can be reconstructed from only two rate statistics: m and s2.
Once θ is known, the network’s firing rate distribution can be numerically reconstructed by creating a large sample of triplets {Xi}i and then using it to compute the corresponding sample of firing rates by applying Equation 40(a and b). Note that to create the triplet sample, we must use the in-degree distribution in the network, not the in-degree distribution among presynaptic neurons as in Equations 41 and 42.
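The reconstruction step can be sketched in code. The sketch below is schematic and rests on assumptions of ours: Equation 40 is not reproduced here, so we use the diffusion-approximation transfer function, take illustrative values for θ = (m, s²) and the weight moments, and neglect the correlated fluctuation Zi of the input variance, keeping only the CLT fluctuation Yi of the input mean:

```python
import numpy as np
from scipy.special import erfcx

rng = np.random.default_rng(3)

# Assumed already-solved statistics theta = (m, s2) and weight moments
# (illustrative values; in the paper these come from solving Equation 43)
m, s2 = 15.0, 9.0                  # rate mean and variance (Hz, Hz^2)
Ew, Ew2 = -0.3, 0.29               # E[w], E[w^2] of the inhibitory weights
tau, tau_r, V_th, V_r = 0.020, 0.002, 20.0, 10.0
K_ext, w_ext, nu_ext = 1000, 0.14, 8.0

def transfer(mu, sigma):
    """Diffusion-approximation LIF transfer function (Brunel, 2000)."""
    u = np.linspace((V_r - mu) / sigma, (V_th - mu) / sigma, 400)
    f = erfcx(-u)                                  # exp(u**2) * (1 + erf(u))
    integral = np.sum((f[:-1] + f[1:]) * np.diff(u)) / 2.0
    return 1.0 / (tau_r + tau * np.sqrt(np.pi) * integral)

# Sample identity variables: K_i from the network's in-degree distribution,
# Y_i a standard Gaussian capturing the CLT fluctuation of the input mean
n = 2000
K = rng.poisson(25, n)
Y = rng.standard_normal(n)
var_wnu = Ew2 * (s2 + m**2) - (Ew * m)**2          # Var[w * nu] per synapse
mu_i = tau * (K * Ew * m + np.sqrt(K * var_wnu) * Y + K_ext * w_ext * nu_ext)
sig_i = np.sqrt(tau * (K * Ew2 * m + K_ext * w_ext**2 * nu_ext))
rates = np.array([transfer(mu, sg) for mu, sg in zip(mu_i, sig_i)])
print(f"rate sample: mean {rates.mean():.1f} Hz, std {rates.std():.1f} Hz")
```

A histogram of `rates` would then be the analytical prediction to compare against the simulated network, as in Figure 3C.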
To compare this result with the stationary rate distribution obtained from simulating the whole network, we took a network composed of N = 1,000 I neurons with fixed in-degree K = 25. The incoming neighbors were chosen randomly, resulting in normally distributed out-degrees (Figure 3A). The synaptic weights were taken from a gamma distribution. Figure 3 shows the comparison as we vary the mean synaptic weight. The mean and standard deviation of the rate distribution (Figure 3B) and the rate distribution itself (Figure 3C) are well predicted by the theory, for different values of the external firing rate νext.
Firing rate distributions in three inhibitory networks with heterogeneous synaptic weights. (A) Details of the connectivity structure. Left: in-/out-degree histogram (the same in the three networks), with fixed in-degree and variable out-degree; right: synaptic weight histograms. The synaptic weights have been created independently with wij ∼ −Gamma(κ, θ) with prescribed mean and variance Var[w] = 0.2 (units in millivolts), the parameters being such that the mean magnitude is κθ and Var[w] = κθ² (the three networks only differ in the value of the mean weight). (B) Mean ± standard deviation of the firing rate distribution at equilibrium as a function of the external firing rate νext. (C) Complete firing rate histogram in the three networks for three values of the external firing rate: 7.0 Hz (left), 7.5 Hz (middle), and 8.5 Hz (right). In all the plots, the empirical results come from integrating the full neuronal dynamics on networks with N = 1,000 neurons. The analytical results are obtained by numerically solving Equation 43 on θ = (m, s2). The neuronal parameters are: K = 25 (in-degree), τ = 20 ms, Vθ = 20 mV, Vr = 10 mV, τr = 2 ms, Kext = 1,000, and wext = 0.14 mV.
Heterogeneous Network with Plastic Synaptic Weights (Model B)
As in the previous case, the identity variables Ki, Yi, Zi are random: Ki is distributed according to the in-degree distribution imposed in the network and (Yi, Zi) is a normal bivariate vector with zero mean and covariance matrix Σ = Σ(θ) (see Equation 52). The vector (Yi, Zi) is independent of Ki, and the identity vectors of all the neurons, X1, …, XN, are i.i.d.
This formalism can be extended to networks composed of E and I neurons as we detail in Supporting Information Sections S4 and S5.
We verified that the described equations can predict the weight and firing rate distributions in the stationary state. To do so, we first simulated the microscopic dynamics of a network composed of NE excitatory and NI inhibitory neurons with NE = NI = 1,000 in which all the synaptic weights were plastic. The plasticity rule for E synapses was inspired by Oja’s rule (Oja, 1982); see Equations 7, 10, and 12. The I rule was taken to be analogous but with the opposite sign to simplify the resulting mean-field equations.
In our example network, degrees from/to the E population were normally distributed and independent, whereas the in-degrees from the I population were fixed (the I incoming neighbors were chosen randomly, resulting in normally distributed I out-degrees); see Figure 4A, B. The reason for including I neurons in the network is that the network should be approximately balanced for it to reach a stationary state with irregular (and, hence, close to Poisson) firing and low firing rates. The raster plots in Figure 4C show this irregular firing. Figure 4D shows the mean and standard deviation of the rate and weight distributions as the external firing rate is increased, for three choices of the plasticity parameter g0 (see Equation 12). A sample of the corresponding distributions is given in Figure 5, showing a very good agreement between theory and simulations.
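The mechanism behind a single plastic synapse can be illustrated with a minimal Oja-like sketch (this is not the paper's exact rule; Equations 7, 10, and 12 are not reproduced here, and the way g0 enters below is an assumption of ours): two Poisson neurons, their rescaled spike traces, and a weight that relaxes to a function of the pre- and postsynaptic rates.

```python
import numpy as np

rng = np.random.default_rng(4)

tau_p = 50.0           # trace time constant (s), as in the paper's simulations
eps = 1e-4             # learning rate (illustrative)
g0 = 0.2               # weight scale in mV (how g0 enters Eq. 12 may differ)
nu_pre, nu_post = 5.0, 10.0
dt, T = 0.01, 2000.0

r_pre = r_post = 0.0   # rescaled traces: stationary mean equals the rate
w = 0.0
for _ in range(int(T / dt)):
    # Poisson spikes; each spike bumps the rescaled trace by 1/tau_p
    r_pre += -r_pre * dt / tau_p + rng.poisson(nu_pre * dt) / tau_p
    r_post += -r_post * dt / tau_p + rng.poisson(nu_post * dt) / tau_p
    # Oja-like rule: Hebbian growth ~ r_pre*r_post, decay ~ w*r_post**2,
    # giving the equilibrium w* = g0*nu_pre/nu_post for reliable traces
    w += eps * dt * (g0 * r_pre * r_post - w * r_post**2)

print(f"w = {w:.3f} mV (predicted: {g0 * nu_pre / nu_post:.3f} mV)")
```

With τp = 50 s the traces are reliable (α = ντp ≫ 1), so the weight settles near the value predicted from the rates alone, which is exactly the link exploited by the mean-field formalism.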
Firing rate and synaptic weight statistics in E-I networks with plastic synaptic weights. (A) Each neuron in the network has four characteristic degrees: the in-/out-degrees coming from/going to the E and I populations (KE,in, KE,out, KI,in, KI,out). (B) E and I in-/out-degree histogram (statistically identical in all the networks), with normally distributed E in-degrees, E out-degrees, and I out-degrees and a fixed I in-degree. The four degrees associated to each neuron are independent random variables. (C) Spike times (top) and instantaneous population firing rates (bottom) when νext = 8.0 Hz for three choices of the plasticity parameter g0 (see Equations 10, 12). (D) Mean ± standard deviation of the E/I firing rate (top) and E synaptic weight (bottom) distributions at equilibrium as a function of the external firing rate νext for the same three networks of (C). In all the plots, the empirical results come from integrating the full neuronal dynamics on networks with NE = NI = 1,000 neurons. The analytical results are obtained by numerically solving Equation 56. The E in-/out-degrees are normally distributed with μK = 25, σK = 7. The I in-degree is the same for all the neurons. The neuronal parameters are τ = 20 ms, Vθ = 20 mV, Vr = 10 mV, τr = 2 ms, Kext = 1,000, and wext = 0.14 mV. The remaining plasticity parameters are ε = 0.001 ms−2 and τp = 50 s.
E/I firing rate and E synaptic weight histograms in the three networks with plastic synaptic weights of Figure 4. The histograms are shown for the three choices of the external firing rate.
We also investigated to what extent this agreement extends to plastic networks composed solely of E neurons. In a network of this kind, if the external firing rate is large enough, the hypothesis of Poissonian firing cannot be guaranteed, which can place the network outside the parameter range in which the analytical solution is valid. Surprisingly, we found that for many choices of the external rate, the analytical prediction matches the simulations quite well; see Figures 6B, C and 7. However, there seems to be a restricted range of external rates for which the network activity exhibits some degree of synchrony and regular firing, and in this case, the analytical prediction does not match the empirical results. This is the case for the network with g0 = 0.4 mV and νext = 7 Hz in Figures 6B, C. This range coincides with the range in which the network activity shifts from a low-firing to a high-firing state (Figure 6B).
Firing rate and synaptic weight statistics in five excitatory networks with plastic synaptic weights. (A) In-/out-degree histogram (the same in all the networks), with normally distributed in-degree and out-degree and no correlation between individual degrees. (B) Mean ± standard deviation of the firing rate (left) and synaptic weight (right) distributions at equilibrium as a function of the external firing rate νext. The five networks only differ in the parameter g0 of the plasticity rule (see Equations 10, 12). (C) Spike times (top) and instantaneous population firing rate (bottom) for the network with g0 = 0.4 mV for the three choices of νext. In all the plots, the empirical results come from integrating the full neuronal dynamics on networks with N = 1,000 neurons. The analytical results are obtained by numerically solving Equation 56 on . The in-/out-degrees are with μK = 25, σK = 7. The neuronal parameters are τ = 20 ms, Vθ = 20 mV, Vr = 10 mV, τr = 2 ms, Kext = 1,000, and wext = 0.14 mV. The remaining plasticity parameters are ε = 0.001 ms−2 and τp = 50 s.
Firing rate and synaptic weight histograms in the five excitatory networks with plastic synaptic weights. The networks are those of Figure 6, and the histograms are shown for the four values of the external firing rate: 6.5, 7.5, 8.0, and 9.5 Hz.
DISCUSSION
We have derived a set of mean-field equations that bridge the gap between a microscopic and a macroscopic description of the neuronal activity in a heterogeneous network of LIF spiking neurons at equilibrium. Whereas the microscopic description is given in terms of membrane voltages and spike times, in the macroscopic description, the neuronal activity is represented by the neurons’ firing rates (i.e., the average number of spikes emitted per unit time). Although this kind of mean-field formalism has been widely used before, the main contribution of the present work is to generalize it so that it applies to networks in which two sources of structural heterogeneity may be present at the same time: a heterogeneity in terms of in- and out-degrees and a heterogeneity in terms of synaptic weights, including weights that have been shaped by an activity-dependent plasticity mechanism.
In the nonplastic scenario, the synaptic weights were assumed to be independent variables drawn from a common probability distribution. In the model with plasticity, every neuron had an associated spike trace (which could represent the concentration of a chemical that increases every time the neuron emits a spike and is degraded over time; see Pfister & Gerstner, 2006 for possible interpretations), and the instantaneous variation of every synaptic weight was a function of the pre- and postsynaptic traces. We assumed that the network was in a regime in which, at equilibrium, the traces’ fluctuations around their means are small, so that the traces can be used to approximate the neurons’ firing rates. This is the key step for including the plasticity mechanism in the mean-field equations, because at equilibrium, every synaptic weight can then be considered a known function of the pre- and postsynaptic firing rates.
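To illustrate this approximation, consider a minimal sketch (not the authors' simulation code; the unit jump size and the specific parameter values are assumptions for illustration) in which a trace with degradation time constant τp is driven by a Poisson spike train of rate ν. Its time average concentrates around ντp, so dividing by τp recovers the firing rate:

```python
import numpy as np

rng = np.random.default_rng(0)

nu = 10.0      # true firing rate (Hz)
tau_p = 50.0   # trace degradation time constant (s)
dt = 1e-3      # integration time step (s)
T = 500.0      # total simulated time (s)

n_steps = int(T / dt)
spikes = rng.poisson(nu * dt, size=n_steps)  # Poisson spike counts per bin

x = nu * tau_p                 # start the trace at its theoretical mean
trace = np.empty(n_steps)
decay = 1.0 - dt / tau_p       # Euler step for dx/dt = -x / tau_p
for i in range(n_steps):
    x = x * decay + spikes[i]  # exponential decay plus unit jumps at spikes
    trace[i] = x

x_mean = trace.mean()
nu_hat = x_mean / tau_p        # firing rate estimated from the trace
```

With ντp = 500 ≫ 1, the estimate nu_hat typically lands within a few percent of the true rate; shrinking τp enlarges the trace fluctuations and hence the estimation error, which is the regime where the approximation breaks down.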
Given a (postsynaptic) LIF neuron, its firing rate at equilibrium is a well-defined function of its presynaptic neighbors’ rates and the corresponding synaptic weights. More precisely, it is a function of two important quantities: the sum (over the presynaptic neighbors) of the presynaptic rates times the weights, Sμ, and the sum of the presynaptic rates times the squares of the weights, Sσ. The firing rates (and the synaptic weights in the plastic model) are, however, not known a priori: The purpose is precisely to compute them analytically. For this, another key step is necessary: Under reasonable hypotheses, the aforementioned sums can be assumed to be sums over identically distributed random variables, which allows us to apply the central limit theorem (CLT) to deduce that they are jointly normally distributed. This step reduces the complexity of the problem from computing the whole rate/weight distribution to computing just a few statistical parameters that characterize the normal vector (Sμ, Sσ). These parameters are computed by invoking their definitions as statistics related to the firing rate distribution, which yields a set of self-consistent equations in the parameters themselves that can be solved numerically. The results match well with direct simulations of the microscopic dynamics on networks composed of purely excitatory neurons as well as of both E and I LIF neurons.
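The CLT step can be checked numerically with a small sketch (the gamma rate and lognormal weight distributions below are illustrative assumptions, not the distributions used in the paper): for a fixed in-degree K, the sums Sμ = Σ νj wj and Sσ = Σ νj wj² over iid presynaptic variables become approximately normal, and mutually correlated, as K grows:

```python
import numpy as np

rng = np.random.default_rng(1)

K = 200             # in-degree: number of presynaptic neighbors
n_trials = 20_000   # number of postsynaptic neurons sampled

# Hypothetical presynaptic rate (Hz) and weight (mV) distributions
nu = rng.gamma(shape=4.0, scale=2.0, size=(n_trials, K))
w = rng.lognormal(mean=-1.0, sigma=0.3, size=(n_trials, K))

S_mu = (nu * w).sum(axis=1)        # sets the mean synaptic input
S_sigma = (nu * w**2).sum(axis=1)  # sets the synaptic input variance

# Approximate normality: the standardized sum has near-zero skewness
z = (S_mu - S_mu.mean()) / S_mu.std()
skew = np.mean(z**3)

# The two sums are built from the same presynaptic variables, so they
# are strongly correlated, motivating a bivariate normal description
corr = np.corrcoef(S_mu, S_sigma)[0, 1]
```

Because Sμ and Sσ share the same presynaptic rates and weights, their correlation is strong; this is why the formalism must treat (Sμ, Sσ) as a correlated normal vector rather than as two independent Gaussian variables.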
This work is, to our knowledge, the first to tackle the problem of extending previous mean-field formalisms to networks exhibiting heterogeneity in both degrees and synaptic weights simultaneously, including weights that are plastic. It thus offers a step forward in the broader effort to understand and predict the collective behavior of networks of LIF neurons in these scenarios.
Our work has, however, several limitations that should be pointed out. We considered networks composed of LIF neurons because the LIF model is simpler and more amenable to analytical treatment than more realistic models. However, the LIF model is unable to reproduce some of the electrophysiological properties found in real neurons. The effective threshold for firing in real neurons, for example, appears not to be fixed but to depend on the stimulation protocol (Mensi, Hagens, Gerstner, & Pozzorini, 2016), and this can be reproduced by nonlinear IF models like the quadratic (Latham, Richmond, Nelson, & Nirenberg, 2000) or the exponential (Fourcaud-Trocmé, Hansel, van Vreeswijk, & Brunel, 2003) models. Another example is spike-triggered adaptation, a process by which the spike frequency decreases upon sustained firing. Models of IF neurons that include spike-triggered and subthreshold adaptation by means of an additional dynamic variable have been shown to be notably more realistic (Brette & Gerstner, 2005; Hertäg, Hass, Golovko, & Durstewitz, 2012; Izhikevich, 2004; Naud, Marcille, Clopath, & Gerstner, 2008) while still being simple compared with detailed biophysical models like the Hodgkin-Huxley model (Hodgkin & Huxley, 1952). Despite this added complexity, nonlinear and adaptive models have been successfully studied under the lens of mean-field techniques (Brunel, Hakim, & Richardson, 2003; Brunel & Latham, 2003; Fourcaud-Trocmé et al., 2003; Hertäg, Durstewitz, & Brunel, 2014; Montbrió, Pazó, & Roxin, 2015); to what extent the analysis performed here could be extended to them, too, remains an open question.
Another important limitation of our work concerns the plasticity rule. Since the neuronal activity in the microscopic model is given by the spike train, any plasticity rule implemented on such a network should be a spike-dependent rule or, at a slightly more coarse-grained scale, a burst-dependent rule (Payeur, Guerguiev, Zenke, Richards, & Naud, 2021). The mean-field description is, however, given in terms of firing rates, which is why going from one description to the other requires rewriting the plasticity rule at equilibrium in terms of firing rates. A natural way to do so is by considering spike traces: stochastic variables reflecting the spiking activity whose statistics, as we showed, can be directly linked to the underlying firing rates. However, our mean-field formalism assumes that the synaptic weight at equilibrium is fixed once the firing rates are known, which does not allow for weight fluctuations caused by the traces’ fluctuations. The parameter regime in which the spike trace is a reliable estimator of the firing rate (i.e., the regime in which the trace’s fluctuations are small compared with its average) is precisely the regime in which the product of the firing rate and the trace degradation time constant τp is large. This limits the applicability range of our mean-field formulation and, in particular, makes it inapplicable when the plasticity rule in place is a function of both the traces and the spike trains themselves, or a function of several traces per neuron with different characteristic degradation constants, as in pair-based and triplet STDP rules (Morrison et al., 2008; Pfister & Gerstner, 2006). A further step would be to study whether not only the trace averages but also their fluctuations could be taken into account in the mean-field formalism. In this case, every synaptic weight at a given time would be a stochastic variable whose statistics at equilibrium should be introduced into the mean-field formulation.
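The size of these trace fluctuations can be made explicit under the simplifying assumptions of Poisson spiking and unit trace increments (a standard shot-noise calculation via Campbell's theorem, not spelled out in the text). For a trace $x(t) = \sum_i e^{-(t - t_i)/\tau_p}$ driven by spikes at rate $\nu$, the stationary statistics are

```latex
\langle x \rangle = \nu \int_0^{\infty} e^{-s/\tau_p}\,\mathrm{d}s = \nu \tau_p ,
\qquad
\operatorname{Var}(x) = \nu \int_0^{\infty} e^{-2s/\tau_p}\,\mathrm{d}s = \frac{\nu \tau_p}{2} ,
\qquad
\mathrm{CV}(x) = \frac{\sqrt{\operatorname{Var}(x)}}{\langle x \rangle} = \frac{1}{\sqrt{2 \nu \tau_p}} .
```

The coefficient of variation vanishes precisely when the product $\nu \tau_p$ is large, which is the regime in which the trace is a reliable rate estimator.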
A central hypothesis in our mean-field equations is that the network’s structure is such that the in-degrees of two connected neurons are independent variables. This implies that the distribution of in-degrees among the presynaptic neurons to a given postsynaptic neuron is the same for all postsynaptic neurons: It is a network property. This ingredient is central to reducing the space of unknowns down to a set of a few parameters, because we use the fact that the statistics of every input to a neuron are independent of the identity of that neuron. It would be interesting to study how our theory should be modified so as to include assortative or disassortative networks, as was done for assortative networks with homogeneous synaptic weights in Schmeltzer, Kihara, Sokolov, and Rüdiger (2015).
Finally, we did not analyze the stability of the stationary distribution of firing rates predicted by the theory, or whether there is more than one stationary distribution depending on the model’s parameters. We leave such questions for the future.
ACKNOWLEDGMENTS
We acknowledge Calcul Québec and the Digital Research Alliance of Canada for their technical support and computing infrastructures.
SUPPORTING INFORMATION
Supporting information for this article is available at https://doi.org/10.1162/netn_a_00442.
AUTHOR CONTRIBUTIONS
Marina Vegué: Conceptualization; Formal analysis; Investigation; Methodology; Software; Writing – original draft; Writing – review & editing. Antoine Allard: Funding acquisition; Project administration; Writing – review & editing. Patrick Desrosiers: Conceptualization; Funding acquisition; Project administration; Writing – review & editing.
FUNDING INFORMATION
Antoine Allard, Natural Sciences and Engineering Research Council of Canada (https://dx.doi.org/10.13039/501100000038). Patrick Desrosiers, Sentinelle Nord, Université Laval (https://dx.doi.org/10.13039/100020862). Marina Vegué, Ministerio de Universidades (https://dx.doi.org/10.13039/501100023561). Patrick Desrosiers, Natural Sciences and Engineering Research Council of Canada (https://dx.doi.org/10.13039/501100000038). Antoine Allard, Sentinelle Nord, Université Laval (https://dx.doi.org/10.13039/100020862).
TECHNICAL TERMS
- Integrate-and-fire model:
Model for the activity of a neuron that is formulated through (a) a differential equation describing the temporal evolution of the membrane potential V and (b) a mechanism for spike generation that depends on V.
- Spike train:
Collection of spike times for a given neuron.
- Mean-field theory:
Mathematical theory for studying stochastic interacting systems composed of statistically identical units that allows one to describe the evolution of the ensemble’s statistics using a simpler model.
- Stationary state:
State whose statistical properties remain constant in time.
- Firing rate:
The average number of spikes emitted by a neuron per unit time.
- Joint in-/out-degree distribution:
Distribution of the degree pair (in-degree, out-degree) associated with a neuron in a random network model.
- Structural plasticity:
Modification of the connectivity structure (“who connects to whom”) in an interacting network.
- Synaptic plasticity:
Modification of the synaptic weights in a neuronal network.
- Spike-timing-dependent plasticity (STDP):
Model of synaptic plasticity in which the evolution of every synaptic weight depends on the specific timing of pre- and postsynaptic spikes emitted by the two neurons involved in the connection.
Competing Interests
Competing Interests: The authors have declared that no competing interests exist.
Author notes
Handling Editor: Marcus Kaiser