The study of brain activity spans diverse scales and levels of description and requires the development of computational models alongside experimental investigations to explore integration across scales. The high dimensionality of spiking networks presents challenges for understanding their dynamics. To tackle this, a mean-field formulation offers a potential approach for dimensionality reduction while retaining essential elements. Here, we focus on a previously developed mean-field model of adaptive exponential integrate-and-fire (AdEx) networks used in various research works. We observe qualitative similarities in the bifurcation structure but quantitative differences in mean firing rates between the mean-field model and AdEx spiking network simulations. Although the mean-field model does not accurately predict the phase shift during transients and oscillatory input, it generally captures the qualitative dynamics of the spiking network's response to both constant and varying inputs. Finally, we offer an overview of the dynamical properties of the AdEx mean-field model to assist future users in interpreting their simulation results.

The study of brain activity is a complex and multifaceted field that involves examining different scales and levels of description. In addition to experimental work in neuroscience, computational models have been developed to investigate brain activity at different scales, ranging from molecular to whole-brain levels. In this article, we are interested in models reproducing the dynamical aspects of neuronal membrane excitability, networked through a synaptic communication model. These networks include an excitatory population and an inhibitory population, similar to what is observed in the cortex (di Volo et al., 2019). They help study the complex dynamics associated with the large dimensionality of these systems (Carlu et al., 2020; Depannemaecker et al., 2022). However, high dimensionality can also limit the understanding and representation of the dynamics (Depannemaecker et al., 2023). Moreover, when moving to a larger spatial scale to constitute a network model of cerebral regions (Goldman et al., 2020), keeping this level of description makes the system too complex to be understood and too computationally demanding to be simulated. In these cases, it is useful to reduce the dimensionality of the system while keeping the key elements needed to achieve the function of the model. A possible way of reduction is the formulation of a mean-field model. This study focuses on the analysis of a mean-field model previously developed (di Volo et al., 2019; El Boustani & Destexhe, 2009; Zerlaut & Destexhe, 2017) and already used in different studies (Capone et al., 2019; Carlu et al., 2020; Chemla et al., 2019; di Volo et al., 2019; di Volo & Férézou, 2021; Goldman et al., 2023, 2020; Tesler et al., 2022, 2023; Zerlaut et al., 2018). Such approaches may focus on the intrinsic dynamics of the corresponding spiking network (Capone et al., 2019; Carlu et al., 2020; di Volo et al., 2019; El Boustani & Destexhe, 2009), on experimental results (Chemla et al., 2019; di Volo & Férézou, 2021; Zerlaut et al., 2018), or on clinically observable measures (Goldman et al., 2023; Tesler et al., 2022, 2023). However, despite many uses, no systematic study of the mean-field model's properties and dynamical structure has been published. Therefore, we propose to complete the description and characterization intended for future users of this model and thus to help them interpret their results.

We propose a methodology, and the results are given for one specific mean-field derivation (i.e., for a particular fit of the transfer function; see section 2); each user may need to apply these approaches to characterize their specific model or parameterization. In section 2, we detail the construction of this model, and in section 3, we detail the essential elements of the structure underlying its dynamics. We then propose a characterization of its implementation in numerical simulations. Finally, in the discussion, we situate this model in the broad panorama of mean-field models, explaining its advantages and limits.

In this section, we first describe the adaptive exponential integrate-and-fire model (Brette & Gerstner, 2005) that constitutes the spiking network and then the corresponding mean-field model. The details of the equations and the parameter values are available in supplementary Tables 1 and 2, based on the proposition of Nordlie et al. (2009).

2.1  AdEx Network

A network of 10,000 spiking neurons is constructed with a connection probability of 5%, following a sparse and random (Erdős–Rényi type) architecture. The network consists of 20% inhibitory fast-spiking (FS) neurons and 80% excitatory regular-spiking (RS) neurons, following the proportions observed in the cerebral cortex, as previously modeled (Carlu et al., 2020; Zerlaut et al., 2018). The membrane excitability dynamics of a single neuron are described by the AdEx model (Brette & Gerstner, 2005), corresponding to the following equation,
(2.1)
with Cm the membrane capacitance, gL the leak conductance, EL the leak reversal potential, ΔT the slope factor at spike initiation, Vth the spike initiation threshold, τw the adaptation time constant, a the subthreshold adaptation, Isyn the synaptic current, and the variables Vm the membrane voltage and w the adaptation current (see supplementary Table 1A5 for their values). The parameters for the excitatory and inhibitory neurons are different; in particular, inhibitory neurons do not have adaptation (b = 0 and a = 0). When an action potential is emitted (i.e., the membrane potential crosses a threshold), the system is reset as in equation 2.2 during a refractory time tref (5 ms by default):
(2.2)
with Vpeak spike detection threshold, Vreset voltage reset, and b spike-triggered adaptation (see supplementary Table 1A5 for their values).
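For reference, equations 2.1 and 2.2 follow the standard AdEx formulation of Brette and Gerstner (2005); up to notation and the parameter values given in supplementary Table 1, they read

\[
C_m \frac{dV_m}{dt} = -g_L (V_m - E_L) + g_L \Delta_T \exp\!\left(\frac{V_m - V_{th}}{\Delta_T}\right) - w + I_{syn}, \qquad
\tau_w \frac{dw}{dt} = a (V_m - E_L) - w,
\]

and, when Vm crosses the spike detection threshold Vpeak, the reset rule

\[
V_m \leftarrow V_{reset}, \qquad w \leftarrow w + b
\]

is applied during the refractory time tref.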
The interaction between AdEx neurons occurs through synaptic conductances given by
(2.3)
where EE = 0 mV and EI = −80 mV are the reversal potentials of excitatory and inhibitory synapses. For each incoming spike, the excitatory and inhibitory conductances gE and gI are increased respectively by the values QE = 1.5 nS and QI = 5 nS. The conductances decrease following an exponential decay with a time constant τsyn = 5 ms according to equation 2.4.
(2.4)
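From the description above, equations 2.3 and 2.4 take the conductance-based form

\[
I_{syn} = g_E (E_E - V_m) + g_I (E_I - V_m), \qquad
\frac{dg_{E,I}}{dt} = -\frac{g_{E,I}}{\tau_{syn}},
\]

with gE incremented by QE (respectively gI by QI) at each incoming excitatory (respectively inhibitory) spike.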

The network is simulated using the simulator NEST (Hahne et al., 2021).
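For orientation, a minimal NEST 3 sketch of such a network is given below. This is not the simulation script used here; single-neuron parameter values not stated in the text are placeholders, and the external drive rate is illustrative.

import nest

nest.ResetKernel()

# 10,000 neurons, 80% excitatory (RS) and 20% inhibitory (FS), connection probability 5%.
common = {"C_m": 200.0, "g_L": 10.0, "E_L": -65.0, "V_th": -50.0, "Delta_T": 2.0,  # placeholder values
          "t_ref": 5.0, "E_ex": 0.0, "E_in": -80.0, "tau_syn_ex": 5.0, "tau_syn_in": 5.0}
rs = nest.Create("aeif_cond_exp", 8000, params={**common, "a": 4.0, "b": 60.0, "tau_w": 500.0})  # a, b, tau_w: placeholders
fs = nest.Create("aeif_cond_exp", 2000, params={**common, "a": 0.0, "b": 0.0})  # FS neurons have no adaptation

conn = {"rule": "pairwise_bernoulli", "p": 0.05}
# Quantal increments Q_E = 1.5 nS and Q_I = 5 nS; negative weights target the inhibitory conductance.
nest.Connect(rs, rs + fs, conn, syn_spec={"weight": 1.5})
nest.Connect(fs, rs + fs, conn, syn_spec={"weight": -5.0})

# External excitatory Poissonian drive (rate in spikes/s, illustrative value).
drive = nest.Create("poisson_generator", params={"rate": 4000.0})
nest.Connect(drive, rs + fs, syn_spec={"weight": 1.5})

nest.Simulate(5000.0)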

2.2  AdEx Mean-Field

Following the formalism proposed previously (El Boustani & Destexhe, 2009), di Volo and colleagues (2019) derived a mean-field model of AdEx networks with adaptation. The differential equations describe the time evolution of the firing rate νμ of each population μ = e, i (equation 2.5), the covariance cλη between populations λ and η (equation 2.7), and the average adaptation W of the excitatory population (equation 2.8):
with
(2.9)

The parameter T, the time constant of the firing-rate and covariance equations, comes from the mean-field approach used here (El Boustani & Destexhe, 2009).

The parameter T defines the temporal resolution of the neural network model and is crucial in capturing its dynamics accurately. It determines the timescale over which network activity is averaged, ensuring fluctuations and correlations are captured. The Markovian assumption underlying this approach implies that the future state of the network depends solely on its current state within the timescale T. For a sufficiently small T, the master equation method can be applied, but as T approaches zero, activity fluctuations become significant, causing the transition function to diverge. This resembles experimental bin sizes, where excessively small bins yield sporadic fluctuations and reduce the transition operator to binary spike states. To avoid this, T must remain finite and align with the network's intrinsic time constants. The timescale T is related to the decay of the autocorrelation, which decreases exponentially over timescales comparable to the system's time constants (see supplementary Figure 1 for some estimation of the timescale). In asynchronous irregular regimes, correlations arise from residual global oscillations due to finite-size effects. Larger networks, with sparse connectivity, allow smaller T values while maintaining stability. However, an oversized T underestimates firing rates in high-activity regimes, while an undersized T overestimates second-order statistics. In practice, T is typically on the order of magnitude of the refractory period. Indeed, choosing T equal to the refractory period ensures each neuron emits at most one spike per interval.
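Schematically (the exact equations 2.5 to 2.9 should be taken from El Boustani & Destexhe, 2009, and di Volo et al., 2019), the model couples a first-order firing-rate equation with a second-order correction in the covariances, together with an evolution equation for the covariances cλη on the same timescale T and an equation for the mean adaptation of the excitatory population:

\[
T \frac{d\nu_\mu}{dt} = F_\mu - \nu_\mu + \frac{1}{2} \sum_{\lambda,\eta} c_{\lambda\eta} \frac{\partial^2 F_\mu}{\partial \nu_\lambda \partial \nu_\eta}, \qquad
\frac{dW}{dt} = -\frac{W}{\tau_w} + b\,\nu_e + \frac{a\,(\mu_V - E_L)}{\tau_w}.
\]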

Figure 1:

Mean-field and network dynamics without adaptation. (A) The bifurcation with respect to the external excitatory input in the variable space of the excitatory and inhibitory mean firing rates. The black line is the mean-field bifurcation diagram with its stability (continuous line for stable fixed points and dashed lines for unstable fixed points). The crosses are estimates of the fixed points obtained from simulations of the spiking neural network. The red crosses and red points are the fixed points associated with the examples on each side. Panels B to F exemplify network and mean-field simulations with an external excitatory input of 10 Hz and 80 Hz. The top panels (B, D) visualize the mean firing rate of the excitatory (red) and inhibitory (blue) neurons from the network simulation and the mean firing rate of the excitatory (orange) and inhibitory (cyan) populations from the mean-field simulation. The bottom panels (C, E) provide the corresponding raster plots of the spiking network simulations. Panels B and C display an asynchronous irregular regime, and panels D and E display a synchronous regular regime.


The parameters b, a, and EL in equation 2.8 directly correspond to the identically named parameters of the AdEx model (equation 2.1; di Volo et al., 2019).

The originality of this mean-field approach is that it is specified by the transfer functions Fμ={e,i}(νe, νi, W). The transfer function Fμ returns the firing rate of a population as a function of the incoming excitatory (νe) and inhibitory (νi) firing rates and the mean adaptation of the excitatory neurons (W). These functions can be estimated using a semianalytical approach (Zerlaut et al., 2018), fitting single-cell simulations with an analytic template. The transfer functions Fe,i for the excitatory and inhibitory populations generally depend on three sources of input, Fe,i(νe + νext, νi), where νext corresponds to a constant firing rate of an external drive. To evaluate the transfer functions, the first step is to measure the output firing rate of the single-neuron model while it receives excitatory and inhibitory firing-rate input. These measurements are necessary because, due to the nonlinearity of the single-neuron dynamics, the output firing rate cannot be inferred analytically, nor can the point at which the system will irretrievably produce a spike. Thus, an effective or phenomenological threshold, Vthre^eff, is considered. It is a function of the statistics of the subthreshold membrane voltage dynamics. These statistics are assumed to be normally distributed, with average membrane voltage μV, standard deviation σV, and autocorrelation time τV; their calculation is described below. The effective threshold function is chosen as a second-order polynomial, giving equation 2.10, where τV^N = τV gL/Cm. The {P} values are found computationally through fitting methods:
(2.10)
According to the semianalytic approach, we fit the transfer function (equation 2.11), where erfc is the complementary error function, and thus obtain the output firing rate:
(2.11)
Considering asynchronous irregular regimes, we assume that the input spike trains follow Poissonian statistics (di Volo et al., 2019; El Boustani & Destexhe, 2009; Zerlaut & Destexhe, 2017). We calculate the averages (μGe, μGi) and standard deviations (σGe, σGi) of the conductances in equations 2.12. In these equations, Ke and Ki are the average input connectivity received from the excitatory and inhibitory populations, respectively. As in the spiking network, τe = τi = τsyn are the synaptic time constants, and Qe and Qi are the quantal conductance increments of the excitatory and inhibitory populations, respectively:
(2.12)
We can then calculate the total input conductance of the neuron, μG, and its effective membrane time constant, τm^eff (equations 2.13), with Cm and gL being the same as in the AdEx model:
(2.13)
Then we can calculate, in equation 2.14, the mean subthreshold voltage assuming that the subthreshold moments (μV, σV, τV) are not affected by the exponential term of the AdEx model, and thus considering only the leakage term and the synaptic inputs:
(2.14)
Thanks to the calculation developed by Zerlaut et al. (2018), we obtain σV and τV in equations 2.15 and 2.16, where Us = (Qs/μG)(Es − μV) and s ∈ {e, i}:
(2.15)
(2.16)
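To summarize the pipeline of equations 2.10 to 2.16, a minimal sketch of the transfer function evaluation is given below (Python). Parameter values not stated in the text are placeholders, sigma_V and tau_V are assumed to be computed from equations 2.15 and 2.16, and the actual implementation normalizes the subthreshold moments before building the polynomial, following Zerlaut et al. (2018).

import numpy as np
from scipy.special import erfc

def transfer_function(nu_e, nu_i, W, sigma_V, tau_V, P,
                      Ke=400, Ki=100, Qe=1.5e-9, Qi=5e-9, tau_syn=5e-3,
                      Ee=0.0, Ei=-80e-3, gL=10e-9, EL=-65e-3, Cm=200e-12):
    """Sketch of the semianalytic transfer function evaluation.
    sigma_V, tau_V: subthreshold moments from equations 2.15-2.16 (not rederived here).
    P: the 10 fitted polynomial coefficients of the phenomenological threshold.
    gL, EL, Cm are illustrative placeholders; Ke, Ki, Qe, Qi, tau_syn, Ee, Ei follow the text."""
    # Mean synaptic conductances under the Poissonian input assumption (equations 2.12).
    mu_Ge = Ke * nu_e * tau_syn * Qe
    mu_Gi = Ki * nu_i * tau_syn * Qi
    # Total input conductance and effective membrane time constant (equations 2.13);
    # tau_m_eff enters the computation of sigma_V and tau_V (equations 2.15-2.16).
    mu_G = gL + mu_Ge + mu_Gi
    tau_m_eff = Cm / mu_G
    # Mean subthreshold voltage from the leak, synaptic inputs, and adaptation (equation 2.14).
    mu_V = (mu_Ge * Ee + mu_Gi * Ei + gL * EL - W) / mu_G
    # Phenomenological threshold: second-order polynomial in the subthreshold moments (equation 2.10).
    tau_VN = tau_V * gL / Cm
    x = np.array([1.0, mu_V, sigma_V, tau_VN,
                  mu_V * sigma_V, mu_V * tau_VN, sigma_V * tau_VN,
                  mu_V**2, sigma_V**2, tau_VN**2])
    V_thre_eff = np.dot(P, x)
    # Output firing rate from the erfc template (equation 2.11).
    return erfc((V_thre_eff - mu_V) / (np.sqrt(2) * sigma_V)) / (2 * tau_V)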

The mean-field is simulated using The Virtual Brain (Sanz-Leon et al., 2015).

2.3  Transfer Function Fitting

The transfer function Fμ={e,i}(νe, νi, W) provides the firing rate of a population for given excitatory (νe) and inhibitory (νi) input firing rates and mean adaptation current (W). To fit the phenomenological threshold of this function, we create a data set of mean firing rates, averaged over 10 seconds, of 50 neurons connected only to one excitatory and one inhibitory Poisson generator and receiving a fixed negative current. We take 20 values of the inhibitory input firing rate (between 0 and 40 Hz) and 20 values of the adaptation current, that is, a constant negative current (between 0 and 500 pA). The 500 excitatory input firing rates (between 0 and 200 Hz) are adjusted to obtain sufficient precision of the output firing rate (the interval between two values is in the range of 0.1 to 1.0 Hz). The mean output firing rate is the average over the 50 neurons after removing outliers (values higher than three times the interquartile range). The phenomenological voltage threshold of the transfer function is fitted following Zerlaut et al. (2018) in two steps. The initial values of the 10 polynomial coefficients of the voltage threshold are set to (effective mean voltage, 1e-3, 1e-3, 1e-3, 1e-3, 0, 0, 0, 0, 0). The first fitting step is the minimization of the mean squared difference between the voltage threshold estimated from the transfer function (Vthre^eff) and the effective threshold estimated from the data set, μV + √2 σV erfc⁻¹(2 νout τV), where νout is the measured output firing rate. The second fitting step minimizes the mean squared difference between the output firing rate of the data set and the one estimated by the transfer function. The coefficients are fitted using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm (Broyden, 1970; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970) implemented in SciPy (Virtanen et al., 2020).
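A minimal sketch of the second fitting step is given below, assuming a data set of measured output rates and the transfer_function sketch above; the function and variable names are illustrative, not the fitting code of this article.

import numpy as np
from scipy.optimize import minimize

def fit_threshold_coefficients(nu_e_in, nu_i_in, W_in, nu_out_data,
                               sigma_V, tau_V, transfer_function):
    """Second fitting step: minimize the mean squared error between measured
    output rates and the rates predicted by the transfer function (BFGS)."""
    def loss(P):
        nu_pred = np.array([transfer_function(ne, ni, w, sv, tv, P)
                            for ne, ni, w, sv, tv
                            in zip(nu_e_in, nu_i_in, W_in, sigma_V, tau_V)])
        return np.mean((nu_pred - nu_out_data) ** 2)

    # Initial values as described in the text: an effective mean voltage (placeholder here)
    # for the constant term, small values for the linear terms, zeros elsewhere.
    P0 = np.array([-50e-3, 1e-3, 1e-3, 1e-3, 1e-3, 0.0, 0.0, 0.0, 0.0, 0.0])
    result = minimize(loss, P0, method="BFGS")
    return result.x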

2.4  Bifurcation Analysis

The bifurcation analysis of the AdEx mean-field has been done using MatCont (version 7.7; Dhooge et al., 2003), a numerical continuation toolbox for MATLAB (MathWorks, 2022). From the equilibrium point of a silent network (0.0, 0.0, 0.0, 0.0, 0.0, 0.0), we compute the change of stability depending on the external excitatory firing rate. The result of the numerical continuation is a stability diagram of the mean-field with a bifurcation point. The steps of the numerical continuation are variable and adjusted depending on the sensitivity of the stability of the fixed points.

2.5  Spike Train Analysis

In this article, we analyze the properties of the spike trains from simulations to validate the mean-field hypotheses and to compare the network's and mean-field's responses to nonconstant inputs. The hypotheses of the mean-field are evaluated using two measures: the timescale of the spike trains and the minimum interspike interval.

The timescale of the network is estimated based on the proposition of El Boustani and Destexhe (2009). They assume that in an asynchronous irregular state, the activity autocorrelation decreases exponentially with the time lag with a timescale similar to the network time constant. An exponential is fitted to the envelope of the autocorrelation positive peaks to estimate the exponential timescale. We use the Elephant library (Denker et al., 2018) to compute the spike trains’ autocorrelation and Scipy’s functions (Virtanen et al., 2020) to detect the peaks and fit the autocorrelation positive peaks to an exponential decay function.
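A minimal sketch of this estimation, applied to a binned population-rate signal instead of the Elephant spike-train autocorrelation used in the actual analysis, could look as follows (illustrative only).

import numpy as np
from scipy.signal import find_peaks
from scipy.optimize import curve_fit

def estimate_timescale(rate, dt):
    """Fit an exponential decay to the envelope of the positive autocorrelation peaks."""
    x = rate - rate.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]  # one-sided autocorrelation
    ac /= ac[0]                                        # normalize to 1 at zero lag
    peaks, _ = find_peaks(ac)                          # positive peaks of the oscillating autocorrelation
    lags = peaks * dt
    expdecay = lambda t, tau, A: A * np.exp(-t / tau)
    (tau, _), _ = curve_fit(expdecay, lags, ac[peaks], p0=[10e-3, 1.0])
    return tau  # estimated network timescale, in the same units as dt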

The minimum interspike interval is the minimum time between two spikes of the same neuron.

The maximum firing rate is 200 Hz (1 spike per 5 ms because of the reset condition).

To analyze the network's response to oscillatory input, we use an inhomogeneous Poisson generator with an oscillating instantaneous rate. The spike rate is set to zero when the oscillation modulation is negative. The analysis of the network's response to this oscillatory Poisson generator is based on quantifying the phase difference between the oscillation of the mean firing rate of the excitatory neurons of the spiking network and the input oscillation. The network oscillation signal corresponds to the firing rate filtered at the input oscillation frequency. The firing rate is defined as a sliding average (5 ms window) of the instantaneous firing rate (0.1 ms resolution).

The phase difference between these two oscillatory signals is defined by the following equation,
(2.17)
where θinput is the phase of the analytic signal given by the Hilbert transformation of the sinusoidal signal and θnetwork is the phase of the analytic signal given by the Hilbert transformation of the network oscillation signal. The network oscillation signal is the histogram of the instantaneous firing rate (bin: 0.1 ms) smoothed with a 5 ms sliding-average window. The absolute value of the meanphase corresponds to the phase-locking value as defined by Lachaux et al. (1999). The angle of the meanphase corresponds to the phase shift (Freeman et al., 2003). Additional analyses are described in supplementary note 3.
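A minimal sketch of this computation, assuming both signals are already filtered at the input frequency and sampled on the same time grid, is given below (the sign convention of the phase difference follows equation 2.17).

import numpy as np
from scipy.signal import hilbert

def phase_locking(input_signal, network_signal):
    """Phase-locking value and phase shift between two narrow-band signals."""
    theta_input = np.angle(hilbert(input_signal))      # instantaneous phase of the input oscillation
    theta_network = np.angle(hilbert(network_signal))  # instantaneous phase of the network oscillation
    meanphase = np.mean(np.exp(1j * (theta_network - theta_input)))
    plv = np.abs(meanphase)            # phase-locking value (Lachaux et al., 1999)
    phase_shift = np.angle(meanphase)  # phase shift (Freeman et al., 2003)
    return plv, phase_shift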

2.6  Mean-Field Analysis

The mean-field signal corresponds to the excitatory mean firing rate filtered at the input oscillation frequency. The rest of the analysis is the same as for the spike trains. The phase difference is calculated based on equation 2.17, where θinput is the phase of the analytic signal given by the Hilbert transformation of the sinusoidal signal and θmean_field is the phase of the analytic signal given by the Hilbert transformation of the mean-field signal. The phase-locking value and the phase shift are extracted from this phase difference and used to quantify the mean-field's response. Additionally, the maximum firing rate of the mean-field corresponds to the maximum value of the mean firing rate. Additional analyses are described in supplementary note 4.

In this section, we provide a characterization of the model, illustrated by specific simulations. The results presented (unless otherwise indicated) are based on a unique set of parameters that have led to a fixed transfer function fitted for each of the populations (inhibitory and excitatory) considered. These results may differ for other fits and parameters, and we invite users of the mean-field AdEx to perform their characterizations for their specific applications.

We start by showing the steady-state solutions of the AdEx mean-field.

3.1  Steady-State and Transfer Function

By construction, the AdEx mean-field (MF) model captures the out-of-equilibrium asynchronous irregular steady state of the spiking neural network as a stable fixed point. The stability analysis, according to the parameter νext, the external input, provides a bifurcation diagram. This bifurcation diagram can complete our understanding of the observed dynamics while building a network of MF (Goldman et al., 2020; Zerlaut et al., 2018).

A qualitative comparison of the mean firing rate for a subset of external inputs characterizes the link between the MF's bifurcation diagram and the spiking neural network state. The external input is a constant parameter for the MF and the frequency of the Poissonian spike generator for the network. The central plot of Figure 1 illustrates the stability of the MF (black line) for different levels of external input and the corresponding mean firing rate of the steady state of the network. The bifurcation diagram shows three regions of stability in the space of the two variables, the mean firing rates of the two populations, which also exist for the network. The side panels of the diagram show two simulations corresponding to two examples taken from the stable regions of the MF with the associated network simulations. The qualitative comparison of these two measures displays similar behavior and shows that the MF captures the steady state of the network for different inputs. Remarkably, in the region with a very high firing rate (approximately 200 Hz), the network is no longer in an asynchronous irregular regime, but the MF still captures the mean firing rate. Furthermore, the qualitative comparison for different values of the spike-triggered adaptation indicates that the mean-field captures the steady state of the network in the presence of the adaptation current (see Figure 2). However, looking into the details, significant quantitative differences appear for specific values of the external input. Additionally, the time series show a significant difference in the excitatory mean firing rate between the MF and the network during the transient from the same initial state. The first peak is present only in the network mean firing rate. The transient from comparable initial conditions is not captured by the MF.

Figure 2:

Mean-field and network dynamics with adaptation. Panels A, C, and E compare the mean firing rate of the network and mean-field simulations with an external excitatory input of 50 Hz for different values of spike-triggered adaptation: 0.0 pA, 30.0 pA, and 60.0 pA, respectively. Panels B, D, and F are the corresponding raster plots of the spiking network simulations.


Before looking into the details, it is essential to note that the measures of the mean and standard deviation of the firing rate of a network's steady state present variability in time and depend on the window size. Supplementary Figures 7 and 8 show the distribution of these measures over 30 seconds for different window sizes (for details, see supplementary note 1). This confirms that the distribution depends on the network state. To reduce variability in the measurement, the steady state of the network is measured over 4 seconds after discarding 1 second of transients. Figure 3 compares the MF fixed points and the corresponding mean firing rate of the steady state of the network. The results show the firing rate of each population for three different values of the parameter b, which directly affects the range of the adaptation variable w. This adaptation variable is a slower variable, playing an essential role in modeling phenomena such as neuromodulation (Goldman et al., 2020; Tort-Colet et al., 2023).

Figure 3:

Impact of spike-triggered adaptation on network dynamics. The figure shows the bifurcation diagram, which depends on the external excitatory input for three different values of spike-triggered adaptation (black: 0 pA, blue: 30 pA, green: 60 pA). The three curves represent the stability of the fixed point (a continuous line indicates a stable fixed point, and a dashed line represents an unstable fixed point). The crosses on the diagram are an approximation of the stable fixed point obtained through simulations. Panels A and B correspond respectively to the excitatory and inhibitory population diagrams. Panels C and D are a specific focus on the region of interest for inputs between 0 and 50 Hz.


Figure 3 confirms that the MF captures the qualitative behavior of the network. However, the MF model overestimates the mean firing rate when it is higher than 150 Hz. For mean firing rates under 150 Hz, the mean-field underestimates or overestimates depending on the value of the spike-triggered adaptation, b. The difference is nonlinear in the external excitatory input. It may be reduced by fitting the population's transfer function to the network's steady state in a specific range of values. However, the structure remains qualitatively the same: a stable fixed point at zero input (giving a firing rate of 0) that soon becomes unstable through a Hopf bifurcation at negative or near-zero input values, depending on the level of adaptation b. After another Hopf bifurcation, it becomes stable again, entering the main region of interest of the MF, where the corresponding spiking network exhibits asynchronous irregular dynamics and where the range of firing rates observed corresponds to what can be observed experimentally in the cortex (Ison et al., 2011; Roxin et al., 2011). It remains stable for an extended range of inputs (approximately 80 Hz without adaptation and more than 100 Hz with adaptation b = 30 pA or b = 60 pA). A bistability exists, with a higher fixed point around 200 Hz. This phenomenon captures the hysteresis observed in the spiking network simulations (see Figure 9 in the supplementary material). Supplementary Figures 4, 5, and 6 provide the four additional dimensions of the bifurcation diagram. They also demonstrate that the MF does not quantitatively capture the variances and covariance. This may be linked to the size of the windows (see supplementary Figures 7 and 8).

Figure 4:

Fitting of transfer functions. Panels A, C, and E are dedicated to the excitatory neuron, while the panels B, D, and F are dedicated to the inhibitory neuron. (A, B) The top graph represents the transfer function of the mean-field fitted to the data. (C, D) The middle graph shows the data used for the fitting, which is the mean firing rate of 50 independent neurons. (E, F) The bottom graph displays the error between the transfer function and the data. Each line is associated with a different value of inhibitory firing rate (20 values evenly distributed between 0 Hz and 40 Hz).


Our previous method for estimating the steady state of the network is limited to one state and cannot provide evidence of the existence of bistability. By reducing the external input of the Poisson generator by 1 Hz every 10 seconds for a network in a steady state around 200 Hz, we estimate that the bistability exists for external firing rates under 51 Hz. Similarly, by increasing the external input by 1 Hz every 10 seconds for a steady-state network under 150 Hz, we estimate that the bistability exists for external firing rates over 48 Hz. Supplementary Figures 9 and 10 show that the MF overestimates the extent of the bistability (for details, see supplementary note 1). Previously, the comparison was also limited to a specific network and one noise realization. To examine the generalization of the MF, the estimation of the steady state of the network was repeated 30 times. The results demonstrate low variability of the steady state except when the external input is close to 50 Hz, where the network changes basin of attraction (see supplementary Figure 3).

The crucial aspect for the accuracy of the MF AdEx is the fitted transfer function that specifies the dynamics in the equations. As detailed in section 2, the final expression is based on the semianalytical approach, building the link between the single-neuron response to inhibitory and excitatory input and the MF. We aim to characterize this relation in Figure 4. This figure provides the result of the analysis of the MF transfer function's ability to capture the single neuron's firing rate in response to stationary excitatory input. Panels A and B show the fitted MF transfer functions. Panels C and D show the single-neuron firing rate as a function of excitatory input. Panels E and F show the absolute error corresponding to the difference between the MF transfer function and the firing rate of the single neuron. The results indicate that the transfer function captures the firing rate well, with the maximum error occurring when the inhibitory firing rate is null. For the transfer function of the excitatory neurons, the maximum error was found to be 4.0e1 Hz, with an error of 8.0e1 Hz and a relative error of 5.6e-3 Hz for the range between 0 Hz and 200 Hz (see supplementary Figure 12). It becomes much lower if we consider only the range between 0 Hz and 50 Hz, where the maximum error was found to be 3.4e0 Hz, with an absolute error of 1.7e-1 Hz and a relative error of 2.2e-2 Hz. For the transfer function of the inhibitory neurons, between 0 Hz and 200 Hz, the maximum error was found to be 3.4e1 Hz, with an absolute error of 5.7e1 Hz and a relative error of 4.8e-3 Hz (see supplementary Figure 12). If we only consider the range between 0 Hz and 50 Hz, the maximum error was 1.7e0 Hz, with an absolute error of 6.9e-2 Hz and a relative error of 6.9e-3 Hz. In this article, the fitting of the transfer functions includes the effect of the adaptation currents by using data from simulations of neurons receiving constant negative currents. This was done to reduce the discrepancy between MF and network firing rates when the spike-triggered adaptation increases, as shown in di Volo et al. (2019). For this set of parameters, the effect of the adaptation does not have a significant impact (below the fitting errors obtained without this additional data), as shown in supplementary Figure 13.

Given the crucial role of the transfer function in the MF dynamics, we analyze the sensitivity of the MF to the precision of the polynomial coefficients of the excitatory transfer function (for details, see supplementary note 1). The sensitivity analysis examines the variation of the 10 Hz fixed point (far from a bifurcation) depending on the number of significant digits of the polynomial coefficients. An accuracy of 10 Hz requires at least one significant digit, an accuracy of 1 Hz requires at least three significant digits, and an accuracy of 0.1 Hz requires at least four significant digits. However, the closer the network dynamics are to a bifurcation point, the more sensitive the fixed point's stability becomes to the precision of the coefficients. Furthermore, for some specific states of the MF, the mean firing rate becomes negative (see supplementary Figure 14 and supplementary note 1). Consequently, the MF becomes undefined because the transfer function is not defined for negative firing rates (which would be unrealistic).

However, another possible source of the differences between the network and the MF observed in Figure 3 is that the network may not stay in an asynchronous irregular regime while the input increases. As our focus was on the MF, we did not explore the network behaviors in more detail, which would require an entirely separate study.

In conclusion, the MF captures the different network steady states qualitatively well. Still, some quantitative differences are observed, which may be improved by better fitting the transfer function to a specific region of interest for a specific application of this model.

3.2  Dynamical Comparison

This part of the analysis concerns the dynamical response of the MF to nonconstant inputs. Indeed, this model is used to constitute networks of MF models of cortical regions (Zerlaut & Destexhe, 2017) or for whole-brain modeling (Goldman et al., 2020). In these simulations, the focus extends beyond the network's steady state to include its response to nonconstant inputs. The network behavior may or may not follow the different assumptions of our formalism. We tested two assumptions using varying external Poissonian inputs: that the system is Markovian and that the timescale (associated with the interval between two spikes) exceeds 5 ms. Supplementary Figure 2 shows the range of validity of these two hypotheses of the MF.

Beyond the MF assumptions, we compare the MF and network dynamics in response to an oscillatory input. Figure 5 shows that the phase-locking values between the input signal and the network or MF are close to 1.0, that is, synchronized. The only exception is when the network receives a signal with low amplitude or low frequency; this can come from the variability of the input of the inhomogeneous Poisson generator. However, the phase shift analysis clearly shows the difference between the network and the MF. The MF's phase shift is more negative than the network's and is in antiphase with the input for high amplitude and high firing rate (yellow area). This area also corresponds to a maximum firing rate close to 200 Hz. On the contrary, the network has a maximum firing rate of 10 Hz to 20 Hz. This means that the network has a different dynamical structure and that the attraction basin of the central fixed point differs from that of the MF; that is, the network and the MF tend to diverge for the same input and initial condition. Including adaptation does not change these observations but increases the mismatch already noted (see supplementary Figure 15). Nevertheless, the mismatches become more significant if the oscillatory input is centered around 0.0 Hz. As the mean firing rate is not attracted to the central fixed point, this creates a difference in the phase-locking values and the phase shift (see supplementary Figures 16 and 17). The adaptation pushes the network into antiphase, reducing the discrepancy between the MF and the network for the phase shift. However, the other observations remain the same.

Figure 5:

Comparison between the mean-field and the spiking network in the presence of oscillatory input with a mean rate of 7 Hz. Panels A, B, and C represent the measures for the excitatory population of the spiking neural network, and panels D, E, and F illustrate the measures for the excitatory mean firing rate of the mean-field. The three measures displayed here are the phase-locking value (A and D), the phase shift between the oscillatory input and the mean firing rate in rad (B and E), and the excitatory population's maximum firing rate in Hz (C and F).


Many different approaches are possible to capture brain dynamics at the mesoscopic scale (Bandyopadhyay et al., 2021; Brunel & Hakim, 2015; Cakan & Obermayer, 2021; El Boustani & Destexhe, 2009; Huang & Lin, 2018; Montbrió et al., 2015; Nykamp & Tranchina, 2000; Schmutz et al., 2020; Schwalger & Chizhov, 2019; Wilson & Cowan, 1972; Richardson, 2007, 2009; Augustin et al., 2017), and each of these approaches is relevant to capture specific features of the dynamics of spiking networks. Some specificities of the mean-field method studied here are the possibility of considering sparse and random connectivity, conductance-based interactions, and the finite size of the populations. This method has been applied to different single-neuron models (Alexandersen et al., 2023; Carlu et al., 2020; di Volo et al., 2019; El Boustani & Destexhe, 2009), but the version most used to build large-scale models is based on the AdEx model. Thus, we did not aim to compare it with other mean-field methods (for this, we recommend the discussion in Augustin et al., 2017). Instead, we aim to offer relevant information and characterization approaches for this specific AdEx-MF model.

The present characterization of the AdEx mean-field (MF) indicates that the MF provides good qualitative insights into the complex behavior of networks of adaptive exponential integrate-and-fire neurons with conductance-based synapses. The quantitative analysis, however, reveals significant differences. The approximation of the MF is tightly connected to its transfer function and, in particular, to the precision of the phenomenological threshold. Moreover, this mean-field approximation does not quantitatively capture the dynamical behaviors of the network for varying inputs because some of its assumptions have been relaxed.

The MF's bifurcation analysis reveals the coexistence of multiple fixed points depending on the external input. In this work, we validated the coexistence of multiple steady states in spiking neural networks. Additionally, the MF captures fixed points around 200 Hz despite the nonvalidity of the MF's assumption that the network regime is asynchronous irregular. This means that the MF captures the network's saturation (for a given T). However, the quantitative comparison of the fixed points and the steady states presents a mismatch, which is accentuated by increasing the spike-triggered adaptation of the excitatory neurons. This mismatch has multiple causes, such as the size of the windows for the measures, the approximation of the transfer function, or the degree of network synchronization.

To evaluate the influence of these causes, we quantify the approximation of the transfer function. This quantification shows that the errors are not uniform and increase with the output firing rate of the neuron. However, the approximation can be improved if the range of input and output firing rates is reduced. Additionally, a sensitivity analysis of the polynomial coefficients of the phenomenological threshold of the transfer function shows that lowering the imprecision to 0.1 Hz requires at least four significant digits.

For nonconstant input, we showed that the Markovian and irregularity assumptions of the mean-field remain valid. However, despite respecting these two assumptions, the comparison of the dynamics between the MF and the network presents a good match for synchronization with the input but a mismatch in mean firing rate, as previously, and a mismatch in the phase shift with the input (which could be due to violation of the adiabatic or asynchronous irregular state assumptions). These mismatches depend on the average input and are insensitive to the adaptation current.

This study is limited to a specific set of parameters and mainly to a specific network connectivity. However, the realization of 30 other networks shows a low variance except around the change of basin of attraction. Additionally, the qualitative dynamics are similar to those presented in di Volo et al. (2019). The analyses do not provide exhaustive confirmation of all of the MF's hypotheses, such as the gaussian distribution of the membrane voltage, the adiabatic process, and the equivalence of the input spike trains of each neuron to a Poisson generator. This explains the partial conclusions of the study, but the analysis provides a characterization that helps the interpretation of MF simulations and clarifies the link between the MF and the network. Additionally, we do not provide a study of the different versions of the phenomenological threshold (di Volo et al., 2019; Zerlaut & Destexhe, 2017; Zerlaut et al., 2018), which can reduce the MF's approximation error. For example, the version of Zerlaut et al. (2018) provides a better approximation because the difference in phase shift is smaller; still, it was realized with a different transfer function and simulator. Indeed, the implementation of the Poisson generator may differ between simulation environments, such as NEST (Diesmann et al., 2002) and Brian 2 (Stimberg et al., 2019), which can have an impact on the network regimes, particularly at high frequency. Good practice is to perform a characterization of the MF for each specific application. The MF provides good insight into the network for steady states. Indeed, we observe a qualitative similarity in the bifurcation structure but note quantitative discrepancies in the mean firing rates of the recurrent network between the mean-field model and simulations of the AdEx network. Additionally, disparities are evident during transient periods and with oscillatory inputs. Overall, the mean-field model approximates the qualitative dynamical responses of the spiking network to both constant and varying inputs. Consequently, using the mean-field in a network does not ensure its correspondence with the spiking neural network; additional validation is required before interpreting the MF as a substitute for it.

Such an approach can be applied, through a transfer function, to obtain mean-field models of very complex neuron models (Alexandersen et al., 2023; Carlu et al., 2020; Zerlaut & Destexhe, 2017). Future work could include a systematic characterization of these other mean-field approximations.

In conclusion, we have characterized the different dynamical behaviors of the AdEx mean-field model and provided the corresponding bifurcation diagrams. The availability of such diagrams is useful when searching for specific states of the system, so we anticipate that this information will be beneficial not only for understanding the behavior of these MF models but also for better interpreting simulations of large-scale dynamics using networks of synaptically connected MF models at the mesoscale (Zerlaut et al., 2018) or whole-brain scale (Goldman et al., 2020, 2023).

We thank Mallory Carlu, Matteo di Volo, and Jennifer Goldman for stimulating discussions. This research has received funding from CNRS and the European Union’s Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement 945539 (Human Brain Project SGA3) and Specific Grant Agreement 785907 (Human Brain Project SGA2). This project has also received funding from the European Union’s Horizon Europe Programme under Specific Grant Agreement 101147319 (EBRAINS 2.0 Project). This work has benefited from a government grant managed by the Agence Nationale de la Recherche (ANR) under the France 2030 program, ANR-22-PESN-0012 and has received funding from the European Union’s Horizon Europe Programme under Specific Grant Agreement 101137289 (Virtual Brain Twin Project).

We declare that we have no competing interests.

Conceptualization: L.K., D.D., A.D., V.J. Formal analysis: L.K. Funding acquisition: V.J., A.D. Methodology: L.K., D.D. Project administration: V.J., A.D. Resources: V.J., A.D. Software: L.K. Investigation: L.K., D.D. Visualization: L.K., D.D. Supervision: D.D., A.D., V.J. Writing - original draft: L.K., D.D. Writing - review and editing: L.K., D.D., A.D., V.J.

The code for the simulation and analysis of this article is freely available under the Apache 2.0 license at https://github.com/lionelkusch/compare zerlaut/tree/paper. The analysis results of each exploration are registered in databases available on Zenodo (https://doi.org/10.5281/zenodo.1460192113010) with the associated code. Supplementary note 2 describes their contents.

Alexandersen, C. G., Duprat, C., Ezzati, A., Houzelstein, P., Ledoux, A., Liu, Y., . . . Depannemaecker, D. (2023). A mean-field to capture asynchronous irregular dynamics of conductance-based networks of adaptive quadratic integrate-and-fire neuron models. bioRxiv.
Augustin, M., Ladenbauer, J., Baumann, F., & Obermayer, K. (2017). Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: Comparison and implementation. PLOS Computational Biology, 13(6), e1005545.
Bandyopadhyay, A., Rabuffo, G., Calabrese, C., Gudibanda, K., Depannemaecker, D., Ivanov, A., . . . Petkoski, S. (2021). Mean-field approximation of network of biophysical neurons driven by conductance-based ion exchange. bioRxiv.
Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94(5), 3637–3642.
Broyden, C. G. (1970). The convergence of a class of double-rank minimization algorithms 1. General considerations. IMA Journal of Applied Mathematics, 6(1), 76–90.
Brunel, N., & Hakim, V. (2015). Population density model. In Encyclopedia of computational neuroscience (pp. 2447–2465). Springer.
Cakan, C., & Obermayer, K. (2021). Correction: Biophysically grounded mean-field models of neural populations under electrical stimulation. PLOS Computational Biology, 17(2), e1008717.
Capone, C., di Volo, M., Romagnoni, A., Mattia, M., & Destexhe, A. (2019). State-dependent mean-field formalism to model different activity states in conductance-based networks of spiking neurons. Physical Review E, 100(6), 062413.
Carlu, M., Chehab, O., Dalla Porta, L., Depannemaecker, D., Héricé, C., Jedynak, M., & di Volo, M. (2020). A mean-field approach to the dynamics of networks of complex neurons, from nonlinear integrate-and-fire to Hodgkin–Huxley models. Journal of Neurophysiology, 123(3), 1042–1051.
Chemla, S., Reynaud, A., Volo, M. d., Zerlaut, Y., Perrinet, L., Destexhe, A., & Chavane, F. (2019). Suppressive traveling waves shape representations of illusory motion in primary visual cortex of awake primate. Journal of Neuroscience, 39(22), 4282–4298.
Denker, M., Yegenoglu, A., & Grün, S. (2018). Collaborative HPC-enabled workflows on the HBP Collaboratory using the Elephant framework. In Proceedings of Neuroinformatics 2018 (p. P19).
Depannemaecker, D., Carlu, M., Bouté, J., & Destexhe, A. (2022). A model for the propagation of seizure activity in normal brain tissue. eNeuro, 9(6), ENEURO.0234-21.2022.
Depannemaecker, D., Ezzati, A., Wang, H. E., Jirsa, V., & Bernard, C. (2023). From phenomenological to biophysical models of seizures. Neurobiology of Disease, 182, 106131.
Dhooge, A., Govaerts, W., & Kuznetsov, Y. A. (2003). MATCONT. ACM Transactions on Mathematical Software, 29(2), 141–164.
Diesmann, M., Strmungsforschung, M.-p., & Gewaltig, M.-O. (2002). NEST: An environment for neural systems. Forschung und Wisschenschaftliches Rechnen: Beiträge zum Heinz-Billing-Preis, 58.
di Volo, M., & Férézou, I. (2021). Nonlinear collision between propagating waves in mouse somatosensory cortex. Scientific Reports, 11(1), 19630.
di Volo, M., Romagnoni, A., Capone, C., & Destexhe, A. (2019). Biologically realistic mean-field models of conductance-based networks of spiking neurons with adaptation. Neural Computation, 31(4), 653–680.
El Boustani, S., & Destexhe, A. (2009). A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural Computation, 21(1), 46–100.
Fletcher, R. (1970). A new approach to variable metric algorithms. Computer Journal, 13(3), 317–322.
Freeman, W. J., Burke, B. C., & Holmes, M. D. (2003). Aperiodic phase resetting in scalp EEG of beta-gamma oscillations by state transitions at alpha-theta rates. Human Brain Mapping, 19(4), 248–272.
Goldfarb, D. (1970). A family of variable-metric methods derived by variational means. Mathematics of Computation, 24(109), 23–26.
Goldman, J. S., Kusch, L., Aquilue, D., Yalçınkaya, B. H., Depannemaecker, D., Ancourt, K., . . . Destexhe, A. (2023). A comprehensive neural simulation of slow-wave sleep and highly responsive wakefulness dynamics. Frontiers in Computational Neuroscience, 16.
Goldman, J. S., Kusch, L., Yalcinkaya, B. H., Depannemaecker, D., Nghiem, T.-A. E., Jirsa, V., & Destexhe, A. (2020). Brain-scale emergence of slow-wave synchrony and highly responsive asynchronous states based on biologically realistic population models simulated in the virtual brain. bioRxiv.
Hahne, J., Diaz, S., Patronis, A., Schenck, W., Peyser, A., Graber, S., . . . Plesser, H. E. (2021). NEST 3.0. https://doi.org/10.5281/zenodo.473910
Huang, C.-H., & Lin, C.-C. K. (2018). An efficient population density method for modeling neural networks with synaptic dynamics manifesting finite relaxation time and short-term plasticity. eNeuro, 5(6).
Ison, M. J., Mormann, F., Cerf, M., Koch, C., Fried, I., & Quiroga, R. Q. (2011). Selectivity of pyramidal cells and interneurons in the human medial temporal lobe. Journal of Neurophysiology, 106(4), 1713–1721.
Lachaux, J.-P., Rodriguez, E., Martinerie, J., & Varela, F. J. (1999). Measuring phase synchrony in brain signals. Human Brain Mapping, 8(4), 194–208.
MathWorks. (2022). MATLAB version 9.13.0 (R2022b).
Montbrió, E., Pazó, D., & Roxin, A. (2015). Macroscopic description for networks of spiking neurons. Physical Review X, 5(2).
Nordlie, E., Gewaltig, M.-O., & Plesser, H. E. (2009). Towards reproducible descriptions of neuronal network models. PLOS Computational Biology, 5(8), e1000456.
Nykamp, D. Q., & Tranchina, D. (2000). A population density approach that facilitates large-scale modeling of neural networks: Analysis and an application to orientation tuning. Journal of Computational Neuroscience, 8(1), 19–50.
Richardson, M. J. E. (2007). Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Physical Review E, 76(2), 021919.
Richardson, M. J. E. (2009). Dynamics of populations and networks of neurons with voltage-activated and calcium-activated currents. Physical Review E, 80(2), 021928.
Roxin, A., Brunel, N., Hansel, D., Mongillo, G., & van Vreeswijk, C. (2011). On the distribution of firing rates in networks of cortical neurons. Journal of Neuroscience, 31(45), 16217–16226.
Sanz-Leon, P., Knock, S. A., Spiegler, A., & Jirsa, V. K. (2015). Mathematical framework for large-scale brain network modeling in the virtual brain. NeuroImage, 111, 385–430.
Schmutz, V., Gerstner, W., & Schwalger, T. (2020). Mesoscopic population equations for spiking neural networks with synaptic short-term plasticity. Journal of Mathematical Neuroscience, 10(1).
Schwalger, T., & Chizhov, A. V. (2019). Mind the last spike - firing rate models for mesoscopic populations of spiking neurons. Current Opinion in Neurobiology, 58, 155–166.
Shanno, D. F. (1970). Conditioning of quasi-Newton methods for function minimization. Mathematics of Computation, 24(111), 647–656.
Stimberg, M., Brette, R., & Goodman, D. F. (2019). Brian 2, an intuitive and efficient neural simulator. eLife, 8, e47314.
Tesler, F., Linne, M.-L., & Destexhe, A. (2023). Modeling the relationship between neuronal activity and the BOLD signal: Contributions from astrocyte calcium dynamics. Scientific Reports, 13(1), 6451.
Tesler, F., Tort-Colet, N., Depannemaecker, D., Carlu, M., & Destexhe, A. (2022). Mean-field based framework for forward modeling of LFP and MEG signals. Frontiers in Computational Neuroscience, 16, 968278.
Tort-Colet, N., Resta, F., Montagni, E., Pavone, F., Mascaro, A. L. A., & Destexhe, A. (2023). Assessing brain state and anesthesia level with two-photon calcium signals. Scientific Reports, 13(1).
Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., . . . (2020). SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods, 17, 261–272.
Wilson, H. R., & Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12(1), 1–24.
Zerlaut, Y., Chemla, S., Chavane, F., & Destexhe, A. (2018). Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons. Journal of Computational Neuroscience, 44(1), 45–61.
Zerlaut, Y., & Destexhe, A. (2017). Enhanced responsiveness and low-level awareness in stochastic network states. Neuron, 94(5), 1002–1009.

Author notes

Lionel Kusch and Damien Depannemaecker are equally contributing first authors.

Supplementary data