## Abstract

The Potjans-Diesmann cortical microcircuit model is a widely used model originally implemented in NEST. Here, we reimplemented the model using NetPyNE, a high-level Python interface to the NEURON simulator, and reproduced the findings of the original publication. We also implemented a method for scaling the network size that preserves first- and second-order statistics, building on existing work on network theory. Our new implementation enabled the use of more detailed neuron models with multicompartmental morphologies and multiple biophysically realistic ion channels. This opens the model to new research, including the study of dendritic processing, the influence of individual channel parameters, the relation to local field potentials, and other multiscale interactions. The scaling method we used provides flexibility to increase or decrease the network size as needed when running these CPU-intensive detailed simulations. Finally, NetPyNE facilitates modifying or extending the model using its declarative language; optimizing model parameters; running efficient, large-scale parallelized simulations; and analyzing the model through built-in methods, including local field potential calculation and information flow measures.

## 1 Introduction

The Potjans-Diesmann cortical microcircuit (PDCM) model (Potjans & Diesmann, 2014a) was designed to reproduce an early sensory cortical network occupying a volume of 1 mm$^2$ of cortical surface $\times$ the full cortical depth. The model generates spontaneous activity with layer-specific firing rates similar to those observed experimentally (de Kock & Sakmann, 2009; Sakata & Harris, 2009; Swadlow, 1989). The PDCM model was one of the first to reproduce connectivity with statistical fidelity to experimental observations (Thomson, West, Wang, & Bannister, 2002; West, Mercer, Kirchhecker, Morris, & Thomson, 2005). It has been used to study (1) the emergence of macroscopic cortical patterns, including layer-specific oscillations (van Albada, Helias, & Diesmann, 2015; Bos, Diesmann, & Helias, 2016); (2) the effects on cortical dynamics resulting from interlayer, intercolumn, or interarea communication patterns (Cain, Iyer, Koch, & Mihalas, 2016; Schwalger, Deger, & Gerstner, 2017; Schmidt, Bakker, Hilgetag, Diesmann, & van Albada, 2018; Dean et al., 2018); (3) the influence of the microconnectome on activity propagation through the network layers (Schuecker, Schmidt, van Albada, Diesmann, & Helias, 2017); (4) the interareal effects of spatial attention in visual cortex (Schmidt et al., 2018; Wagatsuma, Potjans, Diesmann, Sakai, & Fukai, 2013); and (5) the effects of inhibitory connections in contextual visual processing (Lee, Koch, & Mihalas, 2017) and cortical microcircuits of different regions (Beul & Hilgetag, 2015).

In this work, we have converted the PDCM model from NEST to NetPyNE (Dura-Bernal, Suter et al., 2019; Lytton et al., 2016) (www.netpyne.org), which provides a high-level interface to the NEURON simulator (Carnevale & Hines, 2006). NetPyNE facilitates biological neuronal network development through use of a declarative format to separate model definition from the underlying low-level NEURON implementation. We suggest that this port makes the PDCM model easier to understand, share, and manipulate than if implemented directly in NEURON. NetPyNE enables efficient parallel simulation of the model with a single function call and gives access to a wide array of built-in analysis functions to explore the model.

The NetPyNE implementation also allowed us to swap in more detailed cell models as alternatives to the original leaky integrate-and-fire neurons (LIFs), retaining the original connection topology. This allows inclusion of additional multiscale details, including conductance-based channels, complex synaptic models (Hines, Morse, Migliore, Carnevale, & Shepherd, 2004), and reaction-diffusion processes (McDougal, Hines, & Lytton, 2013; Ranjan et al., 2011; Newton, McDougal, Hines, & Lytton, 2018). Here, we provide a simple implementation of the PDCM model with more detailed multicompartment neurons. This opens up the possibility of additional multiscale studies, such as investigating the interaction between network topology and dendritic morphology, or channel-specific parameters (Bezaire, Raikov, Burk, Vyas, & Soltesz, 2016; Dura-Bernal, Neymotin, Suter, Shepherd, & Lytton, 2018; Neymotin, Dura-Bernal, Lakatos, Sanger, & Lytton, 2016).

Detailed simulations require greater computational resources than simulations with LIFs. To make these feasible, it is often necessary to reduce the number of neurons in the network. Given the increasing availability of supercomputing resources (Towns et al., 2014; Sivagnanam et al., 2013), researchers may also wish to switch back and forth across different network sizes (Schwalger et al., 2017; Schmidt et al., 2018; Bezaire et al., 2016). However, scaling the network to decrease or increase its size while maintaining its dynamical properties is challenging. For example, when we reduce the number of neurons, we need to increase the number of connections or the synaptic weight to maintain activity balance. However, this can then lead to undesired synchrony (Brunel, 2000). To address this, we include an adapted version of the scaling method used in the original model, which allowed us to resize the number of network neurons, the number of connections, the density of external inputs, and the synaptic weights, while maintaining first- and second-order dynamical measures.

Our implementation generated a set of network models of different sizes that maintained the original layer-specific firing rates, synchrony, and irregularity features (Potjans & Diesmann, 2014a). The port to NetPyNE will allow researchers to readily modify both the level of detail and size of the PDCM network to adapt it to their computational resources and research objectives.

## 2 Methods

### 2.1 Original NEST PDCM Model

The original NEST (Gewaltig & Diesmann, 2007) network consisted of approximately 80,000 leaky integrate-and-fire neurons (LIFs) (Lapicque, 1907) distributed in eight cell populations representing excitatory and inhibitory neurons in cortical layers 2/3, 4, 5, and 6; we use L2/3e, L2/3i, L4e, L4i, and so on to reference layer and neuron type (e, excitatory; i, inhibitory). External input was provided from thalamic and corticocortical afferents simulated as Poisson processes. Model connectivity corresponds to a cortical slab under a surface area of 1 mm$^2$. The number of excitatory and inhibitory neurons in each layer, the number and strength of connections, and the external inputs to each cell population were all based on experimental data from over 30 publications (Thomson et al., 2002; West, Mercer, Kirchhecker, Morris, & Thomson, 2005; Binzegger, Douglas, & Martin, 2004).

### 2.2 NetPyNE Implementation of the PDCM Model

NetPyNE employs a declarative computer language (lists and dictionaries in Python) to specify the network structure and parameters. Loosely, a declarative language allows the user to describe what they want, in contrast to an imperative or a procedural language that generally specifies how to do something. Therefore, a NetPyNE user can directly provide biological specifications at the multiple scales being modeled and is spared low-level implementation details. We extracted model parameters from the original PDCM publication (Potjans & Diesmann, 2014a) and from the source code available at Open Source Brain (OSB) (Potjans & Diesmann, 2014b) (using NEST version 2.12) for the 8 cell populations, 8 spike generators (NetStims) as background inputs, and 68 connectivity rules. Because NetPyNE models require spatial dimensions, even if not explicitly used, we embedded the model in a cylinder of 1470 $\mu$m depth $\times$ 300 $\mu$m diameter, using cortical depth ranges corresponding to layer boundaries based on macaque V1 (Schmidt et al., 2018). Connectivity rules included fixed divergence values for cells of each presynaptic population, with synaptic weight and delay drawn from normal distributions.
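For illustration, the sketch below shows the general shape of a NetPyNE-style declarative specification written as plain Python dictionaries. The L4 population sizes are the actual full-scale counts, but the cell-model label, divergence, weight, and delay values are hypothetical placeholders, not the parameters of the PDCM implementation:

```python
# Sketch of NetPyNE's declarative style: the model is described by nested
# Python dicts rather than procedural NEURON calls. The divergence, weight,
# and delay values below are illustrative placeholders.

pop_params = {
    'L4e': {'cellType': 'E', 'cellModel': 'IntFire_PD', 'numCells': 21915},
    'L4i': {'cellType': 'I', 'cellModel': 'IntFire_PD', 'numCells': 5479},
}

conn_params = {
    'L4e->L4i': {
        'preConds':  {'pop': 'L4e'},         # presynaptic population
        'postConds': {'pop': 'L4i'},         # postsynaptic population
        'divergence': 50,                    # fixed divergence per source cell
        'weight': 'normal(0.0878, 0.0088)',  # drawn per connection
        'delay':  'normal(1.5, 0.75)',       # ms, drawn per connection
    },
}
```

In the real implementation, such dictionaries are attached to a NetPyNE `NetParams` object, which translates them into the low-level NEURON cells and connections.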

To reproduce the PDCM model, a new NEURON LIF neuron model was implemented, since NEURON's built-in LIF models do not allow setting the membrane time constant higher than the synaptic decay time constant. The initial membrane potential of each neuron was drawn from a Gaussian distribution with mean −58 mV and standard deviation 10 mV. During model simulations, we allowed a 100 ms initialization period so that the network settled into a robust steady state. As in the original model, we implemented three different conditions in terms of the external inputs to the network (Potjans & Diesmann, 2014a):

- 1.
Poisson and balanced: inputs followed a Poisson distribution, and the number of external inputs to each population was balanced to generate network behavior similar to that observed in biology.

- 2.
Direct current (DC) input and balanced: inputs were replaced with an equivalent DC injection, and were balanced as in case 1.

- 3.
Poisson and unbalanced: inputs followed a Poisson distribution but each population received the same number of inputs (unbalanced).
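The DC equivalence in condition 2 can be illustrated with a small calculation: the mean current delivered by Poisson spike trains through a current-based exponential synapse is rate $\times$ weight $\times$ synaptic time constant. The numeric values below (8 Hz background rate, 1600 external inputs, 87.8 pA weight, 0.5 ms time constant) are illustrative numbers of the order used in the original model, not a verbatim reimplementation:

```python
def dc_equivalent(rate_hz, n_inputs, weight_amp, tau_syn_s):
    """DC current (A) with the same mean as Poisson synaptic bombardment."""
    return rate_hz * n_inputs * weight_amp * tau_syn_s

# e.g., 1600 inputs at 8 Hz with 87.8 pA weight and 0.5 ms time constant
i_dc = dc_equivalent(8.0, 1600, 87.8e-12, 0.5e-3)  # ~0.56 nA
```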

Source code for the NetPyNE model, including the LIF NMODL (.mod) code, is publicly available on GitHub (github.com/suny-downstate-medical-center/PDCM_NetPyNE) and ModelDB (modeldb.yale.edu/266872; password: PD_in_NetPyNE).

| | Original Network ($k=1$) | Network Scaled by Factor of $k$ |
|---|---|---|
| Total number of neurons | $N$ | $kN$ |
| Number of external inputs per neuron | $I$ | $kI$ |
| Probability of connection between two neurons | $p_{i,j}$ | $p_{i,j}$ |
| Total number of connections between two populations | $C_{i,j}$ | $k^2 C_{i,j}$ |
| Synaptic weight | $w$ | $w/\sqrt{k}$ |
| Internal input per neuron | $pN_j w \langle f_j \rangle$ | $\sqrt{k}\, pN_j w \langle f_j \rangle$ |
| External input per neuron | $Iw \langle f_{ext} \rangle$ | $\sqrt{k}\, Iw \langle f_{ext} \rangle$ |
| DC input equivalence | $X$ | $X + (1-\sqrt{k})\, pN_j w \langle f_j \rangle + (1-\sqrt{k})\, Iw \langle f_{ext} \rangle$ |
| Total input per neuron | $pN_j w \langle f_j \rangle + Iw \langle f_{ext} \rangle + X$ | $pN_j w \langle f_j \rangle + Iw \langle f_{ext} \rangle + X$ |

Note: The variables and parameters are defined in the text.

### 2.3 Network Scaling

The scaling process was able to maintain the first-order (mean, $\mu$) and second-order (variance, $\sigma^2$) statistics of network activity, based on a single scaling factor $k$ ($k<1$ to downsize), using the following steps (see also Table 1):

- 1.
Multiply number of cells in each population by $k$.

- 2.
Multiply the number of connections between each pair of populations by $k^2$.

- 3.
Multiply synaptic weights by $1/\sqrt{k}$.

- 4.
Calculate lost input for each cell and provide compensatory DC input current.

Details are provided in Table 1, where $N$, $I$, $p_{i,j}$, $C_{i,j}$, $w$, and $X$ are defined in the first column of the table; $N_j$ is the size of the presynaptic population $j$; $\langle f_j \rangle$ is the mean firing rate of population $j$; and $\langle f_{ext} \rangle$ is the mean firing rate of the external inputs.
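The bookkeeping in Table 1 can be checked numerically: after scaling neuron counts and in-degrees by $k$ and weights by $1/\sqrt{k}$, the compensatory DC term restores the original mean total input per neuron. A minimal sketch with illustrative parameter values (not the PDCM parameters):

```python
import math

def total_input(p, N_j, w, f_j, I, f_ext, X):
    """Mean total input per neuron: internal + external + DC (Table 1)."""
    return p * N_j * w * f_j + I * w * f_ext + X

k = 0.1                                    # downscaling factor
p, N_j, w, f_j = 0.1, 20000, 0.0878, 4.0   # internal drive (illustrative)
I, f_ext, X = 1600, 8.0, 0.0               # external drive, no DC originally

w_k = w / math.sqrt(k)                     # step 3: rescaled synaptic weight
# step 4: compensatory DC input ("DC input equivalence" row of Table 1)
X_k = X + (1 - math.sqrt(k)) * (p * N_j * w * f_j + I * w * f_ext)

original = total_input(p, N_j, w, f_j, I, f_ext, X)
scaled = total_input(p, k * N_j, w_k, f_j, k * I, f_ext, X_k)
assert math.isclose(original, scaled)      # mean input is preserved
```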

The first three steps maintain the second-order statistics (internal and external input variances, $\sigma_{int}^2$ and $\sigma_{ext}^2$), whereas the fourth step restores the original first-order statistics (mean input, $\mu = \mu_{int} + \mu_{ext}$). The method retains layer-specific average firing rates, synchrony, and irregularity features in networks of different sizes.

### 2.4 First- and Second-Order Statistics

In equations 2.1 and 2.2, $N_i$ and $N_j$ are the sizes of the pre- and postsynaptic populations, respectively; $p_{i,j}$ is the probability that two neurons, one from the presynaptic and the other from the postsynaptic population, are connected by at least one synapse; and $C_{i,j}$ is the total number of synapses linking the pre- to the postsynaptic population ($p_{i,j}$ and $C_{i,j}$ are denoted $C_a$ and $K$ by Potjans & Diesmann, 2014a). Note that in the original model, autapses and multapses are allowed, whereas in NetPyNE both are disabled by default, although they can be enabled through configuration flags.
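Equations 2.1 and 2.2 are not reproduced in this excerpt; as a sketch, the standard relation between pairwise connection probability and total synapse count when multapses are allowed (the convention of Potjans & Diesmann, 2014a) is $C_{i,j} = \ln(1-p_{i,j}) / \ln(1 - 1/(N_i N_j))$, which for small $p_{i,j}$ reduces to $C_{i,j} \approx p_{i,j} N_i N_j$:

```python
import math

def total_synapses(p_ij, n_pre, n_post):
    """Total synapse count between two populations, multapses allowed."""
    return math.log(1.0 - p_ij) / math.log(1.0 - 1.0 / (n_pre * n_post))

C = total_synapses(0.1, 1000, 1000)
approx = 0.1 * 1000 * 1000       # naive estimate ignoring multapses
# C exceeds the naive estimate because multapses add extra synapses
```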

See the appendix for a detailed derivation of why the scaling method preserves the network activity statistics.

### 2.5 Replacing LIF with Biophysically Detailed, Multicompartment Neurons

As a proof-of-concept we modified the PDCM model by replacing the point neuron models with biophysically detailed multicompartment neurons that included multiple ionic channels and synapses. We modified the original model through the following changes:

- 1.
Replaced LIFs in all populations with a six-compartment (soma, two apical dendrites, basal dendrite, and axon) pyramidal neuron model with nine conductance-based ionic channels (passive, Na$^+$, 3 K$^+$, 3 Ca$^{2+}$, HCN) from Dura-Bernal et al. (2018). Neuron model parameters were imported from an existing JSON file. The required NMODL files were added to the repository and compiled.

- 2.
Set the temperature to 34$^{\circ}$C, the temperature required for the cell model above to respond physiologically.

- 3.
Added double-exponential (Exp2Syn) synapse models with rise time constant, decay time constant, and reversal potential ($E_{rev}$) of 0.8 ms, 5.3 ms, and 0 mV for excitatory synapses, and 0.6 ms, 8.5 ms, and $-75$ mV for inhibitory synapses. These synapses were included in the connectivity rules. All weights were now made positive, since inhibition was mediated by the negative $E_{rev}$.

- 4.
Scaled all connection weights by $10^{-6}$, given that weights here represent changes in conductance in NEURON's $\mu$S units.
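In NetPyNE's `synMechParams` format, the two synapse models above can be written as the following declarative entries (a sketch; `tau1`, `tau2`, and `e` are the rise time, decay time, and reversal potential of NEURON's `Exp2Syn`):

```python
# Declarative NetPyNE-style entries for the excitatory and inhibitory
# double-exponential synapses described in the list above.
syn_mech_params = {
    'exc': {'mod': 'Exp2Syn', 'tau1': 0.8, 'tau2': 5.3, 'e': 0.0},
    'inh': {'mod': 'Exp2Syn', 'tau1': 0.6, 'tau2': 8.5, 'e': -75.0},
}

# With the negative reversal potential on 'inh', all connection weights
# can be positive conductances, scaled to NEURON's microsiemens units.
conductance_scale = 1e-6
```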

This version of the model is available as the “multicompartment” branch of the GitHub repository (github.com/suny-downstate-medical-center/PDCM_NetPyNE).

## 3 Results

### 3.1 Reproduction of Potjans-Diesmann (PDCM) Model Results

The NetPyNE implementation was able to reproduce the raster plot patterns as well as the firing rate, irregularity, and synchrony statistics for the balanced Poisson inputs condition (see Figure 1; compare with Figure 6 of the original article: Potjans & Diesmann, 2014a). The results are not identical due to different randomization of model drive and specific cell-to-cell wiring (see section 4). Major characteristics of the original model were reproduced, including (1) overall apparent asynchronous activity (but see details below); (2) excitatory firing rates lowest in L2/3e and L6e ($\sim$1 Hz) $<$ L4e ($\sim$4 Hz) $<$ L5e, which had the highest excitatory rate ($\sim$7 Hz); (3) irregularity $\sim$0.8, lowest for L5i and L6i; and (4) synchrony with L5e $>$ L2/3e, L4e $>$ L5i, L6i.

| Platform | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|
| NEST (1 trial) | 0.86 | – | 4.45 | – | 7.59 | – | 1.09 | – |
| NetPyNE (1 trial) | 0.90 | 2.80 | 4.39 | 5.70 | 6.79 | 8.21 | 1.14 | 7.60 |
| NetPyNE (10 trials) | 0.91 $\pm$ 1.08 | 2.80 $\pm$ 2.36 | 4.39 $\pm$ 3.94 | 5.70 $\pm$ 4.50 | 6.77 $\pm$ 5.04 | 8.21 $\pm$ 5.88 | 1.14 $\pm$ 1.85 | 7.60 $\pm$ 6.03 |

A comparison of the mean population firing rates of the NetPyNE implementation with the original implementation (taken from Table 6 of the original article) is given in Table 2. To perform this comparison, we ran the NetPyNE simulations with different seeds for the random number generator. More specifically, we simulated the 100% scale network for 60 s with 10 different seeds and calculated the mean and standard deviation across the neurons in all 10 simulations for each of the populations. We compared these results with the NEST results, which consisted of a single 60 s simulation with a fixed random number generator seed. The mean rates of the excitatory populations in the NEST implementation fall within $0.16\sigma$ (0.16 of the standard deviation range) of their respective counterparts in the NetPyNE implementation. The relative deviations $(|f_{NEST}-f_{NetPyNE}|/f_{NEST})$ are less than 11%. Considering the NEST subsampling of 1000 neurons per population, the standard normal variables $z$ ($z$-scores) and $p$-values of the statistical comparison between the original model and the NetPyNE model firing rates were L2/3e: 1.33, 0.184; L2/3i: 2.15, 0.016; L4e: 0.50, 0.615; L4i: 0.90, 0.179; L5e: 5.12, $3\cdot10^{-7}$; L5i: 2.13, 0.017; L6e: 0.92, 0.367; and L6i: 0.75, 0.226.
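The relative deviations quoted above can be reproduced directly from the excitatory-population entries of Table 2 (NEST single trial versus the NetPyNE 10-trial means):

```python
# Mean firing rates (Hz) from Table 2, excitatory populations only
f_nest = {'L2/3e': 0.86, 'L4e': 4.45, 'L5e': 7.59, 'L6e': 1.09}
f_netpyne = {'L2/3e': 0.91, 'L4e': 4.39, 'L5e': 6.77, 'L6e': 1.14}

rel_dev = {pop: abs(f_nest[pop] - f_netpyne[pop]) / f_nest[pop]
           for pop in f_nest}
max_dev = max(rel_dev.values())  # largest deviation (L5e) is below 11%
```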

The synchrony metric depended on sampling, differing substantially when using all 77,169 neurons instead of subsampling (see Figure 2). Synchrony in L2/3e and L4e was now visible in the raster plot, and the synchrony values increased considerably for these populations compared to Figure 1. Irregularity and mean firing rates were adequately captured with subsampling: the maximum variation $(|f_{sub}-f_{all}|/f_{all})$ was less than 0.4%.

We next tested the balanced DC input and unbalanced Poisson input protocols (compare Figure 3 to panels A1, A2, B1, and B2 from Figure 7 in the original publication (Potjans & Diesmann, 2014a)). As in the original model, replacing the balanced Poisson inputs with DC current did not affect the irregular firing displayed in the raster plot or the population average firing rate properties. However, replacing them with unbalanced Poisson inputs resulted in loss of activity in L6e and modified the average firing rates across populations.

### 3.2 Network Scaling

Now that we have compared the full-scale versions of the NetPyNE and NEST implementations with different sampling sizes, we proceed to compare the scaled-down NetPyNE implementations with the full-scale NEST implementation.

Figures 4 to 6 show raster plots and statistics for scaled-down NetPyNE versions of the PDCM model, with either Poisson or DC external inputs. As in the original article (Potjans & Diesmann, 2014a), raster plots show 1862 cells, and the statistical measures were calculated using a fixed number of 8000 neurons. The raster plots exhibit firing patterns, mean firing rates, and irregularity per neuronal population similar to those of the full-scale raster plot (see Figure 1), both when using Poisson inputs (panels A–C of Figures 4, 5, and 6) and DC inputs (panels E–G of Figures 4 and 5). Although the raster plots appear to show all populations as asynchronous, the synchrony measure reveals high values for some populations, both with Poisson inputs (panels A and D of Figures 4, 5, and 6) and DC inputs (panels E and H of Figures 4 and 5). The raster plot and synchrony for the 10% scaling with DC external inputs differed from the full-scale results (see Figure 1): the raster plot exhibited visually perceptible synchrony (see Figure 6E), and the calculated synchrony value was considerably higher (see Figure 6H).

Note that the total number of neurons of a scaled version may not exactly match the corresponding fraction of the full-scale neuron count (e.g., the 10% scale version has 7713 cells, whereas the full-scale version has 77,169), because scaling is performed for each population separately, resulting in the accumulation of small rounding errors.
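The rounding effect can be reproduced by scaling each population separately. Assuming counts are truncated toward zero (an assumption, but one that matches the totals reported here), the 10% network contains 7713 cells rather than the 7716 obtained by truncating the full-scale total directly:

```python
# Full-scale sizes of the eight PDCM populations (sum: 77,169 neurons)
full_sizes = {'L2/3e': 20683, 'L2/3i': 5834, 'L4e': 21915, 'L4i': 5479,
              'L5e': 4850, 'L5i': 1065, 'L6e': 14395, 'L6i': 2948}

def scale_populations(sizes, k):
    """Scale each population separately, truncating to an integer count."""
    return {pop: int(k * n) for pop, n in sizes.items()}

scaled = scale_populations(full_sizes, 0.1)
total = sum(scaled.values())  # 7713, vs int(0.1 * 77169) = 7716
```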

To test whether neuron subsampling affects the results in the rescaled networks, in Figure 7 we evaluated the 30% rescaled network with no subsampling, that is, including all 23,147 neurons in the raster plot and statistics calculation (compare to Figure 5). Similar to what was seen in the full-scale network simulation (see Figure 2), spike synchrony was visually perceptible in the raster plots, and the population synchrony values were significantly altered.

Figure 8 summarizes the NetPyNE PDCM scaling results: model mean firing rate, irregularity, synchrony, and average cross-correlation for each cell population as a function of the degree of scaling and external input type. The complete data set is available in supplementary tables S1 (mean firing rate, Poisson input), S2 (mean firing rate, DC input), S3 (irregularity, Poisson input), S4 (irregularity, DC input), S5 (synchrony, Poisson input), S6 (synchrony, DC input), S7 (average cross-correlation, Poisson input), and S8 (average cross-correlation, DC input). These tables allow a comparison of the different rescaled NetPyNE PDCM models and the original NEST implementation.

For both Poisson and DC external inputs, the mean population firing rates of all rescaled versions are close to the original results (Potjans & Diesmann, 2014a). For Poisson inputs, even extreme downscaling to 1% resulted in average firing rates consistent with the original NEST model (Table A.1). For DC inputs, downscaling the network below 10% resulted in no firing activity due to insufficient spiking input (i.e., an average neuron potential lower than the spiking threshold with a small variance results in almost nonexistent spike activity). Nevertheless, the average firing rates of the DC input models with downscaling above 10% are also consistent with the original article results (Table A.2). Overall, the highest relative deviation of the mean population firing rate between the rescaled NetPyNE models and the original PD article, $|f_{x\%,NetPyNE}-f_{NEST}|/f_{NEST}$, is less than 28% (based on data from Tables 2, A.1, and A.2).

The mean population firing rates of the downscaled models presented overall small relative deviations with respect to the full-scale network results (see Figure 8 and Tables A.1 and A.2). Relative deviation was calculated as $|f_{x\%}-f_{100\%}|/f_{100\%}$. The populations with the largest relative deviations were L2/3e and L2/3i, but even these generally exhibited maximum relative deviations under 30%. Networks with Poisson inputs tended to exhibit the largest relative deviations when downscaling below 10%, whereas networks with DC inputs started to display large relative deviations earlier, at 40% downscaling and below.

The relative deviations of the irregularity measure across downscaled models compared to the full-scale model, $|I_{x\%}-I_{100\%}|/I_{100\%}$, were much smaller than those of the mean firing rate, with values around or below 1% (see Figure 8 and Tables A.3 and A.4). The populations with the largest relative deviations for this measure were L5e and L5i.
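Irregularity here is the coefficient of variation (CV) of interspike intervals, as in the original article: 0 for a perfectly regular spike train and close to 1 for Poisson firing. A minimal implementation:

```python
def irregularity(spike_times_ms):
    """Coefficient of variation of the interspike intervals."""
    isi = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    mean = sum(isi) / len(isi)
    var = sum((x - mean) ** 2 for x in isi) / len(isi)
    return var ** 0.5 / mean

cv_regular = irregularity([0, 10, 20, 30, 40, 50])  # perfectly regular: 0.0
```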

Next, we evaluated the effect of downscaling on network synchrony. We first compared four subsampling approaches to illustrate the effect of sample size in the synchrony calculation:

- 1.
Sample a fixed percentage ($8000/77{,}169 \approx 10.37\%$) of the neurons in each population, totaling 8000 neurons

- 2.
Sample 1000 neurons per population, totaling 8000 neurons (as in the original article)

- 3.
Sample 2000 neurons per population, totaling 16,000 neurons

- 4.
Do not subsample: include all 77,169 neurons
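The first strategy can be sketched as follows; sampling the same fraction of every population reproduces the per-population counts in the "fixed percentage" row of Table 3:

```python
# Full-scale sizes of the eight PDCM populations (sum: 77,169 neurons)
population_sizes = {'L2/3e': 20683, 'L2/3i': 5834, 'L4e': 21915,
                    'L4i': 5479, 'L5e': 4850, 'L5i': 1065,
                    'L6e': 14395, 'L6i': 2948}

fraction = 8000 / sum(population_sizes.values())       # ~10.37%
sample_sizes = {pop: round(n * fraction)
                for pop, n in population_sizes.items()}
# e.g., L2/3e -> 2144 and L5i -> 110, totaling 8000 sampled neurons
```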

In contrast to irregularity, synchrony depends on the sampling strategy (see Table 3). However, it does not depend on the time window (5 s in Figures 1 and 2; 60 s in Table 3 and Figure 8). The observed discrepancies in synchrony may be a consequence of sampling a different number of neurons or a different percentage of the population size (see section 4). Interestingly, with the exception of L5i and L6i, synchrony appears to increase linearly with the number of sampled neurons from each population (see Table 3). For comparison, we show in Figure 8 the synchrony of each population for the full-scale and downscaled NetPyNE implementations using the sampling strategy of the original article (1000 neurons per population; more details are given in Tables A.5 and A.6).

| Sampling Method | Measure | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|---|
| Fixed percentage | Number of neurons | 2144 | 605 | 2272 | 568 | 503 | 110 | 1492 | 306 |
| | Synchrony | 5.1 | 1.5 | 5.7 | 1.4 | 2.5 | 1.2 | 1.4 | 1.0 |
| 1000 per population | Number of neurons | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 |
| | Synchrony | 2.9 | 1.8 | 3.0 | 1.7 | 4.3 | 1.1 | 1.2 | 1.0 |
| 2000 per population | Number of neurons | 2000 | 2000 | 2000 | 2000 | 2000 | 2000 | 2000 | 2000 |
| | Synchrony | 4.9 | 2.7 | 5.1 | 2.3 | 7.9 | 1.1 | 1.6 | 1.0 |
| No subsampling | Number of neurons | 20,683 | 5834 | 21,915 | 5479 | 4850 | 1065 | 14,395 | 2948 |
| | Synchrony | 38.3 | 4.4 | 43.0 | 3.9 | 12.1 | 1.1 | 0.98 | 0.8 |

Note: Synchrony was quantified as the variance of the binned spike count (bin width 3 ms) normalized by its mean.
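Taking "normalized variance" to mean the variance of the spike count histogram divided by its mean (the convention of the original article), the measure can be computed with no external dependencies:

```python
def synchrony(spike_times_ms, duration_ms, bin_ms=3.0):
    """Variance of the binned spike count normalized by its mean."""
    n_bins = int(duration_ms / bin_ms)
    counts = [0] * n_bins
    for t in spike_times_ms:
        counts[min(int(t // bin_ms), n_bins - 1)] += 1
    mean = sum(counts) / n_bins
    var = sum((c - mean) ** 2 for c in counts) / n_bins
    return var / mean

# All spikes fall inside the first 3 ms bin: strongly synchronous
sync = synchrony([1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4], 30.0)  # 7.2
```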

Finally, we evaluated the mean cross-correlation (defined in equation 2.10) per layer as a second-order statistical measure alternative to synchrony (see Figure 8). The relative deviations of the mean cross-correlation for downscaled versions above 10% compared to full scale were much smaller than those of synchrony, with values around or below 12% for Poisson and 18% for DC external inputs (see Figure 8 and Tables A.7 and A.8). However, for downscaling under 10%, the relative deviations increased and reached a maximum of 82% for L5i. The populations with the largest cross-correlation relative deviations were L2/3e for scalings above 10% and L5 for scalings below 10%.

### 3.3 Additional Model Analysis Facilitated by NetPyNE

Converting the PDCM model to the NetPyNE standardized specifications allows the user to readily make use of the tool's many built-in analysis functions. These range from 2D visualization of cell locations to different representations of network connectivity to spiking activity and information flow measures. Importantly, these are available to the user through simple high-level function calls, which can be customized to include a specific time range, frequency range, set of populations, and visualization options.

### 3.4 Network with Biophysically Detailed Multicompartment Neurons

We simulated the new biophysically detailed PDCM model (at 1% scale for 600 ms) and obtained physiologically plausible activity in all populations (see Figure 10B). It is beyond the scope of this letter to tune the new version to reproduce the original model statistics. However, the preliminary firing rate statistics, without additional parameter tuning, are consistent with the original (see Figure 10D). This implementation also allowed simulating previously unavailable biological measures, including membrane voltage at different neuron compartments (see Figures 10A and 10C), ionic and synaptic currents, local field potential, and current source density at different depths (see Figures 10E and 10F). LFP spectral analysis (see Figures 10E and 10F) revealed time-varying oscillations in the beta and gamma range consistent with cortical dynamics (Buzsáki & Draguhn, 2004).

## 4 Discussion

We reimplemented the PDCM model using the NetPyNE tool, a high-level interface to the NEURON simulator. The new model version reproduced the overall network dynamics of the original PDCM model, evaluated through population-specific average firing rates, irregularity, and synchrony measures. The NetPyNE version allowed simplified scaling of network size, while preserving the network statistics for most conditions. This feature can be used to study the effect of scaling on network dynamics. For example, under certain conditions, network synchrony increased for smaller networks, as explained in section 4.5.

### 4.1 Advantages of NetPyNE Implementation

Source code for the NetPyNE implementation is available on GitHub (github.com/suny-downstate-medical-center/PDCM_NetPyNE) and ModelDB (modeldb.yale.edu/266872; password: PD_in_NetPyNE). NetPyNE provides a clear separation of model specifications from the underlying low-level implementation and facilitates extension of the model. This was demonstrated by modifying the PDCM model to include more biophysically realistic multicompartment neuron models (see Figure 10). This version, which produced similar dynamics, enabled recording membrane voltages and ionic currents, and simulating local field potentials (LFPs) and current source density (CSD), permitting more direct comparison to experimental data.

The hybrid scheme of the LFPy tool (Hagen et al., 2016) also offers a method to calculate LFP signals from point-neuron networks. To do this, it generates a network of simplified multicompartment neurons with a passive membrane model that preserves the original network features. This provides an efficient method to predict extracellular potentials in large point-neuron networks but requires generating and running a separate model. The NetPyNE implementation provides a single network model, with fully customizable neuron biophysics and morphologies, which can be simulated to obtain neural data at multiple scales: from membrane voltages and currents to spiking activity to LFPs. The LFPy hybrid scheme likely provides a faster method to obtain LFP estimates in point-neuron networks.

The NetPyNE implementation of the original model also enabled employing NetPyNE's analysis capabilities to gain further insights into the model. This was illustrated by visualizing the network's topology and connectivity, plotting the average firing rate spectrogram, and calculating the spectral Granger causality (a measure of information flow) between two model populations (see Figure 9).

NetPyNE provides integrated analysis and plotting functions that avoid the need to read and manipulate the output data using a separate tool. An alternative approach is to separate modeling and analysis code to avoid maintaining separate implementations of core analysis functions. For this purpose, NetPyNE simulation outputs can also be saved in common formats and analyzed with other tools, such as Elephant (Electrophysiology Analysis Toolkit).

### 4.2 Reproduction of Original Results

We were able to reproduce all the network statistics (mean firing rate, irregularity, and synchrony) for the three types of external inputs: balanced Poisson, DC current, and unbalanced Poisson—compare Figure 1 with Figure 6 of the original article (Potjans & Diesmann, 2014a), and Figure 3 with Figure 7 of the original article. The mean rates of the excitatory neurons of the NEST implementation fall within 0.16 $\sigma $ of the NetPyNE results. The low $p$-value obtained for layer L5e is probably due to the high variability of the average firing rates, a feature described in Figure 8C of the original article (Potjans & Diesmann, 2014a). Notably, in the unbalanced Poisson input condition, we can observe the lack of activity in L6e and the firing rate changes in other populations, both present in the original study.

### 4.3 Preserved Statistics in Rescaled Networks

The scaling method works by keeping the random inputs unchanged on average (van Albada et al., 2015) and by fixing the ratio between the firing threshold and the square root of the number of connections (van Vreeswijk & Sompolinsky, 1998; see the parameters in Table 1). This method approximately preserved the mean firing rate and irregularity for all populations across all scaling percentages, from 90% down to 1% (see Figure 8 and Tables A.1 to A.4). The synchrony measure was similarly preserved for the Poisson external input condition (see Table A.5), but not for the DC input condition, as discussed in section 4.5.

The second-order measures, synchrony and cross-correlation, showed an increased relative deviation from the full-scale network when downscaling below 10%. It is important to point out that any scaling method has limitations due to mathematical constraints. The downscaling ceases to maintain the second-order statistics when the condition $w \ll (\theta - V_{\mathrm{reset}})$ is no longer satisfied (see section 4.4). This may be one of the reasons for the observed increase in relative deviations below 10% downscaling.

Downscaled models are extremely useful for exploratory work and educational purposes because they reduce the required computational resources. For example, they can be used to study aspects requiring long simulation times, such as the learning window of spike-time-dependent plasticity (STDP; Clopath, Büsing, Vasilaki, & Gerstner, 2010), or requiring detailed multicompartment neurons, such as the effect of adrenergic neuromodulation of dendritic excitability on network activity (Dura-Bernal, Neymotin et al., 2019; Labarrera et al., 2018).

### 4.4 Mathematical Explanation and Limitations of the Scaling Method

The scaling method implemented in our model was based on previous theoretical work: van Vreeswijk and Sompolinsky (1998) and van Albada et al. (2015). The technique employed two independent parameters: one maintained the cross-correlation and the number of neurons but changed the number of connections, and hence the connection probability; the other decreased the network size without guaranteeing preservation of second-order statistics, including the cross-correlation.

The scaling method available as source code from the Open Source Brain (OSB) platform (Potjans & Diesmann, 2014b) appears to be derived from the two-independent-parameter approach, although its provenance is not fully documented. This OSB implementation allowed for different ways to rescale the network, some of which did not preserve network statistics.

Following van Vreeswijk and Sompolinsky (1998), we preserved network statistics by scaling both internal connections and external inputs, combining the two parameters into a single $k$ parameter. This preserved the network's second-order statistics ($\sigma_{\mathrm{int}}^2$ and $\sigma_{\mathrm{ext}}^2$). In the original scaling method, only $\sigma_{\mathrm{int}}^2$ is downscaled, so equation 4.1 imposes a limitation on the scaling factor. In our approach, however, both $\sigma_{\mathrm{int}}^2$ and $\sigma_{\mathrm{ext}}^2$ are downscaled proportionally, so the quotient in equation 4.1 remains constant and this limitation does not apply. Note that our scaling factor $k$ differs from the $k_{2015}$ (see equation 4.1) used in van Albada et al. (2015).
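
A one-line check, using standard diffusion-approximation expressions for the LIF input statistics (the notation below is ours, not the original article's), shows why scaling the in-degrees together with the weights preserves the input variance:

$$
\sigma^2 \;\propto\; K\,w^2\,\nu\,\tau_m \;\longrightarrow\; (kK)\left(\frac{w}{\sqrt{k}}\right)^{\!2}\nu\,\tau_m \;=\; K\,w^2\,\nu\,\tau_m ,
$$

where $K$ is the number of incoming connections per neuron, $w$ the synaptic weight, $\nu$ the presynaptic rate, and $\tau_m$ the membrane time constant. Under the same substitution, the mean input $\mu \propto K\,w\,\nu\,\tau_m$ becomes $\sqrt{k}\,\mu$; the deficit $(1-\sqrt{k})\,\mu$ is what the compensating DC input restores (see section 2.3).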

This scaling method presented limitations in terms of both the working point and the preservation of statistics, especially for small values of $k$, when the synaptic weights no longer satisfied the condition $w \ll (\theta - V_{\mathrm{reset}})$ (synaptic weight much smaller than the distance from the reset voltage to the firing threshold).

Overall, our scaling implementation was simplified and adapted in order to guarantee the conservation of the first-order ($\mu$) and second-order ($\sigma$) statistics of network activity for all possible scalings while remaining easy to implement in NetPyNE/NEURON. It depends on a single scaling factor, $k$, in the interval $(0, \infty)$, presented in Table 1, which is used to adapt the number of network neurons, connections, and external inputs, as well as the synaptic weights, while keeping the matrix of connection probabilities and the proportions of cells per population fixed.
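
As a concrete illustration, the parameter adaptation described above can be sketched in a few lines of Python. This is a hypothetical helper, not the actual NetPyNE implementation; the function and argument names are ours, and the DC term is one plausible form of the mean-input compensation:

```python
import math

def scaled_params(k, n_neurons, n_ext, weight, mu_full=0.0):
    """Sketch of the single-factor scaling described above.

    For k in (0, inf):
      - population sizes and external in-degrees scale by k; with the
        connection-probability matrix fixed, internal in-degrees then
        scale by k automatically,
      - synaptic weights scale by 1/sqrt(k), keeping the input variance
        (proportional to K * w**2) constant,
      - the mean input shrinks by sqrt(k); a DC current restoring the
        deficit is one plausible compensation (cf. section 2.3).
    """
    assert k > 0, "scaling factor k must lie in (0, inf)"
    return {
        "n_neurons": round(k * n_neurons),      # population size
        "n_ext": round(k * n_ext),              # external inputs per neuron
        "weight": weight / math.sqrt(k),        # synaptic strength
        "dc": (1.0 - math.sqrt(k)) * mu_full,   # compensating mean input
    }
```

For example, `scaled_params(0.25, 1000, 2000, 87.8)` describes a 25% network with doubled synaptic weights, leaving the variance proxy $K w^2$ unchanged.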

The scaling method does not change any other network parameters, including $\tau_m$, $\tau_s$, $\tau_{\mathrm{ref}}$, $R$, $V_{\mathrm{reset}}$, the relative inhibitory synaptic strength $g$, the excitatory and inhibitory synaptic transmission delays $d_e$ and $d_i$, and the firing threshold $\theta$.

### 4.5 Multiple Factors Affect Synchrony

The synchrony measure depended on the number of neurons used in its calculation; in general, a higher number of neurons resulted in higher synchrony values. Because of the different sampling strategies, comparing synchrony across populations and network models should be done with caution. For example, when we compared synchrony across populations by sampling a *fixed percentage* of neurons per population (see Table 3, top row), the two largest populations, L2/3e and L4e, exhibited the highest synchrony values. On the other hand, when we sampled a *fixed number* of neurons from each population, 1000 as in the original model (see Table 3, middle row) or 2000 (see Table 3, bottom row), the highest synchrony was displayed by population L5e. Sampling a fixed number of neurons per population may also lead to scaling distortions, because a given fixed number corresponds to different percentages of the cell populations at each scaling degree. For example, in the full-scale version, 1000 neurons correspond to almost 100% of L5i but to less than 5% of L2/3e. When downscaling was too great ($<20\%$), there were not enough neurons in a population to perform the full subsampling required for the calculation.
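
The dependence on subsampling can be illustrated with a minimal sketch of the synchrony measure (the variance-to-mean ratio of the pooled spike-count histogram, per section 2; the 3 ms bin width and the helper names are our assumptions):

```python
import random
from statistics import mean, pvariance

def synchrony(pooled_spike_times, t_max, bin_ms=3.0):
    """Variance-to-mean ratio of the pooled spike-count histogram
    (sketch of the synchrony measure of section 2; bin width assumed)."""
    n_bins = int(t_max / bin_ms)
    counts = [0] * n_bins
    for t in pooled_spike_times:
        b = int(t / bin_ms)
        if b < n_bins:
            counts[b] += 1
    m = mean(counts)
    return pvariance(counts) / m if m > 0 else 0.0

def pooled_subsample(spikes_by_neuron, n_sample, seed=0):
    """Pool the spikes of a random subset of neurons (cf. Table 3)."""
    rng = random.Random(seed)
    ids = rng.sample(sorted(spikes_by_neuron), min(n_sample, len(spikes_by_neuron)))
    return [t for i in ids for t in spikes_by_neuron[i]]
```

Even for perfectly synchronous activity, the measure grows with the number of sampled neurons, which is the subsampling effect discussed above.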

Choosing the number of neurons to estimate synchrony as a percentage also influenced the synchrony measure. For example, the raster plot and synchrony for the Poisson-driven full-scale network indicated, both visually and numerically, a high degree of synchrony when all neurons (approximately 80,000) were sampled (see Figure 2), but a low degree of synchrony when only 2.3% of the neurons were sampled (see Figure 1). The same phenomenon was observed in the rescaled implementations. For example, the 30% rescaled network displayed high synchrony when all neurons (approximately 23,000) were sampled (see Figure 7), and low synchrony when a small subset of neurons (approximately 2000) was sampled (see Figure 5). This leads us to conclude that the apparent asynchrony in the PDCM model is possibly a consequence of subsampling. The influence of subsampling on synchrony has been previously studied (Harris & Thiele, 2011; Tetzlaff, Helias, Einevoll, & Diesmann, 2012; Hagen et al., 2016), and synchrony in the PDCM network has been previously described (see Figure 8C in Hagen et al., 2016).

In general, synchrony tended to decrease with the degree of downscaling (see the plots at the bottom of Figure 8). This was due to the increase in injected current that we provided to compensate for the decreased number of connections (see section 2.3). This effect occurred up to a downscaling level that depended on the population and external input type (between 40% and 60%). Downscaling past this point, we reached a situation in which the number of neurons was not sufficient to allow reliable synchrony calculation. For example, when we downscaled the networks to 10% of the original size, we had to replace 99% of the connections with DC inputs, and this resulted in large increases in synchrony (see Figure 8, bottom plots).

Synchrony was also dependent on the population average firing rate. The synchrony measure used (see section 2) changed with the heterogeneity of firing within the cell population (Pinsky & Rinzel, 1995), which for equal population sizes and fixed bin size is higher for cells with higher firing rates. This dependence may be a possible explanation for the high synchrony of L5e neurons (see Figure 8, bottom plots; see also Tables A.5 and A.6).

Synchrony was generally higher under the DC input condition than the Poisson input condition (see Figure 8, bottom plots; see also Tables A.5 and A.6). We hypothesize this was due to the two sources of randomization present in the Poisson-driven network: the Poisson inputs and the random pattern of connections. The third source of randomness, the initial values of the membrane potentials of the neurons, should not be a cause of the high synchrony found, given that the initial 100 ms transient period was omitted from the estimation of the measure. In the DC condition, we removed the Poisson inputs, thus increasing the network synchrony. For very high downscaling, for example, 10%, synchrony becomes visually perceptible in the raster plot for DC inputs but not in the one for Poisson inputs (compare Figures 6H and 6D). This effect is not seen at intermediate downscaling levels (see Figures 4D and 4H (50% downscaling) and 5D and 5H (30% downscaling)) because the fraction of sampled cells is not high enough.

Since multiple factors affect synchrony, we calculated the mean cross-correlation as an alternative measure based on second-order statistics. This metric was preserved for models downscaled to 20% or above and generally presented lower relative deviations than the synchrony measure.
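
A minimal sketch of such a pairwise measure is the mean zero-lag Pearson correlation of binned spike trains (the bin width and helper names here are our assumptions, not the exact implementation used):

```python
import math
from statistics import mean

def _binned(train, t_max, bin_ms):
    """Spike-count histogram of one spike train."""
    n = int(t_max / bin_ms)
    counts = [0] * n
    for t in train:
        b = int(t / bin_ms)
        if b < n:
            counts[b] += 1
    return counts

def _pearson(x, y):
    """Zero-lag Pearson correlation of two equal-length count vectors."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

def mean_cross_correlation(trains, t_max, bin_ms=2.0):
    """Mean pairwise correlation over all neuron pairs (bin width assumed)."""
    binned = [_binned(tr, t_max, bin_ms) for tr in trains]
    n = len(binned)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return mean(_pearson(binned[i], binned[j]) for i, j in pairs)
```

Averaging over all pairs makes the metric less sensitive to which neurons are sampled than the pooled-histogram synchrony measure.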

The scaling method used here has the theoretical property of not adding synchrony or regularity to asynchronous irregular networks (van Vreeswijk & Sompolinsky, 1998; van Albada et al., 2015). In our study, we found that irregularity, synchrony, and mean cross-correlation (second-order statistics) did not appear to be affected by scaling down to 1% for Poisson inputs and to 20% for DC inputs, and the mean firing rates (first-order statistic) of the original article (Potjans & Diesmann, 2014a) were within 0.16 standard deviations of our results.

## Appendix: Supplementary Material

### A.1 Derivation of Why the Scaling Method Preserves the Network Activity Statistics


| Scaling | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|
| 100% | 0.90 | 2.80 | 4.39 | 5.70 | 6.80 | 8.22 | 1.14 | 7.60 |
| 80% | 0.82 (9%) | 2.81 (0%) | 4.45 (1%) | 5.74 (1%) | 7.10 (4%) | 8.24 (0%) | 1.17 (3%) | 7.64 (1%) |
| 60% | 0.85 (6%) | 2.79 (0%) | 4.42 (1%) | 5.74 (1%) | 6.25 (8%) | 8.22 (0%) | 1.09 (4%) | 7.57 (0%) |
| 50% | 0.75 (17%) | 2.87 (3%) | 4.57 (4%) | 5.84 (2%) | 7.35 (8%) | 8.31 (1%) | 1.17 (3%) | 7.70 (1%) |
| 40% | 0.70 (22%) | 2.95 (5%) | 4.62 (5%) | 5.90 (4%) | 7.90 (16%) | 8.35 (2%) | 1.22 (7%) | 7.79 (3%) |
| 30% | 0.69 (23%) | 3.06 (9%) | 4.71 (7%) | 5.98 (5%) | 8.34 (23%) | 8.45 (3%) | 1.18 (4%) | 7.92 (4%) |
| 20% | 0.78 (13%) | 2.99 (7%) | 4.57 (4%) | 5.97 (5%) | 6.17 (9%) | 8.54 (4%) | 1.06 (7%) | 7.80 (3%) |
| 10% | 0.75 (17%) | 3.28 (17%) | 4.76 (8%) | 6.20 (9%) | 6.55 (4%) | 8.97 (9%) | 1.10 (4%) | 8.06 (6%) |
| 5% | 0.69 (23%) | 3.88 (39%) | 4.74 (8%) | 6.33 (11%) | 9.66 (42%) | 8.83 (7%) | 1.09 (4%) | 8.60 (13%) |
| 2% | 0.69 (23%) | 3.8 (36%) | 4.15 (5%) | 5.90 (4%) | 7.08 (4%) | 8.70 (6%) | 1.02 (11%) | 7.96 (5%) |
| 1% | 0.71 (21%) | 4.08 (46%) | 3.71 (15%) | 5.46 (4%) | 8.33 (23%) | 8.92 (9%) | 0.93 (18%) | 7.43 (2%) |
| NEST trial | 0.86 | – | 4.45 | – | 7.59 | – | 1.09 | – |

Notes: All results calculated from 60 s simulations and all neurons. Relative deviations in relation to the full-scale NetPyNE version, $(|f_{x\%}-f_{100\%}|/f_{100\%})$, are shown within parentheses and their maxima in bold. For comparison, the last row shows the mean firing rates of the NEST implementation (see Potjans & Diesmann, 2014a, for details).

| Scaling | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|
| 100% | 1.02 | 2.89 | 4.32 | 5.60 | 7.02 | 8.20 | 0.90 | 7.46 |
| 80% | 0.85 (17%) | 2.83 (2%) | 4.40 (2%) | 5.64 (1%) | 7.55 (8%) | 8.20 (0%) | 1.02 (13%) | 7.49 (1%) |
| 60% | 0.89 (13%) | 2.77 (4%) | 4.32 (0%) | 5.59 (0%) | 6.38 (9%) | 8.10 (1%) | 0.94 (4%) | 7.37 (1%) |
| 50% | 0.75 (26%) | 2.82 (2%) | 4.46 (3%) | 5.66 (1%) | 7.46 (6%) | 8.16 (0%) | 1.03 (14%) | 7.47 (0%) |
| 40% | 0.66 (35%) | 2.86 (1%) | 4.49 (4%) | 5.70 (2%) | 8.17 (16%) | 8.16 (0%) | 1.12 (24%) | 7.54 (1%) |
| 30% | 0.66 (35%) | 2.93 (1%) | 4.53 (5%) | 5.71 (2%) | 8.46 (21%) | 8.21 (0%) | 1.05 (17%) | 7.50 (2%) |
| 20% | 0.76 (25%) | 2.77 (4%) | 4.29 (1%) | 5.58 (0%) | 6.08 (13%) | 8.11 (1%) | 0.89 (1%) | 7.30 (2%) |
| 10% | 0.87 (15%) | 3.06 (6%) | 4.25 (2%) | 5.46 (2%) | 6.62 (6%) | 8.17 (0%) | 0.80 (11%) | 7.17 (4%) |
| NEST trial | 0.86 | – | 4.45 | – | 7.59 | – | 1.09 | – |

Notes: All results calculated from 60-second simulations and all neurons. Relative deviations in relation to the full-scale NetPyNE version, $(|f_{x\%}-f_{100\%}|/f_{100\%})$, are shown within parentheses and their maxima in bold. For comparison, the last row shows sampled mean firing rates of the NEST implementation (see Potjans & Diesmann, 2014a, for details).

| Scaling | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|
| 100% | 0.938 | 0.916 | 0.891 | 0.873 | 0.847 | 0.809 | 0.924 | 0.819 |
| 80% | 0.934 (0.45%) | 0.916 (0.06%) | 0.890 (0.17%) | 0.873 (0.07%) | 0.842 (0.63%) | 0.808 (0.15%) | 0.927 (0.29%) | 0.818 (0.19%) |
| 60% | 0.934 (0.45%) | 0.917 (0.04%) | 0.891 (0.02%) | 0.874 (0.12%) | 0.858 (1.25%) | 0.813 (0.48%) | 0.930 (0.63%) | 0.822 (0.27%) |
| 50% | 0.934 (0.39%) | 0.917 (0.06%) | 0.889 (0.19%) | 0.873 (0.06%) | 0.838 (1.12%) | 0.808 (0.24%) | 0.930 (0.68%) | 0.816 (0.38%) |
| 40% | 0.937 (0.08%) | 0.914 (0.31%) | 0.887 (0.44%) | 0.872 (0.12%) | 0.828 (2.30%) | 0.807 (0.36%) | 0.932 (0.88%) | 0.816 (0.40%) |
| 30% | 0.931 (0.72%) | 0.913 (0.37%) | 0.887 (0.45%) | 0.873 (0.06%) | 0.822 (2.95%) | 0.807 (0.32%) | 0.931 (0.72%) | 0.815 (0.60%) |
| 20% | 0.936 (0.15%) | 0.921 (0.50%) | 0.887 (0.41%) | 0.871 (0.28%) | 0.858 (1.22%) | 0.806 (0.41%) | 0.929 (0.55%) | 0.814 (0.64%) |
| 10% | 0.937 (0.05%) | 0.919 (0.31%) | 0.883 (0.87%) | 0.869 (0.51%) | 0.853 (0.62%) | 0.800 (1.23%) | 0.938 (1.55%) | 0.810 (1.11%) |
| 5% | 0.926 (1.21%) | 0.899 (1.90%) | 0.884 (0.82%) | 0.867 (0.76%) | 0.797 (5.99%) | 0.802 (0.97%) | 0.934 (1.03%) | 0.803 (1.97%) |
| 2% | 0.935 (0.32%) | 0.895 (2.34%) | 0.891 (0.03%) | 0.863 (1.21%) | 0.837 (1.23%) | 0.811 (0.15%) | 0.932 (0.81%) | 0.817 (0.24%) |
| 1% | 0.929 (0.92%) | 0.895 (2.30%) | 0.901 (1.12%) | 0.884 (1.16%) | 0.823 (2.90%) | 0.785 (2.98%) | 0.932 (0.82%) | 0.814 (0.66%) |
| NetPyNE-10 trial | 0.936 ± 0.146 | 0.916 ± 0.106 | 0.890 ± 0.107 | 0.875 ± 0.101 | 0.847 ± 0.107 | 0.809 ± 0.118 | 0.926 ± 0.142 | 0.822 ± 0.125 |

Notes: All results calculated from 60-second simulations and approximately 1000 neurons per population. Relative deviations in relation to the full-scale NetPyNE version, $(|f_{x\%}-f_{100\%}|/f_{100\%})$, are shown within parentheses and their maxima in bold.

| Scaling | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|
| 100% | 0.936 | 0.909 | 0.875 | 0.862 | 0.820 | 0.775 | 0.923 | 0.788 |
| 80% | 0.934 (0.21%) | 0.912 (0.33%) | 0.874 (0.11%) | 0.858 (0.46%) | 0.807 (1.59%) | 0.772 (0.39%) | 0.921 (0.22%) | 0.786 (0.25%) |
| 60% | 0.935 (0.11%) | 0.911 (0.22%) | 0.875 (0.00%) | 0.858 (0.46%) | 0.831 (1.34%) | 0.776 (0.13%) | 0.922 (0.11%) | 0.787 (0.13%) |
| 50% | 0.933 (0.32%) | 0.905 (0.44%) | 0.872 (0.34%) | 0.855 (0.81%) | 0.807 (1.59%) | 0.773 (0.26%) | 0.920 (0.33%) | 0.785 (0.38%) |
| 40% | 0.931 (0.53%) | 0.910 (0.11%) | 0.871 (0.46%) | 0.856 (0.70%) | 0.792 (3.41%) | 0.775 (0.00%) | 0.921 (0.22%) | 0.783 (0.63%) |
| 30% | 0.929 (0.75%) | 0.906 (0.33%) | 0.867 (0.91%) | 0.857 (0.58%) | 0.784 (4.39%) | 0.777 (0.26%) | 0.921 (0.22%) | 0.777 (1.40%) |
| 20% | 0.929 (0.75%) | 0.908 (0.11%) | 0.870 (0.57%) | 0.857 (0.58%) | 0.829 (1.10%) | 0.768 (0.90%) | 0.921 (0.22%) | 0.782 (0.76%) |
| 10% | 0.937 (0.11%) | 0.896 (1.43%) | 0.866 (1.03%) | 0.855 (0.81%) | 0.817 (0.37%) | 0.766 (1.16%) | 0.920 (0.33%) | 0.771 (2.16%) |

Notes: All results calculated from 60-second simulations and approximately 1000 neurons per population. Relative deviations in relation to the full-scale NetPyNE version, $(|f_{x\%}-f_{100\%}|/f_{100\%})$, are shown within parentheses and their maxima in bold.

| Scaling | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|
| 100% | 2.9 | 1.8 | 3.0 | 1.7 | 4.4 | 1.1 | 1.2 | 1.0 |
| 80% | 2.3 (21%) | 1.6 (11%) | 2.4 (20%) | 1.4 (18%) | 4.0 (9%) | 1.0 (9%) | 1.2 (0%) | 0.9 (10%) |
| 60% | 2.1 (28%) | 1.5 (17%) | 2.2 (27%) | 1.4 (18%) | 3.6 (18%) | 0.9 (18%) | 1.2 (0%) | 0.9 (10%) |
| 50% | 1.8 (38%) | 1.5 (17%) | 2.0 (33%) | 1.3 (24%) | 4.3 (2%) | 0.9 (18%) | 1.2 (0%) | 0.9 (10%) |
| 40% | 1.8 (38%) | 1.6 (11%) | 2.1 (30%) | 1.4 (18%) | 5.0 (14%) | 1.0 (9%) | 1.3 (8%) | 0.8 (20%) |
| 30% | 1.9 (34%) | 1.7 (6%) | 2.1 (30%) | 1.4 (18%) | 5.9 (34%) | 1.1 (0%) | 1.3 (8%) | 0.9 (10%) |
| 20% | 2.0 (31%) | 1.9 (6%) | 2.4 (20%) | 1.7 (0%) | 5.4 (23%) | 1.0 (9%) | 1.4 (17%) | 0.9 (10%) |
| 10% | 2.6 (10%) | 2.1 (17%) | 3.1 (3%) | 1.9 (12%) | 4.5 (2%) | 1.1 (0%) | 1.7 (42%) | 1.0 (0%) |
| 5% | 3.4 (17%) | 2.4 (33%) | 4.1 (37%) | 2.1 (24%) | 5.3 (20%) | 1.2 (9%) | 1.8 (50%) | 1.1 (10%) |
| 2% | 2.5 (14%) | 1.9 (6%) | 2.7 (10%) | 1.8 (6%) | 2.4 (45%) | 1.1 (0%) | 1.4 (17%) | 1.1 (10%) |
| 1% | 1.9 (34%) | 1.6 (11%) | 1.7 (43%) | 1.4 (18%) | 2.0 (55%) | 1.1 (0%) | 1.2 (0%) | 1.0 (0%) |

Notes: All results calculated from 60-second simulations and approximately 1000 neurons per population. Relative deviations in relation to the full-scale NetPyNE version, $(|f_{x\%}-f_{100\%}|/f_{100\%})$, are shown within parentheses and their maxima in bold.

| Scaling | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|
| 100% | 5.1 | 3.3 | 5.5 | 2.7 | 8.0 | 2.0 | 1.5 | 1.3 |
| 80% | 3.6 (29%) | 2.5 (24%) | 4.1 (25%) | 2.1 (22%) | 6.6 (18%) | 1.5 (25%) | 1.4 (7%) | 1.2 (8%) |
| 60% | 3.4 (33%) | 2.5 (24%) | 3.6 (35%) | 2.1 (22%) | 5.6 (30%) | 1.4 (30%) | 1.4 (7%) | 1.1 (15%) |
| 50% | 2.6 (49%) | 2.1 (36%) | 3.0 (45%) | 1.8 (33%) | 5.8 (28%) | 1.2 (40%) | 1.4 (7%) | 1.0 (23%) |
| 40% | 2.4 (53%) | 2.2 (33%) | 2.8 (49%) | 1.8 (33%) | 6.5 (19%) | 1.2 (40%) | 1.4 (7%) | 1.0 (23%) |
| 30% | 2.5 (51%) | 2.5 (24%) | 2.8 (49%) | 2.0 (26%) | 7.8 (3%) | 1.3 (35%) | 1.5 (0%) | 1.1 (15%) |
| 20% | 3.3 (35%) | 3.6 (9%) | 3.9 (29%) | 2.9 (7%) | 7.3 (9%) | 1.3 (35%) | 1.6 (7%) | 1.2 (8%) |
| 10% | 10.7 (110%) | 10.3 (212%) | 15.6 (184%) | 8.5 (215%) | 11.4 (43%) | 2.4 (20%) | 3.3 (120%) | 2.3 (77%) |

| Scaling | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|
| 100% | 0.036 | 0.053 | 0.056 | 0.057 | 0.057 | 0.058 | 0.039 | 0.057 |
| 80% | 0.034 (5%) | 0.054 (1%) | 0.056 (1%) | 0.057 (0%) | 0.057 (0%) | 0.058 (0%) | 0.039 (1%) | 0.057 (0%) |
| 60% | 0.035 (3%) | 0.055 (2%) | 0.056 (2%) | 0.057 (1%) | 0.057 (0%) | 0.059 (1%) | 0.039 (1%) | 0.058 (1%) |
| 50% | 0.033 (8%) | 0.055 (3%) | 0.057 (2%) | 0.057 (1%) | 0.058 (1%) | 0.059 (2%) | 0.040 (4%) | 0.058 (1%) |
| 40% | 0.032 (12%) | 0.055 (4%) | 0.057 (2%) | 0.058 (2%) | 0.058 (1%) | 0.060 (3%) | 0.040 (5%) | 0.058 (1%) |
| 30% | 0.032 (12%) | 0.056 (5%) | 0.057 (3%) | 0.058 (2%) | 0.058 (1%) | 0.060 (4%) | 0.040 (5%) | 0.059 (2%) |
| 20% | 0.034 (7%) | 0.056 (6%) | 0.057 (3%) | 0.058 (3%) | 0.058 (2%) | 0.062 (6%) | 0.039 (1%) | 0.059 (3%) |
| 10% | 0.034 (7%) | 0.057 (7%) | 0.057 (3%) | 0.059 (5%) | 0.059 (3%) | 0.067 (15%) | 0.040 (4%) | 0.061 (6%) |
| 5% | 0.032 (11%) | 0.058 (10%) | 0.058 (4%) | 0.061 (7%) | 0.061 (7%) | 0.071 (22%) | 0.040 (4%) | 0.063 (10%) |
| 2% | 0.036 (0%) | 0.060 (12%) | 0.059 (6%) | 0.062 (10%) | 0.066 (15%) | 0.079 (36%) | 0.043 (11%) | 0.066 (15%) |
| 1% | 0.041 (12%) | 0.063 (18%) | 0.060 (0%) | 0.063 (11%) | 0.076 (34%) | 0.106 (82%) | 0.042 (8%) | 0.063 (10%) |

Notes: All results calculated from 5-second simulations and all pairs of neurons per population. Relative deviations in relation to the full-scale NetPyNE version, $(|f_{x\%}-f_{100\%}|/f_{100\%})$, are shown within parentheses and their maxima in bold.

| Scaling | L2/3e | L2/3i | L4e | L4i | L5e | L5i | L6e | L6i |
|---|---|---|---|---|---|---|---|---|
| 100% | 0.039 | 0.055 | 0.057 | 0.057 | 0.058 | 0.059 | 0.036 | 0.058 |
| 80% | 0.035 (9%) | 0.055 (0%) | 0.057 (0%) | 0.057 (0%) | 0.058 (0%) | 0.059 (1%) | 0.038 (4%) | 0.058 (0%) |
| 60% | 0.036 (6%) | 0.054 (1%) | 0.057 (0%) | 0.058 (1%) | 0.058 (0%) | 0.060 (1%) | 0.037 (2%) | 0.059 (1%) |
| 50% | 0.033 (14%) | 0.055 (1%) | 0.057 (0%) | 0.058 (1%) | 0.058 (0%) | 0.061 (3%) | 0.038 (5%) | 0.059 (1%) |
| 40% | 0.032 (17%) | 0.055 (0%) | 0.057 (0%) | 0.058 (1%) | 0.058 (1%) | 0.061 (3%) | 0.039 (8%) | 0.059 (2%) |
| 30% | 0.031 (18%) | 0.055 (0%) | 0.057 (0%) | 0.058 (1%) | 0.059 (2%) | 0.062 (5%) | 0.038 (5%) | 0.060 (3%) |
| 20% | 0.033 (13%) | 0.055 (1%) | 0.057 (0%) | 0.059 (2%) | 0.058 (1%) | 0.064 (8%) | 0.036 (1%) | 0.061 (5%) |
| 10% | 0.039 (2%) | 0.057 (3%) | 0.058 (2%) | 0.060 (5%) | 0.060 (4%) | 0.070 (18%) | 0.035 (2%) | 0.063 (9%) |

Notes: All results calculated from 5-second simulations and all pairs of neurons per population. Relative deviations in relation to the full-scale NetPyNE version, $(|f_{x\%}-f_{100\%}|/f_{100\%})$, are shown within parentheses and their maxima in bold.

## Acknowledgments

This work was produced as part of the activities of NIH U24EB028998, R01EB022903, U01EB017695, NYS SCIRB DOH01-C32250GG-3450000, NSF 1904444; and FAPESP (São Paulo Research Foundation) Research, Disseminations and Innovation Center for Neuromathematics 2013/07699-0, 2015/50122-0, 2018/20277-0. C.R. (grant 88882.378774/2019-01) and F.A.N. (grant 88882.377124/2019-01) are recipients of PhD scholarships from the Brazilian Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. A.C.R. is partially supported by a CNPq fellowship (grant 306251/2014-0).

## References

*Frontiers in Neuroanatomy*

*eLife*

*Journal of Neuroscience*

*PLOS Computational Biology*

*Journal of Computational Neuroscience*

*Mechanical Systems and Signal Processing*

*Science*

*PLOS Computational Biology*

*The NEURON book*

*Biological Cybernetics*

*Nature Neuroscience*

*Proceedings of the National Academy of Sciences USA*

*Brain Structure and Function*

*Neural Computation*

*Multiscale dynamics and information flow in a data-driven model of the primary motor cortex microcircuit*

*Proceedings of the Society for Neuroscience (SFN'19)*

*eLife*

*Scholarpedia*

*Cerebral Cortex*

*Nature Reviews Neuroscience*

*Journal of Computational Neuroscience*

*Cell Reports*

*Journal de Physiologie et de Pathologie Générale*

*Frontiers in Computational Neuroscience*

*Neural Computation*

*Frontiers in Neuroinformatics*

*Frontiers in Neuroinformatics*

*Frontiers in Pharmacology*

*Biological Cybernetics*

*Cerebral Cortex*

*Spiking cortical network model Potjans and Diesmann*

*Frontiers in Neuroinformatics*

*Boundary solution based on rescaling method: Recoup the first and second-order statistics of neuron network dynamics.*

*Neuron*

*Brain Structure and Function*

*PLOS Computational Biology*

*PLOS Computational Biology*

*ReScience*

*Proceedings of the International Workshop on Science Gateway.*

*Journal of Neurophysiology*

*PLOS Comput. Biol.*

*Cerebral Cortex*

*Computing in Science and Engineering*

*PLOS Computational Biology*

*Neural Computation*

*PLOS One*

*Cerebral Cortex*