## Abstract

Complex systems can be defined by “sloppy” dimensions, meaning that their behavior is unmodified by large changes to specific parameter combinations, and “stiff” dimensions, whose change results in considerable behavioral modification. In the neocortex, sloppiness in synaptic architectures would be crucial to allow for the maintenance of asynchronous irregular spiking dynamics with low firing rates despite a diversity of inputs, states, and short- and long-term plasticity. Using simulations on neural networks with first-order spiking statistics matched to firing in murine visual cortex while varying connectivity parameters, we determined the stiff and sloppy parameters of synaptic architectures across three classes of input (brief, continuous, and cyclical). Algorithmically generated connectivity parameter values drawn from a large portion of the parameter space reveal that specific combinations of excitatory and inhibitory connectivity are stiff and that all other architectural details are sloppy. Stiff dimensions are consistent across input classes, although self-sustaining synaptic architectures following brief input occupy a smaller subspace than those for the other input classes. Experimentally estimated connectivity probabilities from mouse visual cortex are consistent with the connectivity correlations we identified and fall in the same region of the parameter space as architectures identified algorithmically. This suggests that simple statistical descriptions of spiking dynamics are a sufficient and parsimonious description of neocortical activity when examining structure-function relationships at the mesoscopic scale. Additionally, coarse graining cell types does not prevent the generation of accurate, informative, and interpretable models underlying simple spiking activity.
This unbiased investigation provides further evidence of the importance of the interrelationship of excitatory and inhibitory connectivity to establish and maintain stable spiking dynamical regimes in the neocortex.

## 1 Introduction

Local synaptic connectivity in neocortex is fundamental to the generation and stipulation of the spiking dynamics (Cossell et al., 2015; Koulakov, Hromádka, & Zador, 2009) that underlie the formation of percepts, decisions, and the generation of appropriate behavioral responses. However, the rules that govern the mesoscopic-scale relationships between local synaptic architectures and spiking activity remain unclear. On one hand, synaptic architectures must be highly dynamic since they underlie, at least in part, the storage of information (Chklovskii, Mel, & Svoboda, 2004) and generate the range of spiking activity corresponding to distinct brain states (Doiron, Litwin-Kumar, Rosenbaum, Ocker, & Josić, 2016). On the other hand, it is clear that aberrant synaptic wiring can give rise to detrimental spiking behaviors and pathophysiology such as epilepsy (Engel, Thompson, Stern, Staba, Bragin, & Mody, 2013).

Modeling and theoretical analysis are essential complements to experimental investigations of structure-function relationships since they enable precise manipulation of simulated connectivity (Churchland & Abbott, 2016; Transtrum, Machta, Brown, Daniels, Myers, & Sethna, 2015). Any model of a biological system involves a large number of free parameters that cannot always be determined from experimental data or that vary greatly from one observation to the other. Nevertheless, numerous models of neural systems have successfully replicated key aspects of neuronal network dynamics, including different activity patterns (Chambers & MacLean, 2016; Ocker et al., 2017; Vegué & Roxin, 2019) and receptive field properties (Hopkins, Pineda-García, Bogdan, & Furber, 2018; Kerr, McGinnity, Coleman, & Clogenson, 2015; Zylberberg, Murphy, & DeWeese, 2011). Given the difficulty of estimating the exact values of connectivity parameters and the variability in these values observed in vivo, a reasonable hypothesis for why these models work is that multiple combinations of parameters can result in similar network activity (Brown & Sethna, 2003). Together, these observations are indicative of sloppy systems, whose behavior depends only on a few stiff combinations of parameters while the majority of parameters are not critical for accurate predictions of the system's behavior (Gutenkunst, Waterfall, Casey, Brown, Myers, & Sethna, 2007).

Sloppiness is a universal feature of models in systems biology (Daniels, Chen, Sethna, Gutenkunst, & Myers, 2008; Gutenkunst et al., 2007; Transtrum et al., 2015). For example, ionic conductances within individual neurons have consistently been found to vary greatly across neurons and between individuals despite regularity in spiking activity (Prinz, Bucher, & Marder, 2004; Ransdel, Nair, & Schulz, 2013; Schulz, Goaillard, & Marder, 2006). At the neuronal circuit level, stability and state changes are mediated by a subset of neurons described by a small number of stiff parameter combinations while the parameters of the remainder of the neurons are sloppy (Panas et al., 2015; Ponce-Alvarez, Mochol, Hermoso-Mendizabal, de la Rocha, & Deco, 2020).

What remains unclear, however, are the stiff and sloppy parameter combinations that define synaptic architectures capable of producing spiking statistics consistent with dynamic regimes observed in neocortex. Neocortical networks have been shown to operate in different regimes and to rapidly transition between them (Brunel, 2000; Tan, Chen, Scholl, Seidemann, & Priebe, 2014; Fontenele et al., 2019). Here we focused on the dominant dynamical regime of the neocortex, which is low rate, irregular, and asynchronous (Brunel, 2000; El Boustani, Pospischil, Rudolph-Lilith, & Destexhe, 2007; Davis et al., 2021). The conservation of both connectivity and wiring cost across different species (Assaf, Bouznach, Zomet, Marom, & Yovel, 2020) is a strong incentive to find the stiff and sloppy dimensions of synaptic architectures. Moreover, delineating these dimensions of neocortical wiring can better constrain cortical models. Understanding the stiff and sloppy dimensions also carries implications beyond the realm of organic neural systems into artificial neural networks (ANNs). It has been shown that connectivity patterns are directly related to the dimensionality of the activity in recurrent spiking neural networks (SNNs), with the latter decreasing as overall connectivity increases (Recanatesi et al., 2019). Hence, delineating the small number of parameters that describe stiff dimensions of network connectivity will allow further studies to capture meaningful functional and computational principles that define those networks.

Here, we survey a large portion of the synaptic connectivity parameter space to create wiring diagrams and identify parameter combinations capable of producing activity matched to murine visual cortex (V1). We identify the stiff and sloppy parameter combinations of synaptic architectures responsible for producing naturalistic activity and compare these algorithmically identified connectivity parameters to recent experimental values, finding them to be largely in agreement.

## 2 Results

### 2.1 Grid Search for Synaptic Architectures Producing Naturalistic Spiking

In previous work, we have shown that conductance-based synapses are crucial to accurately simulate neuronal integration of synaptic inputs—a critical consideration when evaluating structure-function hypotheses (Bojanek, Zhu, & MacLean, 2020; Chambers & MacLean, 2016). Connections between neurons in each network were determined randomly based on four connectivity probabilities, $p_{ee}$, $p_{ei}$, $p_{ie}$, and $p_{ii}$, where the first and second indices refer to the presynaptic and postsynaptic populations, respectively. Although neocortical synaptic connectivity is not entirely random, we have found that clustering of synaptic connectivity facilitates stable asynchronous irregular activity with low firing rates (Bojanek et al., 2020). We note that even in Erdős-Rényi synaptic networks, the functional connectivity is clustered with a preference for certain motifs, suggesting that regardless of the underlying synaptic connectivity, networks exhibiting naturalistic activity will effectively be clustered (Chambers & MacLean, 2016).

Rather than impose clustering on synaptic connectivity and potentially bias certain outcomes, we chose to keep connectivity random. We conducted a grid search over a range of synaptic connectivity parameters defined by both excitatory and inhibitory connectivity and then quantified how SNN model behavior changed as connectivity parameter values varied (see Figure 1B). Specifically, we determined which connectivity parameter combinations were capable of producing sustained spiking activity matched to in vivo spiking of murine visual cortex (Billeh et al., 2020; Dechery & MacLean, 2018; Niell & Stryker, 2010; Siegle et al., 2021; Steinmetz, Zatka-Haas, Carandini, & Harris, 2019). The performance of the networks was quantified using a set of objective functions, each of which corresponded to individual first-order statistical descriptors of spiking activity: firing rates, synchrony, and fraction of trials with sustained activity (see Figure 1C and section 4.4).
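The random wiring procedure described above can be sketched as follows. This is a minimal illustration only: the function name, network size, seed, and probability values are invented, not taken from the study.

```python
import random

def build_connectivity(n_exc, n_inh, p, seed=0):
    """Randomly wire a network from the four connection probabilities.

    `p` maps (pre, post) population labels to a probability, e.g.
    p[("e", "i")] is the likelihood of an excitatory->inhibitory synapse
    (first index presynaptic, second postsynaptic, as in the text).
    """
    rng = random.Random(seed)
    labels = ["e"] * n_exc + ["i"] * n_inh
    n = len(labels)
    # adjacency[i][j] == 1 means a synapse from neuron i onto neuron j
    adjacency = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < p[(labels[i], labels[j])]:
                adjacency[i][j] = 1
    return adjacency

# Example: one illustrative point of the connectivity grid
p = {("e", "e"): 0.2, ("e", "i"): 0.2, ("i", "e"): 0.2, ("i", "i"): 0.3}
adj = build_connectivity(n_exc=80, n_inh=20, p=p)
```

Because each synapse is an independent Bernoulli draw, this produces the Erdős-Rényi-style wiring the text describes, without imposed clustering.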

We began by identifying combinations of parameters that produced firing rates between 8 and 15 Hz (Siegle et al., 2021) and synchrony scores corresponding to a Van Rossum distance greater than 4 (see section 4.4) in response to the three classes of input (constant continuous, cyclical continuous, and brief). Notably, the input classes fall along a continuum of durations, and as the inputs become increasingly brief, increased emphasis is placed on network architectures capable of producing self-sustaining activity following input. Sustained activity was a requirement even in the case of brief input given both in vivo and in vitro studies showing that networks of neurons are capable of generating and maintaining activity even in the absence of external inputs (Mao, Zatka-Haas, Carandini, & Harris, 2001; Winnubst, Cheyne, Niculescu, & Lohmann, 2015).

The networks were first tested using continuous input from Poisson units firing with rates drawn from a log-normal distribution with a mean of 17 ± 5.3 Hz. Out of the 14,641 unique connectivity parameter combinations tested, 5080 resulted in sustained activity for the length of the simulation more than 50% of the time. Of those, 579 synaptic wiring diagrams showed average excitatory firing rates in the desired range ($\bar{\nu}_e = 10.8 \pm 1.9$ Hz). Those networks had a mean synchrony score of 1.03 ± 0.06 using the fast synchrony measure (see section 4.4; $\bar{\nu}_i = 27.2 \pm 8.1$ Hz; $\bar{p}_{sus} = 0.998 \pm 0.029$). Networks receiving cyclical input (1.67 Hz with maximal firing rates of units drawn from the same log-normal distribution) showed a lower number of successful parameter combinations at 199, including 47 networks with rates also between 8 and 15 Hz ($\bar{\nu}_e = 10.8 \pm 2.1$ Hz; $\bar{\nu}_i = 25.6 \pm 7.6$ Hz; $\bar{s} = 1.05 \pm 0.11$; $\bar{p}_{sus} = 0.97 \pm 0.10$).

Finally, we evaluated architectures capable of producing self-sustaining activity in response to brief (300 ms) excitatory Poisson input. We similarly simulated 73,205 trials for 14,641 synaptic architectures corresponding to a range of different parameter combinations. Self-sustained activity was the hardest to achieve, resulting in the fewest successful networks: 241 trials showed self-sustained activity, which in turn corresponded to 44 unique parameter combinations that did so in at least 50% of the simulations.

This limited number of possible networks spread out over the entire range of parameters tested (see Figure 2D) is, by definition, indicative of both sloppiness and stiffness of spiking neural networks. For the rest of the analyses, we consider only networks that achieved sustained activity for each type of input.

It is noteworthy that although the synchrony level was calculated using only excitatory spikes (see section 4.4), the synchrony score is highly correlated with the inhibitory firing rate but not the excitatory one (see Figure 2A). This confirms that this measure of synchrony does not simply scale with the number of spikes. Moreover, it is consistent with the role of local inhibition setting spike timing in the excitatory pool (Haider, Häusser, & Carandini, 2013).

### 2.2 Correlated Components of Naturally Spiking Synaptic Architectures

As an initial investigation of potentially stiff parameter combinations, we examined the differences and ratios between pairs of connectivity probabilities for the parameter combinations that resulted in sustained activity. We observed that certain differences in the connection likelihoods were highly correlated with each other (either positively or negatively), while others were uncorrelated (see Figures 2E and 2G). Specifically, $p_{ee}-p_{ie}$ and $p_{ei}-p_{ie}$ were strongly and positively correlated at 0.96 in the case of brief input, while $p_{ee}-p_{ie}$ and $p_{ii}-p_{ei}$ consistently showed the strongest negative correlations ($-0.99$ for brief and cyclical inputs, $-0.95$ for continuous inputs; see Figures 2E, 2I, and 2K). The same pattern of correlations was observed among pairs of ratios (e.g., $p_{ee}/p_{ie}$ and $p_{ee}/p_{ii}$ very strongly correlated at 0.95, 0.89, and 0.88 in the case of brief, cyclical, and continuous input, respectively; see Figures 2G, 2J, and 2L), indicating that networks that result in sustained activity are more likely to have connectivity parameters that follow these linear relationships between certain differences and ratios of excitatory and inhibitory connectivity. In other words, certain combinations of differences or combinations of ratios may constitute stiff parameters, while the others may be sloppy.

In fact, especially in the case of networks receiving brief input, when looking at the pairs of differences that are highly correlated, it appeared that the majority of the networks had the same values for these differences despite having very different probabilities of connectivity (see Figures 2F and 2H). Similar strong correlations were found among differences and ratios of wiring parameters in networks sustaining their activity with low firing rates using both brief and cyclical inputs (see Figures 2I and 2J). When not restricting the firing rate to the target range, the correlations remained but with lower magnitudes. Networks that exhibited sustained low-rate activity in response to continuous input ($n=579$) also exhibited the majority of the same significant correlations (see Figures 2K and 2L).

We note that if we did not control for firing rate, these correlations were less apparent in networks that spiked in response to continuous inputs. To confirm that the correlations that appear for rate-matched networks are not statistical artifacts due to the relatively small number of networks considered ($\sim$11% of networks with sustained activity in more than 50% of the trials), we matched the sample size ($n=579$) in different sets of networks randomly selected from those that showed sustained activity in more than 50% of the trials and evaluated the correlations among the pairs of parameters. The lack of strong correlations in the case of randomly selected networks confirms that the results we observed are related to the rates of those networks being in the target range.

These correlated parameter combinations are additional indicators of the presence of stiff dimensions within synaptic architectural parameter combinations when matching stable spiking activity in the network to in vivo recordings. It is notable that although different networks showed sustained spiking with each type of input, the same pairs of connectivity parameter combinations were crucial to the production of sustained and murine-matched activity regardless of input type.

### 2.3 Experimentally Measured Synaptic Wiring in Mouse Visual Cortex Agrees with Algorithmically Identified Correlations

Grid search in the continuous input condition found that the ratios $p_{ee}/p_{ei}$ and $p_{ii}/p_{ie}$ have a strong negative correlation despite not sharing any parameter, while $p_{ee}/p_{ii}$ and $p_{ei}/p_{ie}$, which also have distinct pairs, are not correlated (see Figure 2L). We found that the coarse-grained experimentally derived connectivity values were consistent with the correlations that we found. We tested the importance of each of these pairs of ratios in sustaining network activity using the connectivity parameters previously reported (Billeh et al., 2020) and found that if the ratios $p_{ee}/p_{ei}$ and $p_{ii}/p_{ie}$ were maintained while changing the actual parameter values, the majority of the resulting networks continued to exhibit sustained activity regardless of how large the values of the probabilities became (up to 0.99; see Figure 3C). However, it should not come as a surprise that not all the resulting networks exhibited sustained activity since other potentially critical parameter combinations were being varied simultaneously. In contrast, using those same networks, maintaining the ratios $p_{ei}/p_{ie}$ and $p_{ee}/p_{ii}$ (which are very weakly correlated) did not result in networks that sustained activity even with continuous input (see Figure 3D).
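One way to sketch this manipulation: pick new values for $p_{ee}$ and $p_{ii}$ freely, then derive $p_{ei}$ and $p_{ie}$ so the two correlated ratios stay fixed. The function name, target ratios, and starting values below are illustrative assumptions, not the Billeh et al. values.

```python
def vary_keeping_two_ratios(pee, pii, r_ee_ei, r_ii_ie):
    """Build a parameter set in which pee and pii are chosen freely while
    the ratios pee/pei and pii/pie are held at fixed values."""
    return {"ee": pee, "ei": pee / r_ee_ei, "ie": pii / r_ii_ie, "ii": pii}

# Illustrative target ratios (invented for this sketch)
r1, r2 = 0.8, 1.25
small = vary_keeping_two_ratios(0.16, 0.40, r1, r2)
large = vary_keeping_two_ratios(0.40, 0.99, r1, r2)  # much denser wiring
# Both parameter sets preserve the two ratios exactly
assert abs(small["ee"] / small["ei"] - r1) < 1e-12
assert abs(large["ii"] / large["ie"] - r2) < 1e-12
```

Scanning such ratio-preserving families of networks, as in Figure 3C, isolates the effect of overall connection density from the effect of the ratios themselves.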

### 2.4 All Four Classes of Connectivity Contribute to Stiff Dimensions

To identify the parameter combinations that had the greatest impact on spiking activity, we decomposed the Fisher information matrix (FIM) into eigenvectors and identified the corresponding eigenvalues. The eigenvalues of all FIMs extended over several orders of magnitude, consistent with many synaptic architecture parameter combinations being sloppy (see Figures 4B, 4G, and 4I). The eigenvectors that corresponded to the largest eigenvalues define the stiffest dimensions and indicated that those specific parameter combinations have the greatest impact on SNN spiking activity. Indeed, when projecting all the parameter vectors with sustained activity onto the different eigenvectors, we find that the first eigenvector—corresponding to the largest eigenvalue—had the lowest variance in projections (see Figures 4E and 4F), confirming that the first eigenvector defines the stiffest dimension of parameter space.
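The decomposition and projection steps can be sketched with NumPy on a toy symmetric FIM. All numbers here are invented for illustration; only the procedure (eigendecomposition, then projection of successful parameter vectors onto each eigenvector) mirrors the analysis in Figures 4E and 4F.

```python
import numpy as np

# Toy symmetric FIM over the four connectivity parameters (invented values)
fim = np.array([
    [4.0, 1.0, 0.5, 0.2],
    [1.0, 3.0, 0.4, 0.3],
    [0.5, 0.4, 2.0, 0.1],
    [0.2, 0.3, 0.1, 0.05],
])
eigvals, eigvecs = np.linalg.eigh(fim)
# eigh returns eigenvalues in ascending order; reverse so that index 0
# corresponds to the largest eigenvalue, i.e., the stiffest dimension
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project parameter vectors of "successful" networks (toy values) onto
# each eigenvector; the stiffest dimension should show the lowest spread
params = np.array([
    [0.12, 0.18, 0.10, 0.30],
    [0.20, 0.28, 0.14, 0.34],
    [0.26, 0.30, 0.18, 0.38],
])
projections = params @ eigvecs  # one column per eigenvector
variances = projections.var(axis=0)
```

Low variance of the projections onto the first eigenvector is the signature of a stiff dimension: viable networks are free to wander along sloppy directions but stay pinned along stiff ones.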

We then evaluated the sparsity of the FIM using the Gini coefficient to establish the complexity of the stiff dimensions (Panas et al., 2015). In the majority of networks, the Gini coefficient was low (0.21 ± 0.07), indicating that the FIMs were not sparse (Panas et al., 2015). The lack of sparsity indicates that the stiff dimensions depend on complex combinations of several parameters and not only on a few critical parameters (Gutenkunst et al., 2007; Panas et al., 2015), consistent with the results of the grid search. Indeed, the contribution of each parameter to the first eigenvectors reveals that those dimensions are equally sensitive to changes in all of the parameters (see Figures 4D, 4H, and 4J). These results were consistent across the three types of inputs.
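The Gini coefficient can be computed from the FIM entries with a short helper. This is a standard formulation based on the sorted values (the exact normalization used in the study may differ), and the flattened toy matrix is invented for illustration.

```python
def gini(values):
    """Gini coefficient of the absolute values: near 0 for a uniform
    (non-sparse) distribution, approaching 1 for a sparse one."""
    xs = sorted(abs(v) for v in values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula from the sorted cumulative distribution
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

# Toy 4x4 FIM, flattened; entries of similar magnitude -> low Gini
flat_fim = [0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.95, 1.05,
            1.1, 0.9, 1.0, 1.0, 0.85, 1.15, 1.0, 1.0]
g = gini(flat_fim)  # low value indicates the FIM is not sparse
```

A perfectly uniform input gives `gini(...) == 0.0`, and mass concentrated in one entry pushes the coefficient toward 1, which is why a low score implies the stiff dimensions mix many parameters.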

### 2.5 Input Brevity Increasingly Restricts the Viable Wiring Parameter Space

Parameter combinations derived from Billeh et al. (2020) for L2/3 and for the laminar-agnostic primary visual cortex fall inside the smallest region. However, connectivity parameters from L4 and L5 fall outside this restricted region and also exhibit sustained activity in response to continuous input (see Figure 5A). Notably for L5 parameters, the projection onto the first eigenvector fell in the same very narrow range, but the projection onto the second eigenvector differed.

The dimensions used for this analysis are the stiffest dimensions of the system; consistently, the same type of analysis on the less informative dimensions revealed no structure in the data (see Figure 5B).

The nonsparsity of the FIMs indicated the dependence of the stiff dimensions on multiple parameters. We found that the projections of the parameter combinations that exhibited sustained activity following brief input onto the stiffest dimension all fell around 0 (see Figure 5A). The eigenvector considered here is $e_1 = [-0.58, 0.43, 0.54, -0.43]$, with components ordered as $(p_{ee}, p_{ei}, p_{ie}, p_{ii})$. Setting the projection to zero gives $-0.58p_{ee} + 0.43p_{ei} + 0.54p_{ie} - 0.43p_{ii} \approx 0$; dividing through by 0.43 yields $1.3p_{ee} - p_{ei} \approx -(p_{ii} - 1.3p_{ie})$. This explains the strong, negative correlation found between $p_{ee}-p_{ei}$ and $p_{ii}-p_{ie}$ (see Figure 2E). Indeed, the differences and ratios between connectivity probabilities could have been used as the FIM parameters to identify stiff dimensions that depend on only a subset of the parameters considered. However, the results of the grid search indicated that this is not possible because of the strong correlations found between differences and ratios (see Figure 2). Hence, inherent relations between the different probabilities of connectivity make it impossible to get stiff dimensions dependent on unique model parameters resulting from straightforward combinations of probabilities of connectivity.
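This rearrangement can be checked numerically. The parameter vector in the sketch below is invented to lie near the zero-projection surface; only the eigenvector components come from the text.

```python
# First eigenvector, ordered as (pee, pei, pie, pii)
e1 = [-0.58, 0.43, 0.54, -0.43]

def projection(p, v=e1):
    """Dot product of a parameter vector with the stiffest eigenvector."""
    return sum(a * b for a, b in zip(p, v))

# An invented parameter vector chosen near the zero-projection surface
p = (0.20, 0.26, 0.20, 0.26)
lhs = 1.3 * p[0] - p[1]     # 1.3*pee - pei
rhs = -(p[3] - 1.3 * p[2])  # -(pii - 1.3*pie)
```

For vectors with near-zero projection onto $e_1$, the two sides agree, which is the algebraic content of the correlation in Figure 2E.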

Well-founded new methods of describing the connections between neurons at the network level instead of probabilities of connectivity between pairs of neurons could potentially address this problem. For instance, instead of looking at the probability of pairwise connections, one could consider the probability of different triplet motifs that include both excitatory and inhibitory neurons, which might result in less complex stiff dimensions. In the case of neuronal parameters, stiff dimensions that depend on only a subset of the parameters have been obtained (Panas et al., 2015; Ponce-Alvarez et al., 2020).

## 3 Discussion

Using large-scale algorithmic grid searches, we found that the parameter space of synaptic architectures is highly anisotropic: large ranges of parameter values produce spiking that matches that of murine visual cortex, while a specific subset of parameter value combinations dramatically changes network activity. These are the sloppy and stiff dimensions, respectively. Stiff parameter combinations generalize across three broad classes of input into the network. Notably, the region of viable parameter combinations constricted as the requirement for architectures being capable of self-sustaining activity increased. We limited our examination here to one observed spiking regime—asynchronous, irregular, and low firing rate (Brunel, 2000). While this is our dominant observation and that of others in vivo (El Boustani et al., 2007; Davis et al., 2021), the neocortex is capable of rapid state transitions defined by different spiking statistics (Brunel, 2000; Tan et al., 2014; Fontenele et al., 2019). The relation between stiff dimensions and such state transitions would be of great interest for future studies.

A recurring theme in neuroscience is that stiff parameter combinations encompass opposing forces. For example, most combinations of ion channel conductances are sloppy with the exception of a maintained ratio between a hyperpolarizing conductance and a depolarizing conductance (MacLean, Zhang, Johnson, & Harris-Warrick, 2003; Prinz et al., 2004; Ransdell et al., 2013; Schulz et al., 2006). Here we show that the connectivity statistics between and within excitatory and inhibitory neurons comprise the stiff parameter combinations of synaptic architectures. Indeed, maintaining a balance between excitation and inhibition is critical for normal network activity and has been resolved in synaptic conductances at the single cell level in vivo (Haider, Duque, Hasenstaub, & McCormick, 2006). Moreover, many studies have also argued that a balance of excitation and inhibition underlies irregular firing in the neocortex (van Vreeswijk & Sompolinsky, 1996, 1998).

We show that this balance is achieved in synaptic architectures through inter- and intrapopulation neuronal synaptic connections and involves excitation, inhibition, and disinhibition. It is for this reason that we found that all four connectivity parameters that we implemented contribute to the stiff dimensions and why pairs of connection likelihoods that do not share any parameters were highly correlated. While a balance between excitation and inhibition is well established in neocortex, the approach that we used here is unbiased and without any prior assumptions as to the importance of parameter combinations. Given our broad search, it was possible that stiff parameter combinations do not implicate a balance of excitation and inhibition. The fact that we find these specific stiff parameter combinations reinforces the importance of the balance in maintaining an asynchronous irregular and low firing rate regime in neocortex.

Surprisingly, despite simplistic coarse graining, experimentally estimated connectivity probabilities from the entirety of V1 of mice, as well as connectivity probabilities from L2/3 of V1 (Billeh et al., 2020), fall inside the small space of topologies that we identified via algorithmic search as potentially capable of self-sustained activity. This may suggest a level of autonomy of L2/3 activity not achievable in other laminae. There were notable differences between the algorithmically identified and experimentally measured synaptic architectures in other laminae.

Experimentally estimated L5 connectivity parameters themselves did not fall within the restricted space of viable synaptic architectures, although the projection onto the first eigenvector does, indicating congruence with the stiffest dimension. The difference between L5 and algorithmically identified topologies is in the projection onto the second eigenvector, a less stiff dimension, which may reflect differences in the proportions of inhibitory neurons between the laminae. In fact, while the probabilities of connections from subtypes of inhibitory neurons to specific subtypes of neurons do not vary substantially between the different cortical layers (Billeh et al., 2020), the difference we observed in the generalized probabilities is likely due to the predominance of parvalbumin- and somatostatin-positive inhibitory neurons in L5 (48% and 43% of inhibitory neurons, respectively) when compared to the percentage of Htr3a-positive neurons (9% versus 50% in L2/3).

As a result, the coarse-grained i-i and i-e probabilities in L5 are much higher than the excitatory probabilities in contrast to the other laminae. This could theoretically be compensated for by the much stronger e-e synapses recorded in L5 compared to L2/3 (Billeh et al., 2020; Cossell et al., 2015; Hofer et al., 2011; Jiang et al., 2015; Lefort, Tomm, Floyd Sarria, & Petersen, 2009; Song, Sjöström, Reigl, Nelson, & Chklovskii, 2005); however, we did not take this particular parameter into consideration in our simulation or analysis. Despite the link between synaptic strength and sustained activity (Vogels & Abbott, 2005), which is directly related to the role of total excitatory drive onto each neuron (Ahmadian & Miller, 2021), this is not a parameter that we varied because we opted to focus on the specifics of the synaptic wiring to the extent possible. To do so we used random connectivity weights and conductance-based synapses with time-decaying conductances that result in the total input to each neuron dynamically changing during a single trial since each input is contextualized by the conductance state of the modeled postsynaptic neuron. Notably, it was shown that conductance-based synapses, as compared to current-based synapses, result in more informative spiking dynamics (Cavallari, Panzeri, & Mazzoni, 2014). Connectivity parameters of L4 also fall outside the region of the parameter space identified by the stiffest dimensions. This difference lacks an easy explanation but may simply reflect the different roles that layer 4 is hypothesized to play as compared to other laminae and the fact that thalamocortical connectivity is particularly important and heterogeneous into L4 (Landau, Egger, Dercksen, Oberlaender, & Sompolinsky, 2016).

Machine learning techniques demonstrate that the structure of neural networks can be modified to achieve a specific task and that, following training, these models are capable of accurately modeling neocortical neuronal activity at different stages of the visual processing hierarchy (Yamins, Hong, Cadieu, Solomon, Seibert, & DiCarlo, 2014). Here we studied synaptic network architectures identified using first-order spiking statistics rather than training on a task. It will be of great interest to evaluate whether training similar networks (Bellec et al., 2020) will result in convergence to the same set of synaptic architectures.

Our approach to identify synaptic architectures based solely on spiking statistics and the correspondence with experimental measures demonstrates the utility of spiking statistics as a parsimonious way of studying structure-function relations. Indeed, previous work using maximum entropy models fit to spiking data was similarly successful at identifying cellular-level stiff and sloppy dimensions (Panas et al., 2015; Ponce-Alvarez et al., 2020). These results are consistent with a previous experimental study that identified neurons of varying levels of correlation with the network activity (Okun et al., 2015; Ponce-Alvarez et al., 2020). Our model relied on much simpler and more interpretable descriptors of neuronal firing and elucidated the role of tractable connectivity parameters instead of neuronal ones. In sum, these studies, along with this work, form a compelling argument to study mesoscale connectivity as well as network-wide correlations by fitting models to spiking statistics. In addition, our findings show that coarse graining of all the different cell types is sufficient for studies of neuronal spiking patterns alone, as compared to more complex cortical functions for which these elaborations and more complicated models might become more justified.

## 4 Methods

### 4.1 Model Architecture

When a neuron fires, the membrane potential is reset, the adaptation current is increased by a value of $b$, and the corresponding synaptic conductance at the synapses receiving the signal is increased by the weight of the connection. The weights of the connections were randomly drawn from a log-normal distribution, with the parameters of the corresponding normal being $\mu = -0.64$ and $\sigma = 0.51$. Connections from inhibitory neurons were enhanced by an order of magnitude given their stronger effect on neurons compared to their excitatory counterparts in biological networks due to their tendency to be localized near the soma of postsynaptic neurons (Huang, Ruff, Pyle, Rosenbaum, Cohen, & Doiron, 2019). The parameters fixed across all simulations are defined and summarized in Table 1.
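The weight-drawing step can be sketched with Python's standard library; `random.lognormvariate` takes the mean and standard deviation of the underlying normal, matching the parameterization in the text. The helper name and sample counts are illustrative.

```python
import random

rng = random.Random(1)
MU, SIGMA = -0.64, 0.51  # parameters of the underlying normal (from the text)

def draw_weight(presynaptic_type):
    """Draw one synaptic weight; connections from inhibitory neurons
    are enhanced by an order of magnitude."""
    w = rng.lognormvariate(MU, SIGMA)
    return 10 * w if presynaptic_type == "i" else w

exc_weights = [draw_weight("e") for _ in range(10000)]
inh_weights = [draw_weight("i") for _ in range(10000)]
```

All weights are strictly positive (the sign of a connection is carried by its synapse type, not its weight), and inhibitory weights sit roughly an order of magnitude above excitatory ones.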

Table 1: Parameters fixed across all simulations.

| Parameter | Notation | Value |
|---|---|---|
| Capacitance | $C$ | 281 pF |
| Leak conductance | $g_L$ | 30 nS |
| Leak reversal potential | $E_L$ | $-70.6$ mV |
| Slope factor | $\Delta_T$ | 2 mV |
| Firing threshold | $V_T$ | $-40.4$ mV |
| Excitatory reversal potential | $E_e$ | 0 mV |
| Inhibitory reversal potential | $E_i$ | $-75$ mV |
| Excitatory synaptic time constant | $\tau_e$ | 10 ms |
| Inhibitory synaptic time constant | $\tau_i$ | 3 ms |
| Input synaptic time constant | $\tau_p$ | 10 ms |
| Adaptation time constant | $\tau_w$ | 144 ms |
| Subthreshold adaptation | $a$ | 4 nS |
| Spike-triggered adaptation | $b$ | 80.5 pA |


Initial voltages of all neurons were randomly drawn from a normal distribution with mean $\mu = -65$ mV and standard deviation $\sigma = 5$ mV. All simulations were implemented in Python 3 using the Brian simulator (version 2.2.1; Stimberg, Brette, & Goodman, 2019).
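The weight and initial-voltage sampling described above can be sketched in NumPy as follows (a minimal sketch; the function names and the seeded generator are illustrative, not from the original code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(n_connections, inhibitory=False, mu=-0.64, sigma=0.51):
    """Synaptic weights drawn from a log-normal distribution; inhibitory
    weights are scaled by a factor of 10 (one order of magnitude)."""
    w = rng.lognormal(mean=mu, sigma=sigma, size=n_connections)
    return 10.0 * w if inhibitory else w

def sample_initial_voltages(n_neurons, mu_mV=-65.0, sigma_mV=5.0):
    """Initial membrane potentials (mV) from a normal distribution."""
    return rng.normal(mu_mV, sigma_mV, size=n_neurons)
```

For a log-normal distribution, the median equals $e^{\mu}$, which gives a quick sanity check on the sampled weights.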

### 4.2 Network Input

Network activity was initiated using a population of 3000 Poisson units connected to both the excitatory and inhibitory neurons in the main network with a connection probability of 0.1 unless otherwise specified. The weights of these connections were randomly drawn from the same log-normal distribution as the recurrent weights ($\mu = -0.64$, $\sigma = 0.51$). Three types of input were considered:

- Brief input: The firing rates of the Poisson units were drawn from a log-normal distribution with the parameters of the corresponding normal being $\mu = 2.8$ and $\sigma = 0.3$, resulting in a mode at around 15 Hz. Their activity was halted after 300 ms.

- Continuous input: Similar to the brief input, but the activity of the input units was maintained for the duration of the simulation.

- Cyclical input: The maximum firing rates $(\nu_{max})$ of the input units were drawn from the same log-normal distribution as that of the brief input, but the instantaneous firing rates varied between 0 and $\nu_{max}$ according to a sinusoidal function with a period of 600 ms.
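The three input classes can be sketched as instantaneous rate profiles for the Poisson units (a sketch: the 1 ms profile time step and function names are assumptions; the actual simulations used Brian 2's Poisson generators). Note that the log-normal mode $e^{\mu - \sigma^2} = e^{2.8 - 0.09} \approx 15$ Hz matches the stated value.

```python
import numpy as np

rng = np.random.default_rng(1)
DT_MS = 1.0  # hypothetical time step for the rate profiles

def draw_peak_rates(n=3000, mu=2.8, sigma=0.3):
    """Per-unit rates (Hz) from a log-normal; mode = exp(mu - sigma^2) ~ 15 Hz."""
    return rng.lognormal(mu, sigma, size=n)

def rate_profile(kind, duration_ms, peak_rates):
    """Instantaneous firing rates, shape (time_bins, units)."""
    t = np.arange(0.0, duration_ms, DT_MS)
    if kind == "brief":       # input silenced after 300 ms
        gate = (t < 300.0).astype(float)
        return gate[:, None] * peak_rates[None, :]
    if kind == "continuous":  # constant for the whole simulation
        return np.ones_like(t)[:, None] * peak_rates[None, :]
    if kind == "cyclical":    # sinusoid between 0 and nu_max, 600 ms period
        mod = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / 600.0))
        return mod[:, None] * peak_rates[None, :]
    raise ValueError(kind)
```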

### 4.3 Connectivity Parameters

To minimize sampling bias, the four connectivity parameters were algorithmically determined using a low-resolution, four-dimensional grid search in which $p_{ee}$, $p_{ei}$, and $p_{ie}$ varied between 0.10 and 0.30 and $p_{ii}$ varied between 0.20 and 0.40 with an increment of 0.02. This resulted in 14,641 parameter combinations, each of which was simulated five times with different initial membrane potentials, yielding 73,205 simulations.
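The grid described above can be reproduced in a few lines (a sketch; variable names are illustrative). Each of the four axes has 11 values, giving $11^4 = 14{,}641$ combinations:

```python
import itertools
import numpy as np

# p_ee, p_ei, p_ie range over [0.10, 0.30]; p_ii over [0.20, 0.40]; step 0.02.
low = np.round(np.arange(0.10, 0.30 + 1e-9, 0.02), 2)   # 11 values
high = np.round(np.arange(0.20, 0.40 + 1e-9, 0.02), 2)  # 11 values

# Each tuple is (p_ee, p_ei, p_ie, p_ii).
grid = list(itertools.product(low, low, low, high))
```

With five trials per combination, this yields the 73,205 simulations reported in the text.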

Connection probabilities for the coarse-grained cell classes were computed from subtype-specific probabilities as

$$p_{k \to h} = \sum_m \sum_n f_{k_m} \, f_{h_n} \, p_{k_m \to h_n}, \qquad k, h \in \{e, i\},$$

where $p_{k \to h}$ and $p_{k_m \to h_n}$ are, respectively, the connection probability from neurons of type $k$ to neurons of type $h$ and the connection probability from neurons of subtype $k_m$ to neurons of subtype $h_n$, and $f_{k_m}$ and $f_{h_n}$ are, respectively, the fraction of neurons of subtype $k_m$ out of all the $k$ neurons and the fraction of neurons of subtype $h_n$ out of all the $h$ neurons.

For simulations using the probabilities calculated from the statistics of either L2/3 or L5, the connection probability of the input units to the network was computed based on the connections from L4 and L2/3 excitatory neurons, respectively, to each layer. In the case of L2/3, the resulting value was decreased to account for the numerous inhibitory connections from L4 onto excitatory neurons in L2/3.
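The subtype-weighted averaging of connection probabilities can be sketched as follows (a minimal sketch; the function name and the matrix layout, presynaptic subtypes as rows and postsynaptic subtypes as columns, are assumptions):

```python
import numpy as np

def coarse_grain(p_sub, f_pre, f_post):
    """Coarse-grained connection probability between two cell classes.

    p_sub[m, n]: connection probability from presynaptic subtype m to
                 postsynaptic subtype n.
    f_pre[m], f_post[n]: subtype fractions (each vector sums to 1).
    Returns sum_{m,n} f_pre[m] * f_post[n] * p_sub[m, n].
    """
    return float(f_pre @ p_sub @ f_post)
```

With uniform subtype fractions, this reduces to the simple mean of the subtype probabilities.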

### 4.4 Matching Spiking Statistics to Murine Visual Cortex

The fit of network activity to naturalistic spiking statistics was evaluated based on the following criteria:

- Firing rates of excitatory $(\nu_e)$ and inhibitory $(\nu_i)$ neurons.

- Synchrony $(s)$: On runs that were tested individually, synchrony was quantified using the Van Rossum distance between excitatory spike trains with a 10 ms time constant, computed with the “elephant” package (Denker, Yegenoglu, & Grün, 2018; van Rossum, 2001). The Van Rossum distance measures spike-train dissimilarity after convolving the spikes with a decaying exponential kernel. For the Fisher information matrix calculations (see section 4.5), we used the Van Rossum distance on the excitatory spikes during the last 150 ms only, since these are representative of network activity after cessation of input, and restricting the window allows faster computation. However, because computing the Van Rossum distance is computationally expensive, we devised a fast estimate of excitatory synchrony for use during the grid search: activity was divided into 10 ms time bins; for each bin, we calculated the variance of the number of spikes per neuron divided by the mean number of spikes per neuron, and we then averaged this ratio over all time bins. To evaluate the validity of this score, we calculated both the new synchrony measure and the Van Rossum distance for 45 networks that showed sustained activity; the two scores had a correlation of 94%.

- Proportion of runs that resulted in sustained activity $(p_{sus})$: Activity was considered sustained if excitatory neurons fired until the end of the simulation with no more than 150 ms of network-wide inactivity. In general, networks that showed sustained activity for 1 second sustained their activity for the duration of the simulation regardless of simulation time. However, for runs initiated with brief input, some networks had activity that truncated after 1 second of sustained firing.
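The fast synchrony estimate described above (variance over mean of per-neuron spike counts in each 10 ms bin, averaged over bins) can be sketched as follows; the handling of empty bins is an assumption:

```python
import numpy as np

def fast_synchrony(spike_counts):
    """Fast synchrony estimate used during the grid search.

    spike_counts: array of shape (n_bins, n_neurons), spikes per neuron in
    each 10 ms bin. For each bin, compute variance/mean of the per-neuron
    counts (a Fano-factor-like ratio), then average over bins; bins with no
    spikes are excluded (an assumption of this sketch).
    """
    var = spike_counts.var(axis=1)
    mean = spike_counts.mean(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        ratio = np.where(mean > 0, var / mean, np.nan)
    return float(np.nanmean(ratio))
```

Intuitively, if every neuron fires the same number of spikes in a bin the ratio is 0 (asynchronous, regular), while clumping of spikes onto a subset of neurons inflates the variance and hence the score.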

To ensure that all the networks we evaluated would sustain activity for any simulation duration, we developed an additional check. We selected 50 parameter combinations that produced activity sustained for 1 second in at least one trial and ran 10 additional trials, each with a new adjacency matrix, initial voltages, and input units, for each combination. Out of these 500 runs, 435 sustained activity for at least 1 second; of these, 85 truncated later in the simulation. We then used these data to train a support vector machine (SVM) classifier with a radial basis function (RBF) kernel to predict which runs that sustained activity for 1 second would truncate later. The prediction was based on the excitatory and inhibitory firing rates and the fast synchrony measure, all calculated on the spikes during the first second of activity only. The best hyperparameters were determined using a cross-validation grid search carried out across different methods of scaling the three features. The highest accuracy (96%) was obtained with a power transformation of the scores, yielding a more Gaussian-like distribution, and with $\gamma = 0.1$ and $C = 1000$. This classifier was then applied during the grid search over the connectivity parameters. The SVM was implemented using scikit-learn version 0.22.1 (Pedregosa et al., 2011). In this way, we ensured that all models considered would spike for the full simulation regardless of its duration.
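The truncation classifier can be sketched with scikit-learn using the reported hyperparameters (a sketch: the synthetic training data below are illustrative stand-ins, not the actual features from the 500 trial runs):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer
from sklearn.svm import SVC

# Three features per run: excitatory rate, inhibitory rate, fast synchrony,
# power-transformed toward a Gaussian-like distribution, then an RBF-kernel
# SVM with the reported hyperparameters gamma = 0.1, C = 1000.
clf = make_pipeline(
    PowerTransformer(),
    SVC(kernel="rbf", gamma=0.1, C=1000.0),
)

# Synthetic stand-in data for illustration only.
rng = np.random.default_rng(2)
X = rng.lognormal(1.0, 0.5, size=(200, 3))
y = (X[:, 0] + X[:, 2] > np.median(X[:, 0] + X[:, 2])).astype(int)
clf.fit(X, y)
```

`PowerTransformer` (Yeo-Johnson by default) plays the role of the power transformation mentioned in the text; in the actual study the scaling method was itself selected by cross-validated grid search.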

### 4.5 Fisher Information Matrix Estimation

### 4.6 Gini Coefficient

### 4.7 Statistical Testing

Variances of the projections of the parameter vectors onto the eigenvectors were compared using Levene's test.
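With SciPy, such a variance comparison looks like the following (a sketch; the projection data here are synthetic stand-ins for projections onto a stiff and a sloppy eigenvector):

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(3)
proj_stiff = rng.normal(0.0, 0.2, size=500)   # low-variance (stiff) direction
proj_sloppy = rng.normal(0.0, 2.0, size=500)  # high-variance (sloppy) direction

# Levene's test: null hypothesis of equal variances across groups.
stat, p_value = levene(proj_stiff, proj_sloppy)
```

A small p-value rejects equal variances, consistent with a stiff/sloppy distinction between the two directions.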

## Acknowledgments

J.N.M. was supported by research funds from the National Institutes of Health (NIH grants R01EY022338 and UF1NS115821) and from the National Science Foundation (NSF CAREER grant 0952686). T.J. was supported by the University of Chicago (Dean's Scholarship and Neuroscience Research Metcalf Fellowship). We thank former and current MacLean lab members Yuqing Zhu, Isabel Garon, Maayan Levy, and Gabriella Wheeler Fox, as well as Emil Sidky, for assistance with initial simulations, accurate estimations of Fisher information matrices, and helpful comments on our manuscript.

## References


## Author notes

Tarek Jabri is now at Harvard University.