Complex systems can be defined by “sloppy” dimensions, meaning that their behavior is unmodified by large changes to specific parameter combinations, and “stiff” dimensions, whose change results in considerable behavioral modification. In the neocortex, sloppiness in synaptic architectures would be crucial for maintaining asynchronous irregular spiking dynamics with low firing rates despite a diversity of inputs, states, and short- and long-term plasticity. Using simulations on neural networks with first-order spiking statistics matched to firing in murine visual cortex while varying connectivity parameters, we determined the stiff and sloppy parameters of synaptic architectures across three classes of input (brief, continuous, and cyclical). Algorithmically generated connectivity parameter values drawn from a large portion of the parameter space reveal that specific combinations of excitatory and inhibitory connectivity are stiff and that all other architectural details are sloppy. Stiff dimensions are consistent across input classes, with self-sustaining synaptic architectures following brief input occupying a smaller subspace than those for the other input classes. Experimentally estimated connectivity probabilities from mouse visual cortex are consistent with the connectivity correlations found and fall in the same region of the parameter space as the architectures identified algorithmically. This suggests that simple statistical descriptions of spiking dynamics are a sufficient and parsimonious description of neocortical activity when examining structure-function relationships at the mesoscopic scale. Additionally, coarse-graining cell types does not prevent the generation of accurate, informative, and interpretable models underlying simple spiking activity. This unbiased investigation provides further evidence of the importance of the interrelationship of excitatory and inhibitory connectivity in establishing and maintaining stable spiking dynamical regimes in the neocortex.

Local synaptic connectivity in neocortex is fundamental to the generation and stipulation of the spiking dynamics (Cossell et al., 2015; Koulakov, Hromádka, & Zador, 2009) that underlie the formation of percepts, decisions, and the generation of appropriate behavioral responses. However, the rules that govern the mesoscopic-scale relationships between local synaptic architectures and spiking activity remain unclear. On one hand, synaptic architectures must be highly dynamic since they underlie, at least in part, the storage of information (Chklovskii, Mel, & Svoboda, 2004) and generate the range of spiking activity corresponding to distinct brain states (Doiron, Litwin-Kumar, Rosenbaum, Ocker, & Josić, 2016). On the other hand, it is clear that aberrant synaptic wiring can give rise to detrimental spiking behaviors and pathophysiology such as epilepsy (Engel, Thompson, Stern, Staba, Bragin, & Mody, 2013).

Modeling and theoretical analysis are essential complements to experimental investigations of structure-function relationships since they enable precise manipulation of simulated connectivity (Churchland & Abbott, 2016; Transtrum, Machta, Brown, Daniels, Myers, & Sethna, 2015). Any model of a biological system involves a large number of free parameters that cannot always be determined from experimental data or that vary greatly from one observation to the other. Nevertheless, numerous models of neural systems have successfully replicated key aspects of neuronal network dynamics, including different activity patterns (Chambers & MacLean, 2016; Ocker et al., 2017; Vegué & Roxin, 2019) and receptive field properties (Hopkins, Pineda-García, Bogdan, & Furber, 2018; Kerr, McGinnity, Coleman, & Clogenson, 2015; Zylberberg, Murphy, & DeWeese, 2011). Given the difficulty of estimating the exact values of connectivity parameters and the variability in these values observed in vivo, a reasonable hypothesis for why these models work is that multiple combinations of parameters can result in similar network activity (Brown & Sethna, 2003). Together, these observations are indicative of sloppy systems, whose behavior depends only on a few stiff combinations of parameters while the majority of parameters are not critical for accurate predictions of the system's behavior (Gutenkunst, Waterfall, Casey, Brown, Myers, & Sethna, 2007).

Sloppiness is a universal feature of models in systems biology (Daniels, Chen, Sethna, Gutenkunst, & Myers, 2008; Gutenkunst et al., 2007; Transtrum et al., 2015). For example, ionic conductances within individual neurons have consistently been found to vary greatly across neurons and between individuals despite regularity in spiking activity (Prinz, Bucher, & Marder, 2004; Ransdel, Nair, & Schulz, 2013; Schulz, Goaillard, & Marder, 2006). At the neuronal circuit level, stability and state changes are mediated by a subset of neurons described by a small number of stiff parameter combinations while the parameters of the remainder of the neurons are sloppy (Panas et al., 2015; Ponce-Alvarez, Mochol, Hermoso-Mendizabal, de la Rocha, & Deco, 2020).

What remains unclear, however, are the stiff and sloppy parameter combinations that define synaptic architectures capable of producing spiking statistics consistent with dynamic regimes observed in neocortex. Neocortical networks have been shown to operate in different regimes and to rapidly transition between them (Brunel, 2000; Tan, Chen, Scholl, Seidemann, & Priebe, 2014; Fontenele et al., 2019). Here we focused on the dominant dynamical regime of the neocortex, which is low rate, irregular, and asynchronous (Brunel, 2000; El Boustani, Pospischil, Rudolph-Lilith, & Destexhe, 2007; Davis et al., 2021). The conservation of both connectivity and wiring cost across different species (Assaf, Bouznach, Zomet, Marom, & Yovel, 2020) is a strong incentive to find the stiff and sloppy dimensions of synaptic architectures. Moreover, delineating these dimensions of neocortical wiring can better constrain cortical models. Understanding the stiff and sloppy dimensions also carries implications beyond the realm of organic neural systems into artificial neural networks (ANNs). It has been shown that connectivity patterns are directly related to the dimensionality of the activity in recurrent spiking neural networks (SNNs), with the latter decreasing as overall connectivity increases (Recanatesi et al., 2019). Hence, delineating the small number of parameters that describe stiff dimensions of network connectivity will allow further studies to capture meaningful functional and computational principles that define those networks.

Here, we survey a large portion of the synaptic connectivity parameter space to create wiring diagrams and identify parameter combinations capable of producing activity matched to murine visual cortex (V1). We identify the stiff and sloppy parameter combinations of synaptic architectures responsible for producing naturalistic activity and compare these algorithmically identified connectivity parameters to recent experimental values, finding them to be largely in agreement.

2.1  Grid Search for Synaptic Architectures Producing Naturalistic Spiking

To evaluate the impact of specific synaptic connectivity parameter combinations on the statistics of spiking, we carried out large-scale simulations of spiking neural network (SNN) models. SNNs were composed of both excitatory (e) and inhibitory (i) adapting exponential leaky integrate-and-fire neurons (AdEx; Brette & Gerstner, 2005) connected with conductance-based synapses (see Figure 1A and section 4.1).
Figure 1:

Network structure and activity. (A) Simulated network composition. Populations of excitatory and inhibitory neurons whose connections are determined by probabilities of connectivity receive input from a shared pool of Poisson units firing in one of three regimes: brief, cyclical, or continuous. (B) Design of grid search of synaptic architectures. (C) Raster plot showing the timing of spikes of 50 example neurons, for ease of presentation, in the network with connectivity probabilities that gave the lowest firing rates in the target range using continuous input. Neurons 0 to 39 (green) are excitatory, and neurons 40 to 49 (orange) are inhibitory. Each bar marks the timing of a spike fired by the corresponding neuron along the y-axis.


In previous work, we have shown that conductance-based synapses are crucial to accurately simulate neuronal integration of synaptic inputs—a critical consideration when evaluating structure-function hypotheses (Bojanek, Zhu, & MacLean, 2020; Chambers & MacLean, 2016). Connections between neurons in each network were determined randomly based on four connectivity probabilities, pee,pei,pie, and pii, where the first and second indices refer to the presynaptic and postsynaptic populations, respectively. Although neocortical synaptic connectivity is not entirely random, we have found that clustering of synaptic connectivity facilitates stable asynchronous irregular activity with low firing rates (Bojanek et al., 2020). We note that even in Erdős-Rényi synaptic networks, the functional connectivity is clustered with a preference for certain motifs, suggesting that regardless of the underlying synaptic connectivity, networks exhibiting naturalistic activity will effectively be clustered (Chambers & MacLean, 2016).
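To make the wiring procedure concrete, the following minimal sketch (our illustration, not the authors' code) draws a random adjacency matrix from the four connection probabilities, using the population sizes given in section 4.1; the absence of self-connections and the example probability values are our assumptions.

```python
import numpy as np

def build_adjacency(p_ee, p_ei, p_ie, p_ii, n_e=4000, n_i=1000, rng=None):
    """Draw a directed adjacency matrix A[pre, post] from the four connection
    probabilities; the first index of each probability is the presynaptic population."""
    rng = rng if rng is not None else np.random.default_rng()
    n = n_e + n_i
    A = np.zeros((n, n), dtype=bool)
    # Excitatory neurons occupy indices [0, n_e); inhibitory neurons [n_e, n).
    A[:n_e, :n_e] = rng.random((n_e, n_e)) < p_ee   # e -> e
    A[:n_e, n_e:] = rng.random((n_e, n_i)) < p_ei   # e -> i
    A[n_e:, :n_e] = rng.random((n_i, n_e)) < p_ie   # i -> e
    A[n_e:, n_e:] = rng.random((n_i, n_i)) < p_ii   # i -> i
    np.fill_diagonal(A, False)                      # no self-connections (our assumption)
    return A

A = build_adjacency(p_ee=0.16, p_ei=0.24, p_ie=0.20, p_ii=0.30)  # illustrative values
```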

Rather than impose clustering on synaptic connectivity and potentially bias certain outcomes, we chose to keep connectivity random. We conducted a grid search over a range of synaptic connectivity parameters defined by both excitatory and inhibitory connectivity and then quantified the outcome in SNN model behavior as connectivity parameter values changed (see Figure 1B). Specifically, we determined which connectivity parameter combinations were capable of producing sustained spiking activity matched to in vivo spiking of murine visual cortex (Billeh et al., 2020; Dechery & MacLean, 2018; Niell & Stryker, 2010; Siegle et al., 2021; Steinmetz, Zatka-Haas, Carandini, & Harris, 2019). The performance of the networks was quantified using a set of objective functions, each of which corresponded to individual first-order statistical descriptors of spiking activity: firing rates, synchrony, and fraction of trials with sustained activity (see Figure 1C and section 4.4).

We began by identifying combinations of parameters that produced firing rates between 8 and 15 Hz (Siegle et al., 2021) and synchrony scores corresponding to a Van Rossum distance greater than 4 (see section 4.4) in response to the three classes of input (constant continuous, cyclical continuous, and brief). Notably, the input classes fall along a continuum of durations, and as the inputs become increasingly brief, increased emphasis is placed on network architectures capable of producing self-sustaining activity following input. Sustained activity was a requirement even in the case of brief input given both in vivo and in vitro studies showing that networks of neurons are capable of generating and maintaining activity even in the absence of external inputs (Mao et al., 2001; Winnubst, Cheyne, Niculescu, & Lohmann, 2015).

The networks were first tested using continuous input from Poisson units firing with rates drawn from a log-normal distribution with a mean of 17 ± 5.3 Hz. Out of the 14,641 unique connectivity parameter combinations tested, 5080 resulted in sustained activity for the length of the simulation more than 50% of the time. Of those, 579 synaptic wiring diagrams showed average excitatory firing rates in the desired range (νe¯ = 10.8 ± 1.9 Hz). Those networks had a mean synchrony score of 1.03 ± 0.06 using the fast synchrony measure (see section 4.4; νi¯ = 27.2 ± 8.1 Hz; psus¯ = 0.998 ± 0.029). Networks receiving cyclical input (1.67 Hz, with maximal firing rates of units drawn from the same log-normal distribution) showed a lower number of successful parameter combinations at 199, including 47 networks with rates also between 8 and 15 Hz (νe¯ = 10.8 ± 2.1 Hz; νi¯ = 25.6 ± 7.6 Hz; s¯ = 1.05 ± 0.11; psus¯ = 0.97 ± 0.10).

Finally, we evaluated architectures capable of producing self-sustaining activity in response to brief (300 ms) excitatory Poisson input. We similarly simulated 73,205 trials for 14,641 synaptic architectures corresponding to a range of different parameter combinations. Self-sustained activity is the hardest to achieve, resulting in the lowest number of successful networks: 241 trials resulted in self-sustained activity, which in turn corresponded to 44 unique parameter combinations that produced self-sustained activity in at least 50% of the simulations.

Of these 44 networks, 25 also scored well on the other objective functions and exhibited an average firing rate of excitatory neurons between 8 and 15 Hz (νe¯ = 9.9 ± 1.6 Hz; νi¯ = 20.6 ± 3.6 Hz; s¯ = 0.98 ± 0.02; psus¯ = 0.888 ± 0.161; see Figure 2B). Grid search resolution was low in order to evaluate large ranges of parameter combinations for synaptic architectures. Thus, the low number of viable synaptic wiring diagrams should not be interpreted as indicative of a scarcity of viable architectures. In fact, subsequent calculations of the Fisher information matrix (FIM; discussed below) show that within a narrow range (±0.01) around the parameter combinations found, all networks exhibit sustained activity with low firing rates and low synchrony levels. Note that in the three cases considered, the effect of the different activity scores on the number of successful networks varied as expected: as the difficulty of sustaining activity increased, the proportion of networks eliminated by this requirement surpassed that of the networks eliminated due to considerations of firing rates and synchrony scores.
Figure 2:

Analysis of networks from grid search with rate-matched sustained activity shows correlations between classes of connection. (A) Correlation values between spiking measures of networks (excitatory firing rate (νe), inhibitory firing rate (νi), and synchrony score (synch)) using brief input. (B) Distribution of excitatory and inhibitory firing rates (νe and νi) and synchrony measures of networks following brief input. (C) Pairwise Pearson correlation coefficients between the four connectivity parameters (pee,pei,pie,pii) of those successful networks following brief input. (D) Distribution of the four parameters of connectivity in successful networks using brief input. (E, I, K) Pairwise Pearson correlation coefficients between differences of connectivity probabilities using brief, cyclical, and continuous inputs, respectively. (F) Distribution of differences between connectivity probabilities from networks using brief input. (G, J, L) Pairwise Pearson correlation coefficients between ratios of connectivity probabilities using brief, cyclical, and continuous inputs, respectively. (H) Distribution of ratios between connectivity probabilities from networks using brief input. All networks considered here sustained their activity in more than 50% of the trials and had excitatory firing rates between 8 and 15 Hz.


This limited number of possible networks spread out over the entire range of parameters tested (see Figure 2D) is, by definition, indicative of both sloppiness and stiffness of spiking neural networks. For the rest of the analyses, we consider only networks that achieved sustained activity for each type of input.

It is noteworthy that despite the fact that the synchrony level was calculated using only excitatory spikes (see section 4.4), the synchrony score is highly correlated with the inhibitory firing rate but not the excitatory one (see Figure 2A). This confirms that this measure of synchrony does not simply scale with the number of spikes. Moreover, it is consistent with the role of local inhibition in setting spike timing in the excitatory pool (Haider, Häusser, & Carandini, 2013).

2.2  Correlated Components of Naturally Spiking Synaptic Architectures

As an initial investigation of potentially stiff parameter combinations, we examined the differences and ratios between pairs of connectivity probabilities for the parameter combinations that resulted in sustained activity. We observed that certain differences in the connection likelihoods were highly correlated with each other (either positively or negatively), while others were uncorrelated (see Figures 2E and 2G). Specifically, pee-pie and pei-pie were strongly and positively correlated at 0.96 in the case of brief input, while pee-pie and pii-pei consistently showed the strongest negative correlations (-0.99 for brief and cyclical inputs, -0.95 for continuous inputs; see Figures 2E, 2I, and 2K). The same pattern of correlations was observed among pairs of ratios (e.g. pee/pie and pee/pii very strongly correlated at 0.95, 0.89, and 0.88 in the case of brief, cyclical, and continuous input, respectively; see Figures 2G, 2J, and 2L), indicating that networks that result in sustained activity are more likely to have connectivity parameters that follow these linear relationships between certain differences and ratios of excitatory and inhibitory connectivity. In other words, certain combinations of differences or combinations of ratios may constitute stiff parameters, while the others may be sloppy.
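As an illustration of this analysis, the sketch below (our code, with hypothetical variable names) computes Pearson correlations between all pairs of differences, or ratios, of the four connectivity probabilities across a set of successful networks.

```python
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

def pairwise_feature_correlations(params, kind="difference"):
    """params: array with one row per successful network, columns (pee, pei, pie, pii).
    Returns Pearson correlations between every pair of differences (or ratios)."""
    labels = ["pee", "pei", "pie", "pii"]
    feats = []
    for (i, a), (j, b) in combinations(enumerate(labels), 2):
        if kind == "difference":
            feats.append((f"{a}-{b}", params[:, i] - params[:, j]))
        else:
            feats.append((f"{a}/{b}", params[:, i] / params[:, j]))
    return {(n1, n2): pearsonr(f1, f2)[0]
            for (n1, f1), (n2, f2) in combinations(feats, 2)}
```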

In fact, especially in the case of networks receiving brief input, when looking at the pairs of differences that are highly correlated, it appeared that the majority of the networks had the same values for these differences despite having very different probabilities of connectivity (see Figures 2F and 2H). Similar strong correlations were found among differences and ratios of wiring parameters in networks sustaining their activity with low firing rates using both brief and cyclical inputs (see Figures 2I and 2J). When not restricting the firing rate to the target range, the correlations remained but with lower magnitudes. Networks that exhibited sustained low-rate activity in response to continuous input (n=579) also exhibited the majority of the same significant correlations (see Figures 2K and 2L).

We note that if we did not control for firing rate, these correlations were less apparent in networks that spiked in response to continuous inputs. To confirm that these correlations that appear for rate-matched networks are not statistical artifacts due to the relatively small number of networks considered (11% of networks with sustained activity in more than 50% of the trials), we matched the sample size (n=579) in different sets of networks randomly selected from those that showed sustained activity in more than 50% of the trials and evaluated the correlations among the pairs of parameters. The lack of strong correlations in the case of randomly selected networks confirms that the results that we observed are related to the rates of those networks being in the target range.

These correlated parameter combinations are additional indicators of the presence of stiff dimensions within synaptic architectural parameter combinations when matching stable spiking activity in the network to in vivo recordings. It is notable that despite the fact that different networks showed sustained spiking with each type of input, the same pairs of connectivity parameter combinations were crucial to the production of sustained and murine-matched activity regardless of input type.

2.3  Experimentally Measured Synaptic Wiring in Mouse Visual Cortex Agrees with Algorithmically Identified Correlations

To evaluate the biological plausibility of the networks that we identified using grid search, we compared the algorithmically generated wiring of the SNNs to the values of measured connectivity probabilities recently reported by the Allen Institute for Brain Science (Billeh et al., 2020). To match the parameters varied in the grid searches, it was necessary to coarse-grain the reported connection likelihoods from all classes of inhibitory interneurons and excitatory neurons regardless of laminar location by summarizing all of the connectivity measures as four probability values for excitatory and inhibitory neurons (see Figure 3A and section 4.3). We also coarse-grained the reported connectivities according to laminar position, allowing us to separately compare individual layers 2/3, 4, and 5 with the algorithmically identified architectures. For these simulations, the input connectivity probability was maintained at 10% for the whole-cortex and L4 networks; for the L2/3 and L5 networks, it was instead based on the reported connectivity probabilities from L4 to L2/3 and from L2/3 to L5, respectively (Billeh et al., 2020). Despite the fact that after coarse-graining some of the resulting connectivity probabilities were not in the range tested in the grid search, and despite significant differences in connectivity probabilities between the different laminae, all of the networks exhibited sustained activity in response to continuous input, consistent with the initial study (see Figure 3B; Billeh et al., 2020).
Figure 3:

Testing correlated ratios using experimental parameters. (A) Coarse-grained connectivity probabilities calculated from Billeh et al. (2020) for the entire visual cortex as well as individual layers. (B) Raster plot showing the timing of spikes of 50 example neurons in the network with connectivity probabilities experimentally derived from L2/3 using continuous input. Neurons 0 to 39 (green) are excitatory and neurons 40 to 49 (orange) are inhibitory. (C) Proportions of sustained runs using networks with probabilities of connectivity from panel A but varying the values of pii and pie in each case so that the ratios pee/pei and pii/pie are kept the same as in panel A. Networks that maintained these highly correlated ratios showed sustained activity regardless of the value of the two parameters varied. (D) Same as panel C but instead varying pee and pii so as to keep pee/pii and pei/pie the same as in panel A. This did not result in sustained activity in most trials.


Grid search in the continuous input condition found that the ratios pee/pei and pii/pie have a strong negative correlation despite not sharing any parameter, while pee/pii and pei/pie, which also form a pair with distinct parameters, are not correlated (see Figure 2L). The coarse-grained, experimentally derived connectivity values were consistent with the correlations that we found. We tested the importance of each of these pairs of ratios in sustaining network activity using the connectivity parameters previously reported (Billeh et al., 2020) and found that if the ratios pee/pei and pii/pie were maintained while changing the actual parameter values, the majority of the resulting networks continued to exhibit sustained activity regardless of how large the values of the probabilities became (up to 0.99; see Figure 3C). However, it should not come as a surprise that not all the resulting networks exhibited sustained activity since other potentially critical parameter combinations were being varied simultaneously. In contrast, using those same networks, maintaining the ratios pei/pie and pee/pii (which are very weakly correlated) did not result in networks that sustained activity even with continuous input (see Figure 3D).
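A sketch of this ratio-preservation test, under our assumptions about how the scaling was done: starting from a reference parameter set, pie and pii are scaled together so that pee/pei and pii/pie are unchanged while the absolute probabilities vary (the scale values are illustrative).

```python
import numpy as np

def ratio_preserving_sets(p_ee, p_ei, p_ie, p_ii, scales=np.linspace(0.5, 4.0, 15)):
    """Vary p_ie and p_ii jointly; p_ee/p_ei is untouched and p_ii/p_ie is preserved."""
    out = []
    for s in scales:
        new_ie, new_ii = s * p_ie, s * p_ii
        if max(new_ie, new_ii) <= 0.99:   # keep probabilities valid (text reports values up to 0.99)
            out.append((p_ee, p_ei, new_ie, new_ii))
    return out
```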

2.4  All Four Classes of Connectivity Contribute to Stiff Dimensions

To determine the contribution of individual parameters and their combinations to the stiff dimensions of synaptic architectures, as well as the impact of the input on the model's stiffness, we used the Fisher information matrix (FIM; Gutenkunst et al., 2007). By its relation to the Hessian matrix, the FIM evaluated at a specific point in the parameter space examines how the likelihood of matching the spiking activity of murine visual cortex changes along the different dimensions around that point. Large changes in the likelihood along certain dimensions indicate that those dimensions are stiff whereas the others are sloppy. For each class of input, the FIM was computed at the parameter combinations that resulted in the lowest firing rates among the rate-matched networks (8–15 Hz), which we defined as the “optimal” combinations (see Figure 4A). For parsimony, we selected the lowest rate values since low rates were consistently more difficult to achieve yet revealed generalizable stiff combinations in the large-scale grid search described above. In the grid search, we ran five trials for each parameter combination and found negligible differences in the average firing rates of the networks at the lower end of the target range (8–15 Hz). We therefore estimated the FIM at the five parameter combinations with the lowest average firing rates for each class of inputs.
Figure 4:

Fisher information matrix analysis. (A) FIM computed at the optimal parameter combination for brief input. (B) Eigenvalues of the FIMs computed at the five parameter combinations with the lowest firing rates in the target range using brief input. The values corresponding to the optimal parameter combinations are shown in blue. (C) Eigenvectors of the FIM in panel A. (D) Sensitivity of the first eigenvector of the five FIMs used in panel B to each of the parameters based on the absolute value of each vector element. The values corresponding to the first eigenvector of the FIM computed at the optimal parameter combination are shown in blue. (E) Projections of parameter vectors that resulted in sustained activity following brief input onto the eigenvectors of the optimal FIM (green: projection onto eigenvectors 1 and 2; yellow: projection onto eigenvectors 1 and 3; orange: projection onto eigenvectors 1 and 4). (F) Variances in the projections in panel E (p1-2 = 1.1 × 10^-14, p1-3 = 6.2 × 10^-10, p1-4 = 4.8 × 10^-14). (G and H, I and J) As for panels B and C but for cyclical and continuous input, respectively.


To identify the parameter combinations that had the greatest impact on spiking activity, we decomposed the FIM into eigenvectors and identified the corresponding eigenvalues. The eigenvalues of all FIMs extended over several orders of magnitude, consistent with many synaptic architecture parameter combinations being sloppy (see Figures 4B, 4G, and 4I). The eigenvectors that corresponded to the largest eigenvalues define the stiffest dimensions and indicated that those specific parameter combinations have the greatest impact on SNN spiking activity. Indeed, when projecting all the parameter vectors with sustained activity onto the different eigenvectors, we find that the first eigenvector—corresponding to the largest eigenvalue—had the lowest variance in projections (see Figures 4E and 4F), confirming that the first eigenvector defines the stiffest dimension of parameter space.
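A minimal sketch of this eigendecomposition and projection analysis (our code; fim and params_sustained are placeholders for an estimated FIM and the array of successful parameter vectors):

```python
import numpy as np

def stiff_dimensions(fim, params_sustained):
    """Eigendecompose a (symmetric) FIM, order dimensions from stiffest to sloppiest,
    and return the variance of the projections of parameter vectors onto each eigenvector."""
    evals, evecs = np.linalg.eigh(np.asarray(fim))
    order = np.argsort(evals)[::-1]                   # largest eigenvalue = stiffest dimension
    evals, evecs = evals[order], evecs[:, order]
    projections = np.asarray(params_sustained) @ evecs
    return evals, evecs, projections.var(axis=0)      # lowest variance expected along eigenvector 1
```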

We then evaluated the sparsity of the FIM using the Gini coefficient to establish the complexity of the stiff dimensions (Panas et al., 2015). In the majority of networks, the Gini coefficient was low (0.21 ± 0.07), indicating that the FIMs were not sparse (Panas et al., 2015). The lack of sparsity indicates that the stiff dimensions depend on complex combinations of several parameters and not only on a few critical parameters (Gutenkunst et al., 2007; Panas et al., 2015), consistent with the results of the grid search. Indeed, the contribution of each parameter to the first eigenvectors reveals that those dimensions are equally sensitive to changes in all of the parameters (see Figures 4D, 4H, and 4J). These results were consistent across the three types of inputs.

2.5  Input Brevity Increasingly Restricts the Viable Wiring Parameter Space

All the networks that had sustained activity in response to brief input also showed sustained activity when receiving continuous input. In addition, similar patterns of correlations were observed among pairs of parameters for all three types of inputs. For this reason, we hypothesized that dimensions deemed stiff for one input class may also be informative of stiff dimensions for other inputs. Indeed, when considering the projections of parameter combinations that resulted in sustained activity using continuous input onto the stiffest dimensions, we found that they cluster within one half of the parameter space, with little overlap with the networks that did not sustain their activity. Importantly, SNNs that demonstrated self-sustaining activity following brief input correspond to a restricted region of the parameter space described by those same eigenvectors (see Figure 5A). Interestingly, networks that showed sustained activity when receiving cyclical input define an intermediate region of this parameter space: smaller than that defined by networks receiving continuous input, but encompassing the restricted region corresponding to networks receiving brief input (see Figure 5A).
Figure 5:

Restrictions of the parameter space along the stiffest dimensions. (A, B) Projections of all parameter combinations tested onto the stiffest (first and second eigenvectors) and sloppiest (third and fourth eigenvectors) dimensions, respectively, color-coded depending on the resulting spiking activity using each type of input (never sustained in orange, sustained using continuous input in blue, sustained using cyclical input in green, sustained following brief input in magenta). The projections of the four parameter combinations derived from Billeh et al. (2020) are in black.


Parameter combinations derived from Billeh et al. (2020) for L2/3 and for the laminar-agnostic primary visual cortex fall inside the smallest region. However, connectivity parameters from L4 and L5 fall outside this restricted region and also exhibit sustained activity in response to continuous input (see Figure 5A). Notably for L5 parameters, the projection onto the first eigenvector fell in the same very narrow range, but the projection onto the second eigenvector differed.

The dimensions used for this analysis are the stiffest dimensions of the system; consistently, the same type of analysis on the less informative (sloppier) dimensions revealed no structure in the data (see Figure 5B).

The nonsparsity of the FIMs indicated the dependence of the stiff dimensions on multiple parameters. We found that the projections of the parameter combinations that exhibited sustained activity following brief input onto the stiffest dimension all fell around 0 (see Figure 5A). The eigenvector considered here is e1 = [-0.58, 0.43, 0.54, -0.43], so a projection near 0 means -0.58pee + 0.43pei + 0.54pie - 0.43pii ≈ 0; dividing through by 0.43 gives, approximately, 1.3pee - pei ≈ -(pii - 1.3pie). This explains the strong, negative correlation found between pee-pei and pii-pie (see Figure 2E). Indeed, the differences and ratios between connectivity probabilities could have been used as the FIM parameters in an attempt to identify stiff dimensions that depend on only a subset of the parameters considered. However, the results of the grid search indicated that this is not possible because of the strong correlations found between differences and ratios (see Figure 2). Hence, inherent relations between the different probabilities of connectivity make it impossible to obtain stiff dimensions that depend on unique model parameters formed from straightforward combinations of the probabilities of connectivity.

Well-founded new methods of describing the connections between neurons at the network level instead of probabilities of connectivity between pairs of neurons could potentially address this problem. For instance, instead of looking at the probability of pairwise connections, we can try considering the probability of different triplet motifs that include both excitatory and inhibitory neurons, which might result in less complex stiff dimensions. In the case of neuronal parameters, stiff dimensions that depend on only a subset of the parameters have been obtained (Panas et al., 2015; Ponce-Alvarez et al., 2020).

Using large-scale algorithmic grid searches, we found that the parameter space of synaptic architectures is highly anisotropic: large ranges of parameter values produce spiking that matches that of murine visual cortex, while a specific subset of parameter value combinations dramatically changes network activity. These are the sloppy and stiff dimensions, respectively. Stiff parameter combinations generalize across three broad classes of input into the network. Notably, the region of viable parameter combinations constricted as the requirement for architectures capable of self-sustaining activity became more stringent. We limited our examination here to one observed spiking regime: asynchronous, irregular, and low firing rate (Brunel, 2000). While this is our dominant observation and that of others in vivo (El Boustani et al., 2007; Davis et al., 2021), the neocortex is capable of rapid state transitions defined by different spiking statistics (Brunel, 2000; Tan et al., 2014; Fontenele et al., 2019). The relation between stiff dimensions and such state transitions would be of great interest for future studies.

A recurring theme in neuroscience is that stiff parameter combinations encompass opposing forces. For example, most combinations of ion channel conductances are sloppy with the exception of a maintained ratio between a hyperpolarizing conductance and a depolarizing conductance (MacLean, Zhang, Johnson, & Harris-Warrick, 2003; Prinz et al., 2004; Ransdell et al., 2013; Schulz et al., 2006). Here we show that the connectivity statistics between and within excitatory and inhibitory neurons comprise the stiff parameter combinations of synaptic architectures. Indeed, maintaining a balance between excitation and inhibition is critical for normal network activity and has been resolved in synaptic conductances at the single cell level in vivo (Haider, Duque, Hasenstaub, & McCormick, 2006). Moreover, many studies have also argued that a balance of excitation and inhibition underlies irregular firing in the neocortex (van Vreeswijk & Sompolinsky, 1996, 1998).

We show that this balance is achieved in synaptic architectures through inter- and intrapopulation neuronal synaptic connections and involves excitation, inhibition, and disinhibition. It is for this reason that we found that all four connectivity parameters that we implemented contribute to the stiff dimensions and why pairs of connection likelihoods that do not share any parameters were highly correlated. While a balance between excitation and inhibition is well established in neocortex, the approach that we used here is unbiased and without any prior assumptions as to the importance of parameter combinations. Given our broad search, it was possible that stiff parameter combinations do not implicate a balance of excitation and inhibition. The fact that we find these specific stiff parameter combinations reinforces the importance of the balance in maintaining an asynchronous irregular and low firing rate regime in neocortex.

Surprisingly, despite simplistic coarse-graining, experimentally estimated connectivity probabilities from the entirety of mouse V1, as well as connectivity probabilities from L2/3 of V1 (Billeh et al., 2020), fall inside the small space of topologies that we identified via algorithmic search as potentially capable of self-sustained activity, which may suggest a level of autonomy of L2/3 activity not achievable in other laminae. By contrast, there were notable differences between the algorithmically identified and experimentally measured synaptic architectures in the other laminae.

Experimentally estimated L5 connectivity parameters themselves did not fall within the restricted space of viable synaptic architectures, although the projection onto the first eigenvector does, indicating congruence with the stiffest dimension. The difference between L5 and algorithmically identified topologies is in the projection onto the second eigenvector, a less stiff dimension, which may reflect differences in the proportions of inhibitory neurons between the laminae. In fact, while the probabilities of connections from subtypes of inhibitory neurons to specific subtypes of neurons do not vary substantially between the different cortical layers (Billeh et al., 2020), the difference we observed in the generalized probabilities is likely due to the predominance of parvalbumin- and somatostatin-positive inhibitory neurons in L5 (48% and 43% of inhibitory neurons, respectively) when compared to the percentage of Htr3a-positive neurons (9% versus 50% in L2/3).

As a result, the coarse-grained i-i and i-e probabilities in L5 are much higher than the excitatory probabilities, in contrast to the other laminae. This could theoretically be compensated for by the much stronger e-e synapses recorded in L5 compared to L2/3 (Billeh et al., 2020; Cossell et al., 2015; Hofer et al., 2011; Jiang et al., 2015; Lefort, Tomm, Floyd Sarria, & Petersen, 2009; Song, Sjöström, Reigl, Nelson, & Chklovskii, 2005); however, we did not take this particular parameter into consideration in our simulation or analysis. Despite the link between synaptic strength and sustained activity (Vogels & Abbott, 2005), which is directly related to the role of total excitatory drive onto each neuron (Ahmadian & Miller, 2021), this is not a parameter that we varied because we opted to focus on the specifics of the synaptic wiring to the extent possible. To do so, we used random connectivity weights and conductance-based synapses with time-decaying conductances that result in the total input to each neuron dynamically changing during a single trial, since each input is contextualized by the conductance state of the modeled postsynaptic neuron. Notably, it was shown that conductance-based synapses, as compared to current-based synapses, result in more informative spiking dynamics (Cavallari, Panzeri, & Mazzoni, 2014). Connectivity parameters of L4 also fall outside the region of the parameter space identified by the stiffest dimensions. This difference lacks an easy explanation but may simply reflect the different roles that layer 4 is hypothesized to play as compared to other laminae and the fact that thalamocortical connectivity into L4 is particularly important and heterogeneous (Landau, Egger, Dercksen, Oberlaender, & Sompolinsky, 2016).

Machine learning techniques demonstrate that the structure of neural networks can be modified to achieve a specific task and that, following training, these models are capable of accurately modeling neocortical neuronal activity at different stages of the visual processing hierarchy (Yamins, Hong, Cadieu, Solomon, Seibert, & DiCarlo, 2014). Here we studied synaptic network architectures identified using first-order spiking statistics rather than training on a task. It will be of great interest to evaluate whether training similar networks (Bellec et al., 2020) will result in convergence to the same set of synaptic architectures.

Our approach of identifying synaptic architectures based solely on spiking statistics, and the correspondence with experimental measures, demonstrates the utility of spiking statistics as a parsimonious way of studying structure-function relations. Indeed, previous work using maximum entropy models fit to spiking data was similarly successful at identifying cellular-level stiff and sloppy dimensions (Panas et al., 2015; Ponce-Alvarez et al., 2020). These results are consistent with a previous experimental study that identified neurons of varying levels of correlation with the network activity (Okun et al., 2015; Ponce-Alvarez et al., 2020). Our model relied on much simpler and more interpretable descriptors of neuronal firing and elucidated the role of tractable connectivity parameters instead of neuronal ones. In sum, these studies, together with the present work, form a compelling argument for studying mesoscale connectivity as well as network-wide correlations by fitting models to spiking statistics. In addition, our findings show that coarse-graining across the different cell types is sufficient for studies of neuronal spiking patterns alone; for more complex cortical functions, such elaborations and more complicated models may become better justified.

4.1  Model Architecture

Networks consisted of 5000 neurons: 4000 excitatory (e) and 1000 inhibitory (i). Neurons were modeled as adaptive exponential leaky integrate-and-fire (AdEx; Brette & Gerstner, 2005) units. The membrane potential of each neuron was governed by the following equation:
C dV/dt = -gL(V - EL) + gL ΔT exp((V - VT)/ΔT) - ge(V - Ee) - gi(V - Ei) - gp(V - Ee) - w    (4.1)
with the decaying adaptation current,
τw dw/dt = a(V - EL) - w    (4.2)
Neurons were connected according to four probabilities of connectivity that were varied: pee,pie,pei, and pii, where the first subscript index represents the presynaptic neuron type and the second represents the postsynaptic neuron type.
Synaptic conductances decayed exponentially as per the following equations:
τe dge/dt = -ge    (4.3)
τi dgi/dt = -gi    (4.4)
τp dgp/dt = -gp    (4.5)
where ge, gi, and gp are the excitatory, inhibitory, and external-input synaptic conductances, respectively.

When a neuron fires, the membrane potential is reset, the adaptation current is increased by a value of b, and the corresponding synaptic conductance at the synapses receiving the signal is increased by the weight of the connection. The weights of the connections were randomly drawn from a log-normal distribution, with the parameters of the corresponding normal being μ=-0.64 and σ=0.51. Connections from inhibitory neurons were enhanced by an order of magnitude given their stronger effect on neurons compared to their excitatory counterpart in biological networks due to their tendency to be localized near the soma of postsynaptic neurons (Huang, Ruff, Pyle, Rosenbaum, Cohen, & Doiron, 2019). The parameters fixed across all simulations are defined and summarized in Table 1.

Table 1:

Fixed Neuronal Parameters.

Parameter | Notation | Value
Capacitance | C | 281 pF
Leak conductance | gL | 30 nS
Leak reversal potential | EL | -70.6 mV
Slope factor | ΔT | 2 mV
Firing threshold | VT | -40.4 mV
Excitatory reversal potential | Ee | 0 mV
Inhibitory reversal potential | Ei | -75 mV
Excitatory synaptic time constant | τe | 10 ms
Inhibitory synaptic time constant | τi | 3 ms
Input synaptic time constant | τp | 10 ms
Adaptation time constant | τw | 144 ms
Subthreshold adaptation | a | 4 nS
Spike-triggered adaptation | b | 80.5 pA

Initial voltages of all neurons were randomly drawn from a normal distribution with mean μ=-65mV and standard deviation σ=5mV. All simulations were implemented in Python 3 using the Brian Simulator (version 2.2.1; Stimberg, Brette, & Goodman, 2019).
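A minimal Brian2 sketch of the network described in this section, assuming the standard Brette and Gerstner (2005) AdEx formulation, a reset to EL, a spike cutoff at VT + 5ΔT, and nS units for the log-normal weights; these details, and the example connection probabilities, are our assumptions rather than values stated in the text.

```python
from brian2 import *
import numpy as np

# Fixed parameters from Table 1
C = 281*pF; gL = 30*nS; EL = -70.6*mV; VT = -40.4*mV; DeltaT = 2*mV
Ee = 0*mV; Ei = -75*mV
tau_e = 10*ms; tau_i = 3*ms; tau_p = 10*ms; tau_w = 144*ms; a = 4*nS; b = 80.5*pA

eqs = '''
dv/dt  = (-gL*(v - EL) + gL*DeltaT*exp((v - VT)/DeltaT)
          - ge*(v - Ee) - gi*(v - Ei) - gp*(v - Ee) - w) / C : volt
dw/dt  = (a*(v - EL) - w) / tau_w : amp
dge/dt = -ge / tau_e : siemens
dgi/dt = -gi / tau_i : siemens
dgp/dt = -gp / tau_p : siemens
'''
G = NeuronGroup(5000, eqs, threshold='v > VT + 5*DeltaT',
                reset='v = EL; w += b', method='euler')
G.v = (-65 + 5*np.random.randn(5000)) * mV        # initial voltages (section 4.1)
exc, inh = G[:4000], G[4000:]

def connect(pre, post, p, target, scale=1.0):
    """Random connections with log-normal weights onto the conductance `target`."""
    S = Synapses(pre, post, 'wsyn : siemens', on_pre=target + '_post += wsyn')
    S.connect(p=p)
    S.wsyn = scale * np.random.lognormal(-0.64, 0.51, len(S)) * nS
    return S

S_ee = connect(exc, exc, 0.16, 'ge')              # illustrative probabilities
S_ei = connect(exc, inh, 0.24, 'ge')
S_ie = connect(inh, exc, 0.20, 'gi', scale=10.0)  # inhibitory weights enhanced x10
S_ii = connect(inh, inh, 0.30, 'gi', scale=10.0)
# Poisson input onto gp (see section 4.2) is omitted here for brevity.
```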

4.2  Network Input

Network activity was initiated using a population of 3000 Poisson units connected to both the excitatory and inhibitory neurons in the main network with a connection probability of 0.1 unless otherwise specified; the weights of these connections were randomly drawn from a log-normal distribution with the parameters of the corresponding normal being μ = -0.64 and σ = 0.51. Three types of input were considered (a sketch generating all three follows the list):

  • Brief input: The firing rates of the Poisson units were drawn from a log-normal distribution with the parameters of the corresponding normal being μ=2.8 and σ=0.3, resulting in a mode at around 15 Hz. Their activity was halted after 300 ms.

  • Continuous input: Similar to the brief input but the activity of the input units was maintained for the duration of the simulation.

  • Cyclical input: The maximum firing rates (νmax) of the input units were drawn from the same log-normal distribution as that of the brief input, but the instantaneous firing rates varied between 0 and νmax according to a sinusoidal function with a period of 600 ms.
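The sketch below (our code, plain NumPy rather than the simulator) illustrates how the instantaneous rate profiles of the three input classes can be generated; the bin size, simulation length, and sinusoid phase are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng()
n_input, dt, t_max = 3000, 1e-3, 3.0                        # 1 ms bins, 3 s of input (illustrative)
t = np.arange(0.0, t_max, dt)
nu_max = rng.lognormal(mean=2.8, sigma=0.3, size=n_input)   # Hz; mode around 15 Hz

def rate_profile(kind):
    """Instantaneous firing rate, shape (n_input, n_bins), for one input class."""
    r = np.tile(nu_max[:, None], (1, t.size))               # continuous: constant rates
    if kind == "brief":
        r[:, t >= 0.3] = 0.0                                 # halt after 300 ms
    elif kind == "cyclical":
        r = nu_max[:, None] * 0.5 * (1 + np.sin(2 * np.pi * t / 0.6))  # 0..nu_max, 600 ms period
    return r

# Bernoulli approximation of Poisson spiking in small bins:
spikes = rng.random((n_input, t.size)) < rate_profile("brief") * dt
```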

4.3  Connectivity Parameters

To minimize sampling bias, the four connectivity parameters were algorithmically determined using a low-resolution, four-dimensional grid search in which pee,pei, and pie varied between 0.10 and 0.30 and pii varied between 0.20 and 0.40 with an increment of 0.02. This resulted in 14,641 parameter combinations, which were each simulated five times with different initial membrane potentials, yielding 73,205 simulations.
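A short sketch enumerating this grid (our code), which also confirms the combination count:

```python
import numpy as np
from itertools import product

p_exc = np.round(np.arange(0.10, 0.30 + 1e-9, 0.02), 2)      # 11 values each for pee, pei, pie
p_ii_vals = np.round(np.arange(0.20, 0.40 + 1e-9, 0.02), 2)  # 11 values for pii
grid = list(product(p_exc, p_exc, p_exc, p_ii_vals))          # ordered as (pee, pei, pie, pii)
assert len(grid) == 11**4 == 14_641                           # x 5 trials = 73,205 simulations
```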

We also tested values based on the experimental data in V1 published by the Allen Institute for Brain Science at an intersomatic distance of 75 μm (Billeh et al., 2020). The connectivity statistics reported in the paper are for pyramidal excitatory neurons and the three major classes of inhibitory neurons determined by the markers parvalbumin, somatostatin, and the ionotropic serotonin receptor 5HT3a in all six layers of V1. We calculated the four summary connectivity parameters based on the total number of each neuron type also reported in the paper using the following equation:
pkh = Σm Σn fkm fhn pkmhn    (4.6)
where
  • k,h{e,i}.

  • pkh and pkmhn are, respectively, the connection probability from neurons of type k to neurons of type h and the connection probability from neurons of subtype km to neurons of subtype hn.

  • fkm and fhn are the fraction of neurons of subtype km out of all the k neurons and the fraction of neurons of subtype hn out of the h neurons, respectively.

For simulations using the probabilities calculated from the statistics of either L2/3 or L5, the connectivity probability of the input units to the network was computed based on the connections of L4 and L2/3 excitatory neurons respectively to each layer. In the case of L2/3, the resulting value was decreased to account for the numerous inhibitory connections from L4 to excitatory neurons in L2/3.
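To make equation 4.6 concrete, the sketch below computes one coarse-grained probability from subtype-level probabilities and subtype fractions; the numbers are purely illustrative and are not the values reported by Billeh et al. (2020).

```python
import numpy as np

def coarse_grain(p_sub, f_pre, f_post):
    """Equation 4.6: p_kh = sum_m sum_n f_km * f_hn * p_{km,hn}.
    p_sub[m, n] is the subtype-to-subtype probability; f_pre, f_post are subtype fractions."""
    p_sub, f_pre, f_post = map(np.asarray, (p_sub, f_pre, f_post))
    return float(f_pre @ p_sub @ f_post)

# Illustrative i -> e example: three inhibitory subtypes onto one excitatory class.
p_ie = coarse_grain(p_sub=[[0.40], [0.30], [0.10]], f_pre=[0.4, 0.3, 0.3], f_post=[1.0])
# p_ie == 0.4*0.40 + 0.3*0.30 + 0.3*0.10 == 0.28
```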

4.4  Matching Spiking Statistics to Murine Visual Cortex

The fit of network activity to naturalistic spiking statistics was evaluated based on the following criteria:

Firing rate of excitatory (νe) and inhibitory (νi) neurons.

Synchrony (s): On runs that were tested individually, synchrony was determined using the Van Rossum distance between excitatory spike trains with a 10 ms time constant, computed with the “elephant” package (Denker, Yegenoglu, & Grün, 2018; van Rossum, 2001). The Van Rossum distance is a measure of spike train dissimilarity after convolving the spikes with a decaying exponential kernel. For the Fisher information matrix calculations (see section 4.5), we used the Van Rossum distance on the excitatory spikes during the last 150 ms only, since these are representative of the activity of the network after cessation of input and restricting the window allows faster computation. However, because computing the Van Rossum distance is computationally expensive, we devised a fast method of estimating the synchrony of the excitatory neurons to use during the grid search: activity was divided into 10 ms time bins, and for each time bin, we calculated the variance of the number of spikes per neuron divided by the mean number of spikes per neuron and then averaged the result over all time bins. To evaluate the validity of this score, we calculated both this new synchrony measure and the Van Rossum distance for 45 networks that showed sustained activity. The two scores had a 94% correlation.
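A sketch of the fast synchrony measure described above (our code); how empty bins are handled is our assumption.

```python
import numpy as np

def fast_synchrony(spike_counts):
    """spike_counts: (n_excitatory_neurons x n_bins) matrix of spike counts in 10 ms bins.
    For each bin, take variance/mean of counts across neurons; average over bins."""
    mean = spike_counts.mean(axis=0)
    var = spike_counts.var(axis=0)
    valid = mean > 0                     # skip empty bins to avoid division by zero (our choice)
    return float((var[valid] / mean[valid]).mean())
```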

Proportion of runs that resulted in sustained activity (psus): Activity is considered sustained if excitatory neurons are firing until the end of the simulation with no more than 150 ms of inactivity across the network. In general, networks that showed sustained activity for 1 second sustained their activity for the duration of the simulation regardless of simulation time. However, for runs initiated using brief input, some networks had activity that truncated after 1 second of sustained activity.

To ensure that all the networks that we evaluated would sustain activity for any simulation duration, we developed an additional check. We selected 50 parameter combinations that resulted in activity sustained for 1 second in at least one trial. We then ran 10 additional trials, each with a new adjacency matrix, initial voltages, and input units, on all 50 parameter combinations. Out of these 500 runs, 435 showed sustained activity for at least 1 second. Of these 435, 85 runs had activity that truncated after 1 second. We then used these data to train a support vector machine (SVM) classifier with a radial basis function (RBF) kernel to predict which runs that sustained activity for 1 second would truncate later. This prediction was based on the excitatory and inhibitory firing rates and the fast synchrony measure calculated on the spikes during the first second of activity only. The best hyperparameters were determined using a cross-validation grid search carried out across different methods of scaling the three features considered. The highest accuracy (96%) was obtained with a power transformation of the scores, for a more Gaussian-like distribution, and using γ = 0.1 and C = 1000. This classifier was then used during the grid search on the connectivity parameters. The SVM was implemented using scikit-learn version 0.22.1 (Pedregosa et al., 2011). In this way, we ensured that all models considered would spike for the full duration of the simulation, regardless of its length.
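A sketch of the classifier described above using the scikit-learn API (our code; X and y are placeholders for the three per-run features and the truncation labels):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: (n_runs, 3) = [excitatory rate, inhibitory rate, fast synchrony] over the first second
# y: 1 if the run truncated after 1 second, 0 if it sustained for the full simulation
clf = make_pipeline(PowerTransformer(), SVC(kernel="rbf", gamma=0.1, C=1000))
# scores = cross_val_score(clf, X, y, cv=5)   # hyperparameters were chosen by cross-validation
# clf.fit(X, y)
```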

4.5  Fisher Information Matrix Estimation

Given the lack of data justifying the choice of a particular likelihood distribution, the likelihood model was chosen to be the simplest one that accounts for the existence of an optimal combination of parameters, a multivariate normal distribution defined by
P(y(x)) = N exp(-(1/2)(y(x) - μ)^T Ky^-1 (y(x) - μ))    (4.7)
where N is a normalization factor; x = (pee, pei, pie, pii) is the vector of model parameters; y(x) = (νe¯, νi¯, s¯, psus) is the vector of average simulation scores from five simulations using the same parameter vector x; Ky is the covariance matrix of the score vector y (estimated using the grid search results); and μ is the mean score vector of the distribution. μ was set equal to the average score vector y(x) obtained at the optimal parameter combination xopt. As such, xopt results in the maximum likelihood.
The Fisher information matrix (FIM) is equal to the negative of the expected value of the Hessian matrix of the log likelihood. Consequently, the observed FIM can be estimated as the negative of the Hessian of the log likelihood (Spall, 2005). However, computing the Hessian matrix numerically may result in errors, especially if the estimated covariance matrix has bad conditioning. Instead, we derived the expression of the components of the Hessian matrix:
∂²ln P/∂xi∂xj = -[(∂y/∂xi)^T Ky^-1 (∂y/∂xj) + (y(x) - μ)^T Ky^-1 ∂²y/∂xi∂xj]    (4.8)
The FIM was computed at the optimal parameter combination, which nullifies all the second partial-derivative terms since (y(x)-μ)=0. We are left with the gradients of the scores that were estimated by calculating all four scores for networks that have three of the connectivity parameters fixed and one of them varied between xoptimal-0.01 and xoptimal+0.01, then fitting a line to these results due to the strong linear relationships around the optimal values (see Figure 6).
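In other words, at the optimum only the first term of equation 4.8 survives, so the FIM reduces to F = J^T Ky^-1 J, where J is the Jacobian of the scores with respect to the connectivity parameters estimated from the line fits. A minimal sketch (our code):

```python
import numpy as np

def observed_fim(jacobian, K_y):
    """FIM at the optimal parameter combination for the Gaussian likelihood of equation 4.7.
    jacobian[i, j] = d(score_i)/d(parameter_j), estimated from the fitted slopes."""
    J = np.asarray(jacobian)                            # shape (n_scores, n_params)
    return J.T @ np.linalg.solve(np.asarray(K_y), J)    # J^T Ky^-1 J
```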
Figure 6:

Variation of each spiking activity score with the change in each probability of connectivity around the optimal points. From top to bottom, the change in the excitatory firing rate νe, inhibitory firing rate νi, and Van Rossum distance based on the spikes in the last 150 ms (see section 4.4) given the change of each of pee, pei, pie, and pii between xopt - 0.01 and xopt + 0.01 while keeping the other three parameters at the optimal point in each case.


4.6  Gini Coefficient

Sparsity of matrices was evaluated with the Gini coefficient (Hurley & Rickard, 2009) using the code provided by Panas et al. (2015):
G = 1 - 2 Σ(k=1..N) [f(k)/||f||1] [(N - k + 1/2)/N]    (4.9)
where f(1) ≤ f(2) ≤ … ≤ f(N) are the values of the N elements of the matrix sorted in ascending order and ||f||1 = Σk f(k).
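A sketch implementing equation 4.9 (our code); taking absolute values of the matrix elements before sorting is our assumption.

```python
import numpy as np

def gini(matrix):
    """Gini coefficient of sparsity (Hurley & Rickard, 2009; equation 4.9)."""
    f = np.sort(np.abs(np.asarray(matrix)).ravel())   # ascending order
    N = f.size
    k = np.arange(1, N + 1)
    return float(1 - 2 * np.sum((f / f.sum()) * (N - k + 0.5) / N))
```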

4.7  Statistical Testing

Variances of the projections of the parameter vectors onto the eigenvectors of the Fisher information matrix were compared using the Levene test.
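As an illustration, the comparison can be run with SciPy's implementation of the Levene test; the projection arrays below are hypothetical placeholders, not data from this study.

```python
# Hedged example: compare the spread of parameter-vector projections along two
# eigenvector directions using the Levene test for equality of variances.
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(1)
proj_stiff = rng.normal(0.0, 0.05, size=200)   # narrow spread along a stiff direction
proj_sloppy = rng.normal(0.0, 0.50, size=200)  # wide spread along a sloppy direction

stat, p_value = levene(proj_stiff, proj_sloppy)
print(f"Levene W = {stat:.2f}, p = {p_value:.2e}")
```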

Acknowledgments

J.N.M. was supported by research funds from the National Institutes of Health (NIH grants R01EY022338 and UF1NS115821) and the National Science Foundation (NSF CAREER grant 0952686). T.J. was supported by the University of Chicago (Dean's Scholarship and Neuroscience Research Metcalf Fellowship). We thank former and current MacLean lab members Yuqing Zhu, Isabel Garon, Maayan Levy, and Gabriella Wheeler Fox, as well as Emil Sidky, for assistance with initial simulations, accurate estimation of Fisher information matrices, and helpful comments on the manuscript.

References

Ahmadian, Y., & Miller, K. D. (2021). What is the dynamical regime of cerebral cortex? Neuron, 109(21), 3373-3391.

Assaf, Y., Bouznach, A., Zomet, O., Marom, A., & Yovel, Y. (2020). Conservation of brain connectivity and wiring across the mammalian class. Nature Neuroscience, 23(7), 805-808.

Bellec, G., Scherr, F., Subramoney, A., Hajek, E., Salaj, D., Legenstein, R., & Maass, W. (2020). A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11(1), 3625.

Billeh, Y. N., Cai, B., Gratiy, S. L., Dai, K., Iyer, R., Gouwens, N. W., … Arkhipov, A. (2020). Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. Neuron, 106(3), 388-403.

Bojanek, K., Zhu, Y., & MacLean, J. (2020). Cyclic transitions between higher order motifs underlie sustained asynchronous spiking in sparse recurrent networks. PLOS Computational Biology, 16(9), e1007409.

Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94(5), 3637-3642.

Brown, K. S., & Sethna, J. P. (2003). Statistical mechanical approaches to models with many poorly known parameters. Physical Review E, 68(2), 021904.

Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8(3), 183-208.

Cavallari, S., Panzeri, S., & Mazzoni, A. (2014). Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks. Frontiers in Neural Circuits, 8, 12.

Chambers, B., & MacLean, J. N. (2016). Higher-order synaptic interactions coordinate dynamics in recurrent networks. PLOS Computational Biology, 12(8), e1005078.

Chklovskii, D. B., Mel, B. W., & Svoboda, K. (2004). Cortical rewiring and information storage. Nature, 431(7010), 782-788.

Churchland, A. K., & Abbott, L. F. (2016). Conceptual and technical advances define a key moment for theoretical neuroscience. Nature Neuroscience, 19(3), 348-349.

Cossell, L., Iacaruso, M. F., Muir, D. R., Houlton, R., Sader, E. N., Ko, H., … Mrsic-Flogel, T. D. (2015). Functional organization of excitatory synaptic strength in primary visual cortex. Nature, 518(7539), 399-403.

Daniels, B. C., Chen, Y.-J., Sethna, J. P., Gutenkunst, R. N., & Myers, C. R. (2008). Sloppiness, robustness, and evolvability in systems biology. Current Opinion in Biotechnology, 19(4), 389-395.

Davis, Z. W., Benigno, G. B., Fletterman, C., Desbordes, T., Steward, C., Sejnowski, T. J., … Muller, L. (2021). Spontaneous traveling waves naturally emerge from horizontal fiber time delays and travel through locally asynchronous-irregular states. Nature Communications, 12(1), 6057.

Dechery, J. B., & MacLean, J. N. (2018). Functional triplet motifs underlie accurate predictions of single-trial responses in populations of tuned and untuned V1 neurons. PLOS Computational Biology, 14(5), e1006153.

Denker, M., Yegenoglu, A., & Grün, S. (2018). Collaborative HPC-enabled workflows on the HBP Collaboratory using the Elephant framework. Neuroinformatics 2018, P19.

Doiron, B., Litwin-Kumar, A., Rosenbaum, R., Ocker, G. K., & Josić, K. (2016). The mechanics of state-dependent neural correlations. Nature Neuroscience, 19(3), 383-393.

El Boustani, S., Pospischil, M., Rudolph-Lilith, M., & Destexhe, A. (2007). Activated cortical states: Experiments, analyses and models. Journal of Physiology–Paris, 101(1), 99-109.

Engel, J., Thompson, P. M., Stern, J. M., Staba, R. J., Bragin, A., & Mody, I. (2013). Connectomics and epilepsy. Current Opinion in Neurology, 26(2), 186-194.

Fontenele, A. J., de Vasconcelos, N. A. P., Feliciano, T., Aguiar, L. A. A., Soares-Cunha, C., Coimbra, B., … Copelli, M. (2019). Criticality between cortical states. Physical Review Letters, 122(20), 208101.

Gutenkunst, R. N., Waterfall, J. J., Casey, F. P., Brown, K. S., Myers, C. R., & Sethna, J. P. (2007). Universally sloppy parameter sensitivities in systems biology models. PLOS Computational Biology, 3(10), e189.

Haider, B., Duque, A., Hasenstaub, A. R., & McCormick, D. A. (2006). Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. Journal of Neuroscience, 26(17), 4535-4545.

Haider, B., Häusser, M., & Carandini, M. (2013). Inhibition dominates sensory responses in the awake cortex. Nature, 493(7430), 97-100.

Hofer, S. B., Ko, H., Pichler, B., Vogelstein, J., Ros, H., Zeng, H., … Mrsic-Flogel, T. D. (2011). Differential connectivity and response dynamics of excitatory and inhibitory neurons in visual cortex. Nature Neuroscience, 14(8), 1045-1052.

Hopkins, M., Pineda-García, G., Bogdan, P. A., & Furber, S. B. (2018). Spiking neural networks for computer vision. Interface Focus, 8(4), 20180007.

Huang, C., Ruff, D. A., Pyle, R., Rosenbaum, R., Cohen, M. R., & Doiron, B. (2019). Circuit models of low-dimensional shared variability in cortical networks. Neuron, 101(2), 337-348.

Hurley, N., & Rickard, S. (2009). Comparing measures of sparsity. IEEE Transactions on Information Theory, 55(10), 4723-4741.

Jiang, X., Shen, S., Cadwell, C. R., Berens, P., Sinz, F., Ecker, A. S., … Tolias, A. S. (2015). Principles of connectivity among morphologically defined cell types in adult neocortex. Science, 350(6264), aac9462.

Kerr, D., McGinnity, T. M., Coleman, S., & Clogenson, M. (2015). A biologically inspired spiking model of visual processing for image feature detection. Neurocomputing, 158, 268-280.

Koulakov, A. A., Hromádka, T., & Zador, A. M. (2009). Correlated connectivity and the distribution of firing rates in the neocortex. Journal of Neuroscience, 29(12), 3685.

Landau, I. D., Egger, R., Dercksen, V. J., Oberlaender, M., & Sompolinsky, H. (2016). The impact of structural heterogeneity on excitation-inhibition balance in cortical networks. Neuron, 92(5), 1106-1121.

Lefort, S., Tomm, C., Floyd Sarria, J.-C., & Petersen, C. C. H. (2009). The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex. Neuron, 61(2), 301-316.

MacLean, J. N., Zhang, Y., Johnson, B. R., & Harris-Warrick, R. M. (2003). Activity-independent homeostasis in rhythmically active neurons. Neuron, 37(1), 109-120.

Mao, B.-Q., Hamzei-Sichani, F., Aronov, D., Froemke, R. C., & Yuste, R. (2001). Dynamics of spontaneous activity in neocortical slices. Neuron, 32(5), 883-898.

Niell, C. M., & Stryker, M. P. (2010). Modulation of visual responses by behavioral state in mouse visual cortex. Neuron, 65(4), 472-479.

Ocker, G. K., Hu, Y., Buice, M. A., Doiron, B., Josić, K., Rosenbaum, R., & Shea-Brown, E. (2017). From the statistics of connectivity to the statistics of spike times in neuronal networks. Current Opinion in Neurobiology, 46, 109-119.

Okun, M., Steinmetz, N. A., Cossell, L., Iacaruso, M. F., Ko, H., Barthó, P., Moore, T., … Harris, K. D. (2015). Diverse coupling of neurons to populations in sensory cortex. Nature, 521(7553), 511-515.

Panas, D., Amin, H., Maccione, A., Muthmann, O., van Rossum, M., Berdondini, L., & Hennig, M. H. (2015). Sloppiness in spontaneously active neuronal networks. Journal of Neuroscience, 35(22), 8480-8492.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., … Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825-2830.

Ponce-Alvarez, A., Mochol, G., Hermoso-Mendizabal, A., de la Rocha, J., & Deco, G. (2020). Cortical state transitions and stimulus response evolve along stiff and sloppy parameter dimensions, respectively. eLife, 9, e53268.

Prinz, A. A., Bucher, D., & Marder, E. (2004). Similar network activity from disparate circuit parameters. Nature Neuroscience, 7(12), 1345-1352.

Ransdell, J. L., Nair, S. S., & Schulz, D. J. (2013). Neurons within the same network independently achieve conserved output by differentially balancing variable conductance magnitudes. Journal of Neuroscience, 33(24), 9950-9956.

Recanatesi, S., Ocker, G. K., Buice, M. A., & Shea-Brown, E. (2019). Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLOS Computational Biology, 15(7), e1006446.

Schulz, D. J., Goaillard, J.-M., & Marder, E. (2006). Variable channel expression in identified single and electrically coupled neurons in different animals. Nature Neuroscience, 9(3), 356-362.

Siegle, J. H., Jia, X., Durand, S., Gale, S., Bennett, C., Graddis, N., … Koch, C. (2021). Survey of spiking in the mouse visual system reveals functional hierarchy. Nature, 592(7852), 86-92.

Song, S., Sjöström, P. J., Reigl, M., Nelson, S., & Chklovskii, D. B. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLOS Biology, 3(3), e68.

Spall, J. C. (2005). Monte Carlo computation of the Fisher information matrix in nonstandard settings. Journal of Computational and Graphical Statistics, 14(4), 889-909.

Steinmetz, N. A., Zatka-Haas, P., Carandini, M., & Harris, K. D. (2019). Distributed coding of choice, action and engagement across the mouse brain. Nature, 576(7786), 266-273.

Stimberg, M., Brette, R., & Goodman, D. F. (2019). Brian 2, an intuitive and efficient neural simulator. eLife, 8, e47314.

Tan, A. Y. Y., Chen, Y., Scholl, B., Seidemann, E., & Priebe, N. J. (2014). Sensory stimulation shifts visual cortex from synchronous to asynchronous states. Nature, 509(7499), 226-229.

Transtrum, M. K., Machta, B. B., Brown, K. S., Daniels, B. C., Myers, C. R., & Sethna, J. P. (2015). Perspective: Sloppiness and emergent theories in physics, biology, and beyond. Journal of Chemical Physics, 143(1), 010901.

van Rossum, M. C. W. (2001). A novel spike distance. Neural Computation, 13(4), 751-763.

van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293), 1724-1726.

van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10(6), 1321-1371.

Vegué, M., & Roxin, A. (2019). Firing rate distributions in spiking networks with heterogeneous connectivity. Physical Review E, 100(2), 022208.

Vogels, T. P., & Abbott, L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25(46), 10786-10795.

Winnubst, J., Cheyne, J. E., Niculescu, D., & Lohmann, C. (2015). Spontaneous activity drives local synaptic plasticity in vivo. Neuron, 87(2), 399-410.

Yamins, D. L. K., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D., & DiCarlo, J. J. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23), 8619-8624.

Zylberberg, J., Murphy, J. T., & DeWeese, M. R. (2011). A sparse coding model with synaptically local plasticity and spiking neurons can account for the diverse shapes of V1 simple cell receptive fields. PLOS Computational Biology, 7(10), e1002250.

Author notes

Tarek Jabri is now at Harvard University.