Abstract

Neurons integrate inputs from many neighbors when they process information. Inputs to a given neuron are therefore indistinguishable from one another. Under the assumption that neurons maximize their information storage, indistinguishability is shown to place a strong constraint on the distribution of strengths between neurons. The distribution of individual synapse strengths is found to follow a modified Boltzmann distribution, with the probability of a strength $s$ proportional to $e^{-s/\bar{s}}/s$, where $\bar{s}$ is the mean total strength. The model is shown to be consistent with experimental data from Caenorhabditis elegans connectivity and in vivo synaptic strength measurements. The $1/s$ dependence helps account for the observation of many zero or weak connections between neurons, that is, the sparsity of the neural network.

1  Introduction

The vast majority of neurons in the brain are disconnected from each other (Song, Sjöström, Reigl, Nelson, & Chklovskii, 2005; Buzsáki & Mizuseki, 2014; Cossell et al., 2015). Why might this be? Neurons evolved to observe, integrate, and interpret incoming stimuli, so it would seem that more connectivity is never worse for the neuron and may well always be better. But in observations of brains, the connectivity rate between pairs of cortical neurons is about 10% to 20%, even when the two neurons overlap in the same physical region (Varshney, Sjöström, & Chklovskii, 2006; Lefort, Tomm, Floyd Sarria, & Petersen, 2009; Clopath & Brunel, 2013). One resolution to the sparseness paradox is that neurons operate in a competitive environment with space and resource constraints that inherently limit their connectivity (Schröter, Paulsen, & Bullmore, 2017). Alternatively, there may be an advantage to sparse connectivity. For example, if one assumes that neurons are gaussian channels, then the amount of information represented by all-to-all connectivity is not worth the cost of those connections in terms of space and resources (Varshney et al., 2006). The insight that sparseness is an advantage for information was further developed to show that perceptron models that maximize their storage have sparse networks (Brunel, Hakim, Isope, Nadal, & Barbour, 2004; Clopath & Brunel, 2013; Brunel, 2016). However, the assumptions underlying such information-theoretic treatments are at best only approximately satisfied in the neuronal setting (Silver, 2010).

Here, an alternative approach is developed in which the sparseness of connectivity compensates for the loss of information incurred when neurons lose track of their inputs. When signals come into a dendritic arbor, the identity of the sender is lost or at least confused (Spruston, 2008; Jan & Jan, 2010). At the cell body, neurons perform a summation and nonlinear transformation that further decorrelates the output from the input neuron's identity (Larkum, Nevian, Sandler, Polsky, & Schiller, 2009; Krueppel, Remy, & Beck, 2011). Summation tends to drive total weight upward and dramatically decreases the number of distinguishable inputs (Kouh, 2017).

As we show, this indistinguishability of inputs leads to a probability of individual synapse strength that is strongly peaked near zero, and such distributions are actually observed as sparse neural networks (Yoshimura, Dantzker, & Callaway, 2005; Bullmore & Sporns, 2009) in biological systems (Song et al., 2005; Varshney, Chen, Paniagua, Hall, & Chklovskii, 2011) and in silico models (Brunel et al., 2004; Clopath & Brunel, 2013; Brunel, 2016). The model makes four major assumptions (discussed further in Figure 1): First, neurons store information in a Hebbian manner as connection strengths or weights. Second, synaptic inputs are unlabeled, and the cell cannot tell them apart when constructing its output. Third, inputs are linearly combined or summed to control the nonlinear output. Fourth, the total number of distinguishable configurations of neurons is maximized.

Figure 1:

A layout of the assumptions. (A) A single dendrite on a neuron is receiving signals from many other cells. The signals travel down the dendrite to the cell body, where they are integrated, passed through a nonlinear filter, and sent on to the next layer of neurons. (B) A more broken-down version. The weights encode the information stored by the neuron. They are summed by an integrator and result in a signal for the cell. Information about the origin of the signal is lost so that, for example, a signal on channel 1 looks the same to the cell as a signal on channel 2.

2  Theory

2.1  Distinguishable Synapses

First, consider the problem with distinguishable synapses to demonstrate the analysis before adding indistinguishability. Define $p_i$ to be the probability that a random neuron has an interaction weight $w_i$ with another random neuron, and define $n_i$ to be the number of such synapses within a given system containing $M$ synapses. For now, assume that there is a discrete set of possible weights that a synapse can take; the limit of continuous weights will be taken later without difficulty. Then the total number of ways to label the neuron with these interactions is
\Omega = \frac{M!}{\prod_i n_i!}   (2.1)
Note that one of the interaction weights can be zero to represent no connection. Equation 2.1 is where the assumption on distinguishability comes into play because it counts every possible combination of weights separately. For example, if there are only two active synapses and two weights, $w_1$ and $w_2$, the system could be in state $(w_1, w_2)$ or $(w_2, w_1)$, and these count as two different system configurations.

Under the Hebbian assumption, each labeling of the weights represented by equation 2.1 corresponds to a potentially different use of the neural network, that is, a memory or some desired dynamic behavior. Regardless of the specific use of the network, it is assumed here that the system maximizes the number of potential states $\Omega$. The more potential configurations of the network there are, the more information can be stored and manipulated. Thus, assuming information storage and manipulation is the primary function of neurons, optimization processes will drive the system to this maximum.

Given that resources (and heads) are finite, there must be some sort of total constraint on the strength of synapses because there is some cost associated with creating and maintaining them. This cost comes from stronger weights being associated with physically larger synapses and requiring more supporting structure like glia (Varshney et al., 2006; Kawaguchi, Karube, & Kubota, 2006). The cost is represented by a constraint on the total synaptic weight: call the cost associated with a synapse of weight $w_i$, $\epsilon(w_i)$. The constraint can be written as
\sum_i n_i\,\epsilon(w_i) = E,
where $E$ is the total cost of the synaptic weight.
From these manipulations, the problem has been cast as the well-known Boltzmann optimization problem: maximizing $\ln\Omega$ under the two constraints (using Stirling's approximation and Lagrange multipliers) shows that the distribution of synaptic weights follows the eponymous Boltzmann distribution,
p_i = \frac{n_i}{M} = \frac{1}{Z}\, e^{-\beta\,\epsilon(w_i)},
where $Z$ is a normalization and $\beta$ is the Lagrange multiplier conjugate to the cost constraint. Thus, for distinguishable synapses, the expectation is that the synaptic connections follow an exponential rule.
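This maximization can be checked numerically. The sketch below (not part of the original analysis; the discrete weight grid, linear cost, and budget are illustrative assumptions) maximizes the multinomial entropy under the number and cost constraints and confirms that the optimum decays exponentially with the cost:

```python
# Minimal numerical check that maximizing ln(Omega) under a fixed total
# number and total cost of synapses yields a Boltzmann (exponential) law.
# The weight grid, linear cost, and budget below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

w = np.linspace(0.0, 10.0, 41)             # discrete synaptic weights w_i
cost = w.copy()                             # assume cost proportional to weight
mean_cost = 2.0                             # resource budget per synapse, E/M

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))            # equals -(1/M) ln(Omega) by Stirling

constraints = (
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},         # sum_i n_i = M
    {"type": "eq", "fun": lambda p: p @ cost - mean_cost},   # sum_i n_i eps_i = E
)
p0 = np.full(w.size, 1.0 / w.size)
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * w.size,
               constraints=constraints)

# For a Boltzmann law, log p_i is linear in the cost: log p_i = a - beta * eps_i.
slope = np.polyfit(cost, np.log(np.clip(res.x, 1e-12, None)), 1)[0]
print("fitted slope (approximately -beta, a constant):", slope)
```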

2.2  Indistinguishable Synapses

For indistinguishable synapses, there is an added complication. In the simple example of two active synapses with weights $w_1$ and $w_2$, the system can no longer distinguish between the configurations $(w_1, w_2)$ and $(w_2, w_1)$ since it can observe only the sum. The total number of configurations is now represented by
\bar{\Omega} = \frac{\bar{M}!}{\prod_j \bar{n}_j!},
where the bar indicates that the quantity refers to a combination of synapses: $\bar{M}$ is the number of neurons, and $\bar{n}_j$ is the number of neurons whose active synaptic inputs combine to a total weight $\bar{w}_j$. Thus, each counted unit is now a neuron, which represents all of the active synaptic inputs to that cell. Under the linearity assumption, the total strength of a neuron is the sum of its active synaptic strengths. In general, the optimization problem is no longer solvable. The optimization to perform is
\max_{\{\bar{n}_j\}} \ln \bar{\Omega} \qquad \text{subject to} \qquad \sum_i n_i = M, \quad \sum_i n_i\,\epsilon(w_i) = E,
and the major complication is that the constraints remain in terms of the synapse numbers $n_i$ (without the bar). However, the constraints physically correspond to the total number and weight of active synapses, which is identical regardless of whether we sum the synapses individually or group them into cells first. Thus, the total number of active synapses is the same, and $\sum_i n_i = M$. Similarly, by grouping each of the weights of the active synapses onto their cells, we can reorder the sum of the weights as
\sum_i n_i\,\epsilon(w_i) = \sum_j \bar{n}_j\,\bar{\epsilon}_j = E, \qquad \bar{\epsilon}_j = \sum_{i \in \text{cell } j} \epsilon(w_i).
Thus, up to renaming the Lagrange multipliers, the resulting optimization problem is the same Boltzmann one as before, and
\bar{p}_j = \frac{\bar{n}_j}{\bar{M}} = \frac{1}{\bar{Z}}\, e^{-\beta\,\bar{\epsilon}_j}.
Finally, in line with experimental results, assume the cost is proportional to the strength and switch to the continuum (Varshney et al., 2006). This leaves
P(s) = \frac{1}{\bar{s}}\, e^{-s/\bar{s}},
where $s$ is the total synaptic strength of a cell and $\bar{s}$ is its mean.

It seems as though the problem is solved and is nearly trivial: neurons should have Boltzmann distributed total strength. However, this is the total strength of the cell instead of the individual synapse strength.

2.3  Two Synapses

To be clear, $P(s)$ refers to the probability that the active inputs sum to a total weight $s$. What is measured, and what is most important for determining neuron behavior, is the pairwise strength. Intuitively, to generate a Boltzmann distribution on the sums of a set of weights, the individual weights should be decreased by some amount. In the simplest example, if all neurons had two active synapses and a total strength of 10 units, then each individual synapse should have a strength of 5 units; the individual parts are necessarily smaller than the whole. However, neurons are not that simple. They have a variable number of active inputs and, as shown above, should tend toward a total strength that follows a Boltzmann distribution. Thus, the problem reduces to this: given a Boltzmann distribution of total weights for all cells, how are the individual synapse weights distributed?

As a simple case, consider just two active synapses per neuron. It has already been assumed that the synapses add linearly, so the probability of a total strength $s$ is a convolution,
P(s) = \int_0^{s} p(w)\, p(s - w)\, dw,
where $p(w)$ is the probability of an individual synaptic strength $w$. Take a Laplace transform of both sides, and remember that the Laplace transform of a convolution is the product of the Laplace transforms, leaving
\tilde{P}(u) = \left[\tilde{p}(u)\right]^{2}.
Then, just take a square root and invert the transform to get
p(w) = \mathcal{L}^{-1}\!\left[\sqrt{\tilde{P}(u)}\right](w).
Next, plug in the expression for $P(s)$:
p(w) = \mathcal{L}^{-1}\!\left[\frac{1}{\sqrt{1 + \bar{s}u}}\right](w) = \frac{1}{Z}\,\frac{e^{-w/\bar{s}}}{\sqrt{w}},   (2.2)
where $Z$ is a normalization constant. The forward Laplace transform is just that of an exponential, and the inverse can be done by noting the branch cut along the negative real line (or by using Mathematica; see the appendix). To generate a Boltzmann distributed total weight, the weight of the average individual synapse decreases. The first instance of sparsity appears here in the $1/\sqrt{w}$ dependence of the distribution. Thus, sparsity seems to be a general feature of destructively combining even very few inputs.
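Equation 2.2 can be sanity-checked by simulation. In the sketch below (an illustration, not part of the original analysis; the scale parameter is arbitrary), individual weights are drawn from the shape-1/2 gamma law implied by equation 2.2, and the sum of each pair is tested against an exponential:

```python
# Check that summing two weights drawn from p(w) ~ exp(-w/s)/sqrt(w)
# (a gamma law with shape 1/2) yields a Boltzmann (exponential) total.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
s_bar = 1.5                                   # mean total strength, arbitrary
pairs = stats.gamma.rvs(a=0.5, scale=s_bar, size=(200_000, 2), random_state=rng)
total = pairs.sum(axis=1)                     # total strength of each model cell

# The sum of two gamma(1/2, s_bar) variables is gamma(1, s_bar), i.e. exponential.
print(stats.kstest(total, "expon", args=(0.0, s_bar)))
```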

2.4  N Synapses

Extending to $N$ synapses involves just higher-order convolutions, and it can be shown inductively that the Laplace transform replaces the square with an $N$th power:
\tilde{P}(u) = \left[\tilde{p}(u)\right]^{N}.
Inverting the transform with $\tilde{P}(u) = (1 + \bar{s}u)^{-1}$ is similar to before, and the result is
p(w) = \frac{1}{Z(N)}\, w^{\frac{1}{N} - 1}\, e^{-w/\bar{s}},
where the normalization $Z(N)$ now depends on $N$. This may be recognized as the gamma distribution with shape parameter $1/N$. For neurons, the interesting limit is $N \to \infty$ because they have many synapses, where
p(w) \;\xrightarrow[N \to \infty]{}\; \frac{1}{Z}\,\frac{e^{-w/\bar{s}}}{w}.   (2.3)

Finally, this is the primary theoretical result: under the assumptions listed above, the distribution of synapse sizes for large neurons should approach the damped exponential of equation 2.3, with far more weak connections than expected.
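As a quick illustration of the limit in equation 2.3 (again a sketch with arbitrary parameter choices, not taken from the analysis above), the ratio of the shape-1/N gamma density at two weights approaches the ratio predicted by $e^{-w/\bar{s}}/w$ as $N$ grows:

```python
# The shape-1/N gamma density from the derivation above approaches the
# (unnormalized) modified Boltzmann form exp(-w/s)/w as N grows.
import numpy as np
from scipy import stats

s_bar = 1.0
w1, w2 = 0.1, 2.0                                  # two arbitrary test weights
limit_ratio = (np.exp(-w1 / s_bar) / w1) / (np.exp(-w2 / s_bar) / w2)
for N in (2, 5, 20, 100):
    dens = stats.gamma.pdf([w1, w2], a=1.0 / N, scale=s_bar)
    print(f"N = {N:4d}: density ratio = {dens[0] / dens[1]:8.2f}  "
          f"(limit {limit_ratio:.2f})")
```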

3  Comparison to Experiment

It is rare to have comprehensive neural data that sample the distribution of weights, but there are at least two cases: Caenorhabditis elegans (Varshney et al., 2011) and a database of paired random cortical neurons (Sjöström, Rancz, Roth, & Häusser, 2008). In addition to the work here, several alternative theoretical forms of the weight distribution have been proposed. Two are most relevant. First, assuming neurons optimally transmit information subject to noise constraints, the weights should follow a stretched exponential rule (Varshney et al., 2006). Second, assuming neurons maximize the number of dynamic states they may access, the weights should follow a delta function plus a gaussian (Brunel, 2016). Thus, there are four models to test:
\begin{aligned}
\mathrm{MB}:&\quad p(w) \propto \frac{e^{-w/\lambda}}{w}, \\
\mathrm{B}:&\quad p(w) \propto e^{-w/\lambda}, \\
\mathrm{SE}:&\quad p(w) \propto e^{-(w/\lambda)^{\gamma}}, \\
\mathrm{G}:&\quad p(w) = f_{0}\,\delta(w) + (1 - f_{0})\,\mathcal{N}_{>0}(w;\mu,\sigma),
\end{aligned}   (3.1)
where MB is the modified Boltzmann distribution described here, SE is the stretched exponential (Varshney et al., 2006), and G is the gaussian plus delta function model (Brunel, 2016); $\lambda$, $\gamma$, $\mu$, $\sigma$, and $f_0$ are fit parameters, and $\mathcal{N}_{>0}$ denotes a gaussian truncated to nonnegative weights. The Boltzmann distribution (B) is also included as a reference. The test models were fit to the data using moment matching in the fitdistrplus package in R (Delignette-Muller & Dutang, 2015). The distribution information of the truncated gaussian comes from the CRAN truncnorm package (Trautmann, Steuer, Mersmann, & Bornkamp, 2014) and the stretched exponential from the flexsurv package (Jackson, 2016). All other distributions are standard in R.
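The fits themselves were performed in R; a rough Python analogue of the fitting step is sketched below. The synthetic stand-in data, the use of scipy in place of the R packages, and the restriction to three of the test functions are assumptions made here for illustration only:

```python
# Rough Python analogue of the fitting step (the paper used R's fitdistrplus,
# truncnorm, and flexsurv packages); the synthetic data are a stand-in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
weights = stats.gamma.rvs(a=0.3, scale=2.0, size=5000, random_state=rng)

# Boltzmann / exponential fit by moment matching: the scale equals the mean.
exp_scale = weights.mean()

# Stretched exponential (Weibull) and gamma fits by maximum likelihood,
# with the location pinned at zero.
weib_shape, _, weib_scale = stats.weibull_min.fit(weights, floc=0)
gam_shape, _, gam_scale = stats.gamma.fit(weights, floc=0)

print("exponential scale:", exp_scale)
print("stretched exponential shape/scale:", weib_shape, weib_scale)
print("gamma shape/scale:", gam_shape, gam_scale)
```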

3.1  C. elegans

One experimental test for the predicted functional form of the synapse distribution comes from the C. elegans model system where the entire connectome is available (WormAtlas, Altun, Herndon, Crocker, Lints, & Hall, 2002–2017; Varshney et al., 2011). (See Figure 2.) There are some subtleties involved with applying the theory described here to C. elegans. Most significant, the connectome in C. elegans is essentially static with very little dynamics, so the weight distribution is determined by evolutionary processes.

The first prediction is that the overall connectivity is an exponential; this is shown in Figure 2A. From the full connectome, the total number of synaptic connections onto (or out of) each neuron is calculated and binned into a histogram. This total number of connections is used as a proxy for connection strength. In principle, the distributions may be different for input and output, so both are examined. There is some difficulty because some of the synapses are not clearly input or output (gap junctions), and they are ignored here. Including them as both input and output (not shown) preserves the exponential distribution. The distributions are exponential, supporting the theory that the total weight of any neuron is exponentially distributed. This indicates that evolution has favored genomes that generate neurons with maximally efficient information storage.
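A sketch of the tally behind Figure 2A is shown below; the file name and column labels are placeholders rather than the actual WormAtlas/Varshney et al. (2011) file format:

```python
# Tally total numbers of chemical synapses onto and out of each neuron,
# as in Fig. 2A. The file name and column names are placeholders for the
# published connectome tables, not their actual format.
import pandas as pd

edges = pd.read_csv("celegans_chemical_synapses.csv")   # columns: pre, post, count
total_out = edges.groupby("pre")["count"].sum()          # out-strength per neuron
total_in = edges.groupby("post")["count"].sum()          # in-strength per neuron

# For an exponential distribution, a histogram of these totals is linear
# on a logarithmic count axis.
print(total_in.describe())
print(total_out.describe())
```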

Figure 2:

The distribution of the number of input synapses (red circles) and output synapses (blue squares) for C. elegans. The data represent (A) the total number of connections onto or out of any neuron, (B) the number of neuron pairs with a given number of synapses, and (C) a quantile-quantile (qq) plot of the four test distributions. In panel A, the line is an exponential as a guide to the eye, showing that both the in and out distributions of connectivity are well matched by an exponential. In panel B, the density of neuron pairs with a given number of synaptic connections is plotted along with fits to the test functions in equation 3.1. Zero is included and occurs when there are no connections between neurons. Note that the vast majority of neuron pairs have zero connectivity, as predicted by the $1/w$ dependence of the theory. Panel C shows empirical versus theoretical quantiles for the four test functions: the closer the points are to the black line, the better the fit.

The stronger claim is that the individual connections are given by the modified Boltzmann distribution (see equation 2.3). The fit to the individual synapse distribution works very well (see Figure 2B). The theory presented here accounts for both the heavy tail and the large number of pairs with zero contacts. Note that these data are the union of two different reconstructions, and taking them individually has no effect on the distribution. Also, there is only one distribution because the input and output distributions must be the same. Figures 2B and 2C compare the data to fits of the test functions. Only the delta gaussian and the modified Boltzmann forms are capable of capturing the large number of small weights. The modified Boltzmann also captures the correct inflection of the tail, although none of the models entirely explain the long tail. Thus, in C. elegans, currently the only complete connectome, the theory presented here is strongly supported.

3.2  Rat Visual Cortical Columns

While complete reconstructions of a cortical column are not available, there are data in which multiple measurements of connection strengths between neurons exist (Song et al., 2005; Sjöström, 2005). This data set applies directly to the strongest claim made here: that pairs of neurons will exhibit a modified Boltzmann distribution of connection strength. The data are measured by taking pairs of nearby neurons, in vivo, and measuring the sensitivity of one neuron to the stimulation of the other, and vice versa, so the strength of the connection is directly estimated. A histogram of strengths is constructed and fits remarkably well with the theory presented here (see Figure 3 and Table 1). Note that this is not a test of the total strength of a neuron being exponential, since that would require stimulating all of a neuron's neighbors.
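The quantile comparison of Figure 3B can be sketched as follows (illustrative only; the file of measured pairwise strengths is a placeholder, and only the exponential reference is shown, the other test functions being handled analogously):

```python
# Empirical vs. theoretical quantiles (as in Fig. 3B) for an exponential
# reference fit; the data file is a placeholder for the paired-recording
# strengths, and the other test functions are handled analogously.
import numpy as np
from scipy import stats

weights = np.sort(np.loadtxt("pairwise_strengths.txt"))   # measured strengths
probs = (np.arange(weights.size) + 0.5) / weights.size
theoretical = stats.expon.ppf(probs, scale=weights.mean())

# Points near the diagonal indicate a good fit.
step = max(1, weights.size // 10)
for emp, theo in list(zip(weights, theoretical))[::step]:
    print(f"{theo:10.4f}  {emp:10.4f}")
```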

Figure 3:

The distribution of connection strengths between neurons within a cortical column, where they have potentially all-to-all connectivity. The data are taken from in vivo paired recordings in rat cortical columns. There is a large spike at zero connection strength due to disconnected neurons, reflecting the sparseness of the network. (A) The distribution of weights along with fits to the test functions in equation 3.1. (B) A quantile plot to test the quality of the fits: the closer the points fall to the diagonal line, the better the fit. The total strength distribution, analogous to Figure 2A, is not available here.

Table 1:
The Parameters of the Test Functions.
Data Set               Model                    Parameter
C. elegans             Modified Boltzmann
                       Exponential
                       Stretched exponential
                       Delta gaussian
Rat cortical column    Modified Boltzmann
                       Exponential
                       Stretched exponential
                       Delta gaussian

Note: The numbers in parentheses are 95% confidence intervals from a parametric bootstrap.

4  Discussion

Assuming that cells behave as unlabeled integrate-and-fire neurons, the theory presented here predicts that the total strength of each cell is given by a Boltzmann distribution that maximizes the entropy. More interesting, restricting the cells to respond to linear combinations of their inputs implies that the inputs will have a distribution with smaller-than-expected values. For relatively modest assumptions, the strengths of the individual synapses, the inputs, are given by a modified Boltzmann distribution, $p(w) \propto e^{-w/\bar{s}}/w$. Consequently, the connectivity strength between any two random neurons is expected to strongly favor zero strength and very weak connections. This zero-peaked pairwise neuron connectivity distribution was observed in C. elegans connectome data and in electrophysiological data from cortical neurons.

One potential complication with the theory is that dendrites may do much more work than just linearly combine their inputs. Indeed, there is evidence that correlations between neighboring synapses on dendrites provide a local inhibition or nonlinear combination (Sjöström et al., 2008; Polsky, Mel, & Schiller, 2004; Silver, 2010). This adds significant complexity to the calculations presented above. Mathematically, local correlations would make the coefficients in the linear combination state or time dependent, and numerical solutions would likely be most fruitful. A second complication is that the theory presented here ignores the network structure of the neurons (Song et al., 2005; Russo, Herrmann, & de Arcangelis, 2014). Including network structure would have the greatest impact on the enumeration of possible states. For example, in a fully connected network of order 3 (a triangle), rotational symmetry means that distinguishing the labeling a, b, c from b, c, a may not be possible. This would change the overall connectivity between cells from the predicted exponential distribution (Brunel et al., 2004; Newman, 1988), and incorporating such network constraints remains an open problem. Also, the assumption that neurons have a well-defined average number of inputs could be improved on by, for example, switching to a grand canonical ensemble to allow synapse numbers to fluctuate explicitly. Regardless, the basic idea of indistinguishability of inputs remains and would likely still drive connection weights down, and the mean field theory presented here appears to recover at least the gross characteristics of experimental systems.

Normalization of the modified Boltzmann distribution requires some careful thought because the integral does not converge on the range 0 to $\infty$ (it diverges logarithmically at the origin), so some finite cutoff may be introduced. Such a modification would make the distribution look more like the log-normal distribution commonly used to model synapse strength in the literature (Song et al., 2005; Ma, Kohashi, & Carlson, 2013; Buzsáki & Mizuseki, 2014; Cossell et al., 2015). But the theory presented here predicts a distribution rather than treating the problem as mainly phenomenological (Barbour, Brunel, Hakim, & Nadal, 2007): the large number of zero or small connection strengths stems directly from neurons combining their inputs. In addition to the sparse connectivity, the large end of the modified Boltzmann distribution has a fat tail and is eventually much larger than either a log-normal (Cossell et al., 2015) or modified gaussian (Brunel, 2016). Such large outlier connections have been observed in vivo (Lefort et al., 2009; Schröter et al., 2017), and they are partially accounted for here.
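Concretely, with a lower cutoff $w_0$ the normalization is an exponential integral, as in the sketch below (the cutoff and scale values are illustrative, not values used in the fits):

```python
# With a lower cutoff w0, the normalization of exp(-w/s)/w is the
# exponential integral E1(w0/s); it diverges only logarithmically as w0 -> 0.
import numpy as np
from scipy.special import exp1

s_bar = 1.0
for w0 in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"w0 = {w0:g}:  Z = E1(w0/s) = {exp1(w0 / s_bar):.3f},  "
          f"log(1/w0) = {np.log(1.0 / w0):.3f}")
```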

Finally, a significant implication of the modified Boltzmann distribution is to resolve the question of why neurons have such sparse connectivity (Yoshimura et al., 2005; Bullmore & Sporns, 2009; Barabási, 2009; Bullmore & Sporns, 2012). Sparse connectivity has been shown to be most efficient in simulations of neural networks where the neurons were allowed to determine their own weights (Brunel et al., 2004; Clopath & Brunel, 2013; Brunel, 2016), and it has been noted that summation tends to drive down connection strengths (Kouh, 2017). At first glance, anything less than potential all-to-all connectivity would seem to restrict the capacity of a neural network because removing a potential contact limits the number of inputs. Paradoxically, then, why are observed neural networks sparse or scale free (Bullmore & Sporns, 2012)? Certainly space constraints would seem to have a significant impact (Varshney et al., 2006; Bullmore & Sporns, 2012; Schröter et al., 2017), but there is some circularity to such assumptions because they assume neurons look the way they do. It is possible that brains could have evolved with all-to-all connectivity given some fantastic shape to neurons or an entirely different scheme for brains. The modified Boltzmann weight with its heavy emphasis on sparse connectivity takes a step back and demonstrates that given cells that linearly combine their inputs, sparsity is the optimal configuration for neural networks.

Appendix:  Inverse Laplace Transform

In the interest of completeness and because it involves some subtleties, the inverse Laplace transform to derive the modified Boltzmann distribution is evaluated here. The problem is to find the inverse Laplace transform after applying the convolution theorem to the transformed quantities:
p(w) = \mathcal{L}^{-1}\!\left[\left(\frac{1}{1 + \bar{s}u}\right)^{1/N}\right](w).   (A.1)
Using the standard approach of Mellin's formula, the integral to evaluate is
p(w) = \frac{1}{2\pi i}\int_{\gamma - i\infty}^{\gamma + i\infty} e^{uw}\left(\frac{1}{1 + \bar{s}u}\right)^{1/N} du,   (A.2)
where $\gamma$ is a constant larger than any pole; in this case, it may be any value larger than $-1/\bar{s}$. Define $\alpha = 1/N$ and change the variables to $z = 1 + \bar{s}u$:
p(w) = \frac{e^{-w/\bar{s}}}{2\pi i\,\bar{s}}\int_{\gamma' - i\infty}^{\gamma' + i\infty} e^{zw/\bar{s}}\, z^{-\alpha}\, dz,   (A.3)
where $\gamma' = 1 + \bar{s}\gamma$. The next step is to apply Cauchy's well-known theorem on the contour in Figure 4. The branch cut is chosen so that
z^{-\alpha} = |z|^{-\alpha}\, e^{-i\alpha \arg z}, \qquad \arg z \in (-\pi, \pi).   (A.4)
Note that the branch cut along the negative real line is avoided, and there are no discontinuities inside the contour, so
\oint e^{zw/\bar{s}}\, z^{-\alpha}\, dz = 0,   (A.5)
where the contours at $|z| \to \infty$ and around the origin converge to 0. The remaining contours correspond to the original integral and each direction along the branch cut:
\frac{1}{2\pi i}\int_{\gamma' - i\infty}^{\gamma' + i\infty} e^{zw/\bar{s}}\, z^{-\alpha}\, dz = \frac{\sin(\pi\alpha)}{\pi}\int_0^{\infty} e^{-rw/\bar{s}}\, r^{-\alpha}\, dr.   (A.6)
Change the variables to $t = rw/\bar{s}$:
\frac{\sin(\pi\alpha)}{\pi}\int_0^{\infty} e^{-rw/\bar{s}}\, r^{-\alpha}\, dr = \frac{\sin(\pi\alpha)}{\pi}\left(\frac{\bar{s}}{w}\right)^{1-\alpha}\Gamma(1-\alpha),   (A.7)
where the last step is the definition of the $\Gamma$ function. Rearrange and remember Euler's famous reflection formula, $\Gamma(\alpha)\Gamma(1-\alpha) = \pi/\sin(\pi\alpha)$:
p(w) = \frac{e^{-w/\bar{s}}}{\bar{s}}\,\frac{1}{\Gamma(\alpha)}\left(\frac{\bar{s}}{w}\right)^{1-\alpha} = \frac{w^{\alpha - 1}\, e^{-w/\bar{s}}}{\bar{s}^{\alpha}\,\Gamma(\alpha)}.   (A.8)
With $\alpha = 1/N$, this is the desired result. Also, note that this is the gamma distribution.
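The closed form in equation A.8 can also be spot-checked against a direct numerical inversion of equation A.1, as in the sketch below (assuming the mpmath library is available; the parameter values are arbitrary):

```python
# Spot check of eq. A.8 against a direct numerical inversion of eq. A.1.
import mpmath as mp

s_bar = mp.mpf(1)
N = 4
alpha = mp.mpf(1) / N
F = lambda u: (1 / (1 + s_bar * u)) ** alpha           # transformed quantity, eq. A.1

for w in (0.2, 1.0, 3.0):
    numeric = mp.invertlaplace(F, w, method="talbot")
    closed = w ** (alpha - 1) * mp.exp(-w / s_bar) / (s_bar ** alpha * mp.gamma(alpha))
    print(w, numeric, closed)
```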
Figure 4:

The contour for evaluating the inverse Laplace transform. Note the branch cut along the negative real line.

Acknowledgments

This work was supported in part by NSF grant SMA 1041755 to the Temporal Dynamics of Learning Center, an NSF Science of Learning Center.

References

Barabási, A.-L. (2009). Scale-free networks: A decade and beyond. Science, 325(5939), 412.
Barbour, B., Brunel, N., Hakim, V., & Nadal, J.-P. (2007). What can we learn from synaptic weight distributions? Trends in Neurosciences, 30(12), 622–629.
Brunel, N. (2016). Is cortical connectivity optimized for storing information? Nature Neuroscience, 19(5), 749–755.
Brunel, N., Hakim, V., Isope, P., Nadal, J.-P., & Barbour, B. (2004). Optimal information storage and the distribution of synaptic weights: Perceptron versus Purkinje cell. Neuron, 43(5), 745–757.
Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186–198.
Bullmore, E., & Sporns, O. (2012). The economy of brain network organization. Nature Reviews Neuroscience, 13(5), 336–349.
Buzsáki, G., & Mizuseki, K. (2014). The log-dynamic brain: How skewed distributions affect network operations. Nature Reviews Neuroscience, 15(4), 264–278.
Clopath, C., & Brunel, N. (2013). Optimal properties of analog perceptrons with excitatory weights. PLoS Computational Biology, 9(2), e1002919.
Cossell, L., Iacaruso, M. F., Muir, D. R., Houlton, R., Sader, E. N., Ko, H., … Mrsic-Flogel, T. D. (2015). Functional organization of excitatory synaptic strength in primary visual cortex. Nature, 518(7539), 399–403.
Delignette-Muller, M. L., & Dutang, C. (2015). fitdistrplus: An R package for fitting distributions. Journal of Statistical Software, 64(4), 1–34. http://www.jstatsoft.org/v64/i04/
Jackson, C. (2016). flexsurv: A platform for parametric survival modeling in R. Journal of Statistical Software, 70(8), 1–33. doi:10.18637/jss.v070.i08
Jan, Y.-N., & Jan, L. Y. (2010). Branching out: Mechanisms of dendritic arborization. Nature Reviews Neuroscience, 11(5), 316–328.
Kawaguchi, Y., Karube, F., & Kubota, Y. (2006). Dendritic branch typing and spine expression patterns in cortical nonpyramidal cells. Cerebral Cortex, 16(5), 696–711.
Kouh, M. (2017). Information maximization explains the sparseness of presynaptic neural response. Neural Computation, 29(4), 888–896.
Krueppel, R., Remy, S., & Beck, H. (2011). Dendritic integration in hippocampal dentate granule cells. Neuron, 71(3), 512–528.
Larkum, M. E., Nevian, T., Sandler, M., Polsky, A., & Schiller, J. (2009). Synaptic integration in tuft dendrites of layer 5 pyramidal neurons: A new unifying principle. Science, 325(5941), 756–760.
Lefort, S., Tomm, C., Floyd Sarria, J. C., & Petersen, C. C. H. (2009). The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex. Neuron, 61(2), 301–316.
Ma, X., Kohashi, T., & Carlson, B. A. (2013). Extensive excitatory network interactions shape temporal processing of communication signals in a model sensory system. Journal of Neurophysiology, 110(2), 456–469.
Newman, C. M. (1988). Memory capacity in neural network models: Rigorous lower bounds. Neural Networks, 1(3), 223–238.
Polsky, A., Mel, B. W., & Schiller, J. (2004). Computational subunits in thin dendrites of pyramidal cells. Nature Neuroscience, 7(6), 621–627.
Russo, R., Herrmann, H. J., & de Arcangelis, L. (2014). Brain modularity controls the critical behavior of spontaneous activity. Scientific Reports, 4.
Schröter, M., Paulsen, O., & Bullmore, E. T. (2017). Micro-connectomics: Probing the organization of neuronal networks at the cellular scale. Nature Reviews Neuroscience, 18(3), 131–146.
Silver, R. A. (2010). Neuronal arithmetic. Nature Reviews Neuroscience, 11(7), 474–489.
Sjöström, P. J. (2005). Connectivity dataset. http://plasticity.muhc.mcgill.ca/DataPage/Song_2005/Connectivity_v10.xls
Sjöström, P. J., Rancz, E. A., Roth, A., & Häusser, M. (2008). Dendritic excitability and synaptic plasticity. Physiological Reviews, 88(2), 769–840.
Song, S., Sjöström, P. J., Reigl, M., Nelson, S., & Chklovskii, D. B. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology, 3(3), e68.
Spruston, N. (2008). Pyramidal neurons: Dendritic structure and synaptic integration. Nature Reviews Neuroscience, 9(3), 206–221.
Trautmann, H., Steuer, D., Mersmann, O., & Bornkamp, B. (2014). truncnorm: Truncated normal distribution. https://CRAN.R-project.org/package=truncnorm
Varshney, L. R., Chen, B. L., Paniagua, E., Hall, D. H., & Chklovskii, D. B. (2011). Structural properties of the Caenorhabditis elegans neuronal network. PLoS Computational Biology, 7(2), e1001066.
Varshney, L. R., Sjöström, P. J., & Chklovskii, D. B. (2006). Optimal information storage in noisy synapses under resource constraints. Neuron, 52(3), 409–423.
WormAtlas, Altun, Z. F., Herndon, L. A., Crocker, C., Lints, R., & Hall, D. (Eds.). (2002–2017). http://www.wormatlas.org
Yoshimura, Y., Dantzker, J. L., & Callaway, E. M. (2005). Excitatory cortical neurons form fine-scale functional networks. Nature, 433(7028), 868–873.