Abstract

Connection strength estimation is widely used in detecting the topology of neuronal networks and assessing their synaptic plasticity. A recently proposed model-based method using the leaky integrate-and-fire model neuron estimates membrane potential from spike trains by calculating the maximum a posteriori (MAP) path. We further enhance the MAP path method using variational Bayes and dynamic causal modeling. Several simulations demonstrate that the proposed method can accurately estimate connection strengths with an error ratio of less than 20%. The results suggest that the proposed method can be an effective tool for detecting network structure and synaptic plasticity.

1  Introduction

Connection strength estimation is widely used in detecting the topology of neuronal networks and assessing their synaptic plasticity (Feldt, Bonifazi, & Cossart, 2011; Stetter, Battaglia, Soriano, & Geisel, 2012; Garofalo, Nieus, Massobrio, & Martinoia, 2009; Cadotte, DeMarse, He, & Ding, 2008; Bettencourt, Stephens, Ham, & Gross, 2007; Paninski, Pillow, & Simoncelli, 2004; Field et al., 2010; Koyama & Paninski, 2010; Isomura, Takeuchi, Shimba, Kotani, & Jimbo, 2012). However, existing methods are insufficient for reconstructing network topologies from a spike time series (a very large, complex data set) and detecting small changes in connection strengths. Therefore, a more accurate method for calculating functional connectivity (or, more strictly speaking, effective connectivity) is required. To achieve this goal, we enhance a maximum a posteriori (MAP) path method with variational Bayes approximation (Beal & Ghahramani, 2003) and dynamic causal modeling (DCM) (Friston, Mattout, Trujillo-Barreto, Ashburner, & Penny, 2006; Friston, Trujillo-Barreto, & Daunizeau, 2008). The resulting method can simultaneously detect the inhibitory connections, intrinsic parameters, and membrane potential of a neuron.

Conventional methods of connection-strength estimation can be divided into two classes: model free and model based. Model-free methods calculate functional connectivity using information-theoretic features (Stetter et al., 2012; Garofalo et al., 2009; Cadotte et al., 2008; Bettencourt et al., 2007), while model-based methods focus on fitting sample data to a neuron model and optimizing parameters using maximum likelihood estimation or Bayesian estimation (Paninski et al., 2004; Field et al., 2010; Koyama & Paninski, 2010; Isomura et al., 2012). Each of these classes has limitations. Model-free methods are weak at estimating accurate values of connection strengths and the intrinsic parameters of each neuron. Moreover, not only model-free methods but also model-based methods that use a firing rate model (Paninski et al., 2004; Field et al., 2010) have difficulty representing the rapid, spike-timing-dependent dynamics of the membrane potential.

To address these problems, a model-based method that fits spike data to a leaky integrate-and-fire (LIF) model has been proposed (Koyama & Paninski, 2010). This method estimates the membrane potential and inputs to a neuron by calculating the MAP path of spike trains. Other researchers have simultaneously estimated membrane potential and synaptic connection strengths by fitting spike trains to an Izhikevich model using maximum likelihood estimation (Isomura et al., 2012). (Here, an Izhikevich model can be regarded as a quadratic enhancement of the LIF model (Izhikevich, 2003, 2004).) A significant benefit of these model-based methods for connection-strength estimation is their applicability to detecting synaptic plasticity, which is believed to be the neural basis of memory and learning (Bear, Connors, & Paradiso, 2006; Fu & Zuo, 2011; Feldman, 2012).

Long-term synaptic plasticity is believed to be crucial for human and animal adaptation. It can be categorized as structural plasticity or functional plasticity. Functional plasticity, which refers to the alteration of synaptic transmission efficacy (also called synaptic connection strength), is further categorized as long-term potentiation (LTP) or long-term depression (LTD). Many researchers have investigated synaptic plasticity and neural adaptation in neural networks using multineuron recording (Jimbo, Tateno, & Robinson, 1999; Shahaf & Marom, 2001; Eytan, Brenner, & Marom, 2003; Ruaro, Bonifazi, & Torre, 2005; Johnson, Goel, & Buonomano, 2010; Dranias, Ju, Rajaram, & VanDongen, 2013). These approaches evaluate the change in response to test stimulations before and after training; however, they do not track variations in response during the training period. Because a response may pass through several neurons, this approach is insufficient for evaluating synaptic plasticity at a monosynaptic connection (Cadotte et al., 2008). Connection-strength estimation has also been used to detect activity-dependent synaptic plasticity (Bettencourt et al., 2007; Cadotte et al., 2008; Feldt et al., 2011); however, the methods involved are imprecise. Reconstructing a network topology from a spike time series and detecting small changes in connection strengths require a more accurate method for calculating functional connectivity.

In the greater context of systems neuroscience, the distinction between model-based and model-free approaches corresponds to the difference between functional and effective connectivity (Sporns, 2007). Functional connectivity is defined as the statistical dependencies among remote electrophysiological time series, while effective connectivity rests on a biophysical model that explains how those dependencies are generated. Generally, effective connectivity (the parameters of a biophysical or generative model) is estimated using dynamic causal modeling (DCM), which is established for functional magnetic resonance imaging (fMRI) and electrophysiological time series (Friston et al., 2006). DCM is the variational Bayesian inversion of (typically neuronal) models of distributed responses of the sort that we consider here for spiking neurons. Our variational approach represents the first application of DCM to spike trains. It therefore complements the more macroscopic domains of existing DCM applications.

Variational model inversion or filtering refers to the inversion of the generative or forward model of data in order to recover or identify the unknown states and parameters of the model. In this article, we describe a form of generalized filtering, variational Laplace. This procedure estimates hidden states and model parameters by optimizing a variational free energy bound on Bayesian model evidence under the Laplace assumption, namely, that the posterior is gaussian. This is slightly different from conventional variational Bayes or ensemble learning because it does not require conjugate priors. In variational Bayes, priors are chosen that are conjugate to the assumed posterior in order to provide explicit solutions to the free energy minimization problem. However, by optimizing variational free energy with a gradient descent, this constraint is relaxed. We demonstrate this by using relatively complicated priors on our connection strengths. The particular variational Laplacian scheme that we consider is closely related to dynamic expectation maximization (DEM) (Friston et al., 2008), which comprises three steps: the first (D) step estimates the hidden states of a system, while the second (E) and third (M) steps optimize, respectively, the parameters and hyperparameters of the system. We consider only hidden states and parameters; accordingly, our scheme corresponds to a dynamic expectation (DE) scheme. Hyperparameters typically involve the amplitude of random fluctuations, which we do not consider here.

The remainder of this article is organized as follows. In section 2, we define techniques for obtaining the optimal path of the membrane potential and the optimal connection strengths using variational Bayes. In section 3, we use simulations to demonstrate that these techniques can accurately estimate connection strengths with an error ratio of less than 20%, where the error ratio is the mean square error divided by the mean square of the true connection strengths. We then demonstrate that the method can be used to detect the synaptic connection strengths of a simulated neural network.

2  Definition of Proposed Connection-Strength Estimation Method

2.1  Models

Our objective is to find the optimal path (time course) of a membrane potential and the optimal connection strengths from spike trains. Our general approach is illustrated in Figure 1. First, we consider the dynamics of membrane potential using the LIF model (Gerstner & Kistler, 2002) given by
$C\dot{v} = -g\,(v - v_{r}) + \mathbf{w}^{\top}\mathbf{s}(t) + \xi,$
2.1
where v is the membrane potential, $\dot{v}$ is its rate of change, and x is the resulting (output) spike train, such that $x(t) = \sum_{k}\delta(t - t_{k})$, where {tk} are the firing times. The meanings of other variables and parameters are shown in Tables 1 and 2. Upon crossing the threshold vt, the neuron generates a spike, after which v returns to the reset potential vc. Assuming that the noise obeys a normal distribution, a vector of spike trains for the input neurons is defined, along with a vector of normalized postsynaptic currents (PSCs), each element of which is the convolution of the corresponding input spike train with the PSC kernel r. The kernel involves a Heaviside step function H(t), which is 0 for t < 0 and 1 for t ≥ 0; a PSC time constant (in ms); and a synaptic delay l = 1 ms. w is a column vector of connection strengths.
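As a concrete illustration, the LIF dynamics and PSC kernel above can be simulated directly. The sketch below integrates equation 2.1 with the Euler method, using the parameter values of Table 2 where they are given; the threshold, reset, and resting potentials and the PSC time constant are assumed placeholder values, since they are not fixed in the text here.

```python
import numpy as np

def simulate_lif(spikes_in, w, T_ms=1000.0, dt=0.2, C=100.0, g=5.0,
                 vr=-70.0, vt=-50.0, vc=-70.0, tau_s=5.0, delay=1.0,
                 noise_sd=0.0, seed=0):
    """Euler simulation of the LIF neuron of equation 2.1.

    spikes_in: list of spike-time lists (ms), one per input neuron.
    w: connection strengths in pA (Table 2); vr, vt, vc, tau_s assumed.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T_ms / dt)
    n_in = len(w)
    v = vr
    s = np.zeros(n_in)              # normalized PSC per input neuron
    out_spikes = []
    for k in range(n_steps):
        t = k * dt
        # PSC decays exponentially; an input spike arriving after the
        # synaptic delay kicks the corresponding s element by 1
        s += (-s / tau_s) * dt
        for j in range(n_in):
            if any(abs(t - (tj + delay)) < dt / 2 for tj in spikes_in[j]):
                s[j] += 1.0
        dv = (-g * (v - vr) + w @ s) / C
        v += dv * dt + noise_sd * np.sqrt(dt) * rng.standard_normal() / C
        if v >= vt:                 # threshold crossing: spike and reset
            out_spikes.append(t)
            v = vc
    return np.array(out_spikes)
```

With strong excitatory input the neuron fires; with zero weights it stays at rest, as expected from the leak term alone.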
Figure 1:

Procedure for estimating connectivity. (A) Schematic diagram of the neuron model. Connection strength is defined as the postsynaptic current (PSC). Obtained data are assumed to be produced by a neural network model and are fitted to the neuron model. Because the PSCs of many neurons cannot be directly recorded, parameter fitting using the spike trains x and r is often used as an indirect means of detection. (B) Conditional probability of v; the interval remaining until the next spike timing is indicated. (C) Procedure for the DE algorithm. The optimal (MAP) path of the membrane potential and the connection strengths w are calculated alternately. In the D-step, the MAP path of the membrane potential is calculated; in the E-step, the MAP parameter w is calculated.


Table 1:
Variables of LIF and Enhanced LIF Models.
Name                             Symbol   Unit
Membrane potential               v        mV
Inactivity index                 u        pA
Output spike train               x        spikes/ms
White gaussian noise             –        pA
Input spike trains               –        spikes/ms
Normalized postsynaptic current  –        –
Table 2:
Parameters of LIF and Enhanced LIF Models.
Name                          Symbol   Value (Unit)
Synaptic connection strength  w        – (pA)
Membrane capacitance          C        100 pF
Leak conductance              g        5 nS
Time constant                 –        100 ms
Firing penalty                ud       800 pA
Reset potential               vc       – (mV)
Threshold potential           vt       – (mV)
Resting potential             vr       – (mV)
An important assumption is that the membrane potential of a neuron is determined by the intrinsic and connection parameters of the neuron and by the spike trains of all neurons; that is, the membrane potential of a neuron is independent of the membrane potentials and intrinsic and connection parameters of all other neurons. Therefore, the empirical prior, or prior beliefs about the dynamics, which relate the hidden states and motion to unknown parameters, can be represented as
formula
2.2
where y is a column vector of the set of hidden state variables for each neuron and the parameter set forms another column vector. y comprises the deviation from the set point and its first-order velocity. This notation comes from generalized filtering in generalized coordinates of motion; we use a first-order approximation to generalized filtering, and these simplifications make the equations and derivations much easier to follow. The parameter vector is defined by the connection strength vector w of neuron i. The aim is to find the estimated density q(y, w | x, s, m), where x is an observable output (a spike train) of the neuron, s is a column vector of observable inputs (the input spike trains convolved with the PSC kernel r), and m is a model of the neuron. Based on the assumption of conditional independence between the hidden state variables and parameters (the mean field approximation), the probability density is given by
$q(y, \mathbf{w} \mid x, \mathbf{s}, m) = q(y)\, q(\mathbf{w}).$
2.3
We also assume that q(y) and q(w) obey gaussian distributions (the Laplace assumption):

$q(y) = \mathcal{N}(y;\, \mu_{y}, \Sigma_{y}),$
2.4

$q(\mathbf{w}) = \mathcal{N}(\mathbf{w};\, \mu_{w}, \Sigma_{w}).$
2.5
Note that $\mu_{y}$ and $\mu_{w}$ are the mean values of y and w, and $\Sigma_{y}$ and $\Sigma_{w}$ are covariance matrices represented as
formula
2.6
formula
2.7

2.2  Priors

To estimate the dynamics of v and the total input from x and s, we assume that (1) during a period in which the neuron does not fire, v will remain smaller than the threshold vt, and (2) the probability distribution of v immediately before each spike converges toward vt. Based on these assumptions, we modify the dynamics of v for consistency with the observed output spike train. Specifically, we stipulate prior distributions over the hidden state v in terms of a prior energy (namely, a log probability), thereby restricting the probability density of v according to the remaining time before the next spike, where tk is the time of the next spike and t is the current time. In addition, we assume that the distance below threshold obeys a gamma distribution whose parameters depend on the remaining time; the mode of this distribution relaxes toward its equilibrium value with a fixed time constant. The prior energy Uv is thereby represented as
formula
2.8

Because Uv approaches infinity as v approaches vt, and the mean and variance become zero at the moment of the next spike, the distribution is consistent with assumptions 1 and 2 above. In equation 2.8, the prior energy plays the role of a barrier at vt. Figure 1B shows the conditional probabilities of the membrane potential just before spiking. The mean of the true path of v gradually increases as the remaining time decreases, while the standard deviation of the true path of v gradually decreases. We confirmed that the true path of v at a specific timing follows a gamma-like distribution, which indicates that our design of the v-prior is plausible.

To constrain the elements of w to a suitable range away from zero, we set the prior on w by restricting its first- and second-order moments. We assume that the elements of w independently obey an identical asymmetric exponential distribution. Thus, the prior energy Uw is defined as
formula
2.9
where and represent the first- and second-order statistics of the parameters over connections and and represent the expectations of these statistics under the prior distribution as follows:
formula
2.10
formula
2.11
This means that and play the role of hyperpriors, describing beliefs about the mean and variance of connection strengths over connections. Note that and are given as positive-definite values. Therefore, the first- and second-order differentials of Uw are given by
formula
2.12
formula
2.13
which are used for calculating variational energy and action in section 2.4. In later expressions, is the first-order partial differential of , and is the second-order partial differential of .
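As an illustration, an asymmetric exponential prior of this kind can be written down directly. In the minimal sketch below, the decay rates a_pos and a_neg are assumed hyperpriors (placeholders; the paper's exact parameterization of Uw in equation 2.9 is not reproduced here), and the function returns the prior energy together with its first-order differential, as used in equations 2.12 and 2.13.

```python
import numpy as np

def prior_energy_w(w, a_pos=0.01, a_neg=0.05):
    """Asymmetric exponential prior energy on connection strengths.

    -log p(w) up to a constant: a_pos * w for w >= 0 (excitatory tail),
    a_neg * |w| for w < 0 (inhibitory tail); a_pos, a_neg are assumed.
    """
    w = np.asarray(w, dtype=float)
    U = np.where(w >= 0, a_pos * w, a_neg * (-w))   # prior energy per element
    dU = np.where(w >= 0, a_pos, -a_neg)            # first-order differential
    return U.sum(), dU
```

The differing rates encode the belief that excitatory strengths are typically larger in magnitude than inhibitory ones, while both are pulled toward (but not onto) zero.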

2.3  Internal Energy and Action

We next define internal energy as a negative log-likelihood function, which is represented as
$U = \dfrac{1}{2\sigma^{2}}\left(C\dot{v} + g\,(v - v_{r}) - \mathbf{w}^{\top}\mathbf{s}\right)^{2} + \mathrm{const.},$
2.14
Note that x and s are the output spike train and the vector of normalized PSCs, which are known. Because the error of equation 2.1 is represented as $\xi = C\dot{v} + g\,(v - v_{r}) - \mathbf{w}^{\top}\mathbf{s}$, equation 2.14 represents the error at each moment. The intrinsic action is then defined as the integral of U over time, as given by
$\bar{U} = \int_{0}^{T} U\, dt.$
2.15
The aim is to find the distribution of hidden states v and parameters w that minimizes the free energy. Under the Laplace assumption, this is equivalent to finding $\mu_{y}$, $\Sigma_{y}$, $\mu_{w}$, and $\Sigma_{w}$, as represented by
formula
2.16
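The internal energy and action defined above can be evaluated numerically from a sampled path. A minimal sketch, assuming the LIF error of equation 2.1 and a noise variance sigma2 (an assumed hyperparameter), with the action approximated as a Riemann sum over the time grid:

```python
import numpy as np

def internal_action(v, s, w, dt=0.2, C=100.0, g=5.0, vr=-70.0, sigma2=1.0):
    """Internal action (eq. 2.15) as the time integral of the squared
    prediction error of the LIF dynamics (eqs. 2.1 and 2.14)."""
    dv = np.diff(v) / dt                              # forward-difference velocity
    resid = C * dv + g * (v[:-1] - vr) - s[:-1] @ w   # xi(t) at each moment
    U = resid**2 / (2.0 * sigma2)                     # internal energy
    return np.sum(U) * dt                             # action: integral of U
```

On a path generated exactly by the model the action vanishes; a wrong parameter vector yields a strictly positive action, which is what the minimization exploits.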

2.4  Variational Energy and Action

Because the free energy cannot be directly calculated, we instead use the variational energies Vy and Vw, which are defined as expectations of the internal energy and represented as
formula
2.17
formula
2.18
Uv and Uw are the prior energies of v and w under the assumption that v obeys a gamma distribution and w obeys an exponential distribution. As with the internal action, the variational action is defined as the integral of the variational energy over time,
formula
2.19
The solution of equation 2.16 is obtained by minimizing Vy and Vw (Friston et al., 2006, 2008). Because U can be expanded using a Taylor expansion, Vy is written as
formula
2.20
Similarly, Vw is calculated as
formula
2.21
Here, , , and are given by
formula
2.22
formula
2.23
formula
2.24
formula
2.25
Because the model's dynamics are linear in the parameters, the quadratic terms are zero in the case of this model, according to equation 2.1. Thus, the proposed method is a simplified version of the DEM algorithm. If we instead assume that the hidden parameters comprise w together with a set of intrinsic parameters, we calculate the variational energy over this larger parameter set instead of Vw; in that case, the quadratic terms of equations 2.20 and 2.21 are nonzero.

2.5  D-Step

The MAP path of the hidden state variables and the MAP estimators of the parameters can be found using a generalization of variational Bayes (Beal & Ghahramani, 2003). In brief, the D-step calculates the MAP path of the hidden states using generalized (variational) filtering. Because we consider only the hidden states and their first derivative, this is effectively an extended Kalman filter. This Bayesian filter provides an optimal path, whose variation with respect to the hidden states is zero, by means of a gradient method in a moving frame of reference. In more detail, the second-order differential equation for equation 2.1 is
formula
2.26
which can also be represented as $\dot{y} = Ay + b$ using a matrix A and vector b. To minimize Vy, it is necessary for the path of the mode to be equal to the mode of the path (Friston et al., 2008). Accordingly, using the steepest descent method, the MAP path is calculated as
formula
2.27
where , and are given by
formula
2.28
formula
2.29
formula
2.30
Note that the efficacy is a positive constant. Equation 2.27 gives the modified path of y, which minimizes Vy. The covariance matrix of y, $\Sigma_{y}$, is given by
formula
2.31
and is a time-dependent variable. When a neuron generates a spike, the variance becomes zero, because the membrane potential is known at that moment.

2.6  E-Step

Using the conventional Gauss-Newton scheme, we find the expected values of the optimal parameters in the E-step. The extreme value of Vw is given by
formula
2.32
Using the Gauss-Newton method, we calculate the optimal value of $\mu_{w}$, which gives the extreme value of Vw, by
formula
2.33
and the covariance matrix $\Sigma_{w}$ is given by
formula
2.34
where specific forms of and are represented as
formula
2.35
formula
2.36
Note that the learning efficacy is defined as a small positive constant.
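The E-step update has the familiar Gauss-Newton form: parameters move against the curvature-scaled gradient of the variational energy. As a self-contained stand-in for equation 2.33 (not the paper's exact update), the sketch below applies one Gauss-Newton step with unit learning efficacy to a least squares objective, for which a single step reaches the optimum.

```python
import numpy as np

def gauss_newton_step(w, S, y, lr=1.0):
    """One Gauss-Newton step on the objective ||y - S w||^2.

    S is the design matrix (here a stand-in for the convolved inputs),
    y the target, lr the learning efficacy.
    """
    r = y - S @ w                        # residual
    g = -2.0 * S.T @ r                   # gradient of the objective
    H = 2.0 * S.T @ S                    # Gauss-Newton Hessian
    return w - lr * np.linalg.solve(H, g)
```

Because the objective is quadratic, the step from any starting point with lr = 1 lands exactly on the least squares solution; for nonquadratic energies a small learning efficacy is used instead, as in the text.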

We calculate the D- and E-steps in succession, iteratively (the DE algorithm), as summarized in Figure 1C. These two steps are repeated until the changes in the path of v and in the values of w converge to zero. Note that the initial value of w is defined as 0.
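The DE iteration can be illustrated on a toy linear state-space problem: a D-step (one gradient step on the state path) alternates with an E-step (closed-form least squares on the parameter) until both converge. The model v' = -v/tau + w*u with noisy observations of v is an assumption chosen to keep the sketch self-contained; it is a schematic of the scheme in Figure 1C, not the paper's generalized-filtering update.

```python
import numpy as np

def de_algorithm(v_obs, u, dt=0.1, tau=1.0, lam=10.0, n_cycles=200, lr=0.01):
    """Alternate D-steps (path) and E-steps (parameter) on a toy model.

    lam plays the role of the precision of the dynamical prediction
    error; lr is the (small, positive) efficacy of the D-step.
    """
    v = v_obs.copy()                 # initialize the path at the observations
    w = 0.0                          # initial parameter estimate (section 2.6)
    a = 1.0 - dt / tau               # one-step transition coefficient
    for _ in range(n_cycles):
        # D-step: one gradient step on the path of v
        pred = a * v[:-1] + dt * w * u[:-1]
        err = v[1:] - pred           # dynamical prediction error
        grad = 2.0 * (v - v_obs)     # data term
        grad_dyn = np.zeros_like(v)
        grad_dyn[1:] += 2.0 * lam * err
        grad_dyn[:-1] -= 2.0 * lam * a * err
        v = v - lr * (grad + grad_dyn)
        # E-step: closed-form least squares for w given the current path
        target = v[1:] - a * v[:-1]
        w = np.sum(target * dt * u[:-1]) / (np.sum((dt * u[:-1]) ** 2) + 1e-12)
    return v, w
```

As in the text, both updates are repeated until neither the path nor the parameter changes appreciably; the observations anchor the path while the dynamics prior smooths it.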

2.7  Enhanced LIF Model

The procedure described above can be applied to other models of neurons. For instance, we additionally tested the enhanced LIF model (Gerstner & Kistler, 2002), which is defined by
formula
2.37
formula
2.38
The meanings of the variables and parameters in this model are shown in Tables 1 and 2. Because the inner states are four-dimensional, y is represented as the corresponding column vector in generalized coordinates. As in equation 2.14, the sum of internal energy is represented as
formula
2.39
As in the case of the LIF model, the MAP path of v and u and the MAP estimators of w are obtained by repeatedly calculating the D- and E-steps.

3  Evaluations

3.1  Methods for Evaluations

To evaluate the accuracy of the estimation, we conducted simulations using the LIF model (LIF), the enhanced LIF model (ELIF), and the Hodgkin-Huxley-Destexhe model (HHD). The HHD equations are an enhancement of the original Hodgkin-Huxley equations (Hodgkin & Huxley, 1952) for mammalian thalamocortical neurons (Pospischil et al., 2008). Full equations and specific parameter values for the HHD model are described in Pospischil et al. (2008). The synaptic dynamics for HHD are represented by a conductance-based synapse model (Izhikevich, Gally, & Edelman, 2004). The total synaptic current Isyn is defined as the sum of current inputs from all receptors, where receptors are categorized into AMPA, NMDA, GABAA, and GABAB types, and is written as
formula
3.1
where gAMPA, gNMDA, gGABAA, and gGABAB are the total conductances of AMPA, NMDA, GABAA, and GABAB receptors, respectively. The dynamics of each conductance are represented by a first-order kinetic equation, where k ∈ {AMPA, NMDA, GABAA, GABAB} indexes the receptor type and gkj are synaptic inputs from presynaptic neuron j. To determine the conductance values of synaptic receptors, we first randomly chose PSC amplitudes, similar to those used in the LIF and ELIF simulations, and then translated these into a conductance representation involving a Heaviside step function. We assumed that for all neurons, the conductance of NMDA and GABAB receptors was zero. Simulations with the HHD equations and the conductance-based synapse model provide a more biologically realistic version of the simplified models presented in the previous section. This enabled us to establish construct validity between the simplified and full Hodgkin-Huxley-like models in our subsequent analyses.

The duration of the simulations was T = 200 s. We assumed an output neuron that receives N = 20 input streams whose weights were unknown (see also Figure 1A). The spiking activities of the inputs were assumed to independently obey Poisson processes: the probability that neuron j fires during the interval from t to t + dt was given as the product of dt and a firing rate (in Hz) common to all input neurons. Of the input neurons, 80% were assumed to be excitatory and 20% inhibitory. All input neurons were assumed to connect to the focal output neuron, and connection strengths were given as random values obeying an asymmetric exponential distribution. White gaussian noise was added. LIF and ELIF were integrated by the Euler method with a temporal resolution of 0.2 ms, while HHD was integrated by the fourth-order Runge-Kutta method with a temporal resolution of 0.04 ms. In the estimation algorithms, the Euler method with a temporal resolution of 0.2 ms was used in all cases. Each simulation was conducted 20 times with differing connection strengths and background noise.
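The simulation setup above can be sketched as follows. The common Poisson rate of 5 Hz and the mean strength of 100 pA are assumed placeholders, since the exact values are not fixed in the text; the 80/20 excitatory/inhibitory split and the exponential strength distribution follow the description.

```python
import numpy as np

def make_inputs(n_in=20, rate_hz=5.0, T_s=200.0, mean_pa=100.0, seed=0):
    """Generate N independent Poisson input spike trains and random
    connection strengths (80% excitatory, 20% inhibitory)."""
    rng = np.random.default_rng(seed)
    spikes = []
    for _ in range(n_in):
        # Poisson process: exponential inter-spike intervals
        isi = rng.exponential(1.0 / rate_hz, size=int(3 * rate_hz * T_s) + 10)
        times = np.cumsum(isi)
        spikes.append(times[times < T_s])
    n_exc = int(0.8 * n_in)
    w = rng.exponential(mean_pa, size=n_in)   # exponential strength magnitudes
    w[n_exc:] *= -1.0                         # last 20% are inhibitory
    return spikes, w
```

Each call with a different seed corresponds to one of the 20 trials with differing connection strengths and background realizations.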

In our comparative evaluations of different models (e.g., LIF and ELIF), we compared accuracy by inverting the model that was used to generate the simulated data. In other words, when assessing LIF, we inverted the LIF model using data from the LIF simulator.

3.2  Estimating Connection Strengths from Membrane Potentials

First, we estimated w from v, x, and s for a given model and set of intrinsic parameters. Because v was given, there was no D-step; w was obtained by repeatedly calculating the E-step until the change in w converged to zero. The E-step was repeated 20 times. This process is exactly the same as the conventional Gauss-Newton scheme.
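When the membrane path is known, equation 2.1 is linear in w, so the repeated E-step behaves like least squares on the total synaptic current. A minimal sketch, assuming v and the convolved inputs s are sampled on the same grid and using the Table 2 parameters:

```python
import numpy as np

def estimate_w(v, s, dt=0.2, C=100.0, g=5.0, vr=-70.0):
    """Least squares estimate of w from a known membrane path.

    Rearranging eq. 2.1: C*dv/dt + g*(v - vr) = w . s(t), so the
    convolved inputs form a linear design matrix for w.
    """
    dv = np.diff(v) / dt                  # forward-difference velocity
    target = C * dv + g * (v[:-1] - vr)   # total synaptic current
    S = s[:-1]                            # (n_steps-1, n_in) design matrix
    w_hat, *_ = np.linalg.lstsq(S, target, rcond=None)
    return w_hat
```

On noiseless subthreshold data the recovery is exact; noise and threshold resets introduce the deviations reported for Figure 2.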

Figure 2 shows the similarity of the true w and the estimated w. In Figures 2A to 2C, w was estimated using the LIF model; the estimates tended to be proportional to the true w. Mean square deviations of the estimated w from the true w were smaller than 4 pA in the LIF simulations, smaller than 30 pA in the ELIF simulations, and approximately 55 pA in the HHD simulations, in which w was estimated using the LIF model. In Figures 2D to 2F, w was estimated using the ELIF model. There was no significant difference in estimation accuracy between LIF and ELIF; however, the LIF estimator was slightly more accurate than the ELIF estimator in the HHD simulations.

Figure 2:

Estimated connection strengths. (A–F) Relationships between true and estimated connection strengths. Dots denote respective connection strengths; a superposition of 20 trials is shown. Connection strengths w were estimated from the path of membrane potential and input spike trains generated from LIF in panels A and D, from ELIF in panels B and E, and from HHD in panels C and F. Connection strengths were estimated using LIF in panels A–C and ELIF in panels D–F. The black line denotes the identity (correct) line. In panels A–F, true connections were prepared at precisely the same values. In panels A–B and D–E, the estimated w was highly proportional to the true w, whereas in panels C and F, it was only approximately proportional. Some estimation errors were observed, particularly for negative and large positive connection strengths.


These results suggest that the proposed method could also be used to estimate connection strengths from extracellular potentials, because such potentials are represented as the convolution of intracellular potentials with a filter function determined by the properties of the membrane and electrodes. In principle, the intracellular potentials, the filter function, and the connection strengths could be obtained simultaneously using variational Laplace.

3.3  Estimating Membrane Potentials from Spikes and Parameters

We next estimated the MAP path of v when w, the intrinsic parameters, and the model are given. Assume that the inputs s, the output x, the initial value of v, the parameters w and the intrinsic parameters, and the model are given. In this case, there is no E-step; moreover, the D-step is calculated only once. Therefore, the MAP path of v is obtained by numerically solving equation 2.27. Although noise can make the path of v uncertain, the output spike train (i.e., the firing times) is given; therefore, the path of v can be dynamically evaluated and corrected. The prior on v is required in this case to avoid a divergence of v.

The simulation was conducted under the same conditions as in section 3.2; the MAP path of v was calculated using the LIF and ELIF models. Figure 3 shows the true and estimated trajectories of v at each moment, with spike trains generated from the LIF model (see Figure 3A), the ELIF model (see Figure 3B), and the HHD model (see Figure 3C). The top-left panels show the paths of the true v simulated by each model. The middle-left panels show the paths of v estimated by LIF, and the lower-left panels show the paths of v estimated by ELIF. The right panels show the v-v plot between the true v and the v estimated by LIF (top right), as well as the v-v plot between the true v and the v estimated by ELIF (bottom right). Because noise makes v highly stochastic, the detailed trajectory of the estimated v was not the same as that of the true v; however, through knowledge of the spike timings, we could reset the error from the true trajectory at each spike time. The trajectories estimated by fitting LIF and ELIF were both similar to the true trajectories, indicating that the estimation of the v path was successful. However, the v-v plots show that estimation using LIF was more accurate than estimation using ELIF, regardless of which model was used for the simulation.

Figure 3:

Estimated membrane potentials from parameters and spike trains. (A) Estimation of membrane potential ( with spike trains generated from the LIF model. The top-left panel is the path of true v simulated by LIF, the middle-left panel is the path of v estimated by LIF, and the bottom-left panel is the path of v estimated by ELIF. The top-right panel shows the v-v plot between true v and v estimated by LIF; the bottom-right panel shows the v-v plot between true v and v estimated by ELIF. (B) Estimation of v with spike trains generated from ELIF. The top-left panel is the path of true v simulated by ELIF. The middle-left and lower-left panels are the paths of v estimated by LIF and ELIF, respectively. The right panels show v-v plots between true v and v estimated by LIF (top right) and ELIF (bottom right). (C) Estimation of v with spike trains generated from HHD. The top-left panel is the path of true v simulated by HHD. The middle-left and lower-left panels are the paths of v estimated by LIF and ELIF, respectively. The right panels show v-v plots between true v and v estimated by LIF (top right) and ELIF (bottom right).


3.4  Estimating Membrane Potentials and Connection Strengths from Spikes

Finally, using just the simulated spike data, we estimated the membrane potential path and the connection strengths with the proposed method and evaluated them against true values that could not be directly known in the estimation step. The D- and E-steps were repeated 20 times so that the changes in the v path and in w converged to zero. The results are shown in Figures 4 and 5. Figures 4A and 4B show the trajectories of the estimated v using the LIF model; the spike train was generated from LIF (see Figure 4A) and HHD (see Figure 4B). The trajectory of the estimated v converged toward that of the true v as the D-step was repeated. Figures 5A to 5F show the distribution of connection strengths. The horizontal axis indicates the true connection strengths of the simulated neural network; the vertical axis indicates the connection strengths estimated by fitting the LIF model (see Figures 5A–5C) and the ELIF model (see Figures 5D–5F). The estimated connection strengths were almost proportional to the true connection strengths (as in section 3.2). In all cases except Figure 5D, the proposed method accurately estimated connection strengths with an error ratio of less than 20% (the mean square error compared with the mean square of the true connection strengths). This result indicates that simultaneous estimation of the path of v and of w using the proposed method was successful. When data were generated from LIF, the accuracy of the ELIF estimator was relatively worse than in the other cases (see Figure 5D).
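The error ratio quoted above can be computed directly: the mean square estimation error divided by the mean square of the true connection strengths.

```python
import numpy as np

def error_ratio(w_true, w_est):
    """Mean square error of estimated strengths, normalized by the
    mean square of the true strengths (the criterion used in section 3)."""
    w_true = np.asarray(w_true, dtype=float)
    w_est = np.asarray(w_est, dtype=float)
    return np.mean((w_est - w_true) ** 2) / np.mean(w_true ** 2)
```

A value below 0.2 corresponds to the "error ratio of less than 20%" criterion used throughout the evaluations.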

Figure 4:

Estimated membrane potentials based on spike trains alone. (A) Estimation of v with spike trains generated from LIF. The top-left panel is the path of true v simulated by LIF. The next three panels on the left are the paths of v estimated by LIF (for 1, 2, and 20 cycles). The right panel shows a v-v plot between true v and v estimated by LIF (at 20 cycles). (B) Estimation of v with spike trains generated from HHD. The top-left panel is the path of true v simulated by HHD. The next three panels on the left are the paths of v estimated by LIF (for 1, 2, and 20 cycles). The right panel shows a v-v plot between true v and v estimated by LIF (at 20 cycles).

Figure 5:

Estimated connection strengths based on spike trains alone. Relationships between true connection strengths and estimated connection strengths. As in Figure 2, spike trains were generated from LIF in panels A and D, from ELIF in panels B and E, and from HHD in panels C and F. Connection strengths were estimated using LIF in panels A to C and ELIF in panels D to F. The black line denotes the correct (identity) line. In all cases, true connections were prepared at exactly the same values. Note the similarity among all estimates.

Figure 6 outlines the properties of the proposed method, including robustness to the number of inputs (N), recording time (T [s]), background noise amplitude (σ [pA]), and firing rate of inputs (λ [Hz]), as well as reproducibility. These evaluations used the root mean square error from the true connection strengths, averaged over 20 trials: averaged error = √[(1/20) Σ_{k=1}^{20} (w_k − ŵ_k)²], where w_k and ŵ_k are the true and estimated connection strengths in the kth trial. We first investigated the estimation error while varying N, with the other simulation parameters fixed (T = 200 s, with the background noise amplitude also fixed). To maintain the total input amplitude, the input firing rate was scaled inversely with N. For N = 5 to 50 inputs, the error was lower than 40 pA (see Figure 6A). We then examined the error while varying T (N = 20, with the background noise and input firing rate fixed). The error gradually decreased between T = 10 and 100 s; for T longer than 100 s, the error remained at approximately 30 pA (see Figure 6B), suggesting that 100 s is adequate for estimation and that longer recording does not further reduce errors. Background noise had almost no effect when its amplitude was smaller than 150 pA (N = 20, T = 200 s, with the input firing rate fixed) (see Figure 6C). However, when the noise amplitude was larger than 150 pA, the influence of background noise increased exponentially. This result is consistent with the fact that accurate estimation of effective network structure becomes difficult when only part of the network can be observed (Isomura et al., 2012); accurate estimation of network structure requires recording neuronal activity from many parts of the network. In our evaluation, data with high-firing inputs tended to be estimated with high accuracy (N = 20, T = 200 s, with the background noise amplitude fixed) (see Figure 6D).
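The averaged-error measure used in these evaluations can be sketched as follows; the 20 trials here are synthetic stand-ins, and all names and noise levels are illustrative assumptions rather than the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def averaged_error(trials):
    """Root mean square estimation error averaged over trials.

    `trials` is a list of (w_true, w_est) pairs, one per trial,
    mirroring the 20-trial evaluation described above (names are
    illustrative, not from the paper's code)."""
    sq = [np.mean((np.asarray(w_est) - np.asarray(w_true)) ** 2)
          for w_true, w_est in trials]
    return np.sqrt(np.mean(sq))

# Synthetic stand-in for 20 estimation trials: true strengths in pA,
# with estimates perturbed by roughly 10 pA of gaussian error.
trials = []
for _ in range(20):
    w_true = rng.uniform(-100, 100, size=20)
    w_est = w_true + rng.normal(0, 10, size=20)
    trials.append((w_true, w_est))

print(averaged_error(trials))  # on the order of 10 pA here
```

With this metric, the error curves in Figures 6A to 6D plot the averaged error (and its standard deviation) against each swept parameter.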

Figure 6:

Properties of the proposed estimation method. From spike data generated by a simulated neural network, v and w were estimated using the proposed method. (A) Dependency on the number of inputs. (B) Dependency on length of recording time. (C) Dependency on amplitude of background noise. (D) Dependency on firing rate of input neurons. (E) Relationship between connection strengths independently estimated from an identical setup of the model and true connection strengths. (F) Accuracy of detecting changes in true connection strengths. In panels A to D, the horizontal axes represent the number of inputs (N), recording time (T [s]), background noise (σ [pA]), and firing rate of inputs (λ [Hz]). The vertical axes represent the root mean square value of the estimation error as the average of 20 trials (a black curve) and the standard deviation (a gray region).

When two sets of connection strengths were estimated separately from two data sets generated from identical models and parameters but over different periods, the difference between the two sets was significantly smaller than the error between the true and estimated connection strengths (approximately 8%) (N = 20, T = 200 s, with the background noise and input firing rate fixed) (see Figure 6E). In addition, the change in connection strengths under small perturbations was estimated with high accuracy (approximately 10%) (N = 20, T = 200 s, with the background noise and input firing rate fixed) (see Figure 6F). These results suggest that the estimated connection strengths take essentially the same values even when spike data are recorded during different periods, as long as the spike trains are generated by identical models and parameters.

3.5  Comparison with Estimation without Hidden Variables

We compared the proposed method with an estimation method without hidden variables. For estimation without hidden variables, we prepared a filter constructed by generalized linear modeling (GLM) (Paninski et al., 2004). The filter approximates an exponential of the sum of inputs at time t from the spike train x. We assumed that the relationship
formula (3.2)
holds, where the left-hand side indicates a firing intensity; the output is therefore approximated by the expectation of x at time t. Note that a_k is a filter time series of fixed length in milliseconds, and the remaining coefficient is a positive constant. Equation 3.2 represents the inverse of the GLM, which gives the input by the convolution of the spike train with a linear filter. Assuming that the error obeys a gaussian distribution, we calculated the optimal filter that maximizes the log-likelihood function. Specifically, we calculated
formula (3.3)
Note that x was generated from the LIF model and that the filter was represented as a 200-dimensional vector, given the time resolution used for the simulation, so that a_k is a finite impulse response (FIR) filter. The estimated a_k values are shown in Figure 7A. Assuming an FIR filter is equivalent to assuming that x is generated from a stochastic function written as a convolution of the spike train with some filter function; this method is therefore an approximation of a Poisson spiking neuron model. Then, assuming that the error obeys a gaussian distribution, we calculated the optimal connection strength vector w that maximizes the log-likelihood function of w, which is represented as
formula (3.4)
This enables estimation of the optimal w without calculating the membrane potential v (a hidden variable). To make the comparison fair, we rescaled the estimated w so that its mean and variance matched those of the true distribution of w.
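Under the gaussian-error assumption above, the optimal FIR filter is simply the least-squares solution of a lagged regression. A hedged sketch with synthetic data, assuming a 200-tap filter spanning lags k = −99 to 100 (as in Figure 7A); the signal, noise level, and true filter shape are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

T, K = 2000, 200
x = (rng.random(T) < 0.05).astype(float)        # binary spike train
true_filter = np.exp(-np.abs(np.arange(-99, 101)) / 10.0)

# Build the lagged design matrix: column k holds x shifted by lag k-99,
# so y(t) = sum_k a_k x(t - lag_k) + noise.
X = np.zeros((T, K))
for k in range(K):
    lag = k - 99                                # lags -99..100
    if lag >= 0:
        X[lag:, k] = x[:T - lag]
    else:
        X[:lag, k] = x[-lag:]

y = X @ true_filter + rng.normal(0, 0.1, T)     # noisy "input" signal
# Least squares = gaussian maximum-likelihood estimate of the filter.
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.max(np.abs(a_hat - true_filter)))      # small recovery error
```

Because no hidden membrane potential appears, this baseline fits entirely in the observed spike domain, which is what makes it a useful comparison point for the proposed method.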
Figure 7:

Comparison of accuracy between the proposed and non-hidden-variable methods. (A) Coefficients of the filter calculated from equation 3.3, a_k, for k = −99 to 100. (B) Error ratio of estimating w. A black solid curve indicates the transition of the estimation error when using the proposed method; a black dashed curve indicates that when using the non-hidden-variable method. Gray areas are the standard deviations.

Figure 7B shows the results of comparing the estimation accuracy of connection strength between the proposed and non-hidden-variable methods. Spike trains were generated from HHD, and connection strengths were estimated using both the proposed and non-hidden-variable methods. When the firing rate of the inputs was small, the proposed method outperformed the non-hidden-variable method in estimating w. Conversely, when the firing rate was sufficiently large, the proposed method failed to accurately estimate w and performed worse than the non-hidden-variable method. The result indicates that when the mean firing rate of the inputs is larger than 35 spikes/s, the proposed method is not an appropriate choice. However, in practical cases, the mean firing rate of neurons in actual nervous systems is rarely higher than 35 spikes/s. This suggests that for estimating w from spike trains of actual neural networks, the proposed method is preferable. When input and output spike trains are sparse, the simple correlation between input and output is not sufficient for accurately estimating w; in these cases, the membrane potential helps to estimate w. Notably, the path of v estimated using LIF differs from the true path of v generated from HHD; nevertheless, v estimated using LIF assists in the accurate estimation of w from spike trains generated by HHD.

4  Discussion

We have applied a variational Laplacian technique to simultaneously estimate the MAP path of membrane potential and the MAP connection strengths of the LIF model. Our method requires information on the model and on the spike trains of the inputs and the output, but not on the membrane potential or the amplitudes of PSCs. A spike train is generally insufficient for reconstructing hidden state variables. In our case, if the output train is generated by the LIF model and the inputs are restricted to the form of PSCs, the MAP path of v can be estimated. Most essential to the proposed method is the prior of v. Assuming that v rises to vt just before each spike time, plausible dynamics of v can be calculated in the first D-step. Then, using the estimated v, an optimal w is calculated in the E-step.
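For illustration, the generative model referred to here (an LIF neuron driven by weighted, exponentially decaying PSC-shaped inputs, with a hard threshold and reset) can be sketched as follows. All constants, names, and the PSC shape are illustrative assumptions, not the paper's values:

```python
import numpy as np

def simulate_lif(spike_inputs, w, dt=0.1, tau_m=20.0, tau_s=5.0,
                 v_rest=-70.0, v_th=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron driven by exponential PSCs.

    spike_inputs: (T, N) binary array of presynaptic spikes.
    w: (N,) connection strengths (PSC amplitudes).
    Returns the membrane-potential path and output spike indices.
    All constants are illustrative defaults, not the paper's."""
    T, N = spike_inputs.shape
    v = np.full(T, v_rest)
    psc = np.zeros(N)              # per-synapse postsynaptic current
    out_spikes = []
    for t in range(1, T):
        # Exponentially decaying PSCs, incremented at input spikes.
        psc += (-psc / tau_s) * dt + spike_inputs[t]
        I = w @ psc                # total weighted synaptic current
        v[t] = v[t - 1] + dt * (-(v[t - 1] - v_rest) + I) / tau_m
        if v[t] >= v_th:           # hard threshold: spike and reset
            out_spikes.append(t)
            v[t] = v_reset
    return v, out_spikes

rng = np.random.default_rng(2)
inputs = (rng.random((5000, 20)) < 0.002).astype(float)
w = rng.uniform(0, 40, 20)         # excitatory strengths (arbitrary)
v, spikes = simulate_lif(inputs, w)
print(len(spikes))
```

In the estimation problem, only `inputs` and the output spike times are observed; the path `v` is the hidden state recovered in the D-step, and `w` is optimized in the E-step.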

Clearly, in the absence of any random fluctuations or system noise, our generative model produces a deterministic response in terms of hidden neuronal states for any given input. This means that estimation of the hidden states is not required; rather, we can focus on optimizing the approximate posterior distribution over the unknown parameters. However, the inclusion of random fluctuations in the equations of motion means that we are obliged to estimate the posterior over hidden states. In fMRI, this is called stochastic dynamic causal modeling.

The prior on v plays a central role in the proposed method. Because our method does not assume a soft threshold for the firing function, it can also be applied to hard-threshold models (although conventional model-based connection-strength estimation has used a soft-threshold model; Paninski et al., 2004). In equation 2.8, the prior term acts as a barrier at the threshold, where its role is similar to that of Uv in Koyama's method in the hard-threshold case (Koyama & Paninski, 2010). A significant difference between the proposed method and Koyama's method is the cause of convergence to vt when a neuron fires. In our method, the cause of convergence is a feature of the v-prior: in equation 2.8, this term acts as the driving force of convergence. Because the term converges to zero when a neuron fires, the prior increases to infinity if v is not vt, and it becomes zero if v equals vt. Another difference is the representation of the reset. The reset of v is written using a delta function, which enables identification of the MAP path of v by solving the same equation across time.

Through our experiments, we confirmed that the proposed method can estimate the connection strengths of simulated neural networks. In the case of estimating connection strengths from spike trains alone, the estimation error when data were generated by HHD was almost the same as when data were generated by LIF and ELIF (less than 20%). This result suggests that the proposed method is robust to the choice of simulation neuron model. We confirmed that the error did not significantly increase as the number of inputs increased. In addition, we found that a recording of 100 s duration was sufficient for accurate estimation, while the error increased for shorter recordings, and that the method is robust to background noise with amplitudes below 150 pA. Moreover, the method is robust to the input firing rate when the rate is sufficiently high, while the error increased at low firing rates. Therefore, we confirmed that the method is robust over wide ranges of input number, recording length, background noise amplitude, and input firing rate.

The proposed method is easily extended to online learning by calculating the E-step after each D-step period, with the required quantities dynamically updated as in the online DEM algorithm (Friston et al., 2008). Furthermore, the method can be extended to use another neuron model (e.g., the HHD model) if a more accurate calculation of membrane potential dynamics is required. In real systems, not every neuron can be observed; activities can be measured for only part of the neuron population. Therefore, the method must estimate connection strengths even when only part of a neural network is observable. As previously shown (Isomura et al., 2012), when a large part of a system can be observed, the estimation error of the model-based method remains almost the same as under perfect observability, indicating that estimation accuracy can be maintained in real systems.

Although we have not considered Bayesian model selection, the real power of dynamic causal modeling is its ability to evaluate any given model in terms of free energy approximation to model evidence. This means it is possible to compare different network architectures and test the associated hypotheses. We will demonstrate this in future work.

Unlike model-free methods, which are relatively weak at calculating connection strength, the proposed method can accurately reconstruct network topology from spike time series; moreover, it is especially accurate in detecting changes in connection strengths. These results suggest that our method can detect activity-dependent plasticity in a neuronal network; it can therefore be used to investigate mechanisms of memory and learning. Connection strengths are known to increase by approximately 140% after 60 stimulations with spike-timing-dependent plasticity (STDP) (Markram, Lübke, Frotscher, & Sakmann, 1997; Bi & Poo, 1998). Because the proposed method can accurately estimate changes in connection strength with an error ratio of approximately 10%, it should be able to detect such changes in actual neural networks.

From a theoretical point of view, several models of STDP have been proposed (Song, Miller, & Abbott, 2000; Clopath, Büsing, Vasilaki, & Gerstner, 2010; Gilson & Fukai, 2011; Gilson, Fukai, & Burkitt, 2012). Moreover, homeostatic plasticity and synaptic scaling, which require competition among several synapses, have been addressed (Turrigiano & Nelson, 2004), and are considered related to an unsupervised learning rule (Oja, 1982; Bell & Sejnowski, 1995). Because the proposed method can simultaneously estimate the changes of many connections from multiple points of recording data, it can be used to investigate both the dynamics of individual connection strengths and the competition among several synapses. Further, it can help detect the distribution of connection strengths (Song, Sjöström, Reigl, Nelson, & Chklovskii, 2005) and structure of microcircuits (Bastos et al., 2012).

In summary, we developed a new method of connection strength estimation using a variational Laplacian technique. The method can accurately estimate simulated connection strengths with an error ratio of less than 20%. Moreover, the method can accurately estimate the change of connection strength with an error ratio of approximately 10%, indicating that it should detect the change in connection strength in actual neural networks. This method can therefore be used to detect synaptic connection strengths in neural networks and assess network structure and synaptic plasticity.

Acknowledgments

This work was partially supported by the Japan Society for the Promotion of Science through Grants-in-Aid for Scientific Research (KAKENHI), grants 23240065 and 26560202, and Grant-in-Aid for JSPS Fellows, grant 26-8435. We thank our reviewers for their very helpful guidance.

References

Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., & Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76, 695–711.

Beal, M. J., & Ghahramani, Z. (2003). The variational Bayesian EM algorithm for incomplete data: With application to scoring graphical model structures. In J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith, & M. West (Eds.), Bayesian statistics. New York: Oxford University Press.

Bear, M. F., Connors, B. W., & Paradiso, M. A. (Eds.). (2006). Neuroscience. New York: Wolters Kluwer Health.

Bell, A. J., & Sejnowski, T. J. (1995). An information-maximization approach to blind separation and blind deconvolution. Neural Comput., 7, 1129–1159.

Bettencourt, L. M. A., Stephens, G. J., Ham, M. I., & Gross, G. W. (2007). Functional structure of cortical neuronal networks grown in vitro. Phys. Rev. E, 75, 021915.

Bi, G. Q., & Poo, M. M. (1998). Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci., 18, 10464–10472.

Cadotte, A. J., DeMarse, T. B., He, P., & Ding, M. (2008). Causal measures of structure and plasticity in simulated and living neural networks. PLoS ONE, 3, e3355.

Clopath, C., Büsing, L., Vasilaki, E., & Gerstner, W. (2010). Connectivity reflects coding: A model of voltage-based STDP with homeostasis. Nat. Neurosci., 13, 344–352.

Dranias, M. R., Ju, H., Rajaram, E., & VanDongen, A. M. J. (2013). Short-term memory in networks of dissociated cortical neurons. J. Neurosci., 33, 1940–1953.

Eytan, D., Brenner, N., & Marom, S. (2003). Selective adaptation in networks of cortical neurons. J. Neurosci., 23, 9349–9356.

Feldman, D. E. (2012). The spike-timing dependence of plasticity. Neuron, 75, 556–571.

Feldt, S., Bonifazi, P., & Cossart, R. (2011). Dissecting functional connectivity of neuronal microcircuits: Experimental and theoretical insights. Trends Neurosci., 34, 225–236.

Field, G. D., Gauthier, J. L., Sher, A., Greschner, M., Machado, T. A., Jepson, L. H., … Chichilnisky, E. J. (2010). Functional connectivity in the retina at the resolution of photoreceptors. Nature, 467, 672–677.

Friston, K., Mattout, J., Trujillo-Barreto, N., Ashburner, J., & Penny, W. (2006). Variational free energy and the Laplace approximation. NeuroImage, 34(1), 220–234.

Friston, K. J., Trujillo-Barreto, N., & Daunizeau, J. (2008). DEM: A variational treatment of dynamic systems. NeuroImage, 41(3), 849–885.

Fu, M., & Zuo, Y. (2011). Experience-dependent structural plasticity in the cortex. Trends Neurosci., 34, 177–187.

Garofalo, M., Nieus, T., Massobrio, P., & Martinoia, S. (2009). Evaluation of the performance of information theory-based methods and cross-correlation to estimate the functional connectivity in cortical networks. PLoS ONE, 4, e6482.

Gerstner, W., & Kistler, W. M. (2002). Spiking neuron models. Cambridge: Cambridge University Press.

Gilson, M., & Fukai, T. (2011). Stability versus neuronal specialization for STDP: Long-tail weight distributions solve the dilemma. PLoS ONE, 6, e25339.

Gilson, M., Fukai, T., & Burkitt, A. N. (2012). Spectral analysis of input spike trains by spike-timing-dependent plasticity. PLoS Comput. Biol., 8, e1002584.

Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol., 117, 500.

Isomura, T., Takeuchi, A., Shimba, K., Kotani, K., & Jimbo, Y. (2012). Connection-strength estimation of neuronal networks by fitting for Izhikevich model. IEEJ Trans. C, 132, 1581–1588. (in Japanese)

Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Trans. Neural Net., 14, 1569–1572.

Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Trans. Neural Net., 15, 1063–1070.

Izhikevich, E. M., Gally, J. A., & Edelman, G. M. (2004). Spike-timing dynamics of neuronal groups. Cereb. Cortex, 14(8), 933–944.

Jimbo, Y., Tateno, T., & Robinson, H. P. C. (1999). Simultaneous induction of pathway-specific potentiation and depression in networks of cortical neurons. Biophys. J., 76, 670–678.

Johnson, H. A., Goel, A. G., & Buonomano, D. V. (2010). Neural dynamics of in vitro cortical networks reflects experienced temporal patterns. Nat. Neurosci., 13, 917–919.

Koyama, S., & Paninski, L. (2010). Efficient computation of the maximum a posteriori path and parameter estimation in integrate-and-fire and more general state-space models. J. Comput. Neurosci., 29, 89–105.

Markram, H., Lübke, J., Frotscher, M., & Sakmann, B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275, 213–215.

Oja, E. (1982). A simplified neuron model as a principal component analyzer. J. Math. Biol., 15, 267–273.

Paninski, L., Pillow, J. W., & Simoncelli, E. P. (2004). Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Comput., 16, 2533–2561.

Pospischil, M., Toledo-Rodriguez, M., Monier, C., Piwkowska, Z., Bal, T., Frégnac, Y., … Destexhe, A. (2008). Minimal Hodgkin–Huxley type models for different classes of cortical and thalamic neurons. Biol. Cybern., 99, 427–441.

Ruaro, M. E., Bonifazi, P., & Torre, V. (2005). Toward the neurocomputer: Image processing and pattern recognition with neuronal cultures. IEEE Trans. Biomed. Eng., 52, 371–383.

Shahaf, G., & Marom, S. (2001). Learning in networks of cortical neurons. J. Neurosci., 21, 8782–8788.

Song, S., Miller, K. D., & Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci., 3, 919–926.

Song, S., Sjöström, P. J., Reigl, M., Nelson, S., & Chklovskii, D. B. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology, 3, e68.

Sporns, O. (2007). Brain connectivity. Scholarpedia, 2(10), 4695.

Stetter, O., Battaglia, D., Soriano, J., & Geisel, T. (2012). Model-free reconstruction of excitatory neuronal connectivity from calcium imaging signals. PLoS Comput. Biol., 8, e1002653.

Turrigiano, G. G., & Nelson, S. B. (2004). Homeostatic plasticity in the developing nervous system. Nat. Rev. Neurosci., 5(2), 97–107.