## Abstract

Connection strength estimation is widely used in detecting the topology of neuronal networks and assessing their synaptic plasticity. A recently proposed model-based method using the leaky integrate-and-fire model neuron estimates membrane potential from spike trains by calculating the maximum a posteriori (MAP) path. We further enhance the MAP path method using variational Bayes and dynamic causal modeling. Several simulations demonstrate that the proposed method can accurately estimate connection strengths with an error ratio of less than 20%. The results suggest that the proposed method can be an effective tool for detecting network structure and synaptic plasticity.

## 1 Introduction

Connection strength estimation is widely used in detecting the topology of neuronal networks and assessing their synaptic plasticity (Feldt, Bonifazi, & Cossart, 2011; Stetter, Battaglia, Soriano, & Geisel, 2012; Garofalo, Nieus, Massobrio, & Martinoia, 2009; Cadotte, DeMarse, He, & Ding, 2008; Bettencourt, Stephens, Ham, & Gross, 2007; Paninski, Pillow, & Simoncelli, 2004; Field et al., 2010; Koyama & Paninski, 2010; Isomura, Takeuchi, Shimba, Kotani, & Jimbo, 2012). However, existing methods are insufficient for reconstructing network topologies from a spike time series (a very large, complex data set) and detecting small changes in connection strengths. Therefore, a more accurate method for calculating functional connectivity (or, more strictly speaking, effective connectivity) is required. To achieve this goal, we enhance a maximum a posteriori (MAP) path method with variational Bayes approximation (Beal & Ghahramani, 2003) and dynamic causal modeling (DCM) (Friston, Mattout, Trujillo-Barreto, Ashburner, & Penny, 2006; Friston, Trujillo-Barreto, & Daunizeau, 2008). The resulting method can simultaneously detect the inhibitory connections, intrinsic parameters, and membrane potential of a neuron.

Conventional methods of connection-strength estimation can be divided into two classes: model free and model based. Model-free methods calculate functional connectivity using information-theoretic features (Stetter et al., 2012; Garofalo et al., 2009; Cadotte et al., 2008; Bettencourt et al., 2007), while model-based methods focus on fitting sample data into the neuron model and optimizing parameters using maximum likelihood estimation or Bayesian estimation (Paninski et al., 2004; Field et al., 2010; Koyama & Paninski, 2010; Isomura et al., 2012). Each of these method classes has limitations. Model-free methods are poor at estimating accurate values of connection strengths and the intrinsic parameters of each neuron. Moreover, both model-free methods and model-based methods built on firing-rate models (Paninski et al., 2004; Field et al., 2010) have difficulty representing the rapid dynamics of membrane potential driven by spike timing.

To address these problems, a model-based method that fits spike data into a leaky integrate-and-fire (LIF) model has been proposed (Koyama & Paninski, 2010). This method estimates the membrane potential and inputs to a neuron by calculating the MAP path of spike trains. Other researchers have simultaneously estimated membrane potential and synaptic connection strengths by fitting spike trains into an Izhikevich model using maximum likelihood estimation (Isomura et al., 2012). (Here, an Izhikevich model can be regarded as a quadratic enhancement of the LIF model (Izhikevich, 2003, 2004).) A significant benefit of these model-based methods for connection-strength estimation is their applicability to detecting synaptic plasticity, which is believed to be the neural basis of memory and learning (Bear, Connors, & Paradiso, 2006; Fu & Zuo, 2011; Feldman, 2012).

Long-term synaptic plasticity is believed to be crucial for human and animal adaptation. Long-term synaptic plasticity can be categorized as structural plasticity or functional plasticity. Functional plasticity, which refers to the alteration of synaptic transmission efficacy (also called synaptic connection strength), is categorized as long-term potentiation (LTP) or long-term depression (LTD). Many researchers have investigated synaptic plasticity and neural adaptation in neural networks using multineuron recording (Jimbo, Tateno, & Robinson, 1999; Shahaf & Marom, 2001; Eytan, Brenner, & Marom, 2003; Ruaro, Bonifazi, & Torre, 2005; Johnson, Goel, & Buonomano, 2010; Dranias, Ju, Rajaram, & VanDongen, 2013). These approaches evaluate the change in response to test stimulations before and after training; however, they do not track variations in response during the training period. Because the response may pass through several neurons, it is insufficient for evaluating synaptic plasticity in a monosynaptic connection (Cadotte et al., 2008). Also, connection-strength estimation has been used to detect activity-dependent synaptic plasticity (Bettencourt et al., 2007; Cadotte et al., 2008; Feldt et al., 2011); however, the involved methods are imprecise. To reconstruct a network topology from a spike time series and detect small changes in connection strengths requires a more accurate method for calculating functional connectivity.

In the greater context of systems neuroscience, the distinction between model-based and model-free approaches corresponds to the difference between functional and effective connectivity (Sporns, 2007). Functional connectivity is defined as the statistical dependencies among remote electrophysiological time series, while effective connectivity rests on a biophysical model that explains how those dependencies are generated. Generally the effective connectivities (parameters of a biophysical or generative model) are estimated using dynamic causal modeling (DCM), which is established for functional magnetic resonance imaging (fMRI) and electrophysiological time series (Friston et al., 2006). DCM is the variational Bayesian inversion of (typically neuronal) models of distributed responses of the sort that we consider here for spiking neurons. Our variational approach represents the first application of DCM to spike trains. It therefore complements the more macroscopic domains of existing DCM applications.

Variational model inversion or filtering refers to the inversion of the generative or forward model of data for recovering or identifying the unknown states and parameters of the model. In this article, we describe a form of generalized filtering, variational Laplace. This procedure estimates hidden states and model parameters by optimizing a variational free energy bound on Bayesian model evidence under the Laplace assumption, namely, that the posterior is gaussian. This is slightly different from conventional variational Bayes or ensemble learning because it does not require conjugate priors. In variational Bayes, priors are chosen that are conjugate to the assumed posterior to provide explicit solutions to the free energy minimization problem. By optimizing variational free energy with gradient descent, however, this constraint is relaxed; we demonstrate this by using relatively complicated priors on our connection strengths. The particular variational Laplacian scheme that we consider is closely related to dynamic expectation maximization (DEM) (Friston et al., 2008), which comprises three steps: the first (D) step estimates the hidden states of a system, while the second (E) and third (M) steps optimize, respectively, the parameters and hyperparameters of the system. We consider only hidden states and parameters; accordingly, our scheme corresponds to a dynamic expectation (DE) scheme. Hyperparameters typically involve the amplitude of random fluctuations, which we do not consider here.

The remainder of this article is organized as follows. In section 2, we define techniques for obtaining the optimal path of membrane potential and optimal connection strengths using variational Bayes. In section 3, we use a simulation to demonstrate that these techniques can accurately estimate connection strengths with an error ratio of less than 20% (the mean square error relative to the mean square of the true connection strengths). We then demonstrate that the method can be used to detect the synaptic connection strengths of a simulated neural network.

## 2 Definition of Proposed Connection-Strength Estimation Method

### 2.1 Models

Here, *v* is the membrane potential, its time derivative gives the change of *v*, and *x* is the resulting (output) spike train, such that *x*(*t*) is a sum of delta functions at the firing times {*t _{k}*}. The meanings of the other variables and parameters are shown in Tables 1 and 2. Upon crossing the threshold *v _{t}*, the neuron generates a spike, after which *v* returns to the reset potential *v _{c}*. Assuming that the noise obeys a normal distribution, a vector of spike trains for the input neurons is represented as **r**, and a vector of normalized postsynaptic currents (PSCs) is represented as **s**, where each element of **s** is the convolution over time of the PSC kernel with the corresponding element of **r**. The PSC kernel involves a Heaviside step function (0 for *t* < 0 and 1 for *t* ≥ 0), a PSC time constant (in ms), and a synaptic delay of *l* = 1 ms. **w** is a column vector of connection strengths.
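To make the generative model concrete, the following is a minimal sketch of the LIF dynamics above (equation 2.1) driven by exponentially filtered input spikes. *C* and *g* follow Table 2; the resting, reset, and threshold potentials, the PSC time constant, and the excitation-only weights are illustrative assumptions, since those values are elided in the text.

```python
import numpy as np

def simulate_lif(spike_inputs, w, dt=0.2, C=100.0, g=5.0,
                 v_rest=-70.0, v_reset=-70.0, v_thresh=-50.0, tau_s=5.0):
    """Euler simulation of an LIF neuron driven by exponentially
    filtered input spikes (normalized PSCs). Units: ms, pF, nS, mV, pA.
    C and g follow Table 2; the potentials and the PSC time constant
    are illustrative assumptions (their values are elided in the text)."""
    n_steps, n_in = spike_inputs.shape
    s = np.zeros(n_in)                     # normalized PSC traces
    v = v_rest
    out = np.zeros(n_steps)                # output spike train
    vs = np.empty(n_steps)                 # membrane potential path
    for t in range(n_steps):
        s += -dt * s / tau_s + spike_inputs[t]        # PSC decay + kicks
        v += dt / C * (-g * (v - v_rest) + w @ s)     # leak + synaptic input
        if v >= v_thresh:                             # hard threshold
            out[t] = 1.0
            v = v_reset                               # reset
        vs[t] = v
    return vs, out
```

A 1 s run with 20 Poisson-like inputs and exponentially distributed weights produces a subthreshold trace punctuated by resets, matching the hard-threshold behavior described above.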

Table 1: Variables of the model.

| Name | Symbol | Unit |
|---|---|---|
| Membrane potential | *v* | mV |
| Inactivity index | *u* | pA |
| Output spike train | *x* | spikes/ms |
| White gaussian noise | | pA |
| Input spike trains | **r** | spikes/ms |
| Normalized postsynaptic current | **s** | − |


Table 2: Parameters of the model.

| Name | Symbol | Value (Unit) |
|---|---|---|
| Synaptic connection strength | **w** | pA |
| Membrane capacitance | *C* | 100 pF |
| Leak conductance | *g* | 5 nS |
| Time constant | | 100 ms |
| Firing penalty | *u _{d}* | 800 pA |
| Reset potential | *v _{c}* | mV |
| Threshold potential | *v _{t}* | mV |
| Resting potential | *v _{r}* | mV |


**y** is a column vector comprising the deviation of *v* from its set point and its first-order velocity. This notation comes from generalized filtering in generalized coordinates of motion; we use a first-order approximation to generalized filtering. These simplifications make the equations and derivations much easier to follow. The input is a column vector defined in terms of **s**, where **w** is the connection-strength vector that neuron *i* provides. The aim is to find the estimated density of **y** and **w** given *x*, **s**, and *m*, where *x* is the observable output (a spike train) of the neuron, **s** is a column vector of observable inputs (the convolved input spike trains **r**), and *m* is a model of the neuron. Based on the assumption of conditional independence between hidden state variables and parameters (the mean field approximation), the probability density factorizes as *q*(**y**, **w**) = *q*(**y**)*q*(**w**). We also assume that *q*(**y**) and *q*(**w**) are gaussian (the Laplace assumption). The means of **y** and **w** and the corresponding covariance matrices parameterize these gaussian densities.
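The Laplace assumption can be illustrated numerically in one dimension: descend the negative log posterior to its mode and take the inverse local curvature as the gaussian posterior variance. The sketch below uses a toy gamma-shaped posterior; all functions and values are illustrative, not the paper's.

```python
import numpy as np

def laplace_fit(neg_log_p, x0, lr=0.1, iters=2000, h=1e-4):
    """Laplace approximation in one dimension: gradient-descend the
    negative log posterior to its mode, then take the inverse curvature
    at the mode as the gaussian posterior variance (finite differences
    throughout)."""
    x = float(x0)
    for _ in range(iters):
        grad = (neg_log_p(x + h) - neg_log_p(x - h)) / (2.0 * h)
        x -= lr * grad
    curv = (neg_log_p(x + h) - 2.0 * neg_log_p(x) + neg_log_p(x - h)) / h**2
    return x, 1.0 / curv   # posterior mean ~ mode, variance ~ 1/curvature

# toy unnormalized posterior: Gamma(shape=3, scale=2), supported on x > 0
k, theta = 3.0, 2.0
U = lambda x: -((k - 1.0) * np.log(x) - x / theta)
mode, var = laplace_fit(U, x0=1.0)   # analytic: mode = 4, variance = 8
```

The fitted gaussian matches the analytic mode (*k* − 1)θ = 4 and inverse curvature θ²(*k* − 1)⁻¹·(*k* − 1)² = 8 of this toy density.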

### 2.2 Priors

To estimate **w** from the observed *x* and **s**, we assume that (1) for a period in which the neuron does not fire, *v* remains smaller than *v _{t}*, and (2) the probability distribution of *v* immediately before each spike converges toward *v _{t}*. Based on these assumptions, we modify the dynamics of *v* for consistency with the observed output spike train. Specifically, we stipulate prior distributions over the hidden state *v* in terms of prior energy (namely, the log probability), thereby restricting the probability density of *v* according to the remaining time before the next spike, *t _{k}* − *t*, where *t _{k}* is the time of the next spike and *t* is the current time. In addition, we assume that the distance to threshold, *v _{t}* − *v*, obeys a gamma distribution with shape parameter 2 and scale parameter θ; its mean, mode, and variance are 2θ, θ, and 2θ², respectively. The parameter θ is a function of the remaining time before the next spike, where the mode of *v _{t}* − *v* in the equilibrium state (in mV) and a time constant (in ms) determine its form. The prior energy

*U ^{v}* is thereby represented as in equation 2.8. Because *U ^{v}* approaches infinity as *v* approaches *v _{t}*, and the mean and variance of the prior become zero as the next spike time arrives, the distribution is consistent with assumptions 1 and 2 above. In equation 2.8, the gamma term plays the role of a barrier at *v* = *v _{t}*. Figure 1B shows conditional probabilities of membrane potential just before spiking. The mean of the true path of *v* gradually increases as the time remaining before the spike decreases, while its standard deviation gradually decreases. We confirmed that the true path of *v* at a specific timing follows a gamma-like distribution, which indicates that our design of the v-prior is plausible.
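The barrier behavior of this prior can be sketched numerically. Below, the distance to threshold d = *v _{t}* − *v* is given a gamma prior of shape 2 whose scale θ shrinks as the next spike approaches; the specific form of θ(t) and all numeric values are illustrative assumptions, not the paper's expressions.

```python
import numpy as np

def v_prior_energy(v, t_to_spike, v_t=-50.0, v_r=-70.0, tau=10.0, k=2.0):
    """Gamma-shaped prior energy on d = v_t - v (shape k = 2).
    The scale theta -> 0 as the next spike approaches, so the prior
    drags v toward the threshold; theta's time dependence here is an
    illustrative stand-in for the paper's elided expression."""
    theta = (v_t - v_r) / k * (1.0 - np.exp(-t_to_spike / tau)) + 1e-12
    d = v_t - v
    if d <= 0:
        return np.inf                   # suprathreshold v is excluded
    # negative log of an unnormalized Gamma(k, theta) density in d
    return -((k - 1.0) * np.log(d) - d / theta)
```

The energy diverges as *v* approaches the threshold from below (the barrier) and, for a fixed *v*, grows as the next spike approaches (the drag toward *v _{t}*), mirroring assumptions 1 and 2.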

To restrict **w** to a suitable range away from 0, we set the prior on **w** by restricting its first- and second-order moments. We assume that the elements of **w** independently obey an identical asymmetric exponential distribution. Thus, the prior energy *U ^{w}* is defined in terms of the first- and second-order statistics of the parameters over connections and the expectations of these statistics under the prior distribution. These expectations play the role of hyperpriors, describing beliefs about the mean and variance of connection strengths over connections; they are given as positive-definite values. The first- and second-order differentials of *U ^{w}* follow accordingly and are used for calculating variational energy and action in section 2.4.

### 2.3 Internal Energy and Action

Here, *x* and **s** are the output spike train and the vector of normalized PSCs, both of which are known. Because the error of equation 2.1 can be written at each moment, equation 2.14 represents the internal energy, that is, the instantaneous error. Intrinsic action is then defined as the integral of the internal energy *U* over time. The aim is to find the distributions of the hidden states *v* and parameters **w** that minimize the free energy. Under the Laplace assumption, this is equivalent to finding the conditional means and covariances of **y** and **w**, as represented by equation 2.16.

### 2.4 Variational Energy and Action

Here, *U ^{v}* and *U ^{w}* are the prior energies of *v* and **w**, under the assumptions that *v _{t}* − *v* obeys a gamma distribution and that the elements of **w** obey an exponential distribution. As with internal action, variational action is defined as the integral of variational energy over time. The solution of equation 2.16 is obtained by minimizing *V ^{y}* and *V ^{w}* (Friston et al., 2006, 2008). Because *U* can be expanded as a Taylor series, *V ^{y}* and *V ^{w}* can each be written as a second-order expansion around the conditional means (equations 2.20 and 2.21). In the case of this model, the quadratic terms are zero according to equation 2.1; thus, the proposed method is a simplified version of the DEM algorithm. If we instead assume that the hidden parameters comprise both **w** and a set of intrinsic parameters, we calculate the corresponding variational energy instead of *V ^{w}*, and the quadratic terms of equations 2.20 and 2.21 are nonzero.

### 2.5 D-Step

The update can be written in terms of a matrix *A* and a vector **b**. To minimize *V ^{y}*, the path of the mode must be equal to the mode of the path (Friston et al., 2008). Accordingly, using the steepest descent method, the MAP path is calculated as in equation 2.27, where the efficacy (step size) is a positive constant. Equation 2.27 gives the modified path of the conditional mean, which minimizes *V ^{y}*. The covariance matrix of the hidden states is a time-dependent variable: when the neuron generates a spike, it becomes zero, because the variance of the v-prior vanishes at that moment.
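In schematic form, the D-step update of the conditional mean follows generalized filtering (Friston et al., 2008): the mode's motion combines a prediction term (the derivative operator D shifting generalized coordinates) with a gradient term weighted by the efficacy κ, and the fixed point is reached exactly when the gradient vanishes, that is, when the path of the mode equals the mode of the path. The symbols below follow the DEM literature and illustrate the structure of equation 2.27 rather than reproducing it verbatim.

```latex
\dot{\mu}_{y} \;=\; D\mu_{y} \;-\; \kappa\,\frac{\partial V^{y}}{\partial \mu_{y}},
\qquad
\dot{\mu}_{y} = D\mu_{y}
\;\Longleftrightarrow\;
\frac{\partial V^{y}}{\partial \mu_{y}} = 0 .
```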

### 2.6 E-Step

We calculate the D- and E-steps in succession, iteratively (the DE algorithm), as summarized in Figure 1C. These two steps are repeated until the changes in the path of *v* and the values of **w** converge to zero. Note that the initial value of the conditional mean of **w** is defined as **0**.
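The alternation just described can be sketched as a generic fixed-point loop; `d_step` and `e_step` are hypothetical placeholder callables standing in for the updates of sections 2.5 and 2.6, shown here with a toy fixed-point problem.

```python
import numpy as np

def de_algorithm(d_step, e_step, mu_w0, n_iter=20, tol=1e-6):
    """Skeleton of the DE scheme: alternate the D-step (MAP path of
    the hidden states given the current weights) and the E-step
    (weight update given the current path) until the weight change
    vanishes. d_step and e_step are user-supplied callables."""
    mu_w = np.asarray(mu_w0, dtype=float)   # initial weights (zeros)
    path = None
    for _ in range(n_iter):
        path = d_step(mu_w)                 # D: estimate hidden states
        mu_w_new = e_step(path)             # E: re-estimate parameters
        if np.max(np.abs(mu_w_new - mu_w)) < tol:
            mu_w = mu_w_new
            break
        mu_w = mu_w_new
    return path, mu_w
```

As a toy check, the contraction `d_step(w) = 0.5 w + 1` with an identity E-step converges to the fixed point w = 2 from the zero initialization, mirroring how the loop terminates when successive weight estimates stop changing.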

### 2.7 Enhanced LIF Model

In the enhanced LIF (ELIF) model, the hidden states comprise the membrane potential *v* and the inactivity index *u*, and **y** is the corresponding column vector of states. As in equation 2.14, the sum of internal energy is represented analogously. As in the case of the LIF model, the MAP paths of *v* and *u* and the MAP estimators of **w** are obtained by repeatedly calculating the D- and E-steps.

## 3 Evaluations

### 3.1 Methods for Evaluations

The synaptic current *I _{syn}* is defined as the sum of current inputs from all receptors, where receptors are categorized into AMPA, NMDA, GABA_A, and GABA_B types, and *g _{AMPA}*, *g _{NMDA}*, *g _{GABAA}*, and *g _{GABAB}* are the total conductances of the AMPA, NMDA, GABA_A, and GABA_B receptors, respectively. The dynamics of each conductance are governed by first-order kinetics, where *k* ∈ {AMPA, NMDA, GABA_A, GABA_B} and the *g _{kj}* are synaptic inputs from presynaptic neuron *j*. To determine the conductance values of the synaptic receptors, we first randomly chose PSC amplitudes, similar to those used in the LIF and ELIF simulations, and then translated these into a conductance representation using a Heaviside step function. We assumed that for all neurons, the conductances of the NMDA and GABA_B receptors were zero. Simulations with HHD equations and the conductance-based synapse model provide a more biologically realistic version of the simplified models presented in the previous section. This enabled us to establish construct validity between the simplified and full Hodgkin-Huxley-like models in our subsequent analyses.

The duration of the simulations was *T* = 200 s. We assumed an output neuron that receives *N* = 20 input streams whose weights were unknown (see also Figure 1A). The spiking activities of the inputs **r** were assumed to independently obey a Poisson process: the probability that neuron *j* fires during the interval from *t* to *t* + *dt* was given as λ *dt*, where λ (in Hz) is the firing rate common to all input neurons. Of the input neurons, 80% were assumed to be excitatory and 20% inhibitory, and all were assumed to connect to the focal output neuron. Connection strengths were given as random values obeying an asymmetric exponential distribution. White gaussian noise (in pA) was added. LIF and ELIF were calculated by the Euler method with a temporal resolution of 0.2 ms, while HHD was calculated by the fourth-order Runge-Kutta method with a temporal resolution of 0.04 ms. In the estimation algorithms, the Euler method with a temporal resolution of 0.2 ms was used in all cases. Each simulation was conducted 20 times with differing connection strengths and background noise.
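The input ensemble described above can be generated as follows; the firing rate and the exponential scale of the weights are illustrative stand-ins for values elided in the text.

```python
import numpy as np

def make_network_drive(n_in=20, T_s=200.0, dt_ms=0.2, rate_hz=10.0,
                       exc_frac=0.8, w_scale=50.0, seed=0):
    """Simulated drive of section 3.1: n_in independent Poisson input
    trains (80% excitatory, 20% inhibitory) with strengths drawn from
    an exponential distribution. rate_hz and w_scale are illustrative
    assumptions for values elided in the text."""
    rng = np.random.default_rng(seed)
    n_steps = int(T_s * 1000.0 / dt_ms)
    p = rate_hz * dt_ms / 1000.0                 # P(spike) per time step
    spikes = (rng.random((n_steps, n_in)) < p).astype(float)
    w = rng.exponential(w_scale, n_in)           # strengths in pA
    n_inh = int(round((1.0 - exc_frac) * n_in))
    w[-n_inh:] *= -1.0                           # inhibitory inputs
    return spikes, w
```

Discretizing the Poisson process as independent Bernoulli draws per time bin is valid here because λ *dt* is small (0.002 per 0.2 ms bin at 10 Hz).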

In our comparative evaluations, when comparing different models (e.g., LIF and ELIF), we assessed accuracy by inverting the model used to generate the simulated data. In other words, when assessing the LIF, we inverted the LIF model using data from the LIF simulator.

### 3.2 Estimating Connection Strengths from Membrane Potentials

First, we estimated **w** from *v*, *x*, and **s** for a given model and set of intrinsic parameters. Because *v* was given, there was no D-step; **w** was obtained by repeatedly calculating the E-step until the change in **w** converged to **0**. The E-step was repeated 20 times. This process is exactly the same as the conventional Gauss-Newton scheme.
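When *v* is observed, the E-step reduces to a linear problem: the LIF flow is linear in **w**, so a single least-squares (Gauss-Newton) solve recovers the weights. A minimal noiseless sketch, with *C* and *g* from Table 2 and reset handling omitted for brevity:

```python
import numpy as np

def estimate_w_from_v(v, s, dt=0.2, C=100.0, g=5.0, v_rest=-70.0):
    """With an observed membrane potential path, the LIF dynamics
    C*dv/dt = -g*(v - v_rest) + s(t) @ w + noise are linear in w, so
    the maximum-likelihood weights are an ordinary least-squares
    solution (one Gauss-Newton step). Reset crossings should be masked
    in practice; this sketch assumes a subthreshold trace."""
    target = C * np.diff(v) / dt + g * (v[:-1] - v_rest)   # = s @ w + noise
    w_hat, *_ = np.linalg.lstsq(s[:-1], target, rcond=None)
    return w_hat
```

In a noiseless forward simulation the weights are recovered exactly, which is why, as noted above, the E-step with a given *v* coincides with the conventional Gauss-Newton scheme.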

Figure 2 shows the similarity between the true and estimated **w**. In Figures 2A to 2C, **w** was estimated using the LIF model; the estimates tended to be proportional to the true values. Mean square deviations of the estimated **w** from the true **w** were smaller than 4 pA in the LIF simulations, smaller than 30 pA in the ELIF simulations, and approximately 55 pA in the HHD simulations, in all of which **w** was estimated using the LIF model. In Figures 2D to 2F, **w** was estimated using the ELIF model. There was no significant difference in estimation accuracy between LIF and ELIF; however, the LIF estimator was slightly more accurate than the ELIF estimator in the HHD simulations.

These results suggest that the proposed method can be used to estimate connection strengths based on extracellular potentials. This is because such potentials are represented as the convolution of intracellular potentials and a filter function determined by properties of membrane and electrodes. It is possible to simultaneously obtain the intracellular potentials, filter function, and connection strength using variational Laplace.

### 3.3 Estimating Membrane Potentials from Spikes and Parameters

We next estimated the MAP path of *v* when **w**, the intrinsic parameters, and the model are given. Assume that the inputs **s**, the output *x*, the initial values, the parameters **w**, and the model are given. In this case, there is no E-step; moreover, the D-step is calculated only once. Therefore, the MAP path of *v* is obtained by numerically calculating equation 2.27. Although noise makes the path of *v* uncertain, the output spike train (i.e., the firing times) is given; therefore, the path of *v* can be dynamically evaluated and corrected. The v-prior is required in this case to avoid a divergence of *v*.

The simulation was conducted under the same conditions as in section 3.2; the MAP path of *v* was calculated using the LIF and ELIF models. Figure 3 shows the true and estimated trajectories of *v* at each moment, with spike trains generated from the LIF model (see Figure 3A), the ELIF model (see Figure 3B), and the HHD model (see Figure 3C). The top-left panels show the paths of the true *v* from the respective simulations. The middle-left panels show the paths of *v* estimated by LIF, and the lower-left panels show the paths of *v* estimated by ELIF. The right panels show the v-v plots between the true *v* and the *v* estimated by LIF (top right) and by ELIF (bottom right). Because noise makes *v* highly stochastic, the detailed trajectory of the estimated *v* was not identical to that of the true *v*; however, through knowledge of the spike timings, we could reset the error from the true trajectory at each spike time. The trajectories estimated by fitting LIF and ELIF were both similar to the true trajectories, indicating that the estimation of the *v* path was successful. However, the v-v plots show that estimation using LIF was more accurate than estimation using ELIF, regardless of which model was used for the simulation.

### 3.4 Estimating Membrane Potentials and Connection Strengths from Spikes

Finally, using just the simulated spike data, we estimated the membrane potential path and connection strengths using the proposed method and evaluated them against true values that could not be directly known in the estimation step. The D- and E-steps were repeated 20 times so that the changes of the *v* path and **w** converged to zero. The results are shown in Figures 4 and 5. Figures 4A and 4B show the trajectories of the estimated *v* using the LIF model; the spike train was generated from LIF (see Figure 4A) and HHD (see Figure 4B). The trajectory of the estimated *v* converged to that of the true *v* as the D-step was repeated. Figures 5A to 5F show the distribution of connection strengths. The horizontal axis indicates the true connection strengths of the simulated neural network; the vertical axis indicates the connection strengths estimated by fitting the LIF model (see Figures 5A–5C) and the ELIF model (see Figures 5D–5F). The estimated connection strengths were almost proportional to the true connection strengths (as in section 3.2). In all cases except Figure 5D, the proposed method accurately estimated connection strengths with an error ratio of less than 20% (the mean square error relative to the mean square of the true connection strengths). This result indicates that simultaneous estimation of the path of *v* and of **w** using the proposed method was successful. When data were generated from LIF, the accuracy of the ELIF estimator was worse than in the other cases (see Figure 5D).

Figure 6 outlines the properties of the proposed method, including robustness to the number of inputs (*N*), recording time (*T* [s]), background noise amplitude ([pA]), and firing rate of the inputs ([Hz]), as well as reproducibility. These evaluations used the averaged error, the mean square error between the true and estimated connection strengths in the *k*th trial, averaged over 20 trials. We first investigated the estimation error while varying *N*, with the other simulation parameters fixed (*T* = 200 s). To maintain the total input amplitude, the input firing rate was scaled inversely with *N*. Between *N* = 5 and 50 inputs, the error was lower than 40 pA (see Figure 6A). We then examined the error while varying *T* (*N* = 20). The error tended to gradually decrease between *T* = 10 and 100 s; beyond *T* = 100 s, it remained at approximately 30 pA (see Figure 6B), suggesting that 100 s is adequate for estimation and that longer recordings do not further reduce errors. Background noise had almost no effect when its amplitude was smaller than 150 pA (*N* = 20, *T* = 200 s) (see Figure 6C); above 150 pA, however, the influence of background noise increased exponentially. This result is consistent with the fact that accurate estimation of effective network structure becomes difficult when only part of the network can be observed (Isomura et al., 2012); accurate estimation of network structure requires recording neuronal activity from many parts of the network. In our evaluation, data with high-firing inputs tended to be estimated with high accuracy (*N* = 20, *T* = 200 s) (see Figure 6D).
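For reference, the two accuracy measures used in this section can be written as follows. The exact form of the paper's averaged error is elided in the text, so the RMS-over-trials reading below is an assumption.

```python
import numpy as np

def error_ratio(w_true, w_est):
    """Mean square estimation error relative to the mean square of the
    true connection strengths (the "error ratio" quoted in the text)."""
    return np.mean((w_est - w_true) ** 2) / np.mean(w_true ** 2)

def averaged_error(trials):
    """Root of the trial-averaged mean square deviation between true
    and estimated weights (trials: list of (w_true, w_est) pairs); one
    plausible reading of the elided formula."""
    mses = [np.mean((w_est - w_true) ** 2) for w_true, w_est in trials]
    return float(np.sqrt(np.mean(mses)))
```

Under this reading, "error ratio of less than 20%" means the mean square deviation is below one-fifth of the mean square of the true strengths.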

When two sets of connection strengths were estimated from two data sets generated from identical models and parameters but over different periods, the difference between the two sets (approximately 8%) was significantly smaller than the error between the true and estimated connection strengths (*N* = 20, *T* = 200 s) (see Figure 6E). In addition, changes in connection strengths under small perturbations were estimated with high accuracy (approximately 10%) (*N* = 20, *T* = 200 s) (see Figure 6F). These results suggest that the estimated connection strengths are reproducible even when spike data are recorded at different periods, as long as the spike trains are generated by identical models and parameters.

### 3.5 Comparison with Estimation without Hidden Variables

For comparison, we also estimated connection strengths without hidden variables. Here, the membrane potential at time *t* is approximated directly from the spike train: we assumed that the firing intensity is related to the membrane potential, so the potential can be approximated by an expectation of *x* around time *t*, computed as the convolution of the spike train with a filter time series of fixed length (in ms) plus a positive constant. Equation 3.2 thus represents the inverse of a GLM, giving the potential as the convolution of the spike train with a linear filter. Assuming that the resulting error obeys a gaussian distribution, we calculated the optimal filter that maximizes the log-likelihood. Note that

*x* was generated from the LIF model, and the filter was represented as a 200-dimensional vector given the time resolution used for the simulation, so that the coefficients *a _{k}* constitute a finite impulse response (FIR) filter. The estimated *a _{k}* values are shown in Figure 7A. Assuming an FIR filter is equivalent to assuming that *x* is generated from a stochastic function written as a convolution of the potential with a filter function; this method is therefore an approximation of a Poisson spiking neuron model. Then, again assuming gaussian errors, we calculated the optimal connection strength vector **w** that maximizes the corresponding log-likelihood. This enables estimation of the optimal **w** without calculating the membrane potential *v* (a hidden variable). To make the comparison fair, we adjusted the estimated **w** so that its mean and variance matched those of the true distribution of **w**.
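The non-hidden-variable baseline reduces to ordinary least squares once gaussian errors are assumed. The following sketch fits a 200-tap FIR filter mapping the spike train to the potential on toy data; the constant offset term of the original formulation is omitted for brevity.

```python
import numpy as np

def lagged_matrix(x, n_taps):
    """Design matrix whose k-th column is the spike train delayed by k bins."""
    n = len(x)
    X = np.zeros((n, n_taps))
    for k in range(n_taps):
        X[k:, k] = x[:n - k]
    return X

def fit_fir_filter(x, v, n_taps=200):
    """Maximum-likelihood FIR filter under a gaussian error assumption,
    i.e., ordinary least squares for v(t) ~ sum_k a_k x(t - k).
    n_taps = 200 matches the 200-dimensional filter in the text; the
    constant offset of the original formulation is omitted here."""
    a, *_ = np.linalg.lstsq(lagged_matrix(x, n_taps), v, rcond=None)
    return a
```

Because the design matrix consists of shifted copies of the spike train, the fit is exactly the inverse-GLM reading above: the potential is reconstructed as a linear convolution of the spikes.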

Figure 7B compares the estimation accuracy of connection strengths between the proposed and non-hidden-variable methods. Spike trains were generated from HHD, and connection strengths were estimated using both methods. When the firing rate of the inputs was small, the proposed method outperformed the non-hidden-variable method in estimating **w**. Conversely, when the firing rate was sufficiently large, the proposed method failed to accurately estimate **w** and performed worse than the non-hidden-variable method. The results indicate that when the mean firing rate of the inputs is larger than 35 spikes/s, the proposed method is not an appropriate choice. In practice, however, the mean firing rate of neurons in actual nervous systems is rarely higher than 35 spikes/s, suggesting that the proposed method is preferable for estimating **w** from spike trains of actual neural networks. When the input and output spike trains are sparse, simple correlation between input and output is not sufficient for accurately estimating **w**; in these cases, the estimated membrane potential helps. Notably, the path of *v* estimated using LIF differs from the true path of *v* generated from HHD; nevertheless, the LIF-estimated *v* assists in the accurate estimation of **w** from spike trains generated by HHD.

## 4 Discussion

We have applied a variational Laplacian technique to simultaneously estimate the MAP path of membrane potential and the MAP connection strengths of the LIF model. Our method requires the model and the spike trains of the inputs and output, but not the membrane potential or the amplitudes of the PSCs. A spike train is generally insufficient for reconstructing hidden state variables; in our case, however, if the output train is generated by the LIF model and the inputs are restricted to the form of PSCs, the MAP path of *v* can be estimated. Most essential to the proposed method is the prior on *v*. By assuming that *v* rises to *v _{t}* just before each spike time, a plausible dynamics of *v* can be calculated in the first D-step. Then, using the estimated *v*, an optimal **w** is calculated in the E-step.

Clearly, in the absence of any random fluctuations or system noise, our generative model produces a deterministic response in terms of hidden neuronal states for any given input. This means that estimation of hidden states is not required; rather, we can focus on optimizing the approximate posterior distribution over unknown parameters. However, inclusion of random fluctuations on equations of motion means that we are obliged to estimate the posterior over hidden states. In fMRI, this is called stochastic dynamic causal modeling.

The prior on *v* has a central role in the proposed method. Because our method does not assume a soft threshold for the firing function, it can also be applied to hard-threshold models (although in conventional model-based connection-strength estimation, a soft-threshold model has been used; Paninski et al., 2004). In equation 2.8, the prior acts as a barrier at *v* = *v _{t}*, where the role of *U ^{v}* is similar to that in Koyama's method when used in the hard-threshold case (Koyama & Paninski, 2010). A significant difference between the proposed method and Koyama's method is the cause of the convergence of *v* when a neuron fires. In our method, the cause of convergence is a feature of the v-prior: in equation 2.8, the shrinking scale parameter acts as the driving force of convergence. Because the scale converges to zero when a neuron fires, the prior energy increases to infinity if *v* is not at *v _{t}* and becomes zero if *v* = *v _{t}*. Another difference is the representation of the reset. The reset of *v* is written using a delta function, which enables identification of the MAP path of *v* by solving the same equation across time.

Through our experiments, we confirmed that the proposed method can estimate the connection strengths of simulated neural networks. When estimating connection strengths from spike trains alone, the error when data were generated by HHD was almost the same as when data were generated by LIF and ELIF (less than 20%), suggesting that the proposed method is robust to the neuron model used for simulation. We confirmed that the error did not significantly increase as the number of inputs increased. In addition, we found that a recording of 100 s duration was sufficient for accurate estimation, while the error increased for shorter recordings, and that the method is robust against background noise with amplitudes below 150 pA. Moreover, the method is robust over a wide range of input firing rates, although the error increased at very low rates. Therefore, we confirmed that the method is robust against wide-ranging input numbers, recording lengths, background noise amplitudes, and input firing rates.

The proposed method is easily extended to perform online learning by calculating the E-step after each D-step period, with the parameters and hyperparameters dynamically updated as in the online DEM algorithm (Friston et al., 2008). Furthermore, the method can be adapted to another neuron model (e.g., the HHD model) if a more accurate calculation of the membrane potential dynamics is required. In real systems, not every neuron can be observed; activities can be measured for only part of the neuron population. Therefore, the method must estimate connection strengths even when only part of a neural network is observable. As previously shown (Isomura et al., 2012), when a large part of a system can be observed, the estimation error of the model-based method remains almost the same as under perfect observability, indicating that the estimation accuracy can be maintained in real systems.
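The alternation described above can be sketched with a toy example. This is a hypothetical illustration, not the paper's DEM implementation: a leaky state driven through an unknown weight is processed in blocks (standing in for D-step periods), and after each block an E-step re-estimates the weight from accumulated sufficient statistics. All names and the toy model are assumptions introduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a leaky state v driven by a known input spike train s
# through an unknown weight w_true; we observe v with noise.
T, dt, tau = 2000, 1.0, 20.0
w_true = 0.8
s = rng.binomial(1, 0.1, size=T).astype(float)

v = np.zeros(T)
for t in range(1, T):
    v[t] = v[t - 1] + dt * (-v[t - 1] / tau + w_true * s[t - 1])
y = v + 0.05 * rng.standard_normal(T)  # noisy observation of the state

# Online alternation: after each block of observations ("D-step period"),
# run an E-step that refines w from accumulated sufficient statistics.
w_hat, block = 0.0, 200
Sxx = Sxy = 0.0
for start in range(0, T, block):
    sl = slice(max(start, 1), start + block)
    # D-step (degenerate here, since states are observed directly):
    # predicted passive decay, so the residual is the input-driven part.
    pred = y[sl.start - 1:sl.stop - 1] * (1 - dt / tau)
    resid = y[sl] - pred
    x = dt * s[sl.start - 1:sl.stop - 1]
    # E-step: accumulate statistics and update the weight estimate.
    Sxx += (x * x).sum()
    Sxy += (x * resid).sum()
    w_hat = Sxy / Sxx

print(round(w_hat, 2))
```

Because the statistics are accumulated rather than recomputed, each E-step costs the same regardless of how much data has been seen, which is what makes the scheme usable online.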

Although we have not considered Bayesian model selection, the real power of dynamic causal modeling is its ability to evaluate any given model in terms of free energy approximation to model evidence. This means it is possible to compare different network architectures and test the associated hypotheses. We will demonstrate this in future work.

Unlike model-free methods, which are relatively weak at calculating connection strengths, the proposed method can accurately reconstruct network topology from spike time series; moreover, it is especially accurate in detecting changes in connection strengths. These results suggest that our method can simultaneously detect activity-dependent plasticity across a neuronal network; it can therefore be used to investigate mechanisms of memory and learning. Connection strengths are known to increase by approximately 140% after 60 stimulations under spike-timing-dependent plasticity (STDP) (Markram, Lübke, Frotscher, & Sakmann, 1997; Bi & Poo, 1998). Because the proposed method can estimate such changes with an error ratio of approximately 10%, it should be able to detect changes in connection strength in actual neural networks.

From a theoretical point of view, several models of STDP have been proposed (Song, Miller, & Abbott, 2000; Clopath, Büsing, Vasilaki, & Gerstner, 2010; Gilson & Fukai, 2011; Gilson, Fukai, & Burkitt, 2012). Moreover, homeostatic plasticity and synaptic scaling, which require competition among several synapses, have been addressed (Turrigiano & Nelson, 2004), and are considered related to an unsupervised learning rule (Oja, 1982; Bell & Sejnowski, 1995). Because the proposed method can simultaneously estimate the changes of many connections from multiple points of recording data, it can be used to investigate both the dynamics of individual connection strengths and the competition among several synapses. Further, it can help detect the distribution of connection strengths (Song, Sjöström, Reigl, Nelson, & Chklovskii, 2005) and structure of microcircuits (Bastos et al., 2012).

In summary, we developed a new method of connection strength estimation using a variational Laplacian technique. The method can accurately estimate simulated connection strengths with an error ratio of less than 20%. Moreover, the method can accurately estimate the change of connection strength with an error ratio of approximately 10%, indicating that it should detect the change in connection strength in actual neural networks. This method can therefore be used to detect synaptic connection strengths in neural networks and assess network structure and synaptic plasticity.

## Acknowledgments

This work was partially supported by the Japan Society for the Promotion of Science through Grants-in-Aid for Scientific Research (KAKENHI), grants 23240065 and 26560202, and Grant-in-Aid for JSPS Fellows, grant 26-8435. We thank our reviewers for their very helpful guidance.