Abstract

Characterization of long-term activity-dependent plasticity from behaviorally driven spiking activity is important for understanding the underlying mechanisms of learning and memory. In this letter, we present a computational framework for quantifying spike-timing-dependent plasticity (STDP) during behavior by identifying a functional plasticity rule solely from spiking activity. First, we formulate a flexible point-process spiking neuron model structure with STDP, which includes functions that characterize the stationary and plastic properties of the neuron. The STDP model includes a novel function for prolonged plasticity induction, as well as a more typical function for synaptic weight change based on the relative timing of input-output spike pairs. Consideration for system stability is incorporated with weight-dependent synaptic modification. Next, we formalize an estimation technique using a generalized multilinear model (GMLM) structure with basis function expansion. The weight-dependent synaptic modification adds a nonlinearity to the model, which is addressed with an iterative unconstrained optimization approach. Finally, we demonstrate successful model estimation on simulated spiking data and show that all model functions can be estimated accurately with this method across a variety of simulation parameters, such as number of inputs, output firing rate, input firing type, and simulation time. Since this approach requires only naturally generated spikes, it can be readily applied to behaving animal studies to characterize the underlying mechanisms of learning and memory.

1  Introduction

Persistent changes in synaptic strength induced by patterns of neural activity (i.e., long-term synaptic plasticity, or LTSP) are widely implicated as an underlying mechanism for learning and memory formation (Martin, Grimwood, & Morris, 2000; Citri & Malenka, 2008). Identifying the exact nature of the activity dependency of LTSP remains an important research front in understanding the computational roles of these synaptic modifications and their underlying biological mechanisms. There has been abundant experimental evidence for spike-timing-dependent plasticity (STDP), where changes in synaptic weight can be predicted from the relative presynaptic and postsynaptic spike timing on the order of tens of milliseconds (Bi & Poo, 1998; Froemke, Debanne, & Bi, 2010; Markram, Lübke, Frotscher, & Sakmann, 1997; Song, Miller, & Abbott, 2000). The observed features of the STDP function that relates relative spike timing to synaptic modification are variable and depend on neural region, neuron type, and experimental protocol (Caporale & Dan, 2008). However, characterization of STDP in the mammalian brain during behavior has been mostly indirect, relying largely on analyses of neural responses evoked by environmental stimuli as opposed to behaviorally driven, spontaneous spiking activity (Feldman, 2012). Here, we present a computational framework for quantifying STDP during behavior by estimating a functional plasticity rule solely from spiking activity. This framework can be contrasted to other general plasticity identification approaches that use different types of observations, such as time-varying dendritic spine image observations (Loewenstein, Yanover, & Rumpel, 2015), or different types of activity dependence, such as rate-based plasticity models (Lim et al., 2015).

Neural spiking data can be utilized to infer the functional connectivity between neurons by statistically estimating how the firing of input neurons contributes to an output neuron’s conditional firing probability (Okatan, Wilson, & Brown, 2005; Stevenson, Rebesco, Miller, & Körding, 2008; Eldawlatly, Jin, & Oweiss, 2009). In our formulation, identification of the neural functional connectivity is equivalent to the estimation of feedforward nonlinear dynamics of a multiple-input, multiple-output (MIMO) neural ensemble model from spiking activity with a generalized functional additive model (Song et al., 2007, 2009). This MIMO model has been extended to a time-varying MIMO model (Chan et al., 2011) using a point-process adaptive filtering technique (Eden, Frank, Barbieri, Solo, & Brown, 2004) to track the changes of neural functional connectivity strength over time. In this study, we seek to identify a plasticity rule that can explain how spiking activity influences changes of neural functional connectivity strength (Song et al., 2015). We term this a functional plasticity rule instead of a synaptic plasticity rule because identification relies on analyses of causal relations between spiking activity and functional connectivity strength changes, as opposed to direct measures of the postsynaptic responses after plasticity induction stimuli in typical STDP studies. Specifically, we extend our previous neural functional connectivity identification technique (Song et al., 2013) by incorporating a functional STDP rule that characterizes how relative input and output spiking activity influence functional connectivity strength over time.

Several aspects of the synaptic plasticity rule need to be considered to formulate the model structure and develop the estimation procedures. First, as in most STDP implementations, the most important component of the plasticity rule is the function that relates the relative timing of pre- and postsynaptic spike pairs to changes in synaptic strength. In this study, we focus on the identification of such a pair-wise plasticity rule for simplicity, although the form of the pair-wise STDP function identification can be readily modified to study triplet-wise or even higher-order effects (Wittenberg & Wang, 2006; Wang, Gerkin, Nauen, & Bi, 2005; Froemke & Dan, 2002). Second, changes in synaptic strength caused by spikes are not immediate; instead, they occur over a prolonged period on the order of tens of seconds and then become stabilized (Gustafsson, Asztely, Hanse, & Wigstrom, 1989). In order to capture these dynamics, a function that describes the induction time course of plasticity is included in the model. Third, the dependence of the plasticity rule on the current synaptic weight has a significant effect on the dynamics, stability, and functionality of the plasticity rule (Morrison, Diesmann, & Gerstner, 2008). This weight dependence adds an extra source of nonlinearity to the system, which yields significant challenges that have to be addressed for plasticity rule identification and precludes the direct use of existing solvers.

The main goals of this letter are to formulate a computational framework for estimating the functional STDP rule using spiking data and validate the estimation method using simulated data generated from a spiking neuron model with a wide range of plasticity parameters. Here, the model takes the form of a generalized multilinear model (GMLM) to facilitate the nonlinear estimation of all model coefficients from spiking data in one neuron model dimension and two plasticity model dimensions. This is an extension of the generalized Volterra model (GVM) used in previous studies (Song et al., 2009), where each GMLM dimension is a GVM. The GVM is equivalent to a cascade of a Volterra model and a link function, which enables coefficient estimation with well-understood algorithms and convergence properties inherited from the generalized linear model (GLM) (Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; McCullagh & Nelder, 1989). Basis function expansion is also used to further reduce the number of parameters to estimate. One complicating factor in the model estimation is that the change in synaptic weight is dependent on the current synaptic weight, in addition to the input-output temporal spiking pattern. This weight dependence adds a nonlinearity that cannot be directly identified using standard GLM estimation procedures such as the reweighted least-squares method. An iterative unconstrained optimization procedure is therefore introduced to overcome this estimation difficulty associated with weight dependence.

Related identification approaches that have been used for neural modeling include the generalized bilinear model, which has properties explored in Ahrens, Paninski, and Sahani (2008), and multilinear modeling without a link function (Ahrens, Linden, & Sahani, 2008). More recently, the theoretical properties of models equivalent to a generalized multilinear model have been explored in other fields (Zhou, Li, & Zhu, 2013). These works highlight the benefits of using multilinear formulations to drastically reduce the number of parameters to estimate relative to a full-rank model.

In related work on plasticity identification from spiking activity, Stevenson and Kording (2011) use a generalized bilinear model for estimating an additive STDP rule. Assumptions in this work include immediate plasticity induction, as well as a forgetting factor where all weight changes decay with a time constant of 60 seconds to overcome weight instability. Here, prolonged plasticity induction over tens of seconds is captured with a GMLM dimension. Furthermore, achieving stability through multiplicative weight dependence without imposing a forgetting factor and tractably estimating the resultant nonlinearities is a central contribution of the work presented here. Additionally, Linderman, Stock, and Adams (2014) present a promising approach to simultaneously infer connectivity, weight trajectories, and underlying learning rules with a particle Markov chain Monte Carlo (pMCMC) technique from spike train data. This pMCMC approach, however, is demonstrated only on minute-long simulations where all weight trajectories increase or decrease monotonically or saturate.

2  Spiking Neuron and Plasticity Model

2.1  Spiking Neuron Model Structure

First, we define a stochastic spiking neuron model that generates an output spike whenever its membrane potential, w, crosses a threshold, θ:

y(t) = 1 if w(t) ≥ θ, and y(t) = 0 otherwise.    (2.1)
The membrane potential is composed of a summation of the postsynaptic potential, u, a spike-triggered after-potential, a, and gaussian white noise, ε:

w(t) = u(t) + a(t) + ε(t).    (2.2)
The postsynaptic potential, u, is calculated by convolving the input spike trains, xn, with feedforward functions, kn, each multiplied by its respective synaptic weight, gn,

u(t) = Σ_{n=1}^{N} gn(t) Σ_{τ=0}^{Mk} kn(τ) xn(t − τ),    (2.3)
where Mk refers to the memory length of the feedforward function and N is the number of inputs. Every output spike generates an after-potential, a, according to a feedback function, h:

a(t) = Σ_{τ=1}^{Mh} h(τ) y(t − τ),    (2.4)

where Mh is the memory length of the feedback function.
The gaussian white noise, ε, has standard deviation σ and captures system uncertainty caused by both intrinsic neuronal noise and unobserved, uncorrelated input. The spiking neuron model structure is shown in Figure 1. By defining u, a, and σ relative to θ, the firing probability, Pf, at any time can be expressed as

Pf(t) = Φ((u(t) + a(t) − θ)/σ),    (2.5)
where Φ is the cumulative distribution function of the standard normal distribution. As in previous work, this model quantifies functional connectivity (i.e., input-triggered changes in conditional firing probability), which cannot be strictly interpreted as physiological monosynaptic connections. (See D. Song et al., 2009, for more details on this form of spiking neuron model and for a derivation of Pf.)
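As an illustration of this generative structure, the following minimal sketch simulates equations 2.1 to 2.5 in discrete time; the array shapes, default parameter values, and function name are illustrative assumptions rather than the implementation used in this study.

```python
import numpy as np

def simulate_neuron(x, k, h, g, theta=1.0, sigma=0.5, rng=None):
    """Minimal sketch of the probit spiking neuron (eqs. 2.1-2.5).

    x : (T, N) binary input spike trains
    k : (N, Mk) feedforward kernels; h : (Mh,) feedback kernel
    g : (T, N) synaptic weights (supplied externally, e.g., by an STDP rule)
    theta, sigma : threshold and noise standard deviation (illustrative values)
    """
    rng = np.random.default_rng() if rng is None else rng
    T, N = x.shape
    Mk, Mh = k.shape[1], h.shape[0]
    y = np.zeros(T, dtype=int)
    for t in range(T):
        # postsynaptic potential: weighted sum of inputs filtered by k (eq. 2.3)
        u = sum(g[t, n] * np.dot(k[n, :min(Mk, t + 1)],
                                 x[t::-1, n][:min(Mk, t + 1)]) for n in range(N))
        # after-potential: output spike history filtered by h (eq. 2.4)
        a = np.dot(h[:min(Mh, t)], y[t - 1::-1][:min(Mh, t)]) if t > 0 else 0.0
        # threshold crossing of the noisy membrane potential (eqs. 2.1-2.2)
        y[t] = int(u + a + sigma * rng.standard_normal() >= theta)
    return y
```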
Figure 1:

(Top) Spiking neuron model. A feedforward component, feedback component, and gaussian white noise are summed and thresholded to create output spiking. (Bottom) STDP model. (A) For each input-output spike pair, the difference in timing determines the amplitude of the associated change in weight. (B) All changes in weight have a prolonged induction time course defined by functions Ixy and Iyx. (C) If Δt < 0, the associated weight change is multiplicatively weight dependent, where Δyx is multiplied by the weight in the previous time step.


2.2  Plasticity Model Structure

The STDP rule models the amplitude of the change in weight as a function of the intervals between input-output spike pairs (Δt, defined as the output spike time minus the input spike time) and the time course of the weight change induction. Since the STDP amplitude function is discontinuous around Δt = 0, the model includes two separate amplitude functions, Axy for Δt > 0 and Ayx for Δt < 0 (see Figure 1A). The induction of plasticity occurs over a much longer timescale than the tens of milliseconds spanned by the plasticity amplitude functions. We model plasticity induction with two functions: Ixy for Δt > 0 and Iyx for Δt < 0 (see Figure 1B). The implemented plasticity induction functions operate on a timescale of tens of seconds, as shown in experimental results for long-term potentiation (Gustafsson et al., 1989). The changes in weight are represented by Δxy for Δt > 0 and Δyx for Δt < 0:
Δxy(t) = Σ_m Ixy(m) Σ_{τ=1}^{MA} Axy(τ) x(t − m − τ) y(t − m),    (2.6)

Δyx(t) = Σ_m Iyx(m) Σ_{τ=1}^{MA} Ayx(τ) y(t − m − τ) x(t − m).    (2.7)
Essentially, the contribution to Δxy (or Δyx) from each spike pair is calculated by convolving the induction function Ixy (or Iyx) with the latter of the two spike times and scaling by Axy (or Ayx). Amplitude functions Axy and Ayx are defined over memory windows of length MA and are zero elsewhere, while induction functions Ixy and Iyx are defined from time zero up to the induction memory length. In the model, we make the simplifying assumption that the synaptic weight, g, instantaneously influences the output spiking probability and do not explicitly include a delay that would be caused by membrane filtering. The effective delay that would be caused by membrane filtering is less than 100 milliseconds (Pozzorini et al., 2015), which is negligible compared to the slow changes in weight in our model due to prolonged induction on the timescale of tens of seconds.
The weight dependence of the plasticity rule and system stability must also be considered. A purely additive rule, in which weight changes are independent of the current weight, is unstable; every weight increase raises the likelihood that subsequent input spikes cause output spikes, which leads to an even larger increase in synaptic weight. Stability can be imposed with an additive rule by implementing hard minimum and maximum bounds; however, the dynamics of such a rule cause rapid bimodal saturation to the upper and lower bounds. Such a bimodal distribution in synaptic weights is not supported by experimental studies (Turrigiano, Leslie, Desai, Rutherford, & Nelson, 1998; Song, Sjöström, Reigl, Nelson, & Chklovskii, 2005). An alternative way to achieve system stability is to make the effective change in weight multiplicative (scaled by the current weight) instead of additive. Experimental results demonstrate that a multiplicative update rule provides a better fit to depression data, while an additive rule is better than a multiplicative rule for potentiation data (Morrison et al., 2008). Multiplicative weight modification for the depressive portion of the rule and additive weight modification for the potentiating portion can be seen in Figure 1C and is incorporated into the model as

g(t) = g(t − 1) + Δxy(t) + g(t − 1) Δyx(t).    (2.8)
This weight dependence formulation is equivalent to what is proposed in Van Rossum, Bi, and Turrigiano (2000). The weight dependence of the depressive portion of the rule ensures stability because at low weights the effective depression is diminished, and at high weights the effective depression is enhanced. The stable levels of synaptic weight over time are determined by the relative magnitudes of the potentiating changes, Δxy, versus the depressive changes, Δyx. (See Figure 2 for a visualization of how spike pairs influence synaptic weight according to the defined STDP rule.)
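The following sketch illustrates one way equations 2.6 to 2.8 (with all-to-all pairing) could be computed from binary spike trains; the function and variable names are illustrative, and the amplitude and induction arrays are assumed to be supplied by the user.

```python
import numpy as np

def stdp_weight_trajectory(x, y, A_xy, A_yx, I_xy, I_yx, g0):
    """Sketch of the pair-based STDP rule with prolonged induction and
    multiplicative depression (eqs. 2.6-2.8), using all-to-all pairing.

    A_xy, A_yx : amplitude values over lags 1..M_A (A_yx entries are negative)
    I_xy, I_yx : induction time courses over lags 0..M_I
    """
    T, M_A = len(x), len(A_xy)
    p_xy = np.zeros(T)  # amplitude-weighted pairings keyed on the output spike
    p_yx = np.zeros(T)  # amplitude-weighted pairings keyed on the input spike
    for t in range(T):
        lags = np.arange(1, min(M_A, t) + 1)
        if y[t]:
            p_xy[t] = np.dot(A_xy[lags - 1], x[t - lags])
        if x[t]:
            p_yx[t] = np.dot(A_yx[lags - 1], y[t - lags])
    # prolonged induction: convolve pairing signals with I_xy, I_yx (eqs. 2.6-2.7)
    d_xy = np.convolve(p_xy, I_xy)[:T]
    d_yx = np.convolve(p_yx, I_yx)[:T]
    # additive potentiation, multiplicative depression (eq. 2.8)
    g, g_prev = np.empty(T), g0
    for t in range(T):
        g[t] = g_prev + d_xy[t] + g_prev * d_yx[t]
        g_prev = g[t]
    return g, d_xy, d_yx
```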
Figure 2:

Demonstration of the STDP rule. In the top portion of the figure, the relation between input-output spike pair timing and the resultant Δxy and Δyx traces is shown. For clarity, STDP amplitude and induction timescales are not drawn to scale. In the implemented rule, interspike timing intervals of interest for the STDP amplitude function are on the order of tens of milliseconds versus an induction time period of tens of seconds. In the bottom portion of the figure, the summation of Δxy and Δyx over time leads to a prolonged induction of weight change in g. The relative spike timing between the third and fourth spike pairs and the associated Δyx are equivalent; however, due to weight dependence, the contribution of Δyx is multiplied by a smaller weight value for the fourth pair and therefore leads to a smaller change in g.


3  Simulation of the Spiking Neuron and STDP Model

First, we run the spiking neuron and STDP model to generate input-output spike data for the estimation. Simulations of the spiking neuron and STDP model are run with a default configuration and several parameter permutations to test the robustness of model identification. Simulations are performed with a 1 ms time step with a default duration of 67 min (4000 s). Input spike trains, x, are generated by resampling interspike intervals from a set of microelectrode recorded hippocampal CA3 data, which has an average firing frequency of 0.76 Hz. In simulations with the default configuration, clusters of input spikes trigger changes in synaptic weight that fluctuate around a central value (see Figure 3).
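A minimal sketch of the input-generation step is given below, assuming interspike intervals are resampled with replacement from a recorded ISI collection and binned at 1 ms; the function name and sampling details are illustrative.

```python
import numpy as np

def resample_isi_train(recorded_isis_ms, duration_ms, rng=None):
    """Sketch: build a surrogate input spike train (1 ms bins) by resampling
    interspike intervals, with replacement, from a recorded ISI collection."""
    rng = np.random.default_rng() if rng is None else rng
    spikes = np.zeros(duration_ms, dtype=int)
    t = 0.0
    while True:
        t += rng.choice(recorded_isis_ms)   # draw the next interval
        if t >= duration_ms:
            break
        spikes[int(t)] = 1
    return spikes
```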

Figure 3:

Simulation of the spiking neuron and STDP model shown with multiple scales. (A) A 30 min timescale. Model input, constructed from resampled CA3 recordings, has non-Poisson spiking activity. Simulated output spiking activity has a baseline level with fluctuations correlated to input spiking activity. The simulated synaptic weight demonstrates stable and continuous fluctuations around a central value. (B) A 2 min timescale. Weight fluctuations are triggered by bursts of spike pairings. The spike pairing intervals used in calculating the STDP rule weight modifications are shown as bars. The timing of the second spike of each pairing is plotted along the horizontal axis (for all Δt > 0 pairings, the second spikes are output y spikes, while for all Δt < 0 pairings, the second spikes are input x spikes). The length of each pairing interval is plotted along the vertical axis according to the scale on the right. (C) A 500 ms timescale. Weight fluctuations are not immediately affected by concurrent spiking activity because of delayed plasticity induction. Each group of vertically stacked bars represents a group of spike pairings with the same second spike.


For each parameter permutation, 10 different simulations are performed with a set of unique resampled input trains and gaussian noise sequences. The average output firing rate for the 10 default simulation parameter trials is 6.1 Hz versus a baseline firing rate of 3.8 Hz, which would occur with no input. Parameter permutations include simulations run with 5 Hz Poisson process inputs and simulation lengths of 17 min (1,000 s) and 200 min (12,000 s).

3.1  Spiking Neuron Model

In the spiking neuron model, the feedforward function, k, has the shape of a typical excitatory postsynaptic potential (EPSP) (Sayer, Friedlander, & Redman, 1990). The feedback function, h, has a shape based on Song et al. (2009), which contains an immediate negative refractory peak followed by a delayed facilitatory component. Feedforward, k, and feedback, h, function shapes used in simulations can be seen in Figure 1.

Gaussian noise with a standard deviation that corresponds to a baseline firing rate of 3.8 Hz is used in the default configuration. An alternative way to describe this gaussian noise level is to quantify how many unobserved Poisson spiking inputs with firing rate λz and the same feedforward function would yield noise with the same standard deviation. A closed-form expression for this noise standard deviation from unobserved Poisson spiking is
σ = (Nz λz Δ Σ_τ k(τ)²)^{1/2},    (3.1)
where Nz is the number of unobserved inputs, λz is their firing rate, and Δ is the time bin interval of 1 ms. The default gaussian noise level has the same standard deviation as a set of unobserved Poisson inputs with the same firing rate as the recorded input. Lower and higher gaussian noise levels are also tested, which correspond to baseline firing rates of 2.1 Hz and 6.2 Hz, respectively.
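The equivalence can be computed with a short helper such as the sketch below, which assumes the filtered-Poisson (shot-noise) variance form; the function and argument names are illustrative.

```python
import numpy as np

def equivalent_noise_std(k, n_unobserved, rate_hz, dt_s=1e-3):
    """Sketch: standard deviation of the summed potential produced by
    n_unobserved independent Poisson inputs (rate_hz each, bins of dt_s)
    filtered through the feedforward function k (shot-noise variance)."""
    return np.sqrt(n_unobserved * rate_hz * dt_s * np.sum(np.asarray(k) ** 2))
```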

For simplicity, the majority of simulations are run with one input; however, five-input trials are used for multiple-input validation. When increasing the number of inputs from one to five, the 10-trial average output firing rate increases from 6.1 Hz to 8.4 Hz; however, the weight fluctuation levels remain similar, with averages of 0.19 with five inputs and 0.22 with one input (see Figure 4A).

Figure 4:

Simulation of the spiking neuron and STDP model with parameter variation. (A) Multiple inputs. Each line shows the weight value for an input from a simulation with five inputs. There are stable weight fluctuations in each input around an effective baseline weight. (B) Prolonged versus immediate plasticity induction. With immediate induction, weight fluctuations have substantially different features, including timing of relative maximum and minimum values. (C) Spike pairing scheme. When using a single-pairing instead of an all-to-all spike pairing scheme, there are lower weight fluctuations with differing features.


3.2  Plasticity Model

In the STDP model, the amplitude functions, Axy and Ayx, have shapes based on experimental results from Bi and Poo (1998). Identical STDP induction functions, Ixy and Iyx, are used, which have a prolonged onset over a period of tens of seconds to match the relative timescale of the induction of long-term potentiation (Gustafsson et al., 1989). Prolonged STDP induction causes substantially different weight fluctuations when compared to immediate STDP induction (see Figure 4B). STDP amplitude and induction function shapes used in simulations can be seen in Figure 1. The spike pairing scheme for the STDP rule also affects the weight fluctuations during a simulation. In equation 2.6, for each output spike, all preceding input spikes within an interval of MA are considered. Similarly, in equation 2.7, for each input spike, all preceding output spikes within MA are considered. An alternative to this all-to-all scheme is a single-pairing scheme, where only the immediately preceding spikes are considered for each output and input spike (see Figure 5). When the spike pairing scheme is changed from all-to-all to single-pairing (with identical input spiking and noise), there are significant differences in weight fluctuation features (see Figure 4C).
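The two pairing schemes can be contrasted with a sketch such as the following, which collects pre-before-post intervals only (post-before-pre intervals are handled symmetrically); spike time arrays are assumed to be sorted, and the names are illustrative.

```python
import numpy as np

def pairing_intervals(x_times, y_times, M_A, scheme="all-to-all"):
    """Sketch: collect pre-before-post pairing intervals (second spike = output)
    under the two pairing schemes; spike time arrays are assumed sorted."""
    x_times = np.asarray(x_times)
    pairs = []
    for ty in y_times:
        preceding = x_times[(x_times < ty) & (x_times >= ty - M_A)]
        if scheme != "all-to-all":
            preceding = preceding[-1:]      # single-pairing: nearest preceding input only
        pairs.extend(ty - preceding)        # positive intervals ty - tx
    return np.asarray(pairs)
```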

Figure 5:

Spike pairing scheme comparison. All-to-all: For each input (or output) spike, the interspike intervals from all preceding output (input) spike times are used in the calculations for change in synaptic weight. Single-pairing scheme: For each input (or output) spike, only the interspike intervals for the immediately preceding output (input) spikes are used in plasticity calculations.


3.3  Stable Weight Fluctuation

Synaptic weight fluctuations are determined by the relative magnitudes of Δxy and Δyx; therefore, a weight fluctuation level can be targeted by adjusting the relative magnitude of the amplitude function Ayx versus Axy. A relative depression factor, F, can be defined so that Ayx is modified from a standard function,
Ayx(Δt) = F Āyx(Δt),    (3.2)
to target different synaptic weight fluctuation levels, ḡ, in different simulations. A standard target value of 0.2 is used; however, simulations are also performed with target values of 0.1 and 0.4. These target values of ḡ = 0.1, 0.2, and 0.4 are achieved by setting F equal to 11.5, 8.25, and 5.63, respectively. The initial weight of a simulation, g0, is set to ḡ. In multiple studies, reported STDP curves depict the percentage of weight change after tens of spike pairings and typically show 100% increases in weight for spike pairings of approximately 20 ms (Bi & Poo, 1998). Here, the magnitude of weight change in Axy is specified additively and corresponds to a 100% increase in weight for 30 pairings at a short positive interval. The magnitude of Ayx is adjusted to reach stable weight fluctuations as described above.

4  Model Estimation Methods

The goal of model estimation is to identify all of the neuron and plasticity model parameters (k, h, σ, Axy, Ayx, Ixy, and Iyx) from input and output spiking activity, x and y, that maximize the likelihood of observing the output spiking activity, y. In this letter, we apply the following model estimation techniques to simulated spiking data as described in section 3.

To minimize the number of parameters to be estimated, we approximate each function as a weighted summation of a set of basis functions. For example, the feedforward function, k, can be represented as

k(τ) = Σ_{jk=1}^{Lk} ck(jk) bjk(τ),    (4.1)

where bjk is a basis function of order jk and ck(jk) is the corresponding coefficient. The term Lk denotes the number of basis functions.
The functions can be represented in vector multiplication form as

k(τ) = ck^T b(τ),    (4.2)

where ck and b(τ) are column vectors of length Lk.

All other model functions (h, Axy, Ayx, Ixy, and Iyx) are decomposed similarly with their own corresponding sets of basis functions, which allows us to recreate the model fully by estimating the coefficient vectors ck, ch, cAxy, cAyx, cIxy, and cIyx, along with the initial weight, g0. We use Laguerre basis functions (Marmarelis, 1993) for the feedforward, feedback, and STDP induction functions (k, h, Ixy, and Iyx), taking advantage of their exponentially decaying shapes. For the STDP amplitude functions, we use the more local B-spline basis functions (de Boor, 1972) to avoid overconstraining the shapes of the unknown functions.
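As an example of the B-spline expansion, the sketch below evaluates a B-spline basis on an integer lag grid using SciPy; the knot vector, degree, and window length are left to the user, and the function name is illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(knots, degree, M):
    """Sketch: evaluate a B-spline basis on the integer lag grid 0..M-1.
    `knots` is the full (clamped) knot vector chosen by the user."""
    taus = np.arange(M, dtype=float)
    n_basis = len(knots) - degree - 1
    rows = []
    for j in range(n_basis):
        elem = BSpline.basis_element(knots[j:j + degree + 2], extrapolate=False)
        rows.append(np.nan_to_num(elem(taus)))  # zero outside each element's support
    return np.vstack(rows)
```

A model function is then approximated as the weighted sum of these rows, with the weights being the coefficients to estimate.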

Parameter estimation is facilitated by formulating the neuron model of firing probability from equation 2.5 in the form of a generalized multilinear model (GMLM). Sets of parameter estimates can therefore be updated by expressing the firing probability as a GLM with a probit link function,

Pf(t) = Φ(η(t)),    (4.3)

where η(t) is a linear predictor in which the parameters to be estimated are linearly separated. Standard GLM algorithms using maximum likelihood estimation (MLE) with iterative reweighted least-squares (IRWLS) can be used for updating sets of parameter estimates with well-defined convergence properties. Unconstrained optimization is used to estimate cAyx and cIyx due to the nonlinearity caused by multiplicative weight dependence. Finally, an initialization and iteration structure is used to estimate all model functions together.
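A probit-link GLM step of this kind can be fit with standard statistical software. The sketch below uses Python and statsmodels (which fits GLMs by IRLS) rather than the Matlab implementation used in this study, and it assumes a binary spike indicator per time bin; names are illustrative.

```python
import statsmodels.api as sm

def fit_probit_glm(y, design):
    """Sketch: maximum likelihood fit of a probit-link GLM (eq. 4.3) by IRLS.
    `y` is the binary spike indicator per time bin, and `design` holds the
    linearly separated predictors for one update step."""
    family = sm.families.Binomial(link=sm.families.links.Probit())
    result = sm.GLM(y, design, family=family).fit()   # fit() uses IRLS
    return result.params, result.deviance
```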

4.1  Generalized Linear Model Representation with Basis Function Expansion

The firing probability model in equation 2.5 can be used as the basis of GLM estimation by formulating u and a with basis function expansion. The postsynaptic membrane potential, u, can be decomposed from equation 2.3 to

u(t) = g(t) ck^T vk(t),    (4.4)

where vk(t) is a vector of length Lk that represents the input spiking activity filtered through the feedforward basis functions, with elements

vk^(jk)(t) = Σ_{τ=0}^{Mk} bjk(τ) x(t − τ).    (4.5)

Note that in equation 4.4, only one input is shown for simplicity. Similarly, a(t) = ch^T vh(t).
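The filtered terms of equation 4.5 can be assembled into a design matrix by convolving the spike train with each basis function, as in the sketch below (illustrative names).

```python
import numpy as np

def filter_spikes_with_basis(x, basis):
    """Sketch of eq. 4.5: convolve a binary spike train x (length T) with each
    basis function (rows of `basis`) to form the T x L matrix of filtered terms."""
    T = len(x)
    return np.column_stack([np.convolve(x, b)[:T] for b in basis])
```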
The weight changes, Δxy and Δyx, can be expanded in a similar manner,

Δxy(t) = cAxy^T Vxy(t) cIxy,    (4.6)

Δyx(t) = cAyx^T Vyx(t) cIyx,    (4.7)

Vxy^(j,l)(t) = Σ_m bIl(m) Σ_τ bAj(τ) x(t − m − τ) y(t − m),    (4.8)

Vyx^(j,l)(t) = Σ_m bIl(m) Σ_τ bAj(τ) y(t − m − τ) x(t − m),    (4.9)
where Vxy(t) and Vyx(t) are matrices whose (j, l) elements, given in equations 4.8 and 4.9, pair the jth amplitude basis function, bAj, with the lth induction basis function, bIl. The multiplication of Δyx by the previous weight in equation 2.8 does not allow its coefficients to be linearly separated in the firing probability, which means that cAyx and cIyx cannot be estimated with GLM techniques. However, all other parameters can be estimated as a GLM by embedding the weight into the firing probability model by defining
formula
4.10
formula
4.11
formula
4.12
where the resulting matrix elements combine the basis-filtered spiking activity with the embedded weight. The difference from the filtered terms of the previous section is that the weight, which depends on its value from the previous time step, is now embedded in these terms. A full characterization of the firing probability with basis function expansion and linearly separated coefficients can be defined by combining equations 2.5, 4.4, and 4.10 into
formula
4.13

For each model function estimate, an appropriate set of basis functions must be used. For both Laguerre and B-spline sets of basis functions, the number of basis functions, L, must be chosen. In addition, for B-spline basis functions, their distribution in time must be chosen, as well as their degree. For Axy and Ayx, four third-degree B-splines are used. Knots are distributed unevenly, with higher density at smaller interval values. A time window of 200 ms is chosen over which to distribute the B-splines for Axy and Ayx because of the short relevant timescale for STDP spike pairings. For Laguerre basis functions, the Laguerre parameter α, which controls the rate of basis function decay, is chosen based on the durations of the modeled processes. For k and h, α values are chosen to span the expected durations of 200 ms; for Ixy and Iyx, α values are chosen to span the expected durations of 30 s. As L increases, α becomes less important to control. The numbers of Laguerre basis functions used for k, h, Ixy, and Iyx are set individually for each function. Laguerre basis functions have also been used to estimate STDP amplitude functions accurately (results not shown); however, B-splines are used here because they are local and not biased toward fitting function shapes with exponential decay.

4.2  Coefficient Estimation

There are four steps employed to enable model estimation from the Pf model in equation 4.13:

  1. Estimation of a subset of all parameters from a multilinear GLM form of the linear predictor η

  2. Unconstrained optimization of the non-GLM parameters cAyx and cIyx

  3. Creation of an initialization and iteration structure to estimate all parameters

  4. Normalization of all initial estimates into final model function estimates.

First, the linear predictor, η, from equation 4.3 can be expressed as
formula
4.14
where the auxiliary terms in equation 4.14, which rely on the coefficient sets held fixed at each step, are given by
formula
4.15
formula
4.16
The predictor in equation 4.14 can be considered trilinear, or multilinear, with respect to the model coefficients because these coefficients can be linearly isolated into three sets comprising the feedforward, amplitude, and induction coefficients, respectively. The maximum likelihood estimate for each of these sets is unique using the standard GLM technique of IRWLS while holding all remaining model parameters fixed.

Unconstrained optimization can be used to estimate the non-GLM parameters cAyx and cIyx given a cost function. The deviance of a GLM estimation is an indicator of model goodness of fit and can be used as a cost function to simultaneously estimate a subset of GLM parameters with non-GLM parameters. Specifically, cAyx can be optimized with a cost function given by the deviance of the GLM fit of cAxy while holding all parameters fixed except for cAyx. Likewise, cIyx can be estimated simultaneously with the GLM fit of cIxy. While several different algorithms could be used for unconstrained optimization, we use a Nelder-Mead simplex algorithm (Nelder & Mead, 1965) as implemented in Matlab.
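A sketch of such an unconstrained optimization step, here using SciPy's Nelder-Mead implementation, is shown below; deviance_given is assumed to be a user-supplied closure that refits the GLM-estimable coefficients with the non-GLM coefficients fixed and returns the resulting deviance.

```python
from scipy.optimize import minimize

def optimize_non_glm(c_init, deviance_given):
    """Sketch of one unconstrained optimization step: `deviance_given(c)` refits
    the GLM-estimable coefficients with the non-GLM coefficients fixed at c and
    returns the resulting GLM deviance, which serves as the cost."""
    result = minimize(deviance_given, c_init, method="Nelder-Mead")
    return result.x
```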

Each GLM-IRWLS step and each non-GLM unconstrained optimization step estimates a subset of parameters with respect to the remaining model parameters. In order to achieve convergence to satisfactory estimates for all of the model parameters, a naive but structured initialization and iteration scheme is used for estimation:

  1. Initialize all parameter estimates.

  2. Repeat the following steps to update the estimates of all parameters until the change in the model likelihood of observing the output spiking events is less than a convergence tolerance.

  3. Feedforward estimation of ck using GLM-IRWLS (holding the remaining parameters fixed).

  4. STDP amplitude estimation of cAxy and cAyx. Perform unconstrained optimization of cAyx (for the cost function, use the deviance of the GLM-IRWLS estimation of cAxy given cAyx while holding the remaining parameters fixed).

  5. STDP induction estimation of cIxy and cIyx. Perform unconstrained optimization of cIyx (for the cost function, use the deviance of the GLM-IRWLS estimation of cIxy given cIyx while holding the remaining parameters fixed).

Without the unconstrained optimization steps above, the iterative updating of each linear parameter set is equivalent to generalized tensor regression with rank 1 as outlined in Zhou et al. (2013).
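The overall alternating structure can be summarized by a driver loop such as the following sketch, in which fit_step is assumed to perform one pass of the feedforward, amplitude, and induction updates and to return the updated estimates together with the model log likelihood; the tolerance and iteration cap are illustrative.

```python
def estimate_all(data, init, fit_step, tol=1e-4, max_iter=50):
    """Sketch of the initialization/iteration structure (steps 1-5): `fit_step`
    performs one pass of the feedforward, amplitude, and induction updates and
    returns (updated_estimates, log_likelihood)."""
    estimates, prev_ll = init, float("-inf")
    for _ in range(max_iter):
        estimates, ll = fit_step(data, estimates)
        if abs(ll - prev_ll) < tol:   # stop when the likelihood change is small
            break
        prev_ll = ll
    return estimates
```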

Final estimates of the model functions can be recovered from the coefficient estimates by normalizing and by defining conventions to eliminate redundancy between estimates, such that the maximum of k is unity and the sum of each induction function (Ixy, Iyx) is unity.

5  Estimation Results

5.1  Estimation Performance

With the input-output spike data simulated in section 3, we estimate the spiking neuron and STDP coefficients using the method described in section 4.2. Successful identification of the neuron and STDP model is achieved for all simulation parameters given enough simulated time. Due to the stochastic nature of the simulations, there is trial-to-trial variability in fit fidelity. Trends are identified by completing 10 trials for each parameter permutation. The normalized root-mean-square error (NRMSE) can be used as a metric to analyze the relative fit fidelity of each estimated model function. The NRMSE for k in each trial, ek, is defined as
formula
5.1
and is similarly calculated for all other model functions. We define the average of ek across all 10 trials of the same parameter set as ēk, and again similarly for all other model functions.
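A sketch of one common NRMSE convention (normalization by the range of the true function) is given below; the exact normalization used in equation 5.1 may differ.

```python
import numpy as np

def nrmse(estimate, truth):
    """Sketch: root-mean-square error between an estimated and a true model
    function, normalized by the range of the true (nonconstant) function."""
    estimate, truth = np.asarray(estimate, float), np.asarray(truth, float)
    rmse = np.sqrt(np.mean((estimate - truth) ** 2))
    return rmse / (truth.max() - truth.min())
```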

The neuron model estimates of k, h, and σ are always accurate, regardless of what simulation parameters are used (the maximum NRMSE values for these functions remain low across all estimated fits). There is more variation in the STDP model estimates of Axy, Ayx, Ixy, and Iyx, as can be seen in Figures 6 and 8. Fitting results for the standard set of simulation parameters (one input, the default weight fluctuation level, natural input spiking, and 67 min) are robust and can be seen in Figure 6. Errors in the estimates of the initial weight, g0, are expected because the effective weight contribution of the initial weight diminishes quickly (to less than 10% of its initial value with default parameters after 96 s, as seen in section A.2 in appendix A and Figure 10). In addition to robust model identification, the estimated weight fluctuations in each trial match the simulated weights closely, as seen in Figure 7. For simulated versus estimated g with the standard simulation parameters, the average Pearson's correlation coefficient is 0.991. To analyze the importance of assuming prolonged induction, estimation is performed on the same generated data while assuming only immediate induction in the identification model. The resulting estimates of Axy, Ayx, and g are distorted from the simulated values but are still correlated and may still serve as useful approximations (the corresponding error and correlation values with immediate induction are degraded relative to the values of 0.16, 0.18, and 0.991, respectively, obtained with prolonged induction).

Figure 6:

Estimated spiking neuron model and STDP model functions from spiking data. All plots show the 95% confidence bounds from 10 estimation trials compared to the target function values used in the generative simulations. Successful estimation of all model functions and parameters is demonstrated. (A) Estimated spiking neuron model functions are the most accurate. (B) STDP amplitude functions are recovered more accurately than STDP induction functions.


Figure 7:

Individual trial estimated STDP functions and weight fluctuations. The 2 trials out of 10 with the lowest correlation values between estimated and simulated weight fluctuations are shown for the default simulation parameter set (the estimated STDP functions on the right correspond to the weight traces on the bottom).


Figure 8:

Comparison of fit fidelity with varying simulation parameters. Default simulation parameters are indicated with a marker and correspond to the simulation results plotted in Figures 6 and 7. Stacked bar values depict the average NRMSE values for Axy, Ayx, Ixy, and Iyx (error bars correspond to the summation of the SEM).


Estimation is also conducted with simulation parameter permutations to explore the robustness of model identification. Increasing the effective weight baseline, ḡ, the number of inputs, and the simulation length increases fit fidelity, as seen in Figure 8. Model identification is consistently achieved at 67 and 200 min, but not at 17 min. The all-to-all spike pairing scheme has better estimation properties than the single-pairing scheme (see Figure 8), likely due to the increased number of interspike intervals evaluated in the estimation process. The single-pairing scheme, however, achieves successful estimation when the simulation duration is increased to 200 min. When comparing the effect of the modeled noise standard deviation, σ, lower values (which cause lower baseline firing rates) increase fit fidelity (see Figure 8). This signifies that even with sparsely firing output neurons (with fewer spike pairings and fewer associated weight changes), STDP identification can still be achieved. This is likely due to the increased ratio of output spikes caused by input firing versus noise. The resampled CA3, physiological input spike trains used have more clustering of spikes than uncorrelated Poisson spike trains. In a comparison between 0.76 Hz physiological input (labeled Natural) and 5 Hz Poisson input (see Figure 8), STDP model identification is superior with the physiological input despite the lower firing rate. This result signifies that the STDP identification approach is well suited for physiological, sparse input spiking data.

5.2  Estimation Algorithm Analysis

The estimation algorithm contains GLM-IRWLS steps, unconstrained optimization steps, and an initialization/iteration structure that links all of the steps together. Due to the properties of the GLM with a probit link function, each of the GLM-IRWLS steps is a concave estimation problem that, if it converges, yields a unique maximum likelihood estimate for the parameter subset being estimated (Wedderburn, 1976). When combining iterative GLM steps into generalized multilinear model estimation, there is global convergence of estimates to stationary points and local convergence, where estimates are attracted toward local maxima (Zhou et al., 2013). With the addition of unconstrained optimization in the complete estimation procedure, there are no longer theoretical guarantees for estimation solutions.

Convergence, however, is still observed in practice with the implemented naive initialization procedure, with estimates reaching stationary points around the true underlying model given a large enough data duration. For each unconstrained optimization step (depicted in section 4.2, steps 4 and 5), stationary solutions are achieved in 203 or fewer iterations of the implemented Nelder-Mead simplex algorithm (see Figure 9A). Estimates for the entire set of parameters reach stationary points within two repetitions of steps 3 to 5 in section 4.2. With 67 minutes of data, all estimated STDP functions are equivalent when initialized naively versus perfectly to the underlying model, with a low average NRMSE across all trials (see Figure 9B). When tested with arbitrarily large initialization points, solutions do not necessarily converge, and estimates frequently do not match those obtained with perfect initialization. Therefore, the initialization scheme implemented is both effective and necessary in enabling estimation.

Figure 9:

Convergence of estimation algorithm. (A) Each unconstrained optimization step identified a stationary solution in 203 or fewer total iterations. (B) Average NRMSE between estimated STDP functions when initialized naively versus perfectly to the underlying model. All error bars denote maximum and minimum values across trials (default values used for unlabeled parameters).


An alternative expression for the predictor in equation 4.14 is one that is full rank (Ahrens, Paninski, et al., 2008) instead of multilinear. The Volterra representation is also full rank as opposed to multilinear (Marmarelis, 2004). A full-rank expression has a separate coefficient for every combination of multilinear coefficients, which makes the predictor linear with respect to the GLM parameters. An advantage of full rank over the multilinear expression is that estimation does not have to be iterative and would yield a global optimum. The full-rank model is also capable of describing more complex interactions. However, the number of coefficients that must be estimated in the full-rank model is substantially larger: on the order of the product of the per-dimension coefficient counts, as opposed to their sum in the multilinear model. The multilinear model is also more interpretable because the k, Axy, and Ixy functions are estimated separately, while a full-rank model would be described by a three-dimensional function that would be hard to visualize. Furthermore, for the non-GLM parameters, full-rank estimation would necessitate unconstrained optimization over LA × LI parameters, which could become intractable, as opposed to separate steps of estimating LA and LI parameters in the multilinear model.

6  Discussion

Here, we first formulate a flexible spiking neuron model structure with STDP, which includes functions that characterize the stationary (k, h, σ) and plastic (Axy, Ayx, Ixy, Iyx) properties of the neuron. By incorporating functions for prolonged induction of STDP (Ixy, Iyx), we account for the fact that STDP evolves with time instead of being instantaneous. Next, we formalize an estimation technique using the generalized multilinear model (GMLM) structure and introduce an iterative approach to address the nonlinear dependence of the model parameters on synaptic weight. Finally, we demonstrate successful model estimation on simulated spiking data and show that all model parameters can be estimated accurately with this method. There is relatively higher variability, however, in the plasticity model parameter estimates (g0, Axy, Ayx, Ixy, Iyx) than in the spiking neuron model parameter estimates (k, h, σ). Several factors can enhance the fitting performance: (1) higher weight fluctuation levels, (2) longer simulation time, (3) lower background firing rate, (4) larger number of inputs, (5) using physiological instead of Poisson spike trains, and (6) using an all-to-all instead of a single-pairing spike pairing scheme. Any simulation parameter setting that shows decreased fitting accuracy can be compensated for by increasing simulation time. While most simulation parameters yield excellent fits with approximately 1 hour of simulated data, some fits require as little as tens of minutes or as much as 200 minutes of simulated data. Performance indicates that this identification method is robust to conditions with bursting inputs and varied baseline firing rates.

The presented identification methodology relies on a generalized multilinear model (GMLM) formulation for parameter estimation, where output firing probability is expressed with a multilinear predictor function coupled to a probit link function. The main advantages of using a multilinear model as opposed to a full-rank model (i.e., Volterra model) are a reduction in the number of parameters to estimate and improving the interpretability of results. For example, the five-input trilinear model used here has 39 open parameters, while a full-rank representation would have 485 open parameters. These advantages have motivated related approaches including neuron model estimation as a multilinear model without a link function (Ahrens, Linden, et al., 2008), and as a generalized bilinear model (GBLM) (Ahrens, Paninski, et al., 2008). The current trilinear model can be extended to include higher-order multilinearities with only a moderate increase in parameters to make the model more general. For example, the pair-wise STDP rule here could be enhanced to consider triplets (Wittenberg & Wang, 2006; Wang et al., 2005; Froemke & Dan, 2002; Pfister & Gerstner, 2006) with an extra multilinear model dimension (see appendix D).

The addition of subsets of non-GLM parameters estimated with unconstrained optimization here differentiates this model structure and its convergence properties from the GBLM (Ahrens, Paninski, et al., 2008) and models equivalent to a GMLM (Zhou et al., 2013). Given sufficient data duration, however, solutions with the implemented naive initialization and iteration structure are equivalent to those perfectly initialized to the true underlying model. Successful incorporation of nonlinear parameters in the model signifies that the model could likely be extended to include additional nonlinear features.

This methodology is an extension of our previous nonlinear neuron ensemble modeling (Song et al., 2007, 2009) to include plasticity and shares the previous advantages of interpretable spiking neuron model parameters, basis function expansion, and GLM expression. The neuron model has separate feedforward, feedback, and noise parameters, which can be interpreted as functional connectivity (postsynaptic potential), after-potential, and baseline firing rate, respectively. For simplicity, we include only the first-order feedforward Volterra kernel, k, in this study. However, second-order Volterra kernels can be added to characterize paired-pulse facilitation or depression (short-term synaptic plasticity) and nonlinear interactions between more than one input. The formulation of the firing probability of the neuron model as a GLM with basis function expansion allows using standard techniques to estimate the neuron model with a low number of open parameters. Overfitting, which is not specifically addressed here due to the relatively low number of parameters estimated, could be minimized with regularized or penalized estimation (Song et al., 2013). In our previous plasticity identification work, we used a two-stage estimation procedure that first tracks the synaptic weight trajectory with adaptive filtering and then estimates the plasticity rule using spiking activity and the tracked weight fluctuations (Robinson, Song, & Berger, 2013; Song et al., 2015). The approach presented here makes an important improvement by estimating the synaptic plasticity rule in a single stage and thus effectively mitigates the loss of information between stages of estimation.

Additional assumptions in the plasticity model about weight dependence directly affect STDP estimation, stability, and functionality. The inclusion of weight dependence adds an extra source of nonlinearity (further explained in appendix C), which complicates model estimation and necessitates the use of unconstrained optimization. A related study uses a GBLM for STDP identification (Stevenson & Kording, 2011); however, prolonged induction and the central challenges of weight dependence and stability were not addressed. Here, the implemented multiplicative, weight-dependent rule enables system stability and, in simulations, causes the weight to fluctuate continuously around an effective baseline value, ḡ. A model for plasticity involved in learning, however, implies the ability to form persistent changes in weight, which were not seen in simulations with these continuous weight fluctuations. To better understand the persistence of weight changes caused by the plasticity rule in this study, an analysis was performed (see appendix B), which showed that weight changes in simulations were indeed short-lived, lasting less than 3 minutes. Since the inputs into the model were unpatterned, however, it is reasonable to expect that they would not cause persistent weight changes. Longer-lasting weight changes may be seen with correlated input or periods of lower output spiking activity. Additionally, persistence of weight changes is enhanced with a similar weight-dependent STDP rule when lateral inhibition is considered (Billings & Van Rossum, 2009). To further enable persistent change, more features could be included and estimated in the plasticity rule, such as synaptic tagging and capture (Redondo & Morris, 2011), dopamine release based on novel stimulus timing (Lisman & Grace, 2005), calcium levels (Shouval, Bear, & Cooper, 2002), and reliance on synaptic membrane potential (Clopath, Büsing, Vasilaki, & Gerstner, 2010).

Overall, we present a methodology for estimating a STDP rule from observed spiking activity, a departure from characterizing STDP by measuring postsynaptic potentials before and after applying a spike-pairing protocol (e.g., Bi & Poo, 1998; Froemke & Dan, 2002). First, this allows STDP characterization to be applied to a broader set of studies that can be in vivo, mammalian, hippocampal, and during a wide array of cognitive tasks. When applied to cognitive tasks, the estimated STDP rule would be especially insightful because it would be descriptive of plasticity exhibited during cognitive processes such as learning and memory formation. Furthermore, all spiking activity would be naturally generated and would not rely on any electrophysiological intervention (such as presynaptic stimulation or postsynaptic current injection). The identified plasticity rule would also be conditioned on a natural distribution and large number of spiking intervals. The rule therefore may be more generalizable because it characterizes the operating pair-wise component of plasticity during a complete and wide range of neural activity patterns (as opposed to a constrained set of interval repetitions). Furthermore, STDP identification is based on observations of weight trajectories that are of long duration (i.e., hours) and high resolution (i.e., observations on the millisecond timescale). Estimation with such complex weight trajectories enables us to investigate subtle but important plasticity rule assumptions such as weight dependence and spike pairing scheme. These assumptions are important because an absence of weight dependence leads to rapid bimodal saturation (Rubin, Lee, & Sompolinsky, 2001) and the nearest-neighbor instead of all-to-all spike pairing scheme can relate STDP to BCM (Izhikevich & Desai, 2003).

When applied to in vivo spike train data, plasticity identification would estimate functional connectivity strength as opposed to synaptic connectivity strength, since the recorded input and output neurons may or may not be synaptically connected. Nevertheless, the identified plasticity rule can still yield insights into the effect of activity-dependent plasticity on the signal transformational properties of a neural ensemble. Furthermore, the identified plasticity rule might be used to extend the current stationary MIMO models used in the hippocampal memory prostheses (Berger et al., 2012) to nonstationary, self-organizing MIMO models for the development of next-generation, adaptive cortical neural prostheses.

Appendix A:  Ordinary Differential Equation Solution

Analysis of the exact solution to the ordinary differential equation describing weight change yields insights into the persistence of weight and the nonlinear nature of the implemented STDP model. The simulation and estimation presented previously are iterative with a discrete time step of 1 ms. As the time step decreases, the discrete representation for weight in equation 2.8 approaches the continuous description

dg(t)/dt = Δxy(t) + g(t) Δyx(t).    (A.1)
This continuous description of weight is an ordinary differential equation with an exact solution given by

g(t) = g(t0) exp(∫_{t0}^{t} Δyx(s) ds) + ∫_{t0}^{t} Δxy(s) exp(∫_{s}^{t} Δyx(r) dr) ds.    (A.2)
A discrete representation of the exact solution is given by

g(t) = g0 exp(Σ_{s=t0+1}^{t} Δyx(s)) + Σ_{s=t0+1}^{t} Δxy(s) exp(Σ_{r=s+1}^{t} Δyx(r)),    (A.3)

g(t) = gS(t) + gD(t).    (A.4)
The current weight, g(t), can be broken down into a static component, gS(t), which is the contribution from the initial weight, and a dynamic component, gD(t), which is the contribution from the potentiating changes in weight, Δxy. Inspection of gS(t) shows that the contribution of the starting weight, g0, to the current weight, g(t), is diminished exponentially by the sum of all depressive changes in weight between the initial time, t0, and the current time, t. Similarly, as seen in gD(t), the contribution to the current weight from a past time step's potentiating change in weight, Δxy(s), is diminished by all depressive changes in weight between that time step, s, and the current time, t.
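The decomposition can be computed directly from the Δxy and Δyx traces, as in the sketch below, which uses the product form implied by the discrete recursion in equation 2.8 (assuming 1 + Δyx remains positive); names are illustrative.

```python
import numpy as np

def weight_decomposition(d_xy, d_yx, g0):
    """Sketch: split the weight trajectory implied by eq. 2.8 into a static
    component g_S (initial-weight contribution) and a dynamic component g_D
    (contribution of potentiating changes), cf. eqs. A.3-A.4."""
    decay = np.cumprod(1.0 + np.asarray(d_yx))   # assumes 1 + d_yx > 0 throughout
    g_S = g0 * decay
    g_D = decay * np.cumsum(np.asarray(d_xy) / decay)
    return g_S, g_D
```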

Appendix B:  Weight Persistence Analysis

In Figure 10, computations of gS(t) for several simulations show that the contribution of the initial weight is relatively short-lived (the average amount of time for gS(t) to decay to less than 10% of its initial value is 142 s or less in all analyzed parameter sets). This rate of decay, however, is influenced by the simulation parameters. Higher-frequency Poisson input quickens the decay compared to natural bursting input, for which the decay time is 96 s. Higher background firing rates also cause quicker decays (96 s and 51 s for background firing rates of 3.8 Hz and 6.2 Hz, respectively, with a slower decay at 2.1 Hz).

Figure 10:

Comparison of the decay of the initial weight contribution for several parameters. All plotted traces are the average of 10 trials. (A) Bursting input has an average firing rate of 0.76 Hz versus 5 Hz for Poisson input. (B) Background firing rates, fB, are controlled by modifying the simulation noise level, σ.


These general trends are expected: at higher rates of input and output firing, there is more weight modification, and the persistence of the starting weight is diminished. The rapid decay of gS explains errors in the estimation of g0 and demonstrates that increasing the length of data used for fitting will not lead to better g0 estimates. Furthermore, especially for fits with longer data, estimation could be simplified by omitting the effect of g0 entirely. Currently, the model used for estimation uses the full spiking history for the weight calculation; however, the capability and speed of estimation may be enhanced by considering only a memory window that encompasses the spike history contributing substantially to the current weight.

As can be seen in equation A.3, the same exponential term that dictates the decay of the initial weight's effect is also applied to the weight change produced by the STDP rule at each time step. Therefore, conclusions about the persistence of the initial weight also generalize to the persistence of the weight changes induced at each time step by the STDP rule.
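The decay times summarized in Figure 10 can be reproduced from simulated traces with a small helper like the one below (a sketch assuming the same hypothetical dw_minus array and a 1 ms step); because g_s(t) = g0 exp(Σ Δw_-), the crossing time does not depend on g0 itself.

```python
import numpy as np

def persistence_time(dw_minus, dt=1e-3, fraction=0.1):
    """Time (in s) for the initial-weight contribution g_s(t) to fall below
    `fraction` of g0, i.e., the first t with cumsum(dw_minus) < log(fraction).
    Returns None if the threshold is never crossed."""
    crossed = np.nonzero(np.cumsum(dw_minus) < np.log(fraction))[0]
    return crossed[0] * dt if crossed.size else None
```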

Appendix C:  Nonlinearity of Weight Dependence

Expression of the exact solution also gives more insight into the nonlinear nature of the weight equation and the challenges in coefficient estimation. In equation A.3, the potentiating changes, Δw_+, are balanced by the exponentials of the summed depressive changes, Δw_-. Therefore, local solution optima may exist where the magnitudes of Δw_+ and Δw_- are both larger or smaller than their ideal values. The nonlinearities involved in plasticity coefficient estimation can also be seen by applying the basis function decomposition of equations 4.6 and 4.7, together with equation A.4, to equation A.3, expressing g(t) as
\[
g(t) = g_{0}\exp\!\left(\sum_{\tau=t_{0}}^{t}\sum_{k} c_{-}^{(k)} B_{-}^{(k)}(\tau)\right)
+ \sum_{s=t_{0}}^{t}\left(\sum_{j} c_{+}^{(j)} B_{+}^{(j)}(s)\right)
\exp\!\left(\sum_{\tau=s}^{t}\sum_{k} c_{-}^{(k)} B_{-}^{(k)}(\tau)\right),
\tag{C.1}
\]
where c_+^(j) and c_-^(k) are the plasticity coefficients and B_+^(j)(τ) and B_-^(k)(τ) are the corresponding basis-expanded covariates from equations 4.6 and 4.7.
Coefficients g0 and c_+^(j) are linearly separable and can therefore be estimated as part of a GLM. However, the coefficients c_-^(k) appear inside exponential functions that are separately summed at each time step, which precludes use of the GLM and necessitates unconstrained optimization. Overall, the expression of the exact weight solution in equation A.3 allows further inspection of the dynamics of the implemented STDP rule, including weight persistence, as well as insight into estimation challenges and potential improvements.
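The alternation this implies can be sketched as follows. This is not the GMLM point-process estimator of the main text; it is a simplified least-squares illustration of the same structure: with the depression coefficients held fixed, the weight expression is linear in g0 and the potentiation coefficients and can be fit directly, and the depression coefficients are then updated by unconstrained (Nelder-Mead) optimization. The names g_obs, B_plus, and B_minus are hypothetical stand-ins for an observed weight trace and the basis-expanded covariates.

```python
import numpy as np
from scipy.optimize import minimize

def fit_alternating(g_obs, B_plus, B_minus, n_iter=5):
    """Toy alternating estimation illustrating the structure of equation C.1.

    g_obs   : observed weight trace, shape (T,)
    B_plus  : basis-expanded potentiation covariates, shape (T, J)
    B_minus : basis-expanded depression covariates, shape (T, K)
    """
    c_minus = np.zeros(B_minus.shape[1])

    def predict(g0, c_plus, c_minus):
        cum_dep = np.cumsum(B_minus @ c_minus)           # running depressive sum
        dw_plus = B_plus @ c_plus                        # potentiating increments
        disc = dw_plus * np.exp(-(cum_dep - B_minus @ c_minus))
        return g0 * np.exp(cum_dep) + np.exp(cum_dep) * np.cumsum(disc)

    def linear_fit(c_minus):
        # Design matrix for the linear part: one column for g0 plus one column
        # per potentiation basis, all scaled by the depression discount.
        cum_dep = np.cumsum(B_minus @ c_minus)
        decay = np.exp(cum_dep)
        cols = [decay]
        for j in range(B_plus.shape[1]):
            disc = B_plus[:, j] * np.exp(-(cum_dep - B_minus @ c_minus))
            cols.append(decay * np.cumsum(disc))
        X = np.column_stack(cols)
        theta, *_ = np.linalg.lstsq(X, g_obs, rcond=None)
        return theta[0], theta[1:]

    for _ in range(n_iter):
        g0, c_plus = linear_fit(c_minus)                 # linear (GLM-like) step
        obj = lambda c: np.mean((predict(g0, c_plus, c) - g_obs) ** 2)
        c_minus = minimize(obj, c_minus, method="Nelder-Mead").x  # nonlinear step
    return g0, c_plus, c_minus
```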

Appendix D:  Model Extension to Triplet STDP Estimation

In the minimal model considered in Pfister and Gerstner (2006), results from triplet and quadruplet experimental protocols can be captured by considering triplet sequences that end with a postsynaptic spike. These triplets can be formulated in our model by expanding the pairwise STDP amplitude function, w(τ), to a triplet function, w_3(τ1, τ2), where τ1 is the timing of the output spike relative to the input spike and τ2 is the timing of the output spike relative to the preceding output spike. This formulation is equivalent to the minimal model considered in Pfister and Gerstner (2006) and Gjorgjieva, Clopath, Audet, and Pfister (2011) if parameterized with exponentially decaying functions of τ1 and τ2. To fit directly to spiking activity, however, and without assuming a strict function shape, we can express
\[
w_{3}(\tau_{1},\tau_{2}) = w(\tau_{1}) + w_{T}(\tau_{1},\tau_{2})
\tag{D.1}
\]
and approximate the triplet component bilinearly with basis function decomposition as
\[
w_{T}(\tau_{1},\tau_{2}) \approx \sum_{j}\sum_{k} c_{T}^{(j,k)}\, B^{(j)}(\tau_{1})\, B^{(k)}(\tau_{2}),
\tag{D.2}
\]
where B^(j) are the basis functions used for the pairwise STDP amplitude function and c_T^(j,k) are the triplet coefficients.
When incorporated into the entire estimation procedure, the only extra parameters to estimate would be the triplet coefficients, c_T^(j,k), which could be captured by augmenting the model in equation 4.13 with an extra multilinear dimension (four instead of three). These extra parameters could be estimated by adding an extra step in the iteration structure after the estimation of the pairwise plasticity coefficients in section 4.2, similar to how additional dimensions are estimated in Zhou et al. (2013).
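A minimal sketch of evaluating the bilinear expansion in equation D.2, with hypothetical basis functions and coefficients; exponentially decaying "bases" are used here only to mimic the shape of the Pfister-Gerstner parameterization.

```python
import numpy as np

def triplet_amplitude(tau1, tau2, C, basis1, basis2):
    """Bilinear basis expansion of a triplet amplitude term (cf. equation D.2):
    w_T(tau1, tau2) ~= sum_{j,k} C[j, k] * basis1[j](tau1) * basis2[k](tau2)."""
    b1 = np.array([b(tau1) for b in basis1])   # J basis values at tau1
    b2 = np.array([b(tau2) for b in basis2])   # K basis values at tau2
    return b1 @ C @ b2

# Hypothetical exponentially decaying bases (time constants in seconds) and
# hypothetical coefficients, for tau1, tau2 > 0.
basis1 = [lambda t, tc=tc: np.exp(-t / tc) for tc in (0.017, 0.034)]
basis2 = [lambda t, tc=tc: np.exp(-t / tc) for tc in (0.040, 0.100)]
C = np.array([[0.005, 0.0], [0.0, 0.006]])
print(triplet_amplitude(0.010, 0.050, C, basis1, basis2))
```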

Acknowledgments

This work has been supported in part by the Defense Advanced Research Projects Agency through grant N66001-14-C-4016, in part by the National Science Foundation (NSF) through the Integrated NSF Support Promoting Interdisciplinary Research and Education program, and in part by the National Institutes of Health through grant U01 GM104604.

References

Ahrens, M. B., Linden, J. F., & Sahani, M. (2008). Nonlinearities and contextual influences in auditory cortical responses modeled with multilinear spectrotemporal methods. Journal of Neuroscience, 28(8), 1929–1942.
Ahrens, M. B., Paninski, L., & Sahani, M. (2008). Inferring input nonlinearities in neural encoding models. Network, 19(1), 35–67.
Berger, T., Song, D., Chan, R. H. M., Marmarelis, V. Z., LaCoss, J., Wills, J., … Granacki, J. J. (2012). A hippocampal cognitive prosthesis: Multi-input, multi-output nonlinear modeling and VLSI implementation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20(2), 198–211.
Bi, G. Q., & Poo, M. M. (1998). Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience, 18(24), 10464–10472.
Billings, G., & Van Rossum, M. C. W. (2009). Memory retention and spike-timing-dependent plasticity. Journal of Neurophysiology, 101(6), 2775–2788.
Caporale, N., & Dan, Y. (2008). Spike timing-dependent plasticity: A Hebbian learning rule. Annual Review of Neuroscience, 31, 25–46.
Chan, R. H. M., Song, D., Goonawardena, A. V., Bough, S., Sesay, J., Hampson, R. E., … Berger, T. W. (2011). Tracking the changes of hippocampal population nonlinear dynamics in rats learning a memory-dependent task. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 3326–3329). Piscataway, NJ: IEEE.
Citri, A., & Malenka, R. C. (2008). Synaptic plasticity: Multiple forms, functions, and mechanisms. Neuropsychopharmacology, 33(1), 18–41.
Clopath, C., Büsing, L., Vasilaki, E., & Gerstner, W. (2010). Connectivity reflects coding: A model of voltage-based STDP with homeostasis. Nature Neuroscience, 13(3), 344–352.
de Boor, C. (1972). On calculating with B-splines. Journal of Approximation Theory, 6, 50–62.
Eden, U. T., Frank, L. M., Barbieri, R., Solo, V., & Brown, E. N. (2004). Dynamic analysis of neural encoding by point process adaptive filtering. Neural Computation, 16(5), 971–998.
Eldawlatly, S., Jin, R., & Oweiss, K. G. (2009). Identifying functional connectivity in large-scale neural ensemble recordings: A multiscale data mining approach. Neural Computation, 21(2), 450–477.
Feldman, D. E. (2012). The spike-timing dependence of plasticity. Neuron, 75(4), 556–571.
Froemke, R. C., & Dan, Y. (2002). Spike-timing-dependent synaptic modification induced by natural spike trains. Nature, 416(6879), 433–438.
Froemke, R. C., Debanne, D., & Bi, G. Q. (2010). Temporal modulation of spike-timing-dependent plasticity. Frontiers in Synaptic Neuroscience, 2, 19.
Gjorgjieva, J., Clopath, C., Audet, J., & Pfister, J.-P. (2011). A triplet spike-timing-dependent plasticity model generalizes the Bienenstock-Cooper-Munro rule to higher-order spatiotemporal correlations. Proceedings of the National Academy of Sciences of the United States of America, 108(48), 19383–19388.
Gustafsson, B., Asztely, F., Hanse, E., & Wigstrom, H. (1989). Onset characteristics of long-term potentiation in the guinea-pig hippocampal CA1 region in vitro. European Journal of Neuroscience, 1(4), 382–394.
Izhikevich, E. M., & Desai, N. S. (2003). Relating STDP to BCM. Neural Computation, 15(7), 1511–1523.
Lim, S., McKee, J. L., Woloszyn, L., Amit, Y., Freedman, D. J., Sheinberg, D. L., & Brunel, N. (2015). Inferring learning rules from distributions of firing rates. Nature Neuroscience, 18(12), 1–13.
Linderman, S., Stock, C. H., & Adams, R. P. (2014). A framework for studying synaptic plasticity with neural spike train data. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, & K. Q. Weinberger (Eds.), Advances in neural information processing systems, 27 (pp. 2330–2338). Red Hook, NY: Curran.
Lisman, J. E., & Grace, A. A. (2005). The hippocampal-VTA loop: Controlling the entry of information into long-term memory. Neuron, 46(5), 703–713.
Loewenstein, Y., Yanover, U., & Rumpel, S. (2015). Predicting the dynamics of network connectivity in the neocortex. Journal of Neuroscience, 35(36), 12535–12544.
Markram, H., Lübke, J., Frotscher, M., & Sakmann, B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297), 213–215.
Marmarelis, V. Z. (1993). Identification of nonlinear biological systems using Laguerre expansions of kernels. Annals of Biomedical Engineering, 21(6), 573–589.
Marmarelis, V. Z. (2004). Nonlinear dynamic modeling of physiological systems. Hoboken, NJ: Wiley.
Martin, S. J., Grimwood, P. D., & Morris, R. G. M. (2000). Synaptic plasticity and memory: An evaluation of the hypothesis. Annual Review of Neuroscience, 23, 649–711.
McCullagh, P., & Nelder, J. A. (1989). Generalized linear models. London: Chapman and Hall.
Morrison, A., Diesmann, M., & Gerstner, W. (2008). Phenomenological models of synaptic plasticity based on spike timing. Biological Cybernetics, 98(6), 459–478.
Nelder, J., & Mead, R. (1965). A simplex method for function minimization. Computer Journal, 7(4), 308–313.
Okatan, M., Wilson, M., & Brown, E. (2005). Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Computation, 17(9), 1927–1961.
Pfister, J.-P., & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. Journal of Neuroscience, 26(38), 9673–9682.
Pozzorini, C., Mensi, S., Hagens, O., Naud, R., Koch, C., & Gerstner, W. (2015). Automated high-throughput characterization of single neurons by means of simplified spiking models. PLoS Computational Biology, 11(6).
Redondo, R. L., & Morris, R. G. M. (2011). Making memories last: The synaptic tagging and capture hypothesis. Nature Reviews Neuroscience, 12(1), 17–30.
Robinson, B. S., Song, D., & Berger, T. W. (2013). Laguerre-Volterra identification of spike-timing-dependent plasticity from spiking activity: A simulation study. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 5578–5581). Piscataway, NJ: IEEE.
Rubin, J., Lee, D. D., & Sompolinsky, H. (2001). Equilibrium properties of temporally asymmetric Hebbian plasticity. Physical Review Letters, 86(2), 364–367.
Sayer, R. J., Friedlander, M. J., & Redman, S. J. (1990). The time course and amplitude of EPSPs evoked at synapses between pairs of CA3/CA1 neurons in the hippocampal slice. Journal of Neuroscience, 70, 828–838.
Shouval, H. Z., Bear, M. F., & Cooper, L. N. (2002). A unified model of NMDA receptor-dependent bidirectional synaptic plasticity. Proceedings of the National Academy of Sciences of the United States of America, 99(16), 10831–10836.
Song, D., Chan, R. H. M., Marmarelis, V. Z., Hampson, R. E., Deadwyler, S. A., & Berger, T. W. (2007). Nonlinear dynamic modeling of spike train transformations for hippocampal-cortical prostheses. IEEE Transactions on Biomedical Engineering, 54(6), 1053–1066.
Song, D., Chan, R. H. M., Marmarelis, V. Z., Hampson, R. E., Deadwyler, S. A., & Berger, T. W. (2009). Nonlinear modeling of neural population dynamics for hippocampal prostheses. Neural Networks, 22(9), 1340–1351.
Song, D., Chan, R. H. M., Robinson, B. S., Marmarelis, V. Z., Opris, I., Hampson, R. E., … Berger, T. W. (2015). Identification of functional synaptic plasticity from spiking activities using nonlinear dynamical modeling. Journal of Neuroscience Methods, 244, 123–135.
Song, D., Wang, H., Tu, C. Y., Marmarelis, V. Z., Hampson, R. E., Deadwyler, S. A., & Berger, T. W. (2013). Identification of sparse neural functional connectivity using penalized likelihood estimation and basis functions. Journal of Computational Neuroscience, 35(3), 335–357.
Song, S., Miller, K. D., & Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9), 919–926.
Song, S., Sjöström, P. J., Reigl, M., Nelson, S., & Chklovskii, D. B. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology, 3(3), e68.
Stevenson, I. H., & Kording, K. P. (2011). Inferring spike-timing-dependent plasticity from spike train data. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, & K. Q. Weinberger (Eds.), Advances in neural information processing systems (pp. 2582–2590). Red Hook, NY: Curran.
Stevenson, I. H., Rebesco, J. M., Miller, L. E., & Körding, K. P. (2008). Inferring functional connections between neurons. Current Opinion in Neurobiology, 18(6), 582–588.
Truccolo, W., Eden, U. T., Fellows, M. R., Donoghue, J. P., & Brown, E. N. (2005). A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. Journal of Neurophysiology, 93(2), 1074–1089.
Turrigiano, G. G., Leslie, K. R., Desai, N. S., Rutherford, L. C., & Nelson, S. B. (1998). Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature, 391(6670), 892–896.
Van Rossum, M. C., Bi, G. Q., & Turrigiano, G. G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. Journal of Neuroscience, 20(23), 8812–8821.
Wang, H. X., Gerkin, R. C., Nauen, D. W., & Bi, G. Q. (2005). Coactivation and timing-dependent integration of synaptic potentiation and depression. Nature Neuroscience, 8(2), 187–193.
Wedderburn, R. W. M. (1976). On the existence and uniqueness of the maximum likelihood estimates for certain generalized linear models. Biometrika, 63(1), 27–32.
Wittenberg, G. M., & Wang, S. S. H. (2006). Malleability of spike-timing-dependent plasticity at the CA3-CA1 synapse. Journal of Neuroscience, 26(24), 6610–6617.
Zhou, H., Li, L., & Zhu, H. (2013). Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502), 540–552.