## Abstract

We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.

## 1.  Introduction

Brain-machine interfaces (BMIs) establish direct communications between living brain tissue and external devices such as an artificial arm by sensing and interpreting neuronal activity and translating it to the external device (Nicolelis & Lebedev, 2009). Recent advances in BMIs have shown the capability of BMI-based neuroprostheses in rehabilitating motor-disabled subjects such as amputees (Hochberg et al., 2006). Lack of proper incorporation of sensory feedback such as proprioception and tactile information, which are unavailable due to loss of natural pathways in amputees, has greatly limited the widespread clinical deployment of neuroprosthetic systems in rehabilitation (Cunningham et al., 2011). Recently, microstimulation techniques have emerged as a promising approach in providing artificial proprioceptive (Weber et al., 2011) and tactile (Doherty et al., 2011) feedback by stimulating appropriate neurons of the primary somatosensory cortex. Although these techniques are promising for developing future BMI-based neuroprostheses, the experimental trial-and-error approach in designing appropriate stimulating sensory input currents may be detrimental to neural tissue or adversely change neural functionality. Therefore, a systematic approach that uses optimal feedback control theory (OFCT) is highly desirable in designing these artificial sensory feedback pathways (Scherberger, 2009). Figure 1 shows one such approach in an optimal receding horizon control framework (Kumar, Aggarwal, Thakor, Schieber, & Kothare, 2011; Kumar, Schieber, Thakor, & Kothare, 2013).

Figure 1:

An optimal receding horizon control-based closed-loop brain-machine interface (BMI). Here, the overall objective of the receding horizon controller is to recover the natural performance of the closed-loop BMI by stimulating neurons of appropriate cortical sensory areas that encode the natural tactile and proprioception information.


As shown in Figure 1, the receding horizon controller designs optimal artificial sensory feedback currents and stimulates neurons of the appropriate cortical sensory areas such that the closed-loop performance of the BMI can be recovered for a given motor task. Briefly, the controller computes optimal stimulating input currents I(t; θ) by solving an optimal control problem in a predictive framework (Kwon & Han, 2005). Here, θ is a set of parameters or decision variables. At a given time, system measurements are obtained, and a dynamic model of the cortical network of spiking neurons, decoder, and the prosthetic arm is used to predict the outputs of the system. An optimal control problem is then formulated to minimize the difference between the predicted and the desired outputs. The design of artificial sensory feedback currents is constrained by experimentally observed minimum and maximum limits on the instantaneous firing rate or the corresponding inter-spike interval (ISI) of neurons in the cortical network model as well as any constraints on the system outputs. These constraints appear explicitly in the resulting nonlinear optimization problem. A local gradient-based optimization algorithm for solving the optimization problem requires at least the first-order continuous differentiability of the objective and constraint functions with respect to decision variables. The reset of state variables at the occurrence of each action potential in typical single-neuron models such as the Izhikevich model makes it nontrivial to determine the continuous differentiability of ISIs.

ISIs and their gradients have also been used in developing gradient-based learning rules for training spiking neural networks (see McKennoch, Voegtlin, & Bushnell, 2009, and references therein; Pfister, Toyoizumi, Barber, & Gerstner, 2006). In most of these works (with the exception of Pfister et al., 2006, where a stochastic spike response model (SRM) was used), the leaky integrate-and-fire model or the theta neuron model has been used, and an analytical solution for the error gradient has been derived. The extension of these learning rules to models such as the Izhikevich neuron model requires investigation of the differentiability of ISIs with respect to the synaptic weights, since analytical solutions in these models are difficult to obtain.

With this motivation, we here consider a recurrent network of n interconnected neurons. We assume that each neuron is connected with the remaining n−1 neurons. The dynamical behavior of each neuron is represented by a class of discrete event–based discontinuous single-neuron models. The flow of synaptic information from one neuron to another is modeled using a synaptic conductance model. Stimulation of an input neuron using an external input current drives the network. Measured inter-spike intervals of a single output neuron define the output of the network. In this setup, we derive sufficient conditions under which the first-order continuous differentiability of ISIs of this output neuron with respect to parameters (decision variables) of the external stimulating current can be guaranteed.

The letter begins by illustrating the essence of our theoretical results through simulations. Section 3 describes the single-neuron model and the synaptic current model used. In section 4, we state a mathematical formulation of the problem. In section 5, we derive sufficient conditions for the continuous differentiability of ISIs with respect to parameters of the external stimulating current using the well-known implicit function theorem. Finally, an example is presented using known bidimensional spiking neuron models, which is followed by conclusions.

Mathematical notation: Most notation is standard and will be introduced as the results are developed. We here define the mathematical symbols frequently used in this letter: C0: continuous; C1: continuously differentiable; v: membrane potential; u: membrane recovery variable; I: external input current; Is: synaptic current; t: continuous time; tf: time of the fth action potential of a single neuron; tfj: time of the fth action potential of the neuron j in the network; Δtf: fth ISI of a single neuron; Δtfj: fth ISI of the neuron j in the network; θ: decision variables.

## 2.  Simulation Results

In this section, we use simulations to show conditions under which ISIs of individual neurons are continuous with respect to a time-continuous stimulating input current in a recurrent network of synaptically connected neurons. We begin with a single neuron whose dynamics is represented by the following leaky integrate-and-fire model:
$$C\,\frac{dv(t)}{dt} = -\frac{v(t)}{R} + I(t) \tag{2.1}$$
Here, v(t) is the membrane potential, C is the membrane capacitance, and R is the membrane resistance. I(t) is a smoothly varying stimulating input current to the neuron. v(t) is reset to c whenever v(t) exceeds a constant firing threshold vp. We simulate equation 2.1 with C = 1 μF, R = 10 kΩ, c = 0 mV, and vp = 20 mV. Figure 2a shows our simulation result for the case where ISIs are discontinuous with respect to I(t).
Figure 2:

Continuity of ISIs with respect to a time-continuous stimulating input current I(t) in single-neuron models. The top plot of both panels a and b shows trajectories of the membrane potential v(t) predicted by the leaky integrate-and-fire model in response to a smoothly varying stimulating input current (solid line) and its perturbed form (dashed line). (a) The membrane potential reaches the firing threshold with zero slope at the time of the first action potential. As a result, the first ISI is discontinuous with respect to I(t). (b) The membrane potential reaches the firing threshold with a positive slope at the time of both action potentials. As a result, the corresponding ISIs are continuous with respect to I(t).


As shown in Figure 2a, a stimulating input current I(t) is designed such that the membrane potential v(t) reaches the firing threshold vp with zero slope at the time of the first action potential (shown by the solid line). A small perturbation in I(t) shifts the occurrence of this action potential by approximately 15 ms (shown by the dashed line). This large deviation in the timing of the first action potential with respect to a small perturbation in I(t) clearly indicates that the corresponding ISI is not continuous with respect to I(t). Figure 2b shows the case where the membrane potential v(t) reaches the firing threshold vp with a positive slope at the time of action potentials (shown by the solid line). As shown in this figure, a small perturbation in I(t) leads to a small change (shown by the dashed line) in the timing of the occurrence of both action potentials. Thus, the corresponding ISIs are continuous with respect to I(t).
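The benign case of figure 2b can be reproduced with a short numerical sketch. The forward-Euler solver, the constant suprathreshold current, and the 1% perturbation below are our illustrative assumptions, not the actual current waveforms of figure 2:

```python
def lif_spike_times(I_fn, T=50.0, dt=1e-3, C=1.0, R=10.0, c=0.0, vp=20.0):
    """Forward-Euler integration of equation 2.1, C dv/dt = -v/R + I(t),
    with reset v <- c whenever v >= vp. Returns the list of spike times.
    Parameter values mirror section 2 (units illustrative)."""
    v, spikes = c, []
    for k in range(int(T / dt)):
        t = k * dt
        v += dt * (-v / R + I_fn(t)) / C
        if v >= vp:              # threshold crossing: record spike, reset
            spikes.append(t + dt)
            v = c
    return spikes

# A constant current with R*I = 30 > vp drives v toward 30 mV, so the
# membrane crosses the 20 mV threshold with a strictly positive slope.
base = lif_spike_times(lambda t: 3.0)
pert = lif_spike_times(lambda t: 3.03)    # 1% input perturbation

# At a nonzero-slope crossing, a small input change produces only a
# small shift in the first spike time (a fraction of the ~11 ms ISI).
print(round(base[0], 2), round(pert[0], 2))
```

A small perturbation of the input moves the first spike time only slightly, consistent with the continuity illustrated in figure 2b; driving the current toward the grazing case of figure 2a makes the same perturbation produce an arbitrarily large shift.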

Next we consider a recurrent network of two synaptically connected neurons S1 and S2, as shown in Figure 4. The dynamics of both neurons are represented by equation 2.1. An external input current (the same used in Figure 2a) drives the network by stimulating neuron S1. Thus, the total input current I(t) to neuron S1 is the sum of the external input current and synaptic currents from neuron S2. The neuron S2 is driven by synaptic currents induced by the action potentials of the neuron S1. With this setup, we simulate the recurrent network. Figure 3a shows our simulation results for the case where the ISIs of both neurons are discontinuous with respect to the external input current.

Figure 3:

Continuity of ISIs with respect to a time-continuous stimulating input current in a recurrent network of two synaptically connected neurons. The top plot of both panels a and b shows the trajectory of the membrane potential v1(t) of neuron S1 predicted by the leaky integrate-and-fire model in response to an external input current (solid line) and its perturbed form (dashed line). The bottom plot of both panels a and b shows trajectories of the membrane potential v2(t) of neuron S2. (a) The case where ISIs of both neurons are discontinuous with respect to the designed input current. (b) The case where ISIs of both neurons are continuous with respect to the designed input current.


Figure 4:

A recurrent network of two neurons. Here, I(t; θ) is the external input current with parameter θ. Is(t; t11, t21, ⋅⋅⋅) and Is(t; t12, t22, ⋅⋅⋅) are synaptic input currents to neurons S2 and S1, respectively. Here, t11, t21, ⋅⋅⋅ and t12, t22, ⋅⋅⋅ are the spike trains of neurons S1 and S2, respectively.


As shown in the top plot of Figure 3a, the membrane potential v1(t) of neuron S1 reaches the firing threshold with zero slope at the time of the first action potential (shown by the solid line). A small perturbation in the designed input current shifts the timing of the occurrence of this action potential by approximately 15 ms (shown by the dashed line), which clearly indicates that the first ISI of neuron S1 is discontinuous with respect to the designed input current. Next we analyze the effect of this small perturbation in the designed input current on the first ISI of neuron S2.

It is clear that neuron S2 is initially stimulated by the synaptic current induced by the first action potential of neuron S1. We model this synaptic current as
$$I_s(t) = q\, e^{-(t - t_1^1)/\tau}\, H(t - t_1^1) \tag{2.2}$$
Here, q is the maximum conductance transmitted by the synapse of neuron S1, τ is a time constant, t11 is the time at which the first action potential occurs in neuron S1, and H(·) is the Heaviside step function. For fixed q (>0) and τ, Is(t) is a function of t − t11. Moreover, Is(t) = 0 for t < t11. Clearly, a change in t11 will change the time at which Is(t) becomes greater than zero and thus the time at which the first action potential occurs in neuron S2. This is shown in the bottom plot of Figure 3a. As shown in this plot, the change in t11 by approximately 15 ms (shown in the top plot of Figure 3a) leads to the same change (15 ms) in the timing of the occurrence of the first action potential in neuron S2. Thus, the first ISI of neuron S2 is also discontinuous with respect to the designed input current even though the membrane potential v2(t) reaches the firing threshold with a positive slope.
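The rigid translation property of equation 2.2 that underlies this argument can be checked numerically. The time grid and the values of q, τ, and the spike times below are illustrative assumptions:

```python
import numpy as np

def I_syn(t, t_spike, q=1.0, tau=5.0):
    """Equation 2.2: I_s(t) = q * exp(-(t - t_spike)/tau) * H(t - t_spike),
    with H the Heaviside step function."""
    return q * np.exp(-(t - t_spike) / tau) * (t >= t_spike)

dt = 0.01                      # time grid step (ms), an illustrative choice
t = np.arange(10001) * dt      # 0 .. 100 ms
a = I_syn(t, t_spike=20.005)   # S1 fires its first spike near 20 ms
b = I_syn(t, t_spike=35.005)   # ... or 15 ms later under the perturbation

# Delaying the presynaptic spike time rigidly translates the synaptic
# current, and with it the onset of stimulation of neuron S2.
shift = int(round(15.0 / dt))
print(np.allclose(b[shift:], a[:-shift]))   # True
```

Because the whole waveform translates with t11, the first spike of the postsynaptic neuron inherits the full 15 ms jump, exactly as the bottom plot of figure 3a shows.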

Figure 3b shows the case where the membrane potential reaches the firing threshold with a positive slope at the time of action potentials in both neurons. As shown in this figure, a small perturbation in the designed input current (the same as the one used in Figure 2b) leads to a small change in ISIs of both neurons, which clearly indicates that the ISIs are continuous with respect to the designed input current.

Our simulation results show that ISIs of individual neurons in a recurrent network are continuous with respect to a time-continuous stimulating input current if the requirement of a nonzero slope of membrane potential at the time of action potentials is satisfied by all the neurons in the network. In the rest of the letter, we derive this condition rigorously and show that this condition is sufficient under certain assumptions to guarantee the continuous differentiability of ISIs in a network of synaptically connected neurons.

## 3.  Single-Neuron Model

To represent the dynamical behavior of a cortical spiking neuron, several computationally efficient mathematical models such as the leaky integrate-and-fire model, the Izhikevich single-neuron model (Izhikevich, 2004), the adaptive exponential integrate-and-fire model (Brette & Gerstner, 2005), and others have been developed. Most of these models are described by coupled first-order ordinary differential equations, which show discontinuous behavior through reset conditions. Here, we consider a class of such models represented by the following dynamical equations:
$$C\,\frac{dv(t)}{dt} = f(v, u) + I(t) \tag{3.1a}$$
$$\frac{du(t)}{dt} = g(v, u) \tag{3.1b}$$
$$\text{if } v(t) \ge v_p(t): \quad v(t) \leftarrow c, \quad u(t) \leftarrow u(t) + d \tag{3.1c}$$
Here, v(t) and u(t) are the time-varying membrane potential and the membrane recovery variable of a neuron, respectively. C is the membrane capacitance. I(t) is the total input current delivered to the neuron. vp(t) is a firing threshold. c and d are the model parameters. Typically c is set to the membrane resting potential vr. The time at which the membrane potential v(t) reaches the firing threshold vp(t), starting from the resting state, is called the time of the occurrence of an action potential or the spike time. At this time, v(t) is reset to c and u(t) is reset to u(t) + d. Again starting from this reset time and taking the reset values of v(t) and u(t) as initial conditions, the time of occurrence of the next action potential is determined through equations 3.1. We define tf, the time of occurrence of the fth action potential starting from t=0, as
$$t^f = \min\left\{\, t : v(t) \ge v_p(t),\; t > t^{f-1} \,\right\} \tag{3.2}$$
An inter-spike interval (ISI) is defined as the time difference between the occurrence of two consecutive action potentials. Thus, the fth ISI is Δtf = tf − tf−1 for f ≥ 1 with the convention t0 = 0. As a result of this, tf can be expressed in terms of the summation of ISIs as
$$t^f = \sum_{i=1}^{f} \Delta t^i \tag{3.3}$$
The input current I(t) can be an external current I(t; θ) to the neuron, a synaptic current Is(t) delivered to the neuron in a network, or a combination of both. Here, θ is a vector of parameters or decision variables over the real space. Mathematically, the synaptic current Is(t) is modeled as
$$I_s(t) = g_e(t)\,\big(E_e - v(t)\big) + g_i(t)\,\big(E_i - v(t)\big) \tag{3.4}$$
Here, ge(t) and gi(t) are the excitatory and inhibitory synaptic conductances, respectively. Ee and Ei are the excitatory and inhibitory membrane reversal potentials, respectively. Typically the synaptic conductance is modeled by taking the weighted sum of all presynaptic neuronal activities and is represented in the following form (Gerstner & Kistler, 2002):
$$g_x(t) = \sum_{j=1}^{N_x} w_j \sum_{f} K(t - t_j^f), \qquad x \in \{e, i\} \tag{3.5}$$
Nx is the total number of presynaptic neurons of type x. wj is the weight of the synapse j to the postsynaptic neuron. tfj is the time of the fth incoming action potential from the synapse j to the postsynaptic neuron. K(ttfj) models the stereotypical time course of postsynaptic conductances following presynaptic spikes. With appropriate choices for the weight, type, and number of synapses, as well as the kernel shape, this model has shown the capability of predicting the qualitative behavior of various cortical areas under different dynamical regimes (Brunel, 2000; Vogels, Rajan, & Abbott, 2005; Hanuschkin, Kunkel, Helias, Morrison, & Diesmann, 2010).
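As a concrete (hypothetical) instance of equation 3.5, the sketch below uses an exponential kernel and made-up spike trains and weights; the letter itself leaves K, wj, and Nx generic:

```python
import numpy as np

def kernel(s, tau=5.0):
    """An illustrative exponential kernel K(s); zero before the spike."""
    return np.exp(-s / tau) * (s >= 0)

def conductance(t, spike_trains, weights):
    """Equation 3.5: g_x(t) = sum_j w_j sum_f K(t - t_j^f)."""
    g = np.zeros_like(t)
    for w_j, train in zip(weights, spike_trains):
        for t_f in train:
            g += w_j * kernel(t - t_f)
    return g

t = np.arange(0.0, 60.0, 0.1)
# Two presynaptic neurons: spikes at {10, 30} ms with weight 0.5, and a
# spike at {20} ms with weight 1.0 (all values illustrative).
g = conductance(t, spike_trains=[[10.0, 30.0], [20.0]], weights=[0.5, 1.0])
print(g.max() > 0.0, g[t < 9.9].max() == 0.0)
```

Note that g(t) depends on the presynaptic spike times only through the shifts t − tjf, which is the structural fact the differentiability analysis of section 5 exploits.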

In this work, we assume that each presynaptic neuron establishes only one synapse with each of its postsynaptic neurons and all synaptic connections within the network are excitatory, that is, gi(t) = 0 for all t. However, the results apply to the inhibitory case as well. Moreover, we assume that there is no time delay in the feedback loop. In the next section, we use the models of single-neuron dynamics and synaptic currents presented above to define a network of synaptically connected cortical spiking neurons and state the problem.

## 4.  Problem Statement

We consider a network of n cortical spiking neurons denoted by S1, S2, …, Sn. Neuron Sj in this network is synaptically connected with the remaining n−1 neurons and thus receives and transfers synaptic information within the network in the form of synaptic input currents Is(t) according to equations 3.4 and 3.5. The dynamics of neuron Sj are given by equation 3.1. An external input current I(t; θ) drives the network by stimulating neuron S1. Here, θ is a vector of parameters or decision variables over the real space. The measured output of neuron Sj is represented as a sequence of ISIs Δt1j, Δt2j, ⋅⋅⋅. With this setup, we formulate the problem as follows: Under what conditions is the fth ISI of the jth neuron Sj expressible as
$$\Delta t_j^f = h_{j,f}(\theta) \tag{4.1}$$
for all f ≥ 1 and j = 1, 2, …, n, where hj,f is a continuously differentiable function of θ in a neighborhood of a nominal vector θ0, that is, hj,f ∈ C1 on a domain ω = {θ : ‖θ − θ0‖ < ε} with ε > 0?

## 5.  Results

In this section, we first consider a single neuron stimulated by an input current I(t; θ). The dynamics of the neuron are given by equation 3.1 with I(t) = I(t; θ). We establish conditions under which the ith ISI of this neuron is expressible as
$$\Delta t^i = h_i(\theta) \tag{5.1}$$
Here, hi ∈ C1 on a domain ω = {θ : ‖θ − θ0‖ < εi} for some εi > 0 and i ≥ 1. We call this problem 1. Then we consider a single neuron stimulated by synaptic currents induced by spikes of a presynaptic neuron. We represent the ith spike time of the presynaptic neuron by tip. Here, the subscript p stands for the presynaptic neuron. The synaptic current Is(t; t1p, t2p, …, trp) induced by a sequence of r spikes of the presynaptic neuron (see equations 3.4 and 3.5) stimulates the postsynaptic neuron. We define Δtip = tip − t(i−1)p as the ith ISI of the presynaptic neuron. Since tip = Δt1p + ⋅⋅⋅ + Δtip by definition, we can write Is(t; t1p, t2p, …, trp) = Is(t; Δp), where Δp = [Δt1p, Δt2p, …, Δtrp]T is a vector of the first r ISIs of the presynaptic neuron. With this, we establish conditions under which the ith ISI of the postsynaptic neuron is expressible as
$$\Delta t^i = \tilde{h}_i(\Delta_p) \tag{5.2}$$
where h̃i ∈ C1 on a domain ω̃ = {Δp : ‖Δp − Δp,0‖ < ε̃i} for some ε̃i > 0. Here, Δp,0 denotes the nominal value of Δp. This is problem 2. Finally, we use results of problems 1 and 2 to establish conditions for the existence of equation 4.1.

It is clear from our problem setup in the previous section that S1 is the only neuron in the network that is stimulated by both the external input current I(t; θ) and the synaptic currents available from the remaining n−1 neurons in the network. The other n−1 neurons in the network are driven by only the synaptic currents and thus feel the effect of θ only indirectly, through the ISIs of neuron S1. Also, I(t; θ) is the only stimulating current to neuron S1 until the time at which synaptic currents from the remaining neurons in the network arrive at neuron S1. Therefore, the problem described by equation 4.1 can also be viewed as the combination of the following subproblems:

1. Under what conditions are ISIs of neuron S1 continuously differentiable with respect to θ when the only available stimulating input current to neuron S1 is the external input current I(t; θ)?

2. Under what conditions are ISIs of neuron Sj, j = 2, 3, …, n, continuously differentiable with respect to θ?

3. Under what conditions are ISIs of neuron S1 continuously differentiable with respect to θ when the stimulating input current to neuron S1 is the sum of the external input current I(t; θ) and the synaptic currents from the remaining n−1 neurons of the network?

We first consider an example of a recurrent network of two synaptically connected neurons and show that the establishment of conditions for the existence of equations 5.1 and 5.2 is sufficient to solve subproblems 1, 2, and 3 and thus guarantee the existence of equation 4.1 in the case of two neurons.

Example 1.

Figure 4 shows a network of two synaptically connected neurons driven by an external stimulating input current I(t; θ).

At this point, let us assume that equations 5.1 and 5.2 hold. We will later show the existence of these equations in sections 5.1 and 5.2, respectively. For simplicity, let us further assume that t11 < t12 < t21: the first spike of neuron S2 occurs between the time of the first and the second spike of neuron S1. Thus, neuron S1 receives the feedback information, in the form of the synaptic input current Is(t; t12), from neuron S2 at time t = t12.

We first consider subproblem 1 and show that it can be solved using equation 5.1—problem 1. It is clear that I(t; θ) is the only stimulating input current to neuron S1 for t < t12. Therefore, by setting I(t) = I(t; θ) in problem 1 until t = t12, the continuous differentiability of the first ISI Δt11 of neuron S1 with respect to θ can be established using equation 5.1.

Next we consider subproblem 2 and show that it can be solved using equations 5.1 and 5.2—problems 1 and 2. Since Δt11 is the only ISI of neuron S1 up to time t = t12, equation 5.2 establishes that Δt12 is a continuously differentiable function of Δt11 in a small neighborhood of its nominal value (problem 2). Also, from equation 5.1, we know that Δt11 is a continuously differentiable function of θ (problem 1). Now using the fact that a continuously differentiable function of a continuously differentiable function is also continuously differentiable with respect to the underlying arguments, equations 5.1 and 5.2 together establish that Δt12 is a continuously differentiable function of θ.

Finally, we consider the effect of feedback on S1 and show that subproblem 3 can be solved using equation 5.1. It is clear that for t < t12, I(t; θ) is the only input to neuron S1. For t ≥ t12, the total input to neuron S1 is the sum of I(t; θ) and Is(t; t12) (the synaptic current from S2 to S1). By definition, t12 = Δt12. From subproblem 2, we know that Δt12 is a continuously differentiable function of θ. Thus, Is(t; t12) is an implicit function of θ, and the total input current to neuron S1 for t ≥ t12, that is, the sum of I(t; θ) and Is(t; t12), is a function of θ. Since the input current to neuron S1 is continuous in t at t = t12, equation 5.1 establishes the continuous differentiability of the second ISI Δt21 with respect to θ.

If we continue the arguments presented above for the subsequent action potentials of both neurons, we find that the ISIs of neurons S1 and S2 beyond this time are also continuously differentiable functions of θ. Thus, the establishment of the existence of equations 5.1 and 5.2 is sufficient to guarantee the existence of equation 4.1 for a network of two neurons.

Example 1 demonstrates that equations 5.1 and 5.2, along with the fact that a continuously differentiable function of a continuously differentiable function is also continuously differentiable with respect to the underlying arguments, establish the existence of equation 4.1 for a recurrent network of two neurons. By following the arguments in this example, it is not difficult to show that the existence of equations 5.1 and 5.2 is sufficient to guarantee the existence of equation 4.1 for a recurrent network of n > 2 synaptically connected neurons.
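The composition argument used in example 1 is simply the chain rule. For the first ISI of neuron S2 in the two-neuron network, with h1 from equation 5.1 and h̃1 from equation 5.2, it reads:

```latex
% First ISI of neuron S2 as a composite function of \theta:
\Delta t_2^1 \;=\; \tilde{h}_1\!\bigl(h_1(\theta)\bigr),
\qquad
\frac{\partial\,\Delta t_2^1}{\partial \theta}
  \;=\; \tilde{h}_1'\!\bigl(h_1(\theta)\bigr)\,
        \frac{\partial h_1(\theta)}{\partial \theta},
% which is continuous in \theta whenever h_1 and \tilde{h}_1 are C^1.
```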

We next establish the existence of equations 5.1 and 5.2 by solving problems 1 and 2. In both problems, we first make use of the well-known existence and uniqueness theorems (see theorems 2 and 3 in the appendix) for ordinary differential equations. These theorems ensure the existence of a continuously differentiable solution of equation 3.1. Then we apply the implicit function theorem (see theorem 4) and establish conditions under which equations 5.1 and 5.2 exist.

### 5.1.  Problem 1.

In this section, we establish conditions under which equation 5.1 exists. The description of the underlying problem is as follows. An input current I(t; θ), a function of t and θ, stimulates a single neuron. For θ = θ0, the neuron fires a sequence of action potentials according to equation 3.1. The goal is to find conditions under which equation 5.1 exists in a small neighborhood (it may be infinitesimally small) of θ0.

In this section, ti0 and Δti0 represent the time of the ith action potential and the ith ISI of a single neuron, respectively, when θ = θ0. Δti represents the ith ISI of a single neuron otherwise. C0 stands for continuity. C1 stands for continuous differentiability.

Definition 1.

We define a domain Di on the (t, x, θ) space with x = [v, u]T that contains the trajectory of equation 3.1 from the (i−1)th to the ith action potential, where t00 = 0.

Definition 2.

We define .

Assumption 1.

There exists an ε > 0 such that the solution of equation 3.1 exists on the domain Di for all θ with ‖θ − θ0‖ < ε.

Assumption 2.

The solution of equation 3.1 is nonchaotic.

The following result states conditions under which equation 5.1 exists.

Lemma 1.

Under assumptions 1 and 2, there exists an εi > 0 and a unique function hi such that Δti = hi(θ) with Δti0 = hi(θ0) and hi ∈ C1 on ωi = {θ : ‖θ − θ0‖ < εi} if

1. I(t; θ) is C0 on Di and ∂I/∂(t, θ) is C0 on Di

2. f is C0 on Di and ∂f/∂x is C0 on Di

3. g is C0 on Di and ∂g/∂x is C0 on Di

4. ∂v/∂t|t=ti0 ≠ dvp/dt|t=ti0, that is, the partial derivative of the membrane potential with respect to t is not equal to the partial derivative of the firing threshold vp(t) with respect to t at the time of the ith action potential t = ti0

Remark 1.

If the firing threshold vp(t) is time invariant, that is, dvp/dt = 0 for all t (as is the case for most of the spiking neuron models), the required condition of lemma 1 reduces to the nonzero slope of the membrane potential at the time of the ith action potential: ∂v/∂t|t=ti0 ≠ 0.
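To make remark 1 concrete for the leaky integrate-and-fire model of section 2 (a worked special case, not part of the lemma): substituting equation 2.1 at the spike time, where v = vp, gives

```latex
\left.\frac{\partial v}{\partial t}\right|_{t=t^{i,0}}
  = \frac{1}{C}\left(-\frac{v_p}{R} + I(t^{i,0})\right) \neq 0
\quad\Longleftrightarrow\quad
I(t^{i,0}) \neq \frac{v_p}{R}.
% The grazing trajectory of figure 2a is exactly the degenerate case
% I(t^{1,0}) = v_p / R, where the threshold is reached with zero slope.
```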

Proof.

We first consider equation 3.1 with an initial condition x(0) = [v0, u0]T at t = 0, where v0 and u0 are constants. From the hypothesis of lemma 1, I(t; θ) is C0 on D1 and ∂I/∂(t, θ) is C0 on D1. Also, the partial derivative of f with respect to x is continuous on D1, and the partial derivative of g with respect to x is continuous on D1. Thus, by theorems 2 and 3 (see the appendix) along with assumption 1, there exists a δ1 > 0 such that for any θ with ‖θ − θ0‖ < δ1, there exist unique solutions v(t; θ) and u(t; θ) of equation 3.1 for all t in an interval containing [0, t10].

We define a function Φ1(t, θ) = v(t; θ) − vp(t). Clearly, Φ1 is continuously differentiable with respect to t and θ for t in an interval containing [0, t10] and ‖θ − θ0‖ < δ1. Also, Φ1(t10, θ0) = 0. In addition, the partial derivative of Φ1 with respect to t at t = t10 and θ = θ0 is nonzero, that is, ∂Φ1/∂t|t=t10 ≠ 0, from the hypothesis of lemma 1. Now by applying the implicit function theorem (theorem 4 in the appendix) on the function Φ1 for t near t10 and θ near θ0, we find that there exists an ε1 > 0 and a unique function h1 on the domain ω1 = {θ : ‖θ − θ0‖ < ε1} such that

• h1 ∈ C1 on ω1

• t10 = h1(θ0)

• Φ1(h1(θ), θ) = 0, that is, v(h1(θ); θ) = vp(h1(θ)) for every θ ∈ ω1
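Beyond existence, the implicit function theorem yields the gradient itself, which is what a gradient-based optimizer in the receding horizon framework would consume. Writing the threshold-crossing function as Φ1(t, θ) = v(t; θ) − vp(t) (our notation) and differentiating Φ1(h1(θ), θ) = 0 with respect to θ:

```latex
\frac{\partial h_1(\theta)}{\partial \theta}
  = -\left.\frac{\partial \Phi_1/\partial \theta}
            {\partial \Phi_1/\partial t}\right|_{t=h_1(\theta)}
  = -\left.\frac{\partial v/\partial \theta}
            {\partial v/\partial t \;-\; \partial v_p/\partial t}
     \right|_{t=h_1(\theta)},
% well defined precisely because condition 4 of lemma 1 keeps the
% denominator nonzero at the spike time.
```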

We next consider equation 3.1 with the reset values x = [c, u(h1(θ); θ) + d]T as the initial condition at t = h1(θ), where θ ∈ ω1. It should be noted that h1(θ) is the time at which the reset of v(t; θ) and u(t; θ) occurs according to equation 3.1.

From the hypothesis of lemma 1, I(t; θ) is C0 on D2 and ∂I/∂(t, θ) is C0 on D2. Also, the partial derivative of f with respect to x is continuous on D2, and the partial derivative of g with respect to x is continuous on D2. We know from the reset conditions of equation 3.1 that at t = h1(θ), x = [c, u(h1(θ); θ) + d]T. We also know that h1 ∈ C1 on ω1. Thus, again by theorems 2 and 3 (see the appendix), along with assumptions 1 and 2, there exists a δ2 > 0 such that for any θ that satisfies
$$\|\theta - \theta_0\| < \delta_2 \tag{5.3a}$$
$$\theta \in \omega_1 \tag{5.3b}$$
there exist unique solutions v(t; θ) and u(t; θ) of equation 3.1 for all t in an interval containing [h1(θ), t20].

We define a function Φ2(t, θ) = v(t; θ) − vp(t). It is clear that Φ2 is continuously differentiable with respect to t and θ for all t in an interval containing [h1(θ), t20] and for all θ satisfying equation 5.3. Also, Φ2(t20, θ0) = 0. In addition, the derivative of Φ2 with respect to t at t = t20 and θ = θ0 is nonzero, ∂Φ2/∂t|t=t20 ≠ 0, from the hypothesis of lemma 1. Now by applying the implicit function theorem (see theorem 4 in the appendix) on the function Φ2 for t near t20 and θ satisfying equation 5.3, we find that there exists an ε2 > 0 and a unique function h2 on ω2 = {θ : ‖θ − θ0‖ < ε2} such that

• h2 ∈ C1 on ω2

• t20 = h2(θ0)

• Φ2(h2(θ), θ) = 0, that is, v(h2(θ); θ) = vp(h2(θ)) for every θ ∈ ω2

Now the claim in lemma 1 for arbitrary i follows directly from mathematical induction.

### 5.2.  Problem 2.

In this section, we establish conditions under which equation 5.2 exists. The description of the underlying problem is as follows. The synaptic current Is(t; Δp) stimulates the postsynaptic neuron. Here, Δp is a vector of the first r ISIs of the presynaptic neuron. For Δp = Δp,0, the postsynaptic neuron fires its ith action potential at time ti according to equation 3.1. The goal is to find conditions under which equation 5.2 exists in a small neighborhood (it may be infinitesimally small) of Δp,0.

In this section, ti0 and Δti0 represent the time of the ith action potential and the ith ISI of a single neuron, respectively, when Δp = Δp,0. Δti represents the ith ISI of a single neuron otherwise.

Definition 3.

We define a domain Fi on the (t, x, Δp) space with x = [v, u]T that contains the trajectory of equation 3.1 from the (i−1)th to the ith action potential, where t00 = 0.

Definition 4.

We define .

Assumption 3.

There exists an ε > 0 such that the solution of equation 3.1 exists on the domain Fi for all Δp with ‖Δp − Δp,0‖ < ε.

The following result states conditions under which equation 5.2 exists.

Lemma 2.

Under assumptions 2 and 3, there exists an ε̃i > 0 and a unique function h̃i such that Δti = h̃i(Δp) with Δti0 = h̃i(Δp,0) and h̃i ∈ C1 on ω̃i = {Δp : ‖Δp − Δp,0‖ < ε̃i} if

1. Is(t; Δp) is C0 on Fi and ∂Is/∂(t, Δp) is C0 on Fi.

2. f is C0 on Fi and ∂f/∂x is C0 on Fi.

3. g is C0 on Fi and ∂g/∂x is C0 on Fi.

4. ∂v/∂t|t=ti0 ≠ dvp/dt|t=ti0, that is, the partial derivative of the membrane potential with respect to t is not equal to the partial derivative of the firing threshold vp(t) with respect to t at the time of the ith action potential t = ti0.

By following the steps provided in proving the claim of lemma 1, it is straightforward to show the claim of lemma 2.

Remark 2.

If synaptic currents induced by spikes of more than one presynaptic neuron stimulate the postsynaptic neuron, then Is(t; Δp) in lemma 2 can be replaced by the sum of the synaptic currents from all the presynaptic neurons.

With the establishment of results for problems 1 and 2, we next state our main result on the existence of equation 4.1.

In this section, tij,0 and tij represent the time of the ith action potential of neuron Sj in the network defined in section 4 for and , respectively.

Theorem 1.

Let us assume that neurons in the network defined in section 4 fire a sequence of action potentials for . Let us also assume that the conditions of lemma 1 are satisfied for neuron S1 and the conditions of lemma 2 are satisfied for neuron Sj for until time for some . Then there exists a neighborhood such that the ith ISI of neuron Sj is expressible as for all and , where on .

Proof.

Let us define a time . Without loss of generality, let us say that the total number of action potentials that occurred in neuron S1 until time is . It is clear from section 4 that the only stimulating input current to neuron S1 until time is the external input current with in a small neighborhood of . From the hypothesis of theorem 1, the conditions of lemma 1 are satisfied for neuron S1 until time for some (arbitrarily small) . Thus, from the conclusion of lemma 1, there exists a and a continuously differentiable function h1,i=hi on a domain such that the ith ISI of neuron S1 is expressible as for all and .

We now consider the remaining n−1 neurons in the network. Until time , the total stimulating input current to individual neurons in the remaining n−1 neurons is the synaptic current from neuron S1 for . Defining , we can write as . Since by definition, we say that for some . From the hypothesis of theorem 1, the conditions of lemma 2 are satisfied for neuron Sj until time . Thus, from the conclusions of lemma 2, there exist a and a continuously differentiable function such that the first ISI of neuron Sj is expressible as for . We know that for is continuously differentiable with respect to for . Using the fact that a continuously differentiable function of a continuously differentiable function is also continuously differentiable with respect to the underlying arguments, it is easy to see that is also continuously differentiable with respect to for where . Thus, there exists a and a continuously differentiable composite function hj,1 on such that the first ISI of neuron Sj is expressible as .

It is clear that the synaptic current Is(t; t1j), induced by the first action potential of neuron Sj, contributes to the total stimulating input current to the remaining n−1 neurons beyond the time of this action potential. Since the hypothesis of lemma 2 is satisfied by neuron Sl with and , Is(t; t1j) is continuously differentiable with respect to . We know that is continuously differentiable with respect to on . Thus, Is(t; t1j) is continuously differentiable with respect to and the total input current to neuron S1, that is, is a function of t and on . By applying the above arguments to subsequent action potentials in the network, it is easy to see that the conclusion of theorem 1 holds for all and .

Example 2.

We consider a single neuron stimulated by an external input current . Here, is a vector of parameters or decision variables. The neuron fires a sequence of action potentials according to equation 3.1 for . We assume that is continuously differentiable with respect to along with its continuity with respect to t in a neighborhood of . We also assume that the qualitative behavior of the neuron remains the same in the neighborhood of . We represent the dynamics of the single neuron by the following known spiking single neuron models and state conditions (see lemma 1) under which a neighborhood of exists such that ISIs of the neuron are a continuously differentiable function of in the neighborhood of . Here we assume that the parameters of these models are such that the models exhibit nonchaotic behaviors.

1. Bidimensional models with linear f(v, u). In this class of models, we consider the adaptive integrate-and-fire (aIF) model and the adaptive threshold integrate-and-fire (aTIF) model (Rossant, Goodman, Platkiewicz, & Brette, 2010). The aIF model is given by
5.4a
5.4b
5.4c
Clearly, f(v, u)=−v−u is continuously differentiable with respect to v and u. Also, g(v, u)=−u is continuously differentiable with respect to v and u. Since vp(t)=1, the required conditions of lemma 1 are satisfied if the slope of v(t) with respect to t at the time of action potentials is nonzero, that is, for all . The same conclusion holds for the integrate-and-fire model since this model is a special case of the aIF model with u(t)=0 for all .
The aTIF model is given by
5.5a
5.5b
5.5c
Clearly, f(v, u)=−v is continuously differentiable with respect to v and u. Also, g(v, u)=avu is continuously differentiable with respect to v and u. Since vp(t)=1+u(t), the required conditions of lemma 1 are satisfied if the partial derivative of v(t) with respect to t is not equal to the partial derivative of u(t) with respect to t at the time of action potentials.
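To illustrate the conclusion of lemma 1 for the aIF model above, the ISIs can be computed by direct simulation. This is a minimal sketch assuming forward-Euler integration, a constant input current equal to a scalar decision variable theta, and illustrative reset values (v→0 and u→u+b with b=0.1); none of these numeric choices come from the text:

```python
def aif_isis(theta, b=0.1, v_reset=0.0, vp=1.0, dt=1e-4, t_max=20.0):
    """Forward-Euler simulation of the aIF model dv/dt = -v - u + theta,
    du/dt = -u, with threshold vp = 1; returns the list of ISIs."""
    v, u, t = 0.0, 0.0, 0.0
    spikes = []
    while t < t_max:
        v, u = v + dt * (-v - u + theta), u + dt * (-u)
        t += dt
        if v >= vp:
            spikes.append(t)
            v, u = v_reset, u + b
    return [s1 - s0 for s0, s1 in zip(spikes, spikes[1:])]

# a small perturbation of theta produces a small change in each ISI:
print(aif_isis(2.0)[0], aif_isis(2.001)[0])
```

The two printed ISIs differ only slightly, consistent with continuity of the ISIs with respect to the decision variable.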

2. Bidimensional models with nonlinear f(v, u). In this class of models, we consider the Izhikevich model, the adaptive exponential (AdEx) integrate-and-fire (IF) model, and the Touboul model. The AdEx IF model is represented by equation 3.1 with and g(v, u)=a(v−EL)−u (Brette & Gerstner, 2005). The Touboul model is represented by equation 3.1 with f(v, u)=v^4+2av−u and g(v, u)=a(bv−u) (Touboul, 2011). In all these models, the firing threshold vp(t)=vp (time invariant). Clearly, f(v, u) is continuously differentiable with respect to v and u. Also, g(v, u) is continuously differentiable with respect to v and u. By assumption, is continuously differentiable with respect to along with its continuity with respect to t in a neighborhood of . Because of the convexity, fast growth, and regularity of the nonlinear function f(v, u) (Touboul, 2011), the slope of v(t) with respect to t at v(t)=vp, that is, at the time of action potentials, is always greater than 0 (the required condition for the existence of the inverse of the Jacobian in order to apply the implicit function theorem). Thus, these models satisfy the required conditions of lemma 1. From the conclusion of lemma 1, there exists a neighborhood of such that the ISIs predicted by these models are a continuously differentiable function of in the neighborhood of . The same conclusion holds for the quadratic integrate-and-fire model since this model is a special case of the Izhikevich model with u(t)=0 for all .
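Continuous differentiability of ISIs with respect to the input can be probed numerically by a central finite difference. The sketch below uses the Izhikevich model with the standard regular-spiking parameters (a=0.02, b=0.2, c=−65, d=8) and a step current; the current amplitude I=10, the Euler scheme, and the step size h=0.5 are illustrative choices, not values from the text:

```python
def izh_first_isi(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.01, t_max=500.0):
    """First ISI (ms) of the Izhikevich model (regular-spiking parameters)
    under a step current I, integrated by forward Euler."""
    v, u, t = c, b * c, 0.0
    spikes = []
    while t < t_max and len(spikes) < 2:
        v_new = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        v = v_new
        t += dt
        if v >= 30.0:          # spike cutoff: register and reset
            spikes.append(t)
            v, u = c, u + d
    return spikes[1] - spikes[0]

# central finite difference approximating d(ISI)/dI at I = 10
h = 0.5
slope = (izh_first_isi(10.0 + h) - izh_first_isi(10.0 - h)) / (2.0 * h)
print(slope)  # finite and negative: a larger current shortens the ISI
```

A stable finite-difference slope as h shrinks is the numerical signature of the differentiability guaranteed by lemma 1.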

As an example, Figure 5 shows the continuity of ISIs with respect to in a small neighborhood of for the Izhikevich model. The model parameters are chosen such that the model shows regular spiking behavior with a step input current.

Remark 3.

In the AdEx IF model, the spike rises very quickly past the intrinsic threshold vT of the exponential nonlinearity. Therefore, it may appear that our continuous differentiability conditions for ISIs hold over a very narrow domain of . We have observed in simulations that a 10% perturbation in a step input current shows a maximum of 15% deviation in ISIs predicted by this model. Here, we have chosen the model parameters for tonic spiking. The input current and its perturbation are sufficiently larger than the input current required for spiking through a saddle-node bifurcation (Touboul, 2008). These observations suggest that our results are applicable for a significantly large domain of when the model parameters and the stimulating input current are chosen appropriately.

Figure 5:

Continuity of ISIs with respect to in a small neighborhood of in the Izhikevich model. The top plot shows the trajectories of the membrane potential v(t) predicted by the model in response to (shown in the bottom plot), a time-continuous input current that is continuously differentiable with respect to .


3. Spike response model. The spike response model (SRM) is represented as
5.6
where vi(t) is the membrane potential of a single neuron i. is the firing time of the last spike of neuron i, and t(f)j are spikes of presynaptic neurons j. wij is the synaptic efficacy. The function describes the form of the action potential and its spike afterpotential. The function describes the time course of response to an incoming spike, and is a linear response to an input pulse. The next action potential occurs when vi(t) hits a threshold with dvi(t)/dt>0 (Gerstner & Kistler, 2002).

In this model, the required conditions of lemma 1 are satisfied if (1) vi(t) is continuously differentiable with respect to t and in a small neighborhood of , and (2) the difference between the partial derivatives of vi(t) and with respect to t is nonzero at the time of the next action potential. Clearly, satisfaction of these conditions requires restrictions on the functions , , and . As an example, let us assume that is time invariant and wij=0. Now if is continuously differentiable with respect to t for , is continuous in t for , and and are continuously differentiable with respect to at in a small neighborhood of , then all the required conditions of lemma 1 are satisfied by the model. As a result, there exists a small neighborhood of such that the next ISI predicted by the model is continuously differentiable with respect to in that neighborhood of .
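For the special case just described (time-invariant threshold and wij=0), the next spike time is the first threshold crossing of the membrane potential. The kernels below, an exponentially decaying spike afterpotential and an exponentially filtered constant current, are hypothetical choices that meet the stated smoothness requirements; all numeric values are illustrative:

```python
import math

def v(t, eta0=2.0, tau=10.0, I=1.5):
    """SRM potential after a spike at t = 0, with hypothetical kernels:
    afterpotential eta(s) = -eta0*exp(-s/tau) and an exponentially
    filtered constant current I."""
    return -eta0 * math.exp(-t / tau) + I * (1.0 - math.exp(-t / tau))

def next_spike(vth=1.0, lo=0.0, hi=200.0, iters=80):
    """Bisection for the first crossing v(t) = vth; v is strictly
    increasing here, so dv/dt > 0 at the crossing (transversality)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if v(mid) < vth:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(next_spike())  # = 10*ln(7) ≈ 19.4591 for these kernels
```

With these smooth kernels the crossing time, and hence the next ISI, inherits the continuous differentiability required by lemma 1.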

## 6.  Conclusion

In this letter, we have established conditions under which inter-spike intervals of individual neurons in a recurrent network of synaptically connected spiking neurons are continuously differentiable with respect to parameters (decision variables) of an external stimulating input current that drives the network. The dynamical behavior of a spiking neuron has been represented by a two-state first-order ordinary differential equation with a reset of state variables at the occurrence of each action potential. Using existence theorems for the solution of ordinary differential equations and the implicit function theorem, we have found that ISIs are continuously differentiable with respect to the decision variables if

1. A continuously differentiable solution of spiking neurons with respect to time t and the decision variables exists between consecutive action potentials

2. The partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials.

Under certain assumptions, we have shown that these conditions are fulfilled by nonlinear bidimensional spiking neuron models in the presence of a time-continuous input current that is also continuously differentiable with respect to its parameters (see example 2). In the case of linear bidimensional spiking neuron models, additional constraints must be imposed on the stimulating input current in order to fulfill these conditions.

Are our results applicable to spiking neuron models in the presence of extrinsic or intrinsic noise? It is well known that single neurons are intrinsically noisy (Faisal, Selen, & Wolpert, 2008; Gerstner & Naud, 2009). Typically, this noisy characteristic of single neurons is captured in deterministic dynamical models of spiking neurons by including an additive or multiplicative noise term in the form of an external input current to the model. In this case, the dynamical model takes the form of stochastic differential equations and the resultant sequence of ISIs (also known as the first passage times) becomes a stochastic process (Ricciardi & Sato, 1988; Touboul & Faugeras, 2007, 2008; Sirovich & Knight, 2011). From an optimization point of view, relevant questions here would be to find conditions under which the expectation or the conditional expectation of the first passage time is continuously differentiable with respect to the decision variables.

Since our derived results in this letter are based on properties of ordinary differential equations, these results are not applicable to stochastic models of spiking neurons. In future work, we will derive conditions under which the expectation or the conditional expectation of first passage time is continuously differentiable in stochastic models of spiking neurons and recurrent networks.

Are our results applicable to spiking neuron models that show chaotic behavior? We consider the adaptive exponential (AdEx) integrate-and-fire (IF) model, which has been shown to give a chaotic solution with appropriate model parameters (Naud, Marcille, Clopath, & Gerstner, 2008; Touboul & Brette, 2008). The dynamical behavior of the model is given by equation 3.1 with , g(v, u)=a(v−EL)−u, and vp(t)=vp. is an external input current to the model.

When the dynamical behavior of this model is nonchaotic, we have shown in example 2 that the model satisfies the conditions of lemma 1 for , which is continuous in t and continuously differentiable with respect to in a small neighborhood of . Thus, in this case, our results guarantee the existence of a small neighborhood of in which ISIs are a continuously differentiable function of . When the dynamical behavior of this model is chaotic, our results cannot guarantee the existence of a small neighborhood of in which ISIs are a continuously differentiable function of beyond the first ISI.

Although our results are not applicable to chaotic neuron models beyond the first ISI, it may be possible to find a small neighborhood of for which the initial ISIs can be expressed as a continuously differentiable function of since the conditions of lemma 1 are satisfied by the model. As an example, Figure 6 shows the deviation in ISIs predicted by the AdEx IF model with chaotic model parameters when the applied step input current is perturbed by 0.01%.

Figure 6:

Change in ISIs predicted by the AdEx IF model with chaotic parameters (Naud et al., 2008) when the step input current I = 160 pA is perturbed by 0.01%. Here, is the ith ISI with the step input current I, and is the ith ISI with the perturbed step input current, where the perturbation is a 0.01% change in I.


In this example, the initial ISIs show small changes with a 0.01% perturbation in I. Moreover, we have verified in simulation (not shown here) that the difference in the initial ISIs decreases with the decrease in the perturbation size, which confirms the continuity of these ISIs with respect to I in a small neighborhood. Later ISIs show irregular changes even with an infinitesimal perturbation in I due to the chaotic behavior of the model.
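The perturbation experiment described above can be reproduced by direct simulation of the AdEx IF model. The parameter values below are our transcription of the chaotic set attributed to Naud et al. (2008) and should be treated as assumptions of this sketch, as should the forward-Euler scheme and the 0 mV spike cutoff:

```python
import math

def adex_isis(I, t_max=2000.0, dt=0.01):
    """Forward-Euler simulation of the AdEx IF model; parameter values
    (assumed chaotic set, after Naud et al., 2008): C=100 pF, gL=12 nS,
    EL=-60 mV, VT=-50 mV, DT=2 mV, a=-11 nS, tau_w=130 ms, b=30 pA,
    Vr=-48 mV.  Units: mV, ms, pA, nS, pF."""
    C, gL, EL, VT, DT = 100.0, 12.0, -60.0, -50.0, 2.0
    a, tau_w, b, Vr = -11.0, 130.0, 30.0, -48.0
    V, w, t, spikes = EL, 0.0, 0.0, []
    while t < t_max:
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        t += dt
        if V >= 0.0:          # spike cutoff: register and reset
            spikes.append(t)
            V, w = Vr, w + b
    return [s1 - s0 for s0, s1 in zip(spikes, spikes[1:])]

isis = adex_isis(160.0)
isis_p = adex_isis(160.0 * 1.0001)   # 0.01% perturbation of I
print(isis[0], isis_p[0])            # the initial ISIs stay close
```

The initial ISIs of the two runs remain close, while later ISIs may diverge because of the chaotic dynamics, mirroring Figure 6.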

In conclusion, our results guarantee the continuous differentiability of ISIs in all spiking neuron models and network models if the required conditions of theorem 1 are satisfied by these models.

Throughout our work, we have assumed that each presynaptic neuron establishes only one synapse with each of its postsynaptic neurons and all synapses within the network are identical (i.e., excitatory). These assumptions are used only to simplify the mathematical complexity of our work. Relaxation of these assumptions by including multiple synapses of both types, excitatory and inhibitory, will not change the conclusion of the reported work as long as our derived conditions are satisfied.

## Appendix:  Frequently Used Theorems

Theorem 2
(Coddington & Levinson, 1955).  Suppose D is a domain in the (t, x) space. Let be the domain of space, that is, for c>0, where is fixed. is defined as . Let on and satisfy a Lipschitz condition in x uniformly on . For a given fixed , let be the solution of
A.1
on an interval . There exists a such that for any where , there exists a unique solution of equation A.1 on satisfying . Moreover, on the (n+k+2)-dimensional domain .
Theorem 3

(Coddington & Levinson, 1955).  Let the hypothesis of theorem 2 be satisfied. Suppose that , on . Then the solution defined in theorem 2 is of class on .

Theorem 4

(Apostol, 1957).  Let be a vector-valued function defined on an open set S in En+k with values in En. Suppose on S. Let be a point in S for which and for which the determinant . Then there exists a k-dimensional neighborhood T0 of and one, and only one, vector-valued function g, defined on T0 and having values in En, such that

1. on T0,

2. ,

3. for every .
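Theorem 4 can be illustrated with a concrete scalar example, not taken from the text: f(x, t) = x^2 + t^2 - 1 vanishes at (x0, t0) = (1, 0), and its partial derivative with respect to x equals 2 there, so a unique continuously differentiable g with g(0) = 1 and f(g(t), t) = 0 exists near t0. Here g(t) = sqrt(1 - t^2), which a bisection recovers numerically:

```python
import math

def f(x, t):
    # f(x, t) = x^2 + t^2 - 1: f(1, 0) = 0 and df/dx = 2 at (1, 0),
    # so the hypotheses of theorem 4 hold at this point.
    return x * x + t * t - 1.0

def g(t, lo=0.0, hi=2.0, iters=80):
    """Bisection for the unique root x of f(x, t) = 0 with x in [0, 2]
    (f is increasing in x on this interval)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid, t) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# the numerically recovered implicit function matches g(t) = sqrt(1 - t^2):
for t in (0.0, 0.1, 0.2):
    print(g(t) - math.sqrt(1.0 - t * t))  # differences are ~0
```

The spike-time functions h2 and hj,i in lemmas 1 and 2 play exactly the role of g here, with the threshold-crossing equation in place of f.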

## Acknowledgments

Financial support from the U.S. National Science Foundation, Cyber Enabled Discovery and Innovation program through grant EECS0835632, is gratefully acknowledged. We acknowledge the constructive suggestions by the reviewers, which helped us in improving the readability of the manuscript.

## References

Apostol, T. M. (1957). Mathematical analysis.

Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neural activity. J. Neurophysiol., 94, 3637–3642.

Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8, 183–208.

Coddington, E. A., & Levinson, N. (1955). Theory of ordinary differential equations. New York: McGraw-Hill.

Cunningham, J. P., Nuyujukian, P., Gilja, V., Chestek, C. A., Ryu, S. I., & Shenoy, K. V. (2011). A closed-loop human simulator for investigating the role of feedback control in brain-machine interfaces. Journal of Neurophysiology, 105, 1932–1949.

Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. L. (2011). Active tactile exploration using a brain-machine-brain interface. Nature, 479, 228–231.

Faisal, A., Selen, L., & Wolpert, D. (2008). Noise in the nervous system. Nature Reviews Neuroscience, 9, 292–303.

Gerstner, W., & Kistler, W. (2002). Spiking neuron models. Cambridge: Cambridge University Press.

Gerstner, W., & Naud, R. (2009). How good are neuron models? Science, 326, 379–380.

Hanuschkin, A., Kunkel, S., Helias, M., Morrison, A., & Diesmann, M. (2010). A general and efficient method for incorporating precise spike times in globally time-driven simulations. Frontiers in Neuroinformatics, 4(113), 1–19.

Hochberg, L. R., Serruya, M. D., Friehs, G. M., Mukand, J. A., Saleh, M., Caplan, A. H., & Donoghue, J. P. (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442, 164–171.

Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15, 1063–1070.

Kumar, G., Aggarwal, V., Thakor, N. V., Schieber, M. H., & Kothare, M. V. (2011). An optimal control problem in closed-loop neuroprostheses. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference (pp. 53–58).

Kumar, G., Schieber, M. H., Thakor, N. V., & Kothare, M. V. (2013). Designing closed-loop brain-machine interfaces using optimal receding horizon control. In Proceedings of the 2013 American Control Conference (pp. 4933–4938).

Kwon, W. H., & Han, S. (2005). Receding horizon control. New York: Springer.

McKennoch, S., Voegtlin, T., & Bushnell, L. (2009). Spike-timing error backpropagation in theta neuron networks. Neural Computation, 21, 9–45.

Naud, R., Marcille, N., Clopath, C., & Gerstner, W. (2008). Firing patterns in the adaptive exponential integrate-and-fire model. Biological Cybernetics, 99, 335–347.

Nicolelis, M. A. L., & Lebedev, M. A. (2009). Principles of neural ensemble physiology underlying the operation of brain-machine interfaces. Nature Reviews, 10, 530–540.

Pfister, J. P., Toyoizumi, T., Barber, D., & Gerstner, W. (2006). Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18, 1318–1348.

Ricciardi, L. M., & Sato, S. (1988). First-passage-time density and moments of the Ornstein-Uhlenbeck process. J. App. Prob., 25, 43–57.

Rossant, C., Goodman, D. F. M., Platkiewicz, J., & Brette, R. (2010). Automatic fitting of spiking neuron models to electrophysiological recordings. Frontiers in Neuroinformatics, 4(2), 1–10.

Scherberger, H. (2009). Neural control of motor prostheses. Current Opinion in Neurobiology, 19, 629–633.

Sirovich, L., & Knight, B. (2011). Spiking neurons and the first passage problem. Neural Computation, 23, 1–29.

Touboul, J. (2008). Bifurcation analysis of a general class of nonlinear integrate-and-fire neurons. SIAM J. Appl. Math., 68, 1045–1079.

Touboul, J. (2011). On the simulation of nonlinear bidimensional spiking neuron models. Neural Computation, 23(7), 1704–1742.

Touboul, J., & Brette, R. (2008). Dynamics and bifurcations of the adaptive exponential integrate-and-fire model. Biological Cybernetics, 99, 319–334.

Touboul, J., & Faugeras, O. (2007). The spike trains probability distributions: A stochastic calculus approach. Journal of Physiology–Paris, 101, 78–98.

Touboul, J., & Faugeras, O. (2008). A characterization of the first hitting time of double integral processes to curved boundaries. 40, 501–528.

Vogels, T. P., Rajan, R., & Abbott, L. F. (2005). Neural network dynamics. Annual Reviews of Neuroscience, 28, 357–376.

Weber, D. J., London, B. M., Hokanson, J. A., Ayers, C. A., Gaunt, R. A., Torres, R. R., Zaaimi, B., & Miller, L. E. (2011). Limb-state information encoded by peripheral and central somatosensory neurons: Implications for an afferent interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 19(5), 501–513.