Abstract

We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons’ rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.

1.  Introduction

Understanding how neural circuits perform computation is one of the most challenging problems in neuroscience. For the fruit fly (Drosophila), the advent of the genome sequence, coupled with extensive anatomical and electrophysiological studies, has led to an exponential growth in our knowledge about the organization of many neural circuits, including detailed knowledge about circuit connectivity, circuit inputs and outputs, and morphological and biophysical properties of individual neurons (Chiang et al., 2011; Liang & Luo, 2010; Yaksi & Wilson, 2010; Chou et al., 2010; Gouwens & Wilson, 2009). Combined with the powerful genetic tools for visualizing, activating, and deactivating specific circuit components, these advances raise the possibility of providing a comprehensive functional characterization of information processing within and between neurons.

To this day, a number of methods have been proposed to quantify and model neuronal processing (see Wu, David, & Gallant, 2006, for an in-depth review). The majority of these methods assume that the input to a neural circuit is continuous and that the output is a sample path of a point process (e.g., Poisson process). In biological neural circuits, however, the inputs for most neurons are spike trains generated by presynaptic cells and the outputs are determined by a multidimensional dynamical system (Izhikevich, 2007). Furthermore, the highly nonlinear nature of spike generation has been shown to produce interactions between stimulus features (Slee, Higgs, Fairhall, & Spain, 2005; Hong, Agüera y Arcas, & Fairhall, 2007) that profoundly affect the estimation of receptive fields (Pillow & Simoncelli, 2003). Hence, there is a fundamental need to develop tractable methods for identifying neural circuits that incorporate biophysical models of spike generation and receive multidimensional spike trains as input stimuli.

Here we describe a new methodology for complete neural circuit identification that takes these considerations into account. Our neural circuit models admit multidimensional spike trains as input stimuli. They also incorporate nonlinear dynamical system models of the spike-generating mechanism (e.g., Hodgkin-Huxley, Morris-Lecar, hard-threshold IAF). The nonlinear contribution of a dynamical system such as the Hodgkin-Huxley neuron model is stimulus driven. It changes from one spike to the next and thus affects receptive field estimation if it is not properly taken into account.

Our approach builds on the observation that during any experiment, a neural circuit is projected onto a particular space of input signals, with the circuit projection determined by how well the input space explores that circuit. Employing reproducing kernel Hilbert spaces (in particular, spaces of band-limited functions) to model the input stimuli, we quantitatively describe the relationship between the underlying circuit parameters and their projections. We also derive conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize biophysical spike generators and identify the multitude of receptive fields in neural circuits with full connectivity. We estimate all model parameters directly from spike times produced by neurons, and repeating the same stimulus is no longer necessary.

2.  Problem Statement and Modeling

Consider the simple neural circuit depicted in Figure 1a. The postsynaptic neuron shown in gray receives its feedforward spiking input from three cells that are highlighted in red, cyan, and green. Presynaptic action potentials, depicted as spikes on the axon terminals, are processed by the dendritic tree of the postsynaptic cell, and the resulting dendritic current is encoded into a single postsynaptic train of action potentials by the axon hillock. For the postsynaptic neuron in Figure 1a, we seek to characterize the encoding carried out by the axon hillock and identify the dendritic processing of presynaptic spikes, assuming that both presynaptic and postsynaptic spike times are available to observers.

Figure 1:

Problem setup. (a) A simple neural circuit in which a neuron receives spiking input from multiple presynaptic cells. (b) Block diagram of the circuit in panel a. Spatiotemporal dendritic processing of spike trains sm, m = 1, …, M, is described by M temporal receptive fields with kernels hm. Action potentials are produced by a biophysical spike generator. (c) A canonical neural circuit model with a spiking feedforward input, lateral connections, and feedback.

A block diagram representation of the postsynaptic neuron in Figure 1a is shown in Figure 1b. Spikes from different neurons arrive at times smk, k ∈ Z, m = 1, …, M, forming M input spike trains sm. Since in biological neurons spikes typically arrive via multiple synapses at different locations on the dendritic tree, we model the processing of each spike train sm with a temporal receptive field hm that describes the combined spatiotemporal contributions of all synapses from neuron m. Such a temporal receptive field can capture not only the overall excitatory and inhibitory nature of synapses but also the time domain analog processing carried out by the dendritic tree. The aggregate dendritic current v(t), t ∈ R, produced in the tree is then encoded by a biophysical spike generator model (e.g., the Hodgkin-Huxley model) into a sequence of spike times (tk), k ∈ Z.

Generalizing the ideas above, one can consider more complex spiking neural circuits, in which every neuron may receive not only feedforward inputs from a presynaptic layer but also lateral inputs from neurons in the same layer and backpropagating action potentials (Waters, Schaefer, & Sakmann, 2005) may contribute to computations within the dendritic tree. A two-neuron circuit incorporating these considerations is shown in Figure 1c. The processing of lateral inputs in layer 2 is described by the temporal receptive fields (cross-feedback filters) h212 and h221, while various signals produced by backpropagating action potentials are modeled by the temporal receptive fields (feedback filters) h211 and h222.

In what follows, we show that the elementary building blocks of spiking neural circuits, including temporal receptive fields and spike generators, together with all inputs and outputs, can be naturally defined as elements of a reproducing kernel Hilbert space (RKHS), thereby making circuit identification mathematically tractable. Without loss of generality, we choose to work with the space of trigonometric polynomials H:

Definition 1. 

The space of trigonometric polynomials H is the Hilbert space of complex-valued functions u(t) = Σl ul el(t), l = −L, …, L, where el(t) = exp(jlΩt/L)/√T, l = −L, …, L, is an orthonormal basis and ul ∈ C. Here, T = 2πL/Ω is the period, Ω is the bandwidth, and L is the order of the space. Endowed with the inner product ⟨u, w⟩ = ∫0T u(t) w̄(t) dt, H is a reproducing kernel Hilbert space (RKHS) with a reproducing kernel (RK) given by K(s, t) = Σl el(s) ēl(t), l = −L, …, L.

Remark 1. 
The key property of an RKHS is the reproducing kernel property:
u(t) = ⟨u, K(·, t)⟩
for all u ∈ H and t ∈ R.
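As a quick numerical sanity check (our own illustration, with made-up parameters), the basis, kernel, and reproducing property of definition 1 can be coded directly:

```python
import numpy as np

# Numerical check of the reproducing property u(t) = <u, K(., t)> in the
# space of trigonometric polynomials.  Parameters below are illustrative.
L, Omega = 10, 2*np.pi*25.0      # order and bandwidth (rad/s)
T = 2*np.pi*L/Omega              # period of the space
ls = np.arange(-L, L + 1)

def e(l, t):
    # orthonormal basis e_l(t) = exp(j*l*Omega*t/L)/sqrt(T)
    return np.exp(1j*l*Omega*t/L)/np.sqrt(T)

def K(s, t):
    # reproducing kernel K(s, t) = sum_l e_l(s)*conj(e_l(t))
    return sum(e(l, s)*np.conj(e(l, t)) for l in ls)

rng = np.random.default_rng(0)
ul = rng.standard_normal(2*L + 1) + 1j*rng.standard_normal(2*L + 1)

def u(t):
    # a random element of H: u(t) = sum_l u_l e_l(t)
    return sum(c*e(l, t) for c, l in zip(ul, ls))

# <u, K(., t0)> = int_0^T u(s)*conj(K(s, t0)) ds, computed with the rectangle
# rule, which is exact up to rounding for periodic trigonometric polynomials
t0 = 0.123*T
s = np.linspace(0.0, T, 20000, endpoint=False)
inner = np.sum(u(s)*np.conj(K(s, t0)))*(T/20000)
print(abs(inner - u(t0)))        # numerically zero
```

The rectangle rule is used instead of a general-purpose quadrature because both u and K(·, t0) are T-periodic band-limited functions, for which equally spaced sampling integrates exactly.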

2.1.  Modeling Dendritic Processing.

We assume that the precise shape and amplitude of action potentials received by the postsynaptic cell are of no particular significance in neural information processing. Although such information can be readily incorporated into the methods outlined below, it can be challenging to obtain in practice, as it requires simultaneous intracellular (whole-cell) recordings from multiple presynaptic cells. Making a less stringent assumption that only the timing of action potentials is important, we take each spike arriving at a time sk to be a Dirac-delta function δ(t − sk), so that the train of spikes sm from a presynaptic neuron m, m = 1, …, M, is given by sm(t) = Σk δ(t − smk) (see Figure 1). The spike times smk correspond to peaks (or troughs) of action potentials and can be readily obtained by extracellular recordings. Given presynaptic spike trains sm, m = 1, …, M, and corresponding temporal receptive fields with kernels hm (see Figure 1b), the aggregate postsynaptic current amounts to v(t) = Σm (sm ∗ hm)(t), where sm ∗ hm denotes the convolution of sm with hm.
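With spikes modeled as Dirac deltas, each receptive field output is simply a sum of shifted kernels. A minimal sketch (kernel shapes and spike times below are made up, not the paper's):

```python
import numpy as np

# Toy illustration: v(t) = sum_m sum_k h^m(t - s^m_k), since convolving a
# kernel with a Dirac delta shifts the kernel to the spike time.
t = np.linspace(0.0, 0.5, 5001)                 # time grid, s

def alpha_kernel(t, tau):
    # causal alpha-like kernel with memory of a few tau
    return np.where(t >= 0.0, (t/tau)*np.exp(1.0 - t/tau), 0.0)

trains = {0.015: [0.05, 0.21, 0.30],            # tau of "neuron" m -> spike times
          0.030: [0.10, 0.26]}
v = np.zeros_like(t)
for tau, spike_times in trains.items():
    for sk in spike_times:
        v += alpha_kernel(t - sk, tau)          # convolution with a delta = shift
print(float(v.max()))
```

Causality of the kernels guarantees that the aggregate current is identically zero before the first spike arrives.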

We model the kernels hm as finite energy functions with a finite temporal support (memory) on the interval [0, S]; that is, each hm belongs to the space L2([0, S]). Under quite natural conditions, each kernel hm can be approximated arbitrarily closely (in the L2 norm; Grafakos, 2008) on [0, S] by its projection Phm onto the space of trigonometric polynomials (see definition 1). The conditions for an arbitrarily close L2 approximation of the kernel by its projection are that the period of the space satisfies T ≥ S and that the bandwidth Ω and the order L of the space are sufficiently high (Lazar & Slutskiy, 2012). Thus,
formula
where the two equalities follow from the sampling properties of the RK and the Dirac-delta function, respectively.

In other words, if an input spike is replaced with K(t, sk), the output of the temporal receptive field converges in the L2 norm (with increasing bandwidth Ω and order L of the space H) to the output elicited by the spike itself. This is also illustrated in Figure 2, where we compare the output of a temporal receptive field when stimulated with a delta function (top row) and an RK (bottom row).
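The convergence of the kernel projection can also be checked numerically. The snippet below (our own toy kernel and parameters, not the paper's) measures the L2 error on [0, S] for a low and a high choice of bandwidth and order:

```python
import numpy as np

# Toy check: the L2 error between a finite-support kernel h and its projection
# P h = sum_l <h, e_l> e_l on [0, S] shrinks as bandwidth and order grow.
def l2_error(order, f_hz, S=0.1):
    Omega = 2.0*np.pi*f_hz                        # bandwidth, rad/s
    T = 2.0*np.pi*order/Omega                     # period of the space (T >= S)
    n = 20000
    t = np.linspace(0.0, T, n, endpoint=False)
    h = np.where(t <= S, np.exp(-t/0.02)*np.sin(2*np.pi*30*t), 0.0)
    ls = np.arange(-order, order + 1)
    e = np.exp(1j*np.outer(ls, t)*Omega/order)/np.sqrt(T)   # basis on the grid
    hl = (h[None, :]*np.conj(e)).sum(axis=1)*(T/n)          # <h, e_l>
    Ph = (hl[:, None]*e).sum(axis=0).real                   # projection P h
    m = t <= S
    return float(np.sqrt(np.sum((h - Ph)[m]**2)*(T/n)))

err_low = l2_error(10, 50.0)     # Omega = 2*pi*50 rad/s, L = 10
err_high = l2_error(40, 200.0)   # Omega = 2*pi*200 rad/s, L = 40
print(err_low, err_high)
```

The higher-bandwidth, higher-order space spans a superset of the frequencies of the lower one, so its approximation error can only decrease.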

Figure 2:

Modeling a single spike. (Top) A spike δ(t − sk) at the input to a temporal receptive field produces a kernel h at its output. (Bottom) If the spike is replaced with the RK K(t, sk), the receptive field output is a signal that closely approximates the output h produced by the spike on [0, S].

Example 1. 

Consider three arbitrarily chosen temporal receptive fields with kernels hm, m = 1, 2, 3, each with a temporal support of 100 ms and bandlimited to 100 Hz. Given three spike trains s1, s2, s3, shown in red, green, and blue in Figure 3a, and an appropriate choice of the space H (here, Ω = 2π · 100 rad/s and L = 40), we choose spikes in any window of length T (here, T = 2πL/Ω = 0.4 s, shown in yellow) and replace every such spike smk with the sampled reproducing kernel K(t, smk). We thus obtain three continuous signals that are periodic on the real line, as depicted in Figure 3b; note that each of these signals is an element of the space H. Passing the three signals through the temporal receptive fields hm, m = 1, 2, 3, produces an aggregate dendritic current that is indistinguishable on the interval [S, T] from the true dendritic current v elicited by the spiking input (depicted in green in Figure 3c). The error between the two currents is shown in Figure 3d. Note that the approximation does not work in the time interval [0, S) because the filters are causal and have memory; that is, spikes from both the present and the past affect the dendritic current.

Figure 3:

Modeling spiking input. (a) Spike streams (red, green, blue) at the input to a dendritic tree. Since the streams have no beginning and end, we pick out spikes in a window (yellow) of length T. (b) Replacing spikes with the sampled RKs results in continuous signals (red, green, blue). (c) Passing the signals through the receptive fields, we obtain an aggregate current that is the same on [S, T] (green) as the current v produced by the spiking input. (d) The error between the two currents.

2.2.  Modeling Biophysical Spike Generators.

The model of action potential generation can be chosen from a wide class of spiking point neuron models, including nonlinear conductance-based models with stable limit cycles (e.g., Hodgkin-Huxley, FitzHugh-Nagumo, Morris-Lecar; Izhikevich, 2007), as well as simpler models such as the integrate-and-fire (IF) or the threshold-and-fire neuron model. Although these models are deterministic in their original formulation, they can readily be extended to incorporate various noise sources in the form of, for example, random thresholds (Lazar & Pnevmatikakis, 2009) or stochastic gating variables (Lazar, 2010; see also section 3.3). Furthermore, given the same input, even deterministic models typically do not produce the same output, since their response fundamentally depends on the initial conditions of the dynamical system.

Here we focus on conductance-based neuron models only. For the reader's convenience, we first briefly review the Hodgkin-Huxley point neuron model. We then present another model, the reduced project-integrate-and-fire neuron with conditional phase response curves (reduced PIF-cPRC). This reduced model can be used to accurately capture the response properties of many point neuron models, including Hodgkin-Huxley, Morris-Lecar, and FitzHugh-Nagumo. Furthermore, as discussed in section 3.1, the reduced PIF-cPRC model provides a simple way to faithfully characterize the spike generation process of biological neurons when the underlying neuron parameters are not known.

2.2.1.  Conductance-Based Spike Generator Models.

The point neuron models considered here are described by the set of differential equations,
dx/dt = f(x) + [I(t), 0, …, 0]T
2.1
where the vector x describes the state of the point neuron, I(t) is the aggregate dendritic current, and xT denotes the transpose of x. A block diagram of a biophysical neuron with a single temporal receptive field is depicted in Figure 4. Here the aggregate dendritic current is assumed to be of the form I(t) = v(t) + Ib, where Ib is a constant bias term and v(t) = (u ∗ h)(t) is the output of the temporal receptive field with the kernel h processing the continuous input stimulus u.
Figure 4:

Block diagram of a simple [RF]-[biophysical neuron] circuit. Aggregate current v produced by the receptive field is encoded by a nonlinear dynamical system (e.g., Hodgkin-Huxley neuron) described by a set of differential equations dx/dt=f(x).

Example 2. 
The biophysics of action potential generation is captured by the four differential equations of the Hodgkin-Huxley neuron model (Johnston & Wu, 1994):
C dV/dt = Ib − gNa m3h(V − ENa) − gK n4(V − EK) − gL(V − EL)
dm/dt = αm(V)(1 − m) − βm(V)m
dh/dt = αh(V)(1 − h) − βh(V)h
dn/dt = αn(V)(1 − n) − βn(V)n
2.2
with
αm(V) = 0.1(25 − V)/(exp((25 − V)/10) − 1),  βm(V) = 4 exp(−V/18),
αh(V) = 0.07 exp(−V/20),  βh(V) = 1/(exp((30 − V)/10) + 1),
αn(V) = 0.01(10 − V)/(exp((10 − V)/10) − 1),  βn(V) = 0.125 exp(−V/80),
where V is the membrane potential; m, h, and n are gating variables; and Ib is a constant input (bias) current. The original HH equations above can be compactly written as dx/dt = f(x), where x = [V, m, h, n]T is a vector comprising the membrane voltage and sodium/potassium gating variables, while f = [f1, f2, f3, f4]T is the corresponding function vector. The sequence of spike times (tk) is obtained by detecting the peaks of the action potentials of the first component of the vector x, that is, the membrane potential x1 = V.
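A minimal forward-Euler simulation of equations 2.2, using the standard parameter values of the original model (our own illustrative script, with made-up bias and step sizes), shows how spike times are read off the membrane potential trace by peak detection:

```python
import numpy as np

# Forward-Euler simulation of the Hodgkin-Huxley equations in the original
# 1952 convention (V in mV measured from rest, time in ms, Ib in uA/cm^2).
def hh_spike_times(Ib, tmax=200.0, dt=0.01):
    gNa, gK, gL = 120.0, 36.0, 0.3            # maximal conductances, mS/cm^2
    ENa, EK, EL = 115.0, -12.0, 10.613        # reversal potentials, mV
    C = 1.0                                    # membrane capacitance, uF/cm^2
    am = lambda V: 0.1*(25.0 - V)/(np.exp((25.0 - V)/10.0) - 1.0)
    bm = lambda V: 4.0*np.exp(-V/18.0)
    ah = lambda V: 0.07*np.exp(-V/20.0)
    bh = lambda V: 1.0/(np.exp((30.0 - V)/10.0) + 1.0)
    an = lambda V: 0.01*(10.0 - V)/(np.exp((10.0 - V)/10.0) - 1.0)
    bn = lambda V: 0.125*np.exp(-V/80.0)
    V, m, h, n = 0.0, 0.053, 0.596, 0.318      # rest-state initial conditions
    spikes, Vpp, Vp = [], V, V
    for k in range(int(tmax/dt)):
        dV = (Ib - gNa*m**3*h*(V - ENa) - gK*n**4*(V - EK) - gL*(V - EL))/C
        m += dt*(am(V)*(1.0 - m) - bm(V)*m)
        h += dt*(ah(V)*(1.0 - h) - bh(V)*h)
        n += dt*(an(V)*(1.0 - n) - bn(V)*n)
        V += dt*dV
        # spike time = local maximum of V above 50 mV (peak detection)
        if Vp > Vpp and Vp >= V and Vp > 50.0:
            spikes.append((k - 1)*dt)
        Vpp, Vp = Vp, V
    return spikes

spikes = hh_spike_times(15.0)   # bias above the repetitive-firing threshold
print(len(spikes))
```

With the bias chosen above the onset of repetitive firing, the model settles onto a stable limit cycle and emits a regular spike train, exactly the regime assumed by the PRC analysis that follows.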

2.2.2.  Reduced Spike Generator Model.

Using nonlinear perturbation analysis, it can be shown that for weak input stimuli, the HH neuron (as well as many other conductance-based neuron models) is first-order input-output (I/O) equivalent to a reduced project-integrate-and-fire (PIF) neuron (Lazar, 2010). The PIF neuron is closely related to the ideal IAF neuron, with an additional step of projecting the external input current v(t) onto the (infinitesimal) phase response curve (PRC) (Izhikevich, 2007) of the neuron:
formula
2.3
where δk is the neuron's phase advance or delay, T is the period of the neuron, and φ1(t), t ∈ [0, T], is the PRC on a stable orbit. Equation 2.3 is also known as the t-transform of the reduced PIF neuron (Lazar, 2010). PRCs have been studied extensively in the neuroscience literature and simply describe the transient change in the cycle period of the neuron induced by a perturbation, as a function of the phase at which that perturbation is received (Izhikevich, 2007). For multidimensional models such as the Hodgkin-Huxley model, the function φ1 is the first component of the vector-valued PRC φ, corresponding to the membrane potential V.
For strong input stimuli that introduce large perturbations into the dynamics of the neuronal response, the behavior of the neuron can be accurately described by the reduced PIF-cPRC neuron, that is, the reduced PIF neuron with conditional PRCs (Kim & Lazar, 2011),
formula
2.4
where the PRC φ1k is conditioned on the input signal v. In this model the phase response curve is not frozen but is conditioned on the input signal, as visualized in Figure 5. The aggregate current v generated by the receptive field appears as an input to the PRC block. The latter depicts an entire family of PRCs and produces a PRC that is conditioned on v. The receptive field current is multiplied by the conditional PRC, and the resulting signal is encoded into a sequence of spikes by an IF-type model, in which the threshold is also conditioned on the input stimulus via the PRC. The subscript k indicates that the PRC and the threshold may change at each spike time, depending on the input signal v.
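The encoding scheme of Figure 5 can be sketched as follows. In this toy version the PRC and the threshold are frozen rather than stimulus-conditioned, and every shape and number is made up for illustration:

```python
import numpy as np

# Toy PIF-style encoder: the receptive field current v is weighted by a PRC
# over the current cycle and integrated; a spike is emitted (and the
# integrator reset) when the integral reaches the threshold theta.
dt = 1e-4                                        # s
T_orbit = 0.015                                  # assumed orbit period, s
prc = lambda s: np.sin(np.pi*s/T_orbit)**2       # made-up PRC shape
v = lambda t: 0.5 + 0.3*np.sin(2*np.pi*20.0*t)   # made-up input current

def encode(tmax=0.3, theta=0.004):
    t, last, acc, spikes = 0.0, 0.0, 0.0, []
    while t < tmax:
        acc += dt*prc(t - last)*v(t)   # project the current onto the PRC
        if acc >= theta:
            spikes.append(t)           # threshold crossed: spike and reset
            last, acc = t, 0.0
        t += dt
    return spikes

spikes = encode()
print(len(spikes))
```

In the full PIF-cPRC model, both `prc` and `theta` would be looked up from the identified family of conditional PRCs at each spike time, as depicted in the figure.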
Figure 5:

[RF]-Reduced PIF-cPRC neural circuit. Receptive field current v is encoded by a reduced PIF neuron with conditional PRCs. Both the PRC and the threshold are stimulus driven and change at spike times.

3.  Identifying Biophysical Neuron Models

In what follows we present a methodology for identifying neuron models consisting of receptive fields and spike generators. In section 3.1 we discuss the identification of the spike generator, which in the earlier literature is referred to as the point neuron. The identification of dendritic processing is presented in section 3.2, and the identification of noisy neurons is discussed in section 3.3.

3.1.  Identifying the Spike Generator.

If parameters of the spike generator are not known a priori, we can use the reduced PIF neuron with conditional PRCs introduced above to derive a first-order equivalent model. In this case, identification of the spike generator calls for finding a family of PRCs.

A number of different theoretical and experimental methods have been proposed to compute phase response curves in both model neurons (Izhikevich, 2007) and biological neurons (Netoff, Schwemmer, & Lewis, 2012). One of the most popular and widely used methods involves delivering a perturbation that is infinitesimally small in amplitude and duration and measuring its effect on the timing of subsequent spikes. Though simple in nature, this method requires delivering hundreds to thousands of precisely timed pulses of current at different phases to map out the PRC. While this procedure is easy to simulate in a model neuron, it is a daunting task for any experimentalist working with biological neurons. Furthermore, delivering perturbations that are infinitesimally small in duration is very difficult technically, and the resulting perturbations are usually spread out in time. An alternative method, referred to as Malkin's approach or the adjoint method, involves linearizing the dynamical system about a stable limit cycle and solving the corresponding adjoint equations (Malkin, 1949; Ermentrout & Chow, 2002; Izhikevich, 2007). The drawback of this method is that the differential equations describing the dynamical system need to be known a priori.
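To make the pulse protocol concrete, the toy experiment below measures the PRC of a leaky integrate-and-fire oscillator. This model is our own stand-in (not one used in the paper), chosen because its PRC is known in closed form, so the pulse-based estimate can be checked against the exact answer:

```python
import numpy as np

# Pulse-based PRC measurement on dv/dt = -v + I with threshold 1 and reset 0.
# A charge pulse eps at time t0 advances the next spike by eps*exp(t0)/I, so
# the analytic PRC (advance per unit charge) is phi(t0) = exp(t0)/I.
I, eps, dt = 2.0, 1e-3, 1e-5
T = np.log(I/(I - 1.0))                   # unperturbed period (= ln 2 here)

def spike_time(pulse_at=None):
    v, t = 0.0, 0.0
    delivered = pulse_at is None
    while True:
        v_new = v + dt*(I - v)            # forward-Euler step
        if not delivered and t + dt >= pulse_at:
            v_new += eps                  # deliver the charge pulse
            delivered = True
        if v_new >= 1.0:                  # interpolate the threshold crossing
            return t + dt*(1.0 - v)/(v_new - v)
        v, t = v_new, t + dt

T0 = spike_time()                         # unperturbed spike time
phases = np.linspace(0.1, 0.9, 5)*T
prc_est = np.array([(T0 - spike_time(p))/eps for p in phases])
prc_true = np.exp(phases)/I
print(prc_est, prc_true)
```

Even in this idealized setting, one simulation per tested phase is required, which is exactly the cost the arbitrary-waveform method introduced below avoids.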

Below we introduce a new method for estimating PRCs that does not require knowing parameters of the dynamical system or delivering pulses of current at different phases of the oscillation cycle. Instead, our method is based on injecting an arbitrary current waveform and estimating the PRC from the information contained in the spike train at the output of the neuron. Although similar in spirit to what has been suggested in Izhikevich (2007) and Netoff et al. (2012), our approach is different in that it does not use white noise stimuli and it provides strong insight into how the perturbation signal affects the estimated PRC. It is also worth pointing out that injecting truly white noise signals is not possible in an experimental setting since all electrodes have finite bandwidths. We demonstrate that if the bandwidth of the injected current is not taken into account, the estimated PRC can be substantially different from the underlying PRC of the neuron. Finally, when compared to standard pulse methods for estimating the PRC, our approach is more immune to spike generation jitter since stimulus fluctuations are spread over the entire oscillation cycle as opposed to being concentrated at a particular moment in time (Izhikevich, 2007).

Consider a point neuron model on a stable limit cycle with a period T that is generated by an input bias current Ib. Given a weak arbitrary input signal u(t), u ∈ H, the response of the point neuron is faithfully captured by the reduced PIF neuron of equation 2.4. For u ∈ H with a period T, we get
formula
3.1
where Pφ1 is the PRC projection onto H, and the equality holds provided that T ≥ tk+1 − tk, since φ1(t) = 0 for t > tk+1 − tk for most neurons, including the Hodgkin-Huxley neuron (Goel & Ermentrout, 2002). The inequality can be easily satisfied by an appropriate choice of the space H.
By the Riesz representation theorem (Berlinet & Thomas-Agnan, 2004), it follows that the right-hand side of equation 3.1 is a linear functional and
formula
where the qk denote the resulting measurements. In other words, spike time perturbations due to the weak arbitrary input can be interpreted as measurements of the projection of the PRC onto the stimulus space. To reconstruct this projection from these measurements, we have the following result:
Theorem 1. 
Let ui, i = 1, …, N, be a collection of N linearly independent weak currents perturbing the Hodgkin-Huxley neuron on a stable limit cycle with a period T. If the total number of spikes generated by the neuron in response to all N currents is at least 2L + 1 + N, then the PRC projection can be identified from a collection of I/O pairs as
formula
3.2
where Φ = [Φ1; …; ΦN] and q = [q1; …; qN]. Furthermore, [qi]k = qik, with each Φi having 2L + 1 columns and one row per measurement, and qi the corresponding column vector of measurements. The elements of the matrices Φi are given by
formula
3.3
for all i = 1, …, N, all interspike intervals k, and all l = −L, …, L.
Proof.  Since Pφ1 ∈ H, it can be written as a linear combination of the basis functions el, l = −L, …, L. Furthermore, since the stimuli ui are linearly independent, the measurements provided by the PIF neuron are distinct. Writing equation 3.1 for a stimulus ui, we obtain
formula
3.4
or qi = Φic, with [qi]k = qik and c the vector of basis coefficients. Repeating for all i = 1, …, N, we get q = Φc with Φ = [Φ1; …; ΦN] and q = [q1; …; qN]. This system of linear equations can be solved for c, provided that the rank of the matrix Φ is 2L + 1. A necessary condition for the latter is that the total number of spikes generated in response to all N signals is at least 2L + 1 + N. Then the solution can be computed as c = Φ+q, where Φ+ denotes the pseudoinverse of Φ. To find the coefficients of the projection, we note that they are given by the inner products ⟨Pφ1, el⟩.
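The final linear-algebraic step of the proof, recovering coefficients from inner-product measurements via the pseudoinverse, can be sketched with synthetic data (random matrices stand in for the actual Φi, which would be built from spike times and the RKHS basis):

```python
import numpy as np

# Synthetic sketch of the recovery step c = Phi^+ q in theorem 1: the
# measurements are linear in the 2L+1 basis coefficients of the PRC
# projection, so the coefficients solve Phi c = q.
rng = np.random.default_rng(1)
L = 8
dim = 2*L + 1                         # number of unknown coefficients
N = 40                                # number of measurements (>= 2L + 1)
c_true = rng.standard_normal(dim)     # coefficients of the PRC projection
Phi = rng.standard_normal((N, dim))   # one row per spike-derived measurement
q = Phi @ c_true                      # measurements q_k
c_hat = np.linalg.pinv(Phi) @ q       # c = Phi^+ q, as in the proof
print(float(np.max(np.abs(c_hat - c_true))))
```

With more measurements than unknowns, the pseudoinverse returns the least-squares solution, which coincides with the true coefficient vector when the measurements are noise free and the matrix has full column rank.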
Remark 2. 

Theorem 1 shows that only the PRC projection can be recovered from the recorded spike train. Note that this projection is obtained by projecting the PRC onto the space of stimuli and in general is quite different from the underlying PRC. In a practical setting, the projection is determined by the choice of stimuli in the experimental setup. Clearly, the bandwidth of the electrode or the neuron seal plays a critical role in the PRC estimate.

Remark 3. 

The arbitrary current waveforms can be delivered in either separate experiments or a single experimental trial. Since the effects of a perturbation can last longer than a single cycle (Netoff et al., 2012), each current waveform may be followed by a quiescent period to ensure that one perturbation does not influence the neuronal response to another perturbation. The resulting spike-triggered arbitrary injected current waveform protocol minimizes interactions between consecutive current waveforms and allows one to efficiently measure the PRC projection .

Example 3. 

The proposed PRC identification procedure and its performance for a Hodgkin-Huxley neuron are illustrated in Figure 6. First, a constant bias current Ib (in μA/cm2) injected into the neuron (see Figure 6a) places the state of the neuron onto a stable limit cycle. The period of the oscillation on that limit cycle can be computed by recording the stable spikes produced in response to the constant current Ib (see Figure 6b). Next, a sequence of arbitrary current waveforms of a given bandwidth and order L = 4 (see Figure 6a) is injected into the neuron, and the perturbed spikes (highlighted in red) are recorded. In this example, the waveforms are delivered at every other spike in order to minimize the effect of one perturbation on the neuronal response to a subsequent perturbation (see also remark 3). We demonstrate the perturbation experienced by the dynamical system in Figure 6d, where both the stable limit cycle produced by the current Ib and the perturbed trajectory of the Hodgkin-Huxley neuron response are plotted in a two-dimensional V–n phase plane. Using the measured oscillation period together with the injected current waveforms and the produced spike times (see Figure 6c), we identify the PRC projection and plot it together with the theoretical value of the projection and the underlying PRC in Figure 6e. Note that there is essentially no difference between the three waveforms in the latter plot.

Figure 6:

Estimating a single PRC of an HH neuron (bias current Ib in μA/cm2). (a) Injected spike-triggered arbitrary current waveform. (b) Corresponding membrane potential of the neuron. Stable and perturbed orbit spikes are shown in lighter and darker shades of gray, respectively. (c) Spike times used for the PRC identification. (d) Trajectory of the HH neuron response and the stable orbit in the V–n phase plane. (e) The original PRC, its projection onto the input current space, and the identified PRC.

Example 4. 

In the above example, the stimulus bandwidth was sufficiently high and the PRC was identified with very high precision. In general, however, the projection is stimulus (bandwidth) dependent. This dependency is demonstrated in Figure 7, where the mean-squared error between the identified PRC projection and the original PRC is depicted as a function of the stimulus bandwidth. Several identification examples are shown as insets in the plot. Note that the identified functions in the first two examples are very different from the PRC estimated in Figure 6e.

Figure 7:

PRC estimation is stimulus dependent. Mean-squared error between the original and identified PRCs is plotted in decibels (i.e., on a logarithmic scale) as a function of the stimulus bandwidth.

Remark 4. 

The identification error in Figure 7 decreases with increasing bandwidth and eventually levels off, providing us with a measurement of the PRC bandwidth. This is something that cannot be determined using the traditional white noise and pulse methods. Furthermore, glass electrodes traditionally used in the PRC estimation naturally impose a band limitation. Traditional methods do not take that fact into account and simply declare that the estimated PRC is the underlying PRC of the neuron. Our results suggest that in reality, the experimenter often finds only the PRC projected onto the space determined by the input stimuli employed and the electrode properties (bandwidth). As a result, different results may be obtained in day-to-day experiments due to the variability in electrodes, and groups using different types of electrodes may come up with different estimates.

Example 5. 

An entire family of PRCs that was estimated using the above method for 63 different limit cycles is shown in Figure 8. As the input bias current Ib increases, the limit cycle x0 of the neuron shrinks in the Vn plane (see Figure 8a). At the same time, the period of the oscillation decreases from 17.2 ms to 7.6 ms (see Figure 8b). As a result, the temporal support of each PRC decreases as well (see Figure 8d), requiring higher-bandwidth currents for estimating the underlying PRC at high spike rates. The entire family of PRCs as a function of phase and time is shown in Figure 8c and Figure 8d, respectively.

Figure 8:

A family of Hodgkin-Huxley PRCs computed using the method described in this section. (a) Limit cycles shrink as the bias input current Ib increases from to . (b) The period of the oscillation decreases with Ib. (c) Identified PRCs corresponding to each value of Ib are plotted as a function of phase. (d) Identified PRCs are plotted as a function of time.

3.2.  Identifying Dendritic Processing.

Once the biophysical spike generator has been characterized and the family of its phase response curves computed, one can begin identifying the temporal receptive fields describing the analog processing of incoming spikes by the dendritic tree. In what follows, we use the notation si = (s1i, …, sMi) to describe the M spike train sequences that stimulate the neuron on the ith trial, i = 1, …, N.

Theorem 2: Channel Identification Machine (CIM). 

Let (si)Ni=1 be a collection of spike train M-tuples at the input to a neuron with M temporal receptive fields hm, m = 1, …, M. Furthermore, let the family of conditional phase response curves, corresponding to the membrane voltage V, be known, and let ti, i = 1, …, N, be the sequences of spikes produced by the neuron. Given a space H with T ≥ S and sufficiently high order L and bandwidth Ω, the filter projections Phm can be identified from a collection of input and output spike trains (si)Ni=1 and (ti)Ni=1 as Phm(t) = Σl hml el(t), l = −L, …, L. Here the coefficients hml are given by h = Φ+q, with [qi]k = qik, provided that the matrix Φ has rank M(2L + 1). The ith block of rows of Φ is determined by the input spike times si and the conditional PRCs, with one row per measurement qik and the column index running over m = 1, …, M and l = −L, …, L.

Proof. Since each , it can be written as . Hence, for the mth component of the spike train M-tuple , we have , and the aggregate current produced by the dendritic tree is , with . Substituting the last expression into the t-transform, equation 2.4, we obtain where and follow from the Riesz representation theorem (Berlinet & Thomas-Agnan, 2004) with . In matrix form, we have with hML]T, [qi]k=qik, and . Repeating for all M-tuples , , we obtain . This system of equations can be solved for h, provided that the matrix rank . To find the coefficients , we note that .
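The final step of the proof, solving the linear system for the coefficient vector h, can be sketched as follows. The actual entries of the measurement matrix involve PRC inner products and are not reproduced above, so a randomly generated matrix stands in for it here; the sketch only illustrates the rank check and the (pseudo)inverse solve, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 2, 5                      # number of filters and RKHS order (illustrative)
dim = M * (2 * L + 1)            # number of unknown coefficients per theorem 2
n_meas = 40                      # rows: one t-transform equation per interspike interval

Phi = rng.standard_normal((n_meas, dim))   # stand-in for the measurement matrix
h_true = rng.standard_normal(dim)          # stand-in for the true coefficient vector
q = Phi @ h_true                           # measurements produced by the t-transform

assert np.linalg.matrix_rank(Phi) == dim   # the rank condition of theorem 2
h_hat = np.linalg.pinv(Phi) @ q            # least-squares / pseudoinverse solve
```

When the rank condition holds, the pseudoinverse recovers the coefficients exactly; otherwise the system is underdetermined and only a projection of h is obtained.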

Remark 5. 

To satisfy the condition of the CIM algorithm, the neuron must produce a total of at least M(2L+1)+N spikes in response to all N spike-train M-tuples. If each M-tuple is of duration T, this condition can be met by increasing the duration NT of the experimental recording.
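As a sketch of this spike budget (function names are ours, and a fixed per-trial spike count is assumed purely for illustration):

```python
def min_spikes(M, L, N):
    """Minimum total output spikes required by remark 5: M(2L+1) + N."""
    return M * (2 * L + 1) + N

def min_trials(M, L, spikes_per_trial):
    """Smallest trial count N with N * spikes_per_trial >= M(2L+1) + N.
    Assumes spikes_per_trial > 1, else the budget can never be met."""
    N = 1
    while N * spikes_per_trial < min_spikes(M, L, N):
        N += 1
    return N

# e.g. M = 2 filters and order L = 60 (as in example 10) require at least
# 2 * 121 + N = 242 + N spikes across the N trials.
```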

Example 6. 

Identification results for the circuit of Figure 1b are presented in Figure 9. Here, a single neuron receives two spiking inputs, the analog processing of which is described by two temporal receptive fields. Both temporal receptive fields were chosen arbitrarily and had a positive mean in order to generate an excitatory current when stimulated with spikes. Multiple recorded input spike trains si=(s1i, s2i) were used together with the output spike trains to identify the receptive fields. Two spike trains from the first experimental trial i=1 are shown in Figure 9a. The aggregate current produced in the dendritic tree (see Figure 9b) was then encoded into a sequence of action potentials (see Figure 9c) by the Hodgkin-Huxley model with a bias Ib=0. The corresponding sequence of spike times as measured, for example, in extracellular recordings, is shown in Figure 9d. Even for large perturbations of the dynamical system, the presented algorithm allows one to faithfully identify the two dendritic processing filters (see Figures 9f and 9g). The mean squared error between the original and identified kernels is −27 dB.

Figure 9:

Identifying receptive fields in cascade with an HH neuron. (a) Feedforward input spike trains sm1, m=1, 2. (b) Induced aggregate dendritic current v1(t). (c) Membrane potential of the HH neuron as a function of time. Dots denote the peaks of action potentials. (d) Corresponding spike times . (e) HH neuron response in the V–n phase plane. (f–g) Identified receptive field projections , m=1, 2.

3.3.  Extension to Spike Generators with Stochastic Conductances.

Since all currents flowing through the neuronal cell membrane are generated by ion channels, the opening and closing of which are probabilistic in nature (Johnston & Wu, 1994), it is natural to introduce stochastic conductances into the differential equations describing a conductance-based spike generator model. This allows one to incorporate the intrinsic noise observed in neuronal responses. For example, a Hodgkin-Huxley neuron with stochastic conductances and an additively coupled input current I(t) can be described by the following equations (Lazar, 2010),
formula
where X=[X1(t), X2(t), X3(t), X4(t)]T is a stochastic process and W=[0, W2(t), W3(t), W4(t)]T is a vector of independent Brownian motions. This neuron is, to the first order, I/O-equivalent to a project-integrate-and-fire (PIF) neuron with random thresholds, for which the t-transform is given by
formula
where the quantities qk are as provided by equation 2.4, and the error term introduced by the stochastic conductances effectively describes the spike jitter due to noise.
Example 7. 

Multiple histograms of interspike intervals produced by a Hodgkin-Huxley neuron with stochastic conductances are shown in Figure 10. Stationary independent increments Wi(t)−Wi(s), , of Brownian motions followed a normal distribution with a mean and variance . When driven by an input bias current , the neuron produced interspike intervals that followed a normal distribution, whose variance decreased with increasing values of the current Ib. The spike time jitter was as big as 1 ms for A/cm2.

Figure 10:

Spike timing jitter due to stochastic conductances (). HH neuron response (second column), limit cycle (third column), and interspike interval jitter (fourth column) are plotted for several values of the current Ib.

3.3.1.  Estimating the PRC Family from Noisy Measurements.

The methodology employed in section 3.1 can be extended within an appropriate mathematical setting to biophysical neuron models with stochastic conductances.

Since interspike intervals of a Hodgkin-Huxley neuron with stochastic conductances follow a normal distribution, we assume that , , are independent and identically distributed (i.i.d.). In the presence of noise, we can identify an estimate of that is optimal for an appropriately defined cost function. For example, we can formulate a bicriterion Tikhonov regularization problem,
formula
3.5
where the scalar provides a trade-off between the faithfulness of the identified PRC projection to the measurements (qk), k = 1, …, n − 1, and its norm.
Theorem 3. 
Problem 3.5 can be explicitly solved in analytical form. The optimal solution is achieved by
formula
3.6
where the remaining quantities are as defined in equation 3.3.
Proof. Since the minimizer is in , it is of the form given in equation 3.6. Substituting this into equation 3.5, we obtain
formula
where the entries are as defined in equation 3.3. This quadratic optimization problem can be solved analytically by expressing the objective as a convex quadratic function J, with H denoting the conjugate transpose. A vector minimizes J if and only if the gradient of J vanishes.
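The solve in equation 3.6 amounts to ridge regression. The sketch below uses a random stand-in for the measurement matrix and scales λ by the number of measurements, a common convention (the exact scaling in equations 3.5 and 3.6 is not reproduced above); it verifies that the closed-form solution zeroes the gradient of the objective:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, dim, lam = 30, 11, 1e-3

Phi = rng.standard_normal((n_meas, dim))   # stand-in for the measurement matrix
q = Phi @ rng.standard_normal(dim) + 0.05 * rng.standard_normal(n_meas)  # noisy data

# Regularized normal equations: (Phi^H Phi + n*lam*I) c = Phi^H q
A = Phi.conj().T @ Phi + n_meas * lam * np.eye(dim)
c = np.linalg.solve(A, Phi.conj().T @ q)

# c minimizes J(c) = ||q - Phi c||^2 + n*lam*||c||^2, so the gradient
# 2*(A c - Phi^H q) of this convex quadratic must vanish at the solution.
grad = A @ c - Phi.conj().T @ q
```

Because the regularized matrix A is positive definite for any λ > 0, the solution exists and is unique even when Phi itself is rank deficient.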
Example 8. 

PRC identification results for a Hodgkin-Huxley neuron with stochastic conductances are shown in Figure 11. As before, stationary independent increments Wi(t)−Wi(s), , of Brownian motions followed a normal distribution with mean and variance .

Figure 11:

Identifying PRC from noisy measurements (). (a) Limit cycles for multiple values of current Ib. (b) Period . (c) Identified PRCs plotted as functions of phase. (d) Same PRCs plotted as functions of time.

3.3.2.  Estimating Temporal Receptive Fields from Noisy Measurements.

Similarly, the CIM methodology (see theorem 2) for identifying temporal kernels describing the dendritic processing of incoming spikes can be extended to the case when the spike times produced by the neuron are noisy. For each temporal receptive field hm, , we formulate the bicriterion Tikhonov regularization problem
formula
3.7
where the scalar provides a trade-off between the faithfulness of the identified filter projection to the measurements (qk), k = 1, …, n − 1, and its norm.
Theorem 4. 
Problem 3.7 can be solved explicitly in analytical form. The optimal solution is achieved by
formula
3.8
where the remaining quantities are defined analogously to those in theorem 3.

Proof.  The proof is analogous to that of theorem 3.

Example 9. 

Identification of two temporal receptive fields in the presence of spike generation jitter is shown in Figure 12. The inset in plot Figure 12e shows the effect of stochastic conductances on the subthreshold behavior of the neuron. Although a significant amount of noise is introduced into the system, we can identify optimal temporal receptive field estimates and that are very close to the underlying kernels h1 and h2. The MSE of identification is on the order of −24 dB, as seen in Figures 12f and 12g.

Figure 12:

Identifying temporal receptive fields in a noisy circuit (). (a–g) Same as in Figure 9. The effect of the stochastic noise introduced into the HH model is shown in the inset of plot e.

4.  Identifying Spiking Neural Circuits

The methods presented in the previous section can be extended in two important directions. First, very few neurons in any organism function in isolation from each other unless they are confined to the very sensory periphery. Most neurons upstream of the periphery are usually part of a larger neural circuit and receive, in addition to feedforward inputs, lateral inputs from other neurons in the same layer. It is desirable to be able to take such inputs into account. Second, there is increasing evidence that the processing of feedforward and lateral inputs by the dendritic tree of a neuron depends on the spiking activity of the neuron itself and that there is an interaction between external inputs and the backpropagating action potential of the neuron (Stuart, Spruston, Sakmann, & Häusser, 1997; Waters et al., 2005; Rózsa, Katona, Kaszás, Szipöcs, & Vizi, 2008; Casale & McCormick, 2011). It is instructive to be able to take such interactions into account by introducing a feedback filter describing the effects of action potential generation on the activity within the dendritic tree. Such a filter can also capture the effects of various adaptation mechanisms observed in all biological neurons. As adaptation may take place on many different timescales (from milliseconds to several seconds), the temporal support of the feedback filter need not be limited to a single cycle of oscillation.

In section 4.1, we present the identification methodology and simulation results for circuits with feedforward and lateral spiking inputs. In section 4.2, feedback is present in the system. Two cases are distinguished: the feedback is fast and essentially confined to a single interspike interval, and the feedback effects are spread over multiple consecutive interspike intervals. In the former, we demonstrate how such short-lived feedback can be captured by the family of phase response curves. In the latter, we show how to identify the kernel in the feedback path.

4.1.  Circuits with Lateral Connectivity.

Without loss of generality, consider the neural circuit in Figure 13. It comprises two second-layer neurons, each receiving multiple feedforward spiking inputs from the first layer. Here index j=1, 2 labels the neurons in layer 2, and index , with , labels the input number. The processing of these feedforward inputs is described by M temporal receptive fields with kernels h1jm, where j=1, 2 and m is as before, and the first index 1 labels the input of the second layer. For simplicity, we assume that each neuron in layer 2 receives feedforward inputs only from layer 1 and that both neurons receive the same number of feedforward inputs M. In biological circuits, the number of layers and the number of inputs from these layers would be specified by the anatomy and prior knowledge about the circuit.

Figure 13:

A simple circuit with lateral connectivity.

In addition to feedforward spiking inputs, each neuron receives a lateral spiking input from the other neuron located in the same layer. The effects of these lateral inputs are described by kernels h221 and h212, where the first index indicates the output of layer number 2 and the last two indices specify the origin and destination of spikes, respectively.

The aggregate dendritic currents v1 and v2, produced by all feedforward temporal receptive fields and cross-feedback, are encoded into sequences of action potentials by biophysical spike generation models. The spike times , comprise the output of the second layer.

To summarize, each neuron receives M inputs from a presynaptic layer and 1 input from its own layer. Using the recorded spike times and , j=1, 2, , we identify 2M+2 temporal receptive fields. To that end, we have the following result:

Corollary 1. 

Let (sji)Ni=1, j=1, 2, be collections of spike train M-tuples at the input of two Hodgkin-Huxley neurons with feedforward temporal receptive fields , j=1, 2, , and lateral receptive fields h212, h221. Let and be sequences of spike times produced by the two neurons. Given a family of conditional phase response curves , corresponding to the membrane voltage V, as well as a space with and sufficiently high-order L and bandwidth , the filter projections , , and , j=1, 2, , can be identified as , , and . Here, the coefficients h212l, h221l, and h1jml are given by with , [qji]k=qjik and h=[h1; h2], where , j=1, 2, provided each matrix has rank . The ith row of is given by , with , . The entries , , are as given in theorem 2.

Proof.  The proof follows the one in theorem 2, with the addition of lateral terms.

Example 10. 

Simulation results demonstrating the performance of the above algorithm are shown in Figure 14. We used M=2 feedforward temporal receptive fields per neuron, with each receptive field kernel having a different bandwidth between rad/s and rad/s. The kernels had a positive mean to produce excitatory currents when stimulated with spikes and had a temporal support on the interval [0, 0.1] s. Kernels of the cross-feedback filters had the same temporal support, but higher bandwidths of rad/s and rad/s. Specific kernel shapes for all receptive fields were arbitrarily chosen. All feedforward and lateral spike trains were projected onto the space with a bandwidth rad/s, period T=0.25 s, and order L=60. We used N=120 experimental trials, T seconds each, for a combined duration of NT=30 s, to identify the kernels of all 2M+2=6 temporal receptive fields. The original and identified feedforward kernels corresponding to the first and second receptive field of each neuron are plotted together in Figures 14a to 14d. Similarly, the original and identified cross-feedback filters are shown in Figures 14e and 14f.

Figure 14:

Identifying receptive fields in a circuit with lateral connectivity. Four feedforward kernels (a–d), and two lateral kernels (e–f) of the circuit in Figure 13 were identified using the algorithm of corollary 1.

4.2.  Circuits with Lateral Connectivity and Feedback.

Now consider the canonical circuit of Figure 1c. In addition to lateral connections discussed in the previous section, this circuit incorporates feedback from each neuron onto itself. Depending on the nature of the feedback and whether it can be studied in isolation from other processes, there are several ways to take it into account when modeling a biological neuron.

In one such model, due to a particular dendritic ion channel density or a specific morphology and branching pattern of dendrites (Stuart et al., 1997), the feedback is localized to the axon hillock or the soma of the neuron. While such feedback may not necessarily affect the processing of synaptic inputs within the dendritic tree, it can change the encoding properties of the spike generator. We note that the latter is possible if the feedback is fast, that is, it occurs on the time scale of the nonlinear dynamical system generating the action potentials. In that case, the feedback can be taken into account by computing the PRC of the spike generator with feedback.

Example 11. 

In Figure 15 we summarize the behavior of a Hodgkin-Huxley neuron with fast inhibitory feedback. The feedback kernel was modeled using an alpha function and had a temporal support of 8 ms. As a result of the feedback current injected into the neuron at every cycle, the behavior of the resulting point neuron differs from that of the standard Hodgkin-Huxley neuron. For the same values of current Ib, the limit cycles in Figure 15c are larger in size when compared to those in Figure 8a. At the same time, the period on the stable orbit is smaller, as can be seen by comparing Figures 15d and 8b. The observed phase response curves, computed by the method presented in section 2.2 and plotted in Figure 15b, are different as well.

Figure 15:

Change in the PRC of the neuron as a consequence of fast feedback. (a) Feedback kernel with a temporal support of 8 ms (fast feedback). (b) Family of measured phase response curves. (c) Limit cycles of the modified dynamical system. (d) Period of the new system as a function of the injected current.

In other models, the action potential propagates back through the entire neuron affecting either the dendritic tree alone or both the dendritic tree and the spike generator. Both of these outcomes can be modeled by first computing the family of PRCs and then estimating the feedback kernels h211 and h222. For identifying the temporal receptive fields together with feedback and cross-feedback kernels in the circuit of Figure 1c, we have:

Corollary 2. 

Let (sji)Ni=1, j=1, 2, be collections of spike train M-tuples at the input of two Hodgkin-Huxley neurons with feedforward temporal receptive fields , j=1, 2, , lateral receptive fields h212, h221, and feedback receptive fields h211 and h222. Let and be sequences of spike times produced by the two neurons. Given a family of conditional phase response curves , corresponding to the membrane voltage V, as well as a space with and sufficiently high-order L and bandwidth , the filter projections , , , , and , j=1, 2, , can be identified as , , , , and . Here, the coefficients h211l, h212l, h221l, h222l, and h1jml are given by with , [qji]k=qjik, and h=[h1; h2], where , j=1, 2, provided each matrix has rank . The ith row of is given by , , with and . The entries , , are as given in theorem 2.

Proof.  The proof follows the one in theorem 2, with the addition of lateral and feedback terms.

Example 12. 

Two simulation results demonstrating the performance of the algorithm in corollary 2 are shown in Figures 16 and 17. In both simulations, the feedback kernels were chosen arbitrarily and had a temporal support of 100 ms. Furthermore, we assumed that the feedback was not instantaneous and instead arrived with a random time delay. For both simulations, the delay had a normal distribution with mean ms and standard deviation ms. In Figure 16, we assumed that the distribution of the delay was known, while in Figure 17, we assumed that the delay could be measured for every spike. Note that even for the case when only the distribution of the delay was known, all kernels could be reasonably identified. Comparing Figures 16 and 17, we see that while knowledge of the delay in the feedback path considerably improves identification of the feedback filters, the two identified kernels and differ from the true kernels h211 and h222 in the neighborhood of t=0. This is most likely a direct consequence of the neuronal PRC being zero or very close to zero for the first few milliseconds after an action potential is produced (see also Figure 8). In other words, for a point neuron such as the Hodgkin-Huxley neuron, a perturbation applied immediately after an action potential has, unless very strong, a minimal effect on the timing of the next action potential. As a consequence, information about a feedback signal h211(t − t1k) that is triggered by an action potential produced at time t1k cannot be faithfully encoded by the neuron for some duration of time immediately after t=t1k. Note that this is not the case for feedforward and lateral inputs, as spikes from other neurons arrive at different times relative to the spikes produced by the spike generator.

Figure 16:

Identifying spike-processing receptive fields and feedback with additional unknown gaussian spike delays. Kernels of the circuit in Figure 1c, as identified by the algorithm of corollary 2.

Figure 17:

Identifying spike-processing receptive fields and feedback with additional known gaussian spike delays. Kernels of the circuit in Figure 1c, as identified by the algorithm of corollary 2.

5.  A Brief Comparison with the GLM

Detailed conductance-based models can accurately reproduce neuronal responses to stimuli (Koch & Segev, 1989; Bower & Beeman, 1995; Gabbiani & Cox, 2010). However, due to the relatively large number of parameters needed for their description, biophysical models are computationally expensive. As a consequence, simpler phenomenological models are often used.

In recent years, a particular phenomenological model, called generalized linear model (GLM), has become popular in the neuroscience community. A single-neuron GLM is an extension of the well-known linear-nonlinear-Poisson (LNP) model (Pillow, 2007). Similar to the LNP model, the GLM omits the biophysics of spike generation. Instead, it employs a static nonlinearity to map the output of a set of linear filters into an instantaneous rate of neuronal response. However, in contrast to the LNP model, the GLM includes a feedback filter in order to overcome the inability of a simple Poisson spike generator to capture refractory effects and adaptation often observed in biological neurons (Paninski, 2004).
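A minimal single-neuron GLM of the kind just described, a cascade of a linear stimulus filter, a spike-history (feedback) filter, a static exponential nonlinearity, and Bernoulli-approximated Poisson spiking, can be sketched as follows. All filter shapes and parameter values are illustrative, not those used in the comparison below:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.001, 2.0                         # 1 ms bins, 2 s of simulation
n_bins = int(T / dt)

stim = rng.standard_normal(n_bins)         # white-noise stimulus (illustrative)
k = np.exp(-np.arange(50) * dt / 0.02)     # feedforward filter (20 ms decay)
h = -5.0 * np.exp(-np.arange(50) * dt / 0.01)  # inhibitory feedback: refractoriness
b = -2.0                                   # bias (log of the baseline rate)

spikes = np.zeros(n_bins)
drive = np.convolve(stim, k)[:n_bins]      # linear stimulus drive
for t in range(n_bins):
    # spike-history term: past spikes filtered through the feedback kernel h
    hist = sum(h[t - s] for s in range(max(0, t - 49), t) if spikes[s])
    rate = np.exp(b + drive[t] + hist)     # static exponential nonlinearity
    spikes[t] = rng.random() < rate * dt   # Bernoulli approximation of Poisson spiking
```

The feedback filter is what lets this otherwise memoryless Poisson generator mimic refractory effects: a strongly negative h suppresses the rate immediately after each spike.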

Single-neuron GLMs, as well as coupled GLMs that incorporate interactions between neurons, have been applied to many neural circuits in a number of sensory modalities (Pillow et al., 2008; Babadi, Casti, Xiao, Kaplan, & Paninski, 2010; Calabrese, Schumacher, Schneider, Paninski, & Woolley, 2011; Vidne et al., 2012). However, it is not a priori clear to what extent the simple static nonlinearities used in GLMs can account for the highly nonlinear behavior of biophysical spike generators (Izhikevich, 2007). Furthermore, it is not clear how identification of the underlying receptive fields, as well as of the coupling between neurons, is affected when spikes produced by an actual biophysical model are used in the GLM framework.

While all studies employing linear-nonlinear cascades (including LNP, GLM) demonstrate their model performance by predicting the response of a neuron to a novel stimulus, the identified and underlying filter kernels are rarely compared. Although this may not be possible when the linear-nonlinear framework is applied to real data, a simple check can be performed on simulated data. Furthermore, the novel stimulus employed for cross-validation is almost always chosen to have the same statistical properties as the stimuli used in identification. Given the theoretically infinite space of stimuli, this raises the question of how well the model performs when other stimuli are used.

To provide insight into these issues and compare the methodology proposed in this letter with a linear-nonlinear approach, we carried out extensive simulations with a circuit consisting of two coupled neurons (see Figure 1c). Each neuron had only one feedforward filter and received the same external input as the other neuron. We used a continuous feedforward signal, as methods for identifying feedforward filters with spiking input in the context of GLMs have not been described or implemented. For illustrative purposes, the feedback filter in the underlying model was set to zero. However, the identification algorithm was blind to this fact, and full connectivity was assumed during the identification procedure.

The first set of simulations was carried out with the spike generator modeled as an integrate-and-fire (IF) neuron. The IF neuron is the best-known example of a formal spiking neuron model and is the basis of many theoretical studies. Although it is an idealization of a biophysical spike generator, IF-based models have been found to reliably predict spike trains of many real neurons, including neocortical pyramidal cells, retinal ganglion cells and lateral geniculate nucleus neurons in the visual pathway (Keat, Reinagel, Reid, & Meister, 2001; Rauch, La Camera, Lüscher, Senn, & Fusi, 2003; Jolivet, Rauch, Lüscher, & Gerstner, 2005). Moreover, the IF model preserves many of the neurocomputational properties of more complex point neuron models. Specifically, in the context of the more general class of models discussed in this letter, the dynamics of spike generation of the IF neuron can be captured using phase response curves (PRCs). However, in comparison to a full-blown biophysical model, the PRC of the IF neuron exhibits a simpler functional form. It is flat, or constant as a function of phase, for an ideal IF neuron, and it is an increasing function of phase for the leaky IF neuron. For both the leaky and the ideal IF neuron, the magnitude of the PRC changes as a function of the bias current (Brown, Moehlis, & Holmes, 2004).
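The qualitative PRC shapes noted above can be checked in closed form: for the ideal IF neuron the phase advance caused by a small voltage perturbation is constant in phase, while for the leaky IF neuron it grows with phase. The threshold, reset, and parameter values below are illustrative:

```python
import math

def prc_ideal_if(phi, I=1.0, eps=1e-3, threshold=1.0):
    """Phase advance of an ideal IF neuron (V' = I, reset to 0) for a small
    voltage perturbation eps applied at phase phi in [0, 1): flat in phi."""
    return eps / I            # next spike advances by eps/I regardless of phase

def prc_leaky_if(phi, I=2.0, tau=1.0, eps=1e-3, threshold=1.0):
    """Phase advance of a leaky IF neuron (V' = -V/tau + I, reset to 0).
    V(t) = I*tau*(1 - exp(-t/tau)); the time-to-threshold from voltage v is
    tau*log((I*tau - v)/(I*tau - threshold)), so the advance grows with phase."""
    T = tau * math.log(I * tau / (I * tau - threshold))   # unperturbed period
    V = I * tau * (1.0 - math.exp(-phi * T / tau))        # voltage at phase phi
    t_rem = lambda v: tau * math.log((I * tau - v) / (I * tau - threshold))
    return t_rem(V) - t_rem(V + eps)                      # advance of the next spike

phases = [0.1, 0.5, 0.9]
flat = [prc_ideal_if(p) for p in phases]      # identical values across phases
rising = [prc_leaky_if(p) for p in phases]    # strictly increasing with phase
```

This matches the qualitative description of Brown, Moehlis, and Holmes (2004): the ideal IF PRC is constant in phase, the leaky IF PRC is an increasing function of phase, and in both cases the magnitude scales with the bias current through I.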

The results of the experiment are shown in Figure 18. The solid curve on the bottom of the plot depicts the average identification error for all four kernels (two feedforward and two lateral) as a function of the number of spikes used in the proposed channel identification machine (CIM) algorithm. The error is plotted on the logarithmic scale, and a low identification error (−32 dB; the smaller the number, the better) is achieved for only a few hundred spikes. The original and identified kernels are shown in Figure 19. Note that since the IF PRC is not zero right after the spike, the feedback kernels can be recovered (see also section 4.2).

Figure 18:

Comparison of the kernel identification error between GLM and CIM. The error was computed as an average across kernels. Solid line: GLM applied to data produced by IF neurons, assuming exponential nonlinearities. Dash-dotted line: GLM applied to the same spikes, given log-exponential nonlinearities computed from input-output data. Dashed line: GLM applied to data produced by a GLM with a log-exponential nonlinearity. Solid line with square markers: CIM applied to data produced by IF neurons.

Figure 19:

Original kernels and kernels identified using a CIM.

The solid curve at the top of Figure 18 corresponds to the average identification error when exactly the same input signals and spike trains are provided to the GLM with an exponential nonlinearity. The error does not appear to change with the number of spikes and remains well above 0 dB. For comparison, the dashed line depicts the case when the GLM methodology is applied to a spike train produced not by the IF neuron but a combination of a log-exponential nonlinearity and a Poisson spike generator, while the filters remain the same. The error steadily decreases with the number of spikes provided to the algorithm, reaching −15 dB at 4000 spikes (see Figure S3 in the online supplement for a comparison of the identified and original kernels).

These results may seem surprising, and one might be tempted to think that the GLM methodology does not recover the underlying kernels because the nonlinearity is simply assumed to be exponential (as is often done in the literature; Pillow et al., 2008) rather than being derived from data. To test this hypothesis, we estimated nonlinearities for both neurons and fitted them with log-exponential functions, which provide an alternative choice of the nonlinearity that ensures that the optimization problem is convex (see the online supplementary Figures S7 and S8). The dash-dotted curve in Figure 18 shows that while the identification improves, it does so only marginally. The actual kernels identified with 2600 spikes are shown in Figure 20.

Figure 20:

Original kernels and kernels identified with a GLM having a log-exponential nonlinearity. Spikes were produced by IF neurons, while the kernels were estimated using a GLM with log-exponential nonlinearities that were computed from the input-output data. Original and identified kernels are shown using dotted and solid lines, respectively.

Note that the magnitudes of all kernels are different from those of the underlying filters. Given that the difference in kernel magnitude may be compensated by the nonlinearity, the GLM data plotted in Figure 18 were computed for normalized kernels. In addition to the magnitude, however, the temporal profiles of all kernels are substantially different as well. While the identified feedback kernels shown in Figures 20c and 20f may be justified, given that a simple Poisson spike generator without a feedback filter cannot describe a neuron's dependence on its own spiking history, the shape of the feedforward and lateral filters should not change.

Nevertheless, and quite surprisingly, when these identified kernels are used to predict the neural circuit response to a novel stimulus with the same statistical properties, perfectly matching PSTHs are obtained. This is demonstrated in Figure 21, where we use an input signal drawn from the same RKHS and having the same mean and variance. The input stimulus, as well as the rasters and PSTHs produced by the underlying neural circuit and the GLM model, are shown in the left and right columns for the first and second neuron, respectively.

Figure 21:

The identified GLM model can predict the response to a novel stimulus with the same statistics despite using filters that are substantially different from the underlying filters.

In other words, all filters of the GLM model were identified subject to the specified nonlinearity, and despite being substantially different from the underlying filters, they can in some cases predict the circuit output. This is noteworthy, because identification of the GLM parameters is always verified by looking at the PSTH produced by the model. However, since the nonlinearity provides a simplified static description of the more complex spike generation dynamics and since the filters are fit to that nonlinearity, a question arises as to whether the filter-nonlinearity combination fully captures the behavior of the neural circuit.

Intuitively, since the class of input signals is theoretically infinite, there must exist stimuli for which the GLM prediction will break down. Indeed, this is the case, as demonstrated in Figure 22, where we use a novel stimulus with a time-varying mean. The predicted response of both neurons differs from that of the underlying circuit not only in magnitude, but also in the temporal pattern produced, as highlighted by the PSTH differences in the bottom row. Note, however, that this breakdown does not arise when a nonlinearity and a Poisson spike generator are used in the underlying model instead (see supplementary online Figure S12).

Figure 22:

The identified GLM model fails to predict the response to a novel stimulus with different statistics.


In contrast to the GLM framework, the proposed approach identifies the neural circuit in two steps: first by identifying the spike generator PRCs through current injections and then identifying the associated kernels using CIM. Since all filters are found subject to the spike generation dynamics captured by phase response curves, they match the true filters (see supplementary online Figures S13 and S14 for the Hodgkin-Huxley examples).

In that regard, although a direct comparison between the GLM and proposed methods is enlightening, it is not a complete one. The GLM framework does not require a separate characterization of the spike generator and, as demonstrated, does allow the construction of neural circuit models that can predict responses to some novel stimuli. It may not always be possible to obtain intracellular recordings from neurons in order to characterize the nonlinear dynamical system governing the spike generation. At the same time, one should be careful about interpreting identification results based solely on how well the identified model can predict the PSTH to a particular stimulus. This is especially important when the identified parameters are used to infer circuit connectivity and describe the processing performed by the system. For these and other reasons, we view the two approaches as being complementary to each other.

6.  Discussion and Future Work

In this letter, we introduced a novel approach that provides a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive, i.e., linear) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Full identification of biophysically grounded circuits with single sample path spiking inputs and outputs (as typically obtained in neurophysiology) has been an open problem until now.

Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli. For a given neural circuit, the projection is determined by the input signals used to identify the circuit parameters. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively described the relationship between underlying circuit parameters and their projections. We also derived experimental conditions under which these projections converge to the true parameters. In doing so, we achieved the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields.

The identification approach developed here does not require using white noise as input stimuli. Instead, all signals employed have a finite bandwidth in the frequency domain. By modeling stimuli as elements of an RKHS, one can work with band-limited functions while keeping the basic computational formalism of the Dirac-delta distribution on the space of functions with continuous derivatives of any order (used in white noise analysis). Formally, the key relationship in an RKHS providing this property is the reproducing kernel property (see remark 1). The identification algorithm for spiking input signals is based on the key observation that, given an appropriate choice of the RKHS, the current produced by a temporal receptive field in response to a spike is indistinguishable from the current produced in response to a reproducing kernel. This leads to a straightforward inner-product formulation of the receptive field contribution to the neuronal response. Importantly, presynaptic spikes need not be broadband Poisson, a condition that is necessary for estimating the receptive field kernels in a generalized Volterra model (GVM) (Lu, Song, & Berger, 2011). Clearly, the methodology employed can also be applied to other RKHS models of the input space, including Sobolev spaces (see Lazar & Pnevmatikakis, 2010).
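As an illustration, the reproducing kernel property can be verified numerically for the RKHS of trigonometric polynomials. The order, bandwidth, and evaluation point below are arbitrary illustrative choices; the check is simply that taking the inner product of a signal with the kernel centered at a point returns the signal's value at that point.

```python
import numpy as np

# Numerical check of the reproducing kernel property in the RKHS of
# trigonometric polynomials (L_ord, Omega, and hence T are illustrative).
L_ord = 5                        # order of the space
Omega = 2 * np.pi * 25.0         # bandwidth (rad/s)
T = 2 * np.pi * L_ord / Omega    # period of the space
ells = np.arange(-L_ord, L_ord + 1)

rng = np.random.default_rng(0)
u_l = rng.standard_normal(ells.size) + 1j * rng.standard_normal(ells.size)

def u(t):
    """A random element of the space: u(t) = sum_l u_l exp(j*l*Omega*t/L)."""
    t = np.atleast_1d(t)
    return np.exp(1j * np.outer(t, ells) * Omega / L_ord) @ u_l

def K(s, t):
    """Reproducing kernel K(s, t) = (1/T) sum_l exp(j*l*Omega*(t-s)/L)."""
    return np.exp(1j * np.outer(t - s, ells) * Omega / L_ord).sum(axis=1) / T

# Inner product <u, K(., t0)> = int_0^T u(s) conj(K(s, t0)) ds, computed as a
# uniform Riemann sum over one full period (exact for trig polynomials).
t0 = 0.3 * T
s = np.linspace(0.0, T, 4096, endpoint=False)
inner = np.sum(u(s) * np.conj(K(s, t0))) * (T / s.size)

# The reproducing property returns the point evaluation u(t0).
print(abs(inner - u(t0)[0]))     # tiny: the kernel "evaluates" u at t0
```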

Characterization of conductance-based spike generators is accomplished by identifying a family of phase-response curves (PRCs) that capture the nonlinear dynamical contribution of the axon hillock to the neuronal response. Although finding PRCs requires either prior knowledge of the biophysical model or access to the cell to perform current injections, such a capability is becoming more and more available, especially in smaller organisms such as Drosophila melanogaster. We proposed a novel approach for estimating the PRCs that involves injection of a constant-amplitude current to place the nonlinear dynamical system onto a stable limit cycle, followed by injection of spike-triggered arbitrary band-limited current waveforms to perturb the system trajectory. In contrast to standard methods, which require injecting hundreds to thousands of delta pulses at particular phases of the oscillation, we employed perturbation signals that are spread over the entire oscillation cycle. As a result, every perturbation signal explores not a small segment, but the entirety of the PRC, rendering the procedure significantly more efficient when compared to traditional approaches (Izhikevich, 2007; Netoff et al., 2012; see also section 3.1). Since the perturbation signals are not concentrated at particular moments in time, the procedure is also intrinsically more immune to experimental or system noise (see section 3.3) and allows one to reproduce exactly the same current waveform for a reliable estimation of the PRC.
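The spirit of this estimation procedure can be sketched under the standard weak-perturbation (linear phase) approximation: each band-limited perturbation current produces a per-cycle phase shift that is linear in the Fourier coefficients of the PRC, so the PRC can be recovered by least squares from many trials. The oscillator period, ground-truth PRC, and trial count below are hypothetical, and the "measured" phase shifts are generated from the model rather than from spike-time displacements.

```python
import numpy as np

# Sketch of PRC recovery from band-limited perturbations. Under weak
# coupling, Delta_k = int_0^T Z(omega*t) I_k(t) dt is linear in the Fourier
# coefficients of the PRC Z. All parameters below are hypothetical.
T = 1.0                          # oscillation period (s)
omega = 2 * np.pi / T
t = np.linspace(0, T, 2000, endpoint=False)
dt = T / t.size

def Z_true(theta):               # hypothetical ground-truth PRC
    return np.sin(theta) + 0.3 * np.sin(2 * theta + 0.5)

M = 3                            # harmonics in the expansion of Z
basis = np.array([np.ones_like(t)] + [f(k * omega * t)
                  for k in range(1, M + 1) for f in (np.cos, np.sin)])

# Band-limited perturbation currents spread over the entire cycle.
rng = np.random.default_rng(2)
n_trials = 50
I = rng.standard_normal((n_trials, basis.shape[0])) @ basis

# "Measured" per-cycle phase shifts (in an experiment these come from
# spike-time displacements; here they are computed from the model).
delta = I @ Z_true(omega * t) * dt

# Least-squares recovery of Z in the same Fourier basis.
A = I @ basis.T * dt             # trials x coefficients
coeffs, *_ = np.linalg.lstsq(A, delta, rcond=None)
Z_hat = coeffs @ basis
print(np.max(np.abs(Z_hat - Z_true(omega * t))))  # small: PRC recovered
```

Because every perturbation spans the whole cycle, each trial constrains all coefficients of the PRC at once, which is the efficiency argument made above.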

Moreover, the proposed approach provides strong insight into how perturbation signals affect the estimated PRC and establishes experimental guidelines for its estimation. Conventional methods employ very brief and large pulses of current (essentially Dirac-like pulses) to estimate the PRC. What is generally assumed is that these exact pulses infinitesimally perturb the dynamical system. However, given the finite bandwidth of electrodes used in practice, the actual current entering the neuron has a finite, not infinite, bandwidth and consequently is quite different from a Dirac pulse. In that regard, we note that the identification error shown in Figure 7 decreases with increasing bandwidth and eventually levels off, thereby providing us with a measurement of the PRC bandwidth. This result cannot be obtained by using conventional pulse-based methods, which simply declare that the estimated PRC is the underlying PRC of the neuron. Our results suggest that in reality, an experimenter often finds only the PRC projected onto the space determined by the electrode properties (bandwidth) and the input stimuli employed. Consequently, different results may be obtained in day-to-day experiments due to electrode variability, and research groups using different types of electrodes may come up with different estimates.
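The effect of finite electrode bandwidth can be made concrete: the ideal low-pass projection of a Dirac pulse is a sinc-like waveform of nonzero width, so the current actually entering the neuron is spread in time rather than concentrated at one instant. The bandwidth value below is an arbitrary illustrative choice.

```python
import numpy as np

# Ideal low-pass (bandwidth B) projection of delta(t) is 2*B*sinc(2*B*t);
# B is an assumed illustrative electrode bandwidth.
dt = 1e-5
t = np.arange(-0.05, 0.05, dt)
B = 100.0                              # electrode bandwidth (Hz), assumed
pulse_bl = 2 * B * np.sinc(2 * B * t)  # band-limited "delta"

# The projected pulse retains near-unit area (like the delta) but has a main
# lobe of width 1/B, i.e., it is far from instantaneous.
print(np.sum(pulse_bl) * dt)           # close to 1 (truncation loses a few %)
```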

Once the PRC family is computed, it can be used in conjunction with the PIF-cPRC neuron (see Figure 5) to identify the multitude of temporal receptive fields describing the processing of presynaptic spike trains. The PIF-cPRC neuron is a reduced-parameter model that has been previously investigated in Kim and Lazar (2011) and shown to provide a faithful input-output description of conductance-based models for large-amplitude currents typically observed in experiments. As with the PRC estimate, only a projection of the temporal receptive field onto the input stimulus can be recovered. However, since input spikes may not always be controlled (in contrast to PRC currents), the identification tractability is afforded by an appropriate choice of the RKHS into which these spikes are embedded (see Figure 3). An observer can simply pick a particular time interval containing spikes from a continuous spike stream (yellow region of Figure 3) and compute the aggregate current within a subset of that interval (green region of Figure 3). The computed and true currents are indistinguishable only in the green region since the filters have memory (nonzero temporal support) and the value of the dendritic current is affected not only by the spikes arriving at the present moment but also by spikes from the past (basic property of convolution). In practice, the green region is determined by the difference between the period T of the RKHS and the temporal support of the receptive field. Since an arbitrarily large T can be selected, any finite-memory receptive field can be modeled.
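The role of the filter memory in delimiting the valid ("green") region can be illustrated with a short sketch (hypothetical filter and spike statistics): the current computed from spikes observed only inside a window agrees with the true current only after one filter memory has elapsed.

```python
import numpy as np

# A filter with memory S means the dendritic current at time t depends on
# spikes in [t - S, t]. Computing the current from spikes observed only in a
# window therefore matches the true current only after the first S seconds
# of that window. Filter shape and spike rate below are hypothetical.
dt = 1e-4
S = 0.05                                   # filter memory (s)
t_h = np.arange(0, S, dt)
h = (t_h / 0.01) * np.exp(1 - t_h / 0.01)  # hypothetical alpha-like filter

rng = np.random.default_rng(3)
spikes = (rng.random(int(1.0 / dt)) < 100 * dt).astype(float)  # ~100 Hz, 1 s

# True current from the full spike history.
current = np.convolve(spikes, h, mode="full")[: spikes.size]

# Observer sees only the spikes inside the window [0.4, 0.7] s.
a, b = int(0.4 / dt), int(0.7 / dt)
windowed = np.zeros_like(spikes)
windowed[a:b] = spikes[a:b]
current_w = np.convolve(windowed, h, mode="full")[: spikes.size]

# The two currents agree only once the filter memory has elapsed; before
# that, the true current still depends on (unobserved) spikes from the past.
valid = slice(a + t_h.size, b)             # the "green" region
assert np.allclose(current[valid], current_w[valid])
```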

The general approach presented here is both flexible and scalable. For spike generators, it accommodates all spiking models for which the PRC can be computed, including conductance-based models such as Hodgkin-Huxley, Morris-Lecar, Fitzhugh-Nagumo, Wang-Buzsáki, Hindmarsh-Rose (see also a variety of models described in Izhikevich, 2007), arbitrary oscillators with multiplicative coupling (Lazar & Slutskiy, 2012), and simpler models such as the integrate-and-fire neuron (Brown et al., 2004). In order to determine the PRC, the neuron has to be driven to an oscillatory regime. However, it does not need to be tonically spiking, as the PRC methodology is applicable to bursting neurons as well (Brown et al., 2004). For circuit architectures, our approach incorporates models with complex connectivity, including circuits with a large number of feedforward, lateral, and feedback connections.

In the future, it would be instructive to extend the methods we have presented in several important directions. First, for many neural circuits, it is desirable to be able to take into account not only the spiking feedforward and lateral inputs but also various continuous inputs. Such mixed-signal models are important, for example, in studying circuits that have extensive dendro-dendritic connections (e.g., the olfactory bulb; Urban & Sakmann, 2002), or circuits that respond to a neuromodulator (global release of, e.g., dopamine or acetylcholine). Second, important models of sensory processing include receptive fields that are tuned not only to temporal but also to spatial variations of stimuli (e.g., in audition: Kowalski, Depireux, & Shamma, 1996; in vision: DeAngelis, Ohzawa, & Freeman, 1995), and our results should be extended to these models as well. Third, not all inputs may be recorded in a practical setting when dealing with a very complex system. It would be helpful to extend the methods presented here to the case when only a subset of all inputs is available to the observer. Finally, while the employed spike generation models are highly nonlinear, the temporal model of dendritic processing is linear. Extending our framework to nonlinear temporal processing would allow one to capture active processes likely to take place in neuronal dendrites (Lazar & Slutskiy, 2013). These and other related questions will be explored in greater detail elsewhere.

Acknowledgments

We thank the reviewers for their valuable suggestions on how to improve the quality and presentation of this letter. This work was supported by AFOSR under grant FA9550-12-1-0232 and by NIH under grant R021 DC012440001.

References

Babadi, B., Casti, A., Xiao, Y., Kaplan, E., & Paninski, L. (2010). A generalized linear model of the impact of direct and indirect inputs to the lateral geniculate nucleus. Journal of Vision, 10(10), 22.

Berlinet, A., & Thomas-Agnan, C. (2004). Reproducing kernel Hilbert spaces in probability and statistics. New York: Kluwer.

Bower, J., & Beeman, D. (1995). The book of Genesis. New York: Springer-Verlag.

Brown, E., Moehlis, J., & Holmes, P. (2004). On the phase reduction and response dynamics of neural oscillator populations. Neural Computation, 16, 673–715.

Calabrese, A., Schumacher, J. W., Schneider, D. M., Paninski, L., & Woolley, S. M. (2011). A generalized linear model for estimating spectrotemporal receptive fields from responses to natural sounds. PLoS One, 6(1).

Casale, A. E., & McCormick, D. A. (2011). Active action potential propagation but not initiation in thalamic interneuron dendrites. Journal of Neuroscience, 31(50), 18289–18302.

Chiang, A.-S., Lin, C.-Y., Chuang, C. C., Chang, H.-M., Hseih, C.-H., Yeh, C.-W., et al. (2011). Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution. Current Biology, 21, 1–11.

Chou, Y., Spletter, M., Yaksi, E., Leong, J., Wilson, R., & Luo, L. (2010). Diversity and wiring variability of olfactory local interneurons in the Drosophila antennal lobe. Nature Neuroscience, 13, 439–449.

DeAngelis, G. C., Ohzawa, I., & Freeman, R. D. (1995). Receptive-field dynamics in the central visual pathways. Trends in Neurosciences, 18(10), 451–458.

Ermentrout, G. B., & Chow, C. C. (2002). Modeling neural oscillations. Physiology and Behavior, 77(4–5), 629–633.

Gabbiani, F., & Cox, S. J. (2010). Mathematics for neuroscientists. Orlando, FL: Academic Press.

Goel, P., & Ermentrout, B. (2002). Synchrony, stability, and firing patterns in pulse-coupled oscillators. Physica D, 163, 191–216.

Gouwens, N. W., & Wilson, R. I. (2009). Signal propagation in Drosophila central neurons. Journal of Neuroscience, 29, 6239–6249.

Grafakos, L. (2008). Modern Fourier analysis. New York: Springer.

Hong, S., Agüera y Arcas, B., & Fairhall, A. L. (2007). Single neuron computation: From dynamical system to feature detector. Neural Computation, 112, 3133–3172.

Izhikevich, E. M. (2007). Dynamical systems in neuroscience. Cambridge, MA: MIT Press.

Johnston, D., & Wu, S. M.-S. (1994). Foundations of cellular neurophysiology. Cambridge, MA: Bradford Books.

Jolivet, R., Rauch, A., Lüscher, H.-R., & Gerstner, W. (2005). Integrate-and-fire models with adaptation are good enough: Predicting spike times under random current injection. In Y. Weiss, B. Schölkopf, & J. Platt (Eds.), Advances in neural information processing systems, 18. Cambridge, MA: MIT Press.

Keat, J., Reinagel, P., Reid, R. C., & Meister, M. (2001). Predicting every spike: A model for the responses of visual neurons. Neuron, 30, 803–817.

Kim, A. J., & Lazar, A. A. (2011). Recovery of stimuli encoded with a Hodgkin-Huxley neuron using conditional PRCs. In N. Schultheiss, A. Prinz, & R. Butera (Eds.), Phase response curves in neuroscience. New York: Springer.

Koch, C., & Segev, I. (1989). Methods in neuronal modeling. Cambridge, MA: MIT Press.

Kowalski, N., Depireux, D. A., & Shamma, S. A. (1996). Analysis of dynamic spectra in ferret primary auditory cortex. I. Characteristics of single-unit responses to moving ripple spectra. Journal of Neurophysiology, 76, 3503–3523.

Lazar, A. A. (2010). Population encoding with Hodgkin-Huxley neurons. IEEE Transactions on Information Theory, 56(2), 821–837.

Lazar, A. A., & Pnevmatikakis, E. A. (2009). Reconstruction of sensory stimuli encoded with integrate-and-fire neurons with random thresholds. EURASIP Journal on Advances in Signal Processing, 2009, 682930.

Lazar, A. A., & Pnevmatikakis, E. A. (2010). Consistent recovery of sensory stimuli encoded with MIMO neural circuits. Computational Intelligence and Neuroscience, 2010, 469568.

Lazar, A. A., & Slutskiy, Y. B. (2012). Channel identification machines. Computational Intelligence and Neuroscience, 2012, 1–20.

Lazar, A. A., & Slutskiy, Y. B. (2013). Identification of nonlinear-nonlinear neuron models and stimulus decoding. BMC Neuroscience, 14(Suppl. 1), P367.

Liang, L., & Luo, L. (2010). The olfactory circuit of the fruit fly Drosophila melanogaster. Science China, 53(4), 472–484.

Lu, U., Song, D., & Berger, T. W. (2011). Nonlinear dynamic modeling of synaptically driven single hippocampal neuron intracellular activity. IEEE Transactions on Biomedical Engineering, 58(5), 1303–1313.

Malkin, I. G. (1949). Methods of Poincaré and Liapunov in theory of non-linear oscillations [in Russian: "Metodi Puankare i Liapunova v teorii nelineinix kolebanii"]. Moscow: Gostexizdat.

Netoff, T., Schwemmer, M. A., & Lewis, T. J. (2012). Experimentally estimating phase response curves of neurons: Theoretical and practical issues. In N. W. Schultheiss, A. A. Prinz, & R. J. Butera (Eds.), Phase response curves in neuroscience. New York: Springer.

Paninski, L. (2004). Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15, 243–262.

Pillow, J. (2007). Likelihood-based approaches to modeling the neural code. In K. Doya, S. Ishii, A. Pouget, & R. P. N. Rao (Eds.), Bayesian brain: Probabilistic approaches to neural coding. Cambridge, MA: MIT Press.

Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E. J., et al. (2008). Spatio-temporal correlations and visual signaling in a complete neuronal population. Nature, 454, 995–999.

Pillow, J. W., & Simoncelli, E. P. (2003). Biases in white noise analysis due to non-Poisson spike generation. Neurocomputing, 52–54, 109–115.

Rauch, A., La Camera, G., Lüscher, H.-R., Senn, W., & Fusi, S. (2003). Neocortical pyramidal cells respond as integrate-and-fire neurons to in vivo–like input currents. Journal of Neurophysiology, 90, 1598–1612.

Rózsa, B., Katona, G., Kaszás, A., Szipöcs, R., & Vizi, E. S. (2008). Dendritic nicotinic receptors modulate backpropagating action potentials and long-term plasticity of interneurons. European Journal of Neuroscience, 27(2), 364–377.

Slee, S. J., Higgs, M. H., Fairhall, A. L., & Spain, W. J. (2005). Two-dimensional time coding in the auditory brainstem. Journal of Neuroscience, 25(43), 9978–9988.

Stuart, G., Spruston, N., Sakmann, B., & Häusser, M. (1997). Action potential initiation and backpropagation in neurons of the mammalian CNS. Trends in Neurosciences, 20(3), 125–131.

Urban, N., & Sakmann, B. (2002). Reciprocal intraglomerular excitation and intra- and interglomerular lateral inhibition between mouse olfactory bulb mitral cells. Journal of Physiology, 542, 355–367.

Vidne, M., Ahmadian, Y., Shlens, J., Pillow, J. W., Kulkarni, J., Litke, A. M., … Paninski, L. (2012). Modeling the impact of common noise inputs on the network activity of retinal ganglion cells. Journal of Computational Neuroscience, 33(1), 97–121.

Waters, J., Schaefer, A., & Sakmann, B. (2005). Backpropagating action potentials in neurones: Measurement, mechanisms and potential functions. Progress in Biophysics and Molecular Biology, 87, 145–170.

Wu, M. C.-K., David, S. V., & Gallant, J. L. (2006). Complete functional characterization of sensory neurons by system identification. Annual Review of Neuroscience, 29, 477–505.

Yaksi, E., & Wilson, R. (2010). Electrical coupling between olfactory glomeruli. Neuron, 67, 1034–1047.

Author notes

The authors’ names are alphabetically ordered.