We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons’ rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.
Understanding how neural circuits perform computation is one of the most challenging problems in neuroscience. For the fruit fly (Drosophila), the advent of the genome sequence, coupled with extensive anatomical and electrophysiological studies, has led to an exponential growth in our knowledge about the organization of many neural circuits, including detailed knowledge about circuit connectivity, circuit inputs and outputs, and morphological and biophysical properties of individual neurons (Chiang et al., 2011; Liang & Luo, 2010; Yaksi & Wilson, 2010; Chou et al., 2010; Gouwens & Wilson, 2009). Combined with the powerful genetic tools for visualizing, activating, and deactivating specific circuit components, these advances raise the possibility of providing a comprehensive functional characterization of information processing within and between neurons.
To this day, a number of methods have been proposed to quantify and model neuronal processing (see Wu, David, & Gallant, 2006, for an in-depth review). The majority of these methods assume that the input to a neural circuit is continuous and that the output is a sample path of a point process (e.g., Poisson process). In biological neural circuits, however, the inputs for most neurons are spike trains generated by presynaptic cells and the outputs are determined by a multidimensional dynamical system (Izhikevich, 2007). Furthermore, the highly nonlinear nature of spike generation has been shown to produce interactions between stimulus features (Slee, Higgs, Fairhall, & Spain, 2005; Hong, Agüera y Arcas, & Fairhall, 2007) that profoundly affect the estimation of receptive fields (Pillow & Simoncelli, 2003). Hence, there is a fundamental need to develop tractable methods for identifying neural circuits that incorporate biophysical models of spike generation and receive multidimensional spike trains as input stimuli.
Here we describe a new methodology for complete neural circuit identification that takes these considerations into account. Our neural circuit models admit multidimensional spike trains as input stimuli. They also incorporate nonlinear dynamical system models of the spike-generating mechanism (e.g., Hodgkin-Huxley, Morris-Lecar, hard-threshold IAF). The nonlinear contribution of a dynamical system such as the Hodgkin-Huxley neuron model is stimulus driven; it changes from one spike to the next and thus affects receptive field estimation if it is not properly taken into account.
Our approach builds on the observation that during any experiment, a neural circuit is projected onto a particular space of input signals, with the circuit projection determined by how well the input space explores that circuit. Employing reproducing kernel Hilbert spaces (in particular, spaces of band-limited functions) to model the input stimuli, we quantitatively describe the relationship between the underlying circuit parameters and their projections. We also derive conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize biophysical spike generators and identify the multitude of receptive fields in neural circuits with full connectivity. We estimate all model parameters directly from spike times produced by neurons, and repeating the same stimulus is no longer necessary.
2. Problem Statement and Modeling
Consider the simple neural circuit depicted in Figure 1a. The postsynaptic neuron shown in gray receives its feedforward spiking input from three cells that are highlighted in red, cyan, and green. Presynaptic action potentials, depicted as spikes on the axon terminals, are processed by the dendritic tree of the postsynaptic cell, and the resulting dendritic current is encoded into a single postsynaptic train of action potentials by the axon hillock. For the postsynaptic neuron in Figure 1a, we seek to characterize the encoding carried out by the axon hillock and identify the dendritic processing of presynaptic spikes, assuming that both presynaptic and postsynaptic spike times are available to observers.
A block diagram representation of the postsynaptic neuron in Figure 1a is shown in Figure 1b. Spikes from different presynaptic neurons arrive at times smk, where m = 1, ..., M, indexes the neuron and k indexes the spike, forming M input spike trains sm. Since in biological neurons spikes typically arrive via multiple synapses at different locations on the dendritic tree, we model the processing of each spike train sm with a temporal receptive field hm that describes the combined spatiotemporal contributions of all synapses from neuron m. Such a temporal receptive field can capture not only the overall excitatory and inhibitory nature of synapses but also the time-domain analog processing carried out by the dendritic tree. The aggregate dendritic current v = v(t) produced in the tree is then encoded by a biophysical spike generator model (e.g., the Hodgkin-Huxley model) into a sequence of spike times (tk).
Generalizing the ideas above, one can consider more complex spiking neural circuits, in which every neuron may receive not only feedforward inputs from a presynaptic layer but also lateral inputs from neurons in the same layer, and in which backpropagating action potentials (Waters, Schaefer, & Sakmann, 2005) may contribute to computations within the dendritic tree. A two-neuron circuit incorporating these considerations is shown in Figure 1c. The processing of lateral inputs in layer 2 is described by the temporal receptive fields (cross-feedback filters) h212 and h221, while the various signals produced by backpropagating action potentials are modeled by the temporal receptive fields (feedback filters) h211 and h222.
In what follows, we show that the elementary building blocks of spiking neural circuits, including temporal receptive fields and spike generators, together with all inputs and outputs, can be naturally defined as elements of a reproducing kernel Hilbert space (RKHS), thereby making circuit identification mathematically tractable. Without loss of generality, we choose to work with a space of trigonometric polynomials, defined as follows.
The space of trigonometric polynomials H is the Hilbert space of complex-valued functions u(t) = sum_{l=-L}^{L} ul el(t), where el(t) = exp(j l Omega t / L) / sqrt(T), l = -L, ..., L, is an orthonormal basis and t is in [0, T]. Here, T = 2 pi L / Omega is the period, Omega is the bandwidth, and L is the order of the space. Endowed with the inner product <u, w> = int_0^T u(t) conj(w(t)) dt, H is a reproducing kernel Hilbert space (RKHS) with a reproducing kernel (RK) given by K(t, s) = sum_{l=-L}^{L} el(t) conj(el(s)).
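The definition above can be checked numerically. The sketch below (plain NumPy; the concrete values of L, Omega, and the test point s are arbitrary choices, not taken from the text) constructs the basis and kernel and verifies the reproducing property <u, K(., s)> = u(s) for a random element u of the space.

```python
import numpy as np

# Space of trigonometric polynomials: order L, bandwidth Omega (rad/s),
# period T = 2*pi*L/Omega; basis e_l(t) = exp(j*l*Omega*t/L)/sqrt(T).
L, Omega = 5, 2 * np.pi * 25.0          # arbitrary example values
T = 2 * np.pi * L / Omega
ls = np.arange(-L, L + 1)

def e(l, t):
    return np.exp(1j * l * Omega * t / L) / np.sqrt(T)

def K(t, s):
    # reproducing kernel K(t, s) = sum_l e_l(t) * conj(e_l(s))
    return sum(e(l, t) * np.conj(e(l, s)) for l in ls)

rng = np.random.default_rng(0)
u_l = rng.standard_normal(2 * L + 1) + 1j * rng.standard_normal(2 * L + 1)

def u(t):                               # a random element of the space
    return sum(ul * e(l, t) for ul, l in zip(u_l, ls))

# reproducing property: <u, K(., s)> = u(s), with <u, w> = int_0^T u conj(w) dt
t = np.linspace(0.0, T, 20001)
s = 0.3 * T
f = u(t) * np.conj(K(t, s))
inner = np.sum((f[:-1] + f[1:]) / 2) * (t[1] - t[0])   # trapezoid rule
err = abs(inner - u(s))
```

Since every component of the integrand is T-periodic, the trapezoid rule on an equispaced grid is spectrally accurate here, so the identity holds to near machine precision.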
2.1. Modeling Dendritic Processing.
We assume that the precise shape and amplitude of action potentials received by the postsynaptic cell are of no particular significance in neural information processing. Although such information can be readily incorporated into the methods outlined below, it can be challenging to obtain in practice as it requires simultaneous intracellular (whole-cell) recordings from multiple presynaptic cells. Making the less stringent assumption that only the timing of action potentials is important, we take each spike arriving at a time sk to be a Dirac delta function delta(t - sk), so that the train of spikes sm from a presynaptic neuron m, m = 1, ..., M, is given by sm(t) = sum_k delta(t - smk) (see Figure 1). The spike times smk correspond to peaks (or troughs) of action potentials and can be readily obtained from extracellular recordings. Given presynaptic spike trains sm, m = 1, ..., M, and corresponding temporal receptive fields with kernels hm (see Figure 1b), the aggregate postsynaptic current amounts to v(t) = sum_{m=1}^{M} (sm * hm)(t), where sm * hm denotes the convolution of sm with hm.
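As a concrete illustration of the formula v(t) = sum_m (sm * hm)(t), the sketch below convolves three hypothetical spike trains with three hypothetical alpha-function kernels (all spike times, time constants, and the kernel shape are illustrative assumptions, not values from the text); convolving a kernel with a Dirac spike train reduces to summing shifted copies of the kernel.

```python
import numpy as np

def alpha_kernel(t, tau):
    # hypothetical receptive field: causal alpha function with peak 1 at t = tau
    return np.where(t >= 0.0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

taus = [0.005, 0.010, 0.020]             # one time constant per kernel h_m (s)
spikes = [np.array([0.02, 0.05, 0.11]),  # spike times s_mk of neuron m (s)
          np.array([0.03, 0.09]),
          np.array([0.01, 0.07, 0.12])]

t = np.linspace(0.0, 0.25, 2501)

# v(t) = sum_m (h_m * s_m)(t) = sum_m sum_k h_m(t - s_mk)
v = np.zeros_like(t)
for tau, s_m in zip(taus, spikes):
    for s_mk in s_m:
        v += alpha_kernel(t - s_mk, tau)
```

Because the filters are causal, the current at time t depends only on spikes that arrived before t.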
In other words, if an input spike delta(t - sk) is replaced with K(t, sk), the output of the temporal receptive field converges in the L2 norm (with increasing bandwidth Omega and order L of the space) to the output v elicited by the spike itself. This is also illustrated in Figure 2, where we compare the output of a temporal receptive field stimulated with a delta function (top row) and with an RK (bottom row).
Consider three arbitrarily chosen temporal receptive fields with kernels hm, m = 1, 2, 3, each with a temporal support of 100 ms and bandlimited to 100 Hz. Given three spike trains s1, s2, s3, shown in red, green, and blue in Figure 3a, and an appropriate choice of the space (here of order L = 40), we choose spikes in any window of length T (shown in yellow) and replace every such spike smk with the sampled reproducing kernel K(t, smk). We thus obtain three continuous signals, m = 1, 2, 3, that are periodic on the real line, as depicted in Figure 3b; note that these signals are elements of the space. Passing the three signals through the temporal receptive fields hm, m = 1, 2, 3, produces an aggregate dendritic current that is indistinguishable from the true dendritic current v on most of the window (depicted in green in Figure 3c). The error between the two currents is shown in Figure 3d. Note that the approximation does not hold at the beginning of the window because the filters are causal and have memory; that is, spikes from both the present and the past affect the dendritic current.
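The key identity behind this substitution can be checked directly: if a kernel h lies in the space, then circularly convolving h with K(., s) over one period returns h(t - s) exactly, so the RK acts as a band-limited surrogate for the Dirac delta delta(t - s). A sketch, with arbitrary example values of L, Omega, and s:

```python
import numpy as np

L, Omega = 10, 2 * np.pi * 100.0     # arbitrary example values
T = 2 * np.pi * L / Omega            # period of the space
ls = np.arange(-L, L + 1)

def e(l, x):
    return np.exp(1j * l * Omega * x / L) / np.sqrt(T)

rng = np.random.default_rng(1)
h_l = rng.standard_normal(2 * L + 1) + 1j * rng.standard_normal(2 * L + 1)

def h(x):                            # a filter kernel that lies in the space
    return sum(hl * e(l, x) for hl, l in zip(h_l, ls))

def K(x, s):                         # reproducing kernel
    return sum(e(l, x) * np.conj(e(l, s)) for l in ls)

# circular convolution of h with K(., s) over one period (midpoint rule,
# exact here because the integrand is a trigonometric polynomial)
n = 400
tau = (np.arange(n) + 0.5) * T / n
dtau = T / n
h_tau = h(tau)

t = np.linspace(0.0, T, 201)
s = 0.37 * T
out = np.array([np.sum(h_tau * K(ti - tau, s)) * dtau for ti in t])
err = np.max(np.abs(out - h(t - s)))
```

For a kernel that is only approximately band-limited, the same computation yields the projection of the kernel onto the space, which is the quantity identified in the remainder of the text.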
2.2. Modeling Biophysical Spike Generators.
The model of action potential generation can be chosen from a wide class of spiking point neuron models, including nonlinear conductance-based models with stable limit cycles (e.g., Hodgkin-Huxley, FitzHugh-Nagumo, Morris-Lecar; Izhikevich, 2007), as well as simpler models such as the integrate-and-fire (IF) or the threshold-and-fire neuron model. Although these models are deterministic in their original formulation, they can readily be extended to incorporate various noise sources in the form of, for example, random thresholds (Lazar & Pnevmatikakis, 2009) or stochastic gating variables (Lazar, 2010; see also section 3.3). Furthermore, given the same input, even deterministic models typically do not produce the same output since their response fundamentally depends on the initial conditions of the dynamical system.
Here we focus on conductance-based neuron models only. For the reader's convenience, we first briefly review the Hodgkin-Huxley point neuron model. We then present another model, the reduced project-integrate-and-fire neuron with conditional phase response curves (reduced PIF-cPRC). This reduced model can be used to accurately capture the response properties of many point neuron models, including Hodgkin-Huxley, Morris-Lecar, and FitzHugh-Nagumo. Furthermore, as discussed in section 3.1, the reduced PIF-cPRC model provides a simple way to faithfully characterize the spike generation process of biological neurons when the underlying neuron parameters are not known.
2.2.1. Conductance-Based Spike Generator Models.
2.2.2. Reduced Spike Generator Model.
3. Identifying Biophysical Neuron Models
In what follows, we present a methodology for identifying neuron models consisting of receptive fields and spike generators. In section 3.1, we discuss the identification of the spike generator; note that in the earlier literature the spike generator is referred to as a point neuron. The identification of dendritic processing is presented in section 3.2, and the identification of noisy neurons is discussed in section 3.3.
3.1. Identifying the Spike Generator.
If parameters of the spike generator are not known a priori, we can use the reduced PIF neuron with conditional PRCs introduced above to derive a first-order equivalent model. In this case, identification of the spike generator calls for finding a family of PRCs.
A number of different theoretical and experimental methods have been proposed to compute phase response curves in both model neurons (Izhikevich, 2007) and biological neurons (Netoff, Schwemmer, & Lewis, 2012). One of the most popular and widely used methods involves delivering a perturbation that is infinitesimally small in amplitude and duration and measuring its effect on the timing of subsequent spikes. Although simple in nature, this method requires delivering hundreds to thousands of precisely timed pulses of current at different phases to map out the PRC. While this procedure is easy to simulate in a model neuron, it is a daunting task for any experimentalist working with biological neurons. Furthermore, delivering perturbations that are infinitesimally small in duration is very difficult technically, and the resulting perturbations are usually spread out in time. An alternative method, referred to as Malkin's approach or the adjoint method, involves linearizing the dynamical system about a stable limit cycle and solving the corresponding adjoint equations (Malkin, 1949; Ermentrout & Chow, 2002; Izhikevich, 2007). The drawback of this method is that the differential equations describing the dynamical system need to be known a priori.
Below we introduce a new method for estimating PRCs that does not require knowing parameters of the dynamical system or delivering pulses of current at different phases of the oscillation cycle. Instead, our method is based on injecting an arbitrary current waveform and estimating the PRC from the information contained in the spike train at the output of the neuron. Although similar in spirit to what has been suggested in Izhikevich (2007) and Netoff et al. (2012), our approach is different in that it does not use white noise stimuli and it provides strong insight into how the perturbation signal affects the estimated PRC. It is also worth pointing out that injecting truly white noise signals is not possible in an experimental setting since all electrodes have finite bandwidths. We demonstrate that if the bandwidth of the injected current is not taken into account, the estimated PRC can be substantially different from the underlying PRC of the neuron. Finally, when compared to standard pulse methods for estimating the PRC, our approach is more immune to spike generation jitter since stimulus fluctuations are spread over the entire oscillation cycle as opposed to being concentrated at a particular moment in time (Izhikevich, 2007).
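To make the idea concrete, the sketch below estimates a PRC from spike times alone, using a simple phase-oscillator surrogate in place of a conductance-based neuron (the surrogate model, the "true" PRC, the number of harmonics, and all numerical values are illustrative assumptions, not the method of the text in full). Each trial injects a band-limited current over one cycle; linearizing the phase equation gives one measurement, 2 pi - omega Ti = int_0^Ti Z(omega t) Ii(t) dt, that is linear in the Fourier coefficients of the PRC Z, and least squares recovers the projection of Z onto the stimulus space.

```python
import numpy as np

rng = np.random.default_rng(2)
omega = 2 * np.pi                     # unperturbed frequency (period 1 s)
H = 3                                 # harmonics in both stimuli and PRC model
dt = 2e-4

def Z(theta):                         # "true" PRC of the surrogate oscillator
    return 0.5 * (1.0 - np.cos(theta)) + 0.2 * np.sin(theta)

def waveform(c, t):                   # band-limited injected current
    out = c[0] + np.zeros_like(t)
    for k in range(1, H + 1):
        out = out + c[2*k - 1] * np.cos(k * omega * t) \
                  + c[2*k] * np.sin(k * omega * t)
    return out

rows, q = [], []
for _ in range(100):                  # 100 one-cycle trials
    c = 0.05 * rng.standard_normal(2 * H + 1)
    I = waveform(c, np.arange(0.0, 1.5, dt))
    theta, n = 0.0, 0
    while theta < 2 * np.pi:          # dtheta/dt = omega + Z(theta) I(t)
        theta += dt * (omega + Z(theta) * I[n])
        n += 1
    Ti = n * dt                       # perturbed period
    tg = np.arange(n) * dt
    basis = [np.ones(n)]
    for k in range(1, H + 1):
        basis += [np.cos(k * omega * tg), np.sin(k * omega * tg)]
    rows.append([np.sum(b * I[:n]) * dt for b in basis])
    q.append(2 * np.pi - omega * Ti)  # linearized t-transform measurement

coef, *_ = np.linalg.lstsq(np.array(rows), np.array(q), rcond=None)

ph = np.linspace(0.0, 2 * np.pi, 400)
Z_hat = coef[0] * np.ones_like(ph)
for k in range(1, H + 1):
    Z_hat += coef[2*k - 1] * np.cos(k * ph) + coef[2*k] * np.sin(k * ph)
```

The surrogate PRC here contains only DC and first-harmonic content, so its projection onto the stimulus space coincides with the PRC itself; with stimuli of insufficient bandwidth, only the corresponding projection would be recovered, which is precisely the point made above.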
Theorem 1 shows that only the PRC projection can be recovered from the recorded spike train. Note that this projection is obtained by projecting the PRC onto the space of stimuli and is, in general, quite different from the underlying PRC. In a practical setting, the projection is determined by the choice of stimuli in the experimental setup. Clearly, the bandwidth of the electrode or of the neuron seal plays a critical role in the PRC estimate.
The arbitrary current waveforms can be delivered in either separate experiments or a single experimental trial. Since the effects of a perturbation can last longer than a single cycle (Netoff et al., 2012), each current waveform may be followed by a quiescent period to ensure that one perturbation does not influence the neuronal response to another perturbation. The resulting spike-triggered arbitrary injected current waveform protocol minimizes interactions between consecutive current waveforms and allows one to efficiently measure the PRC projection.
The proposed PRC identification procedure and its performance for a Hodgkin-Huxley neuron are illustrated in Figure 6. First, a constant bias current Ib injected into the neuron (see Figure 6a) places the state of the neuron onto a stable limit cycle. The period of the oscillation on that limit cycle can be computed by recording stable spikes produced in response to the constant current Ib (see Figure 6b). Next, a sequence of arbitrary current waveforms of order L = 4 (see Figure 6a) is injected into the neuron, and the perturbed spikes (highlighted in red) are recorded. In this example, the waveforms are delivered at every other spike in order to minimize the effect of one perturbation on the neuronal response to a subsequent perturbation (see also remark 3). We demonstrate the perturbation experienced by the dynamical system in Figure 6d, where both the stable limit cycle produced by the current Ib and the perturbed trajectory of the Hodgkin-Huxley neuron are plotted in a two-dimensional V−n phase plane. Using the measured oscillation period together with the injected current waveforms and the produced spike times (see Figure 6c), we identify the PRC projection and plot it in Figure 6d together with the theoretical value of the projection and the underlying PRC. Note that there is essentially no difference between the three waveforms in the latter plot.
In the above example, the bandwidth of the injected stimulus was high enough that the PRC was identified with very high precision. In general, however, the projection is stimulus (bandwidth) dependent. This dependency is demonstrated in Figure 7, where the mean squared error between the identified PRC projection and the original PRC is plotted as a function of the stimulus bandwidth. Several identification examples are shown as insets in the plot. Note that the identified functions in the first two examples are very different from the PRC estimated in Figure 6d.
The identification error in Figure 7 decreases with increasing bandwidth and eventually levels off, providing us with a measurement of the PRC bandwidth. This is something that cannot be determined using the traditional white noise and pulse methods. Furthermore, glass electrodes traditionally used in the PRC estimation naturally impose a band limitation. Traditional methods do not take that fact into account and simply declare that the estimated PRC is the underlying PRC of the neuron. Our results suggest that in reality, the experimenter often finds only the PRC projected onto the space determined by the input stimuli employed and the electrode properties (bandwidth). As a result, different results may be obtained in day-to-day experiments due to the variability in electrodes, and groups using different types of electrodes may come up with different estimates.
An entire family of PRCs that was estimated using the above method for 63 different limit cycles is shown in Figure 8. As the input bias current Ib increases, the limit cycle x0 of the neuron shrinks in the V−n plane (see Figure 8a). At the same time, the period of the oscillation decreases from 17.2 ms to 7.6 ms (see Figure 8b). As a result, the temporal support of each PRC decreases as well (see Figure 8d), requiring higher-bandwidth currents for estimating the underlying PRC at high spike rates. The entire family of PRCs as a function of phase and time is shown in Figure 8c and Figure 8d, respectively.
3.2. Identifying Dendritic Processing.
Once the biophysical spike generator has been characterized and the family of its phase response curves computed, one can begin identifying the temporal receptive fields describing the analog processing of incoming spikes by the dendritic tree. In what follows, we use the notation si to describe the M spike-train sequences that stimulate the neuron on the ith trial, i = 1, ..., N.
Let (si)Ni=1 be a collection of spike-train M-tuples at the input to a neuron with M temporal receptive fields hm, m = 1, ..., M. Furthermore, let the family of conditional phase response curves corresponding to the membrane voltage V be known, and let (ti)Ni=1 be the sequences of spikes produced by the neuron. Given a space of sufficiently high order L and bandwidth Omega, the filter projections Phm can be identified from the collection of input and output spike trains (si)Ni=1 and (ti)Ni=1 as (Phm)(t) = sum_{l=-L}^{L} hml el(t). Here the coefficients hml are given by the solution of the linear system q = Φh, with [qi]k = qik, provided that the matrix Φ has rank M(2L + 1). The entries of the ith row of Φ are the inner products between the basis functions and the corresponding stimuli, with the column index running over both the filter number m and the basis index l.
Proof. Since each hm is an element of the space, it can be written as hm(t) = sum_{l=-L}^{L} hml el(t). Hence, for the mth component of the spike-train M-tuple si, the filter output is a weighted sum of the basis functions, and the aggregate current produced by the dendritic tree is linear in the coefficients hml. Substituting the last expression into the t-transform, equation 2.4, we obtain a system of equations in which the measurements qik follow from the Riesz representation theorem (Berlinet & Thomas-Agnan, 2004). In matrix form, we have qi = Φih with [qi]k = qik and h the stacked vector of all coefficients hml. Repeating for all M-tuples si, i = 1, ..., N, we obtain q = Φh. This system of equations can be solved for h, provided that the matrix Φ has rank M(2L + 1). The coefficients hml are then given by h = Φ+q, where Φ+ denotes the pseudoinverse of Φ.
To satisfy the condition of the CIM algorithm, the neuron must produce a total of at least M(2L+1)+N spikes in response to all N spike-train M-tuples. If each M-tuple is of duration T, this condition can be met by increasing the duration NT of the experimental recording.
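The structure of the identification problem can be illustrated end to end with a deliberately simplified sketch: a single receptive field in a space of trigonometric polynomials, encoded by an ideal integrate-and-fire neuron in place of the conductance-based generator and PRC machinery of the text (the space parameters, input spike times, bias, and threshold are all arbitrary choices). Each interspike interval yields one t-transform measurement that is linear in the filter coefficients, and the resulting system q = Φh is solved by least squares.

```python
import numpy as np

rng = np.random.default_rng(3)
L, T = 4, 0.2                          # order and period of the space
dt = 1e-5

def B(t):                              # real basis: 1, cos/sin of k harmonics
    t = np.atleast_1d(t)
    cols = [np.ones_like(t)]
    for k in range(1, L + 1):
        w = 2 * np.pi * k / T
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.stack(cols, axis=-1)     # shape (len(t), 2L+1)

h_true = 0.3 * rng.standard_normal(2 * L + 1)       # filter coefficients
spikes_in = np.array([0.013, 0.046, 0.091, 0.152])  # input spikes in [0, T)

t = np.arange(0.0, 10 * T, dt)         # stimulate for 10 periods
g = sum(B(t - sk) for sk in spikes_in) # basis responses to the spike train
v = g @ h_true                         # aggregate dendritic current

# ideal IF encoder: integrate v + b, spike when the integral reaches delta
b, delta = 8.0, 0.5
y, ks = 0.0, []
for n in range(len(t)):
    y += (v[n] + b) * dt
    if y >= delta:
        ks.append(n)
        y -= delta                     # carry the overshoot

# t-transform: int_{t_k}^{t_{k+1}} v dt = delta - b*(t_{k+1} - t_k);
# each interspike interval gives one row of the linear system q = Phi @ h
Phi, q = [], []
for a, c in zip(ks[:-1], ks[1:]):
    Phi.append(g[a + 1:c + 1].sum(axis=0) * dt)
    q.append(delta - b * (c - a) * dt)
h_hat, *_ = np.linalg.lstsq(np.array(Phi), np.array(q), rcond=None)
```

With enough spikes (at least 2L + 1 interspike intervals here), the recovered coefficients match the projection of the filter onto the space; since h was chosen inside the space, h_hat approximates h_true itself.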
Identification results for the circuit of Figure 1b are presented in Figure 9. Here, a single neuron receives two spiking inputs, the analog processing of which is described by two temporal receptive fields. Both temporal receptive fields were chosen arbitrarily and had a positive mean in order to generate an excitatory current when stimulated with spikes. Multiple recorded input spike trains si=(s1i, s2i) were used together with the output spike trains to identify the receptive fields. Two spike trains from the first experimental trial i=1 are shown in Figure 9a. The aggregate current produced in the dendritic tree (see Figure 9b) was then encoded into a sequence of action potentials (see Figure 9c) by the Hodgkin-Huxley model with a bias Ib=0. The corresponding sequence of spike times as measured, for example, in extracellular recordings, is shown in Figure 9d. Even for large perturbations of the dynamical system, the presented algorithm allows one to faithfully identify the two dendritic processing filters (see Figures 9f and 9g). The mean squared error between the original and identified kernels is −27 dB.
3.3. Extension to Spike Generators with Stochastic Conductances.
Multiple histograms of interspike intervals produced by a Hodgkin-Huxley neuron with stochastic conductances are shown in Figure 10. The stationary independent increments Wi(t) − Wi(s), t > s, of the Brownian motions follow a normal distribution with zero mean and variance proportional to t − s. When driven by an input bias current Ib, the neuron produced interspike intervals that followed a normal distribution whose variance decreased with increasing values of the current Ib. For the smallest bias current considered, the spike time jitter was as large as 1 ms.
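The qualitative trend reported here can be reproduced with a much simpler surrogate: a nonleaky integrate-and-fire neuron driven by a bias current plus white noise (all parameters illustrative, not those of the Hodgkin-Huxley simulation). Its interspike intervals are first-passage times of a drifted Brownian motion, and their variability shrinks rapidly as the bias current grows.

```python
import numpy as np

# surrogate: dy = Ib*dt + sigma*dW, spike and reset when y reaches delta
def isi_sample(Ib, sigma=0.1, delta=1.0, dt=0.01, n_spikes=500, seed=0):
    rng = np.random.default_rng(seed)
    isis, y, t = [], 0.0, 0.0
    while len(isis) < n_spikes:
        y += Ib * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if y >= delta:
            isis.append(t)
            y, t = 0.0, 0.0
    return np.array(isis)

isi_lo = isi_sample(Ib=1.0)   # weak drive: mean ISI near 1.0 s
isi_hi = isi_sample(Ib=2.0)   # strong drive: mean ISI near 0.5 s
```

For this model the ISI distribution is inverse gaussian with mean delta/Ib and variance delta * sigma^2 / Ib^3, so doubling the bias current roughly halves the mean interval and cuts the standard deviation by a factor of about 2.8, mirroring the sharpening of the histograms in Figure 10.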
3.3.1. Estimating the PRC Family from Noisy Measurements.
The methodology employed in section 3.1 can be extended within an appropriate mathematical setting to biophysical neuron models with stochastic conductances.
PRC identification results for a Hodgkin-Huxley neuron with stochastic conductances are shown in Figure 11. As before, the stationary independent increments Wi(t) − Wi(s), t > s, of the Brownian motions follow a normal distribution with zero mean and variance proportional to t − s.
3.3.2. Estimating Temporal Receptive Fields from Noisy Measurements.
Proof. The proof is essentially the same as that of theorem 3.
Identification of two temporal receptive fields in the presence of spike generation jitter is shown in Figure 12. The inset in Figure 12e shows the effect of stochastic conductances on the subthreshold behavior of the neuron. Although a significant amount of noise is introduced into the system, we can identify optimal temporal receptive field estimates that are very close to the underlying kernels h1 and h2. The MSE of identification is on the order of −24 dB, as seen in Figures 12f and 12g.
4. Identifying Spiking Neural Circuits
The methods presented in the previous section can be extended in two important directions. First, very few neurons in any organism function in isolation; those that do are confined to the very sensory periphery. Most neurons upstream of the periphery are part of a larger neural circuit and receive, in addition to feedforward inputs, lateral inputs from other neurons in the same layer. It is desirable to be able to take such inputs into account. Second, there is increasing evidence that the processing of feedforward and lateral inputs by the dendritic tree of a neuron depends on the spiking activity of the neuron itself and that there is an interaction between external inputs and the backpropagating action potentials of the neuron (Stuart, Spruston, Sakmann, & Häusser, 1997; Waters et al., 2005; Rózsa, Katona, Kaszás, Szipöcs, & Vizi, 2008; Casale & McCormick, 2011). Such interactions can be taken into account by introducing a feedback filter describing the effects of action potential generation on the activity within the dendritic tree. Such a filter can also capture the effects of the various adaptation mechanisms observed in all biological neurons. Since adaptation may take place on many different timescales (from milliseconds to several seconds), the temporal support of the feedback filter need not be limited to a single cycle of oscillation.
In section 4.1, we present the identification methodology and simulation results for circuits with feedforward and lateral spiking inputs. In section 4.2, feedback is present in the system. Two cases are distinguished: either the feedback is fast and essentially confined to a single interspike interval, or its effects are spread over multiple consecutive interspike intervals. In the former case, we demonstrate how such short-lived feedback can be captured by the family of phase response curves. In the latter case, we show how to identify the kernel in the feedback path.
4.1. Circuits with Lateral Connectivity.
Without loss of generality, consider the neural circuit in Figure 13. It comprises two second-layer neurons, each receiving multiple feedforward spiking inputs from the first layer. Here the index j = 1, 2 labels the neurons in layer 2, and the index m, with m = 1, ..., M, labels the input number. The processing of these feedforward inputs is described by M temporal receptive fields with kernels h1jm, where j = 1, 2 and m = 1, ..., M as before, and the first index 1 labels the input of the second layer. For simplicity, we assume that each neuron in layer 2 receives feedforward inputs only from layer 1 and that both neurons receive the same number M of feedforward inputs. In biological circuits, the number of layers and the number of inputs from these layers would be specified by the anatomy and prior knowledge about the circuit.
In addition to feedforward spiking inputs, each neuron receives a lateral spiking input from the other neuron located in the same layer. The effects of these lateral inputs are described by kernels h221 and h212, where the first index indicates the output of layer number 2 and the last two indices specify the origin and destination of spikes, respectively.
The aggregate dendritic currents v1 and v2, produced by all feedforward temporal receptive fields and cross-feedback filters, are encoded into sequences of action potentials by biophysical spike generation models. The spike times of the two neurons comprise the output of the second layer.
To summarize, each neuron receives M inputs from the presynaptic layer and one input from its own layer. Using the recorded input and output spike times of both neurons, we identify the 2M + 2 temporal receptive fields. To that end, we have the following result:
Let (sji)Ni=1, j = 1, 2, be collections of spike-train M-tuples at the input of two Hodgkin-Huxley neurons with feedforward temporal receptive fields h1jm, j = 1, 2, m = 1, ..., M, and lateral receptive fields h212 and h221. Let (t1i)Ni=1 and (t2i)Ni=1 be the sequences of spike times produced by the two neurons. Given a family of conditional phase response curves corresponding to the membrane voltage V, as well as a space of sufficiently high order L and bandwidth Omega, the filter projections Ph1jm, Ph212, and Ph221, j = 1, 2, m = 1, ..., M, can be identified as sums of the basis functions el with the appropriate coefficients. The coefficients h212l, h221l, and h1jml are obtained by solving qj = Φjhj, with [qji]k = qjik and h = [h1; h2], provided each matrix Φj, j = 1, 2, has rank (M + 1)(2L + 1). The entries of the ith row of Φj are as given in theorem 2, augmented with the inner products corresponding to the lateral spike trains.
Proof. The proof follows the one in theorem 2, with the addition of lateral terms.
Simulation results demonstrating the performance of the above algorithm are shown in Figure 14. We used M = 2 feedforward temporal receptive fields per neuron, with each receptive field kernel having a different bandwidth. The kernels had a positive mean, so as to produce excitatory currents when stimulated with spikes, and a temporal support on the interval [0, 0.1] s. Kernels of the cross-feedback filters had the same temporal support but higher bandwidths. Specific kernel shapes for all receptive fields were chosen arbitrarily. All feedforward and lateral spike trains were projected onto a space with period T = 0.25 s and order L = 60. We used N = 120 experimental trials, T seconds each, for a combined duration of NT = 30 s, to identify the kernels of all 2M + 2 = 6 temporal receptive fields. The original and identified feedforward kernels corresponding to the first and second receptive fields of each neuron are plotted together in Figures 14a to 14d. Similarly, the original and identified cross-feedback filters are shown in Figures 14e and 14f.
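The joint solve underlying corollary 1 has a simple block structure: each measurement mixes contributions of the feedforward and lateral filters, so stacking the corresponding measurement blocks recovers all coefficients at once. The sketch below is purely schematic (the matrices stand in for the inner-product entries that would be computed from spike data, and all dimensions are illustrative, not those of the simulation above).

```python
import numpy as np

rng = np.random.default_rng(4)
L = 20
n_ff, n_lat = 2 * (2 * L + 1), (2 * L + 1)    # M=2 feedforward + 1 lateral filter
n_meas = 200                                  # measurements across all trials

h_ff = rng.standard_normal(n_ff)              # true coefficient vectors
h_lat = rng.standard_normal(n_lat)

Phi_ff = rng.standard_normal((n_meas, n_ff))  # stand-ins for the inner-product
Phi_lat = rng.standard_normal((n_meas, n_lat))  # entries derived from spike data

q = Phi_ff @ h_ff + Phi_lat @ h_lat           # t-transform measurements

Phi = np.hstack([Phi_ff, Phi_lat])            # block matrix [Phi_ff | Phi_lat]
h_hat, *_ = np.linalg.lstsq(Phi, q, rcond=None)
h_ff_hat, h_lat_hat = h_hat[:n_ff], h_hat[n_ff:]
```

As long as the stacked matrix has full column rank, which is the rank condition of the corollary, the feedforward and lateral coefficient blocks are recovered simultaneously.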
4.2. Circuits with Lateral Connectivity and Feedback.
Now consider the canonical circuit of Figure 1c. In addition to lateral connections discussed in the previous section, this circuit incorporates feedback from each neuron onto itself. Depending on the nature of the feedback and whether it can be studied in isolation from other processes, there are several ways to take it into account when modeling a biological neuron.
In one such model, due to a particular dendritic ion channel density or a specific morphology and branching pattern of dendrites (Stuart et al., 1997), the feedback is localized to the axon hillock or the soma of the neuron. While such feedback may not necessarily affect the processing of synaptic inputs within the dendritic tree, it can change the encoding properties of the spike generator. We note that the latter is possible if the feedback is fast, that is, it occurs on the time scale of the nonlinear dynamical system generating the action potentials. In that case, the feedback can be taken into account by computing the PRC of the spike generator with feedback.
In Figure 15 we summarize the behavior of a Hodgkin-Huxley neuron with fast inhibitory feedback. The feedback kernel was modeled using an alpha function and had a temporal support of 8 ms. As a result of the feedback current injected into the neuron at every cycle, the behavior of the resulting point neuron differs from that of the standard Hodgkin-Huxley neuron. For the same values of the current Ib, the limit cycles in Figure 15c are larger than those in Figure 8a. At the same time, the period on the stable orbit is smaller, as can be seen by comparing Figures 15d and 8b. The observed phase response curves, computed by the method presented in section 3.1 and plotted in Figure 15c, are different as well.
In other models, the action potential propagates back through the entire neuron affecting either the dendritic tree alone or both the dendritic tree and the spike generator. Both of these outcomes can be modeled by first computing the family of PRCs and then estimating the feedback kernels h211 and h222. For identifying the temporal receptive fields together with feedback and cross-feedback kernels in the circuit of Figure 1c, we have:
Corollary 2. Let $(\mathbf{s}^i_j)_{i=1}^{N}$, $j=1,2$, be collections of spike-train $M$-tuples at the input of two Hodgkin-Huxley neurons with feedforward temporal receptive fields $h^1_{jm}$, $j=1,2$, $m=1,\dots,M$, lateral receptive fields $h^2_{12}$, $h^2_{21}$, and feedback receptive fields $h^2_{11}$ and $h^2_{22}$. Let $(t^1_k)$ and $(t^2_k)$ be the sequences of spike times produced by the two neurons. Given a family of conditional phase response curves corresponding to the membrane voltage $V$, as well as a space $\mathcal{H}$ with sufficiently high order $L$ and bandwidth $\Omega$, the filter projections $\mathcal{P}h^2_{11}$, $\mathcal{P}h^2_{12}$, $\mathcal{P}h^2_{21}$, $\mathcal{P}h^2_{22}$, and $\mathcal{P}h^1_{jm}$, $j=1,2$, $m=1,\dots,M$, can be identified from their basis expansions, with coefficients $h^2_{11,l}$, $h^2_{12,l}$, $h^2_{21,l}$, $h^2_{22,l}$, and $h^1_{jm,l}$ obtained by solving the linear system $\boldsymbol{\Phi}_j\mathbf{h}=\mathbf{q}_j$, where $[\mathbf{q}^i_j]_k=q^i_{jk}$ and $\mathbf{h}=[\mathbf{h}_1;\mathbf{h}_2]$, $j=1,2$, provided each matrix $\boldsymbol{\Phi}_j$ has full column rank. The $i$th row of $\boldsymbol{\Phi}_j$ is constructed from the input spike times and the PRC, and its entries $q^i_{jk}$ are as given in theorem 2.
Proof. The proof follows the one in theorem 2, with the addition of lateral and feedback terms.
Two simulation results demonstrating the performance of the algorithm in corollary 2 are shown in Figures 16 and 17. In both simulations, the feedback kernels were chosen arbitrarily and had a temporal support of 100 ms. Furthermore, we assumed that the feedback was not instantaneous but instead arrived with a random time delay. For both simulations, the delay had a normal distribution with a fixed mean and standard deviation on the millisecond scale. In Figure 16, we assumed that the distribution of the delay was known, while in Figure 17, we assumed that the delay could be measured for every spike. Note that even when only the distribution of the delay was known, all kernels could be reasonably identified. Comparing Figures 16 and 17, we see that while knowledge of the delay in the feedback path considerably improves identification of the feedback filters, the two identified kernels $\mathcal{P}h^2_{11}$ and $\mathcal{P}h^2_{22}$ differ from the true kernels $h^2_{11}$ and $h^2_{22}$ in the neighborhood of $t=0$. This is most likely a direct consequence of the neuronal PRC being zero, or very close to zero, for the first few milliseconds after an action potential is produced (see also Figure 8). In other words, for a point neuron such as the Hodgkin-Huxley neuron, a perturbation applied immediately after an action potential has, unless very strong, a minimal effect on the timing of the next action potential. As a consequence, information about a feedback signal $h^2_{11}(t-t^1_k)$ that is triggered by an action potential produced at time $t^1_k$ cannot be faithfully encoded by the neuron for some duration immediately after $t=t^1_k$. Note that this is not the case for feedforward and lateral inputs, as spikes from other neurons arrive at different times relative to the spikes produced by the spike generator.
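The randomly delayed feedback described above can be sketched as follows. The kernel shape and the delay mean and standard deviation (the exact values used in the simulations are not reproduced here) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha(t, tau=2e-3):
    # causal alpha-function kernel with unit peak at t = tau
    return np.where(t >= 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def delayed_feedback(spike_times, t_grid, mean_delay, sd_delay):
    """Feedback current when each spike's kernel copy arrives after a random
    delay d_k ~ N(mean_delay, sd_delay): I_fb(t) = sum_k h(t - t_k - d_k).
    If only the delay distribution is known, identification must average over
    the d_k; if each d_k can be measured, the copies are placed exactly."""
    delays = np.clip(rng.normal(mean_delay, sd_delay, len(spike_times)), 0.0, None)
    out = np.zeros_like(t_grid)
    for tk, dk in zip(spike_times, delays):
        out += alpha(t_grid - tk - dk)
    return out, delays

t = np.arange(0.0, 0.05, 1e-5)
I_fb, d = delayed_feedback([0.01, 0.03], t, 3e-3, 5e-4)
```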
5. A Brief Comparison with the GLM
Detailed conductance-based models can accurately reproduce neuronal responses to stimuli (Koch & Segev, 1989; Bower & Beeman, 1995; Gabbiani & Cox, 2010). However, due to the relatively large number of parameters needed for their description, biophysical models are computationally expensive. As a consequence, simpler phenomenological models are often used.
In recent years, a particular phenomenological model, the generalized linear model (GLM), has become popular in the neuroscience community. A single-neuron GLM is an extension of the well-known linear-nonlinear-Poisson (LNP) model (Pillow, 2007). Similar to the LNP model, the GLM omits the biophysics of spike generation. Instead, it employs a static nonlinearity to map the output of a set of linear filters into an instantaneous rate of neuronal response. However, in contrast to the LNP model, the GLM includes a feedback filter in order to overcome the inability of a simple Poisson spike generator to capture refractory effects and adaptation often observed in biological neurons (Paninski, 2004).
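A minimal single-neuron GLM of the kind described here can be sketched as follows. The filter shapes, bin width, bias, and the Bernoulli approximation to Poisson spiking are all illustrative assumptions, not the letter's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def glm_spikes(stim, k, h, bias, dt, nl=np.exp):
    """Single-neuron GLM: in each time bin the instantaneous rate is
    lambda_t = f(k * stim + h * spike history + bias); a spike is then drawn
    from a Bernoulli approximation of an inhomogeneous Poisson process."""
    n = len(stim)
    spikes = np.zeros(n, dtype=int)
    drive = np.convolve(stim, k)[:n]            # stimulus (feedforward) filter
    for i in range(n):
        hist = sum(h[j] * spikes[i - 1 - j]     # post-spike (feedback) filter
                   for j in range(min(len(h), i)))
        lam = nl(drive[i] + hist + bias)        # static nonlinearity
        spikes[i] = int(rng.random() < min(lam * dt, 1.0))
    return spikes

dt = 1e-3
stim = np.sin(2 * np.pi * 2.0 * np.arange(2000) * dt)   # 2 s of a 2 Hz stimulus
k = np.full(10, 0.05)                                    # feedforward filter
h = -8.0 * np.exp(-np.arange(20) / 5.0)                  # refractory-like feedback
spikes = glm_spikes(stim, k, h, np.log(50.0), dt)        # ~50 Hz baseline rate
```

Here the negative post-spike filter is what gives the Poisson generator the refractory behavior that, as noted above, a plain LNP model lacks.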
Single-neuron as well as coupled GLMs that incorporate coupling between neurons have been applied to many neural circuits in a number of sensory modalities (Pillow et al., 2008; Babadi, Casti, Xiao, Kaplan, & Paninski, 2010; Calabrese, Schumacher, Schneider, Paninski, & Woolley, 2011; Vidne et al., 2012). However, it is not a priori clear to what extent simple static nonlinearities used in GLMs can account for the highly nonlinear behavior of biophysical spike generators (Izhikevich, 2007). Furthermore, it is not clear how identification of the underlying receptive fields as well as the coupling between neurons is affected when spikes produced by an actual biophysical model are used in the GLM framework.
While all studies employing linear-nonlinear cascades (including LNP, GLM) demonstrate their model performance by predicting the response of a neuron to a novel stimulus, the identified and underlying filter kernels are rarely compared. Although this may not be possible when the linear-nonlinear framework is applied to real data, a simple check can be performed on simulated data. Furthermore, the novel stimulus employed for cross-validation is almost always chosen to have the same statistical properties as the stimuli used in identification. Given the theoretically infinite space of stimuli, this raises the question of how well the model performs when other stimuli are used.
To provide insight into these issues and compare the methodology proposed in this letter with a linear-nonlinear approach, we carried out extensive simulations with a circuit consisting of two coupled neurons (see Figure 1c). Each neuron had only one feedforward filter and received the same external input as the other neuron. We used a continuous feedforward signal, as methods for identifying feedforward filters with spiking input in the context of GLMs have not been described or implemented. For illustrative purposes, the feedback filter in the underlying model was set to zero. However, the identification algorithm was blind to this fact, and full connectivity was assumed during the identification procedure.
The first set of simulations was carried out with the spike generator modeled as an integrate-and-fire (IF) neuron. The IF neuron is the best-known example of a formal spiking neuron model and is the basis of many theoretical studies. Although it is an idealization of a biophysical spike generator, IF-based models have been found to reliably predict spike trains of many real neurons, including neocortical pyramidal cells, retinal ganglion cells, and lateral geniculate nucleus neurons in the visual pathway (Keat, Reinagel, Reid, & Meister, 2001; Rauch, La Camera, Lüscher, Senn, & Fusi, 2003; Jolivet, Rauch, Lüscher, & Gerstner, 2005). Moreover, the IF model preserves many of the neurocomputational properties of more complex point neuron models. Specifically, in the context of the more general class of models discussed in this letter, the dynamics of spike generation of the IF neuron can be captured using phase response curves (PRCs). However, in comparison to a full-blown biophysical model, the PRC of the IF neuron has a simpler functional form: it is flat (constant as a function of phase) for an ideal IF neuron, and it is an increasing function of phase for the leaky IF neuron. For both the leaky and the ideal IF neuron, the magnitude of the PRC changes as a function of the bias current (Brown, Moehlis, & Holmes, 2004).
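The quoted PRC shapes for IF neurons can be verified directly. The sketch below adopts the normalization dV/dt = -V/tau + I with unit threshold and zero reset, an illustrative convention rather than necessarily the parameterization of Brown et al. (2004).

```python
import numpy as np

def prc_ideal_if(I, phases):
    """Ideal IF (dV/dt = I, threshold 1, reset 0): a small voltage kick eps
    advances the next spike by eps/I at *every* phase -> flat PRC at 1/I."""
    return np.full_like(phases, 1.0 / I)

def prc_leaky_if(I, tau, phases, eps=1e-6):
    """Leaky IF (dV/dt = -V/tau + I, requires I*tau > 1): the PRC is found
    numerically by perturbing V at each phase and measuring the change in
    the time remaining to threshold."""
    T = tau * np.log(I * tau / (I * tau - 1.0))       # unperturbed period
    V = I * tau * (1.0 - np.exp(-phases * T / tau))   # voltage along the cycle
    t_rem = lambda v: tau * np.log((I * tau - v) / (I * tau - 1.0))
    return (t_rem(V) - t_rem(V + eps)) / eps          # phase advance per unit V

phases = np.linspace(0.0, 0.95, 50)
flat = prc_ideal_if(2.0, phases)        # constant 1/I
leaky = prc_leaky_if(2.0, 1.0, phases)  # increasing along the cycle
```

Both PRCs scale inversely with the bias current I, matching the bias-current dependence noted in the text.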
The results of the experiment are shown in Figure 18. The solid curve on the bottom of the plot depicts the average identification error for all four kernels (two feedforward and two lateral) as a function of the number of spikes used in the proposed channel identification machine (CIM) algorithm. The error is plotted on the logarithmic scale, and a low identification error (−32 dB; the smaller the number, the better) is achieved for only a few hundred spikes. The original and identified kernels are shown in Figure 19. Note that since the IF PRC is not zero right after the spike, the feedback kernels can be recovered (see also section 4.2).
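The letter does not spell out the error metric behind the dB figures; one common choice consistent with the quoted numbers (an assumption on our part) is the mean squared kernel error relative to the true kernel's power, expressed in decibels:

```python
import numpy as np

def ident_error_db(h_true, h_hat):
    """Identification error in decibels: 10*log10(MSE / signal power).
    More negative is better; -32 dB corresponds to roughly 2.5% relative
    RMS error, and -15 dB to roughly 18%."""
    h_true = np.asarray(h_true, dtype=float)
    h_hat = np.asarray(h_hat, dtype=float)
    mse = np.mean((h_hat - h_true) ** 2)
    return 10.0 * np.log10(mse / np.mean(h_true ** 2))
```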
The solid curve at the top of Figure 18 corresponds to the average identification error when exactly the same input signals and spike trains are provided to the GLM with an exponential nonlinearity. The error does not appear to change with the number of spikes and remains well above 0 dB. For comparison, the dashed line depicts the case when the GLM methodology is applied to a spike train produced not by the IF neuron but a combination of a log-exponential nonlinearity and a Poisson spike generator, while the filters remain the same. The error steadily decreases with the number of spikes provided to the algorithm, reaching −15 dB at 4000 spikes (see Figure S3 in the online supplement for a comparison of the identified and original kernels).
These results may seem surprising, and one might be tempted to think that the GLM methodology does not recover the underlying kernels because the nonlinearity is simply assumed to be exponential (as is often done in the literature; Pillow et al., 2008) rather than being derived from data. To test this hypothesis, we estimated the nonlinearities for both neurons and fitted them with log-exponential functions, an alternative choice of nonlinearity that ensures the optimization problem remains convex (see the online supplementary Figures S7 and S8). The dash-dotted curve in Figure 18 shows that while the identification improves, it does so only marginally. The actual kernels identified with 2600 spikes are shown in Figure 20.
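Estimating the static nonlinearity from data typically amounts to binning the linear drive and computing the mean observed spike rate per bin, before fitting a parametric (e.g., log-exponential) form to the result. A sketch of that binning step, using hypothetical simulated data, is:

```python
import numpy as np

rng = np.random.default_rng(5)

def empirical_nonlinearity(drive, spikes, dt, n_bins=20):
    """Bin the linear drive u(t) by quantiles and compute the mean observed
    spike rate within each bin -- the raw estimate of the static nonlinearity
    to which a parametric form can then be fitted."""
    edges = np.quantile(drive, np.linspace(0.0, 1.0, n_bins + 1))
    edges[-1] += 1e-9                       # include the maximum in the last bin
    centers, rates = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (drive >= lo) & (drive < hi)
        if mask.any():
            centers.append(drive[mask].mean())
            rates.append(spikes[mask].mean() / dt)
    return np.array(centers), np.array(rates)

# hypothetical data: Bernoulli spikes whose probability grows with the drive
dt = 1e-3
drive = rng.standard_normal(20000)
spikes = (rng.random(20000) < 0.1 / (1.0 + np.exp(-2.0 * drive))).astype(int)
centers, rates = empirical_nonlinearity(drive, spikes, dt)
```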
Note that the magnitudes of all kernels are different from those of the underlying filters. Given that the difference in kernel magnitude may be compensated for by the nonlinearity, the GLM data plotted in Figure 18 were computed for normalized kernels. In addition to the magnitude, however, the temporal profiles of all kernels are substantially different as well. While the identified feedback kernels shown in Figures 20c and 20f may be justified, given that a simple Poisson spike generator without a feedback filter cannot describe a neuron's dependence on its own spiking history, the shape of the feedforward and lateral filters should not change.
Nevertheless, and quite surprisingly, when these identified kernels are used to predict the neural circuit response to a novel stimulus with the same statistical properties, perfectly matching PSTHs are obtained. This is demonstrated in Figure 21, where we use an input signal drawn from the same RKHS and having the same mean and variance. The input stimulus, as well as the rasters and PSTHs produced by the underlying neural circuit and the GLM model, are shown in the left and right columns for the first and second neuron, respectively.
In other words, all filters of the GLM model were identified subject to the specified nonlinearity, and despite being substantially different from the underlying filters, they can in some cases predict the circuit output. This is noteworthy, because identification of the GLM parameters is always verified by looking at the PSTH produced by the model. However, since the nonlinearity provides a simplified static description of the more complex spike generation dynamics and since the filters are fit to that nonlinearity, a question arises as to whether the filter-nonlinearity combination fully captures the behavior of the neural circuit.
Intuitively, since the class of input signals is theoretically infinite, there must exist stimuli for which the GLM prediction breaks down. Indeed, this is the case, as demonstrated in Figure 22, where we use a novel stimulus with a time-varying mean. The response of both neurons differs not only in magnitude but also in the temporal pattern produced, as highlighted by the PSTH differences in the bottom row. Note, however, that this discrepancy does not arise when a nonlinearity and a Poisson spike generator are used in the underlying model instead (see supplementary online Figure S12).
In contrast to the GLM framework, the proposed approach identifies the neural circuit in two steps: first by identifying the spike generator PRCs through current injections and then identifying the associated kernels using CIM. Since all filters are found subject to the spike generation dynamics captured by phase response curves, they match the true filters (see supplementary online Figures S13 and S14 for the Hodgkin-Huxley examples).
In that regard, although a direct comparison between the GLM and proposed methods is enlightening, it is not a complete one. The GLM framework does not require a separate characterization of the spike generator and, as demonstrated, does allow the construction of neural circuit models that can predict responses to some novel stimuli. It may not always be possible to obtain intracellular recordings from neurons in order to characterize the nonlinear dynamical system governing the spike generation. At the same time, one should be careful about interpreting identification results based solely on how well the identified model can predict the PSTH to a particular stimulus. This is especially important when the identified parameters are used to infer circuit connectivity and describe the processing performed by the system. For these and other reasons, we view the two approaches as being complementary to each other.
6. Discussion and Future Work
In this letter, we introduced a novel approach that provides a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive, i.e., linear) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Full identification of biophysically grounded circuits with single sample path spiking inputs and outputs (as typically obtained in neurophysiology) has been an open problem until now.
Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli. For a given neural circuit, the projection is determined by the input signals used to identify the circuit parameters. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively described the relationship between underlying circuit parameters and their projections. We also derived experimental conditions under which these projections converge to the true parameters. In doing so, we achieved the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields.
The identification approach developed here does not require using white noise as input stimuli. Instead, all signals employed have a finite bandwidth in the frequency domain. By modeling stimuli as elements of an RKHS, one can work with band-limited functions while keeping the basic computational formalism of the Dirac-delta distribution on the space of functions with continuous derivatives of any order (used in white noise analysis). Formally, the key relationship in an RKHS providing this property is the reproducing kernel property (see remark 1). The identification algorithm for spiking input signals is based on the key observation that, given an appropriate choice of the RKHS, the current produced by a temporal receptive field in response to a spike is indistinguishable from the current produced in response to the reproducing kernel centered at the spike time. This leads to a straightforward inner-product formulation of the receptive field contribution to the neuronal response. Importantly, presynaptic spikes need not be broadband Poisson, a condition that is necessary to estimate the receptive field kernels in a generalized Volterra model (GVM) (Lu, Song, & Berger, 2011). Clearly, the methodology employed can also be applied to other RKHS models of the input space, including Sobolev spaces (see Lazar & Pnevmatikakis, 2010).
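The reproducing kernel property invoked in remark 1 can be checked numerically for the space of trigonometric polynomials. The order L, bandwidth, and evaluation point below are arbitrary illustrative choices:

```python
import numpy as np

L, Om = 5, 2.0 * np.pi * 20.0        # order and bandwidth (rad/s), illustrative
T = 2.0 * np.pi * L / Om             # period of the space
ls = np.arange(-L, L + 1)

def e(l, t):                          # orthonormal basis e_l(t) = exp(j*l*Om*t/L)/sqrt(T)
    return np.exp(1j * l * Om * t / L) / np.sqrt(T)

def K(s, t):                          # reproducing kernel K(s,t) = sum_l e_l(s)*conj(e_l(t))
    return sum(e(l, s) * np.conj(e(l, t)) for l in ls)

rng = np.random.default_rng(2)
coeffs = rng.standard_normal(2 * L + 1) + 1j * rng.standard_normal(2 * L + 1)

s = np.linspace(0.0, T, 4001)
u = sum(c * e(l, s) for c, l in zip(coeffs, ls))   # an arbitrary element of the space

# reproducing property: <u, K(.,t0)> = integral of u(s)*conj(K(s,t0)) over [0,T] = u(t0)
t0 = 0.3 * T
f = u * np.conj(K(s, t0))
inner = (np.sum(f) - 0.5 * (f[0] + f[-1])) * (s[1] - s[0])   # trapezoid rule
u_t0 = sum(c * e(l, t0) for c, l in zip(coeffs, ls))
```

Because the integrand is periodic over [0, T], the trapezoid rule is exact here up to rounding, so the inner product reproduces the point value u(t0).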
Characterization of conductance-based spike generators is accomplished by identifying a family of phase-response curves (PRCs) that capture the nonlinear dynamical contribution of the axon hillock to the neuronal response. Although finding PRCs requires either prior knowledge of the biophysical model or access to the cell to perform current injections, such a capability is becoming more and more available, especially in smaller organisms such as Drosophila melanogaster. We proposed a novel approach for estimating the PRCs that involves injection of a constant-amplitude current to place the nonlinear dynamical system onto a stable limit cycle, followed by injection of spike-triggered arbitrary band-limited current waveforms to perturb the system trajectory. In contrast to standard methods, which require injecting hundreds to thousands of delta pulses at particular phases of the oscillation, we employed perturbation signals that are spread over the entire oscillation cycle. As a result, every perturbation signal explores not a small segment, but the entirety of the PRC, rendering the procedure significantly more efficient when compared to traditional approaches (Izhikevich, 2007; Netoff et al., 2012; see also section 3.1). Since the perturbation signals are not concentrated at particular moments in time, the procedure is also intrinsically more immune to experimental or system noise (see section 3.3) and allows one to reproduce exactly the same current waveform for a reliable estimation of the PRC.
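A toy version of this estimation idea can be sketched as follows: if, to first order, a weak perturbation i(t) applied on the limit cycle shifts the next spike time by the inner product of i with the PRC, then expanding the PRC in a truncated Fourier basis turns a set of measured shifts into a linear system. The "true" PRC, waveforms, and dimensions below are all synthetic assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
T, L = 0.02, 4                                  # oscillation period (s), basis order
ls = np.arange(-L, L + 1)
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]
basis = np.exp(2j * np.pi * np.outer(ls, t) / T)     # Fourier basis on one cycle

# a hypothetical "true" PRC: a real band-limited waveform on the cycle
c_true = rng.standard_normal(2 * L + 1)
c_true = c_true + c_true[::-1]                       # symmetric coefficients -> real PRC
prc_true = (c_true @ basis).real

# each trial: a band-limited perturbation spread over the whole cycle, and the
# induced spike-time shift delta = <PRC, i> (first-order phase model)
n_trials = 40
Phi = np.zeros((n_trials, 2 * L + 1), dtype=complex)
delta = np.zeros(n_trials)
for k in range(n_trials):
    g = rng.standard_normal(2 * L + 1) + 1j * rng.standard_normal(2 * L + 1)
    i_k = (g @ basis).real                           # perturbation waveform
    delta[k] = np.sum(prc_true * i_k) * dt           # "measured" shift
    Phi[k] = np.sum(basis * i_k, axis=1) * dt        # row of the linear system

# recover the PRC coefficients by least squares and reconstruct the curve
c_hat = np.linalg.lstsq(Phi, delta.astype(complex), rcond=None)[0]
prc_hat = (c_hat @ basis).real
```

Because every perturbation explores the entire cycle, a few dozen trials suffice to pin down all basis coefficients at once, in contrast to pulse-based methods that probe one phase per trial.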
Moreover, the proposed approach provides strong insight into how perturbation signals affect the estimated PRC and establishes experimental guidelines for its estimation. Conventional methods employ very brief and large pulses of current (essentially Dirac-like pulses) to estimate the PRC, and it is generally assumed that such pulses perturb the dynamical system instantaneously. However, given the finite bandwidth of electrodes used in practice, the actual current entering the neuron has a finite, not infinite, bandwidth and consequently is quite different from a Dirac pulse. In that regard, we note that the identification error shown in Figure 7 decreases with increasing bandwidth and eventually levels off, thereby providing us with a measurement of the PRC bandwidth. This result cannot be obtained with conventional pulse-based methods, which simply declare that the estimated PRC is the underlying PRC of the neuron. Our results suggest that in reality, an experimenter often finds only the PRC projected onto the space determined by the electrode properties (bandwidth) and the input stimuli employed. Consequently, different results may be obtained in day-to-day experiments due to electrode variability, and research groups using different types of electrodes may arrive at different estimates.
Once the PRC family is computed, it can be used in conjunction with the PIF-cPRC neuron (see Figure 5) to identify the multitude of temporal receptive fields describing the processing of presynaptic spike trains. The PIF-cPRC neuron is a reduced-parameter model that has been previously investigated in Kim and Lazar (2011) and shown to provide a faithful input-output description of conductance-based models for large-amplitude currents typically observed in experiments. As with the PRC estimate, only a projection of the temporal receptive field onto the input stimulus can be recovered. However, since input spikes may not always be controlled (in contrast to PRC currents), the identification tractability is afforded by an appropriate choice of the RKHS into which these spikes are embedded (see Figure 3). An observer can simply pick a particular time interval containing spikes from a continuous spike stream (yellow region of Figure 3) and compute the aggregate current within a subset of that interval (green region of Figure 3). The computed and true currents are indistinguishable only in the green region since the filters have memory (nonzero temporal support) and the value of the dendritic current is affected not only by the spikes arriving at the present moment but also by spikes from the past (basic property of convolution). In practice, the green region is determined by the difference between the period T of the RKHS and the temporal support of the receptive field. Since an arbitrarily large T can be selected, any finite-memory receptive field can be modeled.
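The interplay between the period T, the filter support, and the "green region" can be illustrated with a discrete convolution: with an m-tap filter observed over n samples, the computed current is uncorrupted by unobserved past spikes only on the last n - m + 1 samples, which is exactly NumPy's valid convolution mode. The sample rate and sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000                              # sample rate (Hz) -- illustrative
T_period, support = 1.0, 0.1           # observation window and filter memory (s)
n, m = int(T_period * fs), int(support * fs)

h = rng.standard_normal(m)             # a finite-memory receptive field (m taps)
x = np.zeros(n)
x[rng.integers(0, n, 40)] = 1.0        # input spikes observed on [0, T)

full = np.convolve(x, h)[:n]           # computed current over the whole window
valid = full[m - 1:]                   # "green region": only here do all
                                       # contributing past spikes fall inside
                                       # the observed window (length n - m + 1)
```

Enlarging T while the filter memory stays fixed grows the valid region, which is why any finite-memory receptive field can be accommodated by choosing T large enough.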
The general approach presented here is both flexible and scalable. For spike generators, it accommodates all spiking models for which the PRC can be computed, including conductance-based models such as Hodgkin-Huxley, Morris-Lecar, FitzHugh-Nagumo, Wang-Buzsáki, and Hindmarsh-Rose (see also the variety of models described in Izhikevich, 2007), arbitrary oscillators with multiplicative coupling (Lazar & Slutskiy, 2012), and simpler models such as the integrate-and-fire neuron (Brown et al., 2004). In order to determine the PRC, the neuron has to be driven to an oscillatory regime. However, it does not need to be tonically spiking, as the PRC methodology is applicable to bursting neurons as well (Brown et al., 2004). For circuit architectures, our approach accommodates models with complex connectivity, including circuits with a large number of feedforward, lateral, and feedback connections.
In the future, it would be instructive to extend the methods we have presented in several important directions. First, for many neural circuits, it is desirable to be able to take into account not only the spiking feedforward and lateral inputs but also various continuous inputs. Such mixed-signal models are important, for example, in studying circuits that have extensive dendro-dendritic connections (e.g., the olfactory bulb; Urban & Sakmann, 2002) or circuits that respond to a neuromodulator (global release of, e.g., dopamine or acetylcholine). Second, important models of sensory processing include receptive fields that are tuned not only to temporal but also to spatial variations of stimuli (e.g., audition: Kowalski, Depireux, & Shamma, 1996; vision: DeAngelis, Ohzawa, & Freeman, 1995), and our results should be extended to these models as well. Third, not all inputs may be recorded in a practical setting when dealing with a very complex system. It would be helpful to extend the methods presented here to the case when only a subset of all inputs is available to the observer. Finally, while the employed spike generation models are highly nonlinear, the temporal model of dendritic processing is linear. Extending our framework to nonlinear temporal processing would allow one to capture active processes likely to take place in neuronal dendrites (Lazar & Slutskiy, 2013). These and other related questions will be explored in greater detail elsewhere.
We thank the reviewers for their valuable suggestions on how to improve the quality and presentation of this letter. This work was supported by AFOSR under grant FA9550-12-1-0232 and by NIH under grant R021 DC012440001.
The authors’ names are alphabetically ordered.