## Abstract

We use high-order approximation schemes for the space derivatives in the nonlinear cable equation and investigate the behavior of numerical solution errors by using exact solutions, where available, and grid convergence. The space derivatives are numerically approximated by means of differentiation matrices. Nonlinearity in the equation arises from the Hodgkin-Huxley dynamics of the gating variables for ion channels. We have investigated in particular the effects of synaptic current distribution and compared the accuracy of the spectral solutions with that of finite differencing. A flexible form for the injected current is used that can be adjusted smoothly from a very broad to a narrow peak, and that furthermore leads, for the passive cable, to a simple, exact solution. We have used three distinct approaches to assess the numerical solutions: comparison with exact solutions in an unbranched passive cable, the convergence of solutions with progressive refinement of the grid in an active cable, and the simulation of spike initiation in a biophysically realistic single-neuron model. The spectral method provides good numerical solutions for passive cables, comparable in accuracy to those from the second-order finite difference method, and far greater accuracy when the simulated system is driven by inputs that are smoothly distributed in space. It also provides faster convergence in active cables and in a realistic neuron model, owing to its better approximation of propagating spikes.

## 1 Introduction

Accurate modeling and simulation of neuronal behavior is crucial for estimating biophysical characteristics of neurons and understanding their information processing abilities. This requires mathematical modeling of the electrical properties of the neuronal membrane, which has been attempted at least since Lapicque (see Brunel & van Rossum, 2007, for a review) and with much greater detail and precision since Hodgkin and Huxley (1952). Since neurons consist mostly of elongated branching processes, they can be modeled through an appropriate cable equation. The cable equation enables an extensive body of analytical and numerical techniques to be brought into the study of neuronal behavior. While cable theory has been primarily analytical and based on modeling passive dendritic trees (Rall, 1989), the inclusion of voltage-gated ion channels and other features of real neurons has led to numerous computational results, as well as versatile simulation packages such as NEURON (Carnevale & Hines, 2006) and GENESIS (Bower & Beeman, 1998).

Most research using the neuronal cable equation is carried out computationally because of the complex geometries of cells, nonlinear dependence of membrane currents on the membrane potential, the widespread existence of dendritic nonlinearities, the fact that dendritic trees can comprise most of the entire surface area of a neuron, and the increasingly convenient availability of digital computing. Nonlinearity alone almost surely necessitates numerical solutions. A partial differential equation obtained from cable theory contains both space and time derivatives, which need to be approximated in order to obtain a set of algebraic equations suitable for solution on a computer. The numerical approximation of space derivatives in these studies is often based on second-order finite differences, which is a practical method that is reasonably accurate under a wide variety of conditions and is straightforward to implement. When complemented by the Hines method of reordering grid points at section boundaries, second-order finite differencing can lead to an efficient numerical representation of a neuron, where the discretized dynamics of an entire set of branching cables is captured in a tridiagonal matrix (Hines, 1984).

In this study, we investigate the feasibility of high-order schemes for modeling active cables, in particular, the spectral method and finite difference schemes of higher than second order. The order of the approximation of space derivatives indicates the dependence of the truncation error on the characteristic grid spacing and serves as the measure of accuracy of a numerical scheme. The familiar second-order central difference formula, *dV*/*dx* = [*V*(*x* + Δ*x*) − *V*(*x* − Δ*x*)]/(2Δ*x*) + *O*(*N*^{−2}), where *N* ∼ 1/Δ*x* is the grid size and *V* is the function we wish to differentiate, provides a simple illustration of this dependence. The formula follows from a set of Taylor expansions or fitting a second-degree polynomial (parabola) to *V* at the grid points *x* − Δ*x*, *x*, and *x* + Δ*x*. Higher-order approximations involve higher-degree polynomials and a greater number of neighboring points, leading to the decay of the truncation error as higher powers of *N* (Fletcher, 1988).
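
This scaling is easy to observe numerically. The sketch below (a Python/NumPy illustration of our own, not code from the study) applies the central difference formula to sin *x* and confirms that doubling the grid size reduces the maximum error roughly fourfold:

```python
import numpy as np

def central_diff_error(N):
    """Maximum error of the second-order central difference applied to sin(x)."""
    x = np.linspace(0.0, np.pi, N + 1)
    dx = x[1] - x[0]
    V = np.sin(x)
    dV = (V[2:] - V[:-2]) / (2.0 * dx)       # derivative at interior points only
    return float(np.max(np.abs(dV - np.cos(x[1:-1]))))

e1 = central_diff_error(20)
e2 = central_diff_error(40)
ratio = e1 / e2   # second order: doubling N shrinks the error about 4-fold
```

The observed ratio hovers near 4, as the *O*(*N*^{−2}) truncation error predicts.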

Spectral methods, on the other hand, commonly start with an infinite series *V* = ∑ *a*_{k}ϕ_{k}(*x*), where each basis function ϕ_{k} is globally defined (i.e., typically nonzero everywhere in the computational domain). When the series is truncated to *N* terms, as required in a practical implementation, the resulting approximation error can be shown to decay faster than any power of *N* for sufficiently smooth *V*, leading to what is often called spectral, infinite-order, or exponential accuracy. In the Galerkin formulation of the spectral method, the problem is transformed entirely into the spectral domain and expressed in terms of a set of ordinary differential equations governing the behavior of the coefficients *a*_{k}. In nonlinear problems, this is usually cumbersome and unsuitable. In the collocation approach adopted in this letter, by contrast, the spectral approximation is employed solely for calculating the space derivatives, and the problem is ultimately formulated in terms of the values of *V* at grid points. The ability to easily incorporate boundary conditions and efficiently compute nonlinear terms is among the advantages of the collocation approach.

In a bounded nonperiodic computational domain where *V* is defined in a finite region, as in most biological problems, the basis functions of choice are Chebyshev polynomials. They concentrate grid points near the boundaries in a way that can be shown to be optimal for eliminating numerical oscillations associated with equally spaced grids and trigonometric basis functions (Trefethen, 2000; Fornberg, 1998b). Note that the overall solution accuracy in a simulation depends not only on the order of the method but also on other problem-dependent circumstances, such as the smoothness of the spatial distribution of parameter values and whether a grid of appropriate size has been used. A detailed presentation of spectral methods, which are widely used in engineering and applied mathematics, is provided by Canuto, Hussaini, Quarteroni, and Zang (2007), who also supply numerous applications and further references.
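
The Chebyshev differentiation matrix of Trefethen (2000) is short enough to reproduce here. The following is our own Python translation of his `cheb` construction (the study itself used the Matlab version), together with a quick check of spectral accuracy on a smooth function:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on x_j = cos(j*pi/N), j = 0..N.

    A direct translation of the construction given by Trefethen (2000).
    """
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal: negative row sums
    return D, x

# Differentiating exp(x) is already accurate to near round-off at N = 16
D, x = cheb(16)
err = float(np.max(np.abs(D @ np.exp(x) - np.exp(x))))
```

At *N* = 16 the error is already many orders of magnitude below what any low-order finite difference stencil achieves on the same grid.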

There have been attempts to introduce highly accurate numerical solutions into the modeling of neurons (Toth & Crunelli, 1999; Baer, Crook, Dur-e-Ahmad, & Jackiewicz, 2007; Dur-e-Ahmad, Jackiewicz, Zubik-Kowal, & Crook, 2007; Funaro, 1992; Lindsay, Ogden, & Rosenberg, 1999). But no systematic comparative study of spectral and high-order finite difference methods for neurons with active dendritic trees has been done. In subsequent sections, we formulate a numerical solution for coupled active cable equations that lends itself naturally to such a comparative study. Spectral methods are highly accessible in the form of robust differentiation matrices (Trefethen, 2000) that do not require elaborate mathematical preparation, and our numerical solution is expressed in terms of differentiation matrices so that it can accommodate both spectral and finite difference differentiation. Collocation also allows the use of fast Fourier transforms, discussed and compared with our approach in section 4. Another major approximation method, finite elements, is excluded from this study since its strength comes into play only when two or more space dimensions are involved.

Our results indicate that the spectral method provides good numerical solutions for the cable equation, comparable in accuracy to those from the second-order finite difference method. However, the spectral method provides far greater accuracy in the case of a simulated system driven by inputs that are smoothly distributed in space. In a branched spiking neuron model, the spectral method has greater accuracy than the finite difference methods, due mainly to its ability to better approximate propagating action potentials.

## 2 Methods

The cable equation is a partial differential equation in time and one space dimension, *x*. It expresses, at a point, the relationship between the spatiotemporal rates of change of membrane potential, *V*, and the transmembrane currents. Taking *C* as membrane capacitance, *p* as the perimeter of the cable, *A* as the cross-sectional area, and *R* as the axial electrical resistance, all of which may be space dependent, one can write (Jack, Noble, & Tsien, 1975; Dayan & Abbott, 2003)

*pC* ∂*V*/∂*t* = ∂/∂*x*[(*A*/*R*) ∂*V*/∂*x*] − *p* *I*_{ion} + *I*_{inj}.  (2.1)

The ion current is the sum of the currents (by convention, positive when outward) resulting from different types of ions and is often represented by the ohmic relation *I*_{ion} = ∑_{i} *g*_{i}(*x*)(*V* − *E*_{i}), where *g*_{i} and *E*_{i} are the conductance and reversal potential, respectively, of ion type *i*. If voltage-gated ion currents are present, a nonlinearity is introduced into the equation. The injected current, *I*_{inj}, accounts for any other imposed currents, including those resulting from synaptic inputs. The diffusive term, the first term on the right, is associated with axial currents. The solutions of equation 2.1 describe how the potential varies in time and along the axis of neuronal sections.

The equation is augmented by boundary conditions of the general form *a V*(*x*_{o}, *t*) + *b* ∂*V*/∂*x*(*x*_{o}, *t*) = *c*, where *x*_{o} is the position where the condition holds. The coefficients *a*, *b*, and *c* may be time dependent. For example, a voltage clamp can be expressed as *b* = 0 and *a* ≠ 0, the sealed end of a neuronal section as *a* = *c* = 0 and *b* ≠ 0, and current injection into the end of a cable by *a* = 0, *b* ≠ 0, and *c* ≠ 0. One instance of equation 2.1 can be written for each nonbranching section of a neuron, and it requires at least two boundary conditions since the equation is second order in space. At a branching point or node where *m* sections meet, additional conditions are supplied by the fact that the total current entering the node must equal zero (one equation) and that the potentials of the sections meeting at the node are equal at the node (*m* − 1 equations). Hence, any tree structure always comes with a number of boundary conditions that equals twice the number of sections, as required.

When the cable equation is nonlinear, it cannot be solved analytically, and an approximate solution needs to be computed through a numerical method. There are a number of distinct approaches to high-order numerical solutions. We discretized the membrane potential and used differentiation matrices to calculate space derivatives. Differentiation matrices help maintain clarity in investigating alternative numerical methods since different methods can be implemented by simply replacing differentiation matrices, and they are efficient in many problems of the size typically encountered in neuronal modeling.

We compute *V* only at a set of grid points, *x*_{j}, not necessarily equidistant, distributed in the interval 0 ⩽ *x* ⩽ *L*, where *L* is the length of the cable. The values of the potential at the grid points are represented by the components, *V*_{j}(*t*) = *V*(*x*_{j}, *t*), of a vector **V**(*t*). The same notation is used for all functions of *x*. We approximate the space derivative and integral of the potential at the grid points by multiplying **V** by matrices, where the differentiation and integration matrices are denoted *D* and *F*, respectively.

We created finite difference differentiation matrices by Taylor expanding *V*(*x*, *t*) at every grid point on a uniform grid and solving for the derivative at a point in terms of neighboring points. This is similar to the Taylor table described in Lomax, Pulliam, and Zingg (2001). The truncation level of the expansion produces an estimate of the error in the approximation, referred to as the truncation error. For the spectral differentiation matrices, we use the approach and the software supplied by Trefethen (2000). These spectral matrices follow from fitting the unique polynomial passing through the values of *V*(*x*, *t*) at the entire set of grid points and using the derivative of the polynomial as the required approximation to the space derivative. The grid points in this case are defined at the Chebyshev points, *x*_{j} = cos(*j*π/*N*), where *N* is the grid size. The polynomial fitting method in fact provides a unifying framework for both spectral and finite difference methods since the finite difference matrices can also be obtained by fitting polynomials to *V*(*x*, *t*) where each polynomial is local, that is, confined to a set of neighboring points for every grid point (Fornberg, 1998a).
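
The local polynomial fitting view can be made concrete with a small sketch (our own illustration, not the code used in the study): solving the moment conditions of an interpolating polynomial recovers the familiar central difference weights:

```python
import numpy as np
from math import factorial

def fd_weights(x_nodes, x0, m):
    """Weights w such that sum(w * f(x_nodes)) approximates f^(m)(x0).

    Obtained by fitting the interpolating polynomial through the nodes and
    differentiating it m times, i.e., by solving the moment conditions
    sum_k w_k (x_k - x0)^p = p! * delta_{p,m} for p = 0..n-1.
    """
    n = len(x_nodes)
    A = np.vander(x_nodes - x0, n, increasing=True).T   # row p: (x_k - x0)^p
    b = np.zeros(n)
    b[m] = float(factorial(m))
    return np.linalg.solve(A, b)

h = 0.1
w = fd_weights(np.array([-h, 0.0, h]), 0.0, 1)  # first derivative, 3-point stencil
# expected: the familiar central difference weights [-1/(2h), 0, 1/(2h)]
```

Wider stencils or higher derivative orders drop out of the same routine, which is the essence of the unifying framework described above.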

Using the notation *G*_{jk} = *F*_{j+1,k} − *F*_{jk} for the operator for numerical integration between grid points *j* and *j* + 1, we obtain equation 2.5. This is the semidiscrete analog of the cable equation for one section of cable.

In a constraint involving the derivative, the approximation to ∂*V*/∂*x* contains the dot product of a row of *D* (the row corresponding to that grid point) with those elements of **V** that correspond to that section. Assume that there are *N*_{c} constraints. The entire set of constraints can be assembled together as the rows of a constraint matrix multiplying the vector of potentials, set equal to a vector **H** of known values (equation 2.6). Consider a section of the neuron corresponding to a set, *J*, of indices so that *V*_{j}, *j* ∈ *J*, represents that section within **V**. If the constraint *aV*_{i} + *b* ∑_{j} *D*_{ij}*V*_{j} = *c* is imposed for this section at grid point *i*, one row of the constraint matrix is used to express this by having the entry *a* at column *i* and entries *bD*_{ij} in columns *j* ∈ *J*, while **H** contains *c* in the corresponding row. The rest of that row of the constraint matrix contains zeros. Another row could express the fact that the potentials of two sections at a node where they meet are equal, that is, *V*_{k} − *V*_{l} = 0, as follows: the *k*th and *l*th elements have the entries 1 and −1, where *k* is the index of the last member of one section and *l* is the first member of the next section, and 0 is the entry in the corresponding row of **H**.

We use the *N*_{c} constraints to eliminate *N*_{c} variables from the numerical problem. We choose the elements of **V** where the constraints are expressed as the variables to be eliminated. Rearrange the elements of **V** so that they are partitioned into two groups, where the second group contains all the variables that are to be eliminated. The columns of the constraint matrix are rearranged in exactly the same way, so that equation 2.6 can be written in partitioned form (equation 2.7). In our case, the diagonal entries of the differentiation matrix are nonzero at the locations where the constraints are defined. We also note that in each constraint, *a* and *b* cannot simultaneously vanish since the constraint would be undefined otherwise. It follows that the block of the constraint matrix multiplying the eliminated variables is almost surely invertible, that is, aside from rare combinations of the values of *a*, *b*, and the entries of *D*. Consequently, one can solve for the boundary potentials in terms of the interior potentials (equation 2.8). We then substitute this relation into the semidiscrete equation for **V** and define the quantities *P*, *Q*, and **R** to finally obtain a dynamical system, equation 2.11, that is similar to equation 2.5 but is smaller and satisfies all the constraints.
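
The elimination step can be sketched generically as a block solve. The toy matrices and names below (`C`, `H`, `V1`, `V2`) are our own stand-ins for the constraint matrix, the known right-hand sides, and the kept/eliminated partitions of the potential vector:

```python
import numpy as np

# Generic sketch of constraint elimination (toy sizes and random data)
rng = np.random.default_rng(0)
N, Nc = 8, 2
C = rng.standard_normal((Nc, N + Nc))   # constraint rows, one per constraint
H = rng.standard_normal(Nc)             # known values (right-hand sides)
V1 = rng.standard_normal(N)             # kept (interior) potentials

# Partition the columns to match the partition of the potentials
C1, C2 = C[:, :N], C[:, N:]

# Solve C1 V1 + C2 V2 = H for the eliminated (boundary) potentials:
# V2 = C2^{-1} (H - C1 V1), the analog of equation 2.8
V2 = np.linalg.solve(C2, H - C1 @ V1)

# The reassembled vector satisfies all constraints to round-off
resid = float(np.max(np.abs(C @ np.concatenate([V1, V2]) - H)))
```

The square block `C2` is invertible for almost any data, mirroring the "almost surely" argument in the text.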

Note that *P* and *Q* are square and *P* is in general nonsingular, so that equation 2.11 could be simplified by multiplying through by the inverse of *P*. However, it is computationally advisable to retain this form, since *Q* depends in general on dynamically changing gating variables, and the multiplication by *P*^{−1} would need to be repeatedly performed during a simulation. By elementary linear algebra, all eigenvalues of *P*^{−1}*Q* must have nonpositive real parts in order for this system to relax to an equilibrium under steady input; we have found that numerically computed eigenvalues satisfy this condition. We tested this because a random programming error is likely to produce matrices that do not have this property. This illustrates our adherence to a helpful programming practice: verifying at intermediate stages of development that the software conforms to the implications of the model. The charge conservation property of equation 2.11, described in section 4, furnishes another condition used for this purpose.

According to equation 2.11, the membrane potential would relax, under a time-independent injected current, to a steady equilibrium given by **V**_{eq} = −*Q*^{−1}**R**. We have employed this fact as a tool for ensuring that equation 2.11 has been correctly set up by comparing **V**_{eq} with a known exact equilibrium distribution and verifying that they agree up to truncation error. (The exact equilibrium solution is obtained by letting *t* → ∞ in equation 3.3 below.)
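
Both sanity checks can be expressed in a few lines. The toy *P*, *Q*, and **R** below (a uniform passive cable discretized with second differences and clamped ends) are stand-ins of our own, not the matrices of the actual model:

```python
import numpy as np

# Toy stand-ins for P, Q, and R of equation 2.11 (our own example)
N, dx, g_l = 50, 0.02, 1.0
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx**2
P = np.eye(N)                    # capacitance matrix (C = 1 for the toy)
Q = D2 - g_l * np.eye(N)         # diffusion plus leak
R = np.ones(N)                   # steady injected-current term

# Check 1: eigenvalues of P^{-1} Q should have negative real parts
max_real = float(np.max(np.linalg.eigvals(np.linalg.solve(P, Q)).real))

# Check 2: the equilibrium V_eq = -Q^{-1} R should zero the right-hand side
V_eq = -np.linalg.solve(Q, R)
resid = float(np.max(np.abs(Q @ V_eq + R)))
```

A sign error or index bug in assembling *Q* typically shows up immediately as a positive eigenvalue, which is what makes this an effective intermediate-stage test.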

The gating variables at the grid points form a vector **M** whose behavior is governed by equation 2.12, where the Hodgkin-Huxley dynamics is represented by *f*(*V*) and *h*(*V*). Note that the index *i* ranges over the full set of grid points, including points where constraints have been imposed. Since the constraints are for *V* only, they cannot be used to eliminate gating variables from the system. But the dynamics of the gating variables at those points are available since the corresponding potentials can be found from equation 2.8.

The momentary state of the nonlinear dynamical system is fully described by the set of vectors {**V**, **M**}, which has total size *N* + *n*(*N* + *N*_{c}), where *n* is the number of the gating variables in the original model. Given its state, the vector field of the dynamical system can be computed from equations 2.11 and 2.12, augmented by the algebraic relation in equation 2.8. Time integration can then be carried out by any of the standard methods. In sections 3.1 and 3.2, we have used the solver ode15s in Matlab, chosen for its ability to efficiently handle stiff equations in moderate-sized systems. Since we are interested in the accuracy of space discretization, we set the time integrator's accuracy to the maximum level (approximately the round-off error). In section 3.3 we use the Crank-Nicolson method of time integration (Press, Teukolsky, Vetterling, & Flannery, 1992) due to its efficiency in large systems with moderate accuracy requirements. Preliminary verification of the consistency and stability of the numerical scheme was conducted by successively refining the grid and ensuring that the solutions appear to converge. The software is available on ModelDB (senselab.med.yale.edu/senselab/modeldb).

## 3 Results

We have used the following approaches to assess the numerical solution: comparison with exact solutions in passive cables, the convergence of solutions with successively finer grids (grid convergence) in an unbranched active cable, and grid convergence in simulations of a neuron in a realistic setting. These are described in subsequent sections.

### 3.1 Solution Accuracy for Linear Cable.

The solution error, *E*_{N} (equation 3.1), is defined in terms of a reference potential, *V*^{REF}(*x*, *t*), which may be obtained from an exact solution or a numerical solution on a highly refined grid, and the numerically computed membrane potential, *V*_{n}(*t*), where *N* is the total number of grid points, or the grid size.

For the passive cable, the ion current reduces to the leak current, *I*_{ion} = *g*_{l}(*V* − *E*_{l}). For simplicity, we also eliminate the space dependence of parameters, although this is not needed in order to be able to solve the equation exactly. We choose a form for the injected current density that is flexible enough so that it can be located anywhere, and its region of application can be adjusted continuously from a broadly distributed to a sharply concentrated one. We define the spatial dependence of the current to be peaked as a cosine function with a maximum value of *I*_{0} and nonzero only in a region of width *w* centered at *x* = *x*_{0} (equation 3.2). The region can extend at most to the nearest boundary; hence *w* ⩽ 2 min(*x*_{0}, *L* − *x*_{0}). The maximally broad distribution is one that is centered in the middle of the section, *x*_{0} = *L*/2, and whose range coincides with the length of the section, *w* = *L*. There are several advantages to using equation 3.2. The cosine distribution provides zero slope at its ends, therefore satisfying sealed-end boundary conditions, even when its peak is adjusted to be as wide as possible. This is quite unlike a gaussian peak, for example, whose tails extend to infinity. On the other hand, any other sealed-end distribution can be approximated as a sum of inputs of the type we use. Note that the use of the sealed-end condition is not a requirement of our method. Finally, equation 3.2 leads to a relatively simple solution since trigonometric functions are eigenfunctions of the time-independent cable equation. In the context of this study, an inhibitory input is obtained by reversing the sign of *I*_{inj}, and we have verified that the accuracy results described below are unchanged when the cable is driven by inhibitory inputs.
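
A current distribution with the properties listed above (peak *I*_{0} at *x*_{0}, support of width *w*, zero value and zero slope at the edges of its support) can be realized as a raised cosine. The sketch below is our reconstruction and may differ from the exact functional form of equation 3.2:

```python
import numpy as np

def I_inj(x, I0, x0, w):
    """Raised-cosine current density (our reconstruction): peak I0 at x0,
    support of width w, zero value and zero slope at the support's edges."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x - x0) <= w / 2.0
    out[inside] = 0.5 * I0 * (1.0 + np.cos(2.0 * np.pi * (x[inside] - x0) / w))
    return out

L, I0, x0, w = 400.0, 1.0, 200.0, 100.0
x = np.linspace(0.0, L, 4001)
profile = I_inj(x, I0, x0, w)

peak = float(I_inj(np.array([x0]), I0, x0, w)[0])             # = I0 at the center
edges = I_inj(np.array([x0 - w / 2, x0 + w / 2]), I0, x0, w)  # = 0 at the edges
total = float(np.sum(profile) * (x[1] - x[0]))                # total current = I0*w/2
```

Shrinking *w* at fixed total current raises the peak, which reproduces the broad-to-narrow family of inputs used in the simulations.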

The cable is initially at rest, *V*(*x*, 0) = *E*_{l}. The solution is then given by a series expansion, equation 3.3, whose coefficients *B*_{n}(*t*) are entirely in terms of elementary functions (see the appendix). As expected from equation 2.1, the membrane capacitance acts only as a timescale in equation 3.3. The terms of the sum decrease rapidly with increasing *n*, and we have found that the series converges to within *O*(10^{−10}) when 2000 terms are included. The potential calculated from equation 3.3 is used as the reference potential in equation 3.1.

Figure 1 shows the error (equation 3.1) at *t* = 20 ms for an *L* = 400 μm section of passive cable where constant input current is distributed as shown in the inset. In order to illustrate the full range of behaviors, two different current distributions are used: one that is maximally broad (thick curve), and one that is narrower than the closest grid spacing in any of the simulations (thin curve). Since both have the same total current, 0.65 nA, the narrow distribution has a far higher peak and appears as a vertical line in the inset. Corresponding to each current, there are four different numerical results shown in the main plot. In the case of the broad distribution (thick curves), the observed accuracy is consistent with what is expected from the order of the truncation errors in the approximation to the derivatives in the differential operator (exemplified by the two thick gray curves). Figure 1 shows exponential accuracy in the case of the broadly distributed input, where the spectral method reaches the level of the round-off error at around *N* = 16. The figure also shows that the fourth-order finite difference error decays approximately as *N*^{−4}.

Figure 1 shows an entirely different set of behaviors for the same numerical methods when the input current is narrowly distributed (thin curves). First, the solution errors were collectively displaced upward, although the accuracy still remained under a millivolt. Second, the accuracy did not markedly improve with increasing grid size, unlike in the case of the broadly distributed input. The high-order finite difference schemes had poorer accuracy than the other methods. In order to understand the surprisingly low performance of high-order finite difference methods, note that the leading term in the truncation error of a finite difference method is given by *O*(*N*^{−n} *d*^{n}*V*/*dx*^{n}), where *n* is the order of the method. This indicates that the faster decline in the error with increasing grid size may be more than offset by the magnitude of the derivative if the potential has abrupt variations in space. This phenomenon is discussed in detail with computational examples in section 3.3.1 of Fletcher (1988). The fluctuations in *E*_{N} with *N* were caused by the movement of grid points as *N* is increased and the consequent jump in the location of the injected current from one compartment to another. Such jumps discontinuously affected *E*_{N}, especially with small *N*, because the sharp change in *V* due to the injected current propagates into the entire solution.

We have repeated these simulations with NEURON. For a point input, the accuracy of NEURON (which uses FD2; we will hereafter use the abbreviation FD*n* for the *n*th-order finite difference approximation) approximately equaled that of our FD2 and spectral methods. For the broadly distributed input, NEURON's error decayed with *N* at the same rate as that of FD2, as expected, while being consistently lower by a factor of 3.87 on average. The reason for the latter is explained in section 4.

The dependence of accuracy on the smoothness of the solution appears more clearly in Figure 2, where we have fixed *N* = 30 as the grid size and plotted *E*_{N} as a function of *w*/*L*, the spatial scale of the input relative to the size of the cable section. When *w*/*L* is small, the spectral and finite difference methods (including NEURON) performed similarly, with an accuracy on the order of a few tenths of a millivolt, and all methods improved in accuracy as *w*/*L* was increased. However, as the input region became comparable in size to the cable (*w*/*L* approaching 1), *E*_{N} began to plunge, and the high-order methods, particularly the spectral method, performed at a substantially higher level.

### 3.2 Nonlinear Cable.

Performance with passive cables is no guarantee of similar results with active ones. Hence, we have repeated the previous simulation in an unbranched cable with voltage-gated sodium and potassium currents included, a longer section (*L* = 2000 μm) in order to more clearly observe the propagation of action potentials, and with constant input current 0.965 nA centered at *x*_{0}/*L* = 0.8 with a moderately narrow distribution, *w*/*L* = 0.2. Since no exact solution is available, we studied grid convergence.

Figure 3A shows that during the evolution of *V*(*x*, *t*), there is an initial spike followed by another about 13 ms later. The spikes first occur where the input was applied and rapidly propagate in both directions. The asymptotic steady firing rate was about 80 Hz, and in the simulation, the oscillations were just beginning to develop, starting from an initial condition *V*(*x*, 0) = *E*_{l}. To study convergence, we have used the solution error averaged over space and the discretized time points, *t*_{m}. In this simulation, it has not been possible to take a fine enough grid where a solution has converged to within round-off error and use it as a reference for calculating the averaged error. Since we used maximal accuracy in time integration, we could use grids only of size less than *O*(10^{3}) in order to avoid excessive computation times.

Fortunately, a reference solution that has not converged is also useful as long as its distance to the exact solution can be estimated. We used FD2 with *N* = 510 as the reference solution and estimated this distance as follows. We first noted that the averaged error (using FD2 with *N* = 510 as a reference) depended approximately linearly on log *N*, and by fitting a line to the available points (*N* = 30, 62, 126, 254), we estimated the distance of the reference solution itself from the exact solution. Hence we can expect that *O*(10^{−3}) is a lower bound for any calculation of the averaged error that uses FD2 with *N* = 510 as reference. Using this reference, we have calculated the averaged error for all of our numerical methods and plotted them in Figure 3B. The result shows that the spectral method's convergence on the exact solution is fastest, followed by those of the high-order finite difference methods and, finally, FD2 as the slowest. The decrease in the averaged error for all methods bottomed out as it became comparable to that of the reference solution. In the context of the semidiscrete equation, the consistency of our numerical scheme refers to the fact that equation 2.11 converges to equation 2.1 (together with its boundary conditions) in the limit of an infinitely refined grid. The behavior shown in Figure 3B provides evidence for the consistency of our numerical scheme.

In order to examine the local numerical errors we plot, in Figure 3C, *V*(*x*,*t*) during the second action potential at *x* = 200 μm for the interval 14 < *t* < 20. Immediately under the voltage trace, the difference between the voltage trace in the numerical solution with *N* = 30 and the reference solution is plotted for the spectral (solid curve) and FD2 (dashed) methods. The difference is most pronounced for FD2 during the rise of *V* in the beginning of the action potential. This difference could in principle arise from two distinct factors: the numerical error in the amplitude of the voltage and the timing of the action potential. The latter is associated with the fact that due to the steepness of the voltage trace, a small time translation of the action potential could result in a large transient difference between the numerical and reference solutions. In order to help examine these factors individually, we generated an error that is due only to the timing of the spike. We did so by defining a new curve as the difference between the reference voltage trace and a copy of it that has been retarded by 0.018 ms; this was plotted as the thin curve in Figure 3C. Comparison of this with the error in FD2 (dashed) indicates that the numerical error is largely, although not completely, explained by a shift in the timing of the action potential in the approximate solution. Thus, the spectral method has a clear advantage in problems where precise spike timing is important.

In further numerical experiments (not shown), the superior spike timing accuracy of the spectral method remained valid for a wide range of current distributions. However, the timing errors of FD2 and spectral methods became comparable and equaled about 0.3 ms as the distribution approached the shape of a point. The reason for the shifts in spike time was that during a simulation, the numerical and exact values of the potential at a given point on the cable occasionally reached the spiking threshold at different times. Due to the slow rise of the subthreshold potential before an action potential, small differences in potential could lead to relatively large differences in timing. The spike-generating nonlinearity therefore served as a mechanism for amplifying the numerical errors in potential and shifting the time of the action potential.
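
The amplification of small potential errors into timing shifts can be illustrated with a synthetic spike (our own toy waveform, not data from the simulations): the raw error between a trace and a slightly delayed copy is large on the steep flanks, yet a small time shift of the reference accounts for essentially all of it:

```python
import numpy as np

# Synthetic "action potential" and a slightly delayed copy, mimicking the
# timing analysis of Figure 3C; the gaussian waveform is our own toy.
t = np.linspace(14.0, 20.0, 6001)                  # ms
spike = lambda t0: -65.0 + 100.0 * np.exp(-((t - t0) / 0.5) ** 2)
V_ref = spike(16.0)
V_num = spike(16.018)                              # retarded by 0.018 ms

# The raw difference peaks on the steep flanks of the spike...
raw_err = float(np.max(np.abs(V_num - V_ref)))

# ...but shifting the reference in time explains nearly all of it
shifts = np.arange(0.0, 0.05, 0.001)
residual = [float(np.max(np.abs(spike(16.0 + s) - V_num))) for s in shifts]
best_shift = float(shifts[int(np.argmin(residual))])   # recovers ~0.018 ms
```

Even though the imposed delay is under two-hundredths of a millisecond, the raw error reaches several millivolts, which is exactly the amplification mechanism described above.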

### 3.3 Simulation of Spike Initiation.

Figure 4 shows spike initiation in a model of a rat layer 5 pyramidal neuron driven by a 20 ms pulse of 0.52 nA current delivered either at the soma or 415 μm away from the soma at an apical dendritic site. We have followed the geometry and electrical properties described in Mainen, Joerges, Huguenard, and Sejnowski (1995) with similar total axonal, apical, and basal dendritic lengths and diameters, and channel density distributions. This branched cable model has a highly excitable axon and weakly excitable soma and dendrites with 4 axonal sections (hillock, initial segment, myelin coated section, node of Ranvier), a soma, basal dendritic tree (30 sections), apical trunk (6 sections), and apical dendritic tree (15 sections). Total lengths of the basal and apical dendrites, respectively, are 5990 and 6590 μm, and the median diameter is 0.8 μm. The simulation in Figure 4 uses the spectral method at a total of 667 interior points and does not implement tapering. Action potentials were initiated at the axon hillock or soma for both stimulation sites and propagated into the apical dendrites, similar to physiological results (Stuart & Sakmann, 1994), including the site of spike initiation, attenuation with distance, and propagation with a speed of about 0.2 m/s. The point where the input current is applied appears as a slight bend at about 415 μm in the path of the propagating action potential shown in the contour plots of Figure 4.

We studied grid convergence in this system by calculating the error from the entire neuron's behavior during an 18 ms period when a constant point current was applied with the intensity and at the dendritic location described above. During the simulation, an action potential was generated at the soma and propagated into the apical dendrite. For time integration, we used the Crank-Nicolson method (Press et al., 1992) with a fixed time step Δ*t* = 0.1 ms. Figure 5 shows the dependence of the error on the number of grid points used. The reference solution was computed with FD2 on a grid of size *N* = 2551. The figure indicates that the spectral method had the fastest convergence, followed, for about *N* > 400, by FD6, FD4, and FD2 in order of decreasing speed of convergence. It also shows that the error approached a baseline with increasing *N*. Unlike the results of the previous section, this baseline appeared before the scaling of the error with *N* became observable. (For this reason, the figure is a linear plot.) It is therefore plausible that the baseline was imposed by the size of Δ*t*. This is supported by the fact that when all the simulations in Figure 5 were repeated with Δ*t* = 0.01 ms, the error, averaged over all four methods, tended to decrease (e.g., from 0.294 to 0.290 for *N* = 932). Most important, when we repeated the simulations in Figure 5 with an input that was maximally broad in the section where it was applied (total length, *L* = 50 μm), the effects on the results shown were negligible.
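As a minimal illustration of the time integrator, the sketch below implements a generic Crank-Nicolson (trapezoidal) step for a semidiscrete linear system d**V**/dt = *A***V** + **s**. The operator `A`, source `s`, and the scalar decay test problem are illustrative assumptions, not the paper's actual cable operator.

```python
import numpy as np

def crank_nicolson(A, s, V0, dt, steps):
    """Advance dV/dt = A V + s by the Crank-Nicolson (trapezoidal) rule."""
    n = len(V0)
    I = np.eye(n)
    lhs = I - 0.5 * dt * A          # implicit half of the operator
    rhs = I + 0.5 * dt * A          # explicit half of the operator
    V = np.array(V0, dtype=float)
    for _ in range(steps):
        V = np.linalg.solve(lhs, rhs @ V + dt * s)
    return V

# Accuracy check on dV/dt = -V, whose exact solution at t = 1 is e^{-1}
V = crank_nicolson(np.array([[-1.0]]), np.zeros(1), np.ones(1), 0.01, 100)
```

The trapezoidal rule is unconditionally stable and second-order accurate in Δ*t*, which is why refining the time step (as in the Δ*t* = 0.01 ms runs) lowers the error baseline without any stability restriction on the grid.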

Figure 5 indicates that the accuracy can be quite poor for coarse grids, especially for FD2. This was due primarily to the shifts in the time of the action potential in the coarse grid solutions relative to that in the reference solution. The time shifts were greater farther away from the soma. Figure 6 displays a typical set of results in the coarse grid (gray curves) and the reference grid (black), for increasing distances away from the soma and for each discretization method. Near the tip of the apical dendrite, the FD2 solution appears to contain a time translation of the spike, while the high-order finite difference solutions have numerical oscillations in the potential. The oscillations, caused by the inability of the time integrator to handle sharp temporal changes in the potential, were observed to decay since our time integration method was unconditionally stable. The magnitude of the deviations exemplified by Figure 6 was consistent with the accuracies displayed in Figure 5.

## 4 Discussion

Our results for the unbranched passive cable indicated that the advantages of the spectral method were strongest when the neuron model was driven in a spatially distributed fashion. We found in section 3.1 that the accuracy advantage of the high-order methods diminished as the spatial distribution of the injected current became narrower. However, the results in section 3.3 indicate that the high-order methods retain a considerable advantage in accuracy when they are used to simulate a biophysically realistic neuron, even when it is driven by a point input. The results show that such methods have substantially lower errors than FD2 for a range of small grid sizes. This is due to the fact that the effects of a point input remain confined to the section where it is applied and affect neighboring sections only through shared boundary conditions. The sections without such input therefore have smooth solutions suitable for treatment by high-order methods. We found, in particular, that the spike timing accuracy of the high-order methods plays a significant role in their overall accuracy.

It is important for a numerical scheme to preserve the basic conservation properties of the original model. We will examine how charge conservation is handled in our scheme (see equations 2.4 to 2.12). If equation 2.1 is integrated over the entire neuron, one obtains the statement that the space-averaged rate of change of the potential is proportional to the total amount of current going in, which follows from charge conservation. This is due to the fact that the diffusion terms in equation 2.1 are related only to axial currents, do not contribute to the total input, and cancel out of the integral. There is an analog of this in the discrete system. Integration over the neuron corresponds to taking the sum of the ordinary differential equations represented by equation 2.11. In this way, all of the diffusion-related terms cancel out. This is evident in equation 2.4 where, when summed over the index *j*, the first two terms on the right-hand side cancel each other out. The remaining terms are proportional to the discrete analog of the space-averaged rate of change of the potential, on the left-hand side, and the total transmembrane current, on the right. The dynamical system 2.11 therefore has the charge conservation property. This is a consequence of the fact that we discretized the integrated version of the cable equation, namely, equation 2.3, instead of the cable equation itself. Discretizing the cable equation directly does not in general preserve charge conservation. Probability conservation is another property of some partial differential equations arising in neuroscience where this numerical scheme proves useful (Omurtag, Knight, & Sirovich, 2000).
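The cancellation of the diffusion terms can be checked directly in a small sketch. The operator below is a generic flux-form second-difference matrix with sealed (zero-flux) ends, not the paper's equations 2.4 to 2.12: every axial current leaves one grid point and enters its neighbor, so the column sums of the operator vanish and summing the semidiscrete equations eliminates all diffusion-related terms.

```python
import numpy as np

# Hypothetical flux-form diffusion operator on N grid points with sealed ends.
N = 12
flux = np.zeros((N + 1, N))            # axial flux between neighboring points
for i in range(1, N):                  # interior flux_i = V[i] - V[i-1];
    flux[i, i] = 1.0                   # boundary rows stay zero (sealed ends)
    flux[i, i - 1] = -1.0
div = np.zeros((N, N + 1))             # net flux into each grid point
for j in range(N):
    div[j, j + 1] = 1.0
    div[j, j] = -1.0
D = div @ flux                         # diffusion operator in conservative form

# Summing the semidiscrete equations corresponds to 1^T (D V): the axial
# terms telescope and cancel, leaving only the transmembrane currents.
V = np.sin(np.arange(N, dtype=float))
assert abs(np.ones(N) @ (D @ V)) < 1e-12
```

The same telescoping argument applies to any discretization of the integrated equation 2.3: conservation is built into the flux form rather than checked after the fact.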

Higher accuracy implies higher speed, ceteris paribus, since it allows a wider grid spacing (fewer grid points) to be used. But the speed of the spectral method merits a separate discussion. In computing a vector of space derivatives *d***V**/*dx* from a vector of potentials **V** of size *N*, the spectral method has the disadvantage, relative to finite differences, of having a full differentiation matrix, which results in an operation count *O*(*N*^{2}). Spectral methods, fortunately, possess fast transforms (FFT) that can significantly improve on this, and the Chebyshev collocation method, which underlies the differentiation matrix used in this study, is no exception. To assess the potential effect of the FFT on our problem, consider the spectral differentiation matrix as a “telescoped” version of three distinct operations. The first is a discrete Chebyshev transform, **c** = *T*^{−1}**V**, to obtain the vector of spectral coefficients corresponding to the potential; the second is a recursive calculation, in *O*(*N*) steps, of the Chebyshev coefficients of the derivative vector, which can also be written as a matrix operation, **c**′ = *A***c** (Canuto et al., 2007); the third is a discrete inverse Chebyshev transform to obtain the derivatives of the potential, *d***V**/*dx* = *T***c**′. Putting these together expresses spectral differentiation as the matrix multiplication *d***V**/*dx* = *TAT*^{−1}**V**. When carried out sequentially, the operation becomes *O*(*N*(2 log *N* + 1)), including the recursion, since each FFT can be carried out in *O*(*N* log *N*) steps. For a large simulation with *O*(10^{3}) grid points, the sequential operations represent an improvement in speed of about two orders of magnitude.
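The three operations above can be sketched as follows. This is a standard Chebyshev-Gauss-Lobatto construction; the function names and the FFT-based forward transform are illustrative assumptions, not the paper's code, and the inverse transform is written as a dense matrix for clarity although it too admits an FFT.

```python
import numpy as np

def cheb_points(N):
    # Chebyshev-Gauss-Lobatto grid x_j = cos(pi j / N) on [-1, 1]
    return np.cos(np.pi * np.arange(N + 1) / N)

def cheb_diff(V):
    """Differentiate values V on the Chebyshev grid via transform,
    coefficient recursion, and inverse transform (the three steps above)."""
    N = len(V) - 1
    # Step 1: discrete Chebyshev transform via an even extension and the FFT
    c = np.fft.rfft(np.concatenate([V, V[-2:0:-1]])).real / N
    c[0] /= 2.0
    c[N] /= 2.0
    # Step 2: O(N) recursion for the derivative coefficients,
    # c'_{k-1} = c'_{k+1} + 2 k c_k, with c'_N = c'_{N+1} = 0
    d = np.zeros(N + 2)
    for k in range(N, 0, -1):
        d[k - 1] = d[k + 1] + 2.0 * k * c[k]
    d[0] /= 2.0
    # Step 3: inverse transform; T_k(x_j) = cos(pi k j / N) on this grid
    j = np.arange(N + 1)
    T = np.cos(np.pi * np.outer(j, j) / N)
    return T @ d[: N + 1]
```

For a smooth function sampled at even modest *N*, the result agrees with the exact derivative to near machine precision, consistent with the spectral accuracy discussed here.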

Nevertheless, for sufficiently small problems (*N* < 100; see Boyd, 2001, for a discussion and references), the explicit matrix interpretation remains competitive. The explicit matrix approach has other advantages as well. It enables one to formulate a semidiscrete system as in equation 2.11 and allows the eigenvalues and eigenvectors of the right-hand side to be calculated. This has numerous potential uses, including obtaining an analytical solution via diagonalization to solve the steady linear problem (Mascagni & Sherman, 1989; Pearlmutter & Zador, 1999), formulating a low-dimensional model that uses the eigenfunctions as a moving basis (Knight, Omurtag, & Sirovich, 2000), studying the stability of time-marching schemes (Mofid & Peyret, 1993), and investigating the properties of the differential operator of the exact model (Gottlieb & Lustman, 1983; Reddy & Trefethen, 1994).

The truncation error for a finite difference method on a grid of size *N* scales as *N*^{−n}, where *n*, the order of the method, is the number of neighboring grid points used for each point or the degree of the polynomial fit. The error in spectral differentiation of a sufficiently smooth function, on the other hand, scales in grid size as *c*^{N} for some constant *c* < 1. The overall solution error depends on the choice of numerical scheme and the smoothness of the approximated function (Boyd, 2001; Fornberg, 1998b; Trefethen, 2000). The results exemplified in Figure 1 are consistent with these considerations.
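The algebraic scaling can be checked empirically. A minimal sketch (the grid, test function, and grid sizes are arbitrary choices) estimates the observed order of centered second-order differencing by comparing errors on two grids:

```python
import numpy as np

def fd2_max_error(N):
    """Max error of the centered second-order difference for a smooth function."""
    x = np.linspace(0.0, 1.0, N)
    h = x[1] - x[0]
    f = np.sin(2.0 * np.pi * x)
    df = (f[2:] - f[:-2]) / (2.0 * h)                # FD2 at interior points
    exact = 2.0 * np.pi * np.cos(2.0 * np.pi * x[1:-1])
    return np.max(np.abs(df - exact))

# Doubling N should divide the error by about 2^2 for a second-order scheme
order = np.log2(fd2_max_error(100) / fd2_max_error(200))
```

The measured `order` comes out close to 2, the *N*^{−2} scaling expected for FD2; repeating the experiment with a spectral differencing routine instead shows the error dropping far faster than any fixed power of *N*.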

A salient feature of our results in the unbranched passive cable is that the numerical error of NEURON is lower than that of our FD2 for broadly distributed inputs. This provides a useful illustration of the properties of staggered grids. (We have not used staggered grids since their expected advantage did not justify the additional complexity in the spectral application.) In NEURON's staggered grid, each potential is defined at the midpoint of a compartment at whose end points the currents are located. The resulting effective grid spacing of NEURON is half that of our FD2. Since both NEURON and FD2 are second order, this implies that NEURON's error would be reduced by a factor of four. We have observed, on average, a factor of 3.87 as described in section 3.1.

We have demonstrated that the accuracy of spectral methods is at worst equivalent and at best far superior to that of finite differencing in simulations involving the nonlinear neuronal cable equation. This conclusion naturally extends to other systems described by distributed quantities governed by nonlinear advection-diffusion-reaction equations, such as the dynamics of intracellular calcium (Koch, 1999; Nagaiah, Rüdiger, Warnecke, & Falcke, 2008) or intramembranous receptor protein diffusion in the dendritic tree (Bressloff, 2009). We have compared the results obtained by the spectral method with those from finite differences. Our results show that the performance of high-order finite differences may lag behind that of second order, and the accuracy level they ultimately reach on a very refined grid may be far greater than is necessary. The spectral method, by contrast, consistently outperforms finite differences unless the simulation involves an unbranched passive cable with sharp features in the distributed quantities. It is, in addition, likely to be particularly efficient in nonlinear problems where precise spike timing is important.

## Appendix

When *U* is expanded in terms of an orthogonal set of functions that satisfies the boundary conditions term by term, an ordinary differential equation is obtained for each time-dependent coefficient *b*_{n}(*t*), which is straightforward to solve (equation A.2), where *D*_{n} = *g*_{l} + *A*(*n*π/*L*)^{2}/(*pR*), *c*_{n} = 1/*L* if *n* = 0, and *c*_{n} = 2/*L* otherwise. The initial value of the coefficient is determined from the initial potential distribution as *b*_{n}(0) = *c*_{n}∫_{0}^{L} *dx* [*V*(*x*, 0) − *E*_{l}]cos(*n*π*x*/*L*). The resulting expression for *b*_{n} is inserted into equation A.2 to obtain the general solution for an arbitrarily distributed input current. For a cable initially at rest, *V*(*x*, 0) = *E*_{l}, all *b*_{n}(0) vanish. Finally, the input current in equation 3.2 is inserted into the general solution, and algebraic manipulation leads to the solution in equation 3.3, where the coefficients *B*_{n} are given by an explicit expression. If the width of the input happens to satisfy *w* = 2*L*/*n*, the denominator of *B*_{n} in this expression goes to zero, but it can be shown that the second term in *B*_{n} then converges to cos(*n*π*x*_{0}/*L*)/(*nD*_{n}) as *w* → 2*L*/*n*. If the section length were an integer multiple of the input width, that is, *L* = *kw*, this would occur when *n* = 2*k*. For example, in the maximally broad distribution, where *w* = *L*, we find that *B*_{2} = −(1 − exp(−*D*_{2}*t*/*C*))/(2*D*_{2}), while all other coefficients *B*_{n≠2} = 0, so that the equilibrium distribution of the potential, *V*(*x*, *t* → ∞), replicates the distribution of the input in a vertically translated and rescaled form.