Abstract

Estimation of current source density (CSD) from the low-frequency part of extracellular electric potential recordings is an unstable linear inverse problem. To make the estimation possible in an experimental setting where recordings are contaminated with noise, it is necessary to stabilize the inversion. Here we present a unified framework for zero- and higher-order singular-value-decomposition (SVD)-based spectral regularization of 1D (linear) CSD estimation from local field potentials. The framework is based on two general approaches commonly employed for solving inverse problems: quadrature and basis function expansion. We first show that both inverse CSD (iCSD) and kernel CSD (kCSD) fall into the category of basis function expansion methods. We then use these general categories to introduce two new estimation methods, quadrature CSD (qCSD), based on discretizing the CSD integral equation with a chosen quadrature rule, and representer CSD (rCSD), an even-determined basis function expansion method that uses the problem’s data kernels (representers) as basis functions. To determine the best candidate methods to use in the analysis of experimental data, we compared the different methods on simulations under three regularization schemes (Tikhonov, tSVD, and dSVD), three regularization parameter selection methods (NCP, L-curve, and GCV), and seven different a priori spatial smoothness constraints on the CSD distribution. This resulted in a comparison of 531 estimation schemes. We evaluated the estimation schemes according to their source reconstruction accuracy by testing them using different simulated noise levels, lateral source diameters, and CSD depth profiles. We found that ranking schemes according to the average error over all tested conditions results in a reproducible ranking, where the top schemes are found to perform well in the majority of tested conditions. However, there is no single best estimation scheme that outperforms all others under all tested conditions. The unified framework we propose expands the set of available estimation methods, provides increased flexibility for 1D CSD estimation in noisy experimental conditions, and allows for a meaningful comparison between estimation schemes.

1  Introduction

Understanding brain function entails disentangling the activity of different neuronal populations. Owing to advances in microelectrode fabrication technology, neuroscientists are now in a position to simultaneously record extracellular potentials from multiple spatial locations in close proximity. In particular, a growing number of studies are using linear electrode arrays to disentangle laminar processing in the cortex. In order to extract useful information about laminar processing from such recordings, it is imperative to comprehend how extracellular potentials are related to the underlying neural activity.

With extracellular recordings, it is not possible to measure neuronal activity directly. Instead we measure electrical potential differences that result from the flow of currents. In particular, the low-frequency part of extracellular potentials, called the local field potential (LFP), is thought to be related to currents generated during synaptic activity (Nunez & Srinivasan, 2006; Einevoll, Kayser, Logothetis, & Panzeri, 2013). Direct interpretation of LFP is problematic given that electric potentials are a nonlocal measure of the neural activity due to volume conduction. Thus, estimating current source density (CSD), which represents the volume density of net transmembrane currents generating the LFP, has become common practice in extracellular neurophysiology (Freeman & Nicholson, 1975; Mitzdorf, 1985; Schroeder, Mehta, & Givre, 1998; Bedard & Destexhe, 2011; Tenke & Kayser, 2012).

In a homogeneous, isotropic, and purely resistive medium with electrical conductivity $\sigma$, the current source density $C$ is related to the extracellular potential $V$ via the Poisson equation, $\sigma \nabla^2 V = -C$, where $\nabla^2$ is the Laplace operator (Nicholson & Freeman, 1975; Tenke, Schroeder, Arezzo, & Vaughan, 1993; Brette & Destexhe, 2012). Originally, CSD was estimated by approximating the Laplacian by a discrete second derivative (Freeman & Nicholson, 1975). Moreover, in the analysis of linear (laminar) electrodes inserted perpendicular to cortical layers, it has been commonplace to simply ignore the lateral (i.e., x and y) derivatives in the Laplacian (Mitzdorf, 1985; Schroeder et al., 1998). This is equivalent to assuming that potentials have minimal curvature in the lateral direction, a situation that can be achieved only with laterally extended CSD profiles. In recent years, a novel CSD method, iCSD, has been developed to circumvent the difficulties associated with spatially localized sources (Pettersen, Devor, Ulbert, Dale, & Einevoll, 2006; Leski et al., 2007; Leski et al., 2011). Instead of simply computing the derivative, iCSD assumes a particular parametric form for the CSD and uses a forward-inverse scheme for estimating CSD amplitude. This method has the advantage of allowing the estimation of CSD at the outermost electrode contacts, something that in the standard method can be done only under strong assumptions (Vaknin, DiScenna, & Teyler, 1988; Pettersen et al., 2006). Additionally, it also facilitates the incorporation of slightly more complex medium assumptions like the parameterization of the medium as two semi-infinite media with different electrical conductivity. Recently, a generalized version of iCSD, called kCSD (short for kernel CSD), has been introduced (Potworowski, Jakuczun, Leski, & Wojcik, 2012). Using results from machine learning, kCSD transforms CSD estimation into an underdetermined problem where the uniqueness of the solution is enforced using a minimum-norm requirement. This approach allows greater flexibility in changing the electrode arrangement. This is of particular interest when working on nonregular grids, which occurs when the recording electrode has nonfunctional contacts.

In this letter, we propose a unification of all 1D CSD estimation methods in a single framework by adding elements of linear inverse theory. In particular, we focus on comparing regularization methods based on spectral filtering to deal with recording noise. It is a well-known fact from linear inverse theory that inverse problems with smoothing kernels are ill posed, resulting in a high sensitivity to noise. We start by presenting two general approaches that are common in solving inverse problems: quadrature and basis function expansion. This will allow us to show that both iCSD and kCSD fall into the category of basis function expansion methods. We then use these general categories to introduce two new methods for CSD estimation, which we call quadrature CSD (qCSD) and representer CSD (rCSD). As its name implies, quadrature CSD is based on discretizing the CSD integral equation using a chosen quadrature rule, while rCSD is an even-determined basis function expansion method that uses the problem’s data kernels (also called representers) as basis functions. We will see that our approach is in line with the approach presented in Riera, Goto, and Kawashima (2014), which focuses on 3D CSD estimation using a realistic volume conductor model for rat barrel cortex. In a second step, we compare the different methods under various regularization schemes (Tikhonov, tSVD, and dSVD), different regularization parameter selection methods (NCP, L-curve, and GCV), and different a priori spatial smoothness constraints on the CSD distribution. We compare these different estimation schemes using simulations and rank them according to their source reconstruction accuracy under a range of source diameters and noise conditions. We found that no single estimation scheme outperforms all others under all tested conditions. Nevertheless, we found that ranking schemes according to the average error over all tested conditions results in a reproducible and meaningful ranking where the top schemes are found to perform well in the majority of tested conditions.

2  Theory

In this section, we describe the different aspects of 1D CSD estimation from the standpoint of linear inverse theory. We discuss the theoretical aspects in more detail than is common, in the hope of making the material accessible to newcomers to the field. Figure 1 shows a schematic of the steps leading to the final formulation and proposed solution of the problem.

Figure 1:

Flowchart describing the spectral regularization framework.


2.1  Forward and Inverse Problem

Before attempting to solve the inverse problem of CSD estimation, it is necessary to have a clear understanding of the associated forward problem. This is crucial because current CSD estimation methods do not allow for the verification of the forward model assumptions from within the model (Bedard & Destexhe, 2011). In the case of current CSD methods, the forward problem is described by the Poisson equation for volume conductors, which is obtained by combining the quasi-static approximation with Ohm’s law (Plonsey & Barr, 2007). Moreover, in line with iCSD and kCSD, we describe the recording medium as being composed of two semi-infinite media with different but constant electrical conductivity ($\sigma_1 \neq \sigma_2$), as graphically depicted in Figure 2A. The plane $z = 0$ represents the brain surface and separates the extracellular space (assumed unbounded from below) from the medium above the brain. The upper medium is added to model the effect of substances such as artificial cerebrospinal fluid (aCSF) or mineral oil on potential generation. It is common in invasive in vivo electrophysiological experiments to add liquids onto the skull opening to avoid drying of the brain surface.

Figure 2:

Simulation of extracellular potential. (A) Schematic of the medium parameterization and cylindrical lateral source assumption. A 32-channel electrode is located in the center of the cylinder. (B) Effect of lateral source parameterization choice on simulated potential. The value of extracellular potential (dots in last panel) is obtained by integrating the CSD depth profile (first panel) with the corresponding representer (middle panel). The depth profile of the representer is shown for an electrode located at a depth of 0.55 mm. It shows how the potential at this electrode is affected by sources at different depths. Representers for a uniform cylinder (UniCyl) and gaussian lateral source parameterization with a diameter of 0.5 mm or 1 mm are shown. To obtain the simulated potential (last panel) as a function of depth, the integration is performed using a representer proper to each electrode.


Taken together, the electrostatic assumption and the semi-infinite medium parameterization with constant conductivity form the basic set of assumptions defining the forward problem for CSD estimation. Under these circumstances, the potential generated at any point in space by a CSD distribution $C(\mathbf{r})$ can be found using the method of images (Jackson, 1999),

$$V(\mathbf{r}) = \frac{1}{4 \pi \sigma_1} \int \left[ \frac{C(\mathbf{r}')}{\lVert \mathbf{r} - \mathbf{r}' \rVert} + \frac{\sigma_1 - \sigma_2}{\sigma_1 + \sigma_2} \, \frac{C(\mathbf{r}')}{\lVert \mathbf{r} - \bar{\mathbf{r}}' \rVert} \right] d^3 r' \tag{2.1}$$

with $\mathbf{r}' = (x', y', z')$ and $\bar{\mathbf{r}}' = (x', y', -z')$ its mirror image across the interface, while $\sigma_1$ and $\sigma_2$ are the electrical conductivity for $z > 0$ and $z < 0$, respectively. The limits of integration are chosen to cover all current sources. Here, we slightly expand on the approach used in Pettersen et al. (2006) because we make it possible to use electrodes located in both media.

The above formulation defines a classic linear inverse problem for $C(\mathbf{r})$, and the particular problem we are interested in solving is the following: What information about $C$ can be recovered from 1D linear (laminar) depth electrode recordings $V(z_i)$?

2.2  Source Parameterization

To solve this inverse problem for laminar recordings, the three-dimensional problem needs to be reduced to one dimension. This is required because in laminar recordings, all electrode contacts are located on a line approximately orthogonal to the local cortical surface. Therefore, the contacts provide no information about the in-plane potential variation. The dimensionality reduction is achieved by assuming a fixed lateral profile for the function $C$:

$$C(x, y, z) = f(z)\, L(x, y) \tag{2.2}$$

It is customary to choose L(x,y) to be either a cylinder (see Pettersen et al., 2006) or a gaussian function:

$$L(x, y) = \begin{cases} 1, & x^2 + y^2 \le R^2 \\ 0, & \text{otherwise} \end{cases} \qquad \text{or} \qquad L(x, y) = e^{-(x^2 + y^2)/R^2} \tag{2.3}$$

where $R$ is the lateral source radius.
It is important to note that these parameterizations impose very strong assumptions on the underlying function $C$. Their choice stems from a plausibility argument relying on the columnar organization of sensory cortices as the main justification. However, to our knowledge, it has never been experimentally verified whether and under which circumstances these assumptions are warranted.
Inserting equation 2.2 into equation 2.1 and integrating over x and y provides us with the desired 1D linear inverse problem,

$$V(z) = \int K(z, z')\, f(z')\, dz' \tag{2.4}$$

where $K(z, z')$ is called the kernel of the linear operator. Some authors also refer to the functions $K_i(z') = K(z_i, z')$ as representers or data kernels (Parker, 1994; Aster, Thurber, & Borchers, 2005). If we assume that the electrode is located on the central axis of the cylinder or gaussian (i.e., at $x = y = 0$), then the kernel can be found in closed form. For the gaussian lateral profile,

$$K(z, z') = \frac{\sqrt{\pi}\, R}{4 \sigma_1} \left[ e^{h^2 / R^2} \operatorname{erfc}\!\left( \frac{|h|}{R} \right) + \frac{\sigma_1 - \sigma_2}{\sigma_1 + \sigma_2}\, e^{\bar{h}^2 / R^2} \operatorname{erfc}\!\left( \frac{|\bar{h}|}{R} \right) \right] \tag{2.5}$$

where $h = z - z'$, $\bar{h} = z + z'$, and erfc is the complementary error function.
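For concreteness, the following Matlab sketch evaluates a kernel of this form on a grid of electrode and source depths. It is an illustration only: the function name and argument conventions are ours, and the expression assumes the scaled-erfc form of equation 2.5 as reconstructed above. Matlab’s erfcx, with erfcx(t) = exp(t^2)*erfc(t), evaluates the two factors jointly and thereby avoids overflow at large |h|/R.

    % Sketch: 1D forward kernel for a gaussian lateral source profile.
    % ze: column vector of electrode depths; zs: row vector of source depths;
    % R: lateral source radius; sig1, sig2: conductivities for z > 0 and z < 0.
    function K = csd_kernel(ze, zs, R, sig1, sig2)
      h  = ze - zs;                    % direct term z - z' (implicit expansion)
      hb = ze + zs;                    % image term z + z'
      g  = @(x) erfcx(abs(x) / R);     % erfcx(t) = exp(t^2) * erfc(t)
      K  = (sqrt(pi) * R / (4 * sig1)) * ...
           (g(h) + (sig1 - sig2) / (sig1 + sig2) * g(hb));
    end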

More generally, equation 2.4 is an example of a well-studied class of problems known as Fredholm integral equations of the first kind. Problems of this form are ubiquitous in science, and therefore a large body of literature is devoted to their solution. Unfortunately, inversion of linear Fredholm integral equations of the first kind is often plagued with issues of uniqueness or stability, or both (Wing & Zahrt, 1991; Aster et al., 2005; Hansen, 2010).

2.3  Discretizing the Inverse Problem

To make solving equation 2.4 amenable to numerical treatment, the problem needs to be discretized. Since in an experimental setting, $V$ is known at only a discrete number of recording positions, we will first discretize the left side of equation 2.4 by projecting $V$ onto a set of delta functions centered at the recording points, $\delta(z - z_i)$ (Wing & Zahrt, 1991). This results in

$$V(z_i) = \int_a^b K(z_i, z')\, f(z')\, dz', \qquad i = 1, \ldots, M \tag{2.6}$$

where $z_i$ is the position of recording electrode $i$.

Next, we add the assumption that the function $f$ is an $L^2$ function on a finite interval $[a, b]$. It is necessary to make the interval finite because neither the cylindrical nor the gaussian 1D CSD kernels are $L^2$ operators on an infinite interval.

Here, we investigate two approaches to discretize the right-hand side of equation 2.6. In line with tradition, we will call these approaches qCSD for quadrature CSD and rCSD for representer CSD. We also add a variant of kCSD that omits the kernel trick (by “kernel trick,” we refer to a method in machine learning that allows operating in a high-dimensional, implicit feature space by computing the inner products between the images of all pairs of data in the feature space).

2.3.1  qCSD

First, we investigate how the problem is solved with simple quadrature methods, replacing the integral in equation 2.6 by a sum. Using an N-point quadrature formula with associated weights $w_j$, equation 2.6 is transformed into (Wing & Zahrt, 1991)

$$V(z_i) = \sum_{j=1}^{N} w_j\, K(z_i, z_j)\, f(z_j), \qquad i = 1, \ldots, M \tag{2.7}$$

where the sampling points $z_j$ are given by

$$z_j = a + (j - 1)\, \frac{b - a}{N - 1}, \qquad j = 1, \ldots, N \tag{2.8}$$

For M recording electrodes, we see that the original integral equation has been transformed into a matrix equation,

$$\mathbf{V} = \mathbf{G}\, \mathbf{f}, \qquad G_{ij} = w_j\, K(z_i, z_j), \quad f_j = f(z_j) \tag{2.9}$$
If the number of quadrature points N is larger than the number of electrodes M, the problem is underdetermined, meaning that there is an infinite number of source vectors $\mathbf{f}$ that can satisfy the measurements $\mathbf{V}$.
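As an illustration of this construction, a minimal Matlab sketch using composite Simpson weights might look as follows (all names, the integration interval, and the electrode layout are our own illustrative choices; N must be odd for Simpson’s rule, and csd_kernel is the kernel sketch from section 2.2):

    % Sketch: build the qCSD system V = G*f with composite Simpson quadrature.
    M  = 32;                           % number of electrodes
    N  = 101;                          % number of quadrature points (odd)
    a  = -0.5e-3;  b = 3.1e-3;         % integration interval [a, b] in meters
    ze = (0:M-1)' * 100e-6 - 0.35e-3;  % electrode depths, 100 um spacing
    zq = linspace(a, b, N);            % equidistant sampling points (eq. 2.8)
    w  = ones(1, N);                   % composite Simpson weights [1 4 2 ... 4 1]
    w(2:2:N-1) = 4;
    w(3:2:N-2) = 2;
    w  = w * (b - a) / (3 * (N - 1));
    G  = csd_kernel(ze, zq, 0.25e-3, 0.3, 1.7) .* w;   % G_ij = w_j K(z_i, z_j)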

2.3.2  rCSD

A second option for discretizing the right-hand side of equation 2.6 is to expand $f$ in a finite set of linearly independent functions $\phi_j$:

$$f(z) \approx \sum_{j=1}^{N} \alpha_j\, \phi_j(z) \tag{2.10}$$

Inserting this expansion into equation 2.6 results in

$$V(z_i) = \sum_{j=1}^{N} \alpha_j \int_a^b K(z_i, z')\, \phi_j(z')\, dz', \qquad i = 1, \ldots, M \tag{2.11}$$

Again, we see that for M recording electrodes, the original integral equation has been transformed into a matrix equation:

$$\mathbf{V} = \mathbf{G}\, \boldsymbol{\alpha}, \qquad G_{ij} = \int_a^b K(z_i, z')\, \phi_j(z')\, dz' \tag{2.12}$$
This approach is also referred to as the projection or Galerkin method (Wing & Zahrt, 1991).

This is similar to the approach taken by both iCSD and kCSD. In iCSD, the basis functions are either delta, step, or spline functions, and N is chosen to be the same as M (i.e., the same number of basis functions as recording electrodes) (Pettersen et al., 2006). In contrast, kCSD chooses either gaussian or step functions as basis functions. Moreover, kCSD allows the system to be underdetermined by making $N > M$ (Potworowski et al., 2012).

An obvious question arises at this point: What is the best choice of basis function, and why? The general answer is that the best choice is given by basis functions that resemble the underlying sources most closely, as already hinted at in Potworowski et al. (2012). This is apparent from equation 2.10 since such basis functions will best approximate the underlying function $f$. The difficulty is that under experimental conditions, it is not known a priori what form $f$ takes. Therefore, we propose to expand $f$ in a set of basis functions naturally given by the problem.

Remember that we assumed that $f$ is an element of the Hilbert space $L^2([a, b])$. As long as the kernel $K$ is bounded (as it is in our case), the forward problem described by equation 2.6 can be expressed as a bounded linear functional. Then, using the Riesz representation theorem, we can rewrite equation 2.6 as an inner product (Parker, 1994):

$$V(z_i) = \langle K_i, f \rangle, \qquad K_i(z') = K(z_i, z') \tag{2.13}$$

Moreover, because we are working in a Hilbert space, we can employ the decomposition theorem to write

$$f = f_{\parallel} + f_{\perp}, \qquad f_{\parallel} \in \mathcal{K}, \quad f_{\perp} \in \mathcal{K}^{\perp} \tag{2.14}$$

where $\mathcal{K}$ is the subspace spanned by the M representers $K_i$ and $\mathcal{K}^{\perp}$ is its orthogonal complement.
Since any function in $\mathcal{K}$ is orthogonal to any function in $\mathcal{K}^{\perp}$, it follows that

$$V(z_i) = \langle K_i, f_{\parallel} + f_{\perp} \rangle = \langle K_i, f_{\parallel} \rangle \tag{2.15}$$

This means that any part of $f$ that lies in $\mathcal{K}^{\perp}$ has no effect on the recorded potential. Moreover, the existence of $\mathcal{K}^{\perp}$ also implies that there is an infinite number of functions that satisfy the recorded data perfectly.
In order to single out a particular solution, it is customary to pick the one with the minimum norm (Parker, 1994). In this case, this amounts to setting $f_{\perp} = 0$ because

$$\|f\|^2 = \|f_{\parallel}\|^2 + \|f_{\perp}\|^2 \ge \|f_{\parallel}\|^2 \tag{2.16}$$

Hence, by setting $f_{\perp} = 0$, the minimum norm solution is given by

$$f(z) = \sum_{j=1}^{M} \alpha_j\, K_j(z) \tag{2.17}$$

with $K_j(z) = K(z_j, z)$. Comparing equation 2.17 with equation 2.10 shows that this approach leads to choosing the basis functions $\phi_j = K_j$.
Thereby, inserting equation 2.17 into equation 2.12 produces

$$V(z_i) = \sum_{j=1}^{M} \langle K_i, K_j \rangle\, \alpha_j = \sum_{j=1}^{M} \Gamma_{ij}\, \alpha_j \tag{2.18}$$

A matrix $\boldsymbol{\Gamma}$ created from the inner products of a set of functions is called a Gram matrix and has a number of interesting properties (Olver & Shakiban, 2006). Most important, the linear independence of the representers guarantees that the matrix is full rank, which in turn means that it is invertible. Hence, the Gram matrix approach produces a full rank linear system, $\mathbf{V} = \boldsymbol{\Gamma}\, \boldsymbol{\alpha}$.
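To illustrate, a minimal Matlab sketch of the rCSD construction approximates the inner-product integrals on a fine grid, reusing a, b, and ze from the qCSD sketch above (names are ours; Vrec stands for the vector of recorded potentials, and the regularization discussed below is omitted for clarity):

    % Sketch: rCSD via the Gram matrix, Gamma_ij = <K_i, K_j> on [a, b].
    Nq  = 1001;                        % fine grid for the inner products
    zq  = linspace(a, b, Nq);
    dz  = zq(2) - zq(1);
    Kz  = csd_kernel(ze, zq, 0.25e-3, 0.3, 1.7);  % M x Nq representer samples
    Gam = (Kz * Kz') * dz;             % Riemann-sum approximation of the Gram matrix
    alpha = Gam \ Vrec;                % expansion coefficients from the data
    f_hat = Kz' * alpha;               % estimated CSD on the grid (eq. 2.17)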

2.3.3  eCSD

To facilitate the comparison of the expansion methods with the kernel method, we also add a small variation of kCSD. Instead of employing the kernel trick from machine learning to reduce the model estimation dimension, we directly use the expansion into gaussian basis functions to create an underdetermined inverse problem. This means we simply choose $\phi_j$ in equation 2.10 to be

$$\phi_j(z) = e^{-(z - z_j)^2 / (2 w^2)} \tag{2.19}$$

The number of sources N can be chosen freely in the same way as in kCSD. Note that in kCSD, the width of the sources w is called R. We have avoided this choice here to prevent confusion with the lateral source radius in iCSD. Finally, although we describe eCSD as a variation of kCSD, it could equally well be viewed as an underdetermined version of iCSD with a different choice of basis function. However, since we added eCSD specifically to explore the effect of the kernel trick in a 1D setting, we will refer to it as a variant of kCSD.

2.4  Solving the Discretized Inverse Problem Using SVD

Both the quadrature-type (qCSD) and the expansion-type (rCSD, kCSD, eCSD, and iCSD) discrete inverse problems can be solved numerically using the singular value decomposition (SVD). As we will see in section 2.5 on regularization, the SVD approach is particularly attractive for solving Fredholm integral equations because these problems are often ill posed and highly sensitive to noise.

In the SVD, an M by N matrix $\mathbf{G}$ is decomposed into (Aster et al., 2005; Olver & Shakiban, 2006)

$$\mathbf{G} = \mathbf{U}\, \mathbf{S}\, \mathbf{V}^{\top} \tag{2.20}$$

where
  • $\mathbf{U}$ is an M by M orthogonal matrix (i.e., $\mathbf{U}^{\top}\mathbf{U} = \mathbf{I}$, where $\mathbf{I}$ is the identity matrix) whose columns span the data space $\mathbb{R}^M$.

  • $\mathbf{V}$ is an N by N orthogonal matrix whose columns span the model space $\mathbb{R}^N$.

  • $\mathbf{S}$ is an M by N diagonal matrix with p positive diagonal elements called singular values. The number of nonzero singular values p is equal to the rank of the matrix $\mathbf{G}$. Moreover, the singular values are customarily arranged in decreasing order, $s_1 \ge s_2 \ge \cdots \ge s_p > 0$.

Because there are only p nonzero singular values, the SVD of $\mathbf{G}$ can also be written in a more compact form:

$$\mathbf{G} = \mathbf{U}_p\, \mathbf{S}_p\, \mathbf{V}_p^{\top} \tag{2.21}$$

The compact form simply clarifies that the last $M - p$ columns of $\mathbf{U}$ and the last $N - p$ columns of $\mathbf{V}$ are multiplied by 0. More important, the compact form allows the computation of the Moore-Penrose pseudo-inverse,

$$\mathbf{G}^{\dagger} = \mathbf{V}_p\, \mathbf{S}_p^{-1}\, \mathbf{U}_p^{\top} \tag{2.22}$$
which provides us with the solution to both the qCSD- and rCSD-type discrete inverse problem.

For full rank square matrices (like the ones given by iCSD, kCSD, and rCSD), there is no difference between the pseudo-inverse and the regular matrix inverse (i.e., $\mathbf{G}^{\dagger} = \mathbf{G}^{-1}$). This is not so for the underdetermined problems like qCSD and eCSD, where a regular matrix inverse does not even exist. In the underdetermined CSD methods, the model null space is nontrivial, while the data null space is trivial as long as the M recording electrodes are at different spatial locations. As a result, an infinite number of source distributions can fit the recorded data vector exactly. Hence, an important question is, Which solution is found when using the pseudo-inverse? It turns out that the pseudo-inverse solution has the property of being unique as well as being the solution with the minimum norm (Aster et al., 2005).

It may seem that the problem has now been solved and the estimated CSD distribution is uniquely given by

$$\hat{\mathbf{f}} = \mathbf{G}^{\dagger}\, \mathbf{V} \tag{2.23}$$

Unfortunately, this solution is often useless in practical settings because even under mild noise levels, the estimated solution $\hat{\mathbf{f}}$ bears little resemblance to the true source distribution $\mathbf{f}$. To circumvent this difficulty, we turn to so-called regularization techniques.

2.5  Regularization

It can be shown that Fredholm integral equations of the first kind with a square integrable kernel are ill posed (Vogel, 2002). Ill-posed problems are defined in opposition to well-posed problems, which require that the following three properties be given (Parker, 1977):

  1. Existence of a solution: For each data vector $\mathbf{V}$, there exists a solution $f$ satisfying equation 2.6.

  2. Uniqueness: The solution is unique.

  3. Stability: The solution is stable with respect to perturbations in $\mathbf{V}$.

So how well does our discretized 1D CSD problem align with these properties? First, as long as the matrix $\mathbf{G}$ has full row rank, which is the case if all recording sites are at different locations, any data vector $\mathbf{V}$ will be in Range($\mathbf{G}$) and therefore condition 1 will be satisfied. Second, we already know that underdetermined problems like qCSD or eCSD admit an infinite number of solutions, thereby violating condition 2. However, we have circumvented this problem by limiting ourselves to the minimum norm solution, which we have found to be unique. Third, because discrete inverse problems always have a finite condition number, they cannot formally be unstable. Instead, they are referred to as numerically ill posed or ill conditioned (Aster et al., 2005). This last part is worth investigating more thoroughly because it is a major reason that the CSD inverse problem is difficult to solve.

Let us recall that the solution to the discretized problem is given by

$$\hat{\mathbf{f}} = \mathbf{G}^{\dagger}\, \mathbf{V} = \sum_{i=1}^{p} \frac{\mathbf{u}_i^{\top} \mathbf{V}}{s_i}\, \mathbf{v}_i \tag{2.24}$$

It can be easily seen from the above relationship that small singular values $s_i$ can heavily amplify the contribution of the corresponding model space basis vector $\mathbf{v}_i$ and thereby make it dominate the solution. We have already discussed in the previous section that it is customary to arrange the singular values in decreasing order. But we have not mentioned yet that the singular values always decay to zero. In fact, the smoother the kernel of our Fredholm integral equation, the faster the decay (Hansen, 2010). Moreover, the singular vectors can be compared to a spectral basis in the sense that they exhibit increasing numbers of zero crossings (i.e., oscillations) for smaller singular values (Hansen, 2010). Thus, the amplification of singular vectors corresponding to small singular values introduces more and more high-frequency oscillations into the estimated solution $\hat{\mathbf{f}}$. This explains why even small noise in the data vector $\mathbf{V}$ can have a substantial effect on the solution, as exemplified by

$$\hat{\mathbf{f}} = \mathbf{G}^{\dagger} (\mathbf{V}_{\mathrm{exact}} + \mathbf{e}) = \mathbf{G}^{\dagger}\, \mathbf{V}_{\mathrm{exact}} + \mathbf{G}^{\dagger}\, \mathbf{e} \tag{2.25}$$

where $\mathbf{e}$ is an additive noise vector and $\mathbf{V}_{\mathrm{exact}}$ is the noise-free data. Considering that

$$\mathbf{V}_{\mathrm{exact}} = \mathbf{G}\, \mathbf{f}_{\mathrm{exact}} \tag{2.26}$$

an upper bound for this noise amplification is given by

$$\frac{\|\hat{\mathbf{f}} - \mathbf{f}_{\mathrm{exact}}\|_2}{\|\mathbf{f}_{\mathrm{exact}}\|_2} \le \operatorname{cond}(\mathbf{G})\, \frac{\|\mathbf{e}\|_2}{\|\mathbf{V}_{\mathrm{exact}}\|_2} \tag{2.27}$$

where $\|\cdot\|_2$ denotes the regular 2-norm and $\operatorname{cond}(\mathbf{G}) = s_1 / s_p$ is the condition number of $\mathbf{G}$. As shown in section 4, it is not rare in 1D CSD estimation to have condition numbers largely exceeding 1000. This means that unless there are some very lucky cancellation effects, noise at a level of 1 part in 1000 is potentially enough to make the solution practically useless.
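As a concrete example of this bound, with $\operatorname{cond}(\mathbf{G}) = 10^3$ and a relative noise level of $\|\mathbf{e}\|_2 / \|\mathbf{V}_{\mathrm{exact}}\|_2 = 10^{-3}$, equation 2.27 allows a relative solution error of up to $10^3 \times 10^{-3} = 1$, that is, an error as large as the solution itself.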
A common way to circumvent this difficulty is to filter the singular values by multiplying them by a regularizing function $\varphi_{\lambda}$ designed such that $\varphi_{\lambda}(s_i) \to 0$ for $s_i \to 0$ (Vogel, 2002). Thereby, the exact solution from equation 2.24 is replaced by the regularized solution

$$\hat{\mathbf{f}}_{\lambda} = \sum_{i=1}^{p} \varphi_{\lambda}(s_i)\, \frac{\mathbf{u}_i^{\top} \mathbf{V}}{s_i}\, \mathbf{v}_i \tag{2.28}$$
This approach is sometimes referred to as regularization by spectral filtering, with the individual entries of the regularization function $\varphi_{\lambda}(s_i)$ being called filter factors (Hansen, 2010). In this letter, we focus on three regularization schemes differing only in their regularizing function $\varphi_{\lambda}$:
  • Truncated SVD (tSVD):

    $$\varphi_{\lambda}(s_i) = \begin{cases} 1, & s_i \ge \lambda \\ 0, & s_i < \lambda \end{cases} \tag{2.29}$$

  • Tikhonov:

    $$\varphi_{\lambda}(s_i) = \frac{s_i^2}{s_i^2 + \lambda^2} \tag{2.30}$$

  • Damped SVD (dSVD):

    $$\varphi_{\lambda}(s_i) = \frac{s_i}{s_i + \lambda} \tag{2.31}$$

It is obvious that all these methods function in the same way: they dampen the effect of small singular values by reducing the contribution of their corresponding singular vector to the solution. This process can be likened to a low-pass filter because the singular vectors associated with the smaller singular values are increasingly oscillatory. Damped SVD is similar to Tikhonov regularization, but the transition between including large and excluding small singular values is slower.
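A minimal Matlab sketch of the filtered solution in equation 2.28 makes the three schemes concrete (names are ours; for brevity, the sketch uses the plain SVD, that is, the identity prior):

    % Sketch: spectrally filtered solution (eq. 2.28) for tSVD/Tikhonov/dSVD.
    function f_reg = spectral_solve(G, V, lambda, scheme)
      [U, S, W] = svd(G, 'econ');      % compact SVD; W spans the model space
      s = diag(S);
      switch scheme
        case 'tsvd',     phi = double(s >= lambda);        % eq. 2.29
        case 'tikhonov', phi = s.^2 ./ (s.^2 + lambda^2);  % eq. 2.30
        case 'dsvd',     phi = s ./ (s + lambda);          % eq. 2.31
      end
      f_reg = W * (phi .* (U' * V) ./ s);  % filtered spectral expansion
    end

For example, spectral_solve(G, V_noisy, 0.1, 'tikhonov') returns the Tikhonov-regularized estimate for $\lambda = 0.1$.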

2.6  Choosing a Prior

So far we have considered only the case where the minimum norm of the model $\|\mathbf{f}\|$ (in the case of quadrature) or the minimum norm of the coefficients $\|\boldsymbol{\alpha}\|$ (in expansion methods) is enforced using SVD. However, it is also possible to enforce the minimum norm of smoothness measures of the model, $\|\mathbf{L}\mathbf{f}\|$ (or $\|\mathbf{L}\boldsymbol{\alpha}\|$ for expansion methods), using generalized SVD (GSVD) instead of SVD. The matrix $\mathbf{L}$ is called a prior and is usually chosen to measure the size or the roughness of the function $f$. In the case of quadrature methods, $\mathbf{L}$ would then respectively be the identity matrix or a discrete approximation to a derivative of the function (the first and second derivatives are the most common choices). The situation is a bit more difficult with expansion methods because the expansion coefficients do not represent the function $f$ itself, only the contribution of their respective basis function. It is, however, still possible to define measures of the size or the roughness of $f$ in terms of its expansion coefficients. By expressing the function $f$ in terms of its expansion as in equation 2.10, we find that the squared norm of its derivative of order d is given by

$$\left\| f^{(d)} \right\|^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i\, \alpha_j \int_a^b \phi_i^{(d)}(z)\, \phi_j^{(d)}(z)\, dz = \boldsymbol{\alpha}^{\top} \mathbf{P}\, \boldsymbol{\alpha} \tag{2.32}$$

Since $\mathbf{P}$ is composed of the inner products of the (differentiated) basis functions, it is a positive definite matrix (Olver & Shakiban, 2006). This allows us to decompose it into a lower triangular matrix and its transpose, $\mathbf{P} = \mathbf{L}\mathbf{L}^{\top}$, by means of the Cholesky factorization. Thereby, we can rewrite the above relation as

$$\boldsymbol{\alpha}^{\top} \mathbf{P}\, \boldsymbol{\alpha} = \boldsymbol{\alpha}^{\top} \mathbf{L}\, \mathbf{L}^{\top} \boldsymbol{\alpha} = \left\| \mathbf{L}^{\top} \boldsymbol{\alpha} \right\|^2 \tag{2.33}$$

and fulfill the goal of expressing the size of $f^{(d)}$ as a function of its expansion coefficients $\boldsymbol{\alpha}$.
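As a sketch (names ours), the matrix $\mathbf{P}$ of equation 2.32 can be approximated from the sampled basis functions, and its Cholesky factor then supplies the prior applied to the coefficients:

    % Sketch: model prior for expansion methods via equations 2.32 and 2.33.
    % Phi is an Nq x N matrix of basis functions sampled on the grid zq.
    dPhi = Phi;
    for k = 1:d
      dPhi = diff(dPhi) / dz;          % discrete d-th derivative of each basis
    end
    P = dPhi' * dPhi * dz;             % P_ij ~ <phi_i^(d), phi_j^(d)>
    L = chol(P + 1e-12 * eye(size(P)), 'lower');  % P = L*L' (small ridge for numerical safety)

The factor $\mathbf{L}^{\top}$ then enters the GSVD-based solver as the matrix acting on the coefficient vector $\boldsymbol{\alpha}$.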

2.7  Methods for Choosing the Regularization Parameter

The main difficulty remaining to be solved is to find ways to choose a good value for the regularization parameter $\lambda$. The objective of a regularization parameter selection method is to automatically find the value of $\lambda$ that minimizes the error between the true function $f$ and the regularized solution $\hat{f}_{\lambda}$. It is therefore informative to examine the nature of these errors.

Regularization by spectral filtering stabilizes the solution by introducing bias. This implies that there are two types of errors that are traded off against each other: the bias error introduced by filtering the singular values and the perturbation error resulting from inverting the noise component in the data. Writing the filter factors as a diagonal matrix $\boldsymbol{\Phi} = \operatorname{diag}\!\left( \varphi_{\lambda}(s_1), \ldots, \varphi_{\lambda}(s_p) \right)$ allows us to express these as

$$\hat{\mathbf{f}}_{\lambda} - \mathbf{f}_{\mathrm{exact}} = \left( \mathbf{V}\, \boldsymbol{\Phi}\, \mathbf{V}^{\top} - \mathbf{I} \right) \mathbf{f}_{\mathrm{exact}} + \mathbf{V}\, \boldsymbol{\Phi}\, \mathbf{S}^{-1}\, \mathbf{U}^{\top}\, \mathbf{e} \tag{2.34}$$

where $\mathbf{I}$ is the identity matrix and all the other quantities are the same as in equations 2.24 and 2.25; the first term is the bias error and the second the perturbation error. The contribution of these errors depends monotonically on the value of the regularization parameter $\lambda$: if $\lambda$ is large (oversmoothing), most of the noise will get filtered out, but the bias will be increased. If $\lambda$ is small (undersmoothing), there will be almost no bias, but the noise will dominate the solution.

Successful regularization therefore hinges on choosing the regularization parameter in a way that minimizes the bias while preventing the perturbation error from blowing up. In practice, this is far from trivial because neither the true solution $f$ nor the noise term $\mathbf{e}$ is known. Therefore, it is necessary to find other measures on the data to choose $\lambda$; the literature abounds with regularization parameter selection methods. Here, we have tested three: L-curve, generalized cross-validation (GCV), and the normalized cumulative periodogram (NCP).

The L-curve method aims to balance the residual norm $\|\mathbf{V} - \mathbf{G}\hat{\mathbf{f}}_{\lambda}\|$ and a measure of the size of a desirable characteristic of the solution, for example, $\|\mathbf{L}\hat{\mathbf{f}}_{\lambda}\|$ (respectively, $\|\boldsymbol{\alpha}_{\lambda}\|$ and $\|\mathbf{L}^{\top}\boldsymbol{\alpha}_{\lambda}\|$ for expansion methods). The balance is achieved by changing the value of the regularization parameter $\lambda$. The L-curve rests on the observation that away from the optimal solution, the model norm tends to increase rapidly with decreasing $\lambda$ (undersmoothing) or the residual norm increases rapidly with increasing $\lambda$ (oversmoothing). Therefore, sweeping through various values of $\lambda$ optimally generates an L-shaped curve, giving the name to the selection method. The regularization parameter is then chosen by identifying the point of maximal positive curvature in the L-curve.

In contrast, GCV selects $\lambda$ by minimizing the prediction error of the potential under the assumption that the additive noise is white. In practice, this is done by separating the data into two sets. An estimate of the solution is found using the first set and then used to predict the values in the second set. The parameter value for which the error between the predicted values and the values in the second set is minimized is then selected. Here, we have used leave-one-out generalized cross-validation as described in Hansen (2007).

Finally, also based on the assumption that the additive noise is white, NCP chooses the regularization parameter $\lambda$ such that the vector of residuals, $\mathbf{V} - \mathbf{G}\hat{\mathbf{f}}_{\lambda}$ (or $\mathbf{V} - \boldsymbol{\Gamma}\boldsymbol{\alpha}_{\lambda}$ for expansion methods), is closest to flat (white residuals). In practice, this is done by choosing $\lambda$ such that the cumulated sum of the power spectrum of the residual vector (also called the normalized cumulative periodogram) is closest to linear (Hansen, 2010).
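A minimal Matlab sketch of the NCP selection rule, reusing spectral_solve from above, might look as follows (names are ours; regtools by Hansen, 2007, offers polished implementations of the selection methods):

    % Sketch: pick the lambda whose residual periodogram is closest to white.
    function best = ncp_select(G, V, lambdas, scheme)
      q = floor(numel(V) / 2);
      c_white = (1:q)' / q;            % NCP of white noise: a straight line
      d = zeros(size(lambdas));
      for k = 1:numel(lambdas)
        r = V - G * spectral_solve(G, V, lambdas(k), scheme);  % residual
        p = abs(fft(r)).^2;            % residual power spectrum
        p = p(2:q+1);                  % drop the DC component
        c = cumsum(p) / sum(p);        % normalized cumulative periodogram
        d(k) = norm(c - c_white);      % distance to the white-noise NCP
      end
      [~, i] = min(d);
      best = lambdas(i);
    end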

2.8  Solution Resolution

We have already mentioned that regularization stabilizes the estimation at the cost of introducing bias. However, this is not the only cost, because the filtering of singular values also affects the resolution of the solution. This concept is easiest to understand by looking at the forward and inverse estimation process assuming no noise:

$$\hat{\mathbf{f}}_{\lambda} = \mathbf{G}_{\lambda}^{\dagger}\, \mathbf{G}\, \mathbf{f} = \mathbf{R}_{\lambda}\, \mathbf{f}, \qquad \mathbf{G}_{\lambda}^{\dagger} = \mathbf{V}\, \boldsymbol{\Phi}\, \mathbf{S}^{-1}\, \mathbf{U}^{\top} \tag{2.35}$$

By identifying the estimation resolution with the matrix $\mathbf{R}_{\lambda} = \mathbf{G}_{\lambda}^{\dagger}\, \mathbf{G} = \mathbf{V}\, \boldsymbol{\Phi}\, \mathbf{V}^{\top}$, we see that the resolution matrix will not be an identity matrix if $\varphi_{\lambda}(s_i) \ne 1$ for some $i$. Therefore, whenever a regularized inverse is obtained, it is useful to check the resolution of the estimation to verify that interesting variations in the solution are indeed within the resolution capability of the inversion method.

In practice, the model resolution is usually qualitatively evaluated using a so-called spike test (we have called it a delta test here in order to avoid confusion with a neural action potential). This means simply replacing the model $f$ with a delta function and then evaluating how well the delta function is resolved using the inversion matrix with the $\lambda$ estimated from the experimental data of interest. The full mathematical treatment of this topic goes beyond the scope of this work, but interested readers are referred to Menke (1989), Parker (1994), and Aster et al. (2005) for a detailed discussion.
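As a sketch of such a delta test for a quadrature-type system (names ours; it reuses N, a, b, zq, and G from the qCSD sketch, and lambda_data stands for the regularization parameter estimated from the experimental data; j0 is the grid index of the test source):

    % Sketch: delta test of the resolution at grid index j0 (eq. 2.35, no noise).
    f_delta = zeros(N, 1);
    f_delta(j0) = (N - 1) / (b - a);   % unit-area discrete delta at zq(j0)
    V_delta = G * f_delta;             % noise-free forward potential
    r_prof  = spectral_solve(G, V_delta, lambda_data, 'tikhonov');
    plot(zq, r_prof);                  % resolution profile for a source at zq(j0)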

3  Methods

We have compared the accuracy of different CSD methods under various regularization schemes in order to identify the methods that are likely to work best in a noisy experimental setting. We have compared the previously published (1) spline iCSD (Pettersen et al., 2006) and (2) kCSD with gaussian basis functions (Potworowski et al., 2012); and the newly presented (3) eCSD, which uses the same gaussian basis function expansion as kCSD but without the kernel trick, (4) qCSD using Simpson’s quadrature rule, and (5) rCSD. Each of these CSD methods was tested under three regularization schemes (Tikhonov, dSVD, tSVD), three regularization coefficient selection methods (L-curve, GCV, NCP), and seven different priors. Moreover, we have applied the priors in two different ways for the expansion methods. We called a particular combination of these methods an estimation scheme. Hence, each scheme is composed of five parameters: a CSD method (iCSD, kCSD, eCSD, qCSD, or rCSD), a regularization method (Tikhonov, dSVD, or tSVD), a regularization parameter selection method (NCP, L-curve, or GCV), a prior (see below for a detailed explanation), and a way of enforcing the prior (model or coefficient). This results in 531 different estimation schemes after removal of impossible or redundant schemes (e.g., qCSD with model prior enforcement). The behavior of each estimation scheme has been evaluated for five diameters (0.5, 1, 2, 3, and 5 mm) under seven noise levels (0, 1, 2, 3, 5, and 10 dB). The reconstruction accuracy of each scheme was evaluated using the process schematically shown in Figure 3A. The following section describes these steps in more detail.

Figure 3:

Flowchart describing the estimation scheme comparison process. (A) Simulation process and computation of the source reconstruction error. (B) Ranking of estimation schemes according to the mean over all conditions of the mean of the sampling distribution of the mean.


3.1  “True” CSD Function $f(z)$, Forward Model, and Noise

For the underlying “true” CSD function $f(z)$, we have chosen eight different CSD depth profiles. As the ground-truth profile, we used a sum of gaussians similar to the one used in Pettersen et al. (2006):

$$f(z) = \sum_{k} a_k\, e^{-(z - z_k)^2 / (2 \sigma_k^2)} \tag{3.1}$$
The detailed description in section 4 is based on this profile. In order to assess the stability and reproducibility of the method ranking obtained from this “sum of gaussians” profile, we repeated the ranking for seven additional CSD depth profiles:
  • 1, 2, and 3: Small 1, small 2, and large monopole, each a single gaussian with center $z_0$ and width $\sigma$ (both in mm):

    $$f(z) = a\, e^{-(z - z_0)^2 / (2 \sigma^2)} \tag{3.2}$$

  • 4: Large dipole (the sum of gaussians negates the need for an additional small dipole):

    $$f(z) = a \left[ e^{-(z - z_1)^2 / (2 \sigma^2)} - e^{-(z - z_2)^2 / (2 \sigma^2)} \right] \tag{3.3}$$

  • 5 and 6: Small and large quadrupole. In both cases, the same width $\sigma$ was used:

    $$f(z) = a \left[ e^{-(z - z_1)^2 / (2 \sigma^2)} - 2\, e^{-(z - z_2)^2 / (2 \sigma^2)} + e^{-(z - z_3)^2 / (2 \sigma^2)} \right] \tag{3.4}$$

  • 7: CSD depth profile described in Glabska, Potworowski, Leski, and Wojcik (2014). In order to obtain the Glabska profile, we first extracted the CSD depth profile from their Figure 4b. We then transformed their color-coded image into intensity values using a reverse lookup table approach and fitted the obtained depth profile with seven gaussian functions. This provided us with a parametric description for the Glabska profile given by

    $$f(z) = \sum_{k=1}^{7} a_k\, e^{-(z - z_k)^2 / (2 \sigma_k^2)} \tag{3.5}$$

    with the amplitudes $a_k$, centers $z_k$, and widths $\sigma_k$ obtained from the fit.

For the lateral source profile, we have used the cylinder assumption (see equation 2.3) with diameters of 0.5, 1, 2, 3, or 5 mm.

From these CSD distributions, we simulated the potential for 32 electrodes (interelectrode spacing of 100 μm) according to the procedure described in Figure 2B. The choice of this setup is meant to simulate a cortical recording with a linear 32-channel electrode. The medium was parameterized as two semi-infinite media separated at $z = 0$, differing only in their electrical conductivity (homogeneous and isotropic conductivity of $\sigma_t = 1.7$ S/m for the top medium and $\sigma_b = 0.3$ S/m for the bottom medium). The bottom medium conductivity corresponds to the conductivity generally assumed for extracellular cortical tissue, while $\sigma_t$ is the conductivity of saline (Wagner, Zahn, Grodzinsky, & Pascual-Leone, 2004). The virtual electrode was positioned to have four contacts in the top medium, and the fifth contact positioned at $z = 0.05$ mm, 50 μm below the surface. This setup is a realistic representation of a feasible cortical recording because a 32-channel electrode has a larger span (3.1 mm) than the thickness of most mammalian cortices. The assumed setup is schematically displayed in Figure 2A.

For each level of signal-to-noise ratio (SNR), we then generated 1000 noise realizations of the appropriate amplitude and added them to the noise-free potential $\mathbf{V}_{\mathrm{exact}}$. The resulting noisy potential was used as an input to the different source estimation schemes. It should be noted that all 531 estimation schemes were evaluated on the same data; we did not regenerate different noise for each scheme.
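For reference, a minimal Matlab sketch of the noise generation (names ours): white gaussian noise is scaled against the power of the noise-free potential to reach a prescribed SNR in dB.

    % Sketch: add white gaussian noise at a given SNR (in dB) to the potential.
    snr_db  = 3;
    P_sig   = mean(V_exact.^2);               % power of the noise-free potential
    P_noise = P_sig / 10^(snr_db / 10);       % target noise power
    e       = sqrt(P_noise) * randn(size(V_exact));
    V_noisy = V_exact + e;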

3.2  Priors

We have tested the methods under seven different priors (the symbol used for each prior is shown in brackets): (1) minimization of the quadrature/expansion coefficients; (2) minimization of the model norm [0], that is, $\|\mathbf{f}\|$ for qCSD or $\|\mathbf{L}^{\top}\boldsymbol{\alpha}\|$ corresponding to equation 2.33 with $d = 0$ for the expansion methods; (3) minimization of the first derivative [1]; (4) minimization of the second derivative [2]; (5) minimization of the combination of the model norm and the first derivative [0 1]; (6) minimization of the combination of the model norm and the second derivative [0 2]; and (7) minimization of the combination of the model norm, the first derivative, and the second derivative [0 1 2]. In the case of qCSD, each of these priors is implemented using the identity matrix for measuring the model norm or with $\mathbf{L}$ chosen as a discrete approximation to the dth derivative. In the case of expansion methods, there are actually two ways in which the priors can be applied. In the first case, we can construct $\mathbf{L}$ such that it measures the relevant characteristic (i.e., the norm of the function f or of one of its derivatives), as shown in equations 2.32 and 2.33. In the second case, we can directly minimize the norm of the coefficients $\boldsymbol{\alpha}$ (or of their discrete derivative). For example, minimizing the first derivative would then prioritize all coefficients having the same value instead of directly enforcing a small norm for the first derivative $\|f'\|$. We decided to explore both options here because calculating the matrix $\mathbf{P}$ involves numerical approximation of the integral from equation 2.32, which potentially introduces errors or instability into the problem. Thus, we also applied all the priors used in qCSD (i.e., the identity matrix and the discrete derivatives) to the expansion methods. We refer to these priors as the coefficient priors in contrast to the model priors. Note that this distinction is meaningless for qCSD because it is not a basis function expansion method.

Finally, all priors except the first are implemented using GSVD instead of SVD. We note that the first prior and [0] (for the coefficient priors) enforce the same condition but use either SVD or GSVD. Hence, these conditions are useful in assessing the effect of numerical differences between the two methods.

3.3  Reconstruction Error

The source reconstruction error was estimated using a discrete approximation (361 samples regularly spaced in 10 micron steps) to the 2-norm of the error
$$E = \left[ \sum_{k=1}^{361} \left( \hat{f}(z_k) - f(z_k) \right)^2 \Delta z \right]^{1/2}, \qquad \Delta z = 10\ \mu\mathrm{m} \tag{3.6}$$

3.4  Comparison of Estimation Schemes

We compared the 531 estimation schemes under seven different noise levels and with five different source diameters (a total of 35 conditions). In order to rank the estimation schemes, we first created a sampling distribution of the mean estimation error by subsampling the full 1000-trial data set 1000 times with a sample size of 20 samples without replacement. We then computed the mean of the sampling distribution of the mean and the standard error of the mean and ranked the estimation schemes from lowest to highest mean error. This approach was chosen to answer the following question: “If one records 20 trials of noisy potential, which method will most likely provide the lowest average CSD estimation error?” Moreover, to avoid excessively contaminating the estimate with outliers, we have discarded the worst 10% of trials for each estimation scheme. Thereby, any method that has a larger number of outliers is implicitly penalized because the remaining outliers will increase the mean error. A schematic of the whole procedure is shown in Figure 3B.
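The subsampling procedure for one estimation scheme and one condition can be sketched as follows in Matlab (names ours; err is the vector of per-trial reconstruction errors):

    % Sketch: mean of the sampling distribution of the mean (20-trial samples).
    err = sort(err);
    err = err(1:round(0.9 * numel(err)));     % discard the worst 10% of trials
    nss = 1000;  k = 20;
    m   = zeros(nss, 1);
    for i = 1:nss
      idx  = randperm(numel(err), k);         % subsample without replacement
      m(i) = mean(err(idx));
    end
    scheme_score = mean(m);                   % schemes are ranked by this value
    scheme_sem   = std(m);                    % spread of the sampling distribution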

3.5  Simulation Environment

All the simulations were conducted using a custom-made Matlab (MathWorks) package that we made available for download at http://www.bic.mni.mcgill.ca/∼amirs/. This package builds on the freely available regtools by Hansen (2007), which implements all the SVD- and GSVD-based regularization methods. In order to handle the large amount of computation required for the comparison, we have used a high-performance computing cluster (Guillimin, McGill University) for running the simulations.

4  Results

4.1  Noise Sensitivity

We first evaluated the condition number associated with the naive (unregularized) CSD estimation to assess the upper bound on the noise amplification as defined in equation 4.1. Table 1 shows the condition number for each estimation scheme and each evaluated source diameter. As expected, increasing the source diameter leads to a larger condition number because of the increased smoothness of the forward kernel K (see equation 2.4). This in turn implies that CSD estimation is more difficult at larger source diameters. Moreover, the large condition number suggests that CSD estimation is potentially highly sensitive to noise. In fact, a condition number of $10^6$ means that noise at an amplitude of 1 part in 1 million could be enough to significantly affect the estimation quality. However, it is important to keep in mind that equation 2.27 provides an upper bound only on noise amplification. Therefore, to get a better picture of the need for regularization, we have explicitly computed the average noise amplification over trials for the naive CSD estimation using the following equation:
$$A = \left\langle \frac{\|\hat{\mathbf{f}} - \hat{\mathbf{f}}_{\mathrm{exact}}\|_2 \, / \, \|\hat{\mathbf{f}}_{\mathrm{exact}}\|_2}{\|\mathbf{e}\|_2 \, / \, \|\mathbf{V}_{\mathrm{exact}}\|_2} \right\rangle_{\mathrm{trials}} \tag{4.1}$$
Table 1:
Condition Number and Noise Amplification.
        Diam 0.5   Diam 1     Diam 2     Diam 3     Diam 5
Condition number      
rCSD 4.76E+04 3.85E+05 2.62E+06 7.18E+06 2.22E+07 
kCSD 2.44E+05 1.94E+06 1.31E+07 3.59E+07 1.11E+08 
qCSD 2.50E+02 7.08E+02 1.85E+03 3.05E+03 5.38E+03 
iCSD 1.15E+03 1.13E+03 2.08E+03 3.43E+03 6.02E+03 
eCSD 4.94E+02 1.39E+03 3.62E+03 5.99E+03 1.05E+04 
Noise amplification      
rCSD 63.54 61.48 60.70 60.61 61.12 
kCSD 67.23 64.86 63.97 63.82 64.46 
qCSD 68.53 66.20 65.29 65.11 65.73 
iCSD 64.70 62.65 61.84 61.70 62.28 
eCSD 67.23 64.86 63.97 63.82 64.46 

Notes: The condition number is given for each CSD method and each evaluated source diameter. As expected, the condition number increases for increasing source diameter because the representers become flatter. The noise amplification for all methods and source diameters is computed according to equation 4.1. Note that the noise amplification was averaged over the different SNRs.

Table 1 also shows the computed noise amplification for all methods and source diameters (the noise amplification was averaged over the different SNRs). Although the picture is not as bleak as suggested by the condition number, we can see that noise still has a considerable effect on estimation accuracy. It is interesting to see that the noise amplification as defined above is largely independent of the source diameter for all methods.

4.2  Demonstration of the Regularization

Before proceeding to the comparison of the various CSD estimation schemes, it is useful to build some visual intuition about the functioning of regularization. This is especially important in the present case because the large number of compared schemes makes it impossible to show the reconstruction for each case. Moreover, because a central aim of this work is to assess the performance of the estimation under various noise conditions, a visual inspection of every trial is impossible. Therefore, the following section will present the general characteristics of the SVD regularized CSD estimation using rCSD as an example.

In Figure 4, we demonstrate the average CSD estimation over 10 trials using the rCSD expansion method, Tikhonov regularization, NCP regularization parameter selection, and a minimum model norm prior. Figure 4A shows the noise-free potential (black curve) and the potential with additive gaussian white noise at a signal-to-noise ratio of 3 dB, simulating experimentally measured potential. The potential has been calculated for an electrode with 32 contacts spaced 100 μm apart using the method demonstrated in Figure 2B and the true CSD depth profile named “True CSD” in Figure 4B (same as Figure 2B). A cylinder with a radius of 0.25 mm was used for the lateral CSD source profile (see equation 2.3). The electrode was positioned in a way to leave four contacts above the brain surface, which is shown by the horizontal green dotted line at depth $z = 0$. The fifth electrode contact is positioned 50 μm below the brain surface. The electrical conductivity above the brain ($\sigma_t$) was set to 1.7 S/m to emulate the conductivity of artificial cerebrospinal fluid (aCSF), and the typical conductivity for gray matter, 0.3 S/m, has been chosen for $\sigma_b$ (Wagner et al., 2004; Pettersen et al., 2006). All of these parameters were chosen to emulate a laminar cortical recording with aCSF covering the brain surface.

Figure 4:

Demonstration of CSD estimation using the rCSD expansion method, Tikhonov regularization, NCP regularization parameter selection, and a minimum model norm. (A) Simulated potential without noise (black) and with additive gaussian white noise at an SNR of 3 dB (red). (B) CSD estimation averaged over 10 trials. Ideal (blue) estimation is computed assuming no noise. Optimal (red) is computed by finding the $\lambda$ that minimizes the reconstruction error. Naive (cyan) estimation corresponds to the source reconstruction obtained without regularization ($\lambda = 0$).


Figure 4B shows the average reconstruction over 10 trials using the rCSD expansion method, along with the regularization scheme described above. The ideal estimation (blue curve) was calculated by using a naive estimation ($\lambda = 0$) on the noise-free potential. This gives a benchmark value for the best possible estimation using the chosen expansion method. It is interesting to note that the ideal estimation is not perfect, as shown by the small dent on the positive peak of the CSD curve. This could potentially be explained by the fact that the rapid fluctuation of the CSD profile at the peak does not lie within the span of the expansion functions. However, as can be seen from the actual estimation (magenta curve), this small lack in fidelity is meaningless for the CSD estimation under noise. The naive estimation (cyan curve) is found using $\lambda = 0$ on the noisy potential. This is the estimation one would get without regularization. As expected, there is substantial noise amplification even after averaging over 10 trials. Moreover, it is interesting to note that the noise amplification seems larger outside the brain, where the conductivity is higher. The optimal lambda estimation (red curve) is calculated by finding the regularization parameter $\lambda_{\mathrm{opt}}$ that minimizes the error between the ideal CSD and the estimated CSD. This estimation represents the best possible estimation from the noisy potential under the chosen estimation scheme. The optimal estimation depends on the chosen CSD method (in this case, rCSD), the regularization method (Tikhonov), the prior, and the particular noise realization. However, because the regularization parameter $\lambda_{\mathrm{opt}}$ is found by minimizing the error with the ideal CSD, it is independent of the particular regularization parameter selection method. In fact, the quality of a regularization parameter selection method can be evaluated by looking at how similar the value of the regularization parameter found using the particular scheme is to $\lambda_{\mathrm{opt}}$.

It is clear from Figure 4B that the regularization stabilizes the estimation since the rCSD estimation (magenta curve) is much smoother than the nave estimation. However, it also exemplifies the cost of the increased stability. Because of the filtering of the right singular vectors associated with small singular values, the estimation is now biased, as seen by the reduced amplitude of the estimated CSD compared to the “True CSD.” Furthermore, the closeness of the rCSD estimation and the optimal estimation suggests that the blind application of the NCP regularization parameter selection works almost optimally in this case.

4.3  Comparison of Regularization Parameter Selection Methods

A major challenge in regularization is the choice of the regularization parameter $\lambda$. We have compared three different regularization parameter selection methods: NCP, L-curve, and GCV. Figures 5A to 5C show the process of regularization parameter selection for one trial for each of these selection methods. All parameters for the generation of the potential and the source profile are the same as in the previous section. In NCP (see Figure 5A), the regularization parameter is chosen such that the vector of residuals is closest to flat (white residuals). In practice, this is done by choosing $\lambda$ such that the cumulated sum of the power spectrum of the residual vector (also called the normalized cumulative periodogram) is closest to linear (Hansen, 2010). The changing shape of the NCP as $\lambda$ is varied is shown in Figure 5A.

Figure 5:

Demonstration of regularization parameter selection methods. (A) NCP attempts to find the normalized cumulative periodogram that is closest to linear. The selected curve is shown in cyan and the optimal one in red. The optimal $\lambda$ is found by minimizing the estimation error as measured by the 2-norm. (B) The L-curve chooses the regularization parameter by identifying the point of maximal positive curvature. (C) In GCV, the minimum of the leave-one-out prediction error is used for selecting $\lambda$. (D) Overlay of the estimated CSD using the regularization parameters identified using the methods in panels A to C. (E) Box plot of the estimation error for 100 trials. (F) Box plot of the success of each regularization parameter selection method as evaluated by the ratio between the selected and optimal $\lambda$. A value of 1 means optimal parameter selection.


Instead of focusing on the shape of the residuals, the L-curve criterion attempts to balance the residual norm and the model (semi-)norm. The L-curve rests on the assumption that, away from the optimal solution, the model norm will increase rapidly with decreasing $\lambda$ (undersmoothing) or the residual norm will increase rapidly with increasing $\lambda$ (oversmoothing). Therefore, sweeping through various values of $\lambda$ optimally generates an L-shaped curve that gives the name to the selection method. The regularization parameter is then chosen by identifying the point of maximal positive curvature in the L-curve. It is not necessarily the case that the optimal $\lambda$ really lies at the point of maximal curvature, as can be seen in Figure 5B, but it generally is close by. However, especially with very smooth models, the L-curve does not always have a point of positive curvature, in which case the L-curve method will fail (Hansen, 2010).

Finally, GCV selects $\lambda$ by minimizing the prediction error of the potential under the assumption that the additive noise is white. Here, we have used leave-one-out generalized cross-validation as described in Hansen (2007) (see also Hansen, 2010). The same approach is used in Potworowski et al. (2012). However, it is important to mention that although the principle is the same, the implementation of GCV used here differs from theirs. Figure 5C shows the value of the generalized cross-validation function as a function of the regularization parameter $\lambda$. We see that the chosen regularization parameter is quite close to the optimal one. Interestingly, the cross-validation function exhibits two local minima, suggesting two different solutions where the prediction error is small. This can be problematic and leads to a higher rate of outliers.

Figure 5D shows an overlay of the CSD estimation for one trial using the three different regularization parameter selection methods. However, Figure 5D does not necessarily provide a representative view of the general situation, since it shows only the resulting estimation for a single trial. In order to get a better picture of the quality of each of the selection methods, we have computed the estimation error for 100 trials (outliers removed). The resulting distribution of the error is shown as a box plot in Figure 5E. We see that for this particular estimation scheme (rCSD, Tikhonov, minimum model norm prior), NCP performs best, followed by the L-curve criterion. In fact, NCP performs almost optimally. In order to show how each of the selection methods compares to the optimal case, we have also investigated how close the selected regularization parameter lies to the optimal one. Figure 5F shows the ratio between the selected regularization parameter and $\lambda_{\mathrm{opt}}$ for each trial. The closer this ratio is to one, the better the performance of the regularization parameter selection method. As expected from Figure 5E, NCP also performs better in this respect. Moreover, it is interesting to note that NCP tends to overestimate $\lambda$ under the chosen conditions, while both the L-curve and the GCV tend to underestimate it.

4.4  Regularization and Resolution

It has already been mentioned that regularization stabilizes the solution at the cost of introducing bias. However, the magnitude of the regularization parameter also affects the resolution of the solution. We have explored this reduction in resolution by performing a delta test, which consists of looking at the noise-free source reconstruction of a discrete delta function. Figure 6A shows the resulting CSD estimation for the naive, optimal, and NCP-selected regularization parameters. It is clear that there is a marked difference between the unregularized (λ = 0) and the regularized cases. Most important, the estimation goes from a wavelet-type shape resembling the impulse response of a high-pass filter to a smoothing function akin to a low-pass filter.
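
A delta test is simple to script. The sketch below assumes zeroth-order Tikhonov regularization with the same illustrative forward matrix G and regularization parameter lambda as above; each column of R is the resolution profile of a delta source at one depth:

    % Apply the regularized estimator to the noise-free potential generated
    % by a discrete delta source at every depth index.
    n = size(G, 2);
    R = zeros(n, n);
    for j = 1:n
        q_delta = zeros(n, 1);
        q_delta(j) = 1;                       % delta source at depth index j
        d_clean = G * q_delta;                % noise-free forward potential
        R(:, j) = (G' * G + lambda^2 * eye(n)) \ (G' * d_clean);
    end
    imagesc(R); axis xy;                      % cf. the pseudocolor image in Figure 6B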

Figure 6:

Effect of regularization on solution resolution. (A) Resolution of a delta function located at a fixed depth using the naive (blue), optimal (red), and NCP-selected (magenta) regularization parameter. The estimation scheme and trial are the same as in Figure 5. (B) Pseudocolor image of the resolution of a delta function located at various depths.

Figure 6B shows the resolution profiles obtained from a delta test with discrete delta functions located at different depths. We see that the conductivity jump between the two media affects the estimation resolution for sources located close to the interface. Moreover, we note that the higher conductivity of the top medium dramatically reduces the estimation resolution there. Finally, we also observe a small reduction in resolution toward the bottom-most electrode, stemming from the fact that there are no data from below that can be used to constrain the estimate.

The width of the low-pass resolution profile is very useful for a qualitative assessment of estimated CSD profiles because it provides an approximation of the size of the features that can be resolved under the given conditions. In fact, the estimated CSD profile (assuming no noise) is simply the convolution of the resolution profile with the true CSD profile. This can be nicely approximated by multiplying each column in Figure 6B by the magnitude of the true CSD profile at the corresponding delta location and summing the weighted columns (the result has to be multiplied by the distance between the delta functions to get the correct magnitude).
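
In code, with R the matrix of resolution profiles from the delta-test sketch above, q_true the true CSD sampled at the delta locations, and dz their spacing (all names illustrative), the approximation is a single matrix-vector product:

    % Weight each resolution profile by the true CSD at its delta location,
    % sum the weighted profiles, and scale by the spacing of the deltas.
    q_est_approx = R * q_true(:) * dz;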

4.5  Comparison of CSD Estimation Schemes for Each Condition for Sum of Gaussians Profile

So far, we have demonstrated the workings of only one estimation scheme under a single condition. This does not tell us which estimation scheme should be preferred in a general situation. For instance, the exact signal-to-noise ratio and the source diameter are usually unknown in an experimental setting. Therefore, we decided to compare the 531 estimation schemes under seven different noise levels and five different source diameters (a total of 35 conditions). We ranked the estimation schemes based on the mean of the sampling distribution of the mean error, computed by subsampling the data set without replacement 1000 times with a sample size of 20 trials. This approach was chosen to answer the question: “If one records 20 trials of noisy potential, which method will most likely provide the lowest average CSD estimation error?” Moreover, to avoid excessively contaminating the estimate with outliers, we discarded the worst 10% of trials for each method. Thereby, any method that has a larger number of outliers is implicitly penalized because the remaining outliers will increase the mean error.
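
The following is a minimal sketch of this ranking statistic for one estimation scheme in one condition, with err an illustrative vector of per-trial estimation errors:

    % Discard the worst 10% of trials, then subsample 20 trials without
    % replacement 1000 times and average the subsample means.
    err = sort(err(:));
    err = err(1:floor(0.9 * numel(err)));    % outlier rejection
    nSub = 1000;  k = 20;
    subMeans = zeros(nSub, 1);
    for i = 1:nSub
        idx = randperm(numel(err), k);       % subsample without replacement
        subMeans(i) = mean(err(idx));
    end
    score = mean(subMeans);                  % ranking statistic for this scheme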

Table 2 shows the best-ranked estimation scheme for each SNR and source diameter combination. As expected from the analysis of the condition number, the mean error increases with increasing diameter. Moreover, unsurprisingly, the mean error decreases with increasing SNR, while the opposite is true for noise amplification. There are also some interesting trends in the distribution of the parameters among the top methods. First, regularization involving a prior minimizing the second derivative dominates the top ranks: in all but three conditions, the prior involves minimizing the second derivative. Second, there is a clear tendency to favor Tikhonov regularization over dSVD for smaller diameters. A similar tendency is observed for the comparison between the L-curve and NCP regularization parameter selection methods. However, the main finding that stands out from Table 2 is that no single estimation scheme completely outperforms the others across all tested conditions. It is therefore necessary to investigate further which types of schemes perform well in the different conditions.

Table 2:
Best-Ranked Estimation Scheme for Each SNR and Source Diameter for Sum of Gaussians Profile.
Diameter  SNR (dB)  CSD Method  Reg. Method  Selection  Prior  Prior Method  Mean Error  Std. Error  Mean Noise Ampl.
0.5       0         eCSD        Tikhonov     NCP        -      Model-0       0.6971      0.0189      3.12
0.5       1         kCSD        dSVD         L-curve    0 2    Coeff+        0.6623      0.0212      3.29
0.5       2         kCSD        dSVD         L-curve    0 2    Coeff+        0.6311      0.0163      3.52
0.5       3         iCSD        Tikhonov     L-curve    0 1 2  Coeff+        0.6022      0.0181      3.75
0.5       5         iCSD        Tikhonov     L-curve    0 2    Coeff+        0.5497      0.0157      4.34
0.5       7         iCSD        Tikhonov     L-curve    0 2    Coeff+        0.5085      0.0133      5.07
0.5       10        iCSD        Tikhonov     L-curve    0 2    Coeff+        0.4404      0.0111      6.20
1         0         kCSD        Tikhonov     NCP        -      Model-0       0.7686      0.0217      1.61
1         1         kCSD        Tikhonov     NCP        -      Model-0       0.7403      0.0201      1.75
1         2         iCSD        Tikhonov     L-curve    0 1 2  Coeff+        0.7016      0.0233      1.84
1         3         iCSD        Tikhonov     L-curve    0 1 2  Coeff+        0.6804      0.0204      2.02
1         5         iCSD        Tikhonov     L-curve    0 2    Coeff+        0.6199      0.0173      2.30
1         7         iCSD        Tikhonov     L-curve    0 2    Coeff+        0.5734      0.0145      2.69
1         10        rCSD        Tikhonov     L-curve    0 1 2  Model+        0.5109      0.0129      3.40
2         0         rCSD        Tikhonov     NCP        0 2    Model+        0.8489      0.0241      0.91
2         1         rCSD        Tikhonov     NCP        0 2    Model+        0.8247      0.0251      1.00
2         2         rCSD        Tikhonov     NCP        0 2    Model+        0.7987      0.0236      1.07
2         3         rCSD        Tikhonov     NCP        0 2    Model+        0.7627      0.0213      1.16
2         5         rCSD        Tikhonov     NCP        0 2    Model+        0.7235      0.0183      1.38
2         7         rCSD        Tikhonov     NCP        0 2    Model+        0.6795      0.0159      1.63
2         10        iCSD        Tikhonov     L-curve    0 2    Coeff+        0.6175      0.0155      2.08
3         0         kCSD        dSVD         NCP        0 2    Model+        0.8941      0.0179      0.67
3         1         iCSD        dSVD         NCP        -      Coeff+        0.8822      0.0191      0.74
3         2         rCSD        Tikhonov     NCP        0 2    Model+        0.8685      0.0198      0.81
3         3         iCSD        dSVD         NCP        -      Coeff+        0.8559      0.0187      0.89
3         5         rCSD        Tikhonov     NCP        0 2    Model+        0.8090      0.0206      1.08
3         7         rCSD        Tikhonov     NCP        0 2    Model+        0.7656      0.0186      1.29
3         10        rCSD        Tikhonov     NCP        0 2    Model+        0.7165      0.0176      1.68
5         0         kCSD        dSVD         NCP        0 2    Model+        0.9316      0.0144      0.46
5         1         iCSD        dSVD         NCP        -      Coeff+        0.9188      0.0166      0.51
5         2         iCSD        dSVD         NCP        -      Coeff+        0.9071      0.0169      0.57
5         3         kCSD        dSVD         NCP        0 2    Model+        0.8934      0.0181      0.63
5         5         iCSD        dSVD         NCP        -      Coeff+        0.8646      0.0184      0.77
5         7         iCSD        dSVD         NCP        -      Coeff+        0.8313      0.0181      0.93
5         10        kCSD        dSVD         NCP        0 2    Model+        0.7795      0.0178      1.24

Notes: The ranking was obtained according to the procedure described in Figure 3B. Note that in this case, the last step of averaging over all tested conditions is omitted, and each condition is considered independently.

In order to get a better idea of the type of estimation schemes found at the top of the ranking in each condition, we chose to consider all schemes whose mean estimation error was within the one-tailed 99% confidence interval of the best scheme’s mean error. Each estimation scheme that fulfilled this criterion was considered a good candidate to use in that condition. Table 3 shows the number of estimation schemes that satisfied this selection criterion in each condition and the percentage difference in mean error between the best scheme and the last one accepted as a possible candidate. It is interesting to note that although fewer schemes satisfy the selection criterion at smaller diameters, the percentage difference in mean error is larger. This suggests that the differences among estimation schemes are smaller at larger diameters, where CSD estimation is more difficult.
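
As a sketch, assuming approximately normal sampling distributions and illustrative per-scheme vectors meanErr and stdErr (standard errors), this selection criterion can be written as:

    % Accept every scheme whose mean error lies within the one-tailed 99%
    % confidence interval of the best scheme's mean error.
    [bestErr, iBest] = min(meanErr);
    z99 = 2.326;                             % one-tailed 99% normal quantile
    candidates = find(meanErr <= bestErr + z99 * stdErr(iBest));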

Table 3:
Distribution of Methods within the Top Rank.
                     Diam 0.5   Diam 1   Diam 2   Diam 3   Diam 5
Number of methods
SNR 0                      47       84      109       98      114
SNR 1                      48       83       87      100      111
SNR 2                      38       82       67      103       94
SNR 3                      40       65       30       90       92
SNR 5                      31       41       14       82       87
SNR 7                      27       22       10       65       77
SNR 10                     20       23       31       55       66
Percent difference
SNR 0                    6.96     7.27     7.35     4.76     3.99
SNR 1                    7.62     7.03     7.51     5.35     4.36
SNR 2                    6.40     8.58     7.71     5.82     4.66
SNR 3                    7.64     7.55     6.70     5.88     5.13
SNR 5                    6.71     6.97     6.19     6.21     5.47
SNR 7                    6.78     6.40     5.68     6.02     5.53
SNR 10                   6.24     6.26     6.12     6.06     5.68

Notes: For each condition, the table gives the number of estimation schemes whose mean estimation error is within the one-tailed 99% confidence interval of the best scheme’s mean error in that condition. The percent difference in error is between the best scheme and the last one accepted within the top rank.

Figure 7A displays the number of estimation schemes that satisfy the chosen selection criterion as a function of the number of tested conditions. We see that a small number of schemes (three; see Table 4) satisfy the selection criterion in all conditions. This is an encouraging result for our attempt to identify schemes that are likely to perform well across a range of experimental conditions. To investigate whether certain characteristics dominate the ranking within and across each condition, we counted the number of times the labels identifying each estimation scheme parameter were found within the top-ranked schemes (see Figures 7B to 7F). Each bar within a diameter block corresponds to an SNR value; for example, the first bar is for SNR = 0 and the last for SNR = 10 dB. It is important to remember that the different labels are not present in equal numbers in the whole set of estimation schemes. For example, because qCSD can be regularized only on the coefficients and not on the model, it makes up only 12% of the schemes instead of 22% for the other CSD methods. This means that if the top ranking were insensitive to the CSD method, we would expect only 12% of the labels to be qCSD in Figure 7B. Therefore, we show the relative frequency of each label in the full set of schemes in the respective panel legend.

Figure 7:

Distribution of labels within the top rank. (A) Number of conditions in which a particular estimation scheme was found in the top rank. (B–F) Percentage with which the given label appears in the top rank of each tested condition. Note that the number of schemes considered in the top rank varies in each condition and is shown in Table 3. Each block corresponds to a diameter, and each bar within the block represents a signal-to-noise ratio ([0 1 2 3 5 10] dB). The relative frequency of each label within the 531 estimation schemes is shown as a percentage in each panel legend. The relative frequency is important in order to judge whether a label appears more or less often than expected by chance.

Figure 7B shows the relative distribution of CSD method labels. We see that at small diameters, iCSD is represented less frequently than expected by chance, thus suggesting that iCSD is not an ideal choice in this situation. Moreover, iCSD appears to perform better at larger SNRs in contrast to kCSD. In fact, there is an interesting trade-off between kCSD and eCSD. At the smallest diameter, it is very clear that kCSD and eCSD evolve in opposite directions with increasing SNR. These variations are much less pronounced at larger diameters, where all methods occur roughly proportionally to their relative frequency in the data set. However, overall, the top-ranking scheme identity seems only weakly related to CSD label. This means that the type of CSD method chosen is not a major predictor of estimation accuracy. This is in clear contrast to the relative occurrence of regularization scheme shown in Figure 7C. First, we note that tSVD is almost absent from the top ranking. This is not particularly surprising because tSVD is a cruder method than the two others. Second, there is a clear trend for Tikhonov to perform better than dSVD, especially at small diameters and large SNRs.

This discrepancy is even more pronounced in the frequency distribution of the regularization parameter selection methods. Again, the top rankings contain almost exclusively L-curve and NCP. Moreover, it is obvious that NCP outperforms the L-curve except for small diameters and large SNRs. It turns out that the absence of GCV is mainly explained by its increased rate of outliers. When increasing the rejected trials to 25%, GCV is much more heavily represented (data not shown).

For the predictive power of the choice of prior, the situation is again split between small and large diameters (see Figure 7E). At large diameters, the methods appear more equal, and the priors are more evenly distributed. This is not the case for small diameters, where penalizing the model norm (either the model directly or simply the coefficients) seems to be a promising approach. In order to study the choice of priors in more depth, Figure 7F also shows the priors broken up according to the way they are applied. Again, the discrepancy between small and large diameters is clearly visible. While penalizing higher-order functions seems to be a promising approach for larger diameters regardless of how it is done (i.e., measuring the model norm or the norm of the coefficients), it is clearly preferable to penalize the coefficients at smaller diameters. Moreover, at smaller diameters, it seems best to implement a minimum norm prior directly. Finally, we also observe that penalizing the coefficient norm using the SVD directly (Coeff-[]) is preferable to its GSVD alternative (Coeff-0), despite the two implementing the same prior. The increased algorithmic complexity and numerical approximations of the GSVD appear to negatively influence the estimation accuracy.

Table 4:
Top 40 Estimation Schemes in Ranking over All Tested Conditions for Sum of Gaussians Profile.
Rank  CSD Method  Reg. Method  Selection  Prior  Prior Method  Top Ranks  Mean Error  Std. Error  Mean Rank  Std. Rank  % Diff.
1     rCSD        Tikhonov     NCP        -      Model-0       35         0.7581      0.0184      14.89      6.26       0.00
2     kCSD        Tikhonov     NCP        -      Model-0       35         0.7585      0.0182      16.51      10.98      0.06
3     iCSD        Tikhonov     NCP        []     Coeff-[]      35         0.7590      0.0183      14.49      8.25       0.12
4     rCSD        dSVD         NCP        0 1    Coeff+        31         0.7591      0.0177      15.77      7.84       0.14
5     rCSD        dSVD         NCP        0 2    Coeff+        31         0.7596      0.0177      15.00      7.02       0.19
6     eCSD        Tikhonov     NCP        -      Model-0       32         0.7603      0.0174      18.86      15.03      0.29
7     kCSD        dSVD         NCP        []     Coeff-[]      33         0.7605      0.0183      16.94      7.84       0.32
8     eCSD        Tikhonov     NCP        []     Coeff-[]      33         0.7605      0.0183      16.97      8.05       0.32
9     rCSD        dSVD         NCP        0 1 2  Coeff+        31         0.7609      0.0176      17.66      8.67       0.37
10    rCSD        dSVD         NCP        []     Coeff-[]      34         0.7615      0.0182      19.63      7.57       0.44
11    rCSD        dSVD         NCP        -      Coeff-0       32         0.7625      0.0181      21.40      15.95      0.58
12    qCSD        Tikhonov     NCP        0 2    qCSD+         30         0.7628      0.0174      19.17      6.66       0.62
13    qCSD        Tikhonov     NCP        0 1    qCSD+         31         0.7634      0.0168      20.57      9.91       0.71
14    rCSD        dSVD         NCP        -      Coeff+        28         0.7665      0.0181      26.40      12.00      1.10
15    eCSD        Tikhonov     NCP        -      Coeff-0       27         0.7683      0.0171      28.34      7.82       1.35
16    kCSD        dSVD         NCP        -      Coeff-0       30         0.7687      0.0170      31.26      9.07       1.39
17    qCSD        Tikhonov     NCP        0 1 2  qCSD+         25         0.7690      0.0169      29.97      8.97       1.44
18    iCSD        dSVD         NCP        -      Coeff+        23         0.7702      0.0171      29.63      30.12      1.60
19    kCSD        dSVD         NCP        0 2    Model+        22         0.7723      0.0171      31.97      28.97      1.88
20    kCSD        dSVD         NCP        -      Model+        22         0.7724      0.0171      32.97      29.06      1.88
21    eCSD        dSVD         NCP        0 2    Model+        23         0.7725      0.0166      32.54      22.23      1.90
22    kCSD        dSVD         NCP        0 1 2  Model+        22         0.7725      0.0171      33.89      28.68      1.90
23    kCSD        dSVD         NCP        0 2    Coeff+        25         0.7728      0.0173      36.31      11.43      1.94
24    eCSD        dSVD         NCP        0 1 2  Model+        21         0.7729      0.0169      32.57      27.19      1.95
25    eCSD        dSVD         NCP        -      Model+        23         0.7740      0.0169      33.57      23.54      2.10
26    kCSD        dSVD         NCP        0 1 2  Coeff+        24         0.7755      0.0177      41.34      7.41       2.30
27    eCSD        Tikhonov     NCP        0 1 2  Coeff+        26         0.7763      0.0170      44.46      10.86      2.40
28    eCSD        Tikhonov     NCP        0 2    Coeff+        27         0.7772      0.0174      45.51      13.81      2.52
29    eCSD        Tikhonov     NCP        0 1    Coeff+        25         0.7777      0.0171      46.29      10.98      2.58
30    kCSD        dSVD         NCP        0 1    Coeff+        23         0.7798      0.0171      48.06      16.31      2.86
31    kCSD        dSVD         NCP        -      Coeff+        24         0.7801      0.0179      47.06      11.72      2.90
32    eCSD        dSVD         NCP        0 1    Model+        20         0.7826      0.0162      48.97      32.81      3.24
33    rCSD        Tikhonov     NCP        []     Coeff-[]      23         0.7827      0.0176      55.40      16.43      3.25
34    eCSD        dSVD         NCP        -      Model+        19         0.7830      0.0162      48.97      32.97      3.28
35    kCSD        dSVD         NCP        0 1    Model+        20         0.7831      0.0161      49.40      32.25      3.30
36    kCSD        dSVD         NCP        -      Model+        20         0.7834      0.0161      50.71      31.31      3.34
37    rCSD        Tikhonov     NCP        -      Coeff-0       20         0.7835      0.0167      58.94      24.65      3.35
38    rCSD        dSVD         NCP        -      Coeff+        19         0.7842      0.0216      54.74      30.42      3.44
39    iCSD        dSVD         NCP        -      Model+        20         0.7845      0.0169      51.94      24.89      3.48
40    kCSD        Tikhonov     NCP        []     Coeff-[]      22         0.7858      0.0174      60.03      14.94      3.65

Note: The overall ranking is obtained by ordering the estimation schemes according to the mean, over the 35 tested conditions, of the mean of the sampling distribution of the mean error.

4.6  Comparison of CSD Estimation Schemes Over All Conditions for Sum of Gaussians Profile

Since it is notoriously difficult to know the true source diameter in an experimental setting, and one generally does not know the exact signal-to-noise ratio, it is useful to select a CSD estimation scheme that performs well across a large set of conditions. We have already observed in Figure 7A that some estimation schemes consistently appear at the top of the ranking, suggesting that it should be possible to create a meaningful ranking across conditions. Therefore, we created an overall ranking of the estimation schemes by ordering the schemes according to the mean, over the 35 conditions, of the mean of the sampling distribution of the mean error. The top 40 estimation schemes in the overall ranking are shown in Table 4. Looking at the characteristics of these top estimation schemes, it stands out that NCP clearly outscores the other regularization parameter selection methods, since every scheme in the list relies on it. Next, the preferred regularization methods slightly favor dSVD over Tikhonov (tSVD did not make it into the list at all): dSVD (24), Tikhonov (15). In terms of CSD methods, the results are distributed as follows: rCSD (10), kCSD (13), eCSD (11), qCSD (3), and iCSD (3). Although there is a clear preference for rCSD, kCSD, and eCSD, the fact that all methods are represented shows that none of them is definitely worse than the others.

To examine the results for the top estimation schemes in more detail, the distribution of the error (with outliers removed) over the whole data set is shown as a violin plot in Figure 8A. We see that their error distributions are very similar, which supports our choice of considering them as similarly good candidates. We also performed a similar analysis on the noise amplification as defined in equation 4.1. Again, the distributions were found to be very similar, with a mean noise amplification of 2 (median of 1.5), a substantial improvement over the mean noise amplification of 60 for naive CSD estimation (see Table 1). Finally, Figure 8B shows the distribution of the ratio of the estimated and the optimal regularization coefficient over the whole data set. A ratio of 1, shown by the green dotted line, represents optimal regularization. In contrast to Figure 8A, there is a much larger heterogeneity between estimation schemes. Although all estimation schemes tend to slightly oversmooth (ratio greater than 1), the oversmoothing tends to be more pronounced for estimation schemes employing dSVD as the regularization method. However, it is interesting to see that the increased spread of ratios does not affect the overall distribution of error or noise amplification.

Figure 8:

Distribution of error (A) and lambda ratio (B) for top 40 schemes in the final ranking. The violin plots are computed over the full data set but without the worst 10% of trials in each condition. The green line in B marks a ratio of 1, which represents optimal regularization.

To assess the robustness of the final ranking and make sure that the ranking is not simply fitting to the noise of this particular data set, we have repeated the whole procedure for another set of 1000 trials. We found that almost exactly the same estimation schemes were in the final ranking of both data sets (only 1 difference out of 160 schemes). Moreover, we then compared the similarity of the two final rankings using Spearman rank correlation. We found a correlation coefficient of 0.96 when the correlation was computed over the first 160 estimation schemes of the final ranking. Computing the rank correlation over only the top 40 schemes (those displayed in Table 4), the correlation coefficient was 0.99. Both correlation coefficients are statistically significant at a significance level of 0.001. Moreover, the average absolute rank difference between the two data sets is 1.56 ± 1.86 when computed over the top 160 schemes and 0.75 ± 1.13 when computed over the top 40 schemes. Thus, the final ranking was found to be highly reproducible for two realizations using the same Sum of Gaussians spatial CSD profile.
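
As a sketch, this reproducibility check amounts to one call to corr from the Statistics and Machine Learning Toolbox, with ranks1 and ranks2 illustrative vectors holding each scheme's rank in the two data sets, ordered by the first ranking:

    % Spearman rank correlation and average absolute rank difference over
    % the top 160 schemes of the final ranking.
    topN = 160;
    [rhoS, pS] = corr(ranks1(1:topN), ranks2(1:topN), 'type', 'Spearman');
    meanAbsDiff = mean(abs(ranks1(1:topN) - ranks2(1:topN)));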

4.7  Effect of Conductivity on the Stability of the Final Ranking

The CSD forward problem used here depends on four general parameters: (1) the lateral source profile (in our case, the uniform cylinder), (2) the lateral extent of the sources (i.e., the diameter of the cylinder), (3) the electrode positions, and (4) the medium conductivities. Assessing the effect of errors in these parameters on estimation accuracy is a very difficult task, and it is beyond the scope of our current work to do so systematically. However, due to the pronounced effect of a discontinuity in conductivity on the amplitude of the potential close to the interface, as demonstrated by Pettersen et al. (2006) and Ness et al. (2015), we decided to validate the final ranking assuming a mismatch in conductivity between the forward and inverse models. In particular, we simulated the potential with one value of the top-medium conductivity in the forward model but estimated the CSD using a different value; the conductivity of the bottom medium remained as before. We then repeated the previously presented analysis using the Sum of Gaussians profile and compared the final rankings. We found that the fidelity of the ranking was well preserved. In order to investigate the relationship between the two rankings, we display in Figure 9 the percentage of estimation schemes found in both rankings within the top N ranks (computed as the number of schemes common to both top-N sets divided by N). The “chance” curve shows the expected percentage of matches when twice randomly drawing N samples from a population of 531 elements, which follows a hypergeometric distribution. On average, approximately 80% of the estimation schemes can be found within the same top ranks of both rankings, confirming the similarity between the two rankings.
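
The following is a minimal sketch of this overlap measure, with rankA and rankB illustrative vectors of scheme indices ordered from best to worst in the two rankings:

    % Percentage of schemes common to both top-N sets, compared with the
    % hypergeometric chance level (expected overlap N^2 / P for P schemes).
    P = 531;
    overlapPct = zeros(1, P);
    for N = 1:P
        common = intersect(rankA(1:N), rankB(1:N));
        overlapPct(N) = 100 * numel(common) / N;
    end
    chancePct = 100 * (1:P) / P;             % E[overlap] / N = N / P
    plot(1:P, overlapPct, 'k', 1:P, chancePct, '--');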

Figure 9:

Validation of the ranking for a mismatch in conductivity between the forward and inverse model. The potential was simulated using one value of the top-medium conductivity in the forward model, while the CSD was estimated with a different value; the conductivity of the bottom medium was kept constant. The black curve shows the percentage of estimation schemes found in both rankings within the top N ranks (computed as the number of schemes common to both top-N sets divided by N). The “chance” curve shows the expected percentage of matches when twice drawing N random samples from a population of 531 elements; it follows a hypergeometric distribution.

4.8  Validation of Final Ranking with Different CSD Depth Profiles

The robustness of the final ranking described in the previous section for the Sum of Gaussians depth profile does not guarantee that the ranking will turn out the same for different spatial CSD profiles. Discrepancies can be expected because, at least in the case of expansion methods, the estimation accuracy will always depend on the similarity between the spatial profile and the basis functions used in the expansion. This was already pointed out by Potworowski et al. (2012) in their discussion about the factors affecting the optimal choice for the width of the gaussian expansion functions. Hence, it is important to assess how stable the ranking of estimation schemes is to a change in the CSD depth profile. We addressed this issue by repeating the previously presented analysis for seven additional CSD depth profiles: (1) small monopole close to the surface (see Figure 10A); (2) small monopole farther away from the surface (see Figure 10A); (3) large monopole (see Figure 10A); (4) large dipole (see Figure 10B); (5) small quadrupole (see Figure 10C); (6) large quadrupole (see Figure 10C); and (7) the CSD depth profile described in Glabska et al. (2014; see Figure 10B). The Glabska profile was added because it provides an interesting additional validation case since it differs from the gaussian profile on three important points that could affect estimation performance: it is unbalanced (the sum over the profile is positive, indicating that sources are stronger than sinks), the amplitudes in Glabska’s profile are larger than in the gaussian profile, and this profile shows two dipole-like structures instead of a single one.

Figure 10:

Pairwise comparison of the similarity in rankings between different CSD depth profiles. (A–C) The eight tested CSD depth profiles (the amplitude of the Glabska profile was divided by 10 for visualization purposes). (D–K) The percentage of estimation schemes that are found within the top N ranks in a pairwise manner for all CSD profiles. It is calculated in the same way as in Figure 9, with “chance” again referring to the expected percentage of matches if the rankings were random permutations of 531 elements.

We repeated the same analysis as presented for the Sum of Gaussians profile with these new CSD profiles and then performed a pairwise comparison of the overall rankings. When computed over all 531 schemes, the Spearman rank correlation is highly significant for all pairwise comparisons and remains so under Bonferroni correction for multiple comparisons. However, this is rather unsurprising, since we would not expect the rankings to be completely random. Hence, such a null hypothesis is too liberal to serve as a good measure of the usefulness of our rankings.

To provide a more focused comparison of the similarities in the top ranks, we show in Figures 10D to 10K the percentage of estimation schemes found within the top N ranks in a pairwise manner for all CSD profiles. It is calculated in the same way as for Figure 9, with “chance” again referring to the expected percentage of matches if the rankings were random permutations of 531 elements (i.e., the expected number of identical elements within two independent draws of N elements from a population of 531 elements). We found that there are only three situations where the percentage of matches lies below the chance level: between the small dipole and (1) the small monopolar source farther from the surface, (2) the large monopole, and (3) the large quadrupole. Hence, it appears that the presence of a source close to the conductivity jump affects the estimation accuracy of the different estimation schemes in a nonuniform manner. When looking at these discrepancies in more detail, we find that they originate mainly from the higher prevalence of the L-curve among these rankings. For example, the large quadrupole ranking has 37 L-curve estimation schemes within the top 40 ranks, while the small dipole has none. When the differentiation according to the regularization parameter selection method is removed, only the comparison between the small dipole and the large quadrupole fails to show a number of matches exceeding the chance level within the top 20 ranks (results not shown). It is, however, not obvious how this effect comes about. A qualitative analysis of the final rankings suggests that more extended sources with lower spatial frequencies are better captured with the L-curve method. However, this effect is not clearly differentiable from other factors, such as the overall extent of the sources or the presence of a source near the discontinuity in conductivity. Moreover, it is probable that these differences affect the estimation differently for small or large diameters or for different SNRs. Hence, although clearly of interest, such an in-depth analysis will have to be deferred to future work. Nevertheless, despite the discrepancies discussed above, Figures 10D to 10K clearly show that the final rankings for the tested profiles are not independent of each other but rather share common features.

It is important to note, however, that pairwise similarities do not guarantee that there are estimation schemes that perform well across all tested spatial profiles. To test whether it is possible to suggest optimal estimation schemes, we ranked the estimation schemes according to their average rank over the eight spatial profiles. The results for the top 20 estimation schemes are shown in Table 5. The general features already discussed for the final ranking of the Sum of Gaussians profile remain valid for this global ranking. First, expansion methods outperform the quadrature methods. Second, there is no clear preference between the Tikhonov and dSVD regularization methods. Finally, NCP is generally preferred over the L-curve for the selection of the regularization parameter, and GCV is completely absent from the list. However, in contrast to the Sum of Gaussians ranking, four of the top five estimation schemes in the global ranking use the L-curve. Although it is clearly not straightforward to predict the accuracy of a particular estimation scheme on a new spatial CSD profile, the consistency of the rankings across the tested subset of profiles suggests that the list in Table 5 contains the candidate schemes most likely to achieve good estimation accuracy on a new spatial CSD profile.

Table 5:
Top 20 Estimation Schemes in Ranking over All CSD Depth Profiles.
Rank  CSD Method  Reg. Method  Selection  Prior  Prior Method  Mean Rank  Mean Error  % Diff.  Ranks per Profile (Mono S1, Mono S2, Mono L, Dipole S, Dipole L, Glabska, Quad S, Quad L)
1     iCSD        Tikhonov     L-curve    0 2    Coeff+        7.4        0.6809      0.00     42
2     iCSD        Tikhonov     L-curve    0 1 2  Coeff+        8.4        0.6828      0.28     46
3     kCSD        dSVD         NCP        -      Coeff-0       24.0       0.7066      3.78     15 19 33 16 10 14 28 57
4     eCSD        Tikhonov     L-curve    0 1 2  Coeff+        25.4       0.7030      3.25     10 13 75 57 27
5     eCSD        Tikhonov     L-curve    0 1    Coeff+        27.3       0.7061      3.70     68 52 47 28
6     eCSD        Tikhonov     NCP        -      Coeff-0       28.6       0.7097      4.23     14 22 29 15 28 23 37 61
7     iCSD        Tikhonov     NCP        []     Coeff-[]      29.0       0.7063      3.73     20 37 88 30 40
8     eCSD        Tikhonov     NCP        []     Coeff-[]      29.1       0.7062      3.72     28 44 86 18 10 35
9     kCSD        dSVD         NCP        []     Coeff-[]      29.1       0.7062      3.72     29 43 87 17 11 34
10    rCSD        dSVD         NCP        0 2    Coeff+        31.3       0.7101      4.29     41 69 13 43 68
11    kCSD        dSVD         NCP        0 2    Coeff+        31.5       0.7109      4.42     22 20 30 23 44 50 58
12    eCSD        dSVD         NCP        0 2    Model+        31.9       0.7125      4.64     26 21 29 50 35 83
13    eCSD        dSVD         NCP        -      Model+        32.3       0.7125      4.64     34 25 27 28 54 76
14    kCSD        dSVD         NCP        0 1    Coeff+        33.6       0.7120      4.58     34 17 18 30 32 19 57 62
15    kCSD        Tikhonov     NCP        -      Model-0       34.0       0.7087      4.08     42 35 85 44 17 44
16    rCSD        Tikhonov     NCP        -      Model-0       34.8       0.7103      4.32     43 25 77 48 21 15 48
17    eCSD        Tikhonov     NCP        -      Model-0       34.8       0.7121      4.59     13 23 66 71 17 29 53
18    rCSD        dSVD         NCP        0 1 2  Coeff+        34.8       0.7122      4.61     47 70 15 48 72
19    eCSD        dSVD         NCP        0 1 2  Model+        35.8       0.7137      4.82     15 28 24 31 41 65 78
20    rCSD        dSVD         NCP        []     Coeff-[]      35.9       0.7102      4.31     27 50 105 10 34 10 43

Notes: This global ranking is obtained by ordering the estimation schemes according to their average rank over the eight tested CSD depth profiles. The ranks for each of the depth profiles were calculated in the same way as in Table 4. They are presented in the eight columns to the right (Dipole S = Sum of Gaussians profile).

5  Discussion

In this letter, we have introduced a unified framework for zero- and higher-order regularization of 1D CSD estimation problems. In particular, we have focused on showing how the existing methods (iCSD and kCSD) and the newly presented ones (rCSD, qCSD, and eCSD, a variation of kCSD) can be understood as special cases of two approaches for the discretization of linear inverse problems: quadrature and basis function expansion. Moreover, the presented framework greatly facilitates dealing with recording noise, a ubiquitous problem in solving inverse problems, by incorporating zeroth- and higher-order regularization methods together with multiple regularization parameter selection methods. To show the utility of the presented framework, we have compared the performance of the different estimation schemes under various noise conditions and source diameters. We show that no single estimation scheme outperforms all others under all tested conditions. Nevertheless, we found that ranking schemes according to the average error over all tested conditions results in a reproducible and meaningful ranking, in which the top schemes perform well in the majority of tested conditions. We have further validated this ranking procedure by repeating it over a set of spatial CSD profiles; the general features of the ranking are preserved across this comparison as well. However, a qualitative analysis of the distribution of the ranks across the eight spatial profiles reveals a structure that suggests the presence of unidentified features that influence estimation accuracy. The identification of source characteristics (e.g., spatial frequency, extent, and/or position of sources) that could be better resolved with certain estimation schemes goes beyond the scope of our current work. In fact, it is easily conceivable that these features could affect the estimation differently for small or large diameters (or different SNRs), which would require an additional level of detail in the analysis.

Our study also presents a set of additional, smaller contributions: it shows the detrimental effect of noise on the spatial resolution of the solution, it facilitates the use of electrodes from two media with different conductivities in the estimation process, and it provides a thorough introduction to the difficulties of solving inverse problems in the hope of making the problem more easily accessible to experimentalists. Finally, our work makes available a ready-to-use open source Matlab toolbox containing all the CSD estimation methods discussed, including existing methods and the two novel methods we introduce, along with all the regularization methods that we have tested.

An unexpected finding of this work is the overall performance of the NCP regularization parameter selection method. It is interesting to ponder whether this dominance is likely to translate to the experimental setting. Unfortunately, as discussed by Hansen (2010), there is no simple method to predict which parameter selection method is optimal for a given problem because each inverse problem has its own characteristics and error model. By construction, NCP will perform best under white noise. This might explain its success here but could affect its performance in an experimental setting where noise characteristics might be nonwhite. Although the derivation of GCV also relies on noise being white, it was almost totally absent from the top ranking, which is especially surprising given its successful application in Potworowski et al. (2012). A potential explanation for this discrepancy can be found in the higher level of outliers with GCV. Occasional failures of GCV are a well-understood phenomenon and tend to occur whenever the minimum is located in the flat part of the GCV function (Hansen, 2010). However, in the simulated situation used here, the failure rate appears to be further increased by the occasional existence of two local minima (see Figure 5C, for example). We have observed a similar situation with the L-curve in certain trials where the L-curve shows two locations of positive curvature. These situations can technically be avoided by setting bounds on the regularization parameter. Although this is quite easy to do in practice, we have not pursued this avenue because the bounds are problem dependent, thus making this approach impractical in a large-scale comparison like the one here. It has also been suggested that GCV can potentially be improved further by adding a weighting factor to the GCV function (Hansen, 2010). Finally, since the L-curve is the only regularization parameter selection method that is not based on noise being white, it has been advocated for applications where no prior information about noise is available (Vogel, 2002).

A similar problem-dependent optimization of parameters could also be pursued for eCSD, kCSD, and qCSD. In both eCSD and kCSD, the width and number of basis functions could be adapted to the problem at hand. Here we chose the basis width (three-sigma point) to be 1.5 times the interelectrode distance, following the guidelines of Potworowski et al. (2012), where a factor of 1 to 2 was suggested. It is interesting to note that eCSD and kCSD were found to perform equally well, suggesting that the kernel trick does not provide any advantages in the 1D case. However, this should not be interpreted as being generally valid. Especially in higher dimensions, it is quite probable that it provides additional stability to the estimation. The only reason eCSD was included here is to assess the similarity between the expansion and kernel methods in the 1D setting. It remains to be shown whether this similarity also applies in an experimental setting.
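
The following is a minimal sketch of this basis construction, with zElec the electrode depths, zc the basis centers, and z an evaluation grid (all illustrative names):

    % Gaussian basis with its three-sigma point at 1.5 times the
    % interelectrode distance, following Potworowski et al. (2012).
    h = mean(diff(zElec));                   % interelectrode distance
    sigma = 1.5 * h / 3;                     % three-sigma width = 1.5 * h
    z = linspace(min(zElec), max(zElec), 200)';
    B = zeros(numel(z), numel(zc));          % basis matrix, one column per center
    for j = 1:numel(zc)
        B(:, j) = exp(-(z - zc(j)).^2 / (2 * sigma^2));
    end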

A central question arising in any computational study based on simulations is the applicability of the results in an experimental setting. As we have hinted at on various occasions, CSD estimation contains a large amount of built-in assumptions with the potential to dramatically affect source reconstruction accuracy. Therefore, in order for CSD estimation to become a more streamlined technology to be applied to the analysis of intracranial electrophysiology data, the theoretical assumptions incorporated into the estimation need to be carefully addressed. In the following sections we critically discuss the major factors that could affect the applicability of the proposed methods to the experimental setting.

5.1  Accuracy of the Volume Conductor Model

The volume conductor model employed here relies on the tissue being purely resistive and thus justifying the use of Ohm’s law for explaining the relation between the extracellular electric field and neuronal currents. The literature contains both experimental results validating this assumption (Logothetis, Kayser, & Oeltermann, 2007) and others reporting significant non-ohmic effects (Gabriel, Gabriel, & Corthout, 1996; Gabriel, Lau, & Gabriel, 1996a, 1996b). The interpretation of these contradictory findings is complicated by the fact that a newer study from Gabriel, Peyman, and Grant (2009) critically analyzes the usefulness and limitation of their previous measurements. In particular, they point out that their previous work is based on excised tissue instead of in vivo measurements. In addition, they mention that the frequency region below 1 MHz is particularly error prone, and hence the results provide best estimates of only the conductivity values. On the other hand, recent experimental findings from Riera et al. (2012) find potential problems with the resistive assumption based on an unbalanced ratio of current sinks and sources during evoked neuronal activity. Similar findings were also reported by a study of the brain tissue’s effect on the propagation of neurostimulation fields (Wagner et al., 2014). The authors conclude that “living tissue carries currents through both dipole and ionic mechanisms, in a frequency dependent manner.” Although Wagner et al.’s results found ohmic mechanisms to be dominant, permittivities were deemed to be of sufficient magnitude to support significant displacement currents. Finally, based on a theoretical study, Bedard and Destexhe (2009) proposed that ionic diffusion could play a significant role in the reequilibration of extracellular potential following neural activation. This is corroborated by recent experimental evidence that found non-ohmic properties of the extracellular medium around neurons that could possibly be related to ionic diffusion (Gomes et al., 2016). In conclusion, the electric properties of extracellular tissue (e.g., ohmic, capacitive, polarizable or diffusive) are currently under heavy debate and will likely remain debated in the near future. However, it is clear that a modification to the resistive tissue assumption would require a reevaluation of all source estimation methods. In particular, it would entail that the spread of potential would depend on both the spatial and temporal components of neuronal currents instead of only the spatial component as assumed in a purely resistive medium.

A second assumption of the current volume conductor model is that electrical conductivity is homogeneous and isotropic within each medium. This assumption has also been challenged by Goto et al. (2010), who found that there is likely a noticeable difference in conductivity between the different cortical layers. However, in contrast to the nonapplicability of the resistive assumption, a nonhomogeneous conductivity as proposed by Goto et al. could be relatively easily incorporated into the current framework because it only involves changing the equations describing the induced potential (see equation 2.1). The inaccuracies arising from substituting a layered conductivity by a homogeneous conductivity profile are hard to assess since no data are available on the conductivity in each layer. However, even with current knowledge, it is clear that the assumption of two semi-infinite media represents a significant simplification of the actual brain geometry. In fact, simply the presence of the white matter bordering the cortex clearly forms another discontinuity in conductivity analogous to the discontinuity between cortex and the medium above it. To address a similar issue, Ness et al. (2015) implemented the method of images in a three-layer medium. However, in contrast to our situation, their problem required only modeling sources and potentials within the middle layer. Hence, we have refrained from adopting this approach in this work because the method would have to be extended in order to allow the use of electrodes from all three media. To our knowledge, such an extension of the infinite series of image charges has not been presented so far. Alternatively, several papers have discussed an extension to multilayered media based on the Fourier transform (Barrera, Guzman, & Balaguer, 1978). However, the lack of closed-form solutions makes these approaches unsuitable for our purposes since both the forward and inverse problem calculation would require longer computation time. Yet it is unquestionable that even the approximation of the medium by planar surfaces represents a significant simplification of brain geometry. Addressing this would require employing more elaborate models for medium parameterization such as models based on finite element modeling (Ness et al., 2015) or a spherical volume conductor as proposed by Goto et al. (2010). Although such issues are certainly important for CSD estimation, the comparison of volume conductor models goes beyond the scope of our current work. Hence, we have used the most commonly employed medium parameterization.

5.2  Accuracy of Forward Model Assumptions

To our knowledge, the applicability of the cylindrical source assumption has never been experimentally validated, most likely because such a validation is quite difficult: it requires a way to manipulate the source diameter while at the same time having good knowledge of the depth variation of current sources. Nevertheless, the choice of source diameter is a major challenge in the application of the current CSD methods and can potentially affect different estimation schemes in different ways. Although we have evaluated the accuracy of estimation under various levels of noise, we have not attempted to systematically assess the robustness of the presented framework to errors in the forward model parameters. However, because of the pronounced effect of a discontinuity in conductivity on the amplitude of the potential close to the interface, we have performed a single-point validation of our ranking approach assuming a mismatch in conductivity between the forward and inverse model. In particular, we simulated the potential with one value of the top-medium conductivity in the forward model but estimated the CSD using a different value, while the conductivity of the bottom medium remained unchanged. A comparison of the final rankings for the Sum of Gaussians profile showed that the fidelity of the ranking was well preserved between the matched and mismatched conductivity values. Although encouraging, this should not be understood as ruling out an important effect of conductivity; rather, it gives us some confidence that the sensitivity to its choice is manageable. Moreover, the validation presented here relies on a single set of values, whereas errors in the forward model can take various forms (e.g., error in electrode placement, error in conductivity, inhomogeneity of conductivity, error in source diameter, noncylindrical source distribution, and/or combinations thereof). Therefore, our current analysis does not alleviate the need for a more thorough analysis of the errors introduced by mismatches between the forward and inverse model. To our knowledge, a partial assessment of such errors has been performed only on noiseless data so far (Pettersen et al., 2006) but would merit further investigation in order to facilitate the application of these methods in an experimental setting.

Acknowledgments

We thank Gabriel Martine-de la Boissonière for helpful discussions in the planning and elaboration of this work. We also thank Roland Pilgram for the helpful comments and discussions on the manuscript. This work was supported by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC Discovery grants RGPIN 375457-09 and RGPIN-2015-05103), the Canadian Institute of Health Research (MOP-102599), and the Human Frontier Science Program (RGY0080/2008) awarded to AS.

References

Aster, R. C., Thurber, C. H., & Borchers, B. (2005). Parameter estimation and inverse problems. Amsterdam: Elsevier Academic Press.

Barrera, R. G., Guzman, O., & Balaguer, B. (1978). Point-charge in a 3-dielectric medium with planar interfaces. American Journal of Physics, 46(11), 1172–1179.

Bedard, C., & Destexhe, A. (2009). Macroscopic models of local field potentials and the apparent 1/f noise in brain activity. Biophys. J., 96(7), 2589–2603.

Bedard, C., & Destexhe, A. (2011). Generalized theory for current-source-density analysis in brain tissue. Phys. Rev. E Stat. Nonlin. Soft Matter Phys., 84(4 pt. 1), 041909.

Brette, R., & Destexhe, A. (2012). Handbook of neural activity measurement. Cambridge: Cambridge University Press.

Einevoll, G. T., Kayser, C., Logothetis, N. K., & Panzeri, S. (2013). Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat. Rev. Neurosci., 14(11), 770–785.

Freeman, J. A., & Nicholson, C. (1975). Experimental optimization of current source-density technique for anuran cerebellum. J. Neurophysiol., 38(2), 369–382.

Gabriel, C., Gabriel, S., & Corthout, E. (1996). The dielectric properties of biological tissues: I. Literature survey. Phys. Med. Biol., 41(11), 2231–2249.

Gabriel, C., Peyman, A., & Grant, E. H. (2009). Electrical conductivity of tissue at frequencies below 1 MHz. Phys. Med. Biol., 54(16), 4863–4878.

Gabriel, S., Lau, R. W., & Gabriel, C. (1996a). The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz. Phys. Med. Biol., 41(11), 2251–2269.

Gabriel, S., Lau, R. W., & Gabriel, C. (1996b). The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues. Phys. Med. Biol., 41(11), 2271–2293.

Glabska, H., Potworowski, J., Leski, S., & Wojcik, D. K. (2014). Independent components of neural activity carry information on individual populations. PLoS One, 9(8), e105071.

Gomes, J. M., Bedard, C., Valtcheva, S., Nelson, M., Khokhlova, Pouget, P., … Destexhe, A. (2016). Intracellular impedance measurements reveal non-ohmic properties of the extracellular medium around neurons. Biophys. J., 110(1), 234–246.

Goto, T., Hatanaka, R., Ogawa, T., Sumiyoshi, A., Riera, J., & Kawashima, R. (2010). An evaluation of the conductivity profile in the somatosensory barrel cortex of Wistar rats. J. Neurophysiol., 104(6), 3388–3412.

Hansen, P. C. (2007). Regularization tools version 4.0 for Matlab 7.3. Numerical Algorithms, 46(2), 189–194.

Hansen, P. C. (2010). Discrete inverse problems: Insight and algorithms. Philadelphia, PA: SIAM.

Jackson, J. D. (1999). Classical electrodynamics. New York: Wiley.

Leski, S., Pettersen, K. H., Tunstall, B., Einevoll, G. T., Gigg, J., & Wojcik, D. K. (2011). Inverse current source density method in two dimensions: Inferring neural activation from multielectrode recordings. Neuroinformatics, 9(4), 401–425.

Leski, S., Wojcik, D. K., Tereszczuk, Swiejkowski, Kublik, & Wrobel, A. (2007). Inverse current-source density method in 3D: Reconstruction fidelity, boundary effects, and influence of distant sources. Neuroinformatics, 5(4), 207–222.

Logothetis, N. K., Kayser, C., & Oeltermann, A. (2007). In vivo measurement of cortical impedance spectrum in monkeys: Implications for signal propagation. Neuron, 55(5), 809–823.

Menke, W. (1989). Geophysical data analysis: Discrete inverse theory. San Diego: Academic Press.

Mitzdorf, U. (1985). Current source-density method and application in cat cerebral cortex: Investigation of evoked potentials and EEG phenomena. Physiol. Rev., 65(1), 37–100.

Ness, T. V., Chintaluri, C., Potworowski, J., Leski, S., Glabska, H., Wojcik, D. K., & Einevoll, G. T. (2015). Modelling and analysis of electrical potentials recorded in microelectrode arrays (MEAs). Neuroinformatics, 13(4), 403–426.

Nicholson, C., & Freeman, J. A. (1975). Theory of current source-density analysis and determination of conductivity tensor for anuran cerebellum. J. Neurophysiol., 38(2), 356–368.

Nunez, P. L., & Srinivasan, R. (2006). Electric fields of the brain: The neurophysics of EEG. New York: Oxford University Press.

Olver, P. J., & Shakiban, C. (2006). Applied linear algebra. Upper Saddle River, NJ: Prentice Hall.

Parker, R. L. (1977). Understanding inverse theory. Annual Review of Earth and Planetary Sciences, 5, 35–64.

Parker, R. L. (1994). Geophysical inverse theory. Princeton, NJ: Princeton University Press.

Pettersen, K. H., Devor, A., Ulbert, I., Dale, A. M., & Einevoll, G. T. (2006). Current-source density estimation based on inversion of electrostatic forward solution: Effects of finite extent of neuronal activity and conductivity discontinuities. J. Neurosci. Methods, 154(1–2), 116–133.

Plonsey, R., & Barr, R. C. (2007). Bioelectricity: A quantitative approach. New York: Springer.

Potworowski, J., Jakuczun, W., Leski, S., & Wojcik, D. K. (2012). Kernel current source density method. Neural Comput., 24(2), 541–575.

Riera, J. J., Goto, T., & Kawashima, R. (2014). A methodology for fast assessments to the electrical activity of barrel fields in vivo: From population inputs to single unit outputs.
.
Front. Neural Circuits
,
8
,
4
.
Riera
,
J. J.
,
Ogawa
,
T.
,
Goto
,
T.
,
Sumiyoshi
,
A.
,
Nonaka
,
H.
,
Evans
,
A.
, …
Kawashima
,
R.
(
2012
).
Pitfalls in the dipolar model for the neocortical EEG sources
.
J. Neurophysiol.
,
108
(
4
),
956
975
.
Schroeder
,
C. E.
,
Mehta
,
A. D.
, &
Givre
,
S. J.
(
1998
).
A spatiotemporal profile of visual system activation revealed by current source density analysis in the awake macaque
.
Cereb. Cortex
,
8
(
7
),
575
592
.
Tenke
,
C. E.
, &
Kayser
,
J.
(
2012
).
Generator localization by current source density (CSD): Implications of volume conduction and field closure at intracranial and scalp resolutions
.
Clin. Neurophysiol.
,
123
(
12
),
2328
2345
.
Tenke
,
C. E.
,
Schroeder
,
C. E.
,
Arezzo
,
J. C.
, &
Vaughan
,
H. G.
Jr.
(
1993
).
Interpretation of high-resolution current source density profiles: A simulation of sublaminar contributions to the visual evoked potential
.
Exp. Brain. Res.
,
94
(
2
),
183
192
.
Vaknin
,
G.
,
DiScenna
,
P. G.
, &
Teyler
,
T. J.
(
1988
).
A method for calculating current source density (CSD) analysis without resorting to recording sites outside the sampling volume
.
J. Neurosci. Methods
,
24
(
2
),
131
135
.
Vogel
,
C. R.
(
2002
).
Computational methods for inverse problems
.
Philadelphia, PA
:
SIAM
.
Wagner
,
T.
,
Eden
,
U.
,
Rushmore
,
J.
,
Russo
,
C. J.
,
Dipietro
,
L.
,
Fregni
,
F.
, … 
Valero-Cabre
,
A.
(
2014
).
Impact of brain tissue filtering on neurostimulation fields: A modeling study
.
Neuroimage
,
85
(pt 3),
1048
1057
.
Wagner
,
T. A.
,
Zahn
,
M.
,
Grodzinsky
,
A. J.
, &
Pascual-Leone
,
A.
(
2004
).
Three-dimensional head model simulation of transcranial magnetic stimulation
.
IEEE Transactions on Biomedical Engineering
,
51
,
1586
1594
.
Wing
,
G. M.
, &
Zahrt
,
J. D.
(
1991
).
A primer on integral equations of the first kind: The problem of deconvolution and unfolding
.
Philadelphia, PA
:
SIAM
.