Abstract

Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

1.  Introduction

Extracellular recordings of electric potential have great significance in the studies of neural activity in vivo. In the past few years, we have witnessed rapid development of technology for large-scale electrical recordings. Various types of multielectrodes were devised to simultaneously record extracellular potentials from multiple spatial locations (Normann, Maynard, Rousche, & Warren, 1999; Csicsvari et al., 2003; Barthó et al., 2004; Buzsáki, 2004; Sher et al., 2007; Imfeld et al., 2008; Frey, Egert, Heer, Hafizovic, & Hierlemann, 2009; Ward, Rajdev, Ellison, & Irazoqui, 2009; Charvet et al., 2010). The low-frequency part of these recordings, the local field potential (LFP), typically reflects the dendritic processing of synaptic inputs (Nunez & Srinivasan, 2006; Einevoll et al., 2007; Pettersen, Hagen, & Einevoll, 2008; Lindén, Pettersen, & Einevoll, 2010). Direct interpretation of LFP is difficult as it is a nonlocal measure of the neural activity: it may have contributions from neurons located more than 1 millimeter away from the electrode (Kreiman et al., 2006; Liu & Newsome, 2006; Berens, Keliris, Ecker, Logothetis, & Tolias, 2008; Katzner et al., 2009; Xing, Yeh, & Shapley, 2009) or even a few millimeters (Hunt, Falinska, Leski, Wójcik, & Kasicki, 2010). Therefore, if possible, it is convenient to estimate the CSD, the volume density of net transmembrane currents generating the LFP (Lorente de No, 1947; Pitts, 1952; Plonsey, 1969; Freeman & Nicholson, 1975; Nicholson & Freeman, 1975; Mitzdorf, 1985). CSD directly relates to the local neural activity, and CSD analysis is a convenient tool for analysis of LFP recorded from multielectrodes (Haberly & Shepherd, 1973; Mitzdorf, 1985; Schroeder, Tenke, & Givre, 1992; Ylinen et al., 1995; Lakatos et al., 2005; Lipton, Fu, Branch, & Schroeder, 2006; Rajkai et al., 2008; de Solages et al., 2008).

Since CSD in a homogeneous and isotropic tissue is given by the Laplacian of the potentials, originally it was estimated by a discrete differentiation scheme from the potentials measured on a regular grid (Freeman & Nicholson, 1975; Nicholson & Freeman, 1975; Mitzdorf, 1985). In the past few years, a new method for CSD estimation has been developed: the inverse CSD (iCSD) method (Pettersen, Devor, Ulbert, Dale, & Einevoll, 2006; Łȩski et al., 2007; Wójcik & Łȩski, 2010; Łȩski et al., 2011). The main idea behind iCSD is to assume a specific parametric form of the CSD generating the measured potentials (e.g., a spline interpolated between the grid nodes) and to calculate the LFP in a forward-modeling scheme; the values of the CSD parameters (e.g., the CSD values at the nodes) are then obtained by matching the computed values to the experimental data. The iCSD framework requires an assumption of a specific geometry of contacts and hence new calculations for each distribution of electrodes. So far, all the propositions have assumed recordings on regular, Cartesian grids.

Here we introduce a new, nonparametric method for CSD estimation. The kernel CSD method (kCSD) uses some basic facts from the theory of reproducing kernel Hilbert spaces used in machine learning (Aronszajn, 1950; Vapnik, 1998; Schölkopf & Smola, 2002; Shawe-Taylor & Cristianini, 2004). This method does not require the user to specify a restricted, parameterized form of the admissible CSD distributions. Instead, one specifies an arbitrarily broad family of possible distributions, and the uniqueness of the solution is guaranteed by the minimum-norm requirement built into the method. The assumption of regular electrode arrangement is not necessary; kCSD can be applied to recordings from electrodes distributed at any position on one-, two-, and three-dimensional sets with equal ease. Moreover, we show that kCSD is a general nonparametric framework for CSD estimation, including all the previous variants of iCSD methods as special cases.

The problem of CSD reconstruction is similar in spirit to the reconstruction of sources in EEG (Gulrajani, 1998; He & Lian, 2005; Phillips, Mattout, Rugg, Maquet, & Friston, 2005; Nunez & Srinivasan, 2006) and ECoG (Freeman, 1980; Zhang, van Drongelen, Kohrman, & He, 2008), and different source reconstruction methods, including those using rich bases, were studied in this context (see the review in He & Lian, 2005). However, since extracellular recordings of LFP are taken much closer to the sources than in the case of EEG, one builds different models (typically dipolar sources rather than monopoles; see the discussion in Łȩski et al., 2011), so it is not obvious a priori if approaches useful or required in EEG analysis would work just as well in CSD reconstruction or the other way around. In this letter, we present one candidate method for CSD reconstruction and show its utility on a variety of model data.

The letter is organized as follows. In section 2, we introduce the basic framework of the method using reproducing kernel Hilbert spaces (RKHS; Aronszajn, 1950; Vapnik, 1998; Schölkopf & Smola, 2002; Shawe-Taylor & Cristianini, 2004) and show an efficient regression algorithm applicable in RKHS. However, the CSD we want to estimate and the potentials we measure are different physical quantities, which normally forces us to solve a linear operator equation. By introducing a cross-kernel between the spaces of potentials and sources, we can easily obtain an estimate of the sources. We apply this technique to estimate the most plausible CSD consistent with the measured potentials. We show how to do this in cases where the measurements were taken on sets of different dimensionality (e.g., for laminar multielectrodes, for multishaft multielectrodes, and in the general three-dimensional case). We also show that the previously introduced iCSD methods are special cases of the kCSD introduced here.

To test the viability of the proposed scheme, we performed a number of tests on model data where we control the sources to be recovered from potentials. These tests are presented in section 3. We first consider the case of regular grids, as these are the only ones that have been treated by the methods available so far, to show how the new method compares with readily available alternatives. Then we consider the case of arbitrary distributions of contacts as the proposed methods can easily treat arbitrary geometry of the electrode setup.

Since every measurement is subject to noise, it is important to study its effect on the method. Having established the soundness of the basic approach, in section 4 we study ridge regression as a possible means of avoiding overfitting and removing near-singularities that might arise, for instance, in atypical setups. The properties of the proposed method, further directions of development, and the significance of the kernel approach are discussed in the final section.

2.  Kernel Current Source Density Method

Consider the following problem. N electrodes are placed in the brain at locations $x_1, \ldots, x_N$, and we measure the extracellular potential V at these N locations: $(V_1, \ldots, V_N)$. The extracellular potential we sample is generated by transmembrane currents whose density C (CSD) is what we would like to estimate. The connection between V and C is given by the Poisson equation,
$$\nabla \cdot \left( \sigma \nabla V \right) = -C. \qquad (2.1)$$
Let us approximate the CSD generated by the concerted activity of the myriad neurons as a sum of M localized sources $\tilde b_i(x)$,
$$C(x) = \sum_{i=1}^{M} a_i \tilde b_i(x). \qquad (2.2)$$
Conceptually, we consider each $\tilde b_i$ as a model of the activity of a small population of neurons. In practice, we consider sources of constant density within a ball of radius R and 0 elsewhere,
$$\tilde b_i(x) = \begin{cases} 1, & \|x - y_i\| \leq R, \\ 0, & \text{otherwise}, \end{cases} \qquad (2.3)$$
where $y_i$ denotes the center of the $i$-th source,
or gaussians,
$$\tilde b_i(x) = \exp\!\left( -\frac{\|x - y_i\|^{2}}{2R^{2}} \right). \qquad (2.4)$$
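For concreteness, the two template source shapes can be sketched in numpy; the unit-amplitude normalization and the function names are our choices for illustration, not necessarily those used in the original implementation:

```python
import numpy as np

def step_source(x, center, R):
    """Constant-density ball of radius R (cf. eq. 2.3); unit amplitude inside."""
    d = np.linalg.norm(np.atleast_2d(x) - np.asarray(center), axis=-1)
    return (d <= R).astype(float)

def gauss_source(x, center, R):
    """Gaussian bump of width R (cf. eq. 2.4); unit peak amplitude."""
    d2 = np.sum((np.atleast_2d(x) - np.asarray(center)) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * R ** 2))
```

Either template, translated to a grid of centers, yields the full basis of sources.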
Each of the sources generates a potential in the whole space consistent with the Poisson equation, 2.1. In what follows, we assume a homogeneous and uniform conductivity value σ and no boundary conditions (we return to this issue in the final section). Then the potential $b_i$ generated by the source $\tilde b_i$ is obtained by applying a linear operator $\mathcal A$:
$$b_i(x) = (\mathcal A \tilde b_i)(x) = \frac{1}{4\pi\sigma} \int \frac{\tilde b_i(x')}{\|x - x'\|}\, dx'. \qquad (2.5)$$
The relation between the sources and potentials is different in lower dimensionality and for other models of propagation, boundary conditions, and so on, but it does not affect the general formulation of the method. In every case we can introduce a linear operator connecting sources and potentials by
$$\mathcal A : \tilde{\mathcal F} \to \mathcal F, \qquad (2.6)$$
with $b_i = \mathcal A \tilde b_i$, where the space of sources is
$$\tilde{\mathcal F} = \left\{ \tilde f = \sum_{i=1}^{M} a_i \tilde b_i : a_i \in \mathbb R \right\}, \qquad (2.7)$$
and the space of potentials is
$$\mathcal F = \left\{ f = \sum_{i=1}^{M} a_i b_i : a_i \in \mathbb R \right\}. \qquad (2.8)$$
We assume that the $(b_i)$ are linearly independent, so that $(\tilde b_i)$ and $(b_i)$ constitute bases of the linear spaces $\tilde{\mathcal F}$ and $\mathcal F$. We consider $\mathcal A$ in the one- and two-dimensional cases below.
Let us introduce a kernel function (a kernel, for short) by the formula
$$K(x, x') = \sum_{i=1}^{M} b_i(x)\, b_i(x'). \qquad (2.9)$$
With this kernel, one can show (see, e.g., section 1.3, theorem A in Aronszajn, 1950, and theorem 3.11 in Shawe-Taylor & Cristianini, 2004) that the space of potentials $\mathcal F$ is the feature space of K and as such is a reproducing kernel Hilbert space (RKHS). That is, one can show that
$$\mathcal F = \left\{ f(x) = \sum_{i=1}^{l} \alpha_i K(x_i, x) : l \in \mathbb N,\ \alpha_i \in \mathbb R \right\}, \qquad (2.10)$$
and it is a Hilbert space with the inner product of functions $f(x) = \sum_{i=1}^{l} \alpha_i K(x_i, x)$, $g(x) = \sum_{j=1}^{m} \beta_j K(z_j, x)$ given by
$$\langle f, g \rangle = \sum_{i=1}^{l} \sum_{j=1}^{m} \alpha_i \beta_j K(x_i, z_j). \qquad (2.11)$$
Note that we now have two representations of every function in $\mathcal F$: as a sum of kernels or as a sum of basis elements,
$$f(x) = \sum_{j=1}^{l} \alpha_j K(x_j, x) = \sum_{i=1}^{M} a_i b_i(x),$$
where $a_i = \sum_{j=1}^{l} \alpha_j b_i(x_j)$.
Using the inner product, we can define the norm in $\mathcal F$ by $\|f\|^{2} = \langle f, f \rangle$. It is easy to see that in the two representations, we have
$$\|f\|^{2} = \sum_{i=1}^{l} \sum_{j=1}^{l} \alpha_i \alpha_j K(x_i, x_j) = \sum_{i=1}^{M} a_i^{2}.$$
We use $\|\cdot\|_{\mathcal F}$ to induce a norm in $\tilde{\mathcal F}$ by
$$\|\tilde f\|_{\tilde{\mathcal F}} = \|\mathcal A \tilde f\|_{\mathcal F}. \qquad (2.12)$$
Our goal is to find the CSD C(x) consistent with the measured potentials.1 We first estimate the potential from the data. Because there are many more sources than measurements, there is an infinite number of solutions. Consider potentials $V(x) = \sum_{i=1}^{M} a_i b_i(x)$ consistent with the measurements, that is, $V(x_k) = \sum_{i=1}^{M} a_i b_i(x_k) = V_k$ for all k. To find the potential with minimum norm $\|V\|^{2} = \sum_{i=1}^{M} a_i^{2}$ satisfying these constraints, the derivative $\partial \|V\|^{2} / \partial a_i$ must be a linear combination of the derivatives of the constraints with respect to $a_i$. That is, we have
$$a_i = \sum_{k=1}^{N} \beta_k b_i(x_k),$$
and the potential we seek takes the form
$$V(x) = \sum_{k=1}^{N} \beta_k K(x_k, x). \qquad (2.13)$$
Solving the constraints, we get the parameters $\beta_1, \ldots, \beta_N$ from
$$\sum_{j=1}^{N} K(x_k, x_j)\, \beta_j = V_k, \qquad k = 1, \ldots, N, \qquad (2.14)$$
which can be written in more compact notation as
$$\boldsymbol\beta = \mathbf K^{-1} \mathbf V,$$
with an obvious definition of terms.

We have also assumed here that the measurements are sufficiently independent (informative) that K is of full rank and so can be inverted. In all the cases we considered, K was invertible, and we expect this to be true for all experimentally accessible electrode setups. If this is not the case, for instance, if two contacts are so close that the inverse becomes unstable, one can use one of many strategies to stabilize the inversion. In particular, the regularization that we discuss in section 4 also overcomes the problem of a possible near-singularity of K.
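As a concrete sketch of such a stabilized inversion (anticipating the ridge regression discussed in section 4), one may damp the kernel matrix before solving; the function below is a minimal illustration, with the damping constant lam as a free parameter of our choosing:

```python
import numpy as np

def solve_beta(K, V, lam=0.0):
    """Solve K @ beta = V for the kernel coefficients; lam > 0 replaces K
    by K + lam*I, which stabilizes the inversion when K is nearly singular."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), V)
```

For lam = 0 this reduces to the plain solution beta = K^{-1} V.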

Having $V^*$ given by
$$V^*(x) = \sum_{i=1}^{N} \beta_i K(x_i, x) = \sum_{j=1}^{M} a_j b_j(x), \qquad (2.15)$$
where $a_j = \sum_{i=1}^{N} \beta_i b_j(x_i)$, we know that there exists exactly one $C^* \in \tilde{\mathcal F}$ generating $V^*$, and it is given by
$$C^*(x) = \sum_{j=1}^{M} a_j \tilde b_j(x). \qquad (2.16)$$
Thus, we see that it is convenient to introduce the cross-kernel function,
$$\tilde K(x, x') = \sum_{i=1}^{M} b_i(x)\, \tilde b_i(x'). \qquad (2.17)$$
If we define the vector function
$$\tilde{\mathbf K}(x) = \left( \tilde K(x_1, x), \ldots, \tilde K(x_N, x) \right)^{T},$$
then
$$C^*(x) = \tilde{\mathbf K}(x)^{T}\, \mathbf K^{-1}\, \mathbf V. \qquad (2.18)$$
From equation 2.12, we see that $C^*$ is the current source density consistent with the measured potentials that has the smallest norm in $\tilde{\mathcal F}$.
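Once the basis potentials and basis sources are tabulated for the chosen forward model, the entire estimation collapses to a few lines of linear algebra. The following sketch uses array names of our own: B_el[i, k] = b_i(x_k), the i-th basis potential at the k-th electrode, and B_tilde_est[i, p], the i-th basis source at the p-th estimation point:

```python
import numpy as np

def kcsd_estimate(B_el, B_tilde_est, V, lam=0.0):
    """Minimum-norm CSD estimate in matrix form.
    Implements K = B^T B (eq. 2.9), beta = K^{-1} V (eq. 2.14), and
    C*(y_p) = sum_k beta_k * Ktilde(x_k, y_p) (eqs. 2.17-2.18)."""
    K = B_el.T @ B_el                  # N x N kernel matrix K(x_k, x_l)
    Ktilde = B_el.T @ B_tilde_est      # N x P cross-kernel values Ktilde(x_k, y_p)
    beta = np.linalg.solve(K + lam * np.eye(K.shape[0]), V)
    return beta @ Ktilde               # C* at the P estimation points
```

A useful sanity check: if the estimation points coincide with the electrodes and B_tilde_est is replaced by B_el, the output reproduces the measured potentials exactly (for lam = 0), since then beta @ Ktilde = K beta = V.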

2.1.  kCSD for Measurements Taken on Planes or Lines.

In lower dimensionality, the framework changes because in order to calculate the potentials generated by a source, we must assume the structure of the source in the normal (perpendicular) directions to the plane (in 2D) or line (in 1D) of measurements. The need for such models and specific examples were carefully discussed by Pettersen et al. (2006) for the case of laminar recordings and by Łȩski et al. (2011) for planar recordings (such as multishaft electrodes).

2.1.1.  kCSD in 2D.

Consider a situation in which LFP is measured by electrodes that are arranged on a flat surface, as in Csicsvari et al. (2003), for example. To estimate CSD, we need to make assumptions about its profile in the direction perpendicular to the surface. We introduce a coordinate system (x, y, z) and assume that the electrodes are arranged on the surface spanned by the x- and y-axes. In Łȩski et al. (2011) we proposed to consider the CSD as a product of a two-dimensional profile and a specific profile H in the perpendicular direction z:
$$C(x, y, z) = c(x, y)\, H(z).$$
For H(z) here, we take a simple step function:
$$H(z) = \begin{cases} 1, & -h \leq z \leq h, \\ 0, & \text{otherwise}, \end{cases}$$
although other choices such as a gaussian profile are also possible. Thus, we assume that the CSD profile is constant in z direction within a slice of thickness 2h centered at the surface with electrodes and 0 elsewhere. It turns out that the specific choice of profile H(z) influences mainly the amplitude of the calculated potentials, and so the estimated sources, while their overall shape is reasonably robust (Łȩski et al., 2011).
The potential measured by an electrode placed at a point (x, y, 0) is in this case given by
$$V(x, y, 0) = \frac{1}{4\pi\sigma} \int dx'\, dy'\; c(x', y') \int_{-h}^{h} \frac{dz'}{\sqrt{(x - x')^{2} + (y - y')^{2} + z'^{2}}}. \qquad (2.19)$$
In this case, it is sufficient to estimate the two-dimensional profile c(x, y) to get an estimate of the overall CSD in the region. Therefore, we can define the spaces $\tilde{\mathcal F}$ and $\mathcal F$ by introducing two-variable basis functions. This can be done similarly as in the 3D case, using simple step basis functions for the space $\tilde{\mathcal F}$,
$$\tilde b_i(x, y) = \begin{cases} 1, & (x - x_i)^{2} + (y - y_i)^{2} \leq R^{2}, \\ 0, & \text{otherwise}, \end{cases} \qquad (2.20)$$
where $(x_i, y_i)$ is the center of the $i$-th source,
or gaussians,
$$\tilde b_i(x, y) = \exp\!\left( -\frac{(x - x_i)^{2} + (y - y_i)^{2}}{2R^{2}} \right). \qquad (2.21)$$
The potential basis functions can be derived by applying equation 2.19:
$$b_i(x, y) = \frac{1}{4\pi\sigma} \int dx'\, dy'\; \tilde b_i(x', y') \int_{-h}^{h} \frac{dz'}{\sqrt{(x - x')^{2} + (y - y')^{2} + z'^{2}}}. \qquad (2.22)$$

2.1.2.  kCSD in 1D.

Assume that the electrodes are arranged along a straight line. As in the previous section, we need to make assumptions on the CSD profile in the perpendicular plane. Pettersen et al. (2006) proposed to introduce an overall CSD profile of the form
$$C(x, y, z) = H(x, y)\, c(z).$$
We assume rotational symmetry around the z-axis, and for H(x, y), we take a simple step function on a disk of radius r:
$$H(x, y) = \begin{cases} 1, & x^{2} + y^{2} \leq r^{2}, \\ 0, & \text{otherwise}. \end{cases}$$
The potential measured by an electrode placed at a point (0, 0, z) is in this case given by
$$V(z) = \frac{1}{2\sigma} \int dz'\; c(z') \left( \sqrt{(z - z')^{2} + r^{2}} - |z - z'| \right). \qquad (2.23)$$
Now the space of CSD can be defined by introducing one-variable basis functions. As in the previous cases, one can use simple step functions,
$$\tilde b_i(z) = \begin{cases} 1, & |z - z_i| \leq R, \\ 0, & \text{otherwise}, \end{cases} \qquad (2.24)$$
or gaussians,
$$\tilde b_i(z) = \exp\!\left( -\frac{(z - z_i)^{2}}{2R^{2}} \right). \qquad (2.25)$$
Finally, the potential basis functions can be obtained by applying equation 2.23:
$$b_i(z) = \frac{1}{2\sigma} \int dz'\; \tilde b_i(z') \left( \sqrt{(z - z')^{2} + r^{2}} - |z - z'| \right). \qquad (2.26)$$
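As an illustration, here is a minimal end-to-end 1D sketch under our own toy choices (a gaussian basis as in equation 2.25, a two-bump test CSD, eight contacts, and trapezoidal quadrature for the integral in equation 2.26); it is a sketch of the procedure, not a reference implementation:

```python
import numpy as np

sigma, r, R = 1.0, 0.1, 0.15              # conductivity, disk radius, basis width (our toy values)
zp = np.linspace(-0.5, 1.5, 400)          # quadrature grid for z'
centers = np.linspace(0.0, 1.0, 40)       # M = 40 basis-source centers

def trapz(y):                             # trapezoidal rule on the zp grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(zp)) / 2.0)

def b_tilde(z, c):                        # gaussian basis source (cf. eq. 2.25)
    return np.exp(-(z - c) ** 2 / (2 * R ** 2))

def fwd(csd_on_zp, z):                    # potential at z from a CSD sampled on zp (cf. eq. 2.23)
    return trapz(csd_on_zp * (np.sqrt((z - zp) ** 2 + r ** 2) - np.abs(z - zp))) / (2 * sigma)

electrodes = np.linspace(0.1, 0.9, 8)     # N = 8 contacts on a line
true_csd = np.exp(-(zp - 0.4) ** 2 / 0.02) - np.exp(-(zp - 0.7) ** 2 / 0.02)
V = np.array([fwd(true_csd, z) for z in electrodes])          # simulated measurements

B = np.array([[fwd(b_tilde(zp, c), z) for z in electrodes] for c in centers])  # b_i(x_k)
beta = np.linalg.solve(B.T @ B, V)        # K = B^T B, beta = K^{-1} V
zs = np.linspace(0.0, 1.0, 101)           # estimation points
C_star = beta @ (B.T @ np.array([[b_tilde(z, c) for z in zs] for c in centers]))
```

C_star is the kCSD estimate along the line; its shape should track the two bumps of the model CSD.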

2.2.  Spatial Arrangement of the Basis Elements.

In section 2.1, we introduced example shapes of the basis functions $\tilde b_i$, which we considered in various dimensions. To implement kCSD, we have to specify the number and localization of these sources. Let us denote the area where we estimate CSD by $\mathcal D \subset \mathbb R^{d}$, where d ∈ {1, 2, 3}. In all the tests we carried out, $\mathcal D$ was a product of intervals; for example, in section 3.1,
$$\mathcal D = [x_{\min} - \xi \Delta x,\; x_{\max} + \xi \Delta x] \times [y_{\min} - \xi \Delta y,\; y_{\max} + \xi \Delta y].$$

The parameter ξ is introduced to treat the cases when the actual sources extend beyond the grid of electrodes (i.e., beyond x_min ≤ x ≤ x_max, y_min ≤ y ≤ y_max). For ξ = 0, such sources would cause artifacts at the boundary of the reconstruction region. We observed this phenomenon in 3D and 2D iCSD (Łȩski et al., 2007, 2011). The remedy there was to add a layer of nonzero sources to the original electrode grid, a method called the B or D boundary conditions. In the case of kCSD, we achieve the same result by choosing ξ > 0.

To generate the basis of sources, we always took a spherically symmetric template function and translated it to the nodes of a regular rectangular grid, obtaining the full basis, while making sure that each point in the estimation area belongs to the support of at least two basis sources. It turns out that to get visually smooth results, R should be a multiple of the spacing between the grid nodes; otherwise, we observed significant irregularities.
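In 2D, for instance, the placement rule can be sketched as follows (the uniform n_side × n_side layout and the names are our assumptions):

```python
import numpy as np

def source_centers(xmin, xmax, ymin, ymax, dx, dy, xi, n_side):
    """Centers of the basis sources: a regular n_side x n_side grid extended
    xi inter-electrode spacings beyond the electrode grid on each side."""
    xs = np.linspace(xmin - xi * dx, xmax + xi * dx, n_side)
    ys = np.linspace(ymin - xi * dy, ymax + xi * dy, n_side)
    X, Y = np.meshgrid(xs, ys)
    return np.column_stack([X.ravel(), Y.ravel()])
```

Translating a template such as those of equations 2.20 and 2.21 to these centers yields the full basis.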

2.3.  Relation Between iCSD and kCSD.

The main feature of kCSD is efficient estimation in spaces with rich bases. We assumed here that in general, the dimension M of the space of sources is much higher than the number of measurements, N. If M = N, then kCSD is equivalent to a variant of the previously developed inverse CSD method (Pettersen et al., 2006; Łȩski et al., 2007, 2011), in which we take the basis from the kCSD method as the N-parameter family of sources in the inverse CSD method. Then, in both models, we have the same space of sources and no degeneracy; hence, the solutions have to be the same. To illustrate the connection between the two approaches, we show this explicitly in appendix B.

3.  Tests and Examples

To test the viability of the kCSD method, we performed a number of numerical experiments using model sources and experimentally registered potentials.

The first question is how the kCSD method compares to the other methods (finite-difference approximation, inverse CSD). To answer this, we used several configurations of the model CSD to calculate the potentials that would have been measured using multicontact electrodes. To be able to apply all the different CSD methods, we had to use regular grids; that means we calculated the potentials at either equidistant points in 1D or points that formed a Cartesian grid in 2D or 3D. Then we tested the similarity of the CSD reconstructions to the model CSD for a wide range of parameters of the kCSD method. These tests are described in more detail in section 3.1. The conclusion is that for the electrode grids where all the methods can be applied, the kCSD method, with an appropriately chosen basis, performs as well as or better than the spline iCSD method (which is typically better than the finite-difference, or traditional, CSD analysis).

A major strength of kCSD is its capability to estimate CSD from arbitrary distributions of contacts with equal ease. Thus, the second, and perhaps the more interesting, question is how the kCSD method performs for contacts not forming a regular grid. Though it is sometimes possible to use other CSD methods in such cases, it is usually harder to use them without the assumption of regularity, and the kCSD method seems to be the most natural choice. We illustrate this in section 3.2. First, we show how the kCSD method can be easily applied to (model) potentials recorded on a grid used in Wirth and Lüscher (2004). Then we test the quality of reconstruction for electrodes placed randomly within the probed area and check how it changes with an increasing number of electrodes.

The intermediate case between regular and irregular grids is when we use a grid of regularly placed contacts but with a small number of contacts missing. This can happen, for example, when one or two contacts are used for stimulation instead of recording. This problem was studied earlier in Wójcik and Łȩski (2010), where two approaches based on iCSD were proposed: one was to substitute the missing channels with averages of their neighbors (LA, for local averages); the other was to restrict the dimensionality of the possible CSD distributions and use the least-squares fit to all available recordings (LS). Again, the kCSD method seems to be a natural choice here. In section 3.3, we test the kCSD method on the same experimental data as used in Wójcik and Łȩski (2010) and show that it is a substantially better approach than their LS method. Comparison of kCSD with the LA method depends on the data set tested.

3.1.  Comparison of CSD Methods on Regular Grids.

The kCSD method as defined above has a number of parameters that need to be specified before the method can be applied to data. Specifically, we need to define the basis of the space of CSD distributions $\tilde{\mathcal F}$. As an example, we consider a two-dimensional regular, rectangular electrode grid (z = 0 for all electrodes). We generate all the basis functions by translating a single reference function of the form c(x, y)H(z), where, as in section 2.1.1, for c(x, y) we take either a two-dimensional gaussian,
$$c_g(x, y) = \exp\!\left( -\frac{x^{2} + y^{2}}{2R^{2}} \right), \qquad (3.1)$$
or a two-dimensional cylindrically symmetric step function,
$$c_s(x, y) = \begin{cases} 1, & x^{2} + y^{2} \leq R^{2}, \\ 0, & \text{otherwise}. \end{cases} \qquad (3.2)$$
Therefore, each basis function is a translation of c_s or c_g. The parameter R in the formulas above is the size of the basis sources in the xy plane. As the transverse profile H(z), we take a step function: H(z) = 1 for −h ≤ z ≤ h. Let x_min, x_max, y_min, y_max denote the minima and maxima of the x and y coordinates of the electrodes; the spacing of the grid is Δx, Δy. We assume that the sources can extend beyond the electrode grid; specifically, the central points (x, y) of the basis functions can be in the region x_min − ξΔx ≤ x ≤ x_max + ξΔx, y_min − ξΔy ≤ y ≤ y_max + ξΔy, where ξ is a parameter. We arrange the sources as described along a regular rectangular grid. The final parameter is the number of sources n. We choose n to be a product of the numbers of equally spaced sources in the x and y directions. Summarizing, the parameters we have to specify are n, R, h, and ξ, and the choice between step and gaussian profiles in the xy plane. The choice of the translation parameters and number of sources was described in section 2.2.

Let us focus on an eight-by-eight grid with equal spacing in both directions (Δx = Δy = 0.2; all lengths in this section are in mm) spanning the area 0 ≤ x, y ≤ 1.4. We chose two sets of test sources, both having the product structure c(x, y)H(z) with H(z) = 1 for −0.5 ≤ z ≤ 0.5. The data sets are composed of gaussian sources. In the first set, the sources are large compared to the interelectrode distance (see Figure 1A), and in the second data set, they are small (see Figure 1F). The exact formulas are given in appendix A.

Figure 1:

(A–E) Test results for the first model data set (large sources). (A) The model CSD. (B) The potentials. (C) Reconstruction using traditional CSD. (D) Reconstruction using spline iCSD method with D boundary conditions. (E) Reconstruction using the kCSD method for n = 8100, ξ = 2, R = 0.3, h = 0.5, step basis. (F–J) Test results for the second model data set (small sources). (F) The model CSD. (G) The potentials. (H) Reconstruction using traditional CSD. (I) Reconstruction using the spline iCSD method with D boundary conditions. (J) Reconstruction using the kCSD method for n = 8100, ξ = 0.5, R = 0.2, h = 0.5, gaussian basis.


We calculated the potentials at the registration points (for that purpose, the integration area was (x, y) ∈ [−0.5, 1.9] × [−0.5, 1.9]). Then we performed a scan over the space of parameters of the kCSD method. We took all possible combinations of R = 0.05, 0.1, 0.15, …, 0.4; n = 90², 120², …, 240²; h = 0.2, 0.5, 1; and ξ = 0, 0.5, 1, 2, 3. For each combination of parameters and for both the gaussian and the step profiles, we reconstructed the CSD and calculated the normalized reconstruction error e using the formula
$$e = \frac{\int_{\mathcal D} \left( C^*(x) - C(x) \right)^{2} dx}{\int_{\mathcal D} C(x)^{2}\, dx},$$
where $C^*$ is the reconstructed CSD (Łȩski et al., 2007).
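On a common sampling grid, one plausible discretization of this normalized error reads as follows (the squared-L2 normalization is our reading of the measure used in Łȩski et al., 2007):

```python
import numpy as np

def reconstruction_error(C_model, C_rec):
    """Normalized reconstruction error between the model CSD and the
    reconstruction, both sampled on the same grid (squared-L2 ratio;
    the exact normalization is our assumption)."""
    C_model, C_rec = np.asarray(C_model, float), np.asarray(C_rec, float)
    return float(np.sum((C_rec - C_model) ** 2) / np.sum(C_model ** 2))
```

A perfect reconstruction gives e = 0, and e = 1 corresponds to an all-zero reconstruction.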

In Figure 1, we show example reconstructions using the kCSD method with parameters close to optimal (see Figures 1E and 1J), with exact parameter sets given in the caption. These are compared with traditional CSD (see Figures 1C and 1H) and spline iCSD reconstructions (see Figures 1D and 1I).

By “traditional CSD” here and in the following, we understand the following procedure: (1) extend the grid by an extra layer in each direction and copy the potential values at the extra points from their nearest neighbors; (2) calculate the CSD values at the grid points by a discrete numerical approximation to the Laplacian; and (3) interpolate with cubic splines in between. The iCSD with D boundary conditions means the CSD model where the original grid was extended with an extra layer, the same CSD value was assumed as at the nearest neighbor, and the CSD was spline-interpolated between the nodes (Łȩski et al., 2007, 2011).

The errors for optimal kCSD parameters (e = 0.06% and e = 35% for large and small sources, respectively) are smaller than the errors of the traditional CSD (e = 43% and e = 38%) and spline iCSD (e = 1% and e = 36%). Note that for the small sources data set, all errors are rather large. This is because the electrode grid is too sparse to probe the detailed structure of the sources (compare Figures 1H–J with Figure 3, where a denser electrode grid leads to much better reconstruction). This is intuitively very natural, as it resembles the situation in Fourier analysis, where it is impossible to recover frequencies higher than half the sampling rate of the signal (Nyquist theorem).

The results of the parameters space scan can be summarized as follows. The most important parameter is the size of the basis sources R. For the first set of model sources (large sources), it is best to choose large R (R = 0.4 or even larger), while for the second set (small sources), the results are best for small R (∼0.1). This is not surprising since any CSD estimation method works best if the assumed CSD family closely matches the actual distribution.

Since optimal reconstruction parameters depend on the data set, we further tested the dependence of reconstruction error on R on a large set of randomly placed gaussian sources of different sizes, from small to large. We used 2000 data sets (the details on how the sources were chosen are given in appendix A). For each data set, we performed reconstructions for different R (other parameters kept fixed). The results for both gaussian and step bases are presented in Figure 2. For this collection of 2000 data sets, the optimal R is approximately 0.3 to 0.4 for the gaussian basis and approximately 0.15 to 0.2 for the step basis. One interesting observation is that the gaussian basis leads to weaker dependence of the reconstruction error on R. Therefore, the recommendation for this electrode grid would be to use gaussian basis sources and an intermediate value of R—say, R = 0.35. For this choice, the method should work reasonably well for a wide range of CSD sources.

Figure 2:

Dependence of (logarithm of) reconstruction error on R for step (A) and gaussian (B) basis sources. The boxes in the box-and-whisker plots show the median and the lower and upper quartile values. The whiskers extend over the neighboring values up to a maximum of 1.5 times the interquartile range. The values farther away are shown as outliers (+ signs, typically less than 10% of the data points).


The conclusions regarding the choice of h are the same as in the inverse CSD method (Łȩski et al., 2011): the closer we are to the actual h, the better. Assuming a wrong h may have a strong influence on the amplitude of the reconstructed sources, but the shape of the distribution is roughly preserved.

The nonzero values of the parameter ξ dramatically help in cases where the actual activity in the xy plane extends beyond the electrode grid (the large sources case). This corresponds to choosing B or D boundary conditions in iCSD (Łȩski et al., 2007, 2011). The reconstruction error grows with ξ when there is no activity beyond the grid, but this effect is very small; therefore, ξ = 1 is a safe choice in any case.

The number of sources (originally between n = 90² and n = 240²) had almost no effect on the reconstructions. We further studied reconstructions with a smaller number of sources (n = 10², 11², …, 15², 20², 30², 45², 60², 75²), and we found that the reconstructions break down only for very sparse bases (such as n = 10²); taking n = 20² yields errors only slightly higher than for the optimal values of n. The reason that the method does not work for very small n (especially when used together with small R) is that the character of the cross-kernel functions changes dramatically. For larger n, they are smooth functions with a single maximum, whereas for small n, the kernels have multiple maxima located at the observation points and are therefore unable to reproduce smooth CSD distributions faithfully. Our recommendation is that n should be such that the basis sources are denser than the observation points (e.g., n = 20² for an 8 by 8 grid).

3.2.  kCSD on Irregular Grids.

One of the strengths of the kCSD method is that it can be easily applied to any configuration of the recording points. As an example, we consider an electrode array that Wirth and Lüscher (2004) used (see Figure 3). The contacts of the array do not form a regular, rectangular grid like the ones used in section 3.1 (although locally, they form a square lattice). It would be possible to apply some form of inverse CSD (or even a numerical second derivative), but the kCSD method is the most natural method to use in this case. Figure 3 presents the test sources used in this case (the small sources data set described in section 3.1), the potentials resulting from the sources, and the reconstructed CSD. Note that this reconstruction is better than the reconstructions obtained in section 3.1 because the interelectrode distance, 140 μm, is smaller here.

Figure 3:

CSD reconstruction on an irregular grid of electrodes (circles). (A) The model CSD. (B) The estimated potentials. (C) CSD reconstructed from potential values at the grid using kCSD method.


In fact, it is equally easy to apply the kCSD method to regular and irregular grids, which is not the case for other CSD methods such as spline iCSD. To demonstrate the strength of this new approach, we consider electrodes placed randomly. Such irregular placement could occur in real experiments, for example, when many electrodes are positioned independently to record spiking activity and then the LFP signal is recorded. For the two test data sets defined in section 3.1, we chose randomly a set of electrodes placed within the area (x, y) ∈ [0, 1.4] × [0, 1.4] and calculated the reconstruction error e (example reconstructions are shown in Figure 4).2 For each number of electrodes, we repeated this procedure 50 times to obtain error bars on e. The results are presented in Figure 5. For large sources, the CSD can be reconstructed quite faithfully from as few as 16 electrodes. Because of the small spatial extent of small sources, the errors are large (∼40%) even for 64 electrodes.

Figure 4:

CSD reconstructions from randomly placed electrodes (examples). (Top) Large sources. (Bottom) Small sources.

Figure 5:

Distribution of errors for reconstruction from randomly placed electrodes. (A) Large sources. (B) Small sources. For the meaning of the box-and-whisker plots, see the caption for Figure 2.

Figure 6:

(A–C) CSD reconstruction from randomly placed electrodes in 1D. (A) Model CSD. (B) Single reconstruction. Electrode positions are marked with vertical bars at the x-axis. (C) Reconstructions for 12 different sets of randomly placed electrodes. (D–E) One-dimensional example of reconstructing CSD from a regular grid with one missing electrode. (D) Reconstruction from all but one channel. Electrode positions are marked with vertical bars at the x-axis; note that one is missing. (E) Realizations for different electrodes removed.

Similar reconstructions from randomly placed electrodes can also be performed in one and three dimensions. Figure 6 shows an example of the reconstruction of sources (A) from nine electrodes placed randomly on a line (B). Figure 6C shows reconstructed sources for 12 different distributions of nine contacts. (The sources used in Figure 6 are described in section A.4 in appendix A.)

3.3.  Kernel CSD on Incomplete Regular Grids.

One interesting case of irregular grids is the regular grid with a number of missing contacts. Such situations arise often in real experiments, for example, because of the specific experimental setup (Bakker et al., 2009) or hardware failures. In the kCSD framework one can deal with such situations without problems. Figures 6D to 6E show a simple one-dimensional example for the sources presented in Figure 6A.

In previous work (Wójcik & Łȩski, 2010), we studied possible remedies to the incomplete grid problem in the context of the iCSD method. We proposed two solutions: substitute the missing channels with averages of the neighbors (LA, for local averaging) or fit a CSD distribution described by fewer parameters than the number of electrodes using the least squares method (LS). Here we treat the same problem in the context of kCSD. The kCSD method is a natural replacement for the LS approach. It can also be applied to the full data set obtained with the LA method (the results are very close to the LA + spline iCSD method). In this section, we compare the two approaches: kCSD on an incomplete grid or LA + kCSD. Because the irregularity of the grid is naturally accounted for in the kCSD method without the need to explicitly use a least squares fit, we expect this method to perform much better than the LS method did in the inverse CSD case.
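The LA step can be sketched as follows; this is a minimal illustration assuming the potentials are stored as a 2D array on a rectangular grid, with function and variable names of our choosing:

```python
import numpy as np

def local_average_fill(pot, missing):
    """LA step: fill each missing grid channel with the mean of its
    available nearest neighbors.  pot is a 2D array of potentials on
    the grid; missing is a set of (row, col) indices."""
    filled = pot.copy()
    rows, cols = pot.shape
    for (i, j) in missing:
        vals = [pot[a, b]
                for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= a < rows and 0 <= b < cols and (a, b) not in missing]
        filled[i, j] = np.mean(vals)
    return filled

v = np.arange(16, dtype=float).reshape(4, 4)
v_filled = local_average_fill(v, {(1, 1)})
# neighbors of (1, 1) are 1, 9, 4, 6, so the fill is (1+9+4+6)/4 = 5.0
```

Note that for a potential varying linearly across the grid, as in this toy array, the local average recovers the missing value exactly; the distortions discussed below arise when the potential curves between electrodes.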

To test the relative performance of kCSD and LA + kCSD methods, we first studied the two-dimensional data sets (large sources and small sources) introduced in section 3.1. We set a number n of missing channels (1 ≤ n ≤ 8) and studied a large number of possible combinations of missing points (for n = 1 and n = 2, we checked all possibilities, 64 and (64 × 63) ÷ 2 = 2016, respectively; for each larger n, we randomly chose 2000 combinations). For each configuration and each of the two data sets, we reconstructed the CSD twice: first using the kCSD method on the incomplete data set and then using the LA + kCSD approach. The results plotted as the mean of the reconstruction error e ± standard deviation are presented in Figure 7.

Figure 7:

Comparison of two methods (LA followed by kCSD, o's; kCSD on an incomplete grid, x's) to reconstruct CSD from data on a grid with missing points. The x-axis shows the number of recording points removed from the grid. The values plotted on the y-axis are the means of the normalized reconstruction error e; error bars are ± standard deviation. (A) Large sources. (B) Small sources. (C) The best 90% out of 2000 random choices of removed points (except n = 1 and n = 2, where 90% of all possibilities are used). (D) Same as C but for the worst 10% of the cases. The data used here are the same as used in Figures 3 and 4 in Wójcik and Łȩski (2010).

The kCSD method applied to the incomplete set yields better results than the LA + kCSD method. The difference is striking in the case of large sources (see Figure 7A), which is not unexpected; we already saw that for this data set, the kCSD reconstructions are very precise even for a much smaller number of available measurements (see Figure 5). Evidently in this case, local averages of neighbors lead to an incorrect approximation of the missing values of the potential, distorting the data and resulting in larger reconstruction errors.

To directly compare the new kCSD method to the LA + iCSD and LS methods from Wójcik and Łȩski (2010), we performed another numerical experiment, this time using the experimental data sets from that work. The data are extracellular evoked potentials recorded in the rat brain on a three-dimensional grid of 4 × 5 × 7 points and are described in detail elsewhere (Łȩski et al., 2007; Łȩski, Kublik, Świejkowski, Wróbel, & Wójcik, 2010). In Wójcik and Łȩski (2010), we concluded that the LA + iCSD method is preferable to the less stable LS approach.

Here, as in the two-dimensional case above, we calculated the reconstruction error for different electrode setups using either kCSD or the LA + kCSD approach.3 As expected, the kCSD method is indeed much better than LS combined with spline iCSD. The errors of reconstruction are much smaller, and the results are much more robust (there are no cases of huge errors, in contrast to the LS method in Wójcik & Łȩski, 2010). Still, the LA + kCSD method performs better for this data set than pure kCSD. This result is similar to the one obtained in Wójcik and Łȩski (2010) and differs from the result for the two-dimensional case of large and small sources. The results are presented in Figures 7C and 7D (Figures 7A and 7B are direct counterparts of Figures 4C and 4D in Wójcik and Łȩski, 2010).

Summarizing the two tests described above, the relative performance of the pure kCSD method on an incomplete grid versus the LA + kCSD method depends on the exact geometry of the electrodes. Therefore, we recommend deciding whether to fill in missing values with local averages of the neighbors based on tests on plausible model data for each particular electrode setup.

4.  Regularization and Parameter Selection

Until now we have assumed that the data are precise (measured without errors) and suggested heuristics for choosing parameters. Such an approach allowed us to show how kCSD can be seen as a generalization of the previously developed iCSD method. Setting R equal to half of the electrode spacing in a regular grid also corresponds to iCSD, where the shape of the basis functions was determined by the geometry of the electrode array. Such an approach, demanding little computational power, can be recommended in cases when one wants quick results.

In practice, one never encounters noiseless data. Moreover, in a thorough analysis, one would like to set the parameters of the method optimally to do justice to the data, not just to the electrode setup. In this section, we examine one possible approach to address these issues. Since machine learning theory offers a variety of solutions, the following discussion merely indicates further directions of research.

4.1.  Regularization.

Measurements from electrodes are always corrupted with noise. To minimize the influence of this noise on inferences, one should avoid overfitting the estimates to the observed data. Taking such precautions is known in statistics as regularization. Regularization can be understood as reducing the variance of estimators while allowing bias. Until now, we have constructed unbiased estimators that exactly fit the observations:
formula
4.1
To regularize the solution, we used Tikhonov regularization (Tikhonov & Arsenin, 1977; Shawe-Taylor & Cristianini, 2004), requiring
formula
4.2
which is solved by
formula
4.3
Thus, the resulting regularized estimate of CSD is
formula
Increasing the value of λ results in a continuous reduction of the magnitude of the parameters β, which stabilizes the model (decreases variance but increases bias), but choosing λ too large eventually ignores the data. Therefore, one has to decide on a compromise value of λ.
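A minimal numerical sketch of the regularized estimate of equations 4.1 to 4.3 follows, in a 1D toy setting where gaussian kernels stand in for the kernel and cross-kernel that the full method derives from the basis sources (the function, names, and test signal are ours):

```python
import numpy as np

def kcsd_ridge(x_elec, v, x_eval, lam, R=0.3):
    """Regularized kernel estimate (cf. equations 4.2-4.3):
    beta = (K + lam*I)^{-1} V, then C*(x) = sum_i beta_i Ktilde(x, x_i).
    Gaussian kernels of width R stand in for the kernel and cross-kernel
    that the full method derives from the basis sources."""
    gauss = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * R ** 2))
    K = gauss(x_elec, x_elec)            # electrode-electrode kernel
    beta = np.linalg.solve(K + lam * np.eye(len(x_elec)), v)
    return gauss(x_eval, x_elec) @ beta  # cross-kernel times weights

x_elec = np.linspace(0, 1, 8)
v = np.sin(2 * np.pi * x_elec) + 0.1 * np.random.default_rng(0).normal(size=8)
c_hat = kcsd_ridge(x_elec, v, np.linspace(0, 1, 50), lam=1e-2)
```

With λ = 0 the estimator interpolates the measured potentials exactly, recovering the unregularized case of equation 4.1; increasing λ shrinks β and smooths the estimate.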

There are several ways to select the optimal value of λ from the data (Hastie & Tibshirani, 2001; He & Lian, 2005). We used cross-validation. The idea behind cross-validation is fairly simple. The observations are divided into L subsets, typically of equal size. Next, we construct L regressors, each time using one of the L subsets as a test set and the rest as a training set. Each time, the estimation error on the test set is calculated. This procedure is repeated for a wide range of λ values, and for each value the average error is calculated. At the end, we choose the value of λ that results in the smallest average error. In practice, we used leave-one-out cross-validation, where each test set consists of one element.
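The leave-one-out procedure just described can be sketched as follows; this is toy code assuming the kernel matrix K between electrodes is available, with names of our choosing:

```python
import numpy as np

def loo_error(K, v, lam):
    """Mean leave-one-out error for a given lambda: each channel is
    predicted from the remaining ones with the ridge solution
    restricted to the kept channels."""
    n = len(v)
    errs = []
    for i in range(n):
        keep = [j for j in range(n) if j != i]
        beta = np.linalg.solve(K[np.ix_(keep, keep)] + lam * np.eye(n - 1),
                               v[keep])
        errs.append((K[i, keep] @ beta - v[i]) ** 2)
    return float(np.mean(errs))

def choose_lambda(K, v, lambdas):
    """Return the candidate lambda with the smallest leave-one-out error."""
    return min(lambdas, key=lambda lam: loo_error(K, v, lam))

# Toy data: gaussian kernel matrix on 10 channels, noisy potentials.
x = np.linspace(0, 1, 10)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02)
v = np.sin(2 * np.pi * x) + 0.3 * np.random.default_rng(1).normal(size=10)
lambdas = [1e-6, 1e-4, 1e-2, 1.0]
best_lam = choose_lambda(K, v, lambdas)
```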

To test the viability of Tikhonov regularization, we performed a test using the large sources model data set. To the calculated potentials, we added gaussian noise of SD equal to 10% of the total variation of the noise-free potential in the studied domain (max V(x) − min V(x)). We set R to 1.5 times the maximum electrode distance and chose λ using cross-validation. Figure 8 shows that the kCSD method without regularization works rather poorly for such large noise, whereas combining kCSD with ridge regression and cross-validation improves the reconstruction significantly.

Figure 8:

(A) Distribution of model sources used to test ridge regression. Circles indicate positions of simulated electrodes. To the potentials measured there, we added gaussian noise with SD equal to 10% of the total variation of the noise-free potential in the studied domain. (B) CSD reconstruction without regularization. (C) Reconstruction using Tikhonov regularization with R set to 1.5 times the maximum electrode distance and λ chosen via cross-validation.

In the test situation, where one knows the model source, optimal λ can be calculated not only through cross-validation but also through direct comparison of reconstructed sources with the true source. In experiments we do not have this luxury. To find out how close to optimal the value of λ selected by cross-validation is, we checked the reconstruction error in ridge regression on the model data set for a wide range of λ. Figure 9A shows that the λ obtained with cross-validation gives a very good estimate of the optimal value.

Figure 9:

(A) Line: Reconstruction error of kCSD with ridge regression on the whole data set for a wide range of parameters. The circle indicates the value of λ chosen via cross-validation. (B) Line: Reconstruction error for iCSD and kCSD (with λ and R chosen via cross-validation) for increasing SD of the noise. Potentials used for the reconstructions were generated by the large sources CSD profile and recorded on a 7 × 7 regular rectangular grid.

4.2.  Parameter Selection.

In the previous sections, we proposed heuristic suggestions for selecting parameter values. Cross-validation, which we used to select the optimal value of λ in ridge regression, can also be used to select the values of other parameters of the method. The main parameter is R, but we could also have a parameter setting the extent of the sources beyond the area spanned by the electrodes and thus account for the influence of sources beyond the region of interest. In general, we may want our method to depend on parameters p1, …, pm and obtain their values using cross-validation. In practice, we select m finite sets P1, …, Pm of candidate parameter values, consider all possible combinations (p1, …, pm) ∈ P1 × ⋯ × Pm, and do cross-validation as discussed. One can construct the sets P1, …, Pm using prior knowledge or just let them cover a wide range of values.
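The exhaustive search over the product of candidate sets, here for R and λ jointly, can be sketched as follows; a toy gaussian-kernel cross-validation score stands in for the full kCSD error, and all names are ours:

```python
import itertools
import numpy as np

def loo_cv_error(x, v, R, lam):
    """Leave-one-out error of a gaussian-kernel ridge predictor with
    width R and regularization lam (a toy stand-in for the kCSD score)."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * R ** 2))
    n = len(v)
    err = 0.0
    for i in range(n):
        keep = [j for j in range(n) if j != i]
        beta = np.linalg.solve(K[np.ix_(keep, keep)] + lam * np.eye(n - 1),
                               v[keep])
        err += (K[i, keep] @ beta - v[i]) ** 2
    return err / n

def grid_search(x, v, Rs, lambdas):
    """Score every combination in the Cartesian product of the candidate
    sets (P1 = Rs, P2 = lambdas) and return the best one."""
    return min(itertools.product(Rs, lambdas),
               key=lambda p: loo_cv_error(x, v, *p))

x = np.linspace(0, 1, 12)
v = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(2).normal(size=12)
R_best, lam_best = grid_search(x, v, [0.1, 0.2, 0.4], [1e-4, 1e-2, 1.0])
```

The cost grows as the product of the set sizes, so in practice the candidate sets are kept small or constructed from prior knowledge, as noted above.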

To illustrate the power of this approach to parameter tuning that includes regularization, we examined the performance of kCSD as noise grows (see Figure 9B). At each level of noise, λ and R were chosen through cross-validation. We compared the reconstruction error of the obtained solution with the performance of spline iCSD. The improvement is significant.

5.  Discussion and Summary

In this letter, we have introduced a new framework for estimating current sources from extracellular potentials using kernel methods. Introducing kernels in this context opens up new experimental possibilities, allowing efficient approximation of the sources from arbitrary distributions of contacts. We discuss here the advantages and limitations of this approach, indicating possible further directions of development in both the physical and statistical aspects of the problem.

5.1.  Advantages of kCSD.

The main advantages of kCSD compared with the previously developed methods are (1) conceptual separation of the model construction (introducing the basis sources and the corresponding potentials bi(x)) from the distribution of electrode locations and (2) ease of reconstructing CSD from arbitrarily located contacts. One immediate benefit concerns cases such as the 3D recordings analyzed in Łȩski et al. (2007, 2010), where we know that the potentials were not recorded exactly on a grid: in the framework of kCSD, we can take the best estimates of the electrode positions and the cost of calculations does not change, as opposed to the 3D iCSD, where we assumed electrode locations on a regular grid and neglected possible errors.

This flexibility may lead to new experimental possibilities. One case we see is combining acute experiments, such as the one described in Łȩski et al. (2007, 2010), where one can perform precise scans of electrical activation of tissue with high spatial resolution, with chronic experiments, where, of necessity, one would restrict oneself to a few precisely positioned electrodes, usually not on a grid. One could then use the information about the activity of CSD obtained in the acute experiments to build optimal model spaces, allowing the best possible reconstruction of the sources from the limited number of measurements available in the chronic situation. This may lead to a clearer spatial and temporal separation of functional pathways than possible using methods available so far.

5.2.  Interplay of Modeling and Data Analysis in CSD Reconstruction.

A question that arises in connection with the approach outlined above is the following. Given a specific profile of the sources (with or without their temporal dynamics) and N available electrodes, how should the electrodes be positioned, and how should one construct the model RKHS, to minimize the errors of CSD reconstruction for the studied process? Alternatively, how many electrodes are needed, and how should they be positioned, to allow efficient estimation of the CSD with a given precision? We expect that the answers to these open questions will depend significantly on the specific activity and structure studied. To find optimal positions for recording, it would probably be necessary to test different arrangements of electrodes on simulated data. This calls for a new optimization scheme and the development of efficient simulations of local field potentials in realistic geometries.

5.3.  Spectral Decomposition.

We have concentrated here on the estimation of sources from potentials, as the CSD seems to be the main object of physiological interest. However, the approach through RKHS can give us additional insight through spectral decomposition (Shi, Belkin, & Yu, 2008). That is, for a selected representation of sources (given a set of basis sources), we can calculate the contribution of every basis source to the estimated CSD at any time point. We can write equation 2.18 as
formula
where
formula
It may happen that the temporal changes of the activation αj of a source centered on xj differ from the value of the estimated CSD at x, as the latter is a sum of contributions from all the sources containing xj in their support. This could yield additional insight in the analysis of data, especially if the construction of the underlying RKHS is motivated anatomically and the basis sources can be attributed functional meaning.

5.4.  Parameter Choice for kCSD.

The kCSD framework that we have introduced here is very flexible; one can use many different types of bases. This leads to the question of what the recommended first choice of model space is and how to choose parameters for unknown sources. As our numerical experiments in section 3 show, gaussian models give smaller errors over a larger range of parameters than the step functions, so we recommend the gaussians. The optimal value of R for electrodes on a regular grid is between one and two interelectrode distances. We expect that even better results can be obtained with a basis adapted to the problem at hand. Its construction should be motivated by available anatomical and functional information or, if possible, by tests on sources generated in computational modeling. Efficient construction of an optimal basis for the problem at hand is another direction worth exploring. One possible approach is cross-validation, which we demonstrated in section 4 for the selection of the regularization constant λ.

5.5.  Including Time Dependence.

As mentioned in section 2, the time dependence of the potentials was not taken into account in this letter. However, LFP data coming from experiments usually have the form of several time series, one for each electrode. Modeling these time series and the relationships between them, and incorporating this knowledge in the estimation, may reveal more information about the examined region.

A possible way of extending the framework described in this letter would be to add time dependence to the basis functions introduced in section 2. In the simplest case, one could think of basis functions that factorize into the product of separate location-dependent and time-dependent factors,
formula
where could have a step or a gaussian shape.

If we calculate the potential generated by each basis source at every moment t, we get a set of potential basis functions bi(x, t). We are therefore free to construct kernels as in equations 2.9 and 2.17, run the kCSD method, and obtain a time-dependent CSD estimator.

5.6.  Direct kCSD Method.

To this point, we have always derived the kernel K and the cross-kernel from the basis functions via equations 2.9 and 2.17. One can also try defining the kernel K directly, omitting the introduction of basis functions. Shawe-Taylor and Cristianini (2004, theorem 3.11) showed that as long as a kernel function K is positive definite, there exists an RKHS and an equivalent mapping such that
formula
We are therefore free to model the potentials with kernels typically used in learning algorithms (e.g., a gaussian kernel), as presented, for instance, in Schölkopf and Smola (2002). To model the CSD, it remains to find the equivalent cross-kernel. This can be done using the following equation:
formula
where by we denote operator acting on the second variable, so we can write
formula
Calculating it involves inverting the operator, which is simple in the 3D case, where the inverse operator is just the Laplacian. However, in the 1D and 2D cases, this operator depends on the model of the tissue in the directions normal to the space spanned by the electrodes, so its inversion is more involved.

Our preliminary numerical experiments in the 3D case indicate that this direct kCSD method, as we call it, is even faster, easy to compute, and very stable. A thorough study is underway.

5.7.  Generalized Models of Tissue Conductivity.

In the analysis we have assumed constant conductivity. This simplifies the problem and, in view of the lack of available data on conductivity in many areas, is a natural starting point. However, as it is now becoming feasible to measure conductivity more and more precisely, and as changing conductivity seems to substantially influence the fields (Goto et al., 2010), it is necessary to develop kCSD to incorporate richer models of sources taking into account space-dependent, and perhaps nonscalar, conductivity. One important example that calls for a dedicated approach is that of slices on multielectrode arrays.

Appendix A:  Specification of the Sources Used in the Tests

In this appendix, we provide detailed information about the sources used to test the kCSD method in section 3.

A.1.  Large Sources.

The large sources were generated using the following Matlab function:
formula
Note that the sources used in this letter have product structure, which means that the above function was evaluated only for z = 0 and we assumed a step profile in z variable. More general sources were considered in Łȩski et al. (2011).

A.2.  Small Sources.

Let a, μ1, μ2, and C be the parameters (amplitude, mean, covariance matrix) of the following gaussian function:
formula
The small sources data set was generated as a sum of four such gaussians with parameters given in Table 1.
Table 1:
Parameters Defining Gaussian Test Sources.

Number   a       μ1    μ2    C
1        0.2     0.2   0.3
2        −0.25   0.2   0.6
3        0.24    0.5   0.3
4        −0.2    0.5   0.6

A.3.  Random Gaussian Sources.

The random gaussian sources were constructed according to the following algorithm (all probability distributions are uniform):

  1. Choose rmin randomly between 0.1 and 0.2; let rmax = 2rmin.

  2. Choose number n of gaussian sources, 4 ≤ n ≤ 8.

  3. For each source choose amplitude a between −1 and 1, angle ϑ between 0 and 2π, position of the source (x0, y0) in the square [0, 1.4]2, and σx, σy between rmin and rmax.

  4. The test CSD distribution is equal to the sum of n terms, each of the form
    formula
See http://en.wikipedia.org/wiki/Gaussian_function.
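The algorithm above can be implemented directly. A sketch follows, in which the elided per-term formula is assumed to be the standard rotated two-dimensional gaussian from the cited Wikipedia page, and the function name is ours:

```python
import numpy as np

def random_gaussian_sources(xx, yy, seed=0):
    """Generate a test CSD following steps 1-4 above.  The elided
    per-term formula is assumed to be the standard rotated 2D gaussian
    (per the cited Wikipedia page)."""
    rng = np.random.default_rng(seed)
    rmin = rng.uniform(0.1, 0.2)        # step 1
    rmax = 2 * rmin
    n = rng.integers(4, 9)              # step 2: 4 <= n <= 8
    csd = np.zeros_like(xx)
    for _ in range(n):                  # step 3: per-source parameters
        a = rng.uniform(-1, 1)
        theta = rng.uniform(0, 2 * np.pi)
        x0, y0 = rng.uniform(0, 1.4, size=2)
        sx, sy = rng.uniform(rmin, rmax, size=2)
        # coefficients of the rotated gaussian
        A = np.cos(theta)**2 / (2*sx**2) + np.sin(theta)**2 / (2*sy**2)
        B = -np.sin(2*theta) / (4*sx**2) + np.sin(2*theta) / (4*sy**2)
        C = np.sin(theta)**2 / (2*sx**2) + np.cos(theta)**2 / (2*sy**2)
        csd += a * np.exp(-(A*(xx - x0)**2         # step 4: sum of n terms
                            + 2*B*(xx - x0)*(yy - y0)
                            + C*(yy - y0)**2))
    return csd

xx, yy = np.meshgrid(np.linspace(0, 1.4, 50), np.linspace(0, 1.4, 50))
csd = random_gaussian_sources(xx, yy)
```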

A.4.  1D Sources.

The one-dimensional sources were constructed as a mixture of two gaussians:
formula
The parameter values were:
formula

Appendix B:  iCSD Is a Special Case of kCSD

To set the stage, we rewrite iCSD in the language used here. We start from a set of k LFP measurements. Then a model of CSD is assumed in the form of an N-parameter distribution:
formula
In all the work so far, the xi were assumed to form a regular rectangular grid, and Ci were the values of CSD at the nodes of the grid, which were to be estimated from the given potentials. The spatial profiles and the associated potentials bi(x) are set by the dimensionality of the problem and the variant of the method used (e.g., step, linear spline, cubic spline). For example, in the 3D spline method, one would have a spline-interpolated three-dimensional function between grid points, taking the value 1 at xi and 0 at xj for j ≠ i, with appropriate boundary conditions (Łȩski et al., 2007). The potential generated by the source is given by equation 2.5. In lower dimensionality, one has to add a model of sources in the directions not probed by the electrodes. Thus, in the 2D step method (Łȩski et al., 2011), for example, assuming xi ≡ (xi, yi, 0) with interelectrode distance Δ and a step profile of depth H in the perpendicular direction, we would have
formula
with the potentials given by equation 2.22.
In any iCSD variant, the potentials according to the assumed model of sources are given by V(x) = ∑jCjbj(x) with the values measured at a grid point i equal to V(xi) = ∑jCjbj(xi) = Vi. To find the model sources, we first solve for parameters Cj given the potentials
formula
with obvious notation b(x) = [b1(x), …, bN(x)]T. Inverting this relation, we obtain
formula
Then the sources are given by
formula
B.1
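The procedure of equation B.1 amounts to one linear solve followed by a basis expansion. A toy sketch follows, with placeholder gaussian bases standing in for the true bi(x), which would come from the forward model of equation 2.5; all names are illustrative:

```python
import numpy as np

def icsd_reconstruct(b_funcs, c_funcs, x_elec, v, x_eval):
    """iCSD as a linear solve (cf. equation B.1): B[i, j] = b_j(x_i)
    maps the parameters C to the measured potentials; invert that
    relation and expand the estimate in the CSD basis c_funcs."""
    B = np.array([[bj(xi) for bj in b_funcs] for xi in x_elec])
    C = np.linalg.solve(B, v)  # C = B^{-1} V
    return np.array([sum(cj * cf(x) for cj, cf in zip(C, c_funcs))
                     for x in x_eval])

# Placeholder bases: narrow gaussians as CSD profiles, wider gaussians
# standing in for the potentials they would generate via equation 2.5.
centers = np.linspace(0, 1, 5)
b_funcs = [lambda x, m=m: np.exp(-(x - m) ** 2 / 0.5) for m in centers]
c_funcs = [lambda x, m=m: np.exp(-(x - m) ** 2 / 0.02) for m in centers]
x_elec = np.linspace(0, 1, 5)
v = np.array([0.1, 0.5, -0.2, 0.3, 0.0])
csd = icsd_reconstruct(b_funcs, c_funcs, x_elec, v, np.linspace(0, 1, 40))
```

By construction, expanding in the potential basis instead of the CSD basis reproduces the measured potentials exactly at the electrodes, which mirrors the exact-fit property of iCSD.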
In the kCSD framework, we have (see equation 2.15):
formula
which gives
formula
B.2
Then the sources CkCSD(x) according to equation 2.16 are given by
formula
B.3
Using equations B.2 and B.3, we obtain
formula
B.4

Acknowledgments

This work was supported by the Polish Ministry of Science and Higher Education grant PBZ/MNiSW/07/2006/1 and by the infrastructural grant from the Polish Ministry of Regional Development, POIG.02.03.00-00-003/09.

Notes

1

Such consistency with data is appropriate when the observations are noise free. In section 4, we discuss a more general treatment of data with noise.

2

The only constraint was that any two electrodes cannot be closer than 0.14 mm.

3

Note that here, we know only the potentials, not the true sources; therefore, the error is the difference between the reconstruction from complete and incomplete data.

References

Aronszajn
,
N.
(
1950
).
Theory of reproducing kernels
.
Transactions of the American Mathematical Society
,
68
(
3
),
337
404
.
Bakker
,
R.
,
Schubert
,
D.
,
Levels
,
K.
,
Bezgin
,
G.
,
Bojak
,
I.
, &
Kötter
,
R.
(
2009
).
Classification of cortical microcircuits based on micro-electrode-array data from slices of rat barrel cortex.
Neural Netw.
,
22
(
8
),
1159
1168
.
Barthó
,
P.
,
Hirase
,
H.
,
Monconduit
,
L.
,
Zugaro
,
M.
,
Harris
,
K. D.
, &
Buzsáki
,
G.
(
2004
).
Characterization of neocortical principal cells and interneurons by network interactions and extracellular features
.
J. Neurophysiol.
,
92
(
1
),
600
608
.
Berens
,
P.
,
Keliris
,
G. A.
,
Ecker
,
A. S.
,
Logothetis
,
N. K.
, &
Tolias
,
A. S.
(
2008
).
Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex.
Frontiers in Neuroscience
,
2
(
2
),
199
207
.
Buzsáki
,
G.
(
2004
).
Large-scale recording of neuronal ensembles.
Nat. Neurosci.
,
7
(
5
),
446
451
.
Charvet
,
G.
,
Rousseau
,
L.
,
Billoint
,
O.
,
Gharbi
,
S.
,
Rostaing
,
J.-P.
,
Trevisiol
,
M.
, et al. (
2010
).
BioMEA: A versatile high-density 3D microelectrode array system using integrated electronics.
Biosensors and Bioelectronics
,
25
,
1889
1896
.
Csicsvari
,
J.
,
Henze
,
D. A.
,
Jamieson
,
B.
,
Harris
,
K. D.
,
Sirota
,
A.
,
Barthó
,
P.
, et al. (
2003
).
Massively parallel recording of unit and local field potentials with silicon-based electrodes.
J. Neurophysiol.
,
90
(
2
),
1314
1323
.
de Solages
,
C.
,
Szapiro
,
G.
,
Brunel
,
N.
,
Hakim
,
V.
,
Isope
,
P.
,
Buisseret
,
P.
, et al
(
2008
).
High-frequency organization and synchrony of activity in the Purkinje cell layer of the cerebellum
.
Neuron
,
58
(
5
),
775
788
.
Einevoll
,
G. T.
,
Pettersen
,
K. H.
,
Devor
,
A.
,
Ulbert
,
I.
,
Halgren
,
E.,
&
Dale
,
A. M.
(
2007
).
Laminar population analysis: Estimating firing rates and evoked synaptic activity from multielectrode recordings in rat barrel cortex.
J. Neurophysiol.
,
97
(
3
),
2174
2190
.
Freeman
,
J. A.
, &
Nicholson
,
C.
(
1975
).
Experimental optimization of current source-density technique for anuran cerebellum.
J. Neurophysiol.
,
38
(
2
),
369
382
.
Freeman
,
W. J.
(
1980
).
Use of spatial deconvolution ot compensate for distortion of EEG by volume conduction.
IEEE Trans. Biomed. Eng.,
27
(
8
),
421
429
.
Frey
,
U.
,
Egert
,
U.
,
Heer
,
F.
,
Hafizovic
,
S.
, &
Hierlemann
,
A.
(
2009
).
Microelectronic system for high-resolution mapping of extracellular electric fields applied to brain slices.
Biosensors and Bioelectronics
,
24
,
2191
2198
.
Goto
,
T.
,
Hatanaka
,
R.
,
Ogawa
,
T.
,
Sumiyoshi
,
A.
,
Riera
,
J. J.
, &
Kawashima
,
R.
(
2010
).
An evaluation of the conductivity profile in the somatosensory barrel cortex of wistar rats.
J. Neurophysiol.
,
104
,
3388
3412
.
Guljarani
,
R.
(
1998
).
Bioelectricity and biomagnetism
.
Hoboken, NJ
:
Wiley
.
Haberly
,
L. B.
, &
Shepherd
,
G. M.
(
1973
).
Current-density analysis of summed evoked potentials in opossum prepyriform cortex.
J. Neurophysiol.
,
36
(
4
),
789
802
.
Hastie
,
T.
, &
Tibshirani
,
R.
(
2001
).
The elements of statistical learning
.
Berlin
:
Springer
.
He
,
B.
, &
Lian
,
J.
(
2005
).
Electrophysiological neuroimaging
. In
B. He
(Ed.),
Neural engineering
.
New York
:
Kluwer Academic
.
Hunt
,
M. J.
,
Falinska
,
M.
,
Łȩski
,
S.
,
Wójcik
,
D. K.
, &
Kasicki
,
S.
(
2010
).
Differential effects produced by ketamine on oscillatory activity recorded in the rat hippocampus, dorsal striatum and nucleus accumbens.
J. Psychopharmacol.
,
25
(
6
),
808
821
.
Imfeld
,
K.
,
Neukom
,
S.
,
Maccione
,
A.
,
Bornat
,
Y.
,
Martinoia
,
S.
,
Farine
,
P.-A.
, et al
(
2008
).
Large-scale, high-resolution data acquisition system for extracellular recording of electrophysiological activity
.
IEEE Transactions on Biomedical Engineering
,
55
(
8
),
2064
2073
.
Katzner
,
S.
,
Nauhaus
,
I.
,
Benucci
,
A.
,
Bonin
,
V.
,
Ringach
,
D. L.
, &
Carandini
,
M.
(
2009
).
Local origin of field potentials in visual cortex
,
Neuron
,
61
(
1
),
35
41
.
Kreiman
,
G.
,
Hung
,
C. P.
,
Kraskov
,
A.
,
Quiroga
,
R. Q.
,
Poggio
,
T.
, &
Dicarlo
,
J. J.
(
2006
).
Object selectivity of local field potentials and spikes in the macaque inferior temporal cortex.
Neuron
,
49
,
433
445
.
Lakatos
,
P.
,
Shah
,
A. S.
,
Knuth
,
K. H.
,
Ulbert
,
I.
,
Karmos
,
G.
, &
Schroeder
,
C. E.
(
2005
).
An oscillatory hierarchy controlling neuronal excitability and stimulus processing in the auditory cortex.
Journal of Neurophysiology
,
94
,
1904
1911
.
Łȩski
,
S.
,
Kublik
,
E.
,
Świejkowski
,
D. A.
,
Wróbel
,
A.
, &
Wójcik
,
D. K.
(
2010
).
Extracting functional components of neural dynamics with independent component analysis and inverse current source density.
Journal of Computational Neuroscience
,
29
,
459
473
.
Łȩski
,
S.
,
Pettersen
,
K. H.
,
Tunstall
,
B.
,
Einevoll
,
G.
,
Gigg
,
J.
&
Wójcik
,
D. K.
(
2011
).
Inverse current source density method in two dimensions: Inferring neural activation from multielectrode recordings.
Neuroinformatics
.
Doi:10.1007/s12021-011-9111-4
.
Łȩski, S., Wójcik, D. K., Tereszczuk, J., Świejkowski, D. A., Kublik, E., & Wróbel, A. (2007). Inverse current-source density method in 3D: Reconstruction fidelity, boundary effects, and influence of distant sources. Neuroinformatics, 5(4), 207–222.
Lindén, H., Pettersen, K. H., & Einevoll, G. T. (2010). Intrinsic dendritic filtering gives low-pass power spectra of local field potentials. Journal of Computational Neuroscience, 29, 423–444.
Lipton, M. L., Fu, K.-M. G., Branch, C. A., & Schroeder, C. E. (2006). Ipsilateral hand input to area 3b revealed by converging hemodynamic and electrophysiological analyses in macaque monkeys. Journal of Neuroscience, 26, 180–185.
Liu, J., & Newsome, W. T. (2006). Local field potential in cortical area MT: Stimulus tuning and behavioral correlations. Journal of Neuroscience, 26(30), 7779–7790.
Lorente de No, R. (1947). A study of nerve physiology. Studies from the Rockefeller Institute for Medical Research, 131, 1–496.
Mitzdorf, U. (1985). Current source-density method and application in cat cerebral cortex: Investigation of evoked potentials and EEG phenomena. Physiol. Rev., 65(1), 37–100.
Nicholson, C., & Freeman, J. A. (1975). Theory of current source-density analysis and determination of conductivity tensor for anuran cerebellum. J. Neurophysiol., 38(2), 356–368.
Normann, R. A., Maynard, E. M., Rousche, P. J., & Warren, D. J. (1999). A neural interface for a cortical vision prosthesis. Vision Res., 39(15), 2577–2587.
Nunez, P. L., & Srinivasan, R. (2006). Electric fields of the brain: The neurophysics of EEG. New York: Oxford University Press.
Pettersen, K. H., Devor, A., Ulbert, I., Dale, A. M., & Einevoll, G. T. (2006). Current-source density estimation based on inversion of electrostatic forward solution: Effects of finite extent of neuronal activity and conductivity discontinuities. J. Neurosci. Methods, 154(1–2), 116–133.
Pettersen, K. H., Hagen, E., & Einevoll, G. T. (2008). Estimation of population firing rates and current source densities from laminar electrode recordings. J. Comput. Neurosci., 24(3), 291–313.
Phillips, C., Mattout, J., Rugg, M. D., Maquet, P., & Friston, K. J. (2005). An empirical Bayesian solution to the source reconstruction problem in EEG. Neuroimage, 24(4), 997–1011.
Pitts, W. H. (1952). Investigations on synaptic transmission. In H. von Foerster (Ed.), Cybernetics: Trans. 9th Conf. Josiah Macy Foundation (pp. 159–166). New York.
Plonsey, R. (1969). Bioelectric phenomena. New York: McGraw-Hill.
Rajkai, C., Lakatos, P., Chen, C.-M., Pincze, Z., Karmos, G., & Schroeder, C. E. (2008). Transient cortical excitation at the onset of visual fixation. Cerebral Cortex, 18, 200–209.
Schölkopf, B., & Smola, A. (2002). Learning with kernels. Cambridge, MA: MIT Press.
Schroeder, C. E., Tenke, C. E., & Givre, S. J. (1992). Subcortical contributions to the surface-recorded flash-VEP in the awake macaque. Electroencephalography and Clinical Neurophysiology, 84, 219–231.
Shawe-Taylor, J., & Cristianini, N. (2004). Kernel methods for pattern analysis. Cambridge: Cambridge University Press.
Sher, A., Chichilnisky, E., Dabrowski, W., Grillo, A., Grivich, M., Gunning, D., et al. (2007). Large-scale multielectrode recording and stimulation of neural activity. Nuclear Instruments and Methods in Physics Research A, 579, 895–900.
Shi, T., Belkin, M., & Yu, B. (2008). Data spectroscopy: Learning mixture models using eigenspaces of convolution operators. In Proceedings of the 25th International Conference on Machine Learning. Eastbourne, UK: Omnipress.
Tikhonov, A. N., & Arsenin, V. Y. (1977). Solutions of ill-posed problems. New York: Wiley.
Vapnik, V. N. (1998). Statistical learning theory. New York: Wiley.
Ward, M. P., Rajdev, P., Ellison, C., & Irazoqui, P. P. (2009). Toward a comparison of microelectrodes for acute and chronic recordings. Brain Research, 1282, 183–200.
Wirth, C., & Lüscher, H.-R. (2004). Spatiotemporal evolution of excitation and inhibition in the rat barrel cortex investigated with multielectrode arrays. J. Neurophysiol., 91(4), 1635–1647.
Wójcik, D. K., & Łȩski, S. (2010). Current source density reconstruction from incomplete data. Neural Computation, 22, 48–60.
Xing, D., Yeh, C.-I., & Shapley, R. M. (2009). Spatial spread of the local field potential and its laminar variation in visual cortex. Journal of Neuroscience, 29(37), 11540–11549.
Ylinen, A., Bragin, A., Nadasdy, Z., Jando, G., Szabo, I., Sik, A., et al. (1995). Sharp wave-associated high-frequency oscillation (200 Hz) in the intact hippocampus: Network and intracellular mechanisms. Journal of Neuroscience, 15, 30–46.
Zhang, Y., van Drongelen, W., Kohrman, M., & He, B. (2008). Three-dimensional brain current source reconstruction from intracranial ECoG recordings. Neuroimage, 42(2), 683–695.