Valérie Ventura
1-10 of 10 results
Neural Computation (2017) 29 (12): 3290–3310.
Published: 01 December 2017
Abstract
Decoding in the context of brain-machine interface is a prediction problem, with the aim of retrieving the most accurate kinematic predictions attainable from the available neural signals. While selecting models that reduce the prediction error is done to various degrees, decoding has not received the attention that the fields of statistics and machine learning have lavished on the prediction problem in the past two decades. Here, we take a more systematic approach to the decoding prediction problem and search for risk-optimized reverse regression, optimal linear estimation (OLE), and Kalman filter models within a large model space composed of several nonlinear transformations of neural spike counts at multiple temporal lags. The reverse regression decoding framework is a standard prediction problem, where penalized methods such as ridge regression or Lasso are routinely used to find minimum-risk models. We argue that minimum-risk reverse regression is always more efficient than OLE and also happens to be 44% more efficient than a standard Kalman filter in a particular application of offline reconstruction of arm reaches of a rhesus macaque monkey. Yet model selection for tuning-curve-based decoding models such as OLE and Kalman filtering is not a standard statistical prediction problem, and no efficient method exists to identify minimum-risk models. We apply several methods to build low-risk models and show that in our application, a Kalman filter that includes multiple carefully chosen observation equations per neural unit is 67% more efficient than a standard Kalman filter, but with the drawback that finding such a model is computationally very costly.
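As a rough illustration of the reverse-regression side of this model search, the sketch below fits a cross-validated ridge decoder to lagged spike counts on synthetic data; the simulated tuning model, the number of lags, and the RidgeCV penalty grid are assumptions for the example, not the paper's settings.

```python
# Cross-validated ridge "reverse regression" decoder on synthetic data:
# kinematics are regressed on spike counts at multiple temporal lags.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
T, n_units, n_lags = 2000, 30, 5            # time bins, neural units, count lags

velocity = np.cumsum(rng.normal(size=(T, 2)), axis=0) * 0.1   # toy 2D hand velocity
tuning = rng.normal(size=(n_units, 2))
counts = rng.poisson(np.exp(0.5 + 0.3 * velocity @ tuning.T)) # log-linear tuning

# Design matrix of lagged counts; nonlinear transforms (e.g., sqrt) could be
# appended as extra columns to enlarge the model space.
X = np.hstack([np.roll(counts, lag, axis=0) for lag in range(n_lags)])
X, y = X[n_lags:], velocity[n_lags:]

decoder = RidgeCV(alphas=np.logspace(-2, 4, 20)).fit(X[:1500], y[:1500])
pred = decoder.predict(X[1500:])
print("held-out RMSE:", np.sqrt(np.mean((pred - y[1500:]) ** 2)))
```

A Lasso variant would only swap RidgeCV for LassoCV; enlarging the model space with nonlinear count transformations just adds columns to X.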
Neural Computation (2016) 28 (5): 849–881.
Published: 01 May 2016
Abstract
Populations of cortical neurons exhibit shared fluctuations in spiking activity over time. When measured for a pair of neurons over multiple repetitions of an identical stimulus, this phenomenon emerges as correlated trial-to-trial response variability via spike count correlation (SCC). However, spike counts can be viewed as noisy versions of firing rates, which can vary from trial to trial. From this perspective, the SCC for a pair of neurons becomes a noisy version of the corresponding firing rate correlation (FRC). Furthermore, the magnitude of the SCC is generally smaller than that of the FRC and is likely to be less sensitive to experimental manipulation. We provide statistical methods for disambiguating time-averaged drive from within-trial noise, thereby separating FRC from SCC. We study these methods to document their reliability, and we apply them to neurons recorded in vivo from area V4 in an alert animal. We show how the various effects we describe are reflected in the data: within-trial effects are largely negligible, while attenuation due to trial-to-trial variation dominates and frequently produces comparisons in SCC that, because of noise, do not accurately reflect those based on the underlying FRC.
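A minimal simulation, not the authors' estimator, can make the attenuation concrete: generate trial-to-trial firing rates with a known correlation, add within-trial Poisson noise, and compare the resulting spike count correlation with the firing rate correlation. The rate model and parameter values below are illustrative assumptions.

```python
# Simulate a pair of neurons whose trial-to-trial firing rates are correlated
# at the FRC level, then add within-trial Poisson noise and measure the SCC.
import numpy as np

rng = np.random.default_rng(1)
n_trials, frc = 500, 0.6

cov = 0.05 * np.array([[1.0, frc], [frc, 1.0]])            # rate (co)variability
rates = np.exp(rng.multivariate_normal([3.0, 3.0], cov, size=n_trials))
counts = rng.poisson(rates)                                 # within-trial noise

print("FRC ~", round(np.corrcoef(rates.T)[0, 1], 2),
      "  SCC ~", round(np.corrcoef(counts.T)[0, 1], 2), "(attenuated)")
```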
Neural Computation (2015) 27 (5): 1033–1050.
Published: 01 May 2015
Abstract
Spike-based brain-computer interfaces (BCIs) have the potential to restore motor ability to people with paralysis and amputation, and have shown impressive performance in the lab. To transition BCI devices from the lab to the clinic, decoding must proceed automatically and in real time, which prohibits the use of algorithms that are computationally intensive or require manual tweaking. A common choice is to avoid spike sorting and treat the signal on each electrode as if it came from a single neuron, which is fast, easy, and therefore desirable for clinical use. But this approach ignores the kinematic information provided by individual neurons recorded on the same electrode. The contribution of this letter is a linear decoding model that extracts kinematic information from individual neurons without spike-sorting the electrode signals. The method relies on modeling sample averages of waveform features as functions of kinematics, which is automatic and requires minimal data storage and computation. In offline reconstruction of arm trajectories of a nonhuman primate performing reaching tasks, the proposed method performs as well as decoders based on expertly sorted spikes, whether the sorting was done manually or automatically.
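The sketch below is a hypothetical, synthetic-data illustration of the general idea, decoding linearly from electrode-level moment features (threshold-crossing counts plus the per-bin sample mean of a waveform amplitude) rather than from sorted units; it is not the letter's actual model or data pipeline.

```python
# Synthetic example: decode linearly from electrode-level moment features
# (threshold-crossing counts and the per-bin mean of a waveform amplitude)
# instead of from spike-sorted units.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_bins, n_elec = 1000, 16

velocity = np.cumsum(rng.normal(size=(n_bins, 2)), axis=0) * 0.1
counts = rng.poisson(3.0, size=(n_bins, n_elec))            # unsorted crossings
# In this toy model the mean amplitude carries kinematic information because
# differently tuned units on an electrode have different waveform sizes.
mean_amp = (50 + velocity @ rng.normal(size=(2, n_elec))
            + rng.normal(scale=2.0, size=(n_bins, n_elec)))

X = np.hstack([counts, mean_amp])                           # per-electrode moments
model = LinearRegression().fit(X[:800], velocity[:800])
print("held-out R^2:", round(model.score(X[800:], velocity[800:]), 2))
```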
Neural Computation (2014) 26 (1): 40–56.
Published: 01 January 2014
Abstract
Corticomotoneuronal cells (CMN), located predominantly in the primary motor cortex, project directly to alpha motoneuronal pools in the spinal cord. The effects of CMN spikes on motoneuronal excitability are traditionally characterized by visualizing postspike effects (PSEs) in spike-triggered averages (SpTA; Fetz, Cheney, & German, 1976; Fetz & Cheney, 1980; McKiernan, Marcario, Karrer, & Cheney, 1998) of electromyography (EMG) data. Poliakov and Schieber (1998) suggested a formal test, the multiple-fragment analysis (MFA), to automatically detect PSEs. However, MFA's performance was not statistically validated, and it is unclear under what conditions it is valid. This paper's contributions are a power study that validates the MFA; an alternative test, the single-snippet analysis (SSA), which has the same functionality as MFA but is easier to calculate and has better power in small samples; a simple bootstrap simulation to estimate SpTA baselines with simulation bands that help visualize potential PSEs; and a bootstrap adjustment to the MFA and SSA to correct for nonlinear SpTA baselines.
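The following sketch illustrates only the visualization step described here, a spike-triggered average of rectified EMG with bootstrap simulation bands built from random trigger times, on synthetic data; it is not the MFA or SSA test statistic, and the window length, band level, and signal model are assumptions.

```python
# Spike-triggered average (SpTA) of rectified EMG with bootstrap simulation
# bands from random trigger times, on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
fs, dur = 1000, 60                                  # 1 kHz EMG, 60 s of data
emg = np.abs(rng.normal(size=fs * dur))             # rectified EMG (toy)
spikes = rng.integers(100, fs * dur - 100, size=2000)   # CMN spike sample indices
win = np.arange(-20, 51)                            # -20 ms to +50 ms window

def trig_avg(triggers):
    return emg[triggers[:, None] + win[None, :]].mean(axis=0)

spta = trig_avg(spikes)

# Baseline band: repeat the average with uniformly random triggers.
boot = np.array([trig_avg(rng.integers(100, fs * dur - 100, size=spikes.size))
                 for _ in range(500)])
lo, hi = np.percentile(boot, [0.5, 99.5], axis=0)
flagged = (spta > hi) | (spta < lo)                 # samples outside the band
print("samples flagged as potential PSE:", int(flagged.sum()))
```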
Neural Computation (2009) 21 (9): 2466–2501.
Published: 01 September 2009
Abstract
Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only.
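As a toy illustration of how tuning information can inform spike identities (with assumed, fixed parameters rather than the full EM fit), the posterior probability that a spike came from a given neuron can weight its waveform likelihood by that neuron's firing rate at the current covariate value:

```python
# Two neurons on one electrode with overlapping waveform amplitudes but
# different cosine tuning to movement direction theta; the tuning term acts
# as a direction-dependent prior on spike identity.
import numpy as np
from scipy.stats import norm

wave_mean, wave_sd = np.array([60.0, 75.0]), np.array([8.0, 8.0])

def rates(theta):
    return np.array([10 * (1 + 0.9 * np.cos(theta)),
                     10 * (1 + 0.9 * np.cos(theta - np.pi))])

def posterior(amplitude, theta):
    lik = norm.pdf(amplitude, wave_mean, wave_sd) * rates(theta)
    return lik / lik.sum()

# The same ambiguous waveform is assigned differently depending on direction.
print(posterior(67.0, theta=0.0))      # tuning favors neuron 1
print(posterior(67.0, theta=np.pi))    # tuning favors neuron 2
```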
Neural Computation (2008) 20 (4): 923–963.
Published: 01 April 2008
Abstract
We propose a novel paradigm for spike train decoding that entirely avoids spike sorting based on waveform measurements. This paradigm directly uses the spike train collected at recording electrodes from thresholding the bandpassed voltage signal. Our approach is a paradigm, not an algorithm, since it can be used with any of the current decoding algorithms, such as population vector or likelihood-based algorithms. Based on analytical results and an extensive simulation study, we show that our paradigm is comparable to, and sometimes more efficient than, the traditional approach based on well-isolated neurons and that it remains efficient even when all electrodes are severely corrupted by noise, a situation that would render spike sorting particularly difficult. Our paradigm will also save time and computational effort, both of which are crucially important for successful operation of real-time brain-machine interfaces. Indeed, in place of the lengthy spike-sorting task of the traditional approach, it involves an exact-expectation EM algorithm that is fast enough that it could also be left to run during decoding to capture potential slow changes in the states of the neurons.
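To make the "any decoder on unsorted signals" point concrete, here is a tiny population-vector example applied directly to electrode-level (multiunit) spike counts on synthetic data; the tuning model and preferred-direction assignment are assumptions for illustration, not the paper's likelihood-based machinery.

```python
# Population-vector decoding applied directly to unsorted, electrode-level
# (multiunit) spike counts; synthetic cosine-tuned data.
import numpy as np

rng = np.random.default_rng(8)
n_elec, n_trials = 32, 400
pref = rng.uniform(0, 2 * np.pi, n_elec)            # net preferred direction per electrode
theta = rng.uniform(0, 2 * np.pi, n_trials)         # true movement directions

lam = 10 * (1 + 0.8 * np.cos(theta[:, None] - pref[None, :]))
counts = rng.poisson(lam)                           # thresholded, unsorted counts

# Weight each electrode's preferred direction by its mean-centered count.
w = counts - counts.mean(axis=0)
est = np.arctan2(w @ np.sin(pref), w @ np.cos(pref))
err = np.degrees(np.abs(np.angle(np.exp(1j * (est - theta)))))
print("median absolute direction error (deg):", round(float(np.median(err)), 1))
```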
Neural Computation (2006) 18 (11): 2583–2591.
Published: 01 November 2006
Abstract
It has been observed that spike count correlation between two simultaneously recorded neurons often increases with the length of time interval examined. Under simple assumptions that are roughly consistent with much experimental data, we show that this phenomenon may be explained as being due to excess trial-to-trial variation. The resulting formula for the correlation is able to predict the observed correlation of two neurons recorded from primary visual cortex as a function of interval length.
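A small simulation (not the paper's analytical formula) illustrates the effect: with a shared trial-to-trial gain on otherwise conditionally independent Poisson counts, the spike count correlation grows as the counting interval lengthens. The gain model and base rate below are assumptions.

```python
# Shared trial-to-trial gain on otherwise independent Poisson counts makes
# the spike count correlation grow with the counting interval length T.
import numpy as np

rng = np.random.default_rng(4)
n_trials, base_rate = 2000, 20.0                    # spikes per second

for T in (0.05, 0.2, 1.0, 4.0):                     # interval lengths (s)
    gain = np.clip(1 + 0.3 * rng.normal(size=n_trials), 0.1, None)
    lam = base_rate * gain * T                      # common expected count
    c1, c2 = rng.poisson(lam), rng.poisson(lam)     # conditionally independent
    print(f"T = {T:4.2f} s  ->  correlation {np.corrcoef(c1, c2)[0, 1]:.2f}")
```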
Neural Computation (2004) 16 (11): 2323–2349.
Published: 01 November 2004
Abstract
Determining the variations in response latency of one or several neurons to a stimulus is of interest in different contexts. Two common problems concern correlating latency with a particular behavior, for example, the reaction time to a stimulus, and adjusting tools for detecting synchronization between two neurons. We use two such problems to illustrate the latency testing and estimation methods developed in this article. Our test for latencies is a formal statistical test that produces a p-value. It is applicable for Poisson and non-Poisson spike trains via use of the bootstrap. Our estimation method is model-free; it is fast and easy to implement, and its performance compares favorably to other currently available methods.
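A generic bootstrap sketch along these lines (not the authors' test statistic or estimator) is shown below: estimate latency as the first post-stimulus bin where the trial-averaged rate exceeds a baseline-derived threshold, then bootstrap over trials for uncertainty; the threshold rule and simulated rates are assumptions.

```python
# Bootstrap latency estimate on simulated Bernoulli spike trains: the latency
# is the first post-stimulus bin whose trial-averaged rate exceeds a
# baseline-derived threshold.
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_bins, onset = 100, 200, 80              # 1 ms bins, true onset 80 ms
rate = np.full(n_bins, 0.02)
rate[onset:] = 0.08
spikes = rng.binomial(1, rate, size=(n_trials, n_bins))

def latency(trials):
    psth = trials.mean(axis=0)
    threshold = psth[:50].mean() + 3 * psth[:50].std()
    above = np.nonzero(psth[50:] > threshold)[0]
    return 50 + above[0] if above.size else np.nan

est = latency(spikes)
boot = [latency(spikes[rng.integers(0, n_trials, n_trials)]) for _ in range(1000)]
lo, hi = np.nanpercentile(boot, [2.5, 97.5])
print(f"latency estimate {est} ms, 95% bootstrap CI [{lo:.0f}, {hi:.0f}] ms")
```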
Neural Computation (2002) 14 (2): 325–346.
Published: 01 February 2002
Abstract
Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model's validity prior to using it to make inferences about a particular neural system. Assessing goodness of fit is a challenging problem for point process neural spike train models, especially for histogram-based models such as peristimulus time histograms (PSTH) and rate functions estimated by spike train smoothing. The time-rescaling theorem is a well-known result in probability theory, which states that any point process with an integrable conditional intensity function may be transformed into a Poisson process with unit rate. We describe how the theorem may be used to develop goodness-of-fit tests for both parametric and histogram-based point process models of neural spike trains. We apply these tests in two examples: a comparison of PSTH, inhomogeneous Poisson, and inhomogeneous Markov interval models of neural spike trains from the supplementary eye field of a macaque monkey and a comparison of temporal and spatial smoothers, inhomogeneous Poisson, inhomogeneous gamma, and inhomogeneous inverse Gaussian models of rat hippocampal place cell spiking activity. To help make the logic behind the time-rescaling theorem more accessible to researchers in neuroscience, we present a proof using only elementary probability theory arguments. We also show how the theorem may be used to simulate a general point process model of a spike train. Our paradigm makes it possible to compare parametric and histogram-based neural spike train models directly. These results suggest that the time-rescaling theorem can be a valuable tool for neural spike train data analysis.
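A minimal sketch of the time-rescaling check, on a simulated inhomogeneous Poisson process with an assumed intensity: rescale the spike times through the cumulative intensity, transform the rescaled interspike intervals to uniforms, and apply a Kolmogorov-Smirnov test.

```python
# Time-rescaling goodness-of-fit check on a simulated inhomogeneous Poisson
# process: rescaled interspike intervals should be Exp(1), so their
# exponential CDF transforms should be Uniform(0, 1).
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(6)
T, lam_max = 10.0, 45.0

def lam(t):                                         # assumed true intensity (spikes/s)
    return 20 * (1 + np.sin(np.pi * t)) + 5

# Simulate spikes by thinning a homogeneous Poisson process of rate lam_max.
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
spikes = cand[rng.uniform(size=cand.size) < lam(cand) / lam_max]

# Cumulative intensity Lambda(t) by numerical integration on a fine grid.
grid = np.linspace(0, T, 100001)
Lam = np.concatenate([[0.0], np.cumsum(lam(grid[:-1]) * np.diff(grid))])
z = np.diff(np.concatenate([[0.0], np.interp(spikes, grid, Lam)]))   # rescaled ISIs

print(kstest(1 - np.exp(-z), "uniform"))            # should not reject a correct model
```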
Neural Computation (2001) 13 (8): 1713–1720.
Published: 01 August 2001
Abstract
Poisson processes usually provide adequate descriptions of the irregularity in neuron spike times after pooling the data across large numbers of trials, as is done in constructing the peristimulus time histogram. When probabilities are needed to describe the behavior of neurons within individual trials, however, Poisson process models are often inadequate. In principle, an explicit formula gives the probability density of a single spike train in great generality, but without additional assumptions, the firing-rate intensity function appearing in that formula cannot be estimated. We propose a simple solution to this problem, which is to assume that the time at which a neuron fires is determined probabilistically by, and only by, two quantities: the experimental clock time and the elapsed time since the previous spike. We show that this model can be fitted with standard methods and software and that it may be used successfully to fit neuronal data.
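A sketch of this kind of model, under assumed functional forms: simulate a spike train whose intensity depends on clock time and time since the last spike, then fit it with an off-the-shelf Poisson GLM (scikit-learn's PoissonRegressor here) using those two quantities as covariates.

```python
# Fit a conditional intensity that depends only on clock time and time since
# the last spike, using a discrete-time Poisson GLM (assumed covariate forms).
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(7)
dt, T = 0.001, 30.0                                 # 1 ms bins, 30 s of data
t = np.arange(0, T, dt)

spikes = np.zeros(t.size, dtype=int)
since = np.empty(t.size)                            # elapsed time since last spike
last = -1.0
for i, ti in enumerate(t):
    since[i] = ti - last
    lam = 30 * (1 + 0.8 * np.sin(2 * np.pi * 0.2 * ti)) * (1 - np.exp(-since[i] / 0.02))
    if rng.uniform() < lam * dt:                    # Bernoulli approximation per bin
        spikes[i], last = 1, ti

# Covariates: clock time (one sine/cosine pair) and a recovery term in the
# elapsed time since the previous spike.
X = np.column_stack([np.sin(2 * np.pi * 0.2 * t), np.cos(2 * np.pi * 0.2 * t),
                     np.exp(-since / 0.02)])
glm = PoissonRegressor(alpha=1e-4).fit(X, spikes)   # standard GLM software
print("fitted coefficients:", np.round(glm.coef_, 2))
```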