Matthew T. Harrison
1-5 of 5 results
Journal Articles
Neural Computation (2020) 32 (5): 969–1017.
Published: 01 May 2020
Abstract
The Kalman filter provides a simple and efficient algorithm to compute the posterior distribution for state-space models where both the latent state and measurement models are linear and gaussian. Extensions to the Kalman filter, including the extended and unscented Kalman filters, incorporate linearizations for models where the observation model p(observation | state) is nonlinear. We argue that in many cases, a model for p(state | observation) proves both easier to learn and more accurate for latent state estimation. Approximating p(state | observation) as gaussian leads to a new filtering algorithm, the discriminative Kalman filter (DKF), which can perform well even when p(observation | state) is highly nonlinear and/or nongaussian. The approximation, motivated by the Bernstein–von Mises theorem, improves as the dimensionality of the observations increases. The DKF has computational complexity similar to the Kalman filter, allowing it in some cases to perform much faster than particle filters with similar precision, while better accounting for nonlinear and nongaussian observation models than Kalman-based extensions. When the observation model must be learned from training data prior to filtering, off-the-shelf nonlinear and nonparametric regression techniques can provide a gaussian model for p(state | observation) that cleanly integrates with the DKF. As part of the BrainGate2 clinical trial, we successfully implemented gaussian process regression with the DKF framework in a brain-computer interface to provide real-time, closed-loop cursor control to a person with a complete spinal cord injury. In this letter, we explore the theory underlying the DKF, exhibit some illustrative examples, and outline potential extensions.
Includes: Supplementary data
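As a rough illustration of the filtering step this abstract describes, here is a minimal numpy sketch of one DKF update, assuming a linear-gaussian state model with transition matrix A, noise covariance Gamma, and stationary covariance S. The callables f and Q are placeholder names for the learned discriminative mean and covariance (e.g., from gaussian process regression); the positive-definiteness safeguard is one variant, not necessarily the exact rule used in the paper.

```python
import numpy as np

def dkf_step(mu_prev, Sigma_prev, z, A, Gamma, S, f, Q):
    """One discriminative Kalman filter step (illustrative sketch).

    State model:  x_t = A x_{t-1} + w_t,  w_t ~ N(0, Gamma),
    with stationary state covariance S. The learned discriminative
    model supplies p(state | observation) ~= N(f(z), Q(z)); f and Q
    are placeholders for regression outputs, not the paper's code.
    """
    # Predict under the linear-gaussian state model.
    nu = A @ mu_prev
    M = A @ Sigma_prev @ A.T + Gamma

    # Fuse the prediction with the gaussian approximation of p(state | z).
    M_inv = np.linalg.inv(M)
    Q_inv = np.linalg.inv(Q(z))
    S_inv = np.linalg.inv(S)
    Sigma_inv = M_inv + Q_inv - S_inv
    # Safeguard: if subtracting S^{-1} breaks positive definiteness,
    # drop that term (an assumed fallback for this sketch).
    if np.any(np.linalg.eigvalsh(Sigma_inv) <= 0):
        Sigma_inv = M_inv + Q_inv
    Sigma = np.linalg.inv(Sigma_inv)
    mu = Sigma @ (M_inv @ nu + Q_inv @ f(z))
    return mu, Sigma
```

Each step inverts only matrices of the latent-state dimension, which is why the cost stays close to a standard Kalman update even when the observation model is learned nonparametrically.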
Journal Articles
Neural Computation (2018) 30 (11): 2986–3008.
Published: 01 November 2018
Abstract
Intracortical brain-computer interfaces can enable individuals with paralysis to control external devices through voluntarily modulated brain activity. Decoding quality has previously been shown to degrade with signal nonstationarities, specifically changes in the statistics of the data between training and testing data sets. These include changes to the neural tuning profiles and baseline shifts in the firing rates of recorded neurons, as well as nonphysiological noise. While progress has been made toward providing long-term user control via decoder recalibration, relatively little work has been dedicated to making the decoding algorithm more resilient to signal nonstationarities. Here, we describe how principled kernel selection with gaussian process regression can be used within a Bayesian filtering framework to mitigate the effects of commonly encountered nonstationarities. Given a supervised training set of pairs of neural features and intended movement directions, we use gaussian process regression to predict the intention given the neural data. We apply a standard radial basis function kernel to each neural feature separately. These per-feature kernels are then summed, which allows the combined kernel to effectively ignore large differences that occur in only a single feature. The summed kernel is used for real-time predictions of the posterior mean and variance under a gaussian process framework. The predictions are then filtered using the discriminative Kalman filter to produce an estimate of the neural intention given the history of neural data. We refer to the multiple-kernel approach combined with the discriminative Kalman filter as the MK-DKF. We found that the MK-DKF decoder was more resilient to nonstationarities frequently encountered in real-world settings yet provided performance similar to that of the currently used Kalman decoder. These results demonstrate a method by which neural decoding can be made more resistant to nonstationarities.
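The summed-kernel construction is compact enough to sketch. Below is a minimal numpy version under assumed hyperparameters (per-feature lengthscales, noise variance); the paper's actual feature processing and kernel selection are not reproduced here. The predicted posterior mean and variance are what the DKF then filters.

```python
import numpy as np

def summed_rbf_kernel(X1, X2, lengthscales):
    """k(x, x') = sum_d exp(-(x_d - x'_d)^2 / (2 * l_d^2)).

    Summing (rather than multiplying) the per-feature RBF kernels lets a
    large change confined to one neural feature perturb only one summand.
    """
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for d in range(X1.shape[1]):
        sq_dist = (X1[:, d:d + 1] - X2[:, d:d + 1].T) ** 2
        K += np.exp(-sq_dist / (2.0 * lengthscales[d] ** 2))
    return K

def gp_predict(X_train, y_train, X_test, lengthscales, noise_var=1e-2):
    """Standard GP regression posterior mean and variance, summed kernel."""
    n = X_train.shape[0]
    K = summed_rbf_kernel(X_train, X_train, lengthscales) + noise_var * np.eye(n)
    Ks = summed_rbf_kernel(X_test, X_train, lengthscales)
    Kss = summed_rbf_kernel(X_test, X_test, lengthscales)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```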
Journal Articles
Neural Computation (2015) 27 (1): 104–150.
Published: 01 January 2015
Abstract
The collective dynamics of neural ensembles create complex spike patterns with many spatial and temporal scales. Understanding the statistical structure of these patterns can help resolve fundamental questions about neural computation and neural dynamics. Spatiotemporal conditional inference (STCI) is introduced here as a semiparametric statistical framework for investigating the nature of precise spiking patterns from collections of neurons; it is robust to arbitrarily complex and nonstationary coarse spiking dynamics. The main idea is to focus statistical modeling and inference not on the full distribution of the data, but rather on families of conditional distributions of precise spiking given different types of coarse spiking. The framework is then used to develop families of hypothesis tests for probing the spatiotemporal precision of spiking patterns. Relationships among different conditional distributions are used to improve multiple hypothesis-testing adjustments and to design novel Monte Carlo spike resampling algorithms. Of special note are algorithms that can locally jitter spike times while still preserving the instantaneous peristimulus time histogram or the instantaneous total spike count from a group of recorded neurons. The framework can also be used to test whether first-order maximum entropy models with possibly random and time-varying parameters can account for observed patterns of spiking. STCI provides a detailed example of the generic principle of conditional inference, which may be applicable to other areas of neurostatistical analysis.
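The resampling idea is easiest to see in its simplest form. The numpy sketch below implements plain interval jitter, which preserves the coarse window-level spike counts exactly while randomizing precise timing; the PSTH- and count-preserving variants described in the abstract impose additional constraints not shown here.

```python
import numpy as np

def interval_jitter(spike_times, width, rng=None):
    """Plain interval jitter: redraw each spike uniformly within its own
    window of the given width. Window-level spike counts (the coarse
    spiking) are preserved exactly; precise timing is randomized."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(spike_times, dtype=float)
    window_index = np.floor(t / width)  # which coarse window each spike occupies
    return np.sort(window_index * width + rng.uniform(0.0, width, size=t.shape))
```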
Journal Articles
Neural Computation (2013) 25 (2): 418–449.
Published: 01 February 2013
Abstract
Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged correlation between many simultaneously recorded spike trains using interval jitter.
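As a rough sketch of how importance sampling accelerates a Monte Carlo test, the generic self-normalized estimator below targets a tail probability P(T >= t_obs) under the null by drawing from a proposal that over-samples the rejection region. The callables sample_proposal, log_weight, and statistic are placeholders; the paper's specific constructions for permutation and jitter tests are not reproduced here.

```python
import numpy as np

def is_pvalue(t_obs, sample_proposal, log_weight, statistic, n=10000, rng=None):
    """Self-normalized importance-sampling estimate of P(T >= t_obs) under
    the null, using a proposal that over-samples the rejection region.

    sample_proposal(rng) -> one resampled dataset drawn from the proposal q
    log_weight(x)        -> log(p_null(x) / q(x)), up to an additive constant
    statistic(x)         -> test statistic T(x)
    """
    rng = np.random.default_rng() if rng is None else rng
    log_w = np.empty(n)
    exceeds = np.empty(n)
    for i in range(n):
        x = sample_proposal(rng)
        log_w[i] = log_weight(x)
        exceeds[i] = float(statistic(x) >= t_obs)
    w = np.exp(log_w - log_w.max())  # stabilize; constants cancel in the ratio
    return float(np.sum(w * exceeds) / np.sum(w))
```

Because rare rejections are sampled far more often under the proposal and then down-weighted, far fewer resamples are needed for a small p-value than with naive resampling.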
Journal Articles
Neural Computation (2009) 21 (5): 1244–1258.
Published: 01 May 2009
Abstract
Resampling methods are popular tools for exploring the statistical structure of neural spike trains. In many applications, it is desirable to have resamples that preserve certain non-Poisson properties, like refractory periods and bursting, and that are also robust to trial-to-trial variability. Pattern jitter is a resampling technique that accomplishes this by preserving the recent spiking history of all spikes and constraining resampled spikes to remain close to their original positions. The resampled spike times are maximally random up to these constraints. Dynamic programming is used to create an efficient resampling algorithm.
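A simplified version of the sampler conveys the dynamic programming idea: count the valid configurations in a forward pass, then sample one uniformly in a backward pass. The sketch below enforces only a window constraint and a minimum gap r between consecutive spikes, a stand-in for the richer recent-history constraint the abstract describes.

```python
import numpy as np

def pattern_jitter_sample(spikes, L, r, rng=None):
    """Sample spike times uniformly at random subject to two constraints:
    each spike stays within +/- L grid steps of its original position, and
    consecutive spikes are at least r steps apart. Assumes the original
    (sorted, integer) spike train already satisfies the gap constraint."""
    rng = np.random.default_rng() if rng is None else rng
    windows = [np.arange(s - L, s + L + 1) for s in spikes]
    # Forward pass: counts[i][j] = number of valid placements of spikes
    # 0..i with spike i sitting at windows[i][j].
    counts = [np.ones(len(windows[0]))]
    for i in range(1, len(spikes)):
        prev = counts[-1]
        counts.append(np.array(
            [prev[windows[i - 1] <= x - r].sum() for x in windows[i]]))
    # Backward pass: sample each spike position given the one after it,
    # so the full configuration is exactly uniform over the valid set.
    out = np.empty(len(spikes), dtype=int)
    out[-1] = rng.choice(windows[-1], p=counts[-1] / counts[-1].sum())
    for i in range(len(spikes) - 2, -1, -1):
        w = counts[i] * (windows[i] <= out[i + 1] - r)
        out[i] = rng.choice(windows[i], p=w / w.sum())
    return out
```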