Helene G. Moorman
Journal Articles
Publisher: Journals Gateway
Neural Computation (2014) 26 (9): 1811–1839.
Published: 01 September 2014
Abstract
Closed-loop decoder adaptation (CLDA) is an emerging paradigm for both improving and maintaining online performance in brain-machine interfaces (BMIs). The time required for initial decoder training and any subsequent decoder recalibrations could potentially be reduced by performing continuous adaptation, in which decoder parameters are updated at every time step during these procedures, rather than waiting to update the decoder at periodic intervals in a more batch-based process. Here, we present recursive maximum likelihood (RML), a CLDA algorithm that performs continuous adaptation of a Kalman filter decoder's parameters. We demonstrate that RML possesses a variety of useful properties and practical algorithmic advantages. First, we show how RML leverages the accuracy of updates based on a batch of data while still adapting parameters on every time step. Second, we illustrate how the RML algorithm is parameterized by a single, intuitive half-life parameter that can be used to adjust the rate of adaptation in real time. Third, we show how, even when the number of neural features is very large, RML's memory-efficient recursive update rules can be reformulated to be computationally fast as well, so that continuous adaptation remains feasible. To test the algorithm in closed-loop experiments, we trained three macaque monkeys to perform a center-out reaching task, using either spiking activity or local field potentials to control a 2D computer cursor. RML achieved higher levels of performance more rapidly than a previous CLDA algorithm that adapts parameters on a more intermediate timescale. Overall, our results indicate that RML is an effective CLDA algorithm for achieving rapid performance acquisition using continuous adaptation.
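The continuous, half-life-parameterized adaptation described in this abstract can be sketched briefly. The Python below is a minimal, hypothetical sketch rather than the published RML update rules: it assumes the decoder's observation model y_t = C x_t + q_t, with q_t ~ N(0, Q), is refit at every time step from exponentially weighted sufficient statistics, with the forgetting factor derived from the half-life as lam = 2 ** (-1 / half_life); the class name and interface are invented for illustration.

import numpy as np


class RecursiveObservationModel:
    """Exponentially weighted recursive ML estimate of a Kalman observation model (C, Q).

    Illustrative sketch only; not the published RML algorithm.
    """

    def __init__(self, n_features, n_states, half_life):
        self.lam = 2.0 ** (-1.0 / half_life)          # forgetting factor from half-life
        self.R = np.eye(n_states) * 1e-3              # running sum of x x^T (lightly regularized)
        self.S = np.zeros((n_features, n_states))     # running sum of y x^T
        self.T = np.zeros((n_features, n_features))   # running sum of y y^T
        self.n = 1e-3                                 # effective sample count

    def set_half_life(self, half_life):
        # The adaptation rate can be changed on the fly via the half-life.
        self.lam = 2.0 ** (-1.0 / half_life)

    def update(self, y, x):
        """Fold in one (neural features y, intended state x) pair and refit (C, Q)."""
        self.R = self.lam * self.R + np.outer(x, x)
        self.S = self.lam * self.S + np.outer(y, x)
        self.T = self.lam * self.T + np.outer(y, y)
        self.n = self.lam * self.n + 1.0
        C = self.S @ np.linalg.pinv(self.R)           # ML estimate of the observation matrix
        Q = (self.T - C @ self.S.T) / self.n          # ML estimate of the noise covariance
        return C, Q


# Example usage with synthetic data (illustrative only):
rng = np.random.default_rng(0)
model = RecursiveObservationModel(n_features=16, n_states=4, half_life=120.0)
for _ in range(500):
    x = rng.normal(size=4)                            # intended kinematic state
    y = rng.normal(size=16) + 0.5 * np.tile(x, 4)     # surrogate neural features
    C_hat, Q_hat = model.update(y, x)

Because only the fixed-size statistics (R, S, T, n) are stored, the per-step memory cost does not grow with time, which is one way a recursive formulation can keep continuous adaptation tractable.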
Journal Articles
Publisher: Journals Gateway
Neural Computation (2013) 25 (7): 1693–1731.
Published: 01 July 2013
Abstract
Closed-loop decoder adaptation (CLDA) is an emerging paradigm for achieving rapid performance improvements in online brain-machine interface (BMI) operation. Designing an effective CLDA algorithm requires making multiple important decisions, including choosing the timescale of adaptation, selecting which decoder parameters to adapt, crafting the corresponding update rules, and designing CLDA parameters. These design choices, combined with the specific settings of CLDA parameters, will directly affect the algorithm's ability to make decoder parameters converge to values that optimize performance. In this article, we present a general framework for the design and analysis of CLDA algorithms and support our results with experimental data from two monkeys performing a BMI task. First, we analyze and compare existing CLDA algorithms to highlight the importance of four critical design elements: the adaptation timescale, selective parameter adaptation, smooth decoder updates, and intuitive CLDA parameters. Second, we introduce mathematical convergence analysis, using measures such as mean-squared error and Kullback-Leibler (KL) divergence, as a useful paradigm for evaluating the convergence properties of a prototype CLDA algorithm before experimental testing. By applying these measures to an existing CLDA algorithm, we demonstrate that our convergence analysis is an effective analytical tool that can ultimately inform and improve the design of CLDA algorithms.
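The convergence measures named in this abstract, mean-squared error and KL divergence, can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not the authors' exact formulation: it assumes decoder parameter estimates are compared against hypothetical reference values (C_ref, Q_ref), and that the KL divergence is taken between Gaussian predicted neural-feature distributions at a probe state; all function names are invented for this example.

import numpy as np


def mean_squared_error(C_est, C_ref):
    """Element-wise MSE between an estimated and a reference parameter matrix."""
    return np.mean((C_est - C_ref) ** 2)


def gaussian_kl(mu0, Sigma0, mu1, Sigma1):
    """KL divergence D( N(mu0, Sigma0) || N(mu1, Sigma1) ) for multivariate Gaussians."""
    k = mu0.shape[0]
    Sigma1_inv = np.linalg.inv(Sigma1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(Sigma1_inv @ Sigma0)
        + diff @ Sigma1_inv @ diff
        - k
        + np.log(np.linalg.det(Sigma1) / np.linalg.det(Sigma0))
    )


def observation_model_kl(C_est, Q_est, C_ref, Q_ref, x):
    """KL between predicted neural-feature distributions at a probe state x."""
    return gaussian_kl(C_est @ x, Q_est, C_ref @ x, Q_ref)

Tracking such measures over successive CLDA updates gives a simple numerical picture of whether, and how quickly, a candidate update rule drives the decoder parameters toward a reference solution before committing to closed-loop experiments.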