Yiwen Wang
1-5 of 5
Journal Articles
Publisher: Journals Gateway
Neural Computation (2021) 33 (5): 1372–1401.
Published: 13 April 2021
Abstract
Motor brain-machine interfaces (BMIs) translate neural activity from motor-related cortical areas of the brain into movement commands that control a prosthesis. As a subject adapts to controlling the neural prosthesis, the medial prefrontal cortex (mPFC), upstream of the primary motor cortex (M1), is heavily involved in reward-guided motor learning. Considering mPFC and M1 functionality within a hierarchical structure could therefore improve the effectiveness of BMI decoding while subjects are learning. The commonly used Kalman decoding method, with only one simple state model, may not be able to represent the multiple brain states that evolve over time as well as along the neural pathway. In addition, the performance of Kalman decoders degenerates under heavy-tailed, nongaussian noise, which typically arises from the nonlinear neural system or from movement-related artifacts in online neural recording. In this letter, we propose a hierarchical model to represent the brain states from multiple cortical areas that evolve along the neural pathway. We then introduce correntropy theory into the hierarchical structure to address the heavy-tailed noise present in neural recordings. We test the proposed algorithm on in vivo recordings collected from the mPFC and M1 of two rats as the subjects learned to perform a lever-pressing task. Compared with the classic Kalman filter, our results demonstrate better movement decoding performance, owing to the hierarchical structure, which integrates past failed-trial information across multisite recordings, combined with the correntropy criterion to handle heavy-tailed noise in the neural recordings.
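As a rough illustration of the correntropy idea in the abstract above (not the authors' exact algorithm; the matrices, kernel bandwidth, and fixed-point loop below are our own simplified assumptions), a Kalman measurement update can be reweighted by a Gaussian kernel of the innovation so that heavy-tailed outlier observations barely move the estimate:

```python
import numpy as np

def correntropy_kalman_step(x, P, y, A, C, Q, R, sigma=2.0, iters=3):
    """One predict/update cycle of a Kalman filter whose measurement
    update is reweighted by a Gaussian (correntropy) kernel of the
    innovation, downweighting heavy-tailed outlier observations."""
    # Predict through the linear state model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Fixed-point correntropy update: an outlier-like innovation gets a
    # small kernel weight w, inflating the effective noise R / w.
    x_new = x_pred.copy()
    for _ in range(iters):
        r = y - C @ x_new
        w = np.exp(-(r.T @ r).item() / (2 * sigma**2))  # weight in (0, 1]
        R_eff = R / max(w, 1e-8)
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R_eff)
        x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

With a clean observation this behaves like a standard Kalman update; with a gross outlier the kernel weight collapses and the state stays near its prediction.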
Neural Computation (2020) 32 (10): 1863–1900.
Published: 01 October 2020
Figures: 16
Abstract
Modeling spike train transformation among brain regions helps in designing a cognitive neural prosthesis that restores lost cognitive functions. Various methods analyze the nonlinear dynamic spike train transformation between two cortical areas, but with low computational efficiency. The application of a real-time neural prosthesis requires computational efficiency, performance stability, and better interpretation of the neural firing patterns that modulate target spike generation. We propose the binless kernel machine in the point-process framework to describe nonlinear dynamic spike train transformations. Our approach embeds the binless kernel to efficiently capture the feedforward dynamics of spike trains and maps the input spike timings into a reproducing kernel Hilbert space (RKHS). An inhomogeneous Bernoulli process is designed to combine with a kernel logistic regression that operates on the binless kernel to generate an output spike train as a point process. Weights of the proposed model are estimated by maximizing the log likelihood of output spike trains in RKHS, which allows a globally optimal solution. To reduce computational complexity, we design a streaming-based clustering algorithm to extract typical and important spike train features. The cluster centers and their weights enable visualization of the important input spike train patterns that motivate or inhibit output neuron firing. We test the proposed model on both synthetic data and real spike train data recorded from the dorsal premotor cortex and the primary motor cortex of a monkey performing a center-out task. Performances are evaluated by discrete-time rescaling Kolmogorov-Smirnov tests. Our model outperforms the existing methods with higher stability regardless of weight initialization and demonstrates higher efficiency in analyzing neural patterns from spike timing with less historical input (50%). Meanwhile, the typical spike train patterns selected according to their weights are validated to encode output spikes from the spike train of a single input neuron and from the interaction of two input neurons.
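A much-simplified sketch of the binless-kernel idea: recent input spike timings are compared against a few stored timing patterns via a kernel, and a logistic regression on those kernel values gives the Bernoulli firing probability for the current bin. The lag-vector feature, Gaussian kernel, and all parameter values here are our own illustrative assumptions, not the paper's streaming-clustered RKHS machinery.

```python
import numpy as np

def spike_timing_feature(spike_times, t, n_spikes=3, max_lag=1.0):
    """Lags of the n most recent input spikes before time t -- a crude
    stand-in for a binless spike-timing representation."""
    past = spike_times[spike_times < t]
    lags = (t - past[-n_spikes:])[::-1]
    # Pad with max_lag when fewer than n_spikes have occurred.
    return np.pad(lags, (n_spikes - len(lags), 0), constant_values=max_lag)

def firing_probability(t, spike_times, centers, weights, bias, sigma=0.05):
    """Kernel logistic regression on the timing feature: kernel
    similarities to stored pattern centers feed a sigmoid, giving an
    inhomogeneous Bernoulli probability of an output spike at t."""
    f = spike_timing_feature(spike_times, t)
    k = np.exp(-np.sum((centers - f) ** 2, axis=1) / (2 * sigma**2))
    return 1.0 / (1.0 + np.exp(-(weights @ k + bias)))
```

An input pattern matching a positively weighted center raises the firing probability; one matching a negatively weighted center suppresses it, mirroring the excitatory/inhibitory pattern interpretation described above.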
Neural Computation (2018) 30 (12): 3189–3226.
Published: 01 December 2018
Figures: 13
Abstract
Neurons communicate nonlinearly through spike activities. Generalized linear models (GLMs) describe spike activities with a cascade of a linear combination across inputs, a static nonlinear function, and an inhomogeneous Bernoulli or Poisson process, or Cox process if a self-history term is considered. This structure considers the output nonlinearity in spike generation but excludes the nonlinear interaction among input neurons. Recent studies extend GLMs by modeling the interaction among input neurons with a quadratic function, which considers the interaction between every pair of input spikes. However, quadratic effects may not fully capture the nonlinear nature of input interaction. We therefore propose a staged point-process model to describe the nonlinear interaction among inputs using a few hidden units, which follows the idea of artificial neural networks. The output firing probability conditioned on inputs is formed as a cascade of two linear-nonlinear (a linear combination plus a static nonlinear function) stages and an inhomogeneous Bernoulli process. Parameters of this model are estimated by maximizing the log likelihood on output spike trains. Unlike the iterative reweighted least squares algorithm used in GLMs, where the performance is guaranteed by the concave condition, we propose a modified Levenberg-Marquardt (L-M) algorithm, which directly calculates the Hessian matrix of the log likelihood, for the nonlinear optimization in our model. The proposed model is tested on both synthetic data and real spike train data recorded from the dorsal premotor cortex and primary motor cortex of a monkey performing a center-out task. Performances are evaluated by discrete-time rescaled Kolmogorov-Smirnov tests, where our model statistically outperforms a GLM and its quadratic extension, with a higher goodness-of-fit in the prediction results. In addition, the staged point-process model describes nonlinear interaction among input neurons with fewer parameters than quadratic models, and the modified L-M algorithm also demonstrates fast convergence.
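The two-stage cascade can be sketched as follows; the hidden-unit count, the tanh nonlinearity, and the plain likelihood evaluation are our own illustrative assumptions (the paper fits such weights with a modified Levenberg-Marquardt algorithm, not shown here):

```python
import numpy as np

def staged_firing_prob(x, W1, b1, w2, b2):
    """Cascade of two linear-nonlinear stages: hidden units model
    nonlinear interactions among input spike histories, and a second
    linear-sigmoid stage yields the Bernoulli firing probability."""
    h = np.tanh(W1 @ x + b1)             # stage 1: hidden units
    logit = w2 @ h + b2                  # stage 2: linear readout
    return 1.0 / (1.0 + np.exp(-logit))  # static output nonlinearity

def neg_log_likelihood(spikes, X, W1, b1, w2, b2, eps=1e-12):
    """Bernoulli negative log likelihood of the output spike train --
    the objective whose Hessian a Levenberg-Marquardt-style optimizer
    would use when fitting the weights."""
    p = np.array([staged_firing_prob(x, W1, b1, w2, b2) for x in X])
    return -np.sum(spikes * np.log(p + eps)
                   + (1 - spikes) * np.log(1 - p + eps))
```

Because the hidden layer is shared across inputs, interactions among input neurons cost far fewer parameters than enumerating every pairwise quadratic term.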
Neural Computation (2010) 22 (3): 730–751.
Published: 01 March 2010
Figures: 9
Abstract
We present a simple optimization criterion that leads to autonomous development of a sensorimotor feedback loop driven by the neural representation of depth in the mammalian visual cortex. Our test bed is an active stereo vision system in which the vergence angle between the two eyes is controlled by the output of a population of disparity-selective neurons. By finding a policy that maximizes the total response across the neuron population, the system eventually tracks a target as it moves in depth. We characterized the tracking performance of the resulting policy using objects moving both sinusoidally and randomly in depth. Surprisingly, the system can even learn how to track from stimuli it cannot track: even though the closed-loop 3 dB tracking bandwidth of the system is 0.3 Hz, correct tracking policies are learned for input stimuli moving as fast as 0.75 Hz.
Includes: Supplementary data
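The optimization criterion in the abstract above (choose vergence movements that maximize the summed response of the disparity-tuned population) can be caricatured with a greedy one-step policy; the Gaussian tuning curves, discrete action set, and parameter values are our own illustrative assumptions, not the paper's learned policy:

```python
import numpy as np

def population_response(disparity, preferred, sigma=0.5):
    """Responses of Gaussian disparity-tuned neurons; the summed
    response is largest when residual disparity is near zero."""
    return np.exp(-((disparity - preferred) ** 2) / (2 * sigma**2))

def greedy_vergence_step(vergence, target_depth, preferred,
                         actions=(-0.2, 0.0, 0.2)):
    """Choose the vergence change that maximizes the total population
    response -- the reward-like criterion driving the tracking loop."""
    def total_response(v):
        residual = target_depth - v  # disparity remaining after verging
        return population_response(residual, preferred).sum()
    return max(actions, key=lambda a: total_response(vergence + a))
```

Repeatedly applying such a step drives residual disparity toward zero, so the eyes converge on the target as it moves in depth.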
Neural Computation (2009) 21 (10): 2894–2930.
Published: 01 October 2009
Figures: 11
Abstract
Many decoding algorithms for brain-machine interfaces (BMIs) estimate hand movement from binned spike rates, which do not fully exploit the resolution contained in spike timing and may exclude rich neural dynamics from the modeling. More recently, an adaptive filtering method based on a Bayesian approach to reconstruct the neural state from the observed spike times has been proposed. However, it assumes and propagates a gaussian-distributed state posterior density, which in general is too restrictive. We have also proposed a sequential Monte Carlo estimation methodology to reconstruct the kinematic states directly from multichannel spike trains. This letter presents a systematic test of this algorithm, first in a simulated neural spike train decoding experiment and then on BMI data. Compared to a point-process adaptive filtering algorithm with a linear observation model and a gaussian approximation (the counterpart for point processes of the Kalman filter), our sequential Monte Carlo methodology exploits a detailed encoding model (tuning function) derived for each neuron from training data, and this added complexity translates into higher performance with real data. To deal with the intrinsic spike randomness in online modeling, several synthetic spike trains are generated from the intensity function estimated for each neuron and used as extra model inputs in an attempt to decrease the variance of the kinematic predictions. The sequential Monte Carlo methodology augmented with this synthetic spike input provides improved reconstruction, which raises interesting questions and helps explain the overall modeling requirements better.
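A minimal sketch of one step of the sequential Monte Carlo (particle filter) decoding loop described above; the random-walk kinematic model, Poisson spike-count likelihood, and tuning curves are our own simplified assumptions, not the letter's per-neuron encoding models:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, spike_counts, tuning,
                         q_std=0.1, dt=0.01):
    """One sequential Monte Carlo step: propagate kinematic particles
    through a random-walk state model, reweight each particle by the
    Poisson likelihood of the observed spike counts under the neurons'
    tuning curves, estimate the state, then resample."""
    # State transition: random-walk kinematics.
    particles = particles + rng.normal(0.0, q_std, size=particles.shape)
    # Point-process observation model: expected counts per neuron.
    rates = np.array([tuning(x) for x in particles]) * dt
    log_lik = (spike_counts * np.log(rates + 1e-12) - rates).sum(axis=1)
    weights = weights * np.exp(log_lik - log_lik.max())
    weights /= weights.sum()
    estimate = np.sum(weights * particles)   # posterior-mean kinematics
    # Resample to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles)), estimate
```

With two neurons tuned to opposite positions, spikes from the neuron tuned near +1 pull the weighted estimate toward +1, which is the sense in which the particle cloud exploits each neuron's tuning function rather than a single gaussian posterior.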