Sridevi V. Sarma: results 1-4 of 4
Neural Computation (2020) 32 (5): 865–886.
Published: 01 May 2020
Abstract
The ability to move fast and accurately track moving objects is fundamentally constrained by the biophysics of neurons and dynamics of the muscles involved. Yet the corresponding trade-offs between these factors and tracking motor commands have not been rigorously quantified. We use feedback control principles to quantify performance limitations of the sensorimotor control system (SCS) to track fast periodic movements. We show that (1) linear models of the SCS fail to predict known undesirable phenomena, including skipped cycles, overshoot and undershoot, produced when tracking signals in the “fast regime,” while nonlinear pulsatile control models can predict such undesirable phenomena, and (2) tools from nonlinear control theory allow us to characterize fundamental limitations in this fast regime. Using a validated and tractable nonlinear model of the SCS, we derive an analytical upper bound on frequencies that the SCS model can reliably track before producing such undesirable phenomena as a function of the neurons' biophysical constraints and muscle dynamics. The performance limitations derived here have important implications in sensorimotor control. For example, if the primary motor cortex is compromised due to disease or damage, the theory suggests ways to manipulate muscle dynamics by adding the necessary compensatory forces using an assistive neuroprosthetic device to restore motor performance and, more important, fast and agile movements. Just how one should compensate can be informed by our SCS model and the theory developed here.
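To make the fast-regime trade-off concrete, here is a minimal sketch (assuming a first-order muscle model and a pulse-rate-limited on/off controller, both hypothetical simplifications rather than the authors' SCS model) in which tracking error grows with the frequency of the reference signal:

import numpy as np

def simulate_tracking(freq_hz, amp=1.0, tau=0.05, pulse_amp=8.0,
                      pulse_width=0.01, refractory=0.02, dt=1e-4, t_end=2.0):
    """Toy pulsatile tracker: a first-order 'muscle' driven by on/off pulses.

    The controller may only emit rectangular force pulses of fixed width,
    separated by a refractory period (a crude stand-in for neuronal
    biophysical constraints). Returns time, reference, and output traces.
    """
    t = np.arange(0.0, t_end, dt)
    ref = amp * np.sin(2 * np.pi * freq_hz * t)
    x = 0.0
    out = np.zeros_like(t)
    pulse_end = -np.inf      # time at which the current pulse stops
    next_allowed = 0.0       # earliest time a new pulse may start
    u_sign = 0.0
    for k, tk in enumerate(t):
        err = ref[k] - x
        if tk >= next_allowed and abs(err) > 0.05 * amp:
            u_sign = np.sign(err)
            pulse_end = tk + pulse_width
            next_allowed = pulse_end + refractory
        u = pulse_amp * u_sign if tk < pulse_end else 0.0
        x += dt * (-x + u) / tau          # first-order muscle dynamics
        out[k] = x
    return t, ref, out

# Tracking error grows as the reference frequency enters the "fast regime."
for f in (1.0, 4.0, 12.0):
    t, ref, out = simulate_tracking(f)
    rmse = np.sqrt(np.mean((ref - out) ** 2))
    print(f"{f:5.1f} Hz: tracking RMSE = {rmse:.3f}")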
Neural Computation (2016) 28 (7): 1356–1387.
Published: 01 July 2016
Abstract
Pyramidal neurons recorded from the rat hippocampus and entorhinal cortex, such as place and grid cells, have diverse receptive fields, which are either unimodal or multimodal. Spiking activity from these cells encodes information about the spatial position of a freely foraging rat. At fine timescales, a neuron’s spike activity also depends significantly on its own spike history. However, due to limitations of current parametric modeling approaches, it remains a challenge to estimate complex, multimodal neuronal receptive fields while incorporating spike history dependence. Furthermore, efforts to decode the rat’s trajectory in one- or two-dimensional space from hippocampal ensemble spiking activity have mainly focused on spike history–independent neuronal encoding models. In this letter, we address these two important issues by extending a recently introduced nonparametric neural encoding framework that allows modeling both complex spatial receptive fields and spike history dependencies. Using this extended nonparametric approach, we develop novel algorithms for decoding a rat’s trajectory based on recordings of hippocampal place cells and entorhinal grid cells. Results show that both encoding and decoding models derived from our new method performed significantly better than state-of-the-art encoding and decoding models on 6 minutes of test data. In addition, our model’s performance remains invariant to the apparent modality of the neuron’s receptive field.
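A much simpler, spike history-independent decoder conveys the basic encoding-decoding setup the abstract builds on. The sketch below (synthetic Gaussian place fields and a flat prior, all hypothetical, not the nonparametric method of the letter) decodes position in a single time bin by maximizing a Poisson likelihood over position bins:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D track discretized into position bins.
positions = np.linspace(0.0, 1.0, 100)
dt = 0.25  # decoding bin width in seconds

# Toy unimodal Gaussian place fields for a small ensemble (not real data).
centers = rng.uniform(0.0, 1.0, size=20)
widths = rng.uniform(0.05, 0.15, size=20)
peak_rates = rng.uniform(5.0, 20.0, size=20)          # spikes/s
tuning = peak_rates[:, None] * np.exp(
    -0.5 * ((positions[None, :] - centers[:, None]) / widths[:, None]) ** 2
)

def decode_map(spike_counts, tuning, dt):
    """Poisson MAP decoder with a flat prior over position bins."""
    # log P(pos | counts) is proportional to sum over cells of n*log(lam*dt) - lam*dt
    lam = tuning * dt
    loglik = spike_counts @ np.log(lam + 1e-12) - lam.sum(axis=0)
    return np.argmax(loglik)

# Simulate one time bin of ensemble spiking at a true position and decode it.
true_idx = 60
counts = rng.poisson(tuning[:, true_idx] * dt)
print("true position:", positions[true_idx])
print("decoded position:", positions[decode_map(counts, tuning, dt)])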
Neural Computation (2014) 26 (10): 2294–2327.
Published: 01 October 2014
Abstract
Epilepsy is a network phenomenon characterized by atypical activity at the neuronal and population levels during seizures, including tonic spiking, increased heterogeneity in spiking rates, and synchronization. The etiology of epilepsy is unclear, but a common theme among proposed mechanisms is that structural connectivity between neurons is altered. It is hypothesized that epilepsy arises not from random changes in connectivity, but from specific structural changes to the most fragile nodes or neurons in the network. In this letter, the minimum energy perturbation on functional connectivity required to destabilize linear networks is derived. Perturbation results are then applied to a probabilistic nonlinear neural network model that operates at a stable fixed point. That is, if a small stimulus is applied to the network, the activation probabilities of each neuron respond transiently but eventually recover to their baseline values. When the perturbed network is destabilized, the activation probabilities shift to larger or smaller values or oscillate when a small stimulus is applied. The structural modifications to the neural network that achieve the functional perturbation are then derived. Simulations of the unperturbed and perturbed networks qualitatively reflect neuronal activity observed in epilepsy patients, suggesting that the changes in network dynamics due to destabilizing perturbations, including the emergence of an unstable manifold or a stable limit cycle, may be indicative of neuronal or population dynamics during seizure. That is, the epileptic cortex is always on the brink of instability and minute changes in the synaptic weights associated with the most fragile node can suddenly destabilize the network to cause seizures. Finally, the theory developed here and its interpretation of epileptic networks enable the design of a straightforward feedback controller that first detects when the network has destabilized and then applies linear state feedback control to steer the network back to its stable state.
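The minimum-energy perturbation result has a classical unstructured analogue: for a stable linear network dx/dt = Ax, the smallest (spectral-norm, complex) perturbation that moves an eigenvalue onto the imaginary axis has norm equal to the minimum over omega of sigma_min(A - j*omega*I). The sketch below (random synthetic connectivity; the unstructured complex version of the problem rather than the letter's specific derivation) computes that distance and constructs a rank-one destabilizing perturbation:

import numpy as np

def distance_to_instability(A, omegas):
    """Classical unstructured distance to instability of a stable (Hurwitz) A:
    min over omega of the smallest singular value of (A - j*omega*I)."""
    n = A.shape[0]
    svals = [np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False)[-1]
             for w in omegas]
    k = int(np.argmin(svals))
    return svals[k], omegas[k]

def minimal_destabilizing_perturbation(A, omega):
    """Rank-one complex perturbation of norm sigma_min(A - j*omega*I) that
    places an eigenvalue of A + Delta exactly at j*omega."""
    n = A.shape[0]
    U, s, Vh = np.linalg.svd(A - 1j * omega * np.eye(n))
    u, sigma, v = U[:, -1], s[-1], Vh[-1].conj()
    return -sigma * np.outer(u, v.conj())

rng = np.random.default_rng(1)
W = rng.normal(scale=0.3, size=(6, 6))
A = W - 1.5 * np.eye(6)                  # stable toy "functional connectivity"
omegas = np.linspace(0.0, 5.0, 2001)
dist, w_star = distance_to_instability(A, omegas)
Delta = minimal_destabilizing_perturbation(A, w_star)
print("distance to instability ~", round(dist, 4))
print("max Re(eig(A)) before:", np.max(np.linalg.eigvals(A).real).round(4))
print("max Re(eig(A+Delta)) after:", np.max(np.linalg.eigvals(A + Delta).real).round(4))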
Neural Computation (2011) 23 (11): 2731–2745.
Published: 01 November 2011
Abstract
Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specification of the model, estimation of model parameters given observed data, verification of the model using goodness of fit, and characterization of the model using confidence bounds. Of these steps, only the first three have been applied widely in the literature, suggesting the need to dedicate a discussion to how the time-rescaling theorem, in combination with parametric bootstrap sampling, can be generally used to compute confidence bounds of point process models. In our first example, we use a generalized linear model of spiking propensity to demonstrate that confidence bounds derived from bootstrap simulations are consistent with those computed from closed-form analytic solutions. In our second example, we consider an adaptive point process model of hippocampal place field plasticity for which no analytical confidence bounds can be derived. We demonstrate how to simulate bootstrap samples from adaptive point process models, how to use these samples to generate confidence bounds, and how to statistically test the hypothesis that neural representations at two time points are significantly different. These examples have been designed as useful guides for performing scientific inference based on point process models.
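The parametric-bootstrap step described for the first example can be sketched in a few lines. The illustration below (Python with statsmodels; a synthetic one-covariate Poisson GLM in 1 ms bins, hypothetical data rather than the letter's examples) fits the model, simulates bootstrap spike trains from the fit, refits each, and reports percentile confidence bounds on the estimated rate:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Discrete-time approximation of a point process: 1 ms bins, Poisson GLM
# with log-rate linear in a single covariate (e.g., position or phase).
dt = 0.001
x = np.linspace(-1.0, 1.0, 5000)                  # covariate over time bins
X = sm.add_constant(x)                            # design matrix [1, x]
true_beta = np.array([np.log(50.0 * dt), 1.0])    # baseline ~50 spikes/s
lam = np.exp(X @ true_beta)                       # expected counts per bin
y = rng.poisson(lam)                              # observed spike counts

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Parametric bootstrap: simulate spike trains from the fitted model,
# refit, and take percentile bounds on the predicted rate (spikes/s).
n_boot = 200
rate_hat = fit.predict(X) / dt
boot_rates = np.empty((n_boot, len(x)))
for b in range(n_boot):
    y_b = rng.poisson(fit.predict(X))
    fit_b = sm.GLM(y_b, X, family=sm.families.Poisson()).fit()
    boot_rates[b] = fit_b.predict(X) / dt

lo, hi = np.percentile(boot_rates, [2.5, 97.5], axis=0)
i = len(x) // 2
print(f"rate at x=0: {rate_hat[i]:.1f} spikes/s, "
      f"95% bootstrap CI [{lo[i]:.1f}, {hi[i]:.1f}]")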