1-11 of 11
Uri T. Eden
Journal Articles
Publisher: Journals Gateway
Neural Computation (2022) 34 (5): 1100–1135.
Published: 15 April 2022
Abstract
With the accelerated development of neural recording technology over the past few decades, research in integrative neuroscience has become increasingly reliant on data analysis methods that are scalable to high-dimensional recordings and computationally tractable. Latent process models have shown promising results in estimating the dynamics of cognitive processes using individual models for each neuron's receptive field. However, scaling these models to work on high-dimensional neural recordings remains challenging. Not only is it impractical to build receptive field models for individual neurons of a large neural population, but most neural data analyses based on individual receptive field models discard the local history of neural activity, which has been shown to be critical in the accurate inference of the underlying cognitive processes. Here, we propose a novel, scalable latent process model that can directly estimate cognitive process dynamics without requiring precise receptive field models of individual neurons or brain nodes. We call this the direct discriminative decoder (DDD) model. The DDD model consists of (1) a discriminative process that characterizes the conditional distribution of the signal to be estimated, or state, as a function of both the current neural activity and its local history, and (2) a state transition model that characterizes the evolution of the state over a longer time period. While this modeling framework inherits advantages of existing latent process modeling methods, its computational cost is tractable. More important, the solution can incorporate any information from the history of neural activity at any timescale in computing the estimate of the state process. There are many choices in building the discriminative process, including deep neural networks or gaussian processes, which adds to the flexibility of the framework. 
We argue that these attributes of the proposed methodology, along with its applicability to different modalities of neural data, make it a powerful tool for high-dimensional neural data analysis. We also introduce an extension of these methods, called the discriminative-generative decoder (DGD). The DGD includes both discriminative and generative processes in characterizing observed data. As a result, we can combine physiological correlates like behavior with neural data to better estimate underlying cognitive processes. We illustrate the methods, including steps for inference and model identification, and demonstrate applications to multiple data analysis problems with high-dimensional neural recordings. The modeling results demonstrate the computational and modeling advantages of the DDD and DGD methods.
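The filtering idea behind the DDD can be sketched in a toy setting. This is not the authors' implementation: the discriminative process here is a plain least-squares readout standing in for the deep network or gaussian process models named in the abstract, the state is one-dimensional, and the local-history terms are omitted for brevity; all dimensions and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a latent state (random walk) and high-dimensional "neural" features
T, D = 200, 50
q = 0.05                                             # state transition noise variance
x = np.cumsum(rng.normal(0, np.sqrt(q), T))          # latent state
W = rng.normal(0, 1, D)                              # loading of state onto channels
Y = np.outer(x, W) + rng.normal(0, 1.0, (T, D))      # noisy neural features

# "Discriminative process": a linear readout predicting x_t from y_t
# (a stand-in for the deep network / gaussian process models in the abstract)
beta, *_ = np.linalg.lstsq(Y, x, rcond=None)
x_disc = Y @ beta                                    # pointwise discriminative estimate
r = np.var(x - x_disc)                               # its residual variance

# Fuse with the state transition model via a scalar Kalman-style filter,
# treating x_disc[t] as a pseudo-observation of x_t with variance r
x_hat = np.zeros(T); P = np.zeros(T)
x_hat[0], P[0] = x_disc[0], r
for t in range(1, T):
    x_pred, P_pred = x_hat[t - 1], P[t - 1] + q      # one-step prediction
    K = P_pred / (P_pred + r)                        # gain toward discriminative estimate
    x_hat[t] = x_pred + K * (x_disc[t] - x_pred)
    P[t] = (1 - K) * P_pred

mse_disc = np.mean((x - x_disc) ** 2)
mse_filt = np.mean((x - x_hat) ** 2)
```

The key structural point survives the simplification: the observation model is discriminative (state given neural activity), so no receptive field model per channel is ever fit, and the transition model only smooths the discriminative estimates over time.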
Neural Computation (2020) 32 (11): 2145–2186.
Published: 01 November 2020
Abstract
Marked point process models have recently been used to capture the coding properties of neural populations from multiunit electrophysiological recordings without spike sorting. These clusterless models have been shown in some instances to better describe the firing properties of neural populations than collections of receptive field models for sorted neurons and to lead to better decoding results. To assess their quality, we previously proposed a goodness-of-fit technique for marked point process models based on time rescaling, which for a correct model produces a set of uniform samples over a random region of space. However, assessing uniformity over such a region can be challenging, especially in high dimensions. Here, we propose a set of new transformations in both time and the space of spike waveform features, which generate events that are uniformly distributed in the new mark and time spaces. These transformations are scalable to multidimensional mark spaces and provide uniformly distributed samples in hypercubes, which are well suited for uniformity tests. We discuss the properties of these transformations and demonstrate aspects of model fit captured by each transformation. We also compare multiple uniformity tests to determine their power to identify lack-of-fit in the rescaled data. We demonstrate an application of these transformations and uniformity tests in a simulation study. Proofs for each transformation are provided in the appendix.
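The basic time-rescaling step underlying this goodness-of-fit machinery can be illustrated for an unmarked point process (the paper's contribution extends this to multidimensional mark spaces, which is not shown here; the intensity and simulation settings below are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Inhomogeneous Poisson process with a known intensity (the "correct model")
lam = lambda t: 20 + 15 * np.sin(2 * np.pi * t)      # spikes/s on t in [0, 10]
T_max, lam_max = 10.0, 35.0

# Simulate by thinning
cand = np.cumsum(rng.exponential(1 / lam_max, size=1000))
cand = cand[cand < T_max]
spikes = cand[rng.uniform(0, lam_max, size=cand.size) < lam(cand)]

# Time rescaling: integrate the intensity between successive spikes; for a
# correct model the rescaled intervals tau_k are Exp(1), so
# u_k = 1 - exp(-tau_k) should be Uniform(0, 1)
grid = np.linspace(0, T_max, 100001)
Lam = np.concatenate(([0.0], np.cumsum(lam(grid[:-1]) * np.diff(grid))))
Lam_at = np.interp(spikes, grid, Lam)
tau = np.diff(np.concatenate(([0.0], Lam_at)))
u = 1 - np.exp(-tau)

# KS test against Uniform(0, 1): a correct model should not be rejected
ks = stats.kstest(u, "uniform")
```

In the marked setting of the paper, the analogous transformations map events into a hypercube of rescaled time and mark coordinates, where the same style of uniformity test applies.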
Neural Computation (2019) 31 (9): 1751–1788.
Published: 01 September 2019
Abstract
Cognitive processes, such as learning and cognitive flexibility, are difficult both to measure and to sample continuously with objective tools because they arise from distributed, high-dimensional neural activity. For both research and clinical applications, that dimensionality must be reduced. To reduce dimensionality and measure underlying cognitive processes, we propose a modeling framework in which a cognitive process is defined as a low-dimensional dynamical latent variable, called a cognitive state, that links high-dimensional neural recordings to multidimensional behavioral readouts. This framework allows us to decompose the hard problem of modeling the relationship between neural and behavioral data into separable encoding and decoding steps. We first use a state-space modeling framework, the behavioral decoder, to articulate the relationship between an objective behavioral readout (e.g., response times) and the cognitive state. In the second step, the neural encoder, we use a generalized linear model (GLM) to identify the relationship between the cognitive state and neural signals, such as the local field potential (LFP). We then use the neural encoder model and a Bayesian filter to estimate the cognitive state from neural data (LFP power), yielding the neural decoder. We provide goodness-of-fit analyses and model selection criteria in support of the encoding-decoding results. We apply this framework to estimate an underlying cognitive state from neural data in human participants (N = 8) performing a cognitive conflict task. The neurally estimated cognitive state fell within the 95% confidence intervals of the behaviorally estimated state on an average of 90% of task trials across participants. In contrast to previous encoder-decoder models, our proposed framework incorporates LFP spectral power to encode and decode a cognitive state. The framework also captures the temporal evolution of the underlying cognitive processes, which could be key to the development of closed-loop experiments and treatments.
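The decode-from-behavior, decode-from-neural-data comparison can be sketched with scalar linear-gaussian stand-ins (the paper's behavioral decoder and GLM neural encoder are replaced here by two noisy direct observations of a random-walk state; all variances are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a slowly drifting cognitive state x_t observed through a
# behavioral readout (e.g., log response time) and a neural readout (LFP power)
T = 300
q, r_beh, r_lfp = 0.01, 0.25, 0.25
x = np.cumsum(rng.normal(0, np.sqrt(q), T))
rt = x + rng.normal(0, np.sqrt(r_beh), T)    # behavioral observation
lfp = x + rng.normal(0, np.sqrt(r_lfp), T)   # neural observation

def kalman_filter(y, q, r):
    """Scalar random-walk Kalman filter: x_t = x_{t-1} + w_t, y_t = x_t + v_t."""
    xh = np.zeros_like(y); P = np.zeros_like(y)
    xh[0], P[0] = y[0], r
    for t in range(1, len(y)):
        Pp = P[t - 1] + q                    # predicted variance
        K = Pp / (Pp + r)                    # Kalman gain
        xh[t] = xh[t - 1] + K * (y[t] - xh[t - 1])
        P[t] = (1 - K) * Pp
    return xh, P

x_beh, P_beh = kalman_filter(rt, q, r_beh)   # behavior-based state estimate
x_lfp, P_lfp = kalman_filter(lfp, q, r_lfp)  # neural (LFP-power) state estimate

# How often does the neural estimate fall inside the behavioral 95% CI?
ci = 1.96 * np.sqrt(P_beh)
coverage = np.mean(np.abs(x_lfp - x_beh) < ci)
```

The abstract's 90%-of-trials result is the real-data analogue of this coverage check, with the Bayesian filter driven by encoded LFP spectral power rather than a direct noisy observation.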
Neural Computation (2018) 30 (1): 125–148.
Published: 01 January 2018
Abstract
To understand neural activity, two broad categories of models exist: statistical and dynamical. While statistical models possess rigorous methods for parameter estimation and goodness-of-fit assessment, dynamical models provide mechanistic insight. In general, these two categories of models are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
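The fitting step can be sketched without the dynamical half of the comparison: a Bernoulli spike train with history-dependent suppression stands in for the Izhikevich simulation (the true weights, lag count, and simulation length below are invented), and a history GLM is fit by Newton-Raphson.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a Bernoulli spike train whose spiking probability is suppressed by
# its own recent history (a stand-in for the noisy Izhikevich neuron)
T, lags = 5000, 3
b_true = np.array([-2.5, -1.5, -1.0, -0.5])  # baseline + history weights
y = np.zeros(T)
for t in range(lags, T):
    eta = b_true[0] + b_true[1:] @ y[t - lags:t][::-1]
    y[t] = rng.uniform() < 1 / (1 + np.exp(-eta))

# GLM design matrix: intercept plus the previous `lags` bins of spiking
X = np.column_stack([np.ones(T - lags)] +
                    [y[lags - k:T - k] for k in range(1, lags + 1)])
yy = y[lags:]

# Fit the logistic (Bernoulli GLM) likelihood by Newton-Raphson
b = np.zeros(lags + 1)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    grad = X.T @ (yy - p)
    H = (X * (p * (1 - p))[:, None]).T @ X + 1e-6 * np.eye(lags + 1)
    b = b + np.linalg.solve(H, grad)
```

The fitted history weights should come out negative, recovering the suppressive past-spiking influence; in the near-deterministic regime the abstract describes, the same fit succeeds at prediction while time-rescaling-style goodness-of-fit tests reject it.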
Neural Computation (2015) 27 (7): 1438–1460.
Published: 01 July 2015
Abstract
Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision, such as real-time decoding for brain-computer interfaces. Because the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights into clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’s rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs as well as or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain.
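The clusterless idea can be sketched in a toy setting: two overlapping units on one electrode, a one-dimensional mark instead of the paper's four-dimensional tetrode amplitudes, and a single decoding window rather than a full filter. All field centers, widths, rates, and mark distributions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two units with 1D place fields on a track [0, 1] and distinct mean amplitudes
centers_x = np.array([0.3, 0.7])    # place-field centers
centers_m = np.array([1.0, 2.0])    # mean spike-amplitude marks
peak = 30.0                         # peak rate (spikes/s); field width 0.1, mark sd 0.2

def joint_rate(x, m):
    """Joint mark intensity lambda(x, m), summed over units."""
    out = 0.0
    for cx, cm in zip(centers_x, centers_m):
        out += (peak * np.exp(-0.5 * ((x - cx) / 0.1) ** 2)
                     * np.exp(-0.5 * ((m - cm) / 0.2) ** 2) / (0.2 * np.sqrt(2 * np.pi)))
    return out

# Decode one 500 ms window at true position 0.35, never sorting the spikes.
# For brevity, sample marks directly from the units' mark distributions,
# weighted by each unit's rate at the true position.
x_true, dt = 0.35, 0.5
unit_rates = peak * np.exp(-0.5 * ((x_true - centers_x) / 0.1) ** 2)
n_spk = rng.poisson(unit_rates.sum() * dt)
units = rng.choice(2, size=n_spk, p=unit_rates / unit_rates.sum())
marks = rng.normal(centers_m[units], 0.2)

# Bayes' rule on a position grid with a flat prior
grid = np.linspace(0, 1, 201)
ground = peak * np.exp(-0.5 * ((grid[:, None] - centers_x) / 0.1) ** 2).sum(axis=1)
loglik = -ground * dt + sum(np.log(joint_rate(grid, m)) for m in marks)
post = np.exp(loglik - loglik.max()); post /= post.sum()
x_hat = grid[np.argmax(post)]
```

Each unsorted spike contributes `log lambda(x, m)` directly, so the mark does the work that a sorter's cluster label would otherwise do.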
Neural Computation (2013) 25 (4): 901–921.
Published: 01 April 2013
Abstract
The instantaneous phase of neural rhythms is important to many neuroscience-related studies. In this letter, we show that the statistical sampling properties of three instantaneous phase estimators commonly employed to analyze neuroscience data share common features, allowing an analytical investigation into their behavior. These three phase estimators—the Hilbert, complex Morlet, and discrete Fourier transform—are each shown to maximize the likelihood of the data, assuming the observation of different neural signals. This connection, explored with the use of a geometric argument, is used to describe the bias and variance properties of each of the phase estimators, their temporal dependence, and the effect of model misspecification. This analysis suggests how prior knowledge about a rhythmic signal can be used to improve the accuracy of phase estimates.
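One of the three estimators, the Hilbert transform, can be sketched on a simulated rhythm (the frequency, noise level, and sampling rate are invented; in practice one would bandpass-filter first, and edge samples are excluded because the analytic signal is distorted there):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(5)

# A noisy 8 Hz rhythm sampled at 250 Hz for 2 s
fs, f0 = 250, 8.0
t = np.arange(0, 2.0, 1 / fs)
true_phase = 2 * np.pi * f0 * t
x = np.cos(true_phase) + 0.2 * rng.normal(size=t.size)

# Hilbert-transform phase estimate: the angle of the analytic signal
phase_hat = np.angle(hilbert(x))

# Circular error against the wrapped true phase, ignoring edge effects
err = np.angle(np.exp(1j * (phase_hat - true_phase)))
rmse = np.sqrt(np.mean(err[fs // 4:-fs // 4] ** 2))
```

The complex Morlet and discrete Fourier transform estimators differ only in the implicit signal model, which is the connection the letter exploits to derive their bias and variance analytically.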
Neural Computation (2011) 23 (10): 2537–2566.
Published: 01 October 2011
Abstract
We develop a general likelihood-based framework for use in the estimation of neural firing rates, which is designed to choose the temporal smoothing parameters that maximize the likelihood of missing data. This general framework is algorithm-independent and thus can be applied to a multitude of established methods for firing rate or conditional intensity estimation. As a simple example of the use of the general framework, we apply it to the peristimulus time histogram and kernel smoother, the methods most widely used for firing rate estimation in the electrophysiological literature and practice. In doing so, we illustrate how the use of the framework can employ the general point process likelihood as a principled cost function and can provide substantial improvements in estimation accuracy for even the most basic of rate estimation algorithms. In particular, the resultant kernel smoother is simple to implement, efficient to compute, and can accurately determine the bandwidth of a given rate process from individual spike trains. We perform a simulation study to illustrate how the likelihood framework enables the kernel smoother to pick the bandwidth parameter that best predicts missing data, and we show applications to real experimental spike train data. Additionally, we discuss how the general likelihood framework may be used in conjunction with more sophisticated methods for firing rate and conditional intensity estimation and suggest possible applications.
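The missing-data idea can be sketched for the kernel smoother: thin a spike train in half, smooth the kept half at several bandwidths, and score each bandwidth by the point process likelihood of the held-out spikes. The true rate, thinning probability, and candidate bandwidths are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Inhomogeneous Poisson spike train on [0, 10] s, binned at 1 ms
dt = 0.001
t = np.arange(0, 10.0, dt)
rate = 20 + 15 * np.sin(2 * np.pi * 0.5 * t)         # true rate (spikes/s)
spikes = rng.uniform(size=t.size) < rate * dt

# Thin: keep each spike for "training" with p = 0.5; the rest are missing data
keep = rng.uniform(size=t.size) < 0.5
train, test = spikes & keep, spikes & ~keep

def smooth_rate(train, bw):
    """Gaussian-kernel rate estimate (spikes/s) from the training half."""
    half = int(4 * bw / dt)
    k = np.exp(-0.5 * (np.arange(-half, half + 1) * dt / bw) ** 2)
    k /= k.sum() * dt                                # kernel integrates to 1
    return np.convolve(train, k, mode="same") * 2    # undo the p = 0.5 thinning

def pp_loglik(test, lam):
    """Poisson log likelihood of held-out spikes under rate lam / 2
    (the constant -n log 2 is dropped; it does not affect the argmax)."""
    lam = np.clip(lam, 1e-9, None)
    return np.sum(np.log(lam[test])) - 0.5 * np.sum(lam) * dt

bws = np.array([0.005, 0.02, 0.05, 0.1, 0.25, 0.5, 1.0])
ll = np.array([pp_loglik(test, smooth_rate(train, bw)) for bw in bws])
bw_best = bws[np.argmax(ll)]
```

Very small bandwidths are penalized because the estimate is near zero at held-out spike times; very large bandwidths are penalized because they flatten the true modulation.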
Neural Computation (2011) 23 (9): 2209–2241.
Published: 01 September 2011
Abstract
The coherence between neural spike trains and local-field potential recordings, called spike-field coherence, is of key importance in many neuroscience studies. In this work, aside from questions of estimator performance, we demonstrate that theoretical spike-field coherence for a broad class of spiking models depends on the expected rate of spiking. This rate dependence confounds the phase locking of spike events to field-potential oscillations with overall neuron activity and is demonstrated analytically, for a large class of stochastic models, and in simulation. Finally, the relationship between the spike-field coherence and the intensity field coherence is detailed analytically. This latter quantity is independent of neuron firing rate and, under commonly found conditions, is proportional to the probability that a neuron spikes at a specific phase of field oscillation. Hence, intensity field coherence is a rate-independent measure and a candidate on which to base the appropriate statistical inference of spike field synchrony.
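The rate confound is easy to reproduce in simulation: two spike trains with identical relative phase locking to a field oscillation but different mean rates yield different spike-field coherence. The oscillation frequency, modulation depth, and rates below are invented.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(7)

# A 10 Hz field oscillation, 200 s at 1 kHz
fs = 1000
t = np.arange(0, 200.0, 1 / fs)
lfp = np.cos(2 * np.pi * 10 * t)

def locked_spikes(mean_rate):
    """Bernoulli spike train with the same relative modulation at any rate."""
    lam = mean_rate * (1 + 0.8 * lfp)                # identical phase locking
    return (rng.uniform(size=t.size) < lam / fs).astype(float)

f, C_lo = coherence(locked_spikes(5.0), lfp, fs=fs, nperseg=4096)
f, C_hi = coherence(locked_spikes(50.0), lfp, fs=fs, nperseg=4096)
i10 = np.argmin(np.abs(f - 10))                      # bin nearest 10 Hz
```

The higher-rate train shows higher coherence at 10 Hz even though the phase locking is identical, because the flat Poisson noise floor scales with the mean rate while the locked signal power scales with its square; the intensity field coherence of the abstract removes exactly this dependence.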
Neural Computation (2009) 21 (12): 3305–3334.
Published: 01 December 2009
Abstract
Firing activity from neural ensembles in rat hippocampus has been previously used to determine an animal's position in an open environment and separately to predict future behavioral decisions. However, a unified statistical procedure to combine information about position and behavior in environments with complex topological features from ensemble hippocampal activity has yet to be described. Here we present a two-stage computational framework that uses point process filters to simultaneously estimate the animal's location and predict future behavior from ensemble neural spiking activity. First, in the encoding stage, we linearized a two-dimensional T-maze, and used spline-based generalized linear models to characterize the place-field structure of different neurons. All of these neurons displayed highly specific position-dependent firing, which frequently had several peaks at multiple locations along the maze. When the rat was at the stem of the T-maze, the firing activity of several of these neurons also varied significantly as a function of the direction it would turn at the decision point, as detected by ANOVA. Second, in the decoding stage, we developed a state-space model for the animal's movement along a T-maze and used point process filters to accurately reconstruct both the location of the animal and the probability of the next decision. The filter yielded exact full posterior densities that were highly nongaussian and often multimodal. Our computational framework provides a reliable approach for characterizing and extracting information from ensembles of neurons with spatially specific context or task-dependent firing activity.
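The decoding stage can be sketched with a discrete-state point process filter: gaussian place fields stand in for the spline GLMs of the encoding stage, and a linear track stands in for the linearized T-maze (all rates, widths, and movement parameters are invented; the behavioral-decision component is omitted).

```python
import numpy as np

rng = np.random.default_rng(8)

# Encoding stage stand-in: 8 place cells with gaussian fields on a track [0, 1]
centers = np.linspace(0.1, 0.9, 8)
def rates(x):
    """Firing rate (spikes/s) of each cell at position(s) x."""
    return 25 * np.exp(-0.5 * ((x - centers) / 0.08) ** 2) + 0.5

# Simulate a run along the track and the ensemble spikes in 1 ms bins
dt, n = 0.001, 4000
x = np.clip(0.1 + np.cumsum(rng.normal(0.00025, 0.002, n)), 0, 1)
spk = (rng.uniform(size=(n, 8)) < rates(x[:, None]) * dt).astype(float)

# Decoding stage: predict with a random-walk prior, update with the
# population Poisson point process likelihood, all on a position grid
grid = np.linspace(0, 1, 200)
R = rates(grid[:, None])                                  # (positions, cells)
trans = np.exp(-0.5 * (np.subtract.outer(grid, grid) / 0.01) ** 2)
trans /= trans.sum(axis=1, keepdims=True)                 # transition kernel
post = np.full(grid.size, 1.0 / grid.size)
x_hat = np.zeros(n)
for i in range(n):
    prior = trans @ post                                  # one-step prediction
    loglike = np.log(R) @ spk[i] - R.sum(axis=1) * dt     # Poisson likelihood
    post = prior * np.exp(loglike - loglike.max())
    post /= post.sum()
    x_hat[i] = grid @ post                                # posterior mean
```

Because the posterior is carried on a grid rather than approximated as gaussian, it can be multimodal, which is the property the abstract highlights near the maze's decision point.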
Neural Computation (2006) 18 (10): 2465–2494.
Published: 01 October 2006
Abstract
The execution of reaching movements involves the coordinated activity of multiple brain regions that relate variously to the desired target and a path of arm states to achieve that target. These arm states may represent positions, velocities, torques, or other quantities. Estimation has been previously applied to neural activity in reconstructing the target separately from the path. However, the target and path are not independent. Because arm movements are limited by finite muscle contractility, knowledge of the target constrains the path of states that leads to the target. In this letter, we derive and illustrate a state equation to capture this basic dependency between target and path. The solution is described for discrete-time linear systems and gaussian increments with known target arrival time. The resulting analysis enables the use of estimation to study how brain regions that relate variously to target and path together specify a trajectory. The corresponding reconstruction procedure may also be useful in brain-driven prosthetic devices to generate control signals for goal-directed movements.
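The target-path dependency can be sketched for the simplest case: conditioning a gaussian random-walk state equation on arrival at a known target at a known time yields a modified, still Markov, transition (a discrete gaussian bridge). The step variance, horizon, and target below are invented.

```python
import numpy as np

rng = np.random.default_rng(9)

# Conditioning x_{t+1} = x_t + w_t, w_t ~ N(0, q), on reaching x_N = target
# at the known arrival time N gives the bridge transition
#   x_{t+1} | x_t, x_N ~ N( x_t + (x_N - x_t)/(N - t),  q (N - t - 1)/(N - t) )
def bridge_paths(n_paths, N, q, x0, target):
    x = np.full(n_paths, float(x0))
    out = [x.copy()]
    for t in range(N):
        mean = x + (target - x) / (N - t)          # drift toward the target
        var = q * (N - t - 1) / (N - t)            # shrinks to 0 at arrival
        x = mean + rng.normal(0.0, np.sqrt(var), n_paths)
        out.append(x.copy())
    return np.array(out)                           # shape (N + 1, n_paths)

paths = bridge_paths(2000, 100, 0.01, 0.0, 1.0)
```

Every sampled path hits the target exactly at time N, while intermediate states retain the variability of the unconditioned walk; the letter's state equation generalizes this construction to vector arm states within the estimation framework.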
Neural Computation (2004) 16 (5): 971–998.
Published: 01 May 2004
Abstract
Neural receptive fields are dynamic in that, with experience, neurons change their spiking responses to relevant stimuli. To understand how neural systems adapt their representations of biological information, analyses of receptive field plasticity from experimental measurements are crucial. Adaptive signal processing, the well-established engineering discipline for characterizing the temporal evolution of system parameters, suggests a framework for studying the plasticity of receptive fields. We use the Bayes' rule Chapman-Kolmogorov paradigm with a linear state equation and point process observation models to derive adaptive filters appropriate for estimation from neural spike trains. We derive point process filter analogues of the Kalman filter, recursive least squares, and steepest-descent algorithms and describe the properties of these new filters. We illustrate our algorithms in two simulated data examples. The first is a study of slow and rapid evolution of spatial receptive fields in hippocampal neurons. The second is an adaptive decoding study in which a signal is decoded from ensemble neural spiking activity as the receptive fields of the neurons in the ensemble evolve. Our results provide a paradigm for adaptive estimation for point process observations and suggest a practical approach for constructing filtering algorithms to track neural receptive field dynamics on a millisecond timescale.
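The steepest-descent variant can be sketched for the simplest receptive-field parameter, a drifting log firing rate, tracked bin by bin from the spike train alone (the drift period, learning rate, and simulation length are invented):

```python
import numpy as np

rng = np.random.default_rng(10)

# A neuron whose log firing rate drifts slowly (period 20 s); simulate 60 s
dt, n, eps = 0.001, 60000, 0.05
t = np.arange(n) * dt
theta_true = np.log(20.0) + 0.5 * np.sin(2 * np.pi * t / 20.0)
y = (rng.uniform(size=n) < np.exp(theta_true) * dt).astype(float)

# With lambda = exp(theta), the instantaneous log likelihood of bin t is
#   l_t = y_t * theta - exp(theta) * dt,  so  dl_t/dtheta = y_t - exp(theta) * dt,
# and the steepest-descent filter ascends this gradient one bin at a time
theta_hat = np.zeros(n)
th = np.log(20.0)                       # initial guess at the mean rate
for i in range(n):
    th += eps * (y[i] - np.exp(th) * dt)
    theta_hat[i] = th

rmse = np.sqrt(np.mean((theta_hat[n // 2:] - theta_true[n // 2:]) ** 2))
```

The learning rate trades tracking lag against estimator noise, exactly the trade-off the Kalman-style point process filters in the letter resolve adaptively through a state noise model.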