George N. Reeke
Journal Articles
Publisher: Journals Gateway
Neural Computation (2014) 26 (9): 1840–1872.
Published: 01 September 2014
Abstract
Neurons send signals to each other by means of sequences of action potentials (spikes). Ignoring variations in spike amplitude and shape that are probably not meaningful to a receiving cell, the information content, or entropy, of the signal depends only on the timing of the action potentials; and because there is no external clock, only the interspike intervals, not the absolute spike times, are significant. Estimating spike train entropy is a difficult task, particularly with small data sets, and many methods of entropy estimation have been proposed. Here we present two related model-based methods for estimating the entropy of neural signals and compare them to existing methods. One method is fast, reasonably accurate, and converges well with short spike time records; the other is impractically time-consuming but apparently very accurate, relying on generating artificial data that are a statistical match to the experimental data. Using the slow, accurate method to generate a best-estimate entropy value, we find that the faster estimator converges to this value more closely, and with smaller data sets, than many existing entropy estimators.
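The abstract's point that only interspike intervals (not absolute spike times) carry information can be illustrated with a minimal plug-in estimator. This is a simple histogram baseline of the kind the paper compares against, not the authors' model-based method; the function name and bin count are illustrative assumptions.

```python
import numpy as np

def interval_entropy_plugin(spike_times, n_bins=10):
    """Naive plug-in estimate (in bits) of interspike-interval entropy.

    Bins the intervals, forms the empirical distribution, and applies the
    Shannon formula. A histogram baseline, not the paper's estimator.
    """
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    counts, _ = np.histogram(isis, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Because only intervals enter the estimate, a global shift of the clock
# leaves the entropy unchanged (up to floating-point binning effects).
rng = np.random.default_rng(0)
spikes = np.cumsum(rng.exponential(scale=0.05, size=500))
h1 = interval_entropy_plugin(spikes)
h2 = interval_entropy_plugin(spikes + 3.0)  # same train, shifted clock
```

Such plug-in estimators are known to be biased downward for small samples, which is precisely the regime the paper's model-based estimators target.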
Neural Computation (2010) 22 (4): 998–1024.
Published: 01 April 2010
Abstract
Entropy rate quantifies the change of information of a stochastic process (Cover & Thomas, 2006). For decades, the temporal dynamics of spike trains generated by neurons have been studied as a stochastic process (Barbieri, Quirk, Frank, Wilson, & Brown, 2001; Brown, Frank, Tang, Quirk, & Wilson, 1998; Kass & Ventura, 2001; Metzner, Koch, Wessel, & Gabbiani, 1998; Zhang, Ginzburg, McNaughton, & Sejnowski, 1998). We propose here to estimate the entropy rate of a spike train from an inhomogeneous hidden Markov model of the spike intervals. The model is constructed by building a context tree structure to lay out the conditional probabilities of various subsequences of the spike train. For each state in the Markov chain, we assume a gamma distribution over the spike intervals, although any appropriate distribution may be employed as circumstances dictate. The entropy and confidence intervals for the entropy are calculated from bootstrap samples taken from a large raw data sequence. The estimator was first tested on synthetic data generated by multiple-order Markov chains, and it always converged to the theoretical Shannon entropy rate (except in the case of a sixth-order model, where the calculations were terminated before convergence was reached). We also applied the method to experimental data and compared its performance with that of several other methods of entropy estimation.
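Two ingredients of this abstract can be sketched in isolation: the gamma-distribution entropy for a state's interval distribution, and the bootstrap confidence interval. This sketch fits a single gamma by method of moments rather than building the paper's context-tree HMM, and approximates the digamma function numerically; all names and the resampling count are assumptions.

```python
import math
import random

def digamma(x, h=1e-5):
    # Numerical derivative of log-gamma; adequate for a sketch.
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def gamma_entropy_nats(isis):
    """Differential entropy (nats) of a gamma fit to the intervals.

    Method-of-moments fit, then the closed form for a gamma with
    shape k and scale theta:
        h = k + ln(theta) + ln Gamma(k) + (1 - k) * psi(k)
    """
    n = len(isis)
    mean = sum(isis) / n
    var = sum((x - mean) ** 2 for x in isis) / n
    k, theta = mean * mean / var, var / mean
    return k + math.log(theta) + math.lgamma(k) + (1 - k) * digamma(k)

def bootstrap_ci(isis, n_boot=200, alpha=0.05, seed=1):
    """Percentile bootstrap interval for the entropy estimate."""
    rng = random.Random(seed)
    stats = sorted(
        gamma_entropy_nats([rng.choice(isis) for _ in isis])
        for _ in range(n_boot)
    )
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

rng = random.Random(0)
isis = [rng.gammavariate(2.0, 0.5) for _ in range(1000)]  # synthetic gamma ISIs
h = gamma_entropy_nats(isis)
lo_ci, hi_ci = bootstrap_ci(isis)
```

The paper's estimator applies the per-state gamma entropy within each context of the tree and weights by state occupancy; the single-distribution version above corresponds to the degenerate one-state case.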
Neural Computation (2004) 16 (5): 941–970.
Published: 01 May 2004
Abstract
To better understand the role of timing in the function of the nervous system, we have developed a methodology that allows the entropy of neuronal discharge activity to be estimated from a spike train record when it may be assumed that successive interspike intervals are temporally uncorrelated. The so-called interval entropy obtained by this methodology is based on an implicit enumeration of all possible spike trains that are statistically indistinguishable from a given spike train. The interval entropy is calculated from an analytic distribution whose parameters are obtained by maximum likelihood estimation from the interval probability distribution associated with a given spike train. We show that this approach reveals features of neuronal discharge not seen with two alternative methods of entropy estimation. The methodology allows for validation of the obtained data models by calculation of confidence intervals for the parameters of the analytic distribution and the testing of the significance of the fit between the observed and analytic interval distributions by means of Kolmogorov-Smirnov and Anderson-Darling statistics. The method is demonstrated by analysis of two different data sets: simulated spike trains evoked by either Poissonian or near-synchronous pulsed activation of a model cerebellar Purkinje neuron and spike trains obtained by extracellular recording from spontaneously discharging cultured rat hippocampal neurons.
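The validation step described here, testing the fit between the observed and analytic interval distributions with a Kolmogorov-Smirnov statistic, can be sketched as follows. For simplicity this uses an exponential as the analytic distribution (its MLE is just the reciprocal mean) rather than whatever family the paper fits; function names and the rejection threshold are assumptions.

```python
import math
import random

def ks_statistic_exponential(isis):
    """KS distance between the empirical ISI distribution and a
    maximum-likelihood exponential fit (rate = 1 / mean ISI).

    A sketch of the goodness-of-fit validation step; the paper uses
    this kind of statistic with its own analytic distributions.
    """
    xs = sorted(isis)
    n = len(xs)
    rate = n / sum(xs)  # exponential MLE
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 1.0 - math.exp(-rate * x)  # model CDF at x
        # Compare against the empirical CDF just before and after the step.
        d = max(d, abs(cdf - i / n), abs(cdf - (i + 1) / n))
    return d

rng = random.Random(2)
# Poisson-like firing: exponential ISIs, so the fit should be good.
good = [rng.expovariate(20.0) for _ in range(2000)]
# Near-regular firing: tightly clustered ISIs, so the fit should fail.
bad = [abs(rng.gauss(0.05, 0.005)) for _ in range(2000)]
d_good = ks_statistic_exponential(good)
d_bad = ks_statistic_exponential(bad)
```

A small KS distance (relative to the critical value, roughly 1.36 / sqrt(n) at the 5% level) supports the fitted model; the regular-firing train yields a much larger distance, mirroring how the paper uses the statistic to accept or reject a candidate interval distribution.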