Neurons send signals to each other by means of sequences of action potentials (spikes). Ignoring variations in spike amplitude and shape, which are probably not meaningful to a receiving cell, the information content, or entropy, of the signal depends only on the timing of the action potentials; and because there is no external clock, it is the interspike intervals, not the absolute spike times, that are significant. Estimating spike train entropy is a difficult task, particularly with small data sets, and many entropy estimation methods have been proposed. Here we present two related model-based methods for estimating the entropy of neural signals and compare them to existing methods. The first is fast and reasonably accurate, converging well on short spike time records; the second is impractically time-consuming but apparently very accurate, relying on generating artificial data that statistically match the experimental data. Using the slow, accurate method to produce a best-estimate entropy value, we find that the faster estimator converges to this value more closely, and with smaller data sets, than many existing entropy estimators.
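
As background for readers unfamiliar with the problem, the sketch below shows the simplest kind of baseline the paper's estimators would be compared against: a plug-in (histogram) estimate of interspike-interval entropy. This is a generic illustration, not the authors' model-based method; the function name, the bin-width parameter, and the Poisson spike train example are our own assumptions for demonstration.

```python
import numpy as np

def isi_entropy_plugin(spike_times, bin_width):
    """Plug-in (histogram) estimate of interspike-interval entropy, in bits.

    spike_times : 1-D array of spike times (same units as bin_width).
    bin_width   : ISI histogram bin width; this free parameter is the
                  main source of small-sample bias in plug-in estimates.
    """
    # Absolute spike times carry no clock, so work with interspike intervals.
    isis = np.diff(np.sort(spike_times))
    n_bins = int(np.ceil(isis.max() / bin_width))
    counts, _ = np.histogram(isis, bins=n_bins,
                             range=(0.0, n_bins * bin_width))
    # Empirical ISI distribution; drop empty bins before taking logs.
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))  # Shannon entropy in bits

# Example: a Poisson spike train at ~10 Hz (mean ISI 100 ms) over ~100 s.
rng = np.random.default_rng(0)
spikes = np.cumsum(rng.exponential(scale=0.1, size=1000))
print(isi_entropy_plugin(spikes, bin_width=0.005))
```

The bin width sets a bias-variance trade-off: fine bins leave most bins undersampled and bias the estimate downward on short records, which is precisely the small-data-set difficulty the abstract refers to and the motivation for more sophisticated estimators.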
