1-3 of 3 results: Simon R. Schultz
Journal Articles
Publisher: Journals Gateway
Neural Computation (2018) 30 (10): 2726–2756.
Published: 01 October 2018
Abstract
In recent years, the development of algorithms to detect neuronal spiking activity from two-photon calcium imaging data has received much attention, yet few researchers have examined the metrics used to assess the similarity of detected spike trains with the ground truth. We highlight the limitations of the two most commonly used metrics, the spike train correlation and success rate, and propose an alternative, which we refer to as CosMIC. Rather than operating on the true and estimated spike trains directly, the proposed metric assesses the similarity of the pulse trains obtained from convolution of the spike trains with a smoothing pulse. The pulse width, which is derived from the statistics of the imaging data, reflects the temporal tolerance of the metric. The final metric score is the size of the commonalities of the pulse trains as a fraction of their average size. Viewed through the lens of set theory, CosMIC resembles a continuous Sørensen-Dice coefficient—an index commonly used to assess the similarity of discrete, presence/absence data. We demonstrate the ability of the proposed metric to discriminate the precision and recall of spike train estimates. Unlike the spike train correlation, which appears to reward overestimation, the proposed metric score is maximized when the correct number of spikes have been detected. Furthermore, we show that CosMIC is more sensitive to the temporal precision of estimates than the success rate.
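To make the construction described above concrete, here is a minimal sketch of such a Dice-style score on smoothed pulse trains. This is not the authors' reference implementation: the boxcar pulse, the bin grid, and the function name are illustrative assumptions (the paper derives the pulse width from the statistics of the imaging data).

```python
import numpy as np

def cosmic_score(true_spikes, est_spikes, pulse, n_bins):
    """Dice-style similarity of two spike trains after smoothing.

    Sketch of the idea behind CosMIC: convolve each binned spike
    train with a smoothing pulse, then score the shared mass of the
    two pulse trains as a fraction of their average mass.
    """
    true_train = np.zeros(n_bins)
    est_train = np.zeros(n_bins)
    true_train[list(true_spikes)] = 1.0
    est_train[list(est_spikes)] = 1.0
    # Smooth both trains with the same pulse; the pulse width sets
    # the temporal tolerance of the comparison.
    pt = np.convolve(true_train, pulse, mode="same")
    pe = np.convolve(est_train, pulse, mode="same")
    shared = np.minimum(pt, pe).sum()      # size of the commonalities
    average = 0.5 * (pt.sum() + pe.sum())  # average size of the pulse trains
    return shared / average if average > 0 else 0.0

# Illustrative pulse; in the paper the width reflects the data's statistics.
boxcar = np.ones(5)
```

Under this construction a perfect estimate scores 1, a temporally distant one scores 0, and an estimate with spurious extra spikes is penalized because the extra mass inflates the denominator without adding shared mass, which is why the score peaks when the correct number of spikes is detected.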
Neural Computation (2017) 29 (9): 2511–2527.
Published: 01 September 2017
Abstract
Hearing, vision, touch: underlying all of these senses is stimulus selectivity, a robust information processing operation in which cortical neurons respond more to some stimuli than to others. Previous models assume that these neurons receive the highest weighted input from an ensemble encoding the preferred stimulus, but dendrites enable other possibilities. Nonlinear dendritic processing can produce stimulus selectivity based on the spatial distribution of synapses, even if the total preferred stimulus weight does not exceed that of nonpreferred stimuli. Using a multi-subunit nonlinear model, we demonstrate that stimulus selectivity can arise from the spatial distribution of synapses. We propose this as a general mechanism for information processing by neurons possessing dendritic trees. Moreover, we show that this implementation of stimulus selectivity increases the neuron's robustness to synaptic and dendritic failure. Importantly, our model can maintain stimulus selectivity over a larger range of synapse or dendrite loss than an equivalent linear model. We then use a layer 2/3 biophysical neuron model to show that our implementation is consistent with two recent experimental observations: (1) one can observe a mixture of selectivities in dendrites that can differ from the somatic selectivity, and (2) hyperpolarization can broaden somatic tuning without affecting dendritic tuning. Our model predicts that an initially nonselective neuron can become selective when depolarized. In addition to motivating new experiments, the model's increased robustness to synapse and dendrite loss provides a starting point for fault-resistant neuromorphic chip development.
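The mechanism in this abstract can be illustrated with a toy two-layer model: each dendritic subunit applies a nonlinearity to its summed synaptic input before the soma sums the subunit outputs. This is a simple stand-in, not the paper's multi-subunit or biophysical model; the threshold nonlinearity and the weights below are illustrative assumptions chosen so that both stimuli deliver the same total synaptic weight.

```python
def subunit_response(weights_per_subunit, threshold=1.5):
    """Somatic response of a toy two-layer dendritic model.

    Each subunit thresholds its summed synaptic input (a crude
    stand-in for a dendritic nonlinearity); the soma then sums the
    subunit outputs linearly.
    """
    return sum(max(sum(w) - threshold, 0.0) for w in weights_per_subunit)

# Same total synaptic weight (2.0) for both stimuli; only the spatial
# distribution across the two subunits differs.
clustered = [[1.0, 1.0], [0.0, 0.0]]   # synapses clustered on one branch
dispersed = [[0.5, 0.5], [0.5, 0.5]]   # synapses spread across branches
```

The clustered input drives its subunit past threshold and evokes a somatic response, while the dispersed input leaves every subunit subthreshold and evokes none, so selectivity emerges from synapse placement alone even though the total weights are equal.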
Neural Computation (2001) 13 (6): 1311–1349.
Published: 01 June 2001
Abstract
We demonstrate that the information contained in the spike occurrence times of a population of neurons can be broken up into a series of terms, each reflecting something about potential coding mechanisms. This is possible in the coding regime in which few spikes are emitted in the relevant time window. This approach allows us to study the additional information contributed by spike timing beyond that present in the spike counts and to examine the contributions to the whole information of different statistical properties of spike trains, such as firing rates and correlation functions. It thus forms the basis for a new quantitative procedure for analyzing simultaneous multiple neuron recordings and provides theoretical constraints on neural coding strategies. We find a transition between two coding regimes, depending on the size of the relevant observation timescale. For time windows shorter than the timescale of the stimulus-induced response fluctuations, there exists a spike count coding phase, in which the purely temporal information is of third order in time. For time windows much longer than the characteristic timescale, there can be additional timing information of first order, leading to a temporal coding phase in which timing information may affect the instantaneous information rate. In this new framework, we study the relative contributions of the dynamic firing rate and correlation variables to the full temporal information, the interaction of signal and noise correlations in temporal coding, synergy between spikes and between cells, and the effect of refractoriness. We illustrate the utility of the technique by analyzing a few cells from the rat barrel cortex.
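The first-order (rate) term of such a short-timescale expansion, the term that sets the instantaneous information rate, can be sketched numerically. This is an illustrative implementation of the standard first-order rate term under the usual assumptions (short windows, known stimulus-conditional rates); the function name and example rates are not from the paper.

```python
import numpy as np

def first_order_info_rate(rates, p_stim):
    """First-order (rate) term of a short-timescale information expansion.

    For each cell a, sums P(s) * r_a(s) * log2(r_a(s) / <r_a>) over
    stimuli s, where <r_a> is the stimulus-averaged rate of cell a.
    Returns the term linear in the time window, in bits per unit time.
    """
    rates = np.asarray(rates, dtype=float)   # shape: (n_stimuli, n_cells)
    p = np.asarray(p_stim, dtype=float)      # stimulus probabilities
    mean_rate = p @ rates                    # stimulus-averaged rate per cell
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = rates * np.log2(rates / mean_rate)
    terms = np.nan_to_num(terms)             # zero-rate stimuli contribute 0
    return float((p[:, None] * terms).sum())
```

For example, a cell firing at 20 Hz to one of two equiprobable stimuli and silent to the other carries 10 bits/s at first order, while a cell whose rate does not vary across stimuli carries none, consistent with this term vanishing in a pure spike count coding regime over short windows.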