Adrienne L. Fairhall
1-4 of 4
Journal Articles
Publisher: Journals Gateway
Neural Computation (2008) 20 (5): 1239–1260.
Published: 01 May 2008
Abstract
Recent in vitro data show that neurons respond to input variance with varying sensitivities. Here we demonstrate that Hodgkin-Huxley (HH) neurons can operate in two computational regimes: one that is more sensitive to input variance (differentiating) and one that is less sensitive (integrating). A boundary plane in the 3D conductance space separates these two regimes. For a reduced HH model, this plane can be derived analytically from the V nullcline, thus suggesting a means of relating biophysical parameters to neural computation by analyzing the neuron's dynamical system.
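The regimes described above can be probed numerically. Below is a rough, hedged sketch (not the paper's reduced model or code): a textbook Hodgkin-Huxley point neuron driven by a noisy current, where comparing spike counts at different input variances is the kind of measurement used to classify a parameter setting as differentiating or integrating. All parameter values are the standard HH ones, chosen here only for illustration.

```python
import numpy as np

def simulate_hh(i_mean, i_std, t_ms=200.0, dt=0.01, seed=0):
    """Forward-Euler Hodgkin-Huxley simulation; returns the spike count."""
    rng = np.random.default_rng(seed)
    g_na, g_k, g_l = 120.0, 36.0, 0.3        # max conductances, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.4      # reversal potentials, mV
    c_m = 1.0                                 # membrane capacitance, uF/cm^2
    v, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting state
    spikes, above = 0, False
    for _ in range(int(t_ms / dt)):
        # white-noise current, scaled so its variance is dt-independent
        i_ext = i_mean + i_std * rng.standard_normal() / np.sqrt(dt)
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k) + g_l * (v - e_l))
        v += dt / c_m * (i_ext - i_ion)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        if v > 0.0 and not above:
            spikes += 1          # upward crossing of 0 mV = one spike
        above = v > 0.0
    return spikes

# suprathreshold mean drive; comparing the two counts probes how much
# the firing rate depends on input variance rather than input mean
n_low = simulate_hh(i_mean=10.0, i_std=0.0)
n_high = simulate_hh(i_mean=10.0, i_std=1.0)
```

Locating the boundary plane itself would require sweeping the three conductances (g_na, g_k, g_l) and repeating this variance comparison at each point, as the abstract describes.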
Neural Computation (2007) 19 (12): 3133–3172.
Published: 01 December 2007
Abstract
White noise methods are a powerful tool for characterizing the computation performed by neural systems. These methods allow one to identify the feature or features that a neural system extracts from a complex input and to determine how these features are combined to drive the system's spiking response. These methods have also been applied to characterize the input-output relations of single neurons driven by synaptic inputs, simulated by direct current injection. To interpret the results of white noise analysis of single neurons, we would like to understand how the obtained feature space of a single neuron maps onto the biophysical properties of the membrane, in particular, the dynamics of ion channels. Here, through analysis of a simple dynamical model neuron, we draw explicit connections between the output of a white noise analysis and the underlying dynamical system. We find that under certain assumptions, the form of the relevant features is well defined by the parameters of the dynamical system. Further, we show that under some conditions, the feature space is spanned by the spike-triggered average and its successive time derivatives.
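The core of white noise analysis is the spike-triggered average (STA). The following sketch (a generic linear-nonlinear toy model, not the paper's neuron) shows the basic recipe: drive a model cell with Gaussian white noise, collect the stimulus windows preceding spikes, and average them. For a Gaussian stimulus, the STA recovers the cell's linear filter (time-reversed, since the window runs forward in time toward the spike). Here a "spike" is simply any sample where the filtered drive exceeds a threshold, a deliberate simplification.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 50                                     # filter length, samples
t = np.arange(L)
kernel = np.exp(-t / 10.0) - 0.5 * np.exp(-t / 20.0)   # assumed toy filter
stim = rng.standard_normal(200_000)        # Gaussian white-noise stimulus
drive = np.convolve(stim, kernel)[: len(stim)]
theta = 2.0 * drive.std()

# every sample with drive above threshold counts as a spike (simplification)
spike_times = np.flatnonzero(drive > theta)
spike_times = spike_times[spike_times >= L]

# average the stimulus window leading up to (and including) each spike
sta = np.mean([stim[t0 - L + 1 : t0 + 1] for t0 in spike_times], axis=0)

# the STA should match the time-reversed kernel up to a scale factor
match = np.corrcoef(sta, kernel[::-1])[0, 1]
```

The paper's observation that the feature space can be spanned by the STA and its successive time derivatives could be checked in this setting by also computing `np.gradient(sta)` and projecting spike-triggered stimuli onto both.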
Neural Computation (2003) 15 (8): 1789–1807.
Published: 01 August 2003
Abstract
The computation performed by a neuron can be formulated as a combination of dimensional reduction in stimulus space and the nonlinearity inherent in a spiking output. White noise stimulus and reverse correlation (the spike-triggered average and spike-triggered covariance) are often used in experimental neuroscience to “ask” neurons which dimensions in stimulus space they are sensitive to and to characterize the nonlinearity of the response. In this article, we apply reverse correlation to the simplest model neuron with temporal dynamics—the leaky integrate-and-fire model—and find that for even this simple case, standard techniques do not recover the known neural computation. To overcome this, we develop novel reverse-correlation techniques by selectively analyzing only “isolated” spikes and taking explicit account of the extended silences that precede these isolated spikes. We discuss the implications of our methods for the characterization of neural adaptation. Although these methods are developed in the context of the leaky integrate-and-fire model, our findings are relevant for the analysis of spike trains from real neurons.
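The isolated-spike idea can be sketched as follows (toy parameters, not the paper's): simulate a leaky integrate-and-fire neuron with noisy current, then restrict the spike-triggered average to spikes preceded by a long silence, so that the post-spike reset of a recent earlier spike does not contaminate the estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, tau = 1.0, 20.0                       # time step and membrane tau, ms
v_th, v_reset = 1.0, 0.0                  # threshold and reset, arbitrary units
i_mean, i_std = 0.8, 2.0                  # subthreshold mean + noise (assumed)
n_steps, window, silence = 200_000, 30, 50   # all in time steps

current = i_mean + i_std * rng.standard_normal(n_steps)
v, spikes = 0.0, []
for t in range(n_steps):
    v += dt / tau * (-v + current[t])     # leaky integration of the input
    if v >= v_th:
        spikes.append(t)
        v = v_reset                       # the reset that biases the naive STA
spikes = np.array(spikes)

# isolated spikes: no earlier spike within `silence` steps
gaps = np.diff(spikes, prepend=-silence - 1)
isolated = spikes[(gaps > silence) & (spikes >= window)]

# STA of the input current over isolated spikes only
sta_isolated = np.mean([current[t - window + 1 : t + 1] for t in isolated],
                       axis=0)
```

Comparing `sta_isolated` against the STA over all spikes would show the reset contamination the paper describes; just before a spike the average current should rise above its mean, since a noise excursion is what drives the crossing.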
Neural Computation (2003) 15 (8): 1715–1749.
Published: 01 August 2003
Abstract
A spiking neuron “computes” by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the reverse correlation technique with white noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information theory can be used to evaluate the quality of the low-dimensional approximation. We apply these methods to analyze the simplest biophysically realistic model neuron, the Hodgkin-Huxley (HH) model, using this system to illustrate the general methodological issues. We focus on the features in the stimulus that trigger a spike, explicitly eliminating the effects of interactions between spikes. One can approximate this triggering “feature space” as a two-dimensional linear subspace in the high-dimensional space of input histories, capturing in this way a substantial fraction of the mutual information between inputs and spike time. We find that an even better approximation, however, is to describe the relevant subspace as two dimensional but curved; in this way, we can capture 90% of the mutual information even at high time resolution. Our analysis provides a new understanding of the computational properties of the HH model. While it is common to approximate neural behavior as “integrate and fire,” the HH model is neither an integrator nor well described by a single threshold.
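The multi-dimensional feature extraction mentioned above rests on spike-triggered covariance (STC). The sketch below (a generic quadratic toy model, not the paper's HH analysis) shows why STC is needed: when the spike probability depends on the squared projection onto a filter, the STA averages to zero, but the filter reappears as the dominant eigenvector of the difference between the spike-triggered and prior stimulus covariances.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 20
k = np.sin(np.linspace(0.0, np.pi, L))    # assumed toy filter
k /= np.linalg.norm(k)

S = rng.standard_normal((50_000, L))      # white-noise stimulus windows
drive = (S @ k) ** 2                      # symmetric (sign-blind) nonlinearity
spiking = S[drive > 4.0]                  # roughly the top ~5% of windows spike

# covariance difference: spike-triggered minus prior
dC = np.cov(spiking, rowvar=False) - np.cov(S, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(dC)

# the eigenvector with the largest-magnitude eigenvalue recovers the filter
top = eigvecs[:, np.argmax(np.abs(eigvals))]
overlap = abs(k @ top)                    # near 1 if recovery succeeded
```

A two-dimensional feature space, as found for the HH model, would show up here as two eigenvalues of `dC` standing out from the rest; the curvature of the relevant subspace that the paper reports is beyond what this linear decomposition captures.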