Search results for Robert Butera (1-2 of 2)
Journal Articles
Publisher: Journals Gateway
Neural Computation (2009) 21 (3): 619–687.
Published: 01 March 2009
FIGURES: 15
Sequential Optimal Design of Neurophysiology Experiments
Abstract
Adaptively optimizing experiments has the potential to significantly reduce the number of trials needed to build parametric statistical models of neural systems. However, application of adaptive methods to neurophysiology has been limited by severe computational challenges. Since most neurons are high-dimensional systems, optimizing neurophysiology experiments requires computing high-dimensional integrations and optimizations in real time. Here we present a fast algorithm for choosing the most informative stimulus by maximizing the mutual information between the data and the unknown parameters of a generalized linear model (GLM) that we want to fit to the neuron's activity. We rely on important log concavity and asymptotic normality properties of the posterior to facilitate the required computations. Our algorithm requires only low-rank matrix manipulations and a two-dimensional search to choose the optimal stimulus. The average running time of these operations scales quadratically with the dimensionality of the GLM, making real-time adaptive experimental design feasible even for high-dimensional stimulus and parameter spaces. For example, we require roughly 10 milliseconds on a desktop computer to optimize a 100-dimensional stimulus. Despite using some approximations to make the algorithm efficient, our algorithm asymptotically decreases the uncertainty about the model parameters at a rate equal to the maximum rate predicted by an asymptotic analysis. Simulation results show that picking stimuli by maximizing the mutual information can speed up convergence to the optimal values of the parameters by an order of magnitude compared to using random (nonadaptive) stimuli. 
Finally, applying our design procedure to real neurophysiology experiments requires addressing the nonstationarities that we would expect to see in neural responses; our algorithm can efficiently handle both fast adaptation due to spike history effects and slow, nonsystematic drifts in a neuron's activity.
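The stimulus-selection rule described in this abstract can be illustrated with a toy sketch (illustrative only, not the authors' code). Under a Gaussian (Laplace) approximation N(mu, C) to the posterior over the GLM weights, the matrix determinant lemma collapses the information gain of a candidate stimulus x to a scalar, log(1 + lam(x) * x^T C x), where lam(x) is the predicted firing rate at the current estimate; this is why only low-rank manipulations are needed per trial. The 20-dimensional setting, the candidate pool, and all variable names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def info_gain(x, mu, C):
    """Approximate information gain of stimulus x for a Poisson GLM
    with rate exp(theta . x), given a Gaussian posterior N(mu, C).
    By the matrix determinant lemma,
    log det(C^-1 + lam x x^T) - log det(C^-1) = log(1 + lam x^T C x),
    so scoring a stimulus needs only one quadratic form, not a
    full determinant."""
    lam = np.exp(mu @ x)               # predicted rate at the current estimate
    return np.log1p(lam * (x @ C @ x))

# Toy setting: 20-dimensional weights, candidate stimuli on the unit sphere.
d = 20
mu = rng.normal(size=d) / np.sqrt(d)   # hypothetical current weight estimate
C = 0.5 * np.eye(d)                    # hypothetical posterior covariance
candidates = rng.normal(size=(500, d))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)

gains = np.array([info_gain(x, mu, C) for x in candidates])
best = candidates[np.argmax(gains)]    # most informative stimulus in the pool
```

Because each score is a quadratic form in the posterior covariance, the per-candidate cost is O(d^2), consistent with the quadratic scaling the abstract reports; the paper's two-dimensional search replaces the brute-force candidate pool used here.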
Journal Articles
Publisher: Journals Gateway
Neural Computation (2006) 18 (9): 2029–2035.
Published: 01 September 2006
Bursting Without Slow Kinetics: A Role for a Small World?
Abstract
Bursting, a dynamical phenomenon whereby episodes of neural action potentials are punctuated by periodic episodes of inactivity, is ubiquitous in neural systems. Examples include components of the respiratory rhythm generating circuitry in the brain stem, spontaneous activity in the neonatal rat spinal cord, and developing neural networks in the retina of the immature ferret. Bursting can also manifest itself in single neurons. Bursting dynamics require one or more kinetic processes slower than the timescale of the action potentials. Such processes usually manifest themselves in intrinsic ion channel properties, such as slow voltage-dependent gating or calcium-dependent processes, or synaptic mechanisms, such as synaptic depression. In this note, we show rhythmic bursting in a simulated neural network where no such slow processes exist at the cellular or synaptic level. Rather, the existence of rhythmic bursting is critically dependent on the connectivity of the network and manifests itself only when connectivity is characterized as small world. The slow process underlying the timescale of bursting manifests itself as a progressive synchronization of the network within each burst.
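Small-world connectivity of the kind this note depends on is commonly generated by Watts-Strogatz-style rewiring of a ring lattice: start from a regular ring where each node connects to its k nearest neighbors on each side, then rewire each edge to a random target with probability p. A minimal sketch of that construction (illustrative only, not the authors' simulation code; parameter values are arbitrary):

```python
import numpy as np

def small_world(n, k, p, rng):
    """Watts-Strogatz-style graph: a ring lattice of n nodes, each
    joined to its k nearest neighbors per side, with every edge
    rewired to a uniformly random new endpoint with probability p.
    Returned as an undirected adjacency dict of sets."""
    adj = {i: set() for i in range(n)}
    for i in range(n):                      # build the regular ring lattice
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):                      # rewire each edge once
        for j in list(adj[i]):
            if j > i and rng.random() < p:
                new = int(rng.integers(n))
                while new == i or new in adj[i]:
                    new = int(rng.integers(n))
                adj[i].discard(j)
                adj[j].discard(i)
                adj[i].add(new)
                adj[new].add(i)
    return adj

rng = np.random.default_rng(1)
g = small_world(60, 2, 0.1, rng)            # small rewiring probability
mean_degree = sum(len(v) for v in g.values()) / 60
```

For small p the graph keeps the high clustering of the lattice while the few rewired shortcuts sharply reduce path lengths, which is the regime in which the note reports that rhythmic bursting appears; rewiring is one-for-one, so the mean degree stays at 2k.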