1-8 of 8 results for Henry D. I. Abarbanel
Neural Computation (2022) 34 (7): 1545–1587.
Published: 16 June 2022
Abstract
Using methods from nonlinear dynamics and interpolation techniques from applied mathematics, we show how to use data alone to construct discrete-time dynamical rules that forecast observed neuron properties. These data may come from simulations of a Hodgkin-Huxley (HH) neuron model or from laboratory current-clamp experiments. In each case, the reduced-dimension, data-driven forecasting (DDF) models are shown to predict accurately for times after the training period. When the available observations for neuron preparations are, for example, membrane voltage V(t) only, we use the technique of time delay embedding from nonlinear dynamics to generate an appropriate space in which the full dynamics can be realized. The DDF constructions are reduced-dimension models relative to HH models as they are built on and forecast only observables such as V(t). They do not require detailed specification of ion channels, their gating variables, and the many parameters that accompany an HH model for laboratory measurements, yet all of this important information is encoded in the DDF model. As the DDF models use and forecast only voltage data, they can be used in building networks with biophysical connections. Both gap junction connections and ligand-gated synaptic connections among neurons involve presynaptic voltages and induce postsynaptic voltage responses. Biophysically based DDF neuron models can replace other reduced-dimension neuron models, say, of the integrate-and-fire type, in developing and analyzing large networks of neurons. When one does have detailed HH model neurons for network components, a reduced-dimension DDF realization of the HH voltage dynamics may be used in network computations to achieve computational efficiency and permit the exploration of larger biological networks.
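A minimal sketch of the kind of data-driven forecasting construction described here, assuming only a uniformly sampled scalar voltage record: build a time-delay embedding of V(t), fit a radial basis function interpolant from each delay vector to the next voltage sample, and iterate that map past the training window. The embedding dimension, delay, kernel, and kernel width below are illustrative placeholders, not the paper's choices.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def delay_embed(v, dim, tau):
    """Rows are delay vectors [v[i], v[i - tau], ..., v[i - (dim - 1) * tau]]."""
    start = (dim - 1) * tau
    idx = np.arange(start, len(v))
    return np.column_stack([v[idx - k * tau] for k in range(dim)])

def fit_ddf(v_train, dim=4, tau=5):
    """Fit a map from the current delay vector to the next voltage sample."""
    S = delay_embed(np.asarray(v_train, float), dim, tau)
    X, y = S[:-1], S[1:, 0]                              # next-sample targets
    return RBFInterpolator(X, y, kernel="gaussian", epsilon=1.0)

def forecast(model, v_train, dim, tau, n_steps):
    """Iterate the fitted map to predict beyond the training period."""
    v = list(np.asarray(v_train, float))
    for _ in range(n_steps):
        state = np.array([v[-1 - k * tau] for k in range(dim)])[None, :]
        v.append(model(state)[0])
    return np.array(v[len(v_train):])

# v_train would be the recorded membrane voltage, e.g.:
# model = fit_ddf(v_train); v_pred = forecast(model, v_train, dim=4, tau=5, n_steps=2000)
```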
Neural Computation (2019) 31 (10): 2004–2024.
Published: 01 October 2019
Abstract
Tasking machine learning to predict segments of a time series requires estimating the parameters of an ML model with input/output pairs from the time series. We borrow two techniques used in statistical data assimilation in order to accomplish this task: time-delay embedding to prepare our input data and precision annealing as a training method. The precision annealing approach identifies the global minimum of the action (−log[P]). In this way, we are able to identify the number of training pairs required to produce good generalizations (predictions) for the time series. We proceed from a scalar time series s(t_n), t_n = t_0 + nΔt, and, using methods of nonlinear time series analysis, show how to produce a D_E > 1 dimensional time-delay embedding space in which the time series has no false neighbors, unlike the observed s(t_n) time series. In that D_E-dimensional space, we explore the use of feedforward multilayer perceptrons as network models operating on D_E-dimensional inputs and producing D_E-dimensional outputs.
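A generic sketch of the precision annealing idea referred to above, assuming two user-supplied pieces of the action, a measurement-error term and a model-error term: minimize their weighted sum while the model-error precision R_f grows geometrically, warm-starting each stage from the previous minimizer. The action form, schedule, and optimizer here are illustrative; in the paper the action is the negative log posterior −log[P].

```python
import numpy as np
from scipy.optimize import minimize

def precision_anneal(measurement_error, model_error, theta0,
                     n_stages=30, R_f0=1e-2, alpha=1.5):
    """Schematic precision annealing: enforce the model slowly while tracking minimizers."""
    theta = np.asarray(theta0, dtype=float)
    path = []
    for beta in range(n_stages):
        R_f = R_f0 * alpha ** beta                       # growing model-error precision
        res = minimize(lambda th: measurement_error(th) + R_f * model_error(th),
                       theta, method="L-BFGS-B")
        theta = res.x                                    # warm start for the next stage
        path.append((R_f, res.fun))
    return theta, path                                   # follow the lowest action level across stages
```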
Neural Computation (2018) 30 (8): 2025–2055.
Published: 01 August 2018
Abstract
We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed. Within the discussion of this equivalence, we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. Extending this equivalence to recurrent networks is also discussed. We explore how one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation. Calculations on simple models from both sides of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation and its boundary conditions, as a necessary condition for a minimum of the cost function. This shows that the problem being solved is a two-point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted “deepest learning.” These problems respect a symplectic symmetry in continuous layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their well-studied implementation in a discrete time/layer, while respecting the symplectic structure, is addressed. The Hamiltonian version provides a direct rationale for backpropagation as a solution method for a certain two-point boundary value problem.
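For reference, the standard continuous-layer variational statements the abstract appeals to, written generically; the paper's particular cost function and Lagrangian are not reproduced here, so L, x, and H below are placeholders.

```latex
% Continuous layer label t in place of the discrete layer index; x(t) are the layer activities.
A[x] = \int_{t_0}^{t_1} L\bigl(x(t), \dot{x}(t), t\bigr)\, dt
% A minimum must satisfy the Euler-Lagrange equation with conditions at both ends of the
% layer interval (the two-point boundary value problem mentioned in the abstract):
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
% Equivalent Hamiltonian form, with momentum p canonically conjugate to x:
p = \frac{\partial L}{\partial \dot{x}}, \qquad H(x, p, t) = p \cdot \dot{x} - L, \qquad
\dot{x} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial x}
% The backward sweep in p is the continuous-layer counterpart of backpropagation,
% which is the rationale the abstract attributes to the Hamiltonian version.
```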
Neural Computation (2012) 24 (7): 1669–1694.
Published: 01 July 2012
Abstract
Neuroscientists often propose detailed computational models to probe the properties of the neural systems they study. With the advent of neuromorphic engineering, an increasing number of hardware electronic analogs of biological neural systems are being proposed as well. However, for both biological and hardware systems, it is often difficult to estimate the parameters of the model so that they are meaningful to the experimental system under study, especially when these models involve a large number of states and parameters that cannot be simultaneously measured. We have developed a procedure to solve this problem in the context of interacting neural populations using a recently developed dynamic state and parameter estimation (DSPE) technique. This technique uses synchronization as a tool for dynamically coupling experimentally measured data to its corresponding model to determine its parameters and internal state variables. Typically, experimental data are obtained from the biological neural system and the model is simulated in software; here we show that this technique is also efficient in validating proposed network models for neuromorphic spike-based very large-scale integration (VLSI) chips and that it is able to systematically extract network parameters such as synaptic weights, time constants, and other variables that are not accessible by direct observation. Our results suggest that this method can become a very useful tool for model-based identification and configuration of neuromorphic multichip VLSI systems.
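A toy illustration of the synchronization idea behind DSPE, not the procedure or the neuromorphic model of the paper: couple the measured trace into a one-parameter model, ask the optimizer to match the data while penalizing how much coupling the synchronization needs, and read off the parameter estimate. The model, names, and constants are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def run_coupled(a, k, t, y):
    """Integrate dx/dt = a*x - x**3 + k*(y(t) - x), with the measured data y coupled into the model."""
    y_of_t = lambda tt: np.interp(tt, t, y)
    rhs = lambda tt, x: a * x - x ** 3 + k * (y_of_t(tt) - x)
    sol = solve_ivp(rhs, (t[0], t[-1]), [y[0]], t_eval=t, max_step=t[1] - t[0])
    return sol.y[0]

def cost(params, t, y, lam=1e-2):
    a, k = params
    x = run_coupled(a, k, t, y)
    return np.mean((x - y) ** 2) + lam * k ** 2      # track the data with as little coupling help as possible

# With measured times t and a measured trace y:
# a_hat, k_hat = minimize(cost, x0=[0.5, 5.0], args=(t, y), method="Nelder-Mead").x
```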
Neural Computation (2009) 21 (4): 1018–1037.
Published: 01 April 2009
Abstract
The speed and accuracy of odor recognition in insects can hardly be explained by the raw descriptors provided by olfactory receptors alone, given their slow time constants and high variability. The animal overcomes these barriers by means of the antennal lobe (AL) dynamics, which consolidates the classificatory information in the receptor signal into a spatiotemporal code that is enriched in odor sensitivity, particularly in its transient. Inspired by this fact, we propose an easily implementable AL-like network and show that it significantly expedites and enhances the identification of odors from slow and noisy artificial polymer sensor responses. The device owes its efficiency to two intrinsic mechanisms: inhibition (which triggers a competition) and integration (due to the dynamical nature of the network). The former functions as a sharpening filter extracting the features of the receptor signal that favor odor separation, whereas the latter implements a working memory by accumulating the extracted features in trajectories. This cooperation boosts the odor specificity during the receptor transient, which is essential for fast odor recognition.
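A minimal rate-model sketch of the two mechanisms named in this abstract, lateral inhibition (competition) and leaky integration (working memory), applied to slow sensor traces. Unit counts, constants, and the random input weights are illustrative, not the network proposed in the paper.

```python
import numpy as np

def al_like_response(sensor_traces, n_units=20, tau=0.05, g_inh=0.8, dt=1e-3, seed=0):
    """sensor_traces: array (n_steps, n_sensors) of slow, noisy sensor signals."""
    rng = np.random.default_rng(seed)
    n_steps, n_sensors = sensor_traces.shape
    W_in = rng.normal(scale=1.0 / np.sqrt(n_sensors), size=(n_units, n_sensors))
    x = np.zeros(n_units)
    trajectory = np.empty((n_steps, n_units))
    for step in range(n_steps):
        rate = np.maximum(x, 0.0)                    # rectified unit activity
        inhibition = g_inh * (rate.sum() - rate)     # all-to-all lateral inhibition: competition
        dx = (-x + W_in @ sensor_traces[step] - inhibition) / tau
        x = x + dt * dx                              # leaky integration acts as a working memory
        trajectory[step] = rate
    return trajectory                                # classify odors from these trajectories
```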
Neural Computation (2007) 19 (7): 1683–1719.
Published: 01 July 2007
Abstract
Information theory provides a natural set of statistics to quantify the amount of knowledge a neuron conveys about a stimulus. A related work (Kennel, Shlens, Abarbanel, & Chichilnisky, 2005) demonstrated how to reliably estimate, with a Bayesian confidence interval, the entropy rate from a discrete, observed time series. We extend this method to measure the rate of novel information that a neural spike train encodes about a stimulus—the average and specific mutual information rates. Our estimator makes few assumptions about the underlying neural dynamics, shows excellent performance in experimentally relevant regimes, and uniquely provides confidence intervals bounding the range of information rates compatible with the observed spike train. We validate this estimator with simulations of spike trains and highlight how stimulus parameters affect its convergence in bias and variance. Finally, we apply these ideas to a recording from a guinea pig retinal ganglion cell and compare results to a simple linear decoder.
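To make the quantity concrete, here is a naive plug-in estimate of a mutual information rate between binned spikes and a discretized stimulus using length-L words. This is only a statement of what is being estimated; the paper's estimator is built on a compression-based entropy-rate method and, unlike this sketch, supplies Bayesian confidence intervals and controls bias.

```python
import numpy as np
from collections import Counter

def plugin_entropy(symbols):
    """Plug-in entropy (bits) of a list of hashable symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_info_rate(spikes, stimulus, L=5):
    """spikes, stimulus: equal-length integer sequences (e.g., 0/1 spike bins, stimulus bins)."""
    n = len(spikes) - L + 1
    words_r = [tuple(spikes[i:i + L]) for i in range(n)]
    words_s = [tuple(stimulus[i:i + L]) for i in range(n)]
    words_rs = [words_r[i] + words_s[i] for i in range(n)]
    # I(R; S) per time bin ~ [H(response words) + H(stimulus words) - H(joint words)] / L
    return (plugin_entropy(words_r) + plugin_entropy(words_s) - plugin_entropy(words_rs)) / L
```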
Neural Computation (2005) 17 (7): 1531–1576.
Published: 01 July 2005
Abstract
The entropy rate quantifies the amount of uncertainty or disorder produced by any dynamical system. In a spiking neuron, this uncertainty translates into the amount of information potentially encoded and is thus the subject of intense theoretical and experimental investigation. Estimating this quantity in observed, experimental data is difficult and requires a judicious selection of probabilistic models, balancing between two opposing biases. We use a model weighting principle originally developed for lossless data compression, following the minimum description length principle. This weighting yields a direct estimator of the entropy rate, which, compared to existing methods, exhibits significantly less bias and converges faster in simulation. With Monte Carlo techniques, we estimate a Bayesian confidence interval for the entropy rate. In related work, we apply these ideas to estimate the information rates between sensory stimuli and neural responses in experimental data (Shlens, Kennel, Abarbanel, & Chichilnisky, in preparation).
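A compact illustration of the description-length idea in this abstract, assuming binary spike bins: score Markov models of increasing order by a two-part code (data bits plus parameter bits) and take the entropy rate from the best-scoring order. The paper's estimator weights a much richer model class rather than selecting a single order, so this is only the flavor of the approach.

```python
import numpy as np

def markov_mdl_entropy_rate(spikes, max_order=6):
    """Entropy rate (bits per bin) of a 0/1 sequence under the MDL-selected Markov order."""
    spikes = np.asarray(spikes, dtype=int)
    n = len(spikes)
    best_score, best_rate = np.inf, None
    for k in range(max_order + 1):
        counts = {}                                    # context of k past bins -> counts of the next bin
        for i in range(k, n):
            ctx = tuple(spikes[i - k:i])
            counts.setdefault(ctx, np.zeros(2))[spikes[i]] += 1
        nll = 0.0                                      # code length for the data, in bits
        for c in counts.values():
            p = (c + 0.5) / (c.sum() + 1.0)            # Krichevsky-Trofimov style smoothing
            nll -= np.sum(c * np.log2(p))
        score = nll + 0.5 * (2 ** k) * np.log2(n - k)  # two-part MDL: data bits + parameter bits
        if score < best_score:
            best_score, best_rate = score, nll / (n - k)
    return best_rate
```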
Neural Computation (1996) 8 (8): 1567–1602.
Published: 01 November 1996
Abstract
Experimental observations of the intracellular recorded electrical activity in individual neurons show that the temporal behavior is often chaotic. We discuss both our own observations on a cell from the stomatogastric central pattern generator of lobster and earlier observations in other cells. In this paper we work with models of chaotic neurons, building on models by Hindmarsh and Rose for bursting, spiking activity in neurons. The key feature of these simplified models of neurons is the presence of coupled slow and fast subsystems. We analyze the model neurons using the same tools employed in the analysis of our experimental data. We couple two model neurons both electrotonically and electrochemically in inhibitory and excitatory fashions. In each of these cases, we demonstrate that the model neurons can synchronize in phase and out of phase depending on the strength of the coupling. For normal synaptic coupling, we have a time delay between the action of one neuron and the response of the other. We also analyze how the synchronization depends on this delay. A rich spectrum of synchronized behaviors is possible for electrically coupled neurons and for inhibitory coupling between neurons. In synchronous neurons one typically sees chaotic motion of the coupled neurons. Excitatory coupling produces essentially periodic voltage trajectories, which are also synchronized. We display and discuss these synchronized behaviors using two “distance” measures of the synchronization.
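A minimal sketch of the kind of simulation discussed here, assuming two Hindmarsh-Rose neurons in the standard chaotic-bursting regime with electrotonic (gap-junction-like) coupling on the fast variable, plus one simple distance measure of synchronization. The parameter values and coupling strength are common textbook choices, not necessarily those used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hr_pair(t, u, g=0.1, I=3.2, r=0.006, s=4.0, x0=-1.6):
    """Two Hindmarsh-Rose neurons, each receiving g*(x_other - x_self) on its fast variable."""
    x1, y1, z1, x2, y2, z2 = u
    def hr(x, y, z, x_other):
        dx = y + 3.0 * x ** 2 - x ** 3 - z + I + g * (x_other - x)   # fast variable + coupling
        dy = 1.0 - 5.0 * x ** 2 - y                                  # fast recovery variable
        dz = r * (s * (x - x0) - z)                                  # slow adaptation variable
        return dx, dy, dz
    return (*hr(x1, y1, z1, x2), *hr(x2, y2, z2, x1))

u0 = [-1.0, 0.0, 2.0, -1.2, 0.1, 2.1]                # slightly different initial conditions
t_eval = np.linspace(0.0, 2000.0, 40000)
sol = solve_ivp(hr_pair, (0.0, 2000.0), u0, t_eval=t_eval, max_step=0.05)
x1, x2 = sol.y[0], sol.y[3]
distance = np.sqrt(np.mean((x1 - x2) ** 2))          # RMS separation of the fast variables
```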