Daniel J. Amit
1–7 of 7 journal articles
Neural Computation (2008) 20 (8): 1928–1950.
Published: 01 August 2008
Abstract
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing thousands of once-seen stimuli as familiar and distinguishing them from stimuli never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and we estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of the signal-to-noise ratio of the field difference between neurons selective and nonselective for learned stimuli. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much larger number of once-seen learned patterns elicits a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
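A minimal numerical sketch, assumed rather than taken from the paper, of the style of signal-to-noise estimate the abstract describes: binary stochastic synapses are overwritten by a stream of once-seen sparse stimuli, and the familiarity signal is read off as the field evoked by a learned stimulus relative to the field evoked by a novel one. All parameter values (N, f, q_pot, and the choice of q_dep balancing potentiation) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N     = 2000    # synapses converging on one readout population (illustrative)
f     = 0.05    # coding level of the binary stimuli (illustrative)
P     = 5000    # number of once-seen stimuli presented in sequence
q_pot = 0.05    # potentiation probability: "slow learning" regime (illustrative)
q_dep = q_pot * f / (1 - f)   # depression chosen to balance potentiation on average

J = rng.random(N) < 0.5                 # binary synapses, random initial states
patterns = rng.random((P, N)) < f       # the stream of once-seen stimuli

for xi in patterns:                     # one-shot stochastic Hebbian learning
    pot = xi & (rng.random(N) < q_pot)
    dep = ~xi & (rng.random(N) < q_dep)
    J = (J | pot) & ~dep

def field(x):
    """Afferent field evoked by stimulus x through the potentiated synapses."""
    return float((J & x).sum())

# Familiarity signal: field evoked by learned stimuli of a given age, measured
# in units of the fluctuations of the field evoked by brand-new stimuli.
h_new = np.array([field(x) for x in (rng.random((500, N)) < f)])
for age in (25, 250, 2500):
    window = patterns[P - age - 25 : P - age + 25]   # ~50 stimuli of roughly this age
    h_old = np.mean([field(x) for x in window])
    print(f"stimulus age ~{age:4d}:  familiarity SNR ~ "
          f"{(h_old - h_new.mean()) / h_new.std():.2f}")
```

In this toy version the signal decays with stimulus age; lowering q_pot weakens the most recent traces but slows their decay, mirroring the fast/slow learning trade-off the abstract analyzes.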
Neural Computation (2004) 16 (12): 2597–2637.
Published: 01 December 2004
Abstract
Mean-field (MF) theory is extended to realistic networks of spiking neurons storing in their synaptic couplings a set of randomly chosen stimuli of a given low coding level. The underlying synaptic matrix is the result of a generic, slow, long-term synaptic plasticity of two-state synapses, upon repeated presentation of the fixed set of stimuli to be stored. The neural populations subtending the MF description are classified by the number of stimuli to which their neurons are responsive (multiplicity). This involves 2p + 1 populations for a network storing p memories. The computational complexity of the MF description is then significantly reduced by observing that at low coding levels f, only a few populations remain relevant: the population of mean multiplicity, approximately pf, and those whose multiplicity lies within order √(pf) of the mean. The theory is used to produce (predict) bifurcation diagrams (the onset of selective delay activity and the rates in its various stationary states) and to compute the storage capacity of the network (the maximal number of single items used in training for each of which the network can sustain a persistent, selective activity state). This is done in various regions of the space of constitutive parameters for the neurons and for the learning process. The capacity is computed in MF versus potentiation amplitude, the ratio of potentiation to depression probabilities, and the coding level f. The MF results compare well with recordings of delay activity rate distributions in simulations of the underlying microscopic network of 10,000 neurons.
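The population reduction can be illustrated with a short counting sketch (not from the paper): for randomly chosen stimuli of coding level f, a neuron's multiplicity is binomially distributed, so at low f essentially all neurons fall within a few √(pf) of the mean pf, and only those multiplicity classes need to be kept. The values of p and f below are illustrative.

```python
import numpy as np
from scipy.stats import binom

p = 400        # number of stored memories (illustrative)
f = 0.02       # coding level (illustrative)

# A neuron's multiplicity = number of stimuli it responds to.
# For randomly chosen sparse stimuli this is Binomial(p, f),
# with mean p*f and spread ~ sqrt(p*f) at low f.
mean  = p * f
width = np.sqrt(p * f)

k = np.arange(p + 1)
w = binom.pmf(k, p, f)

# Fraction of neurons whose multiplicity lies within a few widths of the mean:
for n_widths in (1, 2, 3):
    lo, hi = mean - n_widths * width, mean + n_widths * width
    keep = (k >= lo) & (k <= hi)
    print(f"|k - pf| <= {n_widths}*sqrt(pf): {keep.sum():3d} of {p + 1} multiplicity "
          f"classes carry {100 * w[keep].sum():.1f}% of the neurons")
```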
Neural Computation (2003) 15 (3): 565–596.
Published: 01 March 2003
Abstract
The collective behavior of a network modeling a cortical module of spiking neurons connected by plastic synapses is studied. A detailed spike-driven synaptic dynamics is simulated in a large network of spiking neurons, implementing the full double dynamics of neurons and synapses. The repeated presentation of a set of external stimuli is shown to structure the network to the point of sustaining working memory (selective delay activity). When the synaptic dynamics is analyzed as a function of pre- and postsynaptic spike rates in functionally defined populations, it reveals a novel variation of the Hebbian plasticity paradigm: in any functional set of synapses between pairs of neurons (e.g., stimulated–stimulated, stimulated–delay, stimulated–spontaneous), there is a finite probability of potentiation as well as of depression. This leads to a saturation of potentiation or depression at a level set by the ratio of the two probabilities. When one of the two probabilities is very high relative to the other, the familiar Hebbian mechanism is recovered; but where correlated working memory is formed, this saturation prevents overlearning. Constraints relevant to the stability of the acquired synaptic structure and to the regimes of global activity allowing for structuring are expressed in terms of the parameters describing the single-synapse dynamics. The synaptic dynamics is discussed in the light of experiments observing precise spike-timing effects and related issues of biological plausibility.
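A toy Markov-chain sketch, assumed rather than taken from the paper, of the saturation property described above: if every presentation potentiates a binary synapse with probability q_pot and depresses it with probability q_dep, the fraction of potentiated synapses in that functional population relaxes to q_pot/(q_pot + q_dep), and the one-sided Hebbian limit is recovered when one probability dominates. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fraction_potentiated(q_pot, q_dep, n_syn=10_000, n_presentations=2_000):
    """Fraction of binary synapses in the potentiated state after repeated
    stimulation, when each presentation potentiates with prob q_pot and
    depresses with prob q_dep (mutually exclusive for simplicity)."""
    J = np.zeros(n_syn, dtype=bool)
    for _ in range(n_presentations):
        r = rng.random(n_syn)
        J = np.where(r < q_pot, True, np.where(r < q_pot + q_dep, False, J))
    return J.mean()

for q_pot, q_dep in [(0.02, 0.01), (0.01, 0.02), (0.05, 0.001)]:
    sim    = fraction_potentiated(q_pot, q_dep)
    theory = q_pot / (q_pot + q_dep)
    print(f"q_pot={q_pot:.3f}, q_dep={q_dep:.3f}:  "
          f"simulated {sim:.3f}  vs  q_pot/(q_pot+q_dep) = {theory:.3f}")
```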
Neural Computation (2000) 12 (10): 2227–2258.
Published: 01 October 2000
Abstract
We present a model for the spike-driven dynamics of a plastic synapse, suited for a VLSI implementation. The synaptic device behaves as a capacitor on short timescales and preserves the memory of two stable states (efficacies) on long timescales. The transitions (LTP/LTD) are stochastic because both the number and the distribution of neural spikes in any finite (stimulation) interval fluctuate, even at fixed pre- and postsynaptic spike rates. The dynamics of the single synapse is studied analytically by extending the solution to a classic problem in queuing theory (the Takács process). The model of the synapse is implemented in VLSI, using only 18 transistors, and is also directly simulated. The simulations indicate that the LTP/LTD probabilities versus rates are robust to fluctuations of the electronic parameters over a wide range of rates. The solutions for these probabilities are in very good agreement with both the simulations and the measurements. Moreover, the probabilities are readily manipulable by variations of the chip's parameters, even in ranges where they are very small. The tests of the electronic device cover the range from spontaneous activity (3–4 Hz) to stimulus-driven rates (50 Hz). Low transition probabilities can be maintained in all ranges, even though the intrinsic time constants of the device are short (∼100 ms). Synaptic transitions are triggered by elevated presynaptic rates: for low presynaptic rates, there are essentially no transitions. The synaptic device can preserve its memory for years in the absence of stimulation. Stochasticity of learning is a result of the variability of interspike intervals; noise is a feature of the distributed dynamics of the network. The fact that the synapse is binary on long timescales solves the stability problem of synaptic efficacies in the absence of stimulation. Yet stochastic learning theory ensures that it does not affect the collective behavior of the network, provided the transition probabilities are low and LTP is balanced against LTD.
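A rough behavioral sketch of a bistable, spike-driven synapse of the kind described; it is not the chip's circuit nor the paper's Takács-process solution, and every constant (jump sizes, drift, threshold, and the crude stand-in for postsynaptic depolarization) is an illustrative assumption. It only shows how Poisson spike-train variability at fixed rates turns into small, rate-dependent transition probabilities, with essentially no transitions at low presynaptic rates.

```python
import numpy as np

rng = np.random.default_rng(2)

def ltp_probability(rate_pre, rate_post, T=0.5, trials=2000):
    """Estimate the probability that one stimulation interval of length T (s)
    switches a bistable synapse from its low to its high stable state.
    Pre spikes are Poisson; each pre spike kicks an internal variable up if the
    postsynaptic neuron is momentarily 'high' (a crude rate-dependent proxy for
    depolarization), down otherwise; between spikes the variable drifts back
    toward the nearest stable value.  All constants are illustrative."""
    up, down, drift = 0.12, 0.10, 0.35      # jump sizes and drift speed (illustrative)
    theta = 0.5                              # threshold separating the two basins
    p_post_high = min(rate_post / 100.0, 1.0)
    switched = 0
    for _ in range(trials):
        x, t = 0.0, 0.0                      # start in the low stable state
        while True:
            dt = rng.exponential(1.0 / rate_pre)
            t += dt
            if t > T:
                break
            x += up if rng.random() < p_post_high else -down
            x += drift * dt * (1.0 if x > theta else -1.0)   # relax toward nearest stable state
            x = np.clip(x, 0.0, 1.0)
        if x > theta:
            switched += 1
    return switched / trials

for nu_pre, nu_post in [(4, 4), (50, 4), (50, 50)]:
    print(f"pre {nu_pre:2d} Hz, post {nu_post:2d} Hz:  "
          f"P(LTP in one presentation) ~ {ltp_probability(nu_pre, nu_post):.3f}")
```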
Neural Computation (1997) 9 (5): 1071–1092.
Published: 01 July 1997
Abstract
We discuss paradigmatic properties of the activity of single cells comprising an attractor, that is, a developed, stable delay activity distribution. To demonstrate these properties and a methodology for measuring their values, we present a detailed account of the spike activity recorded from a single cell in the inferotemporal cortex of a monkey performing a delayed match-to-sample (DMS) task of visual images. In particular, we discuss and exemplify (1) the relation between spontaneous activity and activity immediately preceding the first stimulus in each trial during a series of DMS trials, (2) the effect on the visual response (i.e., activity during stimulation) of stimulus degradation (moving in the space of IT afferents), (3) the behavior of the delay activity (i.e., activity following visual stimulation) under stimulus degradation (attractor dynamics and the basin of attraction), and (4) the propagation of information between trials, which is the vehicle for the formation of (contextual) correlations by learning a fixed stimulus sequence (Miyashita, 1988). In the course of the discussion and demonstration, we present effective tools for the identification and characterization of attractor dynamics.
Neural Computation (1994) 6 (5): 957–982.
Published: 01 September 1994
Abstract
We discuss the long-term maintenance of acquired memory in the synaptic connections of a perpetually learning electronic device. This is effected by ascribing to each synapse a finite number of stable states, which it can maintain for indefinitely long periods. Learning uncorrelated stimuli is expressed as a stochastic process produced by the neural activities on the synapses. In several interesting cases the stochastic process can be analyzed in detail, leading to a clarification of the performance of the network, as an associative memory, during the process of uninterrupted learning. The stochastic nature of the process and the existence of an asymptotic distribution for the synaptic values in the network imply generically that the memory is a palimpsest, but that its capacity is as low as log N for a network of N neurons. The only way we find of avoiding this tight constraint is to allow the parameters governing the learning process (the coding level of the stimuli, the transition probabilities for potentiation and depression, and the number of stable synaptic levels) to depend on the number of neurons. It is shown that a network whose synapses have two stable states can dynamically learn with optimal storage efficiency, be a palimpsest, and maintain its (associative) memory for an indefinitely long time, provided the coding level is low and depression is equilibrated against potentiation. We suggest that an option so easily implementable in material devices would not have been overlooked by biology. Finally, we discuss stochastic learning on synapses with a variable number of stable states.
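A schematic back-of-the-envelope, not the paper's analysis: if the trace a pattern leaves on its synapses decays geometrically as later patterns overwrite it, then requiring the residual trace to exceed the field fluctuations gives a memory lifetime growing only like log N, unless the coding level (and transition probabilities) shrink with N. The overwrite rate and noise estimates below are crude illustrative assumptions.

```python
import numpy as np

# Binary synapses, potentiation prob q on coincidently active pre/post pairs,
# coding level f.  The trace a pattern leaves decays geometrically, each later
# pattern rewriting a tagged synapse with probability ~ r.  Setting the residual
# trace equal to the ~1/sqrt(Nf) field fluctuations gives a rough lifetime.
def lifetime(N, f, q):
    r = f * f * q                  # crude per-pattern overwrite probability (assumption)
    noise = 1.0 / np.sqrt(N * f)   # crude relative size of field fluctuations (assumption)
    # signal(age) ~ q * (1 - r)**age ; solve signal(age) = noise for age
    return np.log(q / noise) / -np.log(1.0 - r)

for N in (10_000, 100_000, 1_000_000):
    dense  = lifetime(N, f=0.5, q=1.0)               # dense coding, deterministic transitions
    sparse = lifetime(N, f=20 / np.sqrt(N), q=0.5)   # coding level scaled down with N
    print(f"N={N:>9,d}:  lifetime ~ {dense:8.0f} (dense)   {sparse:10.0f} (sparse, f ~ 1/sqrt(N))")
```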
Neural Computation (1993) 5 (1): 1–17.
Published: 01 January 1993
Abstract
It is shown that a simple modification of synaptic structures (of the Hopfield type) constructed to produce autoassociative attractors produces neural networks whose attractors are correlated with several of the (learned) patterns used in the construction of the matrix. The modification stores in the matrix a fixed sequence of uncorrelated patterns. The network then has correlated attractors, provoked by the uncorrelated stimuli. Thus, the network converts the temporal order (or temporal correlation) expressed by the sequence of patterns into spatial correlations expressed in the distributions of neural activities in attractors. The model captures phenomena observed in single-electrode recordings in performing monkeys by Miyashita et al. The correspondence is close enough to reproduce the fact that, given uncorrelated patterns as sequentially learned stimuli, the attractors produced are significantly correlated up to a separation of five in the sequence. This number is universal over a range of parameters and requires essentially no tuning. We then discuss learning scenarios that could lead to this synaptic structure, as well as experimental predictions following from it. Finally, we speculate on the cognitive utility of such an arrangement.
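A small simulation sketch of a Hopfield-type matrix with symmetric couplings between neighbours in a learned sequence, of the kind the abstract describes. The network size, sequence length, and coupling strength a are illustrative assumptions, and a network this small need not reproduce the quantitative separation-of-five result; the sketch only shows attractors acquiring overlaps with sequence neighbours of the stimulating pattern.

```python
import numpy as np

rng = np.random.default_rng(3)

N, p, a = 1000, 20, 0.7                  # neurons, sequence length, sequence coupling (illustrative)
xi = rng.choice([-1, 1], size=(p, N))    # uncorrelated +/-1 patterns, learned as a fixed sequence

# Hopfield-type matrix plus symmetric couplings between neighbours in the sequence
J = (xi.T @ xi).astype(float)
J += a * (xi[:-1].T @ xi[1:] + xi[1:].T @ xi[:-1])
J /= N
np.fill_diagonal(J, 0.0)

def relax(s, sweeps=30):
    """Asynchronous zero-temperature dynamics, run for a fixed number of sweeps."""
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# Stimulate with pattern mu, let the network relax, and measure the overlap of
# the resulting attractor with patterns at increasing separations in the sequence.
mu = p // 2
attractor = relax(xi[mu].copy())
for d in range(7):
    m = (attractor @ xi[mu + d]) / N
    print(f"overlap with pattern at separation {d}: {m:+.2f}")
```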