Yali Amit
Neural Computation (2010) 22 (3): 660–688.
Published: 01 March 2010
Abstract
We compute retrieval probabilities as a function of pattern age for networks with binary neurons and synapses updated with the simple Hebbian learning model studied in Amit and Fusi (1994). The analysis depends on choosing a neural threshold that enables patterns to stabilize in the neural dynamics. In contrast to most earlier work, where selective neurons for each pattern are drawn independently with fixed probability f, here we analyze the situation where f is drawn from a distribution over a range of coding levels. To set a workable threshold in this setting, it is necessary to introduce a simple inhibition into the neural dynamics whose magnitude depends on the total activity of the network. Proper choice of the threshold depends on the value of the covariances between the synapses, for which we provide an explicit formula. Retrieval probabilities depend on the distribution of the fields induced by a learned pattern. We show that the field induced by the first learned pattern evolves as a Markov chain during subsequent learning epochs, leading to a recursive formula for the distribution. Alternatively, the distribution can be computed using a normal approximation, which involves the value of the synaptic covariances. Capacity is computed as the sum of the retrieval probabilities over all ages. We show through simulation that the chosen threshold enables retrieval with asynchronous dynamics even in the presence of significant noise in the initial state of the pattern. The probabilities computed with both methods are shown to be very close to probabilities estimated from simulation. The analysis is extended to randomly connected networks.
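The setup described in this abstract can be illustrated with a small Python sketch, assuming binary {0,1} synapses, stochastic Hebbian potentiation/depression, coding levels drawn from a range, and inhibition proportional to the total activity. The network size, probabilities, threshold, inhibition strength, and synchronous-update dynamics below are illustrative guesses, not values or procedures from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): network size, number of stored
# patterns, and potentiation / depression probabilities ("fast learning").
N, P = 500, 200
q_pot, q_dep = 1.0, 0.3

# Binary synapses in {0, 1}; each pattern's coding level f is drawn from a
# range, as in the variable-coding-level setting analyzed in the paper.
J = rng.integers(0, 2, size=(N, N))
f_levels = rng.uniform(0.02, 0.10, size=P)
patterns = [(rng.random(N) < f).astype(int) for f in f_levels]

for xi in patterns:                              # patterns[0] is the oldest
    both = np.outer(xi, xi) == 1                 # pre and post selective -> potentiate
    mismatch = np.outer(xi, 1 - xi) == 1         # post selective, pre not (one convention) -> depress
    J[both & (rng.random((N, N)) < q_pot)] = 1
    J[mismatch & (rng.random((N, N)) < q_dep)] = 0

def retrieve(cue, steps=20, g_inh=0.6, theta=1.0):
    # Toy synchronous dynamics; the inhibition term grows with total activity,
    # which is what makes a single threshold workable across coding levels.
    s = cue.copy()
    for _ in range(steps):
        field = J @ s - g_inh * s.sum()
        s = (field > theta).astype(int)
    return s

def overlap(state, pattern):
    return (state @ pattern) / max(pattern.sum(), 1)

print("overlap, newest pattern:", overlap(retrieve(patterns[-1]), patterns[-1]))
print("overlap, oldest pattern:", overlap(retrieve(patterns[0]), patterns[0]))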
Neural Computation (2008) 20 (8): 1928–1950.
Published: 01 August 2008
Abstract
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for distinguishing thousands of once-seen stimuli from those never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and we estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of the signal-to-noise ratio of the field difference between selective and non-selective neurons of learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicit a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
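As a rough illustration of the familiarity signal described above, the sketch below trains binary synapses with the same kind of stochastic Hebbian rule in the slow-learning regime and reads out the mean field on a probe's selective neurons. The external field and the analog-neuron analysis are omitted, and all sizes and probabilities are invented for the example.

import numpy as np

rng = np.random.default_rng(1)

# Toy setting (values are not from the paper): binary neurons and synapses,
# fixed coding level f, slow learning (potentiation probability << 1).
N, P, f = 300, 1000, 0.05
q_pot, q_dep = 0.02, 0.01

J = rng.integers(0, 2, size=(N, N))
seen = [(rng.random(N) < f).astype(int) for _ in range(P)]
for xi in seen:                                  # each stimulus is shown once
    both = np.outer(xi, xi) == 1
    mismatch = np.outer(xi, 1 - xi) == 1
    J[both & (rng.random((N, N)) < q_pot)] = 1
    J[mismatch & (rng.random((N, N)) < q_dep)] = 0

def familiarity(probe):
    # One simple readout: the mean field on the probe's selective neurons.
    # Once-seen stimuli leave an excess of potentiated synapses among their
    # selective pairs, so this tends to be larger than for novel stimuli.
    sel = probe == 1
    return (J[sel] @ probe).mean()

recent = seen[-200:]                             # most recently seen stimuli
novel = [(rng.random(N) < f).astype(int) for _ in range(200)]
print("mean familiarity, once seen :", np.mean([familiarity(x) for x in recent]))
print("mean familiarity, never seen:", np.mean([familiarity(x) for x in novel]))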
Neural Computation (2001) 13 (6): 1415–1442.
Published: 01 June 2001
Abstract
We describe a system of thousands of binary perceptrons with coarse oriented edges as input that is able to recognize shapes, even in a context with hundreds of classes. The perceptrons have randomized feedforward connections from the input layer and form a recurrent network among themselves. Each class is represented by a prelearned attractor (serving as an associative hook) in the recurrent net corresponding to a randomly selected subpopulation of the perceptrons. In training, first the attractor of the correct class is activated among the perceptrons; then the visual stimulus is presented at the input layer. The feedforward connections are modified using field-dependent Hebbian learning with positive synapses, which we show to be stable with respect to large variations in feature statistics and coding levels and to allow the use of the same threshold on all perceptrons. Recognition is based only on the visual stimuli. These activate the recurrent network, which is then driven by the dynamics to a sustained attractor state, concentrated in the correct class subset and providing a form of working memory. We believe this architecture is more transparent than standard two-layer feedforward networks and has stronger biological analogies.
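The training step for the feedforward connections can be sketched as follows, under the assumption (mine, not spelled out in the abstract) that "field-dependent" means a synapse is updated only when the perceptron's field falls on the wrong side of a margin around the shared threshold. The recurrent attractor dynamics are omitted and all numerical values are invented.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes and constants; none of these come from the paper.
n_in, n_perc, n_classes = 200, 1000, 10
pop_frac = 0.05                        # fraction of perceptrons per class attractor
theta, margin, dw = 5.0, 2.0, 1.0      # shared threshold, learning margin, step size

W = np.zeros((n_perc, n_in))           # positive feedforward synapses only
class_pop = [rng.random(n_perc) < pop_frac for _ in range(n_classes)]

def train_step(x, label):
    # Activate the prelearned attractor of the correct class, then apply a
    # field-dependent Hebbian update: potentiate onto active perceptrons whose
    # field is still below threshold + margin, depress onto inactive ones whose
    # field is above threshold - margin, and keep all synapses non-negative.
    y = class_pop[label]                       # attractor subpopulation (bool mask)
    h = W @ x                                  # feedforward fields
    pot = y & (h < theta + margin)
    dep = ~y & (h > theta - margin)
    W[pot] += dw * x
    W[dep] -= dw * x
    np.clip(W, 0.0, None, out=W)

x = (rng.random(n_in) < 0.1).astype(float)     # a binary coarse-oriented-edge input
train_step(x, label=3)
print("nonzero synapses after one step:", int((W > 0).sum()))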
Neural Computation (2000) 12 (5): 1141–1164.
Published: 01 May 2000
Abstract
This article describes a parallel neural net architecture for efficient and robust visual selection in generic gray-level images. Objects are represented through flexible star-type planar arrangements of binary local features, which are in turn star-type planar arrangements of oriented edges. Candidate locations are detected over a range of scales and other deformations, using a generalized Hough transform. The flexibility of the arrangements provides the required invariance. Training involves selecting a small number of stable local features from a predefined pool, which are well localized on registered examples of the object. Training therefore requires only small data sets. The parallel architecture is constructed so that the Hough transform associated with any object can be implemented without creating or modifying any connections. The different object representations are learned and stored in a central module. When one of these representations is evoked, it “primes” the appropriate layers in the network so that the corresponding Hough transform is computed. Analogies between the different layers in the network and those in the visual system are discussed. Furthermore, the model can be used to explain certain experiments on visual selection reported in the literature.
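Detection here rests on a generalized Hough transform; the toy sketch below shows only the voting step, where each detected local feature votes for a candidate object center through its stored offset in the star-type arrangement, and nearby votes fall into the same coarse bin (the binning stands in for the arrangement's flexibility). The feature ids, offsets, and detections are made-up example data; the multi-scale search and the priming network are not modeled.

from collections import defaultdict

# Star-type model: each local feature type stores an offset from the feature
# to the object center (example values, not learned from data).
model_offsets = {
    "f1": (10, 0),
    "f2": (-8, 5),
    "f3": (0, -12),
}

# Hypothetical detections of local features in an image: (feature id, position).
detections = [("f1", (40, 60)), ("f2", (58, 55)), ("f3", (50, 73)), ("f1", (120, 30))]

votes = defaultdict(int)
tol = 4                                # bin size: allowed slack in the arrangement
for feat, (x, y) in detections:
    dx, dy = model_offsets[feat]
    cx, cy = round((x + dx) / tol), round((y + dy) / tol)   # quantized predicted center
    votes[(cx, cy)] += 1

best_bin, count = max(votes.items(), key=lambda kv: kv[1])
print("best candidate center (bin, votes):", best_bin, count)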
Neural Computation (1999) 11 (7): 1691–1715.
Published: 01 October 1999
Abstract
We propose a computational model for detecting and localizing instances from an object class in static gray-level images. We divide detection into visual selection and final classification, concentrating on the former: drastically reducing the number of candidate regions that require further, usually more intensive, processing, but with a minimum of computation and missed detections. Bottom-up processing is based on local groupings of edge fragments constrained by loose geometrical relationships. They have no a priori semantic or geometric interpretation. The role of training is to select special groupings that are moderately likely at certain places on the object but rare in the background. We show that the statistics in both populations are stable. The candidate regions are those that contain global arrangements of several local groupings. Whereas our model was not conceived to explain brain functions, it does cohere with evidence about the functions of neurons in V1 and V2, such as responses to coarse or incomplete patterns (e.g., illusory contours) and to scale and translation invariance in IT. Finally, the algorithm is applied to face and symbol detection.
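The training criterion stated above, keeping local groupings that are moderately likely at certain places on the object but rare in the background, can be written as a simple frequency filter. The grouping identifiers, frequencies, and thresholds below are invented for illustration.

# Estimated frequencies of each candidate grouping on registered object
# examples vs. in background regions (made-up numbers).
grouping_stats = {
    "g01": (0.45, 0.02),
    "g02": (0.08, 0.01),   # too rare on the object
    "g03": (0.60, 0.30),   # too common in the background
    "g04": (0.35, 0.03),
}

p_obj_min, p_bg_max = 0.30, 0.05       # illustrative selection thresholds
selected = [g for g, (p_obj, p_bg) in grouping_stats.items()
            if p_obj >= p_obj_min and p_bg <= p_bg_max]
print("selected groupings:", selected)   # -> ['g01', 'g04']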
Neural Computation (1997) 9 (7): 1545–1588.
Published: 10 July 1997
Abstract
We explore a new approach to shape recognition based on a virtually infinite family of binary features (queries) of the image data, designed to accommodate prior information about shape invariance and regularity. Each query corresponds to a spatial arrangement of several local topographic codes (or tags), which are in themselves too primitive and common to be informative about shape. All the discriminating power derives from relative angles and distances among the tags. The important attributes of the queries are a natural partial ordering corresponding to increasing structure and complexity; semi-invariance, meaning that most shapes of a given class will answer the same way to two queries that are successive in the ordering; and stability, since the queries are not based on distinguished points and substructures. No classifier based on the full feature set can be evaluated, and it is impossible to determine a priori which arrangements are informative. Our approach is to select informative features and build tree classifiers at the same time by inductive learning. In effect, each tree provides an approximation to the full posterior where the features chosen depend on the branch that is traversed. Due to the number and nature of the queries, standard decision tree construction based on a fixed-length feature vector is not feasible. Instead we entertain only a small random sample of queries at each node, constrain their complexity to increase with tree depth, and grow multiple trees. The terminal nodes are labeled by estimates of the corresponding posterior distribution over shape classes. An image is classified by sending it down every tree and aggregating the resulting distributions. The method is applied to classifying handwritten digits and synthetic linear and nonlinear deformations of three hundred symbols. State-of-the-art error rates are achieved on the National Institute of Standards and Technology database of digits. The principal goal of the experiments on symbols is to analyze invariance, generalization error, and related issues, and a comparison with artificial neural network methods is presented in this context.
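The tree-growing and aggregation scheme can be sketched with stand-in random binary features in place of the spatial-arrangement queries: each node examines only a small random sample of queries, leaves store posterior estimates over classes, and an image is classified by averaging the posteriors returned by many trees. The data, tree depth, and split criterion (plain Gini impurity) are illustrative simplifications, and the constraint that query complexity grows with depth is omitted.

import numpy as np

rng = np.random.default_rng(3)

# Stand-in data: binary "query answers" whose frequencies depend on the class.
n_queries, n_classes, n_trees, max_depth, q_per_node = 100, 5, 20, 6, 10
y = rng.integers(0, n_classes, size=1500)
X = (rng.random((1500, n_queries)) < 0.1 + 0.08 * y[:, None]).astype(int)

def gini(labels):
    p = np.bincount(labels, minlength=n_classes) / len(labels)
    return 1.0 - (p ** 2).sum()

def grow(Xs, ys, depth=0):
    if depth == max_depth or len(np.unique(ys)) == 1:
        return ("leaf", np.bincount(ys, minlength=n_classes) / len(ys))
    cand = rng.choice(n_queries, q_per_node, replace=False)   # small random sample of queries
    def split_cost(j):
        return sum(gini(ys[Xs[:, j] == v]) * (Xs[:, j] == v).mean()
                   for v in (0, 1) if (Xs[:, j] == v).any())
    q = min(cand, key=split_cost)
    left, right = Xs[:, q] == 0, Xs[:, q] == 1
    if left.all() or right.all():
        return ("leaf", np.bincount(ys, minlength=n_classes) / len(ys))
    return ("node", q, grow(Xs[left], ys[left], depth + 1),
                       grow(Xs[right], ys[right], depth + 1))

def posterior(tree, x):
    # Send one example down a tree and return the leaf's posterior estimate.
    while tree[0] == "node":
        tree = tree[2] if x[tree[1]] == 0 else tree[3]
    return tree[1]

trees = [grow(X, y) for _ in range(n_trees)]
agg = np.mean([posterior(t, X[0]) for t in trees], axis=0)    # aggregate leaf posteriors
print("true class:", y[0], " predicted:", int(np.argmax(agg)))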