David Willshaw
Journal Articles
Publisher: Journals Gateway
Neural Computation (2008) 20 (2): 311–344.
Published: 01 February 2008
Abstract
We investigate how various inhomogeneities present in synapses and neurons affect the performance of feedforward associative memories with linear learning, a high-level network model of hippocampal circuitry and plasticity. The inhomogeneities incorporated into the model are differential input attenuation, stochastic synaptic transmission, and memories learned with varying intensity. For a class of local learning rules, we determine the memory capacity of the model by extending previous analysis. We find that the signal-to-noise ratio (SNR), a measure of fidelity of recall, depends on the coefficients of variation (CVs) of the attenuation factors, the transmission variables, and the intensity of the memories, as well as the parameters of the learning rule, the pattern sparsity, and the number of memories stored. To predict the effects of attenuation due to extended dendritic trees, we use distributions of attenuations appropriate to unbranched and branched dendritic trees. Biological parameters for stochastic transmission are used to determine the CV of the transmission factors. The reduction in SNR due to differential attenuation is surprisingly low compared to the reduction due to stochastic transmission. Training a network by storing memories at different intensities is equivalent to using a learning rule incorporating weight decay. In this type of network, new memories can be stored continuously at the expense of older ones being forgotten (a palimpsest). We show that there is an optimal rate of weight decay that maximizes the capacity of the network, which is a factor of e lower than its nonpalimpsest equivalent.
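The palimpsest behavior described above can be illustrated with a small simulation: a covariance-style Hebbian rule with multiplicative weight decay, scoring recall of the newest memory by its signal-to-noise ratio. This is a minimal sketch, not the paper's model; the network size, sparsity, and decay rate below are illustrative values chosen for a quick run.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500      # input and output units
M = 200      # memories stored in sequence
p = 0.1      # pattern sparsity (probability a unit is active)
lam = 0.99   # weight-decay factor applied per stored memory (illustrative)

# Random sparse binary pattern pairs (cue X -> target Y)
X = (rng.random((M, N)) < p).astype(float)
Y = (rng.random((M, N)) < p).astype(float)

# Covariance-style Hebbian storage with multiplicative weight decay:
# each new memory is written at full intensity while older traces fade,
# which is the palimpsest equivalence noted above.
W = np.zeros((N, N))
for m in range(M):
    W = lam * W + np.outer(Y[m] - p, X[m] - p)

# Recall the newest memory and estimate the SNR of its dendritic sums
d = W @ X[-1]
hi = d[Y[-1] == 1]   # sums at units that should fire
lo = d[Y[-1] == 0]   # sums at units that should stay silent
snr = (hi.mean() - lo.mean()) ** 2 / lo.var()
print(f"recall SNR for the newest memory: {snr:.1f}")
```

Sweeping `lam` and re-measuring the SNR for memories of different ages would expose the palimpsest trade-off: slower decay retains old memories longer but lowers the SNR of the newest ones.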
Neural Computation (1999) 11 (1): 117–137.
Published: 01 January 1999
Abstract
The associative net model of heteroassociative memory with binary-valued synapses has been extended to include recent experimental data indicating that in the hippocampus, one form of synaptic modification is a change in the probability of synaptic transmission. Pattern pairs are stored in the net by a version of the Hebbian learning rule that changes the probability of transmission at synapses where the presynaptic and postsynaptic units are simultaneously active from a low, base value to a high, modified value. Numerical calculations of the expected recall response of this stochastic associative net have been used to assess the performance for different values of the base and modified probabilities. If there is a cost incurred with generating the difference between these probabilities, then a difference of about 0.4 is optimal. This corresponds to the magnitude of change seen experimentally. Performance can be greatly enhanced by using multiple cue presentations during recall.
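The stochastic associative net can be sketched directly: binary pattern pairs, a Hebbian rule that raises the transmission probability from a base to a modified value at co-active synapses, and recall that averages several noisy cue presentations. The sizes and probabilities below are illustrative assumptions (the 0.1 to 0.5 step echoes the 0.4 difference mentioned above, but nothing else is taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

N = 256                    # units per layer
M = 20                     # stored pattern pairs
K = 16                     # active units per pattern
p_base, p_mod = 0.1, 0.5   # base and modified transmission probabilities
R = 10                     # cue presentations averaged during recall

def sparse_pattern():
    v = np.zeros(N)
    v[rng.choice(N, size=K, replace=False)] = 1.0
    return v

A = np.array([sparse_pattern() for _ in range(M)])  # cue patterns
B = np.array([sparse_pattern() for _ in range(M)])  # target patterns

# Hebbian storage: co-active pre/post pairs get the modified probability
P = np.full((N, N), p_base)
for a, b in zip(A, B):
    P[np.outer(b, a) > 0] = p_mod

# Stochastic recall: each presentation samples which synapses transmit;
# averaging over R presentations suppresses the transmission noise.
cue = A[0]
sums = np.zeros(N)
for _ in range(R):
    transmit = (rng.random((N, N)) < P) & (cue[None, :] > 0)
    sums += transmit.sum(axis=1)
sums /= R

# Take the K output units with the largest averaged sums as the recalled pattern
recalled = np.zeros(N)
recalled[np.argsort(sums)[-K:]] = 1.0
overlap = float(recalled @ B[0]) / K
print(f"fraction of target units recovered: {overlap:.2f}")
```

Setting `R = 1` and rerunning shows the benefit of multiple cue presentations: single-shot recall is noticeably noisier than the averaged version.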
Neural Computation (1997) 9 (4): 911–936.
Published: 15 May 1997
Abstract
Marr's proposal for the functioning of the neocortex (Marr, 1970) is the least known of his various theories for specific neural circuitries. He suggested that the neocortex learns by self-organization to extract the structure from the patterns of activity incident upon it. He proposed a feedforward neural network in which the connections to the output cells (identified with the pyramidal cells of the neocortex) are modified by a mechanism of competitive learning. It was intended that each output cell comes to be selective for the input patterns from a different class and is able to respond to new patterns from the same class that have not been seen before. The learning rule that Marr proposed was underspecified, but a logical extension of the basic idea results in a synaptic learning rule in which the total amount of synaptic strength of the connections from each input (“presynaptic”) cell is kept at a constant level. In contrast, conventional competitive learning involves rules of the “postsynaptic” type. The network learns by exploiting the structure that Marr assumed to exist within the ensemble of input patterns. For this case, an analysis is possible that extends the one carried out by Marr, which was restricted to the binary classification task. This analysis is presented here, together with results from computer simulations of different types of competitive learning mechanisms. The presynaptic mechanism is little known in the computational neuroscience literature. In neural network applications, it may be a more suitable mechanism of competitive learning than those normally considered.
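A presynaptic competitive learning rule of the kind discussed can be sketched as follows: after each winner-take-all update, the weights are renormalized so that the total outgoing strength of each input cell stays constant (a constraint on the columns of the weight matrix), rather than the usual postsynaptic constraint on each output cell's incoming weights. All sizes, rates, and the toy input classes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_in, n_out = 64, 8
eta = 0.1   # learning rate (illustrative)

# Presynaptic constraint: the total outgoing strength of each input cell
# (each column of W) is held constant at 1.
W = rng.random((n_out, n_in))
W /= W.sum(axis=0, keepdims=True)

def presynaptic_step(W, x):
    """One winner-take-all update with presynaptic normalization."""
    winner = np.argmax(W @ x)
    W[winner] += eta * x                 # strengthen the winner's active synapses
    W /= W.sum(axis=0, keepdims=True)    # renormalize each input cell's outflow
    return W

# Inputs drawn from four noisy classes; output cells should come to
# specialize, each responding to inputs from one class.
centers = (rng.random((4, n_in)) < 0.25).astype(float)
for _ in range(500):
    c = centers[rng.integers(4)]
    x = np.clip(c + 0.2 * rng.standard_normal(n_in), 0.0, 1.0)
    W = presynaptic_step(W, x)

winners = [int(np.argmax(W @ c)) for c in centers]
print("winning output unit per class:", winners)
```

The design difference from conventional competitive learning is the axis of the normalization: a postsynaptic rule would divide each row of `W` by its own sum, whereas here every column (one input cell's fan-out) is conserved.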
Neural Computation (1990) 2 (1): 85–93.
Published: 01 March 1990
Abstract
A recent article (Stanton and Sejnowski 1989) on long-term synaptic depression in the hippocampus has reopened the issue of the computational efficiency of particular synaptic learning rules (Hebb 1949; Palm 1988a; Morris and Willshaw 1989) — homosynaptic versus heterosynaptic and monotonic versus nonmonotonic changes in synaptic efficacy. We have addressed these questions by calculating and maximizing the signal-to-noise ratio, a measure of the potential fidelity of recall, in a class of associative matrix memories. Up to a multiplicative constant, there are three optimal rules, each providing for synaptic depression such that positive and negative changes in synaptic efficacy balance out. For one rule, which is found to be the Stent-Singer rule (Stent 1973; Rauschecker and Singer 1979), the depression is purely heterosynaptic; for another (Stanton and Sejnowski 1989), the depression is purely homosynaptic; for the third, which is a generalization of the first two, and has a higher signal-to-noise ratio, it is both heterosynaptic and homosynaptic. The third rule takes the form of a covariance rule (Sejnowski 1977a,b) and includes, as a special case, the prescription due to Hopfield (1982) and others (Willshaw 1971; Kohonen 1972).
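The balance property of the three optimal rules can be checked with a short calculation: writing each rule as a weight-change function of binary pre- and postsynaptic activity, the expected change under independent activity at sparsity p is zero, so potentiation and depression cancel. The rule forms below follow the heterosynaptic, homosynaptic, and covariance descriptions in the abstract; the value of p is illustrative.

```python
p = 0.2   # probability a unit is active (illustrative sparsity)

rules = {
    # Stent-Singer: purely heterosynaptic depression (post active, pre silent)
    "heterosynaptic": lambda pre, post: post * (pre - p),
    # Stanton-Sejnowski: purely homosynaptic depression (pre active, post silent)
    "homosynaptic": lambda pre, post: pre * (post - p),
    # Covariance rule: both kinds of depression; the highest SNR of the three
    "covariance": lambda pre, post: (pre - p) * (post - p),
}

def prob(a):
    """Probability of a binary activity value under sparsity p."""
    return p if a else 1.0 - p

# For each rule, sum the weight change over the four pre/post combinations,
# weighted by their probabilities: the expectation comes out to zero.
expected = {
    name: sum(rule(a, b) * prob(a) * prob(b)
              for a in (0, 1) for b in (0, 1))
    for name, rule in rules.items()
}
for name, e in expected.items():
    print(f"{name:>14s}: E[dW] = {e:+.4f}")
```

The covariance rule is the sum of the other two shifted forms, which is why it generalizes both: it depresses on either kind of pre/post mismatch while keeping the same zero-mean balance.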