Günther Palm (1-4 of 4 results)
Neural Computation (2020) 32 (1): 205–260.
Published: 01 January 2020
Abstract
Neural associative memories (NAM) are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Gripon and Berrou (2011) investigated NAM employing block coding, a particular sparse coding method, and reported a significant increase in storage capacity. Here we verify and extend their results for both heteroassociative and recurrent autoassociative networks. For this we provide a new analysis of iterative retrieval in finite autoassociative and heteroassociative networks that allows estimating storage capacity for random and block patterns. Furthermore, we have implemented various retrieval algorithms for block coding and compared them in simulations to our theoretical results and previous simulation data. In good agreement between theory and experiments, we find that finite networks employing block coding can store significantly more memory patterns. However, due to the reduced information per block pattern, it is not possible to significantly increase stored information per synapse. Asymptotically, the information retrieval capacity converges to the known limits C = ln 2 ≈ 0.69 and C = (ln 2)/4 ≈ 0.17 also for block coding. We have also implemented very large recurrent networks up to n = 2·10^6 neurons, showing that maximal capacity C ≈ 0.2 bit per synapse occurs for finite networks having a size n ≈ 10^5, similar to cortical macrocolumns.
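As a concrete illustration of the setting, here is a minimal sketch of a binary Willshaw-type autoassociative memory with block-coded patterns. It is a simplified toy model, not the retrieval algorithms analyzed in the article: n neurons are split into b blocks, each stored pattern activates exactly one unit per block, storage is clipped Hebbian learning, and retrieval applies one winner-take-all step per block. All parameter values (b, block_size, num_patterns) are illustrative.

import numpy as np

rng = np.random.default_rng(0)
b, block_size = 20, 50          # 20 blocks of 50 units each -> n = 1000 neurons
n = b * block_size
num_patterns = 200

# Each stored pattern is given by the index of its active unit in every block.
patterns = rng.integers(block_size, size=(num_patterns, b))

def to_units(p):
    # Convert per-block winner indices to flat indices of active neurons.
    return np.arange(b) * block_size + p

# Clipped (binary) Hebbian storage: a synapse is 1 if its two neurons
# were ever coactive in a stored pattern.
W = np.zeros((n, n), dtype=np.uint8)
for p in patterns:
    idx = to_units(p)
    W[np.ix_(idx, idx)] = 1

def retrieve(cue_units, steps=3):
    # Iterative retrieval: sum synaptic input, then one winner-take-all
    # step inside every block.
    active = np.asarray(cue_units)
    for _ in range(steps):
        sums = W[:, active].sum(axis=1)
        active = np.array([blk * block_size +
                           np.argmax(sums[blk * block_size:(blk + 1) * block_size])
                           for blk in range(b)])
    return active

# Cue with only half of the blocks of the first stored pattern.
full = to_units(patterns[0])
print(np.array_equal(retrieve(full[:b // 2]), full))   # True if retrieval worked

At this low memory load the half-pattern cue is completed reliably; increasing num_patterns toward the capacity limit makes spurious winners appear in the winner-take-all step.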
Neural Computation (2010) 22 (2): 289–341.
Published: 01 February 2010
Abstract
Neural associative networks with plastic synapses have been proposed as computational models of brain functions and also for applications such as pattern recognition and information retrieval. To guide biological models and optimize technical applications, several definitions of memory capacity have been used to measure the efficiency of associative memory. Here we explain why the currently used performance measures bias the comparison between models and cannot serve as a theoretical benchmark. We introduce fair measures for information-theoretic capacity in associative memory that also provide a theoretical benchmark. In neural networks, two types of manipulating synapses can be discerned: synaptic plasticity, the change in strength of existing synapses, and structural plasticity, the creation and pruning of synapses. One of the new types of memory capacity we introduce permits quantifying how structural plasticity can increase the network efficiency by compressing the network structure, for example, by pruning unused synapses. Specifically, we analyze operating regimes in the Willshaw model in which structural plasticity can compress the network structure and push performance to the theoretical benchmark. The amount C of information stored in each synapse can scale with the logarithm of the network size rather than being constant, as in classical Willshaw and Hopfield nets (C ⩽ ln 2 ≈ 0.7). Further, the review contains novel technical material: a capacity analysis of the Willshaw model that rigorously controls for the level of retrieval quality, an analysis for memories with a nonconstant number of active units (where C ⩽ 1/(e ln 2) ≈ 0.53), and the analysis of the computational complexity of associative memories with and without network compression.
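To make the different capacity notions concrete, the following back-of-the-envelope sketch uses standard Willshaw-model approximations (an illustration, not the article's rigorous analysis) to compute the memory load p1, the information stored per physical synapse, and the information per bit of an optimally compressed weight matrix. All pattern statistics (m, n, k, l, M) are illustrative choices.

import math

m = n = 10_000        # address / content population sizes
k = l = 18            # active units per pattern (sparse, roughly log2(n))

def binary_entropy(p):
    # Shannon entropy of a Bernoulli(p) synapse state, in bits.
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for M in (20_000, 200_000):                      # number of stored pattern pairs
    # Fraction of synapses switched on after storing M random pattern pairs.
    p1 = 1.0 - (1.0 - k * l / (m * n)) ** M
    # Retrievable information per pattern, roughly l * log2(n / l) bits
    # (ignoring lower-order terms and the small high-fidelity error loss).
    stored_bits = M * l * math.log2(n / l)
    c_synapse = stored_bits / (m * n)                          # bits per synapse
    c_compressed = stored_bits / (m * n * binary_entropy(p1))  # per stored bit
    print(f"M={M:>7}: p1={p1:.3f}  C={c_synapse:.3f}  "
          f"C_compressed={c_compressed:.3f}  (bound ln 2 ≈ 0.693)")

The two loads illustrate the point of the compressed measure: near p1 = 0.5 compression gains nothing, whereas at low load the per-synapse capacity is small but compressing the sparse connectivity matrix recovers a substantial factor in efficiency.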
Neural Computation (1998) 10 (3): 555–565.
Published: 01 April 1998
Abstract
We present rules for the unsupervised learning of coincidence between excitatory postsynaptic potentials (EPSPs) by the adjustment of postsynaptic delays between the transmitter binding and the opening of ion channels. Starting from a gradient descent scheme, we develop a robust and more biological threshold rule by which EPSPs from different synapses can be gradually pulled into coincidence. The synaptic delay changes are determined from the summed potential at the site where the coincidence is to be established and from postulated synaptic learning functions that accompany the individual EPSPs. According to our scheme, templates for the detection of spatiotemporal patterns of synaptic activation can be learned, which is demonstrated by computer simulation. Finally, we discuss possible relations to biological mechanisms.
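The general idea can be illustrated with a deliberately simplified toy version (not the article's threshold rule or its synaptic learning functions): each synapse carries an adjustable delay, and repeated gradient-descent updates on the spread of EPSP arrival times pull them toward a common value, so that the EPSPs come to coincide. All quantities and parameter values below are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n_syn = 8
t_input = rng.uniform(0.0, 10.0, n_syn)   # fixed afferent spike times (ms)
d = np.zeros(n_syn)                       # learnable postsynaptic delays (ms)
eta = 0.1                                 # learning rate

for epoch in range(1000):
    arrival = t_input + d                 # when each EPSP reaches the soma
    # Gradient step on the spread of arrival times: every delay is nudged
    # toward the current mean arrival, i.e. toward coincidence.
    d += eta * (arrival.mean() - arrival)
    d = np.clip(d, 0.0, None)             # delays cannot become negative

print("spread before:", np.ptp(t_input), "ms, after:", np.ptp(t_input + d), "ms")

Because delays can only be positive, the latest-arriving EPSP stays fixed and all others are delayed toward it, which is the simplest way the coincidence constraint can be satisfied in this toy setting.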
Neural Computation (1992) 4 (5): 703–711.
Published: 01 September 1992
Abstract
A simple relation between the storage capacity A for autoassociation and H for heteroassociation with a local learning rule is demonstrated: H = 2A. Both values are bounded by local learning bounds: A ≤ L_A and H ≤ L_H. L_H = 2L_A is evaluated numerically.