Memory is a fundamental function of computational systems such as the human brain. Theoretical models identify memories with attractors of neural network activity, on the grounds that attractor (recurrent) neural networks capture crucial characteristics of memory, including encoding, storage, retrieval, and both long-term and working memory. In such networks, long-term storage of memory patterns is enabled by synaptic strengths that are adjusted by activity-dependent plasticity mechanisms (of which the most widely recognized is the Hebbian rule) so that the attractors of the network dynamics represent the stored memories. Most previous studies of associative memory have focused on Hopfield-like binary networks, and the learned patterns are often assumed to be uncorrelated so that interference between memories is minimized. In this letter, we restrict our attention to a more biologically plausible attractor network model and study the neuronal representation of correlated patterns, examining the role of saliency weights in memory dynamics. Our results demonstrate that retrieval of the memorized patterns is characterized by the saliency distribution, which shapes the attractor landscape. We establish the conditions under which the network state converges to a unique memory and those under which it converges to multiple memories. The analytical results also hold for variable coding levels and for nonbinary activity levels, indicating a general property emerging from correlated memories. Our results confirm the advantage of computing with graded-response neurons over binary neurons (namely, a reduction of spurious states), and we find that a nonuniform saliency distribution can contribute to the disappearance of spurious states when they exist.
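As a minimal sketch of the kind of mechanism described above (not the authors' model, whose details are behind the paywall), one can store patterns in a graded-response attractor network with a saliency-weighted Hebbian rule, $J_{ij} = \frac{1}{N}\sum_\mu w_\mu \xi_i^\mu \xi_j^\mu$, and let the dynamics relax to an attractor. The network size, saliency values, noise level, and gain below are illustrative assumptions, as is the specific weighted-Hebbian form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and saliency weights -- illustrative values only.
N, P = 200, 3                     # neurons, stored patterns
w = np.array([1.5, 1.0, 0.5])     # nonuniform saliency weights, one per pattern

# Random +/-1 patterns (correlations between patterns could be introduced here).
xi = rng.choice([-1.0, 1.0], size=(P, N))

# Saliency-weighted Hebbian couplings: J_ij = (1/N) * sum_mu w_mu xi_i^mu xi_j^mu
J = (w[:, None] * xi).T @ xi / N
np.fill_diagonal(J, 0.0)          # no self-coupling

def run(state, steps=100, beta=5.0):
    """Graded-response dynamics: s <- tanh(beta * J s)."""
    for _ in range(steps):
        state = np.tanh(beta * J @ state)
    return state

# Retrieve from a noisy cue of pattern 0 (~10% of entries flipped).
cue = xi[0] * rng.choice([-1.0, 1.0], size=N, p=[0.1, 0.9])
s = run(cue.astype(float))

# Overlap with each stored pattern indicates which memory was retrieved;
# an overlap near 1 for a single pattern means convergence to a unique memory.
overlaps = xi @ np.sign(s) / N
print(overlaps)
```

Varying the entries of `w` in this sketch gives a rough sense of how a saliency distribution can reshape the attractor landscape, for example favoring convergence to high-saliency patterns over mixtures.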
