It is shown that a simple modification of synaptic structures of the Hopfield type, constructed to produce autoassociative attractors, yields neural networks whose attractors are correlated with several of the (learned) patterns used in constructing the matrix. The modification stores a fixed sequence of uncorrelated patterns in the matrix. The network then has correlated attractors, provoked by the uncorrelated stimuli. Thus, the network converts the temporal order (or temporal correlation) expressed by the sequence of patterns into spatial correlations expressed in the distributions of neural activities in the attractors. The model captures phenomena observed in single-electrode recordings from performing monkeys by Miyashita et al. The correspondence is close enough to reproduce the finding that, given uncorrelated patterns as sequentially learned stimuli, the resulting attractors are significantly correlated up to a separation of 5 (five) in the sequence. This number 5 is universal across a range of parameters and requires essentially no tuning. We then discuss learning scenarios that could lead to this synaptic structure, as well as experimental predictions following from it. Finally, we speculate on the cognitive utility of such an arrangement.
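The construction described above can be illustrated with a minimal sketch: a standard Hebbian (autoassociative) coupling matrix augmented by a term linking each pattern in the learned sequence to its successor. The specific form of the sequence term, the coupling strength `a`, and the synchronous sign dynamics below are illustrative assumptions for this sketch, not necessarily the paper's exact construction or parameter values.

```python
import numpy as np

def build_matrix(patterns, a=0.0):
    """Hopfield-type couplings plus a sequence term.

    patterns : array of shape (p, n) with entries in {-1, +1}.
    a        : hypothetical strength of the term coupling pattern mu
               to pattern mu+1 (an assumption of this sketch).
    """
    p, n = patterns.shape
    # Standard autoassociative (Hebbian) term.
    J = patterns.T @ patterns / n
    # Sequence term: stores the fixed order of the patterns.
    for mu in range(p - 1):
        J += a * np.outer(patterns[mu + 1], patterns[mu]) / n
    np.fill_diagonal(J, 0.0)  # no self-couplings
    return J

def run_to_attractor(J, state, max_steps=100):
    """Deterministic synchronous dynamics until a fixed point (or step limit)."""
    for _ in range(max_steps):
        new = np.where(J @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state
```

With `a = 0` this reduces to the usual autoassociative case, where each stored pattern is (with high probability) its own attractor; with a nonzero sequence term, the attractor reached from one stimulus acquires overlap with neighboring patterns in the sequence, which is the conversion of temporal into spatial correlations described above.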
