Neural Computation (2021) 33 (1): 1–40.
Published: 01 December 2021
Working memory is essential: it serves to guide the intelligent behavior of humans and nonhuman primates when task-relevant stimuli are no longer present to the senses. Moreover, complex tasks often require that multiple working memory representations can be flexibly and independently maintained, prioritized, and updated according to changing task demands. Thus far, neural network models of working memory have been unable to offer an integrative account of how such control mechanisms can be acquired in a biologically plausible manner. Here, we present WorkMATe, a neural network architecture that models cognitive control over working memory content and learns the appropriate control operations needed to solve complex working memory tasks. Key components of the model include a gated memory circuit controlled by internal actions, sensory encoding through untrained connections, and a neural circuit that matches sensory inputs to memory content. The network is trained by means of a biologically plausible reinforcement learning rule that relies on attentional feedback and reward prediction errors to guide synaptic updates. We demonstrate that the model successfully acquires policies to solve classical working memory tasks, such as delayed recognition and delayed pro-saccade/anti-saccade tasks. In addition, the model solves much more complex tasks, including the hierarchical 12-AX task and the ABAB ordered recognition task, both of which require an agent to independently store and update multiple items in memory. Furthermore, the control strategies that the model acquires for these tasks subsequently generalize to new task contexts with novel stimuli, thus bringing symbolic production rule qualities to a neural network architecture. As such, WorkMATe provides a new solution for the neural implementation of flexible memory control.
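The abstract's key components, gated memory slots written by internal actions, sensory encoding through fixed untrained connections, and a match circuit comparing input to memory, can be illustrated with a minimal sketch. This is not the paper's implementation: the slot count, the random-projection encoder, and the cosine-similarity match are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, n_slots = 8, 2

# Untrained, fixed random projection encodes sensory input (per the abstract,
# encoding uses untrained connections; the tanh nonlinearity is an assumption).
encoder = rng.normal(0.0, 1.0, (n_feat, n_feat))

memory = np.zeros((n_slots, n_feat))  # independently gated memory blocks

def encode(stimulus):
    return np.tanh(encoder @ stimulus)

def gate(slot, stimulus):
    """Internal action: overwrite one memory slot, leaving the others intact."""
    memory[slot] = encode(stimulus)

def match(stimulus):
    """Match circuit: similarity of the current input to each stored item."""
    code = encode(stimulus)
    return memory @ code / (np.linalg.norm(memory, axis=1)
                            * np.linalg.norm(code) + 1e-9)

a = rng.random(n_feat)
b = rng.random(n_feat)
gate(0, a)          # store A in slot 0
gate(1, b)          # store B in slot 1 without disturbing slot 0
scores = match(a)   # slot 0 should match stimulus A more strongly than slot 1
```

In the full model the gating actions themselves are selected by a learned policy; here they are invoked by hand only to show how independent storage and matching interact.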
Neural Computation (2005) 17 (10): 2176–2214.
Published: 01 October 2005
Animal learning is associated with changes in the efficacy of connections between neurons. The rules that govern this plasticity can be tested in neural networks. Rules that train neural networks to map stimuli onto outputs are given by supervised learning and reinforcement learning theories. Supervised learning is efficient but biologically implausible. In contrast, reinforcement learning is biologically plausible but comparatively inefficient. It lacks a mechanism that can identify units at early processing levels that play a decisive role in the stimulus-response mapping. Here we show that this so-called credit assignment problem can be solved by a new role for attention in learning. There are two factors in our new learning scheme that determine synaptic plasticity: (1) a reinforcement signal that is homogeneous across the network and depends on the amount of reward obtained after a trial, and (2) an attentional feedback signal from the output layer that limits plasticity to those units at earlier processing levels that are crucial for the stimulus-response mapping. The new scheme is called attention-gated reinforcement learning (AGREL). We show that it is as efficient as supervised learning in classification tasks. AGREL is biologically realistic and integrates the role of feedback connections, attention effects, synaptic plasticity, and reinforcement learning signals into a coherent framework.
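The two-factor rule described in the abstract, a global reward prediction error combined with attentional feedback from the output layer, can be sketched as a single update step. This is an illustrative reading of the scheme, not the paper's exact equations; the network sizes, the sigmoid/softmax choices, and the learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 6, 2
W_in = rng.normal(0, 0.1, (n_hid, n_in))    # input -> hidden weights
W_out = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.random(n_in)                  # stimulus
h = sigmoid(W_in @ x)                 # hidden activity
p = np.exp(W_out @ h); p /= p.sum()   # softmax over candidate actions

action = rng.choice(n_out, p=p)       # stochastic action selection
reward = 1.0 if action == 0 else 0.0  # dummy task feedback for illustration
delta = reward - p[action]            # factor 1: global reward prediction error

# Factor 2: attentional feedback from the winning output unit restricts
# plasticity to hidden units that actually drove the chosen response.
fb = W_out[action] * h * (1.0 - h)

lr = 0.1
W_out[action] += lr * delta * h           # output weights: RPE x presynaptic
W_in += lr * delta * np.outer(fb, x)      # hidden weights: RPE x feedback x input
```

The point of the sketch is the factorization: every synapse sees the same scalar `delta`, and only the feedback term `fb` differs across hidden units, which is how the scheme addresses credit assignment without the per-unit error signals of supervised backpropagation.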
Neural Computation (1995) 7 (3): 469–485.
Published: 01 May 1995
Recent work suggests that synchronization of neuronal activity could serve to define functionally relevant relationships between spatially distributed cortical neurons. At present, it is not known to what extent this hypothesis is compatible with the widely supported notion of coarse coding, which assumes that features of a stimulus are represented by the graded responses of a population of optimally and suboptimally activated cells. To resolve this issue we investigated the temporal relationship between responses of optimally and suboptimally stimulated neurons in area 17 of cat visual cortex. We find that optimally and suboptimally activated cells can synchronize their responses with a precision of a few milliseconds. However, there are consistent and systematic deviations of the phase relations from zero phase lag. Systematic variation of the orientation of visual stimuli shows that optimally driven neurons tend to lead over suboptimally activated cells. The observed phase lag depends linearly on the stimulus orientation and is, in addition, proportional to the difference between the preferred orientations of the recorded cells. Similar effects occur when testing the influence of the movement direction and the spatial frequency of visual stimuli. These results suggest that binding by synchrony can be used to define assemblies of neurons representing a coarse-coded stimulus. Furthermore, they allow a quantitative test of neuronal network models designed to reproduce physiological results on stimulus-specific synchronization.
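The reported regularity, that the better-driven cell leads and that the lag scales with the difference between the cells' preferred orientations, can be summarized by a simple phenomenological relation. This is purely illustrative: the functional form and the slope constant are assumptions, not the fitted values from the recordings.

```python
def predicted_lag_ms(theta_stim, pref_lead, pref_lag, k=0.05):
    """Illustrative phase lag (ms) of the less-well-driven cell.

    Each cell's drive falls off with the distance between its preferred
    orientation and the stimulus orientation; the closer (better-driven)
    cell leads. k (ms per degree) is a made-up constant for illustration.
    """
    return k * (abs(theta_stim - pref_lag) - abs(theta_stim - pref_lead))

# A stimulus at 30 deg: a cell preferring 30 deg leads one preferring 50 deg.
lag = predicted_lag_ms(30.0, 30.0, 50.0)  # k * (20 - 0) = 1.0 ms
```

Under this toy relation the lag varies linearly as the stimulus orientation is rotated and grows with the preference difference between the two cells, matching the qualitative pattern the abstract describes.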