Anne G. E. Collins
Journal of Cognitive Neuroscience (2023) 35 (2): 314–330.
Published: 01 February 2023
Abstract
In reinforcement learning (RL) experiments, participants learn to make rewarding choices in response to different stimuli; RL models use outcomes to estimate stimulus–response values that change incrementally. RL models consider any response type indiscriminately, ranging from more concretely defined motor choices (pressing a key with the index finger) to more general choices that can be executed in a number of ways (selecting dinner at the restaurant). However, does the learning process vary as a function of the choice type? In Experiment 1, we show that it does: Participants were slower and less accurate in learning correct choices of a general format compared with learning more concrete motor actions. Using computational modeling, we show that two mechanisms contribute to this. First, there was evidence of irrelevant credit assignment: The values of motor actions interfered with the values of other choice dimensions, resulting in more incorrect choices when the correct response was not defined by a single motor action. Second, information integration for relevant general choices was slower. In Experiment 2, we replicated and extended the findings from Experiment 1 by showing that the slowed learning was attributable to weaker use of working memory rather than to slowed RL. In both experiments, we ruled out the explanation that the difference in performance between the two condition types was driven by differences in difficulty or complexity. We conclude that defining a more abstract choice space, used by multiple learning systems for credit assignment, recruits executive resources, limiting how much those processes can then contribute to fast learning.
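As a minimal sketch of the incremental stimulus–response value update mentioned in this abstract (a generic delta-rule / Q-learning-style update, not the paper's fitted model; the learning rate, value store, and example stimulus/response labels are assumptions for illustration):

def update_value(Q, stimulus, response, reward, alpha=0.1):
    # One incremental stimulus-response value update (delta rule).
    # Q maps (stimulus, response) pairs to value estimates;
    # alpha is a hypothetical learning rate (fit to data in practice).
    key = (stimulus, response)
    prediction_error = reward - Q.get(key, 0.0)   # outcome minus current estimate
    Q[key] = Q.get(key, 0.0) + alpha * prediction_error
    return Q

# Repeatedly rewarding the same choice moves its value incrementally toward 1:
Q = {}
for _ in range(5):
    Q = update_value(Q, stimulus="circle", response="press_index_finger", reward=1.0)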
Journal of Cognitive Neuroscience (2022) 34 (4): 551–568.
Published: 05 March 2022
Abstract
Reinforcement learning and working memory are two core processes of human cognition and are often considered cognitively, neuroscientifically, and algorithmically distinct. Here, we show that the brain networks that support them actually overlap significantly and that they are less distinct cognitive processes than often assumed. We review literature demonstrating the benefits of considering each process to explain properties of the other and highlight recent work investigating their more complex interactions. We discuss how future research in both computational and cognitive sciences can benefit from one another, suggesting that a key missing piece for artificial agents to learn to behave with more human-like efficiency is taking working memory's role in learning seriously. This review highlights the risks of neglecting the interplay between different processes when studying human behavior (in particular when considering individual differences). We emphasize the importance of investigating these dynamics to build a comprehensive understanding of human cognition.
Journal of Cognitive Neuroscience (2018) 30 (10): 1422–1432.
Published: 01 October 2018
Abstract
Learning to make rewarding choices in response to stimuli depends on a slow but steady process (reinforcement learning) and a fast and flexible, but capacity-limited, process (working memory). Using both systems in parallel, with their contributions weighted based on performance, should allow us to leverage the best of each system: rapid early learning, supplemented by robust long-term acquisition. However, this assumes that using one process does not interfere with the other. We use computational modeling to investigate the interactions between the two processes in a behavioral experiment and show that working memory interferes with reinforcement learning. Previous research showed that neural representations of reward prediction errors, a key marker of reinforcement learning, were blunted when working memory was used for learning. We thus predicted that arbitrating in favor of working memory to learn faster in simple problems would weaken the reinforcement learning process. We tested this by measuring performance in a delayed testing phase in which the use of working memory was impossible, so that participants' choices depended on reinforcement learning. Counterintuitively, but confirming our predictions, we observed that associations learned most easily were retained less well than associations learned more slowly: Using working memory to learn quickly came at the cost of long-term retention. Computational modeling confirmed that this could only be accounted for by working memory interference in reinforcement learning computations. These results further our understanding of how multiple systems contribute in parallel to human learning and may have important applications for education and computational psychiatry.
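A schematic sketch of the parallel-systems idea described here, in which choice probabilities are a weighted mixture of a fast, capacity-limited working memory policy and a slow incremental RL policy; the mixture weight, softmax temperature, and function names are assumptions for illustration, not the authors' fitted model:

import numpy as np

def softmax(values, beta=5.0):
    # Convert action values into choice probabilities.
    v = np.array(values, dtype=float) * beta
    v -= v.max()
    p = np.exp(v)
    return p / p.sum()

def mixed_policy(q_rl, wm_memory, stimulus, n_actions, w_wm=0.8):
    # Choice probabilities as a weighted mixture of WM and RL policies.
    # q_rl: incrementally learned RL values per (stimulus, action);
    # wm_memory: last rewarded action per stimulus (fast, but capacity-limited
    # and subject to decay in fuller models); w_wm: hypothetical weight on WM.
    rl_probs = softmax([q_rl.get((stimulus, a), 0.0) for a in range(n_actions)])
    wm_probs = np.full(n_actions, 1.0 / n_actions)   # uniform if nothing is held in WM
    if stimulus in wm_memory:
        wm_probs = np.zeros(n_actions)
        wm_probs[wm_memory[stimulus]] = 1.0          # WM recalls the rewarded action
    return w_wm * wm_probs + (1.0 - w_wm) * rl_probs

In models of this kind, the retention result reported above corresponds to the RL update being weakened (for example, its prediction error blunted) on trials where working memory carries the load, so the slow system stores less whenever the fast system succeeds.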
Journal of Cognitive Neuroscience (2018) 30 (8): 1061–1065.
Published: 01 August 2018
Abstract
Sometime in the past two decades, neuroimaging and behavioral research converged on pFC as an important locus of cognitive control and decision-making, and that seems to be the last thing anyone has agreed on since. Every year sees an increase in the number of roles and functions attributed to distinct subregions within pFC, roles that may explain behavior and neural activity in one context but might fail to generalize across the many behaviors in which each region is implicated. Emblematic of this ongoing proliferation of functions is dorsal ACC (dACC). Novel tasks that activate dACC are followed by novel interpretations of dACC function, and each new interpretation adds to the number of functionally specific processes contained within the region. This state of affairs, a recurrent and persistent behavior followed by an illusory and transient relief, can be likened to behavioral pathology. In Journal of Cognitive Neuroscience, 29:10, we collect contributed articles that seek to move the conversation beyond specific functions of subregions of pFC, focusing instead on general roles that support pFC involvement in a wide variety of behaviors and across a variety of experimental paradigms.
Journal of Cognitive Neuroscience (2017) 29 (10): 1646–1655.
Published: 01 October 2017
Abstract
Human learning is highly efficient and flexible. A key contributor to this learning flexibility is our ability to generalize new information across contexts that we know require the same behavior and to transfer rules to new contexts we encounter. To do this, we structure the information we learn and represent it hierarchically as abstract, context-dependent rules that constrain lower-level stimulus–action–outcome contingencies. Previous research showed that humans create such structure even when it is not needed, presumably because it usually affords long-term generalization benefits. However, computational models predict that creating structure is costly, with slower learning and slower RTs. We tested this prediction in a new behavioral experiment. Participants learned to select correct actions for four visual patterns, in a setting that either afforded (but did not promote) structure learning or enforced nonhierarchical learning, while controlling for the difficulty of the learning problem. Results replicated our previous finding that healthy young adults create structure even when unneeded and that this structure affords later generalization. Furthermore, they supported our prediction that structure learning incurred a major learning cost and that this cost was specifically tied to the effort in selecting abstract rules, leading to more errors when applying those rules. These findings confirm our theory that humans pay a high short-term cost in learning structure to enable longer-term benefits in learning flexibility.
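A minimal sketch of the hierarchical representation described in this abstract, in which an abstract, context-dependent rule constrains the lower-level stimulus-action mapping; the rule names, contexts, and mappings are hypothetical:

# Hypothetical abstract rules: each is a full stimulus -> action policy.
RULES = {
    "rule_A": {"pattern1": "action1", "pattern2": "action2"},
    "rule_B": {"pattern1": "action3", "pattern2": "action4"},
}

# Context-dependent rule assignment learned at the higher level.
CONTEXT_TO_RULE = {"context_red": "rule_A", "context_blue": "rule_B"}

def choose_action(context, stimulus):
    # Two-stage (hierarchical) choice: select the abstract rule for the
    # context, then apply it to the stimulus. Flat (nonhierarchical)
    # learning would instead learn each (context, stimulus) -> action
    # pair separately.
    rule = CONTEXT_TO_RULE[context]       # higher-level rule selection (the costly step)
    return RULES[rule][stimulus]          # lower-level stimulus-action mapping

# A new context can reuse an existing rule wholesale, which is the
# generalization benefit that structure learning affords.
CONTEXT_TO_RULE["context_green"] = "rule_A"
print(choose_action("context_green", "pattern2"))  # -> "action2"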