Sen Cheng (1–3 of 3 results)
Neural Computation (2022) 34 (9): 1841–1870.
Published: 16 August 2022
Abstract
Many studies have suggested that episodic memory is a generative process, but most computational models adopt a storage view. In this article, we present a model of the generative aspects of episodic memory. It is based on the central hypothesis that the hippocampus stores and retrieves selected aspects of an episode as a memory trace, which is necessarily incomplete. At recall, the neocortex fills in the missing parts in a plausible way based on general semantic information, a process we call semantic completion. The model combines two neural network architectures known from machine learning: the vector-quantized variational autoencoder (VQ-VAE) and the pixel convolutional neural network (PixelCNN). As episodes, we use images of digits and fashion items (MNIST and Fashion-MNIST) augmented with different backgrounds representing context. The model is able to complete missing parts of a memory trace in a semantically plausible way, up to the point where it can generate plausible images from scratch, and it generalizes well to images it was not trained on. Compression as well as semantic completion contribute to a strong reduction in memory requirements and to robustness to noise. Finally, we model an episodic memory experiment and reproduce the findings that semantically congruent contexts are always recalled better than incongruent ones, that high attention levels improve memory accuracy in both cases, and that contexts that are not remembered correctly are more often remembered as semantically congruent than as completely wrong. This model contributes to a deeper understanding of the interplay between episodic memory and semantic information in the generative process of recalling the past.
Includes: Supplementary data
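The two-stage architecture described in this abstract can be illustrated compactly. The following is a minimal sketch, not the authors' implementation: a vector-quantization bottleneck turns latent features into a grid of discrete codebook indices (the compact memory trace), and a learned prior over index grids fills in masked entries at recall. A simple per-position count model stands in for the trained PixelCNN prior, the random codebook stands in for a trained encoder, and all data are synthetic.

```python
# Hedged sketch of the two-stage idea (stand-ins, not the paper's code):
# (1) a VQ bottleneck maps a latent feature map to a grid of discrete
#     codebook indices -- the compact "memory trace";
# (2) a learned prior over index grids fills in missing entries at recall
#     ("semantic completion"). A per-position count model stands in for
#     the PixelCNN prior used in the paper.
import numpy as np

rng = np.random.default_rng(0)

K, D = 16, 8                      # codebook size, embedding dimension
codebook = rng.normal(size=(K, D))

def quantize(latents):
    """Map each latent vector in (H*W, D) to its nearest codebook index."""
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)       # discrete memory trace, shape (H*W,)

# Synthetic "episodes": latent maps drawn near a few codebook entries.
H = W = 4
episodes = [codebook[rng.integers(0, 4, H * W)]
            + 0.1 * rng.normal(size=(H * W, D)) for _ in range(200)]
traces = np.stack([quantize(e) for e in episodes])   # (200, 16) index grids

# "Semantic" prior: per-position index frequencies learned from past traces.
prior = np.stack([np.bincount(traces[:, p], minlength=K) for p in range(H * W)])

def recall(trace, missing):
    """Fill masked positions with the most probable index under the prior."""
    completed = trace.copy()
    completed[missing] = prior[missing].argmax(axis=1)
    return codebook[completed]    # decode indices back to latent vectors

filled = recall(traces[0], missing=np.array([0, 5, 10]))
```

Because the trace is a small grid of integers rather than raw pixels, storage cost drops sharply, and the prior supplies semantically typical content wherever the trace is silent, mirroring the compression and semantic-completion effects reported in the abstract.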
Neural Computation (2018) 30 (2): 293–332.
Published: 01 February 2018
Abstract
The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input. We show quantitatively that the accuracy of episodic memory crucially depends on the quality of the semantic representation. We compare two types of semantic representations: appropriate representations, in which the stored input sequences are of the same type as the data the representation was trained on, and inappropriate representations, in which the stored inputs differ from the training data. Retrieval accuracy is higher for appropriate representations because the encoded sequences are less divergent than those encoded with inappropriate representations. Consistent with our model prediction, we found that human subjects remembered some aspects of episodes significantly more accurately when they had previously been familiarized with the objects occurring in the episode than when the episodes involved unfamiliar objects. We thus conclude that the interaction with the semantic system plays an important role in episodic memory.
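The appropriate-versus-inappropriate comparison can be made concrete with a toy stand-in. In the sketch below (an assumption-laden illustration, not the paper's network), a linear PCA compression plays the role of the semantic representational network; episodes drawn from the training distribution reconstruct almost perfectly, while episodes from a different distribution incur large recall error.

```python
# Illustrative sketch (assumptions, not the paper's implementation): a linear
# PCA compression stands in for the semantic representational network. A
# representation is "appropriate" when stored sequences come from the same
# distribution it was trained on, "inappropriate" otherwise; reconstruction
# error then proxies episodic retrieval accuracy.
import numpy as np

rng = np.random.default_rng(1)

def pca_basis(data, k):
    """Top-k principal directions of `data` (rows are samples)."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                              # (k, dim)

def store_and_recall(seq, basis, mean):
    """Compress a sequence to k dims and reconstruct it (lossy recall)."""
    codes = (seq - mean) @ basis.T             # episodic trace in semantic code
    return codes @ basis + mean

dim, k = 50, 5
mix_A = rng.normal(size=(dim, k))              # low-rank structure of domain A
mix_B = rng.normal(size=(dim, k))              # different structure, domain B
train_A = rng.normal(size=(2000, k)) @ mix_A.T

basis = pca_basis(train_A, k)                  # semantic system trained on A
mean = train_A.mean(axis=0)

seq_A = rng.normal(size=(20, k)) @ mix_A.T     # appropriate episode
seq_B = rng.normal(size=(20, k)) @ mix_B.T     # inappropriate episode

err_A = np.abs(seq_A - store_and_recall(seq_A, basis, mean)).mean()
err_B = np.abs(seq_B - store_and_recall(seq_B, basis, mean)).mean()
print(f"recall error, appropriate: {err_A:.3f}  inappropriate: {err_B:.3f}")
```

The appropriate episode lies in the subspace the compression was fit to, so almost nothing is lost; the inappropriate episode does not, so its encoded trace diverges from the original, which is the qualitative effect the abstract describes.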
Neural Computation (2006) 18 (4): 760–793.
Published: 01 April 2006
Abstract
Recent studies have employed simple linear dynamical systems to model trial-by-trial dynamics in various sensorimotor learning tasks. Here we explore the theoretical and practical considerations that arise when employing the general class of linear dynamical systems (LDS) as a model for sensorimotor learning. In this framework, the state of the system is a set of parameters that define the current sensorimotor transformation, that is, the function that maps sensory inputs to motor outputs. The class of LDS models provides a first-order approximation for any Markovian (state-dependent) learning rule that specifies the changes in the sensorimotor transformation that result from sensory feedback on each movement. We show that modeling the trial-by-trial dynamics of learning provides a substantially enhanced picture of the process of adaptation compared to measurements of the steady state of adaptation derived from more traditional blocked-exposure experiments. Specifically, these models can be used to quantify sensory and performance biases, the extent to which learned changes in the sensorimotor transformation decay over time, and the portion of motor variability due to either learning or performance variability. We show that previous attempts to fit such models with linear regression have not generally yielded consistent parameter estimates. Instead, we present an expectation-maximization algorithm for fitting LDS models to experimental data and describe the difficulties inherent in estimating the parameters associated with feedback-driven learning. Finally, we demonstrate the application of these methods in a simple sensorimotor learning experiment: adaptation to shifted visual feedback during reaching.
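To make the modeling framework concrete, here is a hedged sketch of a generic trial-by-trial LDS learner; the scalar parameterization, the specific gains, and the error signal are illustrative assumptions, not the paper's exact model. The hidden state (the current sensorimotor transformation, here a single reach offset) retains a fraction a of its value from trial to trial and is nudged by a learning gain b times the visually perceived error; separate noise terms capture learning variability and performance variability.

```python
# Hedged sketch of a trial-by-trial LDS learner (a generic form, not
# necessarily the paper's exact parameterization). The hidden state x_t is
# the current sensorimotor mapping (a scalar reach offset); it decays with
# retention factor a and is updated by learning gain b times the visually
# observed error; the produced movement adds performance noise.
import numpy as np

rng = np.random.default_rng(2)

a, b = 0.98, 0.15        # retention and error-correction gains
shift = 10.0             # imposed visual feedback shift (e.g., degrees)
q, r = 0.05, 1.0         # learning (state) and performance (output) noise sd

T = 200
x = np.zeros(T + 1)      # internal state: current reach adaptation
y = np.zeros(T)          # observed reach on each trial
for t in range(T):
    y[t] = x[t] + r * rng.normal()          # movement = state + output noise
    error = shift - y[t]                    # visually perceived error
    x[t + 1] = a * x[t] + b * error + q * rng.normal()

# Noise-free fixed point: x* = b*shift / (1 - a + b), so with a < 1
# adaptation remains incomplete even after many trials.
print(y[-20:].mean(), b * shift / (1 - a + b))
```

The fixed point x* = b·shift / (1 − a + b) also shows why steady-state measurements alone underdetermine the dynamics: many (a, b) pairs produce the same asymptote, which is one reason fitting the trial-by-trial time course (for example, with an EM algorithm as in the article) is more informative than blocked-exposure summaries.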