Gail A. Carpenter
Neural Computation (2002) 14 (4): 873–888.
Published: 01 April 2002
Abstract
Markram and Tsodyks, by showing that the elevated synaptic efficacy observed with single-pulse long-term potentiation (LTP) measurements disappears with higher-frequency test pulses, have critically challenged the conventional assumption that LTP reflects a general gain increase. This observed change in frequency dependence during synaptic potentiation is called redistribution of synaptic efficacy (RSE). RSE is here seen as the local realization of a global design principle in a neural network for pattern coding. The underlying computational model posits an adaptive threshold rather than a multiplicative weight as the elementary unit of long-term memory. A distributed instar learning law allows thresholds to increase only monotonically, but adaptation has a bidirectional effect on the model postsynaptic potential. At each synapse, threshold increases implement pattern selectivity via a frequency-dependent signal component, while a complementary frequency-independent component nonspecifically strengthens the path. This synaptic balance produces changes in frequency dependence that are robustly similar to those observed by Markram and Tsodyks. The network design therefore suggests a functional purpose for RSE, which, by helping to bound total memory change, supports a distributed coding scheme that is stable with fast as well as slow learning. Multiplicative weights have served as a cornerstone for models of physiological data and neural systems for decades. Although the model discussed here does not implement detailed physiology of synaptic transmission, its new learning laws operate in a network architecture that suggests how recently discovered synaptic computations such as RSE may help produce new network capabilities such as learning that is fast, stable, and distributed.
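The abstract describes the adaptive-threshold idea only qualitatively. The Python sketch below is an illustration under stated assumptions: the function names, the rectified-linear forms, and the specific instar-style update rule are assumptions made here for clarity, not the article's actual signal or learning equations.

def postsynaptic_signal(presyn_rate, threshold):
    # Illustrative two-component signal (an assumption, not the article's equation):
    # a frequency-dependent "phasic" term, gated by the adaptive threshold, carries
    # pattern selectivity, while a frequency-independent "tonic" term grows with the
    # threshold and nonspecifically strengthens the path.
    phasic = max(presyn_rate - threshold, 0.0)
    tonic = threshold
    return phasic + tonic

def update_threshold(threshold, presyn_rate, postsyn_activity, learning_rate=0.1):
    # Distributed-instar-style update (sketch): the threshold moves toward the
    # presynaptic rate in proportion to postsynaptic activity, so it can only
    # increase -- long-term memory change is monotone, as in the abstract.
    return threshold + learning_rate * postsyn_activity * max(presyn_rate - threshold, 0.0)

With these toy definitions the total signal equals max(presyn_rate, threshold): raising the threshold through learning strengthens the response to low-frequency (single-pulse) tests, leaves the response to high-frequency tests essentially unchanged, and keeps total memory change bounded, which is the qualitative pattern the abstract identifies with RSE.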
Neural Computation (1992) 4 (2): 270–286.
Published: 01 March 1992
Abstract
Working memory neural networks, called Sustained Temporal Order REcurrent (STORE) models, encode the invariant temporal order of sequential events in short-term memory (STM). Inputs to the networks may be presented with widely differing growth rates, amplitudes, durations, and interstimulus intervals without altering the stored STM representation. The STORE temporal order code is designed to enable groupings of the stored events to be stably learned and remembered in real time, even as new events perturb the system. Such invariance and stability properties are needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensorimotor planning, or three-dimensional (3-D) visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its two-dimensional (2-D) aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor → ART 2 → STORE Model → ART 2 → Outstar Network.
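The order-invariance property described above can be illustrated with a small sketch. The actual STORE networks are defined by coupled shunting differential equations across two interacting fields; the discrete toy below, with an assumed attenuation factor mu, only shows how a stored short-term-memory pattern can depend on event order alone, not on durations, amplitudes, or interstimulus intervals.

def store_sequence(events, mu=0.5):
    # Toy STORE-like short-term memory (illustrative only; not the published
    # STORE equations). Each event is registered once, at its onset, so the
    # stored pattern depends only on the order of events, not on how long or
    # how strongly each one was presented, or on the gaps between them.
    stm = {}
    for item in events:
        for stored in stm:          # attenuate every previously stored item ...
            stm[stored] *= mu
        stm[item] = 1.0             # ... and store the new item at unit activity
    return stm

# The ranking of activities encodes temporal order (a recency gradient for mu < 1),
# no matter how the inputs were spaced or scaled in real time:
print(store_sequence(["A", "B", "C"]))   # {'A': 0.25, 'B': 0.5, 'C': 1.0}

The same stored pattern results whether an event lasts milliseconds or seconds, which is the kind of invariance that lets the downstream ART 2 modules in the ARTSTORE cascade learn stable groupings of the stored events even as new events arrive.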