Christian Leibold: 1–4 of 4 results
Journal Articles
Publisher: Journals Gateway
Neural Computation (2016) 28 (8): 1527–1552.
Published: 01 August 2016
Abstract
Synaptic change is a costly resource, particularly for brain structures that have a high demand for synaptic plasticity. For example, building memories of object positions requires efficient use of plasticity resources, since objects can easily change their location in space and yet we can memorize object locations. But how should a neural circuit ideally be set up to integrate two input streams (object location and identity) if the overall synaptic change is to be minimized during ongoing learning? This letter provides a theoretical framework for how the two input pathways should ideally be specified. Generally, the model predicts that the information-rich pathway should be plastic and encoded sparsely, whereas the pathway conveying less information should be encoded densely and undergo learning only if a neuronal representation of a novel object has to be established. As an example, we consider hippocampal area CA1, which combines place and object information. The model thereby provides a normative account of hippocampal rate remapping, that is, modulations of place field activity by changes of local cues. It may also be applicable to other brain areas (such as neocortical layer V) that learn combinatorial codes from multiple input streams.
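A minimal counting sketch of the plasticity-cost argument, using a crude one-shot Hebbian rule and arbitrary population sizes rather than the letter's actual model: the sparser the plastic input pathway, the fewer synapses have to change when new object-location associations are stored.

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs = 1000          # afferents in the plastic pathway (illustrative size)
n_outputs = 200          # CA1-like target population (illustrative size)
n_associations = 50      # object-location pairs to be stored

def synaptic_changes(sparsity):
    """Count how many synapses must change when each new association
    simply potentiates all synapses between co-active input and output
    units (a crude one-shot Hebbian rule, not the paper's learning rule)."""
    changed = np.zeros((n_inputs, n_outputs), dtype=bool)
    total = 0
    for _ in range(n_associations):
        pre = rng.random(n_inputs) < sparsity      # active afferents
        post = rng.random(n_outputs) < 0.05        # active target cells
        new = np.outer(pre, post) & ~changed       # synapses not yet potentiated
        total += new.sum()
        changed |= new
    return total

for f in (0.02, 0.2, 0.5):
    print(f"input sparsity {f:.2f}: {synaptic_changes(f):7d} synaptic changes")
```

Under these toy assumptions, the number of synaptic changes grows roughly linearly with the sparseness parameter of the plastic pathway, which is the intuition behind keeping the plastic, information-rich pathway sparse.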
Journal Articles
Publisher: Journals Gateway
Neural Computation (2009) 21 (12): 3408–3428.
Published: 01 December 2009
Abstract
Short-term synaptic plasticity is modulated by long-term synaptic changes. There is, however, no general agreement on the computational role of this interaction. Here, we derive a learning rule for the release probability and the maximal synaptic conductance in a circuit model with combined recurrent and feedforward connections that allows learning to discriminate among natural inputs. Short-term synaptic plasticity thereby provides a nonlinear expansion of the input space of a linear classifier, whereas the random recurrent network serves to decorrelate the expanded input space. Computer simulations reveal that the twofold increase in the number of input dimensions through short-term synaptic plasticity improves the performance of a standard perceptron by up to 100%. The distributions of release probabilities and maximal synaptic conductances at the capacity limit strongly depend on the balance between excitation and inhibition. The model also suggests a new computational interpretation of spikes evoked by stimuli outside the classical receptive field. These neuronal activities may reflect decorrelation of the expanded stimulus space by intracortical synaptic connections.
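A minimal sketch of the "expansion" idea, under assumptions not in the abstract: a Tsodyks-Markram-style steady-state nonlinearity stands in for short-term plasticity, and a vanilla perceptron is trained on random patterns. It illustrates how pairing each input with a nonlinearly transformed copy doubles the dimensionality seen by a linear readout; it is not the learning rule for release probability and conductance derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stp_steady_state(rate, U=0.5, tau_rec=0.8):
    """Illustrative Tsodyks-Markram-style steady-state transmission:
    depressing synapses compress high input rates nonlinearly."""
    return U * rate / (1.0 + U * tau_rec * rate)

def expand(X):
    """Concatenate raw inputs with their STP-transformed copies,
    doubling the input dimensionality of the classifier."""
    return np.hstack([X, stp_steady_state(X)])

# Random binary classification problem (firing rates as inputs).
n_patterns, n_inputs = 200, 50
X = rng.exponential(scale=5.0, size=(n_patterns, n_inputs))
y = rng.choice([-1, 1], size=n_patterns)

def train_perceptron(X, y, epochs=100, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified -> perceptron update
                w += lr * yi * xi
                b += lr * yi
    return w, b

def accuracy(X, y, w, b):
    return np.mean(np.sign(X @ w + b) == y)

w0, b0 = train_perceptron(X, y)
w1, b1 = train_perceptron(expand(X), y)
print("plain input dims:", X.shape[1], "accuracy:", accuracy(X, y, w0, b0))
print("expanded dims:   ", expand(X).shape[1], "accuracy:", accuracy(expand(X), y, w1, b1))
```

With random labels, the expanded (doubled) input space lets the perceptron store noticeably more of the patterns, which is the flavor of the capacity gain reported in the article.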
Journal Articles
Publisher: Journals Gateway
Neural Computation (2008) 20 (5): 1285–1324.
Published: 01 May 2008
Abstract
Phase precession is a relational code that is thought to be important for episodic-like memory, for instance, the learning of a sequence of places. In the hippocampus, places are encoded through bursting activity of so-called place cells. The spikes in such a burst exhibit a precession of their firing phases relative to field potential theta oscillations (4–12 Hz); the theta phase of action potentials in successive theta cycles progressively decreases toward earlier phases. The mechanisms underlying the generation of phase precession are, however, unknown. In this letter, we show through mathematical analysis and numerical simulations that synaptic facilitation in combination with membrane potential oscillations of a neuron gives rise to phase precession. This biologically plausible model reproduces experimentally observed features of phase precession, such as (1) the progressive decrease of spike phases, (2) the nonlinear and often also bimodal relation between spike phases and the animal's place, (3) the range of phase precession being smaller than one theta cycle, and (4) the dependence of phase jitter on the animal's location within the place field. The model suggests that the peculiar features of the hippocampal mossy fiber synapse, such as its large efficacy, long-lasting and strong facilitation, and its phase-locked activation, are essential for phase precession in the CA3 region of the hippocampus.
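A toy numerical illustration of the proposed mechanism, with arbitrary parameters chosen only to make the effect visible (it is not the paper's mossy-fiber model): a ramping, facilitation-like drive summed with a subthreshold theta oscillation makes threshold crossings occur at progressively earlier theta phases.

```python
import numpy as np

theta_freq = 8.0             # Hz, field-potential theta
dt = 1e-4                    # s, simulation step
t = np.arange(0.0, 1.0, dt)  # one-second traversal of the place field

# Subthreshold membrane oscillation locked to theta (arbitrary units).
oscillation = 0.3 * np.cos(2 * np.pi * theta_freq * t)

# Facilitating synaptic drive: grows monotonically across the field
# (a crude stand-in for strong mossy-fiber facilitation).
drive = 0.9 * (t / t[-1])

potential = drive + oscillation
threshold = 0.75

# Detect upward threshold crossings and report their theta phase.
crossings = np.where((potential[1:] >= threshold) & (potential[:-1] < threshold))[0]
for idx in crossings:
    phase = (2 * np.pi * theta_freq * t[idx]) % (2 * np.pi)
    print(f"t = {t[idx]:.3f} s   theta phase = {np.degrees(phase):6.1f} deg")
# As the drive grows, successive crossings occur at earlier (smaller) phases
# within the theta cycle, i.e. the spikes precess relative to the theta rhythm.
```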
Journal Articles
Publisher: Journals Gateway
Neural Computation (2006) 18 (4): 904–941.
Published: 01 April 2006
Abstract
The CA3 region of the hippocampus is a recurrent neural network that is essential for the storage and replay of sequences of patterns that represent behavioral events. Here we present a theoretical framework to calculate a sparsely connected network's capacity to store such sequences. As in CA3, only a limited subset of neurons in the network is active at any one time, pattern retrieval is subject to error, and the resources for plasticity are limited. Our analysis combines an analytical mean-field approach, stochastic dynamics, and cellular simulations of a time-discrete McCulloch-Pitts network with binary synapses. To maximize the number of sequences that can be stored in the network, we concurrently optimize the number of active neurons, that is, pattern size, and the firing threshold. We find that for one-step associations (i.e., minimal sequences), the optimal pattern size is inversely proportional to the mean connectivity c, whereas the optimal firing threshold is independent of the connectivity. If the number of synapses per neuron is fixed, the maximum number P of stored sequences in a sufficiently large, nonmodular network is independent of its number N of cells. On the other hand, if the number of synapses scales as the network size to the power of 3/2, the number of sequences P is proportional to N. In other words, sequential memory is scalable. Furthermore, we find that there is an optimal ratio r between silent and nonsilent synapses at which the storage capacity α = P/[c(1 + r)N] assumes a maximum. For long sequences, the capacity of sequential memory is about one order of magnitude below the capacity for minimal sequences, but otherwise behaves similarly to the case of minimal sequences. In a biologically inspired scenario, the information content per synapse is far below theoretical optimality, suggesting that the brain trades off error tolerance against information content in encoding sequential memories.
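For reference, the capacity measure quoted in the abstract written as a display equation. The symbols follow the abstract (P stored sequences, N cells, mean connectivity c, ratio r of silent to nonsilent synapses); the label M_opt for the optimal pattern size is introduced here for convenience and does not appear in the abstract.

```latex
% Storage capacity per synapse and the scaling of the optimal pattern size
% for one-step (minimal) sequences, as stated in the abstract:
\alpha \;=\; \frac{P}{c\,(1+r)\,N},
\qquad
M_{\mathrm{opt}} \;\propto\; \frac{1}{c}
\quad \text{(optimal pattern size for one-step associations)} .
```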