Claudius Gros
Journal Articles
Neural Computation (2015) 27 (3): 672–698.
Published: 01 March 2015
Abstract
We present an effective model for spike-timing-dependent plasticity (STDP) in terms of two interacting traces, corresponding to the fraction of activated NMDA receptors and the Ca²⁺ concentration in the dendritic spine of the postsynaptic neuron. The model is intended to bridge the gap between existing simplistic phenomenological rules and highly detailed models, thus constituting a practical tool for studying the interplay of neural activity and synaptic plasticity in extended spiking neural networks. For isolated pairs of pre- and postsynaptic spikes, the standard pairwise STDP rule is reproduced, with appropriate parameters determining the respective weights and timescales of the causal and the anticausal contributions. The model otherwise contains only three free parameters, which can be adjusted to reproduce triplet nonlinearities in hippocampal cultures and cortical slices. We also investigate the transition from time-dependent to rate-dependent plasticity occurring for both correlated and uncorrelated spike patterns.
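The two-trace mechanism lends itself to a compact illustration. The sketch below is a generic pair-based reading of the idea, not the paper's actual equations: one exponentially decaying trace is driven by presynaptic spikes (standing in for NMDA-receptor activation), a second by postsynaptic spikes (standing in for the spine Ca²⁺ level), and the weight change is read out from the opposite trace at each spike. All parameter values are placeholders.

```python
# Hypothetical parameter values; the paper fits its constants to data.
TAU_X, TAU_Y = 20.0, 20.0    # decay times (ms) of the two traces
A_PLUS, A_MINUS = 1.0, 0.5   # causal / anticausal amplitudes

def two_trace_stdp(pre_spikes, post_spikes, t_max, dt=0.1):
    """Integrate two decaying traces (x ~ NMDA-receptor activation,
    y ~ spine Ca2+ level) and accumulate the weight change produced
    by the given pre-/postsynaptic spike trains (times in ms)."""
    pre = {round(t / dt) for t in pre_spikes}
    post = {round(t / dt) for t in post_spikes}
    x = y = dw = 0.0
    for step in range(round(t_max / dt) + 1):
        x -= dt * x / TAU_X            # passive exponential decay
        y -= dt * y / TAU_Y
        if step in pre:                # presynaptic spike: bump x and
            x += 1.0                   # read y -> anticausal depression
            dw -= A_MINUS * y
        if step in post:               # postsynaptic spike: bump y and
            y += 1.0                   # read x -> causal potentiation
            dw += A_PLUS * x
    return dw

print(two_trace_stdp([50.0], [60.0], t_max=200.0))   # pre->post: dw > 0
print(two_trace_stdp([60.0], [50.0], t_max=200.0))   # post->pre: dw < 0
```

For a single pre-before-post pairing the causal branch dominates and the returned weight change is positive; reversing the spike order yields depression, reproducing the standard pairwise STDP window.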
Journal Articles
Neural Computation (2013) 25 (4): 1006–1028.
Published: 01 April 2013
Abstract
Learning algorithms generally need the ability to compare several streams of information. Neural learning architectures hence need a unit, a comparator, able to compare several inputs encoding either internal or external information, for instance, predictions and sensory readings. Without the possibility of comparing the values of predictions to actual sensory inputs, reward evaluation and supervised learning would not be possible. Comparators are usually not implemented explicitly; the necessary comparisons are commonly performed by directly comparing the respective activities one-to-one. This implies that the characteristics of the two input streams (such as size and encoding) must be fixed at design time. It is, however, plausible that biological comparators emerge from self-organizing, genetically encoded principles, which allow the system to adapt to changes in the input and in the organism. We propose an unsupervised neural circuit in which the function of input comparison emerges via self-organization solely from the interaction of the system with the respective inputs, without external influence or supervision. The proposed neural comparator adapts in an unsupervised fashion according to the correlations present in the input streams. The system consists of a multilayer feedforward neural network that follows a local output-minimization (anti-Hebbian) rule for the adaptation of the synaptic weights. The local output minimization allows the circuit to autonomously acquire the capability of comparing neural activities received from different neural populations, which may differ in population size and in the neural encoding used. The comparator is able to compare objects never before encountered in the sensory input streams and to evaluate a measure of their similarity, even when the inputs are differently encoded.
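A minimal sketch of such an anti-Hebbian comparator is given below, assuming a small two-layer tanh network; the layer sizes, learning rate, and the weight normalization used to exclude the trivial all-zero solution are assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: the two input populations may differ in size and encoding.
N_A, N_B, N_HID = 30, 50, 20
ETA = 1e-3

W1 = rng.normal(scale=0.1, size=(N_HID, N_A + N_B))   # inputs -> hidden
w2 = rng.normal(scale=0.1, size=N_HID)                # hidden -> output

def forward(a, b):
    """Feedforward pass over the concatenated input streams a and b."""
    h = np.tanh(W1 @ np.concatenate([a, b]))
    return h, np.tanh(w2 @ h)

def anti_hebbian_step(a, b):
    """Local output-minimizing (anti-Hebbian) update: each weight is
    reduced in proportion to presynaptic * postsynaptic activity, so
    the output is driven toward zero whenever the two input streams
    carry correlated information."""
    global W1, w2
    x = np.concatenate([a, b])
    h, out = forward(a, b)
    W1 -= ETA * np.outer(h, x)
    w2 -= ETA * out * h
    # Renormalize to exclude the trivial solution of vanishing weights
    # (the precise constraint used here is an assumption).
    W1 /= np.linalg.norm(W1, axis=1, keepdims=True)
    w2 /= np.linalg.norm(w2)
    return out
```

Trained on pairs of differently encoded presentations of the same object, the magnitude of the output then serves as a similarity measure: near zero for matching inputs, larger for mismatched ones.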
Journal Articles
Neural Computation (2012) 24 (2): 523–540.
Published: 01 February 2012
Abstract
A massively recurrent neural network responds, on one side, to input stimuli and is, on the other side, autonomously active in the absence of sensory inputs. Stimulus and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. In the presence of these intrinsic adaptation processes, we observe three distinct and globally attracting dynamical regimes: a regular synchronized regime, an overall chaotic regime, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interspersed with chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
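As a concrete example of entropy-driven nonsynaptic adaptation, the sketch below implements a standard intrinsic-plasticity gradient rule of the kind referenced here (Triesch's rule for a sigmoidal neuron with an exponential target firing-rate distribution); it illustrates the principle of adapting gain and threshold and is not necessarily the exact rule used in the paper.

```python
import numpy as np

ETA = 1e-3   # adaptation rate (assumed)
MU = 0.2     # mean of the exponential target firing-rate distribution

def transfer(x, a, b):
    """Sigmoidal transfer function with gain a and threshold (bias) b."""
    return 1.0 / (1.0 + np.exp(-a * x - b))

def intrinsic_step(x, a, b):
    """One stochastic-gradient step of Triesch-style intrinsic
    plasticity: gain and threshold adapt so that the output
    distribution approaches an exponential with mean MU -- the
    maximum-entropy distribution for a fixed mean activity."""
    y = transfer(x, a, b)
    db = ETA * (1.0 - (2.0 + 1.0 / MU) * y + y * y / MU)
    da = ETA / a + db * x
    return a + da, b + db
```

Applied online while the network runs, each neuron slowly reshapes its transfer function so that its output distribution approaches the maximum-entropy (exponential) distribution with mean MU.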