Robert Rosenbaum (1-3 of 3 results)
Journal Articles
Publisher: Journals Gateway
Neural Computation (2024) 36 (8): 1568–1600.
Published: 19 July 2024
Abstract
In computational neuroscience, recurrent neural networks are widely used to model neural activity and learning. In many studies, fixed points of recurrent neural networks are used to model neural responses to static or slowly changing stimuli, such as visual cortical responses to static visual stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. In parallel, training fixed points is a central topic in the study of deep equilibrium models in machine learning. A natural approach is to use gradient descent on the Euclidean space of weights. We show that this approach can lead to poor learning performance due in part to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We demonstrate that these learning rules avoid singularities and learn more effectively than standard gradient descent. The new learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.
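The naive approach the abstract critiques can be sketched concretely: iterate a recurrent network to a fixed point, evaluate a loss there, and take the Euclidean gradient of the recurrent weights via the implicit function theorem. The network form r = tanh(Wr + x), the sizes, and the quadratic loss below are illustrative assumptions, not the paper's exact model or learning rules.

```python
import numpy as np

def fixed_point(W, x, n_iter=500):
    """Iterate r <- tanh(W r + x) to an (assumed stable) fixed point."""
    r = np.zeros_like(x)
    for _ in range(n_iter):
        r = np.tanh(W @ r + x)
    return r

def loss_and_grad(W, x, y):
    """Loss L = 0.5 ||r* - y||^2 at the fixed point r*, with the Euclidean
    gradient from the implicit function theorem:
        (I - J W) dr* = J dW r*,  J = diag(1 - r*^2),
    so dL/dW = J (I - J W)^{-T} (r* - y) r*^T."""
    r = fixed_point(W, x)
    J = np.diag(1.0 - r**2)  # derivative of tanh at the fixed point
    err = r - y
    v = J @ np.linalg.solve((np.eye(len(r)) - J @ W).T, err)
    return 0.5 * err @ err, np.outer(v, r)

rng = np.random.default_rng(0)
n = 5
W = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)  # contractive, so a stable fixed point exists
x = rng.standard_normal(n)
y = 0.5 * rng.standard_normal(n)

L, G = loss_and_grad(W, x, y)
W_new = W - 0.1 * G  # one naive Euclidean gradient step on the weights
```

The paper's point is that descending this Euclidean gradient can behave badly near singularities of the loss surface; the sketch only shows the baseline being improved upon.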
Neural Computation (2019) 31 (7): 1430–1461.
Published: 01 July 2019
Abstract
Reservoir computing is a biologically inspired class of learning algorithms in which the intrinsic dynamics of a recurrent neural network are mined to produce target time series. Most existing reservoir computing algorithms rely on fully supervised learning rules, which require access to an exact copy of the target response, greatly reducing the utility of the system. Reinforcement learning rules have been developed for reservoir computing, but we find that they fail to converge on complex motor tasks. Current theories of biological motor learning posit that early learning is controlled by dopamine-modulated plasticity in the basal ganglia that trains parallel cortical pathways through unsupervised plasticity as a motor task becomes well learned. We developed a novel learning algorithm for reservoir computing that models the interaction between reinforcement and unsupervised learning observed in experiments. This algorithm converges on simulated motor tasks on which previous reservoir computing algorithms fail and reproduces experimental findings that relate Parkinson's disease and its treatments to motor learning. Hence, incorporating biological theories of motor learning improves the effectiveness and biological relevance of reservoir computing models.
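The fully supervised baseline that the abstract contrasts with can be sketched as an echo state network: a fixed random recurrent reservoir is driven by an input, and only a linear readout is trained, here by ridge regression against an exact copy of the target. The reservoir size, spectral radius, and the sinusoidal input/target are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 1000  # reservoir size, number of time steps

# Fixed random reservoir, rescaled toward the echo-state regime.
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
w_in = rng.standard_normal(N)

# Drive the reservoir with a scalar input and record its states.
t = np.arange(T)
u = np.sin(2 * np.pi * t / 50)         # input signal
y = np.sin(2 * np.pi * (t + 10) / 50)  # target: phase-shifted copy
X = np.zeros((T, N))
x = np.zeros(N)
for k in range(T):
    x = np.tanh(W @ x + w_in * u[k])
    X[k] = x

# Fully supervised readout: ridge regression onto the target series.
lam = 1e-4
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
mse = np.mean((X @ w_out - y) ** 2)
```

Note how the rule needs the target `y` sample by sample; replacing this with a scalar reward signal, as in the reinforcement-learning variants the abstract discusses, is precisely what makes the problem harder.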
Neural Computation (2011) 23 (5): 1261–1305.
Published: 01 May 2011
Abstract
Correlations between neuronal spike trains affect network dynamics and population coding. Overlapping afferent populations and correlations between presynaptic spike trains introduce correlations between the inputs to downstream cells. To understand network activity and population coding, it is therefore important to understand how these input correlations are transferred to output correlations. Recent studies have addressed this question in the limit of many inputs with infinitesimal postsynaptic response amplitudes, where the total input can be approximated by gaussian noise. In contrast, we address the problem of correlation transfer by representing input spike trains as point processes, with each input spike eliciting a finite postsynaptic response. This approach allows us to naturally model synaptic noise and recurrent coupling and to treat excitatory and inhibitory inputs separately. We derive several new results that provide intuitive insights into the fundamental mechanisms that modulate the transfer of spiking correlations.
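One source of input correlation mentioned above, overlapping afferent populations, is easy to illustrate with binned Poisson spike counts: if two downstream cells each pool K afferents of which S are shared, and the afferents are independent Poisson, then Cov = S*lam and Var = K*lam, so the input count correlation is S/K regardless of rate. The pool sizes and rates below are illustrative assumptions; the paper's analysis of transfer through spiking neurons goes well beyond this.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 4000
K, S = 100, 50   # afferents per pool, shared afferents
lam = 1.0        # expected spikes per afferent per counting window

# Per-trial spike counts of the shared and private afferent groups.
shared = rng.poisson(lam, (n_trials, S)).sum(axis=1)
priv1 = rng.poisson(lam, (n_trials, K - S)).sum(axis=1)
priv2 = rng.poisson(lam, (n_trials, K - S)).sum(axis=1)

# Total input counts to the two downstream cells on each trial.
c1 = shared + priv1
c2 = shared + priv2

# Empirical input correlation; theory predicts S/K = 0.5 here.
rho = np.corrcoef(c1, c2)[0, 1]
```

The question the abstract addresses is how much of this input correlation `rho` survives after each cell's spike-generation nonlinearity, i.e., the correlation of the *output* spike trains.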