Jean-Jacques E. Slotine
Neural Computation (2021) 33 (3): 590–673.
Published: 01 March 2021
Abstract
Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Despite being an established field with many practical applications and a rich theory, much of the development in adaptive control for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between classical adaptive nonlinear control techniques and recent progress in optimization and machine learning, we show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction. We begin by introducing first-order adaptation laws inspired by natural gradient descent and mirror descent. We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model. Local geometry imposed during learning may thus be used to select parameter vectors, out of the many that will achieve perfect tracking or prediction, for desired properties such as sparsity. We apply this result to regularized dynamics predictor and observer design, and as concrete examples, we consider Hamiltonian systems, Lagrangian systems, and recurrent neural networks. We subsequently develop a variational formalism based on the Bregman Lagrangian. We show that its Euler-Lagrange equations lead to natural gradient-like and mirror descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite-friction limit. We illustrate our analyses with simulations demonstrating our theoretical results.
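To make the idea concrete, here is a minimal sketch of a non-Euclidean (mirror-descent-style) adaptation law of the kind the abstract describes. The potential psi, the gain, and the regressor shapes are illustrative assumptions rather than the paper's exact construction; the point is that updating the parameter estimate in the dual coordinates of a suitable potential implicitly regularizes the learned model, for example toward sparsity when psi is close to the 1-norm.

```python
import numpy as np

# Hypothetical sketch, not the paper's code. With psi(a) = sum(|a_i|**p) / p
# and p slightly above 1, integrating the adaptation law in the dual
# ("mirror") coordinates biases the learned parameters toward sparsity
# whenever many parameter vectors are consistent with the data.

def grad_psi(a, p=1.1):
    """Mirror map: gradient of psi(a) = sum(|a_i|**p) / p."""
    return np.sign(a) * np.abs(a) ** (p - 1.0)

def grad_psi_inv(z, p=1.1):
    """Inverse mirror map: solves grad_psi(a) = z for a."""
    return np.sign(z) * np.abs(z) ** (1.0 / (p - 1.0))

def adaptation_step(a_hat, Y, s, gain=1.0, dt=1e-3, p=1.1):
    """One Euler step of the continuous-time law
    d/dt grad_psi(a_hat) = -gain * Y.T @ s,
    with regressor matrix Y and tracking-error signal s."""
    z = grad_psi(a_hat, p) - dt * gain * (Y.T @ s)
    return grad_psi_inv(z, p)
```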
Neural Computation (2020) 32 (1): 36–96.
Published: 01 January 2020
Abstract
We analyze the effect of synchronization on distributed stochastic gradient algorithms. By exploiting an analogy with dynamical models of biological quorum sensing, where synchronization between agents is induced through communication with a common signal, we quantify how synchronization can significantly reduce the magnitude of the noise felt by the individual distributed agents and by their spatial mean. This noise reduction is in turn associated with a reduction in the smoothing of the loss function imposed by the stochastic gradient approximation. Through simulations on model nonconvex objectives, we demonstrate that coupling can stabilize higher noise levels and improve convergence. We provide a convergence analysis for strongly convex functions by deriving a bound on the expected deviation of the spatial mean of the agents from the global minimizer for an algorithm based on quorum sensing, for the same algorithm with momentum, and for the elastic averaging SGD (EASGD) algorithm. We discuss extensions to new algorithms that allow each agent to broadcast its current measure of success and to shape the collective computation accordingly. We supplement our theoretical analysis with numerical experiments on convolutional neural networks trained on the CIFAR-10 data set, where we note a surprising regularizing property of EASGD even when applied to the nondistributed case. This observation suggests alternative second-order-in-time algorithms for nondistributed optimization that are competitive with momentum methods.
Includes: Supplementary data
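For concreteness, here is a hedged sketch of the elastic averaging SGD (EASGD) coupling analyzed in the abstract: each agent takes a stochastic gradient step plus an elastic pull toward a shared center variable, and the center in turn drifts toward the agents' spatial mean, playing the role of the common quorum-sensing signal. The toy objective, learning rate, and coupling constant below are placeholder assumptions.

```python
import numpy as np

def easgd_step(agents, center, grad_fn, lr=0.01, rho=0.1):
    """One synchronous EASGD update over a list of agent parameter vectors.
    Each agent follows its stochastic gradient plus an elastic attraction
    toward the shared center; the center is pulled toward the agents."""
    updated = [x - lr * (grad_fn(x) + rho * (x - center)) for x in agents]
    center = center + lr * rho * sum(x - center for x in agents)
    return updated, center

# Toy usage: f(x) = 0.5 * ||x||^2 with additive gradient noise, so the
# coupled agents and their spatial mean should contract toward the origin.
rng = np.random.default_rng(0)
grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
agents = [rng.standard_normal(2) for _ in range(5)]
center = np.zeros(2)
for _ in range(1000):
    agents, center = easgd_step(agents, center, grad)
```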
Neural Computation (1995) 7 (4): 753–790.
Published: 01 July 1995
Abstract
The rapid development and formalization of adaptive signal processing algorithms loosely inspired by biological models can potentially be harnessed for flexible new learning control algorithms for nonlinear dynamic systems. However, if such controller designs are to be viable in practice, their stability must be guaranteed and their performance quantified. In this paper, the stable adaptive tracking control designs employing "neural" networks, initially presented in Sanner and Slotine (1992), are extended to classes of multivariable mechanical systems, including robot manipulators, and bounds are developed for the magnitude of the asymptotic tracking errors and the rate of convergence to these bounds. This new algorithm permits simultaneous learning and control, without recourse to an initial identification stage, and is distinguished from previous stable adaptive robotic controllers, e.g., Slotine and Li (1987), by the relative lack of structure assumed in the design of the control law. The required control is simply considered to contain unknown functions of the measured state variables, and adaptive "neural" networks are used to stably determine, in real time, the entire required functional dependence. While computationally more complex than explicitly model-based techniques, the methods developed in this paper may be effectively applied to the control of many physical systems, such as underwater robotic vehicles and high-performance aircraft, for which the state dependence of the dynamics is reasonably well understood but the exact functional form of this dependence, or part thereof, is not.
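As a rough illustration of simultaneous learning and control in this spirit, the sketch below pairs a stabilizing feedback term with a Gaussian radial-basis-function network that adapts in real time to approximate the unknown state-dependent dynamics. The network centers, width, gains, and the scalar error signal are hypothetical choices for illustration, not the multivariable design of the paper.

```python
import numpy as np

# Illustrative sketch under stated assumptions: a scalar tracking-error
# signal s, a fixed grid of Gaussian RBF centers, and a gradient-like
# adaptation law. No initial identification stage is needed; the weights
# adapt while the feedback term keeps the closed loop stable.

def rbf_features(x, centers, width=1.0):
    """Gaussian radial basis functions evaluated at the measured state x."""
    d = np.linalg.norm(centers - x, axis=1)
    return np.exp(-(d / width) ** 2)

def control_and_adapt(x, s, w, centers, k=5.0, gamma=1.0, dt=1e-3):
    """Simultaneous learning and control: u combines feedback on the error
    signal s with the network's current estimate of the unknown dynamics,
    while the weights w follow the adaptation law dw/dt = gamma * phi(x) * s."""
    phi = rbf_features(x, centers)
    u = -k * s - w @ phi           # stabilizing feedback plus learned term
    w = w + dt * gamma * phi * s   # real-time weight adaptation
    return u, w
```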