## Abstract

We analyze the effect of synchronization on distributed stochastic gradient algorithms. By exploiting an analogy with dynamical models of biological quorum sensing, where synchronization between agents is induced through communication with a common signal, we quantify how synchronization can significantly reduce the magnitude of the noise felt by the individual distributed agents and their spatial mean. This noise reduction is in turn associated with a reduction in the smoothing of the loss function imposed by the stochastic gradient approximation. Through simulations on model nonconvex objectives, we demonstrate that coupling can stabilize higher noise levels and improve convergence. We provide a convergence analysis for strongly convex functions by deriving a bound on the expected deviation of the spatial mean of the agents from the global minimizer for an algorithm based on quorum sensing, the same algorithm with momentum, and the elastic averaging SGD (EASGD) algorithm. We discuss extensions to new algorithms that allow each agent to broadcast its current measure of success and shape the collective computation accordingly. We supplement our theoretical analysis with numerical experiments on convolutional neural networks trained on the CIFAR-10 data set, where we note a surprising regularizing property of EASGD even when applied to the non-distributed case. This observation suggests alternative second-order in time algorithms for nondistributed optimization that are competitive with momentum methods.

## 1 Introduction

Stochastic gradient descent (SGD) and its variants have become the de facto algorithms for large-scale machine learning applications such as deep neural networks (Bottou, 2010; Goodfellow, Bengio, & Courville, 2016; LeCun, Bengio, & Hinton, 2015; Mallat, 2016). SGD is used to optimize finite-sum loss functions, where a stochastic approximation to the gradient is computed using only a random selection of the input data points. Well-known results on almost-sure convergence rates to global minimizers for strictly convex functions and to stationary points for non-convex functions exist under sufficient regularity conditions (Bottou, 1998; Robbins & Siegmund, 1971). Classic work on iterate averaging for SGD (Polyak & Juditsky, 1992) and other more recent extensions (Bach & Moulines, 2013; Defazio, Bach, & Lacoste-Julien, 2014; Roux, Schmidt, & Bach, 2012; Schmidt, Le Roux & Bach, 2017) can improve convergence under a set of reasonable assumptions typically satisfied in the machine learning setting. Convergence proofs rely on a suitably chosen decreasing step size; for constant step sizes and strictly convex functions, the parameters ultimately converge to a distribution peaked around the optimum.
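As a concrete reference point, a minimal minibatch SGD loop on a finite-sum loss can be sketched as follows; the decreasing step size $\eta_0/\sqrt{t+1}$ and the least-squares instance are illustrative choices, not drawn from any particular reference above.

```python
import numpy as np

def sgd(grad_batch, x0, n, eta0=0.1, steps=5000, batch=8, seed=0):
    """Minibatch SGD on a finite-sum loss f(x) = (1/n) sum_i f_i(x).

    grad_batch(x, idx) returns the gradient of the minibatch average of
    the f_i over the sampled indices idx; eta0 / sqrt(t + 1) is a
    standard decreasing step-size schedule.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        x = x - eta0 / np.sqrt(t + 1) * grad_batch(x, idx)
    return x

# Least-squares instance: f_i(x) = 0.5 * (a_i . x - b_i)^2, strongly
# convex on average since A has full column rank.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
x_true = rng.normal(size=5)
b = A @ x_true
grad_batch = lambda x, idx: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
x_hat = sgd(grad_batch, np.zeros(5), n=200)
```

Because the observations here are noiseless, each per-sample gradient vanishes at the optimum, and the iterates converge close to `x_true` rather than to a distribution around it.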

For large-scale machine learning applications, parallelization of SGD is a critical problem of significant modern research interest (Chaudhari et al., 2017; Dean et al., 2012; Recht & Ré, 2013; Recht, Ré, Wright, & Niu, 2011). Recent work in this direction includes the elastic averaging SGD (EASGD) algorithm, in which $p$ distributed agents coupled through a common signal optimize the same loss function. EASGD can be derived from a single SGD step on a global variable consensus objective with a quadratic penalty, and the common signal takes the form of an average over space and time of the parameter vectors of the individual agents (Boyd, Parikh, Chu, Peleato, & Eckstein, 2010; Zhang, Choromanska & LeCun, 2015). At its core, the EASGD algorithm is a system of identical, coupled, discrete-time dynamical systems. Indeed, the EASGD algorithm has exactly the same structure as earlier mathematical models of synchronization (Chung & Slotine, 2009; Russo & Slotine, 2010) inspired by quorum sensing in bacteria (Miller & Bassler, 2001; Waters & Bassler, 2005). In these models, which have typically been analyzed in continuous time, the dynamics of the common (quorum) signal can be arbitrary (Russo & Slotine, 2010), and in fact they may consist simply of a weighted average of individual signals. Motivated by this immediate analogy, we present here a continuous-time analysis of distributed stochastic gradient algorithms, of which EASGD is a special case. A significant focus of this work is the interaction between the degree of synchronization of the individual agents, characterized rigorously by a bound on the expected distance between all agents and governed by the coupling strength, and the amount of noise induced by their stochastic gradient approximations.
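For concreteness, one synchronous round of the EASGD updates described above can be sketched as follows. This is a minimal sketch after Zhang et al. (2015); the step size, elastic gain, and quadratic test problem are illustrative choices.

```python
import numpy as np

def easgd_step(xs, x_tilde, stoch_grad, eta, alpha, rng):
    """One synchronous EASGD round: each of the p agents takes a
    stochastic gradient step plus an elastic pull toward the center
    variable x_tilde, which in turn moves toward the agents' spatial
    mean. Requires alpha * p < 1 for the center update to be stable.
    """
    pull = xs - x_tilde  # (p, d) elastic differences
    new_xs = np.array([x - eta * stoch_grad(x, rng) for x in xs]) - alpha * pull
    new_center = x_tilde + alpha * pull.sum(axis=0)
    return new_xs, new_center

# Noisy quadratic demo: f(x) = 0.5 ||x||^2 with additive gradient noise.
rng = np.random.default_rng(0)
p, d = 8, 3
xs = rng.normal(size=(p, d))
x_tilde = xs.mean(axis=0)
g = lambda x, rng: x + 0.5 * rng.normal(size=x.shape)
for _ in range(2000):
    xs, x_tilde = easgd_step(xs, x_tilde, g, eta=0.05, alpha=0.05, rng=rng)
```

After many rounds, the center variable settles near the minimizer at the origin, while the agents fluctuate around it with a spread set by the noise level and the elastic gain.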

The effect of coupling between identical continuous-time dynamical systems has a rich history. In particular, synchronization phenomena in such coupled systems have been the subject of much mathematical (Wang & Slotine, 2005), biological (Russo & Slotine, 2010), neuroscientific (Tabareau, Slotine & Pham, 2010), and physical interest (Javaloyes, Perrin, & Politi, 2008). In nonlinear dynamical systems, synchronization has been shown to play a crucial role in protection of the individual systems from independent sources of noise (Tabareau et al., 2010). The interaction between synchronization and noise has also been posed as a possible source of regularization in biological learning, where quorum sensing–like mechanisms could be implemented between neurons through local field potentials (Bouvrie & Slotine, 2013). Given the significance of stochastic gradient (Y. Zhang, Saxe, Advani & Lee, 2018) and externally injected (Neelakantan et al., 2015) noise in regularization of large-scale machine learning models such as deep networks (Zhang, Bengio, Hardt, Recht & Vinyals, 2017), it is natural to expect that the interplay between synchronization of the individual agents and the noise from their stochastic gradient approximations is of central importance in distributed SGD algorithms.

Recently, there has been renewed interest in a continuous-time view of optimization algorithms (Betancourt, Jordan, & Wilson, 2018; Wibisono & Wilson, 2015; Wibisono, Wilson & Jordan, 2016; Wilson, Recht & Jordan, 2016). Nesterov's accelerated gradient method (Nesterov, 1983) was fruitfully analyzed in continuous time in Su, Boyd and Candes (2014), and a unifying extension to other algorithms can be found in Wibisono et al. (2016). Continuous-time analysis has also enabled discrete-time algorithm development through classical discretization techniques from numerical analysis (Zhang, Mokhtari, Sra & Jadbabaie, 2018). This article adds to this line of work by deriving new results with the mathematical tools afforded by the continuous-time view, such as stochastic calculus and nonlinear contraction analysis (Lohmiller & Slotine, 1998).

The article is organized as follows. In section 2, we provide some necessary mathematical preliminaries: a review of SGD in continuous time, a continuous-time limit of the EASGD algorithm, a review of stochastic nonlinear contraction theory, and a statement of some needed assumptions. In section 3, we demonstrate that the effect of synchronization of the distributed SGD agents is to reduce the magnitude of the noise felt by each agent and by their spatial mean. We derive this for an algorithm where all-to-all coupling is implemented through communication with the spatial mean of the distributed parameters, and we refer to this algorithm as quorum SGD (QSGD). The appendix presents a similar derivation with arbitrary dynamics for the quorum variable, of which EASGD is a special case. In section 4, we connect this noise reduction property with a recent analysis in Kleinberg, Li, and Yuan (2018), which shows that SGD can be interpreted as performing gradient descent on a smoothed loss in expectation. We use this derivation to garner intuition about the qualitative performance of distributed SGD algorithms as the coupling strength is varied, and we verify this intuition with simulations on model non-convex loss functions in low and high dimensions. In section 5, we provide new convergence results for QSGD, QSGD with momentum, and EASGD for a strongly convex objective. In section 6, we explore the properties of EASGD and QSGD for training deep neural networks and, in particular, test the stability and performance of variants proposed throughout the article. We also propose a new class of second-order in time algorithms motivated by the EASGD algorithm with a single agent, which consists of standard SGD coupled in feedback to the output of a nonlinear filter of the parameters. We provide some concluding remarks in section 7.

## 2 Mathematical Preliminaries

In this section, we provide a brief review of the necessary mathematical tools employed in this work.

### 2.1 Convex Optimization

For the convergence proofs in section 5 and for synchronization of momentum methods, we require a few standard definitions from convex optimization.

A function $f \in C^2(\mathbb{R}^n, \mathbb{R})$ is $l$-strongly convex with $l > 0$ if its Hessian is uniformly lower bounded by $lI$ with respect to the positive semidefinite order, $\nabla^2 f(x) \succeq lI$ for all $x \in \mathbb{R}^n$.

A function $f \in C^2(\mathbb{R}^n, \mathbb{R})$ is $L$-smooth with $L > 0$ if its Hessian is uniformly upper bounded by $LI$ with respect to the positive semidefinite order, $\nabla^2 f(x) \preceq LI$ for all $x \in \mathbb{R}^n$.
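For a quadratic $f(x) = \frac{1}{2} x^T A x$ with symmetric positive-definite $A$, the Hessian is $A$ everywhere, so the tightest constants are $l = \lambda_{\min}(A)$ and $L = \lambda_{\max}(A)$. A small numerical check (the matrix below is an illustrative choice):

```python
import numpy as np

def hessian_bounds(A):
    """For the quadratic f(x) = 0.5 * x.T @ A @ x with symmetric A, the
    Hessian is A everywhere, so f is l-strongly convex and L-smooth with
    l = lambda_min(A) and L = lambda_max(A)."""
    eigs = np.linalg.eigvalsh(A)  # sorted ascending
    return eigs[0], eigs[-1]

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
l, L = hessian_bounds(A)
# l > 0 confirms strong convexity; L / l is the condition number that
# governs the convergence rate of gradient descent on f.
```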

### 2.2 Stochastic Gradient Descent in Discrete-Time

### 2.3 Stochastic Gradient Descent in Continuous-Time

Here $BB^T = \Sigma$. Dropping the small term proportional to $\eta$ reduces the weak error to $O(\eta)$ (Hu et al., 2017). This leads to the SDE
$$dx = -\nabla f(x)\,dt + \sqrt{\eta}\, B\, dW.$$
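As a sketch of this continuous-time model of SGD, the SDE $dx = -\nabla f(x)\,dt + \sqrt{\eta}\, B\, dW$ with $BB^T = \Sigma$ can be simulated with the Euler-Maruyama scheme. The quadratic drift and identity diffusion below are illustrative choices.

```python
import numpy as np

def euler_maruyama(grad_f, B, x0, eta, T=20.0, dt=0.01, seed=0):
    """Simulate dx = -grad_f(x) dt + sqrt(eta) * B dW, the continuous-
    time model of SGD with gradient-noise covariance Sigma = B B^T,
    using the Euler-Maruyama scheme with time step dt."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(int(T / dt)):
        dW = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x = x - grad_f(x) * dt + np.sqrt(eta) * B @ dW
    return x

# For f(x) = 0.5 ||x||^2 and B = I, the stationary distribution is
# Gaussian with covariance (eta / 2) * I, so trajectories settle into
# a small cloud around the minimizer at the origin.
x_T = euler_maruyama(lambda x: x, np.eye(2), np.array([2.0, -1.0]), eta=0.01)
```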

### 2.4 EASGD in Continuous-Time

### 2.5 Background on Nonlinear Contraction Theory

The main mathematical tool used in this work is nonlinear contraction theory, a form of incremental stability for nonlinear systems. In particular, we specialize to the case of time- and state-independent metrics (further details can be found in Lohmiller & Slotine, 1998).

In this work, we interchangeably refer to $f$, the system, and the generalized Jacobian as contracting depending on the context. In particular, for stochastic differential equations, we refer to $f$ as contracting if the deterministic system is contracting. Two specific robustness results for contracting systems needed for the derivations in this work are summarized below.

See Pham, Tabareau, and Slotine (2009, theorem 8) for a proof. The following corollary will be useful in section 5.

Corollary 6 is obtained by following the proof of theorem 8 in Pham et al. (2009), with the restriction that one system is deterministic. To reduce the appearance of decaying exponential terms, in applications of theorem 5, corollary 6, and other related contraction-based bounds, we will simply state the final constant and the corresponding rate of exponential transients. The conditions of theorem 5 are worthy of their own definition.

If the conditions of theorem 5 are satisfied, system 2.14 is said to be stochastically contracting in the metric $M$ (or with metric transformation $\Theta$) with bound $C$ and rate $\lambda$.

In this work, we will also make use of an extension of contraction known as partial contraction, originally introduced in Wang and Slotine (2005). The procedure is summarized in theorem 8.

Assume a single trajectory $y(t)$ of equation 2.15 is known. Then all trajectories of equation 2.7 converge to $y(t)$.

We will commonly refer to the auxiliary $y$ system in theorem 8 as a virtual system, and $f$ is said to be partially contracting. Theorem 8 enables the application of contraction to systems that are not themselves contracting but can be embedded in a virtual system that is.

Bounds derived in the virtual system can then be transferred to the trajectories of the original system through corollary 6, or to the trajectories of equation 2.14 through the application of theorem 5.

### 2.6 Assumptions

We require two main assumptions about the objective function $f(x)$, both of which have been employed in previous work analyzing synchronization and noise in nonlinear systems (Tabareau et al., 2010). The first is an assumption on the nonlinearity of the components of the gradient.

Assume that the Hessian matrix of each component of the negative gradient has bounded maximum eigenvalue, $\nabla^2(-\nabla f(x))_j \preceq QI$ for all $j$.

The second assumption is a condition on the robustness of the distributed gradient flows studied in this work to small, potentially stochastic perturbations.

Continuous dependence of trajectories on the parameters of the dynamics in the sense of assumption 2 can be characterized for deterministic systems through continuity assumptions on the dynamics (see, e.g., section 3.2 in Khalil, 2002). Here we assume a natural stochastic extension. Assumption 2 has been verified for FitzHugh-Nagumo oscillators where $P$ is a white noise process (Tuckwell & Rodriguez, 1998) and validated in simulation for more complex nonlinear oscillators (Tabareau et al., 2010). We remark that $\mathbb{E}\|P\| \to 0$ implies that $\|P\| \to 0$ almost surely, and hence that $P \to 0$ almost surely.

## 3 Synchronization and Noise

In this section, we analyze the interaction between synchronization of the distributed QSGD agents and the noise they experience. We begin with a derivation of a quantitative measure of synchronization that applies to a class of distributed SGD algorithms involving coupling to a common external signal with no communication delays. We then present the section's primary contribution, which will serve as a basis for the theory in the remainder of the article, as well as for the intuition for various experiments.

### 3.1 A Measure of Synchronization

We now present a simple theorem on synchronization in the deterministic setting, which will allow us to prove a bound on synchronization in the stochastic setting using theorem 5.

This theorem motivates a definition.

We will say the agents in a distributed algorithm globally exponentially synchronize if they all converge to one another exponentially regardless of initial conditions.

Theorem 11 gives a simple condition on the coupling gain $k$ for synchronization of the individual agents in equation 3.1. Because $z$ can represent any input, theorem 11 applies to any dynamics of the quorum variable: with $z = x_\bullet$, it applies to the QSGD algorithm, and with $z = \tilde{x}$, it applies to the EASGD algorithm. Under the assumption of a contracting deterministic system, we can use the stochastic contraction results in theorem 5 to bound the expected distance between individual agents in the stochastic setting.

For fixed $i$ and $j$, applying the results of theorem 5 in the Euclidean metric leads to a bound on the expected distance between the two agents.

We will refer to equation 3.4 as a synchronization condition.
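To illustrate how the coupling gain controls synchronization, the following discrete-time sketch of QSGD couples each agent to the spatial mean and measures the average spread between the agents. The quadratic loss, step size, and noise level are illustrative choices, not the paper's settings.

```python
import numpy as np

def qsgd(grad_f, xs0, k, eta=0.05, noise=0.5, steps=2000, seed=0):
    """Discretized QSGD: each agent follows -grad_f(x_i) plus a coupling
    force k * (x_bullet - x_i) toward the spatial mean x_bullet, plus
    gradient noise. Returns the final agents and the time-averaged
    spread (mean distance of the agents from x_bullet)."""
    rng = np.random.default_rng(seed)
    xs = np.array(xs0, dtype=float)
    spread = 0.0
    for _ in range(steps):
        x_bullet = xs.mean(axis=0)
        drift = -np.array([grad_f(x) for x in xs]) + k * (x_bullet - xs)
        xs = xs + eta * drift + eta * noise * rng.normal(size=xs.shape)
        spread += np.mean(np.linalg.norm(xs - xs.mean(axis=0), axis=1))
    return xs, spread / steps

xs0 = np.random.default_rng(1).normal(size=(16, 2))
_, spread_uncoupled = qsgd(lambda x: x, xs0, k=0.0)
_, spread_coupled = qsgd(lambda x: x, xs0, k=10.0)
# A larger gain k yields a smaller expected distance between agents,
# the qualitative content of the synchronization bound.
```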

### 3.2 Reduction of Noise Due to Synchronization

We now provide a mathematical characterization of how synchronization reduces the amount of noise felt by the individual QSGD agents. The derivation follows the mathematical procedure first employed in Tabareau et al. (2010) in the study of neural oscillators.

We now let $F_j$ denote the gradient of $-\nabla f(x)_j$, and we let $H_j$ denote its Hessian. We apply the Taylor formula with integral remainder to $-\nabla f(x)_j$:

By assumption 2 and theorem 5, as $k \to \infty$ and $p \to \infty$, the difference between trajectories of equation 3.11 and the unperturbed, noise-free system tends to zero almost surely, as the effects of both the stochastic disturbance $\epsilon$ and the additive noise term are eliminated in this simultaneous limit. $\square$

### 3.3 Discussion

Theorem 14 demonstrates that for distributed SGD algorithms, roughly speaking, the noise strength is set by the ratio $\eta/(bp)$ at the expense of a distortion term, which tends to zero with synchronization. Whether this noise reduction is a benefit or a drawback for non-convex optimization depends on the problem at hand.

If the use of a stochastic gradient is purely as an approximation of the true gradient (e.g., due to single-node or single-GPU memory limitations), then synchronization can be seen as improving this approximation and eliminating undesirable noise while simultaneously parallelizing the optimization problem. The analysis in this section then gives rigorous bounds on the magnitude of noise reduction. The $\epsilon $ term could be measured in practice to understand the empirical size of the distortion, and $k$ could be increased until $\epsilon $ tends approximately to zero and the noise is reduced to a desired level.

Many studies have reported the importance of stochastic gradient noise in deep learning, particularly in the context of generalization performance (Poggio et al., 2017; Zhu, Wu, Yu, Wu & Ma, 2018; Chaudhari & Soatto, 2018; Zhang et al., 2017). Furthermore, large batches are known to cause issues with generalization, and this has been hypothesized to be due to a reduction in the noise magnitude due to a higher $b$ in the ratio $\eta/b$ (Keskar, Mudigere, Nocedal, Smelyanskiy, & Tang, 2016). In this context, reduction of noise may be undesirable, and one may be interested only in the parallelization of the problem. Our analysis then suggests choosing $k$ high enough that the quorum variable represents a meaningful average of the parameters, but low enough that the noise in the SGD iterations is not reduced. Indeed, in section 6, we will find the best generalization performance for low values of $k$ that still result in convergence of the quorum variable. For deep networks, the level of synchronization for a given value of $k$ will be both architecture and data set dependent.

The condition in theorem 11 is merely a sufficient condition for synchronization, and synchronization may occur for significantly lower values of $k$ than predicted by contraction in the Euclidean metric. However, independent of when synchronization exactly occurs, so long as there is a fixed upper bound as in equation 3.4, the results in this section will apply with the corresponding estimate of $\mathbb{E}[\|\epsilon\|]$.

### 3.4 Extension to Multiple Learning Rates

### 3.5 Extension to Momentum Methods

Equations 3.18 and 3.19 also admit the $x_i$ as particular solutions, so that the agents globally exponentially synchronize with a rate $\xi = |\lambda_{\max}(J)|$. The lower bound on $\xi$ can be obtained by application of the result in Slotine (2003, example 3.8). $\square$

Hence, a bound similar to equation 3.4 can be derived just as in lemma 13. Because the $x_{1,\bullet}$ dynamics are linear and because the $x_{2,\bullet}$ dynamics are nonlinear only through the gradient of the loss, assumption 1 does not need to be modified. For $\inf_t \mu(t) > 0$, $k_2$ can be set to zero, so that coupling is only through the position variables.

## 4 An Alternative View of Distributed Stochastic Gradient Descent

In this section, we connect the discussion of synchronization and noise reduction with the analysis in Kleinberg et al. (2018), which interprets SGD as performing gradient descent on a smoothed loss in expectation. Specifically, we show that the reduction of noise due to synchronization can be viewed as a reduction in the smoothing of the loss function. This provides further geometrical intuition for the effect of synchronization on distributed SGD algorithms. It furthermore sheds light on why one may want to use low values of $k$ to prevent noise reduction in learning problems involving generalization, where optimization of the empirical risk rather than the expected risk introduces spurious defects into the loss function that may be removed by sufficient smoothing.

Using this argument, Kleinberg et al. (2018) show that SGD can converge to minimizers for a much larger class of functions than just convex functions, though the convolution operation can disturb the locations of the minima.
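This smoothing interpretation can be checked numerically with a Monte Carlo estimate of the convolved loss $\mathbb{E}_w[f(x - \eta w)]$. The rippled quadratic below is an illustrative test function, and the uniform noise model mirrors the experiments of section 4.3.

```python
import numpy as np

def smoothed(f, x, eta, width, n=20000, seed=0):
    """Monte Carlo estimate of the convolved loss E_w[f(x - eta * w)]
    with w ~ U(-width, width): the smoothing that SGD noise induces in
    the interpretation of Kleinberg et al. (2018)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-width, width, size=n)
    return float(np.mean(f(x - eta * w)))

# A quadratic corrupted by a sharp ripple: the ripple's period (~0.3)
# is far below the smoothing scale eta * width, so convolution averages
# it away and leaves the wide quadratic basin.
f = lambda x: 0.5 * x**2 + 0.2 * np.cos(20.0 * x)
val = smoothed(f, 0.0, eta=1.0, width=1.5)
```

At $x=0$ the exact convolved value is $\frac{1}{2}\mathbb{E}[w^2] + 0.2\,\mathbb{E}[\cos(20 w)] \approx 0.368$: the ripple contributes almost nothing, while it dominates the local structure of the unsmoothed loss.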

### 4.1 The Effect of Synchronization on the Convolution Scaling

### 4.2 Discussion

To better understand the interplay of synchronization and noise in SGD, we can consider several limiting cases. Consider a choice of $\eta $ corresponding to a fairly high noise level, so that the loss function is sufficiently smoothed for the iterates of SGD ($k=0$) to avoid local minima, saddle points, and flat regions, but so that the iterates would not reliably converge to a desirable region of parameter space, such as a deep and robust minimum.

For $k \to \infty$ and $p$ sufficiently large, the quorum variable will effectively perform gradient descent on a minimally smoothed loss and will converge to a local minimum of the true loss function close to its initialization. Due to the strong coupling, the agents will likely get pulled into this minimum, leading to convergence as if a single agent had been initialized using deterministic gradient descent at $x_\bullet(t=0)$, despite the high value of $\eta$.

With an intermediate value of $k$ so that the agents remain in close proximity to one another, but not so strong that $\|\epsilon\| \to 0$, the $x$ variables will be concentrated around the minima of the smoothed loss (the coupling will pull the agents together, but because $\|\epsilon\| \neq 0$, the smoothing will not be reduced in the sense of equation 4.2). The stationary distribution of SGD is thought to be biased toward concentration around degenerate minima of high volume (Banburski et al., 2019); the coupling force should thus amplify this effect and lead to an accumulation of agents in wider and deeper minima in which all agents can approximately fit. Eventually, if sufficiently many agents arrive in a single minimum, it will be extremely difficult for any one agent to escape, leading to a consensus solution chosen by the agents even at a high noise level.

### 4.3 Numerical Simulations in Non-Convex Optimization

The corresponding distributions of final points, computed via a kernel density estimate, are plotted over a range of $k$ values in Figure 1. In each panel of the figure, the true loss function is plotted in orange and the loss function convolved with the noise distribution is in blue. The loss functions are normalized so they can appear on the same scale as the distributions, and the $y$ scale is thus omitted. The agents are initialized uniformly over the interval $[-3,3]$, and each experiences an independent and identically distributed (i.i.d.) uniform noise term $\zeta_t^i \sim U(-1.5, 1.5)$ per iteration. $F$ is fixed at 150.

In Figure 1a, there is no coupling and the distribution of final iterates for the agents is nearly uniform across the parameter space, with a slightly increased probability of convergence to the two deepest regions. The distribution of the quorum variable is sharply peaked around zero. As $k$ increases to $k=0.4$ in Figure 1b, the agents concentrate around the wide basins of the convolved loss function and avoid the sharp local minima of the true loss function. The distribution for the quorum variable is similar but is too wide to imply reliable convergence to a minimum with loss near the global optimum.

As $k$ is increased further to $k=0.8$ in Figure 1c and $k=1.0$ in Figure 1d, performance increases significantly. The distribution of the agents is centered around the global optimum of the smoothed loss, and the distribution of the quorum variable is very sharp around the same minimum; this represents the regime in which the agents have chosen a consensus solution. As demonstrated by Figure 1a, this improved convergence is not possible with standard SGD. As $k$ is increased again in Figures 1e and 1f, the coupling force becomes too great, and performance decreases. There is no initial exploratory phase to find the deeper regions of the landscape, and convergence is simply near the initialization of $x_\bullet$.

These simulation results suggest a useful combination of high noise, coupling, and traditional learning rate schedules. High noise levels can lead to rapid exploration and avoidance of problematic regions in parameter space, such as local minima, saddle points, or flat regions, while coupling can stabilize the dynamics toward a distribution around a wide and deep minimum of the convolved loss. The learning rate can then be decreased to improve convergence to minima of the true loss that lie within the spread of the distribution. In the uncoupled case, similar levels of noise would lead to a random walk.

This intuition is supported by the simulation results in Figure 2. The same simulation parameters are used, except the learning rate is now decreased by a factor of two every 4000 iterations until $\eta \leq 0.001$, where it is fixed. In the uncoupled case in Figure 2a, the schedule only slightly improves convergence around minima of the smoothed loss when compared to Figure 1a. Figure 2b again reflects a mild improvement relative to Figure 1b. For the two best values of $k=0.8$ and $k=1.0$ in Figures 2c and 2d, convergence of the agents and the quorum variable around the deepest minimum of the true loss that lies within the distribution of the agents in Figures 1c and 1d is excellent. In the very high $k$ regime in Figures 2e and 2f, the coupling force is too strong to enable exploration, and convergence is again near the initialization of $x_\bullet$, but now to the minima of the true loss.
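The step-decay schedule used in these simulations can be written compactly. The halving period and the floor match the text; the initial rate `eta0` is an illustrative placeholder, since the starting learning rate of the one-dimensional experiments is not stated in this excerpt.

```python
def lr_schedule(t, eta0=0.15, drop_every=4000, factor=2.0, eta_min=0.001):
    """Step-decay schedule as in the simulations: the learning rate is
    halved every drop_every iterations until it reaches eta_min, after
    which it is held fixed."""
    return max(eta0 / factor ** (t // drop_every), eta_min)
```

A single agent would then use `eta = lr_schedule(t)` at iteration `t`, keeping the early iterations noisy and exploratory and the late iterations nearly deterministic.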

Figure 3a is identical to Figure 1a except for the difference in learning rate: the agents converge uniformly across the parameter space. As $k$ is increased to $k=2$ in Figure 3b, the distribution of the agents becomes more localized around the center of parameter space but not around any minima. When $k$ is increased to $k=4$ in Figure 3c, $k=8$ in Figure 3d, and $k=10$ in Figure 3e, the distributions of the agents and the quorum variable become localized on the two deepest minima of the convolved loss but are still too wide for reliable convergence. The value $k=15$ in Figure 3f leads to reliable convergence around the deep minimum on the right and would combine well with a learning rate schedule as in Figure 2. Overall, the trend is similar to the case without momentum, though much higher values of $k$ are tolerated before degradation in performance. Despite high $k$ values rapidly pulling the agent positions close to $x_\bullet(t=0)$, significant differences in the velocities of the agents prevent convergence to a local minimum near $x_\bullet(t=0)$ in the high $k$ regime.

Equation 4.7 represents a separable sum of double well loss functions with pairwise sinusoidal coupling between all parameters. We include 1000 agents in each of 250 simulations per $k$ value with $d=250$, and each simulation is allowed to run for 10,000 steps. The parameters are updated according to the vector forms of equations 4.5 and 4.6 with $\eta = 0.15$ and $\delta = 0.9$. No learning rate schedule is used. The agents are all randomly initialized uniformly in $[-4,4] \times [-4,4]$, and each experiences an i.i.d. noise term $\zeta_t^i \sim U(-0.75, 0.75)$. $F$ is fixed at 50.
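Equation 4.7 itself is not reproduced in this excerpt; a stand-in with the structure just described (a separable sum of double wells plus pairwise sinusoidal coupling) might look as follows. The well location `a` and coupling weight `c` are illustrative choices, not the paper's constants; `F` scales the gradient magnitudes as in the text.

```python
import numpy as np

def loss(x, F=50.0, a=1.426, c=0.1):
    """Illustrative stand-in for equation 4.7: separable double wells
    (minima near +/- a in each coordinate) plus pairwise sinusoidal
    coupling between all parameters; F sets the gradient magnitudes.
    The constants a and c are hypothetical, chosen for illustration."""
    wells = np.sum((x**2 - a**2) ** 2)
    i, j = np.triu_indices(len(x), k=1)
    coupling = c * np.sum(np.cos(x[i] - x[j]))
    return F * (wells + coupling)

# At a synchronized well x = (a, ..., a) the double-well term vanishes
# and only the pairwise coupling term remains.
```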

For visualization purposes, we plot the contours of a two-dimensional cross section of the loss function by evaluating the last $d-2$ coordinates at the value $-1.2$. This value was chosen to represent the bottom-left cluster apparent in Figures 5 and 6; it also lies close to the global minimum of the uncorrupted loss function $(-1.426, -1.426, \ldots, -1.426)^T \in \mathbb{R}^d$. Visualization of high-dimensional loss functions is difficult, and using such a cross section has its drawbacks; in particular, a saddle point may show up as a local minimum, correctly as a saddle point, or as a local maximum depending on the cross section taken. Nevertheless, the employed cross sections enable qualitative visualization of the clustering of the quorum variable and the individual agents and provide assurance that the general phenomena seen in one dimension in Figures 1 to 3 generalize naturally to higher dimensions.

The loss function itself is shown in Figure 4a, and the smoothed loss is shown in Figure 4b, which has significantly reduced complexity. Figure 4c displays the loss value of the quorum variable, averaged over all simulations, as a function of iteration number for a set of possible $k$ values. The results are much the same as was described qualitatively in one dimension. Low values of $k$ such as $k=0$ and $k=0.5$ do not successfully minimize the loss function, as the agents are too spread out. Despite a significant ability to explore the loss landscape with such small coupling, the agents are not concentrated enough for $x_\bullet$ to represent a meaningful average. As $k$ increases, the ability to optimize the loss function at first significantly improves. While better than $k=0$ and $k=0.5$, $k=1.5$ still represents the regime of too little coupling. $k=2.5$ and $k=3.5$ obtain much lower loss values than $k=0$ and $k=0.5$, with $k=2.5$ achieving the lowest loss of the displayed $k$ values. As $k$ is increased further, performance starts to degrade. $k=4.5$ performs worse than $k=2.5$, and $k=5.5$ obtains similar performance to $k=1.5$. Increasing $k$ to $k=7.5$, $k=9.5$, and $k=11.5$ continues to deteriorate the ability of the algorithm to minimize the loss. The optimum $k$ value represents, for the given noise level and loss function, the correct balance of exploration and resistance to noise.

As in the case of any algorithmic hyperparameter, it is natural to expect that there will be an optimum value of $k$. To see that the manifestation of this optimum is precisely a high-dimensional analog of the qualitative behavior observed in the one-dimensional simulations in Figures 1 to 3, we visualize the final points found by the quorum variable and a random selection of 25 agents per simulation in Figures 5 and 6, respectively, for a representative subset of the $k$ values seen in Figure 4c.

Figure 5a shows that $k=0$ results in essentially uniform convergence of the agents across the parameter space to local minima and saddle points, and hence the quorum variable simply converges near the origin in Figure 6a. The small amount of coupling $k=0.5$ in Figure 5b leads to increased, but still insufficient, clustering of the agents. This manifests itself in Figure 6b as a shift of the ball of quorum convergence points toward the bottom left corner. $k=1.5$ and $k=2.5$ in Figures 5c and 5d have significantly improved convergence, with strong clustering of the agents in four balls around $(\pm 1.2, \pm 1.2)^T$. These clusters are located near the minima of the uncorrupted loss function, which occur at $(\pm 1.426, \pm 1.426, \ldots, \pm 1.426)^T$.

$k=1.5$ and $k=2.5$ have similar quorum convergence plots in Figures 6c and 6d, though the value of the loss in Figure 4c is noticeably different at iteration 10,000. The differences in the loss function values for the quorum variables are likely hidden by the low-dimensional visualization method. Figures 5c and 5d show that $k=1.5$ has more “straggler” agents between the four corner clusters than $k=2.5$, which may shift the quorum convergence points uphill. From a qualitative perspective, both are good choices for tracking minima of the uncorrupted or the non-smoothed loss functions and could be combined with a learning rate schedule to improve convergence from the cloud of “starting points” in Figures 5c and 5d.

As $k$ is increased further to $k=7.5$, the coupling begins to grow too strong. The distinct agent clusters attempt to merge, as seen in Figure 5e. The result is seen in Figure 6e, where there are scattered quorum convergence points between the clusters. Finally, for $k=11.5$, the coupling is too great, and both the agents and the quorum variable in Figures 5f and 6f, respectively, converge near the origin.

Taken together, Figures 1 to 6 provide significant qualitative insight into the convergence of distributed SGD algorithms, both with and without momentum. In one-dimensional and high-dimensional simulations, there is an optimum level of coupling that represents an ideal balance between the ability of the agents to explore the loss function and the concentration of the distribution of final iterates. Pushing $k$ too high will lead to convergence near the initialization of $x_\bullet$ and ultimately to reduced smoothing of the loss function, while setting $k$ too low will lead to poor convergence of the quorum variable due to a lack of clustering of the agents. Intermediate values of $k$ lead to concentration of the agents around deep and wide minima of the smoothed loss, which will generally lie close to the minima of the uncorrupted loss; convergence can be improved from here with a learning rate schedule.

The optimum value of $k$ is set by the size of the gradients in comparison to the noise level. In the simulation setup used here, this corresponds to a trade-off between the value of $F$, which sets the gradient magnitudes, and the width of the noise distribution. By setting the width of the noise distribution very high, the optimum $k$ value can be shifted to a large value, so that numerical stability issues arise before performance begins to degrade. Similarly, with small width and small $F$, the optimum value of $k$ can be very small. In section 6, we will see a manifestation of a similar phenomenon in deep networks for the testing loss.

## 5 Convergence Analysis

We now provide contraction-based convergence proofs for QSGD and EASGD in the strongly convex setting. In the original work on EASGD, rigorous bounds were found for multivariate quadratic objectives in discrete-time, and the analysis for a general strongly convex objective was restricted to an inequality on the iteration for several relevant variances (Zhang et al., 2015). The results in this section thus extend previously available convergence results for EASGD and contain new results for QSGD. We furthermore present convergence results for QSGD with momentum.

A significant theme of this section is that the general methodology of Theorem 14 can be applied to produce bounds on the expected distance of the quorum variable from the global minimizer of a strongly convex function, again split into a sum of two terms—one based on the averaged noise and one based on bounding the distortion vector $\epsilon$. We also demonstrate in this section that an optimality result obtained for EASGD in discrete-time in Zhang et al. (2015) can be obtained through a straightforward application of stochastic calculus in continuous-time, and that the same result applies for QSGD.

### 5.1 QSGD Convergence Analysis

We first present a simple lemma describing the convergence of deterministic distributed gradient descent with arbitrary coupling.

If $f$ is $l$-strongly convex, $-\nabla f$ will be contracting in the identity metric with rate $l$.

If $f$ is locally $l$-strongly convex, $-\nabla f$ will be locally contracting in the identity metric with rate $l$. For example, for a nonconvex objective with initializations $x_i(0)$ in a strongly convex region of parameter space, we can conclude exponential convergence to a local minimizer for each agent.
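As a numerical illustration of this deterministic statement, the following minimal numpy sketch couples $p$ gradient-descent agents through their spatial mean on a toy strongly convex quadratic. The objective, constants, and function name are our illustrative assumptions, not part of the algorithms analyzed above.

```python
import numpy as np

def coupled_gd(p=5, k=1.0, eta=0.05, steps=400, l=1.0, seed=0):
    """Deterministic distributed gradient descent with all-to-all coupling
    through the spatial mean (a noise-free stand-in for QSGD).
    f(x) = (l/2)||x||^2 is l-strongly convex with minimizer x* = 0."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(p, 2))          # p agents in R^2
    for _ in range(steps):
        center = x.mean(axis=0)          # spatial mean of the agents
        grad = l * x                     # grad f(x_i) for the quadratic
        x = x - eta * (grad + k * (x - center))
    return x

# With or without coupling (k = 0), every agent contracts to x* = 0,
# consistent with the contraction rate l of the lemma.
```

Because the coupling terms cancel in the mean dynamics, the spatial mean contracts at rate $l$ regardless of $k$; coupling only matters once noise is introduced.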

If $f$ is strongly convex, the coupling between agents provides no advantage in the deterministic setting, as they would individually contract toward the minimum regardless. For stochastic dynamics, however, coupling can improve convergence. We now demonstrate the ramifications of the results in section 3 in the context of QSGD with the following theorem.

By Lemma 4, writing $R=\|y_1-y_2\|$, the difference between the $y_1$ and $y_2$ systems can be bounded;^{7} by Corollary 6, after exponential transients of rate $\lambda$, the bound 5.3 then follows.

As in section 3, the bound 5.3 consists of two terms. The first term originates from a lack of complete synchronization and can be decreased by increasing $k$. The second term comes from the additive noise and can be decreased by increasing the number of agents. Both terms can be decreased by decreasing the ratio $\eta/b$, as this ratio sets the magnitude of the noise and hence the size of both the disturbance and the noise term.

State- and time-dependent couplings of the form $k(x^\bullet,t)$ are also immediately applicable with the proof methodology above. For example, increasing $k$ over time can significantly decrease the influence of the first term in equation 5.3, leaving only a bound essentially equivalent to linear noise averaging. For non-convex objectives, this suggests choosing low values of $k(x^\bullet,t)$ in the early stages of training for exploration and larger values near the end of training to reduce the variance of $x^\bullet$ around a minimum. By the synchronization and noise argument in section 3 and the considerations in section 4, this will also have the effect of improving convergence to a minimum of the true loss function rather than the smoothed loss. If accessible, local curvature information could be used to determine when $x^\bullet$ is near a local minimum and therefore when to increase $k$. Using state- and time-dependent couplings would change the duration of exponential transients, but the result in Theorem 20 would still hold.
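A minimal sketch of such a schedule, assuming a linear ramp for $k(t)$ and isotropic gradient noise on a toy quadratic (the ramp shape, objective, and all constants are illustrative assumptions):

```python
import numpy as np

def qsgd_ramp(p=10, eta=0.1, sigma=0.1, steps=1000, k_max=10.0, seed=0):
    """Sketch of QSGD with a time-dependent coupling gain k(t) ramped from 0
    (exploration early in training) to k_max (variance reduction near the end).
    The objective is f(x) = ||x||^2 / 2; sigma models the gradient noise."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(p, 2))
    for t in range(steps):
        k = k_max * t / steps                        # linear ramp in time
        center = x.mean(axis=0)                      # spatial mean x_bullet
        noise = sigma * rng.normal(size=x.shape)
        x = x - eta * (x + k * (x - center)) + np.sqrt(eta) * noise
    return x

agents = qsgd_ramp()
spread = np.linalg.norm(agents - agents.mean(axis=0), axis=1).max()
# With strong coupling at the end, the agents end up tightly clustered
# around the quorum variable near the minimizer.
```

The final spread of the agents shrinks as the terminal gain grows, mirroring the reduction of the first term in the bound 5.3.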

By Theorem 20, the expected difference after exponential transients between a critical point of $f$ and the stochastic $x$ is given by Corollary 6 and an application of Jensen's inequality. Relative to Theorem 20, the deviation is reduced by a factor of $1/\sqrt{p}$ in exchange for an additional additive term. This additive term is related to the noise strength $C\eta/b$, the bound $Q$, and the number of parameters $n$, and is divided by $\lambda(\lambda+k)$; that is, it is smaller for more strongly convex functions and with more synchronized dynamics.

### 5.2 EASGD Convergence Analysis

We now incorporate the additional dynamics present in the EASGD algorithm. First, we prove a lemma demonstrating convergence to the global minimum of a strongly convex function in the deterministic setting.

By Lemma 11 and strong convexity of $f$, the individual $x_i$ trajectories globally exponentially synchronize with rate $\lambda+k$. On the synchronized subspace, the system can be described by a reduced-order virtual system.
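The deterministic convergence described by this lemma can be checked numerically. The sketch below uses one plausible Euler discretization of the EASGD dynamics on a toy quadratic; the update form and constants are assumptions for illustration, not the exact discrete algorithm of Zhang et al. (2015).

```python
import numpy as np

def easgd_deterministic(p=4, eta=0.05, k=1.0, steps=600, seed=0):
    """Deterministic EASGD-style dynamics on f(x) = ||x||^2 / 2:
    each agent is attracted to the extra quorum variable x_tilde,
    which in turn relaxes toward the agents."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(p, 2))                   # p agents
    x_tilde = rng.normal(size=2)                  # quorum variable
    for _ in range(steps):
        grad = x                                  # grad f for the quadratic
        new_x = x - eta * grad - eta * k * (x - x_tilde)
        x_tilde = x_tilde + eta * k * (x - x_tilde).sum(axis=0)
        x = new_x
    return x, x_tilde

# Both the agents and the quorum variable converge to the global minimum
# x* = 0 of the strongly convex objective.
```

For these gains, the coupled linear system is stable, and both $x_i$ and $\tilde{x}$ contract exponentially to the minimizer.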

Just as in Theorem 20, we now turn to a convergence analysis for the EASGD algorithm using the results of Lemma 21.

Let $\gamma$ denote the contraction rate of Lemma 21, as lower bounded in equation 5.6. Further assume that $\mathrm{Tr}(B^TMB)\le C(p)$ with $C$ a positive constant, potentially dependent on $p$ through the dependence of $M$ on $p$. Then a bound on the expected deviation holds after exponential transients of rate $\gamma$ and $\lambda+k$.

The first virtual system is contracting by Lemma 21. The second system is contracting for any external input $\epsilon$, and we have already bounded $E[\|\epsilon\|]$ in section 3 (the application of the bound is independent of the dynamics of the quorum variable; see the appendix for details). Let $z_i=(y_i,\tilde{y}_i)$ and $z^*=(x^*,x^*)$. By an identical argument as in the proof of Theorem 20, and noting that the condition number of $\Theta$ is $p$, the result follows from Corollary 6.

Theorem 22 demonstrates an explicit bound on the expected deviation of both the center of mass variable $x^\bullet$ and the quorum variable $\tilde{x}$ from the global minimizer of a strongly convex function. As in the discussion after Theorem 20, the results will still hold with state- and time-dependent couplings of the form $k=k(\tilde{x},t)$, and the same ideas suggested for QSGD based on increasing $k$ over time can be used to eliminate the effect of the first term in the bound.

Theorem 22 is strictly weaker than Theorem 20. The metric transformation used adds a factor of $p$ to the first quantity in the bound, and the assumption $\mathrm{Tr}(B^TMB)\le C(p)$ now depends on $p$ through the factor of $p$ in the top-left block of $M$. Indeed, writing the matrix $B$ in $n\times n$ block form, $\mathrm{Tr}(B^TMB)=C+(p-1)\mathrm{Tr}(B_{11}^TB_{11}+B_{12}^TB_{12})$, where $C=\mathrm{Tr}(B^TB)$ as in Theorem 20. Thus, the dependence of $C(p)$ on $p$ is in general linear.

Because of this linear dependence on $p$, the first term in the bound scales like $p^{3/2}$, while the second is asymptotically independent of $p$. This is not the case in Theorem 20, where the first term is asymptotically independent of $p$ and the second term scales like $1/\sqrt{p}$. The unfavorable scaling of the bound in Theorem 22 with $p$ implies that higher values of $p$ do not improve convergence for EASGD as they do for QSGD. These issues can be avoided by reformulating Lemma 21 in the Euclidean metric, but this leads to the fairly strong restriction $k<4\lambda p/(p-1)^2$.

These observations highlight potential convergence issues for EASGD with large $p$ which are not present with QSGD. In line with these theoretical conclusions, we will empirically find stricter stability conditions on $k$ for EASGD when compared to QSGD for training deep networks in section 6. Nevertheless, in the context of non-convex optimization, higher values of $p$ can still lead to improved performance by affording increased parallelization of the problem and exploration of the landscape.

Less significantly, unlike in Theorem 20, the bound in Theorem 22 is applied to the combined vector $z$ rather than the quorum variable $\tilde{x}$ itself, and the contraction rate $\gamma$ is used rather than $\lambda$ in the virtual system bounds.^{8} Both of these facts weaken the result when compared to Theorem 20. In general, $\gamma$ will be less than $\lambda$, as exemplified by the lower bound, equation 5.6.

### 5.3 QSGD with Momentum Convergence Analysis

We now present a proof of convergence for the QSGD algorithm with momentum. We first prove a lemma demonstrating convergence to the global minimum of a strongly convex, $\bar{\lambda}$-smooth function. We consider the case of coupling only in the position variables; coupling additionally through the momentum variables is similar. We also restrict to the case of a constant momentum coefficient for simplicity.

By Theorem 15 and the assumption on $k$, the agents will globally exponentially synchronize with rate $\xi$, where $\xi$ may be lower bounded as in equation 3.17. On the synchronized subspace, the overall system can be described by a virtual system.

Note that in general, so long as $\mu$ is chosen to satisfy the lower bound of the preceding lemma, the QSGD with momentum system will be contracting in some metric. The specific metric will depend on the value of $\delta(\mu)$, for example, as chosen in the proof.

With Lemma 23 in hand, we can now state a convergence result for QSGD with momentum.

Suppose the conditions of Lemma 23 and assumption 2 are satisfied. Let $\kappa$ denote the contraction rate of the deterministic, fully synchronized QSGD with momentum system, as lower bounded in equation 5.8, and let $\xi$ denote the synchronization rate of the QSGD with momentum system, as lower bounded in equation 3.17. Further assume that $\mathrm{Tr}(B^TMB)\le C$ with $C>0$, where $M=\Theta^T\Theta$ and $\Theta$ is the metric transformation from Lemma 23. Let $\psi=\frac{1}{2}\left(1+a^2+\delta^2\mu^2-\sqrt{\left(1+a^2+\delta^2\mu^2\right)^2-4a^2}\right)$ denote the minimum eigenvalue of $M$, with $a$ given by equation 5.9, and let $\Psi=\frac{1}{2}\left(1+a^2+\delta^2\mu^2+\sqrt{\left(1+a^2+\delta^2\mu^2\right)^2-4a^2}\right)$ denote the maximum eigenvalue. Then, after exponential transients of rate $\kappa$ and $\xi$, with $z=(x_1^\bullet,x_2^\bullet)$ and $z^*=(x^*,0)$, the bound 5.10 holds.

The proof proceeds as in Theorems 20 and 22. The first virtual system is contracting by Lemma 23. The second system is contracting for any external input $\epsilon$, and, as argued in section 3, the bound on $E\|\epsilon\|$ can be applied as is to the momentum system with a suitable replacement of contraction rates. As in Theorem 20, noting that the condition number of $\Theta$ is $\sqrt{\Psi/\psi}$, an application of Corollary 6 gives the result.

Equation 5.10 is similar to the results for EASGD and QSGD. The bound is closer in spirit to the bound for QSGD without momentum, in that the two terms do not have poor dependencies on $p$ as they do for EASGD. However, the statement of the theorem is complicated by the expressions for the contraction rates $\kappa$ and $\xi$, the expressions for the minimum and maximum eigenvalues $\psi$ and $\Psi$ of the metric, and the expression for $a$ in the metric transformation. Together, these four quantities create a more complex dependence of the bound on hyperparameters such as $\mu$ and $k$. Nevertheless, the spirit is the same as Theorem 20: the first term originates from the $\epsilon$ disturbance and can be eliminated with synchronization, while the second term originates from the additive noise and can be eliminated by including additional agents.

### 5.4 Extensions to Other Distributed Structures

Similar results can be derived for many other possible distributed structures in an identical manner. We present one general formalism here, involving local state- and time-dependent couplings.

Individually state-dependent couplings of the form 5.11, or the quorum-mediated equivalent, equation 5.13, allow for individual gain schedules that depend on local cost values or other local performance measures. This allows each agent to broadcast its current measure of success and shape the quorum variable accordingly. For example, the classification accuracy on a validation set for each $x_i$ could be used to select the current best parameter vectors and increase the corresponding $k_i$ values to pull other agents toward them.
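One hypothetical gain schedule of this kind is sketched below. The softmax weighting, the temperature parameter, and the normalization so that the gains sum to $p\,k_{\text{base}}$ are all our illustrative choices, not prescribed by equation 5.13.

```python
import numpy as np

def performance_gains(val_losses, k_base=0.04, temp=1.0):
    """Hypothetical state-dependent gains k_i for equation 5.13: agents
    with lower validation loss broadcast larger gains, pulling the other
    agents toward them. Gains are normalized to sum to p * k_base."""
    scores = -np.asarray(val_losses) / temp        # lower loss -> higher score
    w = np.exp(scores - scores.max())              # numerically stable softmax
    return k_base * len(val_losses) * w / w.sum()

k = performance_gains([0.9, 0.4, 1.2, 0.7])
# The best agent (lowest validation loss, index 1) receives the largest gain.
```

Decreasing `temp` sharpens the selection toward the single best agent; increasing it recovers uniform coupling.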

### 5.5 Specialization to a Multivariate Quadratic Objective

In the original discrete-time analysis of EASGD in Zhang et al. (2015), it was proven that iterate averaging (Polyak & Juditsky, 1992) of $x\u02dc$ leads to an optimal variance around the minimum of a quadratic objective. We now derive an identical result in continuous-time for the QSGD algorithm, demonstrating that this optimality is independent of the additional dynamics in the EASGD algorithm.

The assumption of state-independence can be justified in several ways. Theoretical analyses have demonstrated that the specific form of positive semidefinite $B$ does not affect the $O(\eta)$ weak accuracy of the approximating stochastic differential equation 2.2 for SGD (Feng et al., 2018; Hu et al., 2017; Li et al., 2018), though it does affect the constant.^{9} For relevance to general non-convex optimization, we can assume that all agents have arrived sufficiently close to a minimum of the loss function that it can be approximately represented as a quadratic and that the noise covariance is approximately constant (Mandt et al., 2016, 2017). For deep networks, the noise covariance has been empirically shown to align with the Hessian of the loss (Sagun, Evci, Guney, Dauphin, & Bottou, 2017; Zhu et al., 2018), with theoretical justification for when this is valid provided in appendix A of Jastrzȩbski et al. (2017). For all agents in an approximately quadratic basin of a local minimum of a deep network, $B$ can then be taken to be constant such that $BB^T=A$, where $A$ is the approximately state-independent Hessian.

As in the discrete-time EASGD analysis, equation 5.15 is optimal in the sense of achieving the Fisher information lower bound and is independent of the coupling strength $k$ (Polyak & Juditsky, 1992; Zhang et al., 2015). The lack of dependence on the coupling $k$ is less surprising in this case, as it is not present in the $x^\bullet$ dynamics. The optimality of this result, together with the comparison of Theorems 20 and 22, suggests that the extra $\tilde{x}$ dynamics may not provide any benefit over coupling simply through the spatial average variable $x^\bullet$ from the perspective of convex optimization. However, in section 6, we will show through numerical experiments on deep networks that EASGD tends to find networks that generalize better than QSGD. The benefits of EASGD must then go beyond basic optimization, and the extra dynamics may have a regularizing effect.

^{10}If we precondition the stochastic gradients for each agent by the same constant invertible matrix $Q$, then the stationary variance remains optimal. To see this, note that we can account for this preconditioning simply by modifying the derivation so that $A\to QA$ and $B\to QB$; the same computation then shows that the optimality of the stationary variance is unchanged.

## 6 Deep Network Simulations

We now turn to evaluate EASGD, QSGD, and one possible state-dependent variant of QSGD, equation 5.13, as learning algorithms for training deep neural networks on the CIFAR-10 data set. A significant goal of the section is to understand the role of synchronization and noise in training deep neural networks. We also seek to test the extensions proposed throughout this article, such as multiple learning rates, synchronization bounds allowing for independent initial conditions of the agents, and state-dependent coupling.

We obtain two primary results. The first is that less synchronization, when it still leads to reliable convergence of the quorum variable, results in the best generalization capabilities of the learned network. This is similar to the results of the model experiments performed in section 4.3, though those experiments revealed this to be true for general optimization rather than generalization. The observation of better generalization with reduced synchronization is in line with the comments of section 3.3 regarding noise and generalization in deep networks.

Our second primary result is the observation of an interesting regularizing property of EASGD, even in the single-agent case. Unlike QSGD with a single agent, EASGD does not reduce to standard SGD. We find that EASGD without momentum outperforms SGD with momentum and EASGD with momentum in the nondistributed setting.

### 6.1 Experimental Setup

We use a three-layer convolutional neural network based on the experiments in Zhang et al. (2015); each layer consists of a two-dimensional convolution, a ReLU nonlinearity, $2\times2$ max-pooling with a stride of two, and BatchNorm (Ioffe & Szegedy, 2015) with batch statistics in both training and evaluation. The first convolutional layer has kernel size nine, the second has kernel size five, and the third has kernel size three. All convolutions use a stride of one and zero padding. Following the three convolutional layers, there is a single fully connected layer to which we apply dropout with a probability of 0.5. The input data are normalized to have mean zero and standard deviation one in each channel in both the training and test sets. Because we are interested in qualitative trends rather than state-of-the-art performance, we do not employ any data augmentation strategies. We use an 80/20 training/validation set split, and we use the cross-entropy loss. The stochastic gradient is computed using minibatches of size 128. The learning rate is set to $\eta=0.05$ initially unless otherwise specified. This value was chosen as the highest initial value of $\eta$ that remained stable throughout training for most values of $p$, and the qualitative trends presented here were robust to the choice of learning rate (further simulations demonstrating this robustness are available in the supplemental information). We decrease the learning rate three times when the validation loss stalls:^{11} first by a factor of five, then a factor of two the second and third times. This is done on an agent basis: the agents are allowed to maintain different learning rates. Because we are focused on the behavior of the algorithms rather than efficiency from the standpoint of a parallel implementation, the agents communicate with the quorum variable after each update.


In each of the following simulations, the fully connected weights and biases are initialized randomly and uniformly, $W_{ij}, b_i \sim \mathcal{U}(-1/\sqrt{m}, 1/\sqrt{m})$, where $m$ is the number of inputs. The convolutional weights use Kaiming initialization (He, Zhang, Ren, & Sun, 2015). In each comparison, the methods are initialized from the same points in parameter space, but the agents are not required to be initialized at the same location. In QSGD and SD-QSGD, the quorum variable is exponentially weighted, $\bar{x}_{t+1}=\gamma x_t^\bullet+(1-\gamma)\bar{x}_t$ with $\gamma=.1$, and we test the convergence of $\bar{x}$. Note that because this variable is not coupled to the dynamics of the individual agents, this is still distinct from EASGD. Because we use momentum in nearly all experiments, we will refer simply to QSGD and EASGD. The non-momentum variant of EASGD, when used, will be referred to as EASGD-WM (EASGD without momentum).
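The exponentially weighted quorum update is, in code, a one-line exponential moving average (a direct transcription of the update above; the constant input in the usage example is only to illustrate the fixed point):

```python
def update_quorum(x_bar, x_bullet, gamma=0.1):
    """Exponentially weighted quorum variable used for QSGD/SD-QSGD
    evaluation: x_bar <- gamma * x_bullet + (1 - gamma) * x_bar.
    Unlike EASGD, x_bar does not feed back into the agent dynamics."""
    return gamma * x_bullet + (1.0 - gamma) * x_bar

x_bar = 0.0
for _ in range(100):
    x_bar = update_quorum(x_bar, 1.0)   # feed a constant spatial mean
# x_bar relaxes toward the constant input with time constant ~ 1/gamma
```

With $\gamma = .1$, the quorum variable averages the spatial mean over a window of roughly the last ten updates.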

### 6.2 Experimental Results

We first analyze the effect of $k$ on classification performance. We find that the best performance is obtained for the lowest possible fixed values of $k$ that still lead to convergence of the quorum variable. This is demonstrated in Figure 7 for the EASGD algorithm with $\eta =0.05$ initially and $p=8$, where we observe the general trend that test accuracy improves as the coupling gain is decreased. Note that $k=0.01$ and $k=0.02$, as well as $k=0$ (not shown), have too little synchronization for the quorum variable to reflect a meaningful average, and hence do not lead to good performance. Similar results hold for QSGD (not shown). We found not only the best performance for low, fixed $k$ but also the best scaling with the number of agents.^{13}

There are several plausible explanations for the observation of improved generalization with reduced coupling. Lower values of $k$ allow for greater exploration of the optimization landscape, which intuitively should lead to better performance. As the measure of synchronization in Figure 7d tends to zero, the $\epsilon $ term in the $x\u2022$ dynamics will also tend to zero, and synchronization will begin to reduce the amount of noise felt by the individual agents. In neural networks, it is expected that this noise reduction will favor convergence to minima that do not generalize as well as those obtained with higher amounts of noise, as seen in Figure 7c.

Results for a comparison of QSGD and SD-QSGD are shown in Figure 8 for $p=1,4,8,16,32$, and 64 with $k=0.04$. QSGD is shown in solid lines, while SD-QSGD is shown in dashed; color indicates the number of agents (see the key in Figure 8a). Note that $p=1$ simply corresponds to SGD for both SD-QSGD and QSGD, as the coupling term vanishes for a single agent. In both cases, we see significant improvement in accuracy as the number of agents increases, most likely due to an improved ability of the agents to explore the landscape, along with a decrease in synchronization. The test loss and test error curves display interesting differences between the two algorithms; for $p=8$ and $p=16$, the state-dependent formalism obtains mildly improved generalization relative to QSGD, as expected by the bias toward minima with lower validation loss. QSGD performs better for $p=32$ and $p=64$; SD-QSGD does not converge for $p=64$.

We display a comparison of QSGD and EASGD in Figure 9, again with $k=0.04$. QSGD tends to decrease the training loss further and more rapidly than EASGD; this is in line with earlier comments that, from an optimization perspective, the extra dynamics of the quorum variable offer no clear theoretical benefit. However, consistently across all experiments except for $p=16$, where it does not converge, EASGD generalizes better: the test loss is driven lower, and the test accuracy is higher than for QSGD. A particularly interesting result is the single-agent case, where EASGD actually performs better than SGD with momentum.^{14} These observations suggest that the extra dynamics of the quorum variable may impose a form of implicit regularization that, to our knowledge, has not been observed before.

Motivated by this observation, we now compare the $p=1$ EASGD algorithm with momentum, without momentum, and basic SGD with momentum in Figure 10 across a range of initial learning rates. Each algorithm is initialized from the same location, and each curve represents an average over three runs to reduce stochastic variability. The momentum algorithms use $\delta=0.9$, and the two EASGD variants use $k=0.054$. In general, EASGD with and without momentum (dashed and solid lines, respectively) both achieve higher test accuracy than SGD with momentum (dotted lines). Surprisingly, EASGD without momentum often performs better than EASGD with momentum.

To show that this trend is not an artifact of an incorrectly chosen momentum parameter, we have compiled additional data in Table 1 over a range of momentum parameters and learning rates. Each data point reported is again the result of an average over three independent runs, and each algorithm is initialized from the same location in each run. For simplicity, we report the test loss and test error rather than the results on the training data. For all but one choice of $\eta$ and $\delta$, EASGD-WM outperforms both EASGD and MSGD in classification accuracy, demonstrating that the trend is robust to the choice of learning rate and momentum value.

**Table 1: Minimum Test Loss**

| $\eta$ | Algorithm | $\delta=.1$ | $\delta=.25$ | $\delta=.5$ | $\delta=.75$ | $\delta=.9$ | $\delta=.99$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $.005$ | EASGD-WM | **4.25** | **4.26** | **4.28** | 4.29 | 3.22 | 3.24 |
|  | EASGD | 4.72 | 4.83 | 4.56 | **4.28** | **3.17** | **3.11** |
|  | MSGD | 4.75 | 4.87 | 4.64 | 4.33 | 3.21 | 3.29 |
| $.01$ | EASGD-WM | **4.09** | **5.15** | **4.03** | **4.12** | 3.15 | **3.10** |
|  | EASGD | 4.57 | 5.75 | 4.27 | 4.14 | **3.04** | 3.22 |
|  | MSGD | 4.59 | 5.81 | 4.48 | 4.46 | 3.22 | 3.33 |
| $.05$ | EASGD-WM | **3.95** | **3.96** | **3.86** | **3.97** | **3.07** | **3.00** |
|  | EASGD | 4.41 | 4.27 | 4.05 | 4.06 | 3.19 | 4.04 |
|  | MSGD | 4.46 | 4.57 | 4.48 | 4.43 | 3.21 | 6.91 |
| $.1$ | EASGD-WM | **4.08** | **4.01** | **4.04** | **4.05** | **3.11** | **3.15** |
|  | EASGD | 4.24 | 4.23 | 4.14 | 4.13 | 3.17 | 6.91 |
|  | MSGD | 4.62 | 4.55 | 4.22 | 4.47 | 3.38 | 6.91 |

**Table 1 (continued): Minimum Test Error**

| $\eta$ | Algorithm | $\delta=.1$ | $\delta=.25$ | $\delta=.5$ | $\delta=.75$ | $\delta=.9$ | $\delta=.99$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $.005$ | EASGD-WM | **.267** | **.266** | **.264** | **.269** | **.268** | **.270** |
|  | EASGD | .304 | .313 | .301 | .282 | .280 | .277 |
|  | MSGD | .310 | .323 | .306 | .286 | .292 | .295 |
| $.01$ | EASGD-WM | **.262** | **.259** | **.253** | **.261** | .267 | **.257** |
|  | EASGD | .297 | .300 | .280 | .275 | **.263** | .283 |
|  | MSGD | .300 | .307 | .294 | .294 | .287 | .301 |
| $.05$ | EASGD-WM | **.252** | **.258** | **.250** | **.255** | **.262** | **.253** |
|  | EASGD | .286 | .283 | .265 | .267 | .276 | .417 |
|  | MSGD | .294 | .307 | .295 | .290 | .292 | .9 |
| $.1$ | EASGD-WM | **.267** | **.264** | **.268** | **.265** | **.268** | **.269** |
|  | EASGD | .282 | .283 | .277 | .272 | .280 | .9 |
|  | MSGD | .288 | .307 | .287 | .288 | .301 | .9 |


Notes: Each experiment was run three times, and the minimum was taken over the average trajectory. In each run, the algorithms were initialized from the same starting location. Surprisingly, EASGD-WM consistently achieves the lowest test error (all but one setting) and the lowest test loss (all but four settings) in comparison to EASGD and MSGD. For high learning rates and high $\delta$, MSGD and EASGD eventually run into convergence issues, while EASGD-WM does not (an error of .9 and a test loss of 6.91 indicate convergence issues). Bold indicates the top performance of the three algorithms for each choice of $\eta$ and $\delta$.

Much like SGD with momentum, single-agent EASGD-WM is a second-order system in time. It also maintains a similar computational complexity and requires storing only one extra set of parameters for the quorum variable.
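To make the structural analogy concrete, here is a minimal sketch of single-agent EASGD-WM on a scalar quadratic; the Euler discretization and constants are our illustrative assumptions.

```python
def easgd_wm_single(eta=0.1, k=0.5, steps=800, x0=1.0):
    """Single-agent EASGD without momentum on f(x) = x^2 / 2.
    The pair (x, x_tilde) makes the method second-order in time,
    analogous to the (position, velocity) pair of momentum SGD,
    at the cost of storing one extra copy of the parameters."""
    x, x_tilde = x0, x0
    for _ in range(steps):
        grad = x                                  # grad of x^2 / 2
        x, x_tilde = (x - eta * grad - eta * k * (x - x_tilde),
                      x_tilde + eta * k * (x - x_tilde))
    return x, x_tilde

# Both x and the auxiliary variable x_tilde converge to the minimizer x* = 0.
```

The per-step cost is a single gradient evaluation plus two linear combinations, matching the overhead of momentum SGD.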

Returning to the distributed case, Figure 9d shows that EASGD and QSGD respond differently to the choice of $k$.^{15} EASGD is less synchronized than QSGD in all cases. Hence, in the context of Figure 7, a possible explanation for the improved performance of EASGD when compared to QSGD is simply the observation that it tends to remain less synchronized.

To test this explanation, we use a scaling factor $k_{\mathrm{EASGD}}=r\times k_{\mathrm{QSGD}}$ to roughly match the levels of synchronization between EASGD and QSGD. Results for $r=1.35$ are shown in Figure 11, where the synchronization curves are either approximately equal or EASGD remains more synchronized across all values of $p$. Additional values of $p=32$ and $p=64$ are shown, and EASGD now converges for all attempted values of $p<64$. QSGD continues to perform worse than EASGD on the test data due to an increased tendency to overfit. As the number of agents is increased, QSGD improves up to $p=32$; $p=64$ obtains roughly the same test performance. EASGD improves up to around $p=16$ and does not converge for $p=64$ (see Figure 11a; the curves in Figures 11b and 11d are covered by the insets, but EASGD obtains roughly 55% test accuracy). In general, EASGD with $p$ agents obtains roughly the same performance as QSGD with $2p$ agents. Interestingly, Figure 11d shows that the high-$p$ stability issues for EASGD are not simply due to a lack of synchronization, as EASGD actually remains more synchronized than QSGD for $p=64$ for much of the training time. We offer a simple possible explanation for these stability issues in the supplemental information by analyzing discrete-time optimization of a one-dimensional quadratic objective. Another explanation is afforded by Theorems 20 and 22, which reveal poor scaling with $p$ of both terms in the bound for EASGD when compared to QSGD. Together, these observations highlight stability issues in both continuous and discrete time.
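Along the lines of the supplemental analysis mentioned above, the discrete-time stability question for a one-dimensional quadratic can be sketched directly: the single-agent EASGD iteration is linear, so stability reduces to the spectral radius of its update matrix. The specific discretization and constants below are our illustrative assumptions, not the supplemental analysis itself.

```python
import numpy as np

def spectral_radius(eta, k):
    """Update matrix of single-agent EASGD on f(x) = x^2 / 2:
        x       <- x - eta*x - eta*k*(x - x_tilde)
        x_tilde <- x_tilde + eta*k*(x - x_tilde)
    The iteration is stable iff the spectral radius is below one."""
    A = np.array([[1.0 - eta - eta * k, eta * k],
                  [eta * k, 1.0 - eta * k]])
    return np.max(np.abs(np.linalg.eigvals(A)))

# For eta = 0.1, moderate coupling (k = 1) is stable, while very large
# coupling (k = 25) destabilizes the iteration even though the uncoupled
# gradient factor 1 - eta is well inside the unit interval.
```

This matches the empirical observation that overly large $k$ can destabilize EASGD independent of the synchronization level.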

As discussed in the text and the description of the experimental setup, our theory allows the agents to be initialized in different locations and to use distinct learning rates through individual learning rate schedules. In the original work on EASGD, it was postulated that starting the agents at different locations would break symmetry and lead to instability (Zhang et al., 2015). Similarly, a single learning rate was used for all agents. The simulations above demonstrate that starting from distinct locations and decreasing the learning rate on an individual basis is not problematic. Indeed, we show in Figure 12 that starting all agents from a single location leads to decreased performance: initializing the agents from multiple locations is critical for optimal improvement as the number of agents is increased.

## 7 Conclusion

In this article, we presented a continuous-time analysis of distributed stochastic gradient algorithms within the framework of stochastic nonlinear contraction theory. Through analogy with quorum-sensing mechanisms, we analyzed the effect of synchronization of the individual SGD agents on the noise generated by their stochastic gradient approximations. We demonstrated that synchronization can effectively reduce the noise felt by each of the individual agents and by their spatial mean. We further demonstrated that synchronization can be seen to reduce the amount of smoothing imposed by SGD on the loss function. Through simulations on model non-convex optimization problems, we provided insight into how the distributed and coupled setting affects convergence to minima of the smoothed loss and the true loss. We introduced a new distributed algorithm, QSGD, and proved convergence results for a strongly convex objective for QSGD, QSGD with momentum, and EASGD. We further introduced a state-dependent variant of QSGD and constructed one specific example of the algorithm to show how the formalism can be used to bias exploration. We presented experiments on deep neural networks and compared the properties of QSGD, SD-QSGD, and EASGD for generalization performance. We noted an interesting regularizing property of EASGD even in the single-agent case and compared it to basic SGD with momentum, showing that it can lead to improved generalization. Research into similar higher-order in time optimization algorithms formed as coupled dynamical systems is an interesting direction of future work.

## Appendix: Interaction between Synchronization and Noise: Extra Quorum Dynamics

## Notes

^{1}

For the remainder of this article, unless otherwise specified, we will use $\|\cdot\|$ to denote the 2-norm.

^{2}

For example, say $f(x,t) = -P(x)x$ with $P(x)$ a symmetric and uniformly positive-definite matrix. Then $g(y,x,t) = -P(x)y$ satisfies this restriction requirement. The $y$ system is also contracting in $y$, as the symmetric part of the Jacobian $J_s = -P(x) < 0$ uniformly. The $f(x,t)$ system has Jacobian $\frac{\partial f_i}{\partial x_j} = -P_{ij}(x) - \sum_k \frac{\partial P_{ik}(x)}{\partial x_j} x_k$, which has a symmetric part of unknown definiteness without further assumptions on $P$.

^{3}

Indeed, the covariance $\frac{\eta b}{p^2}\sum_i \Sigma(x^i) \preceq \frac{\eta b}{p}\bar{\Sigma}$, where $\bar{\Sigma} = \max_i \Sigma(x^i)$ and the $\max$ and $\preceq$ are with respect to the positive semidefinite order. The covariance $\frac{\eta b}{p}\bar{\Sigma}$ tends to zero as $p \to \infty$, so that gaussian random variables drawn from a distribution with this covariance will become increasingly concentrated around zero with increasing $p$. Because the true covariance $\frac{\eta b}{p^2}TT^{\mathsf{T}}$ is less positive semidefinite, random variables drawn from the true distribution will also become concentrated around zero as $p \to \infty$.
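As a quick numerical illustration of this concentration (a sketch of our own, not taken from the article; the unit-variance isotropic noise model is an assumption), the spatial mean of $p$ independent zero-mean noise terms has variance that shrinks like $1/p$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 200_000  # Monte Carlo samples of the spatial mean

for p in (1, 10, 100):
    # spatial mean of p independent zero-mean, unit-variance noise terms
    noise_mean = rng.normal(size=(n_draws, p)).mean(axis=1)
    # empirical variance of the mean is close to 1/p
    print(p, noise_mean.var())
```

The printed variances decay as $1/p$, mirroring how the covariance of the noise felt by the spatial mean of the agents shrinks as the number of agents grows.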

^{4}

Kleinberg et al. (2018) group the factor of $1/b$ with the covariance of the noise.

^{5}

We choose a relatively high value of $\eta $ so that the convolved loss will be qualitatively different from the true loss to a degree that is visible by eye. This enables us to distinguish convergence to true minima from convergence to minima of the convolved loss. An alternative and equivalent choice would be to choose $\eta $ smaller, with a correspondingly wider distribution of the noise.

^{6}

Note that without coupling, each agent performs basic SGD. Hence, the results in Figure 1a are equivalent to $p \times n$ single-agent SGD simulations, where $n$ is the total number of simulations and $p$ is the number of agents per simulation.

^{7}

In section 3, the denominator contained the factor $k - \bar{\lambda}$ rather than $k + \lambda$. Strong convexity of $f$ was not assumed, so that the contraction rate of the coupled system was $k - \bar{\lambda}$. In this proof, strong convexity of $f$ implies that the contraction rate of the coupled system is $k + \lambda$.

^{8}

The factor of $\lambda + k$ in the first term remains, as this factor originates in the derivation of the bound on $\mathbb{E}\|\epsilon\|$, where the synchronization rate is $\lambda + k$.

^{9}

The state-dependent version used earlier in this work has been empirically shown to have a lower constant (Li et al., 2018) and is closer to the $O(\eta^2)$ approximating SDE, which is why it has been used up to this point.

^{10}

A similar continuous-time analysis for the averaging scheme considered here was performed in Mandt et al. (2017) for the non-distributed case; the derivation here is simpler and provides asymptotic results.

^{11}

More precisely, we keep track of the validation loss for each agent at a reference point, beginning with the validation loss at the first epoch. If the validation loss at the next epoch changes by more than $1\%$ of the reference value, the reference loss is set to the newly computed validation loss. If the validation loss changes by less than $1\%$, the reference point is unchanged. When the reference point has been unchanged for five epochs, we decrease the learning rate.

^{12}

Another option would be to set $k_j = (k - k_j^*)/(p-1)$ when this quantity is positive and zero otherwise. This ensures, outside of the initial spiking period, that the total sum of the $k_j$ is constant. We found similar empirical results with both choices.

^{13}

The improvement in test accuracy and in the minimization of the test loss with increasing number of agents is demonstrated in later plots. We found that this trend was maximized with lower values of $k$.

^{14}

Note that unlike QSGD with a single agent, EASGD with a single agent is a different algorithm from basic SGD. It can be seen as SGD coupled in feedback to a low-pass filter of its output.
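This feedback interpretation can be made concrete with a small sketch (our own notation; the gradient, gains, and noise level are placeholder assumptions): the auxiliary variable $\tilde{x}$ is a first-order low-pass filter of the iterate $x$, and $x$ is in turn pulled toward $\tilde{x}$.

```python
import numpy as np

def easgd_single_agent(grad, x0, lr=0.05, k=1.0, n_steps=500, rng=None):
    """Single-agent EASGD: SGD on f coupled in feedback to a low-pass
    filter x_tilde of its own trajectory. Sketch only; gains are assumptions."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    x_tilde = np.array(x0, dtype=float)
    for _ in range(n_steps):
        g = grad(x) + 0.1 * rng.normal(size=x.shape)  # stochastic gradient
        delta = x - x_tilde
        x = x - lr * g - lr * k * delta               # SGD step plus pull toward filter
        x_tilde = x_tilde + lr * k * delta            # low-pass filter tracks x
    return x, x_tilde

# quadratic test objective f(x) = ||x||^2 / 2, so grad(x) = x
x, x_tilde = easgd_single_agent(lambda x: x, np.ones(2))
```

Even with $p = 1$, the $\tilde{x}$ dynamics make this a second-order in time system, distinct from basic SGD, which is the source of the regularizing behavior noted in the experiments.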

^{15}

Figure 9d shows the distance from $\tilde{x}$ for EASGD. The distance from $x_\bullet$ for EASGD is nearly identical.

## Acknowledgments

N.B. was supported by a Department of Energy Computational Science Graduate Fellowship. We gratefully thank the reviewers for helpful feedback and for suggestions that improved the work.

## References

Bach, F., & Moulines, E. (2013). Non-strongly-convex smooth stochastic approximation with convergence rate $o(1/n)$. In *Advances in neural information processing systems, 26*.

Nesterov, Y. (1983). A method of solving a convex programming problem with convergence rate $O(1/k^2)$. *Soviet Mathematics Doklady, 27*(2), 372–376.

Wilson, A. C., Recht, B., & Jordan, M. I. (2021). A Lyapunov analysis of momentum methods in optimization. *Journal of Machine Learning Research, 22*(113), 1–34.