## Abstract

Although the number of artificial neural network and machine learning architectures is growing at an exponential pace, more attention needs to be paid to theoretical guarantees of asymptotic convergence for novel, nonlinear, high-dimensional adaptive learning algorithms. When properly understood, such guarantees can guide the algorithm development and evaluation process and provide theoretical validation for a particular algorithm design. For many decades, the machine learning community has widely recognized the importance of stochastic approximation theory as a powerful tool for identifying explicit convergence conditions for adaptive learning machines. However, the verification of such conditions is challenging for multidisciplinary researchers not working in the area of stochastic approximation theory. For this reason, this letter presents a new stochastic approximation theorem for both passive and reactive learning environments with assumptions that are easily verifiable. The theorem is widely applicable to the analysis and design of important machine learning algorithms including deep learning algorithms with multiple strict local minimizers, Monte Carlo expectation-maximization algorithms, contrastive divergence learning in Markov fields, and policy gradient reinforcement learning.

## 1 Overview

Although the number of artificial neural network and machine learning architectures is growing at an exponential pace, more attention needs to be paid to theoretical guarantees of asymptotic convergence for novel, nonlinear, high-dimensional adaptive learning algorithms. When properly understood, such guarantees can guide the algorithm development and evaluation process and, in addition, provide theoretical validation for a particular algorithm design. For many decades, the machine learning community has widely recognized the importance of stochastic approximation theory as a powerful tool for identifying explicit convergence conditions for adaptive learning machines. However, the verification of such conditions is challenging for multidisciplinary researchers not working in the area of stochastic approximation theory. For this reason, the goal of this letter is to present a new stochastic approximation theorem with easily verifiable assumptions for characterizing the asymptotic behavior of a wide range of important machine learning algorithms.

The new stochastic approximation theorem presented here is applicable to the analysis of the asymptotic behavior of a wide range of learning algorithms including (1) deep learning algorithms (Bottou, 1991, 1998, 2004; Bengio, Courville, & Vincent, 2013; Sutskever, Martens, Dahl, & Hinton, 2013; Zhang, Choromanska, & LeCun, 2015), (2) variable metric (Jani, Dowling, Golden, & Wang, 2000; Paik, Golden, Torlak, & Dowling, 2006; Roux, Manzagol, & Bengio, 2008; Schraudolph, Yu, & Günter, 2007; Sunehag, Trumpf, Vishwanathan, & Schraudolph, 2009) and momentum-type stochastic approximation schemes (Pearlmutter, 1992; Roux, Schmidt, & Bach, 2012; Sutskever et al., 2013; Zhang et al., 2015), (3) reinforcement learning and adaptive control (Jaakkola, Jordan, & Singh, 1994; Baird & Moore, 1999; Williams, 1992; Sugiyama, 2015; Sutton & Barto, 1998; Balcan & Feldman, 2013; Mohri, Rostamizadeh, & Talwalkar, 2012), (4) expectation-maximization algorithms for latent variable and missing data problems (Carbonetto, King, & Hamze, 2009; Gu & Kong, 1998), and (5) contrastive divergence learning in Markov random fields (Yuille, 2005; Hinton, Osindero, & Teh, 2006; Tieleman, 2008; Swersky, Chen, Marlin, & de Freitas, 2010; Salakhutdinov & Hinton, 2012). A critical feature of the theorem is that its statement and proof are specifically designed to provide relatively easily verifiable assumptions and interpretable conclusions that can be understood and applied by researchers outside the field of stochastic approximation theory.

Stochastic approximation theorems have played a vital role in characterizing our understanding of adaptive learning algorithms from the very beginning of work in machine learning (e.g., Amari, 1967; Duda & Hart, 1973). White (1989a, 1989b), Benveniste, Metivier, and Priouret (1990), Bottou (1991), Bertsekas and Tsitsiklis (1996), Golden (1996), Borkar (2008), Swersky et al. (2010), and Mohri et al. (2012) provide useful discussions of the application of stochastic approximation methods to machine learning problems. Kushner (2010), a seminal contributor to the development of stochastic approximation theory, provides an excellent review of the theoretical stochastic approximation literature from its origins in the 1950s.

The generic form of a stochastic approximation algorithm is defined as follows. Consider a learning machine whose parameter values at iteration $t$ of the learning algorithm are interpretable as the realization of a $q$-dimensional random vector $\tilde{\theta}(t)$. The learning machine is provided an initial guess for the parameter estimates at iteration $t = 0$, which is denoted as $\tilde{\theta}(0)$. Then the learning machine observes a realization $x(t)$ of a random vector $\tilde{x}(t)$, called the *training stimulus*, which is then used to update the parameters of the learning machine according to

$$\tilde{\theta}(t+1) = \tilde{\theta}(t) + \gamma_t \tilde{d}_t, \qquad (1.1)$$

where $\gamma_t$ is a positive step size and $\tilde{d}_t$ is a search direction random vector.

In the initial stages of learning, the *search time period*, the step size $\gamma_t$ is typically chosen to be either constant or to increase in value. During this phase of the learning process, the adaptive learning machine's dynamics in equation 1.1 have the opportunity to sample the statistical environment. Ideally, this time period should be sufficiently long that the learning machine can observe the different types of training stimuli in its environment for the purpose of extracting critical statistical regularities. For example, if there are $M$ distinct training stimuli that occur with approximately equal probability in the environment, then choosing the time period for learning to be $10M$ would ensure that each training stimulus is observed by the learning machine approximately 10 times during the initial search phase. After the initial search phase, the step size $\gamma_t$ is decreased at an appropriate rate to ensure convergence. This latter phase is called the *converge time period*.
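The two-phase learning process described above can be sketched in code. The following is a minimal illustration only, not the letter's algorithm: a hypothetical linear model trained on a synthetic stimulus stream, with a constant step size during the search phase and a $1/t$ decay during the converge phase (the model, the stimulus distribution, and all constants are illustrative assumptions):

```python
import random

random.seed(0)

def lms_learn(num_iters=20000, q=2, search_period=500, gamma0=0.05):
    """Hypothetical adaptive learner: a linear model trained by stochastic
    gradient descent on a stream of (x, y) training stimuli."""
    true_theta = [2.0, -1.0]                   # assumed target parameters
    theta = [0.0] * q                          # initial guess theta(0)
    for t in range(1, num_iters + 1):
        x = [random.uniform(-1, 1) for _ in range(q)]    # training stimulus
        y = sum(w * xi for w, xi in zip(true_theta, x))  # noiseless response
        # search phase: constant step size; converge phase: ~1/t decay
        gamma_t = gamma0 if t <= search_period else gamma0 * search_period / t
        err = y - sum(w * xi for w, xi in zip(theta, x))
        # stochastic gradient descent direction d_t = err * x
        theta = [w + gamma_t * err * xi for w, xi in zip(theta, x)]
    return theta

print(lms_learn())   # close to the assumed target [2.0, -1.0]
```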

Different choices of the search direction vector $\tilde{d}_t$ in equation 1.1 realize different popular stochastic descent algorithms such as stochastic gradient descent (Bottou, 1991, 1998), normalized stochastic gradient descent (Hazan, Levy, & Shalev-Shwartz, 2015), modified Newton (Jani et al., 2000; Paik et al., 2006; Roux et al., 2008; Schraudolph et al., 2007; Sunehag et al., 2009), and momentum-type stochastic gradient descent methods (Pearlmutter, 1992; Roux et al., 2012; Sutskever et al., 2013; Zhang et al., 2015). A standard assumption is that the dot product of the expected value of the search direction $\tilde{d}_t$ with the gradient of the objective function is less than or equal to zero.

Assume the stochastic sequence of $d$-dimensional random vectors $\tilde{x}(1), \tilde{x}(2), \ldots$ modeling the training stimuli are independent and identically distributed with common data generating process (DGP) probability density $p_e: \mathbb{R}^d \to [0, \infty)$. In other words, each time the learning machine updates its parameters, the likelihood of observing a particular training stimulus $x(t)$ at iteration $t$ is given by $p_e$. The goal of an adaptive learning machine is to estimate (learn) the global minimizer, $\theta^* \in \mathbb{R}^q$, of a smooth risk function $\ell: \mathbb{R}^q \to \mathbb{R}$, which specifies the learning machine's optimal behavior. In addition, let a smooth function $c$ be defined such that $c(x, \theta)$ is the penalty, or “loss,” incurred by the learning machine for choosing parameter value $\theta$ for training stimulus $x$, where $x \in \mathbb{R}^d$.

Several prior publications in the machine learning literature (White, 1989a, 1989b; Bottou, 1991, 1998; Golden, 1996; Mohri et al., 2012; Toulis, Rennie, & Airoldi, 2014) have provided explicit convergence theorems by considering parameter update equations of the form of equation 1.1 and assuming that the risk function has the form of equation 1.4. That is, at each parameter update, the training stimulus is sampled from the statistical environment using the probability density $p_e$. This assumption, unfortunately, is not directly relevant to many important problems in the areas of (1) contrastive divergence learning (Yuille, 2005; Younes, 1999; Hinton et al., 2006; Tieleman, 2008; Swersky et al., 2010; Salakhutdinov & Hinton, 2012); (2) learning in the presence of missing data or latent variables (Gu & Kong, 1998; Carbonetto et al., 2009; Vlassis & Toussaint, 2009); and (3) active learning and adaptive control (Jaakkola et al., 1994; Baird & Moore, 1999; Williams, 1992; Sugiyama, 2015; Sutton & Barto, 1998; Balcan & Feldman, 2013; Vlassis & Toussaint, 2009). Such problems typically require that the training stimulus be sampled from a statistical environment specified by the current parameter estimates, so that rather than sampling from the density $p_e$, one samples from the density $p_e(\cdot | \theta)$, where $\theta$ is the current knowledge state of the learning machine. These latter problems can be viewed as learning within a reactive learning environment.

In the machine learning literature, most of the focus has been on investigating the rate of convergence of stochastic approximation algorithms (Roux et al., 2012; Mohri et al., 2012). Analyses in the machine learning literature (Yuille, 2005; Sunehag et al., 2009; Mohri et al., 2012) include theorems for handling reactive learning environments but do not explain in detail how such theorems handle the case where the data generating process density $p_e$ is functionally dependent on $\theta$ and do not explicitly characterize the asymptotic behavior of the state sequence $\{\tilde{\theta}(t)\}$. In addition, such analyses often lack a discussion regarding how a stochastic approximation convergence theorem can be applied to situations where the objective function has multiple minimizers, maximizers, and saddle points. However, Blum (1954), Benveniste et al. (1990), Gu and Kong (1998), Kushner (1981), Younes (1999), and Delyon, Lavielle, and Moulines (1999) have provided explicit assumptions and proofs of convergence theorems for stochastic reactive learning environments, but the theorems and their assumptions may be difficult to apply in practice for readers without a background in stochastic approximation theory.

Clarity of understanding is important to ensure that such theorems can be properly and confidently applied in practice since the algorithms they describe are widely used in the field of machine learning. An important contribution of this letter is providing a relatively simple set of assumptions and a straightforward detailed discussion intended to support the mathematical analysis of a wide range of adaptive learning algorithms. Furthermore, it is hoped that as a result of the analyses presented here, the importance of prior contributions to the stochastic approximation theorem literature will be better appreciated and this analysis will serve as a stepping-stone to advanced study in this important area.

## 2 Overview of the New Convergence Theorem

The new stochastic approximation theorem presented here, which covers algorithms that minimize the reactive environment learning risk function in equation 1.5 as well as the passive learning risk function in equation 1.4, is similar to analyses by Andrieu, Moulines, and Priouret (2005), Blum (1954), Kushner (1981, theorem 1), White (1989a, 1989b), Benveniste et al. (1990, appendix to part II), Bertsekas and Tsitsiklis (1996, proposition 4.1, p. 141), Gu and Kong (1998), and Delyon et al. (1999, theorem 1). With respect to the machine learning literature, the theorem and its proof are most closely related to the analysis of Sunehag et al. (2009). However, the assumptions, conclusions, and proof of this theorem are specifically designed to be easily understood by machine learning researchers working outside the field of stochastic approximation theory. The accessibility of these theoretical results is fundamentally important for the development of the field of machine learning to ensure that such results are correctly applied in specific applications. In addition to having conditions that are easily verifiable, the stochastic approximation theorem introduced here is applicable to a wide range of situations commonly encountered in practical machine learning problems.

If the Hessian of the objective function is positive definite everywhere on the parameter space, the theorem provides conditions ensuring convergence to the unique strict global minimum of the objective function. However, if the objective function has multiple minima, maxima, and saddle points, then the new stochastic approximation theorem is still applicable. In this latter nonconvex optimization case, the theorem provides the weaker conclusion that either the sequence of algorithm-generated parameter estimates converges to the set of critical points with probability one or the algorithm generates a sequence of parameter estimates that is not bounded with probability one.

Note the terminology that an event occurs “with probability one” means there is a zero probability that the event will not occur. For example, if the stochastic sequence $\tilde{\theta}(1), \tilde{\theta}(2), \ldots$ converges to some set $H$ with probability one, this means that the probability of observing any realization $\theta(1), \theta(2), \ldots$ that deterministically converges to $H$ is exactly equal to one and the probability of observing any realization that does not converge to $H$ is exactly equal to zero.

## 3 A Practical Convergence Analysis Recipe

In this section, a procedure for applying the new stochastic approximation theorem is provided. Section 5 provides a formal statement and proof of the theorem.

The assumption that a stochastic sequence $\tilde{x}(1), \tilde{x}(2), \ldots$ is bounded means that there exists some finite number $K$ such that $|\tilde{x}(t)| \le K$ with probability one. Here, the random vector $\tilde{x}(t)$ corresponds to an experiment that generates a training stimulus vector $x(t)$. If the random vector $\tilde{x}(t)$ is a discrete random vector restricted to take on a finite number of values (e.g., a $d$-dimensional binary random vector $\tilde{x}(t) \in \{0,1\}^d$), then this is a sufficient condition for the stochastic sequence to be bounded.

A sufficient condition for $c(\tilde{x}, \theta)$ to be a twice continuously differentiable random function is that $c$ is a continuous function of $x$ and the second derivative of $c$ with respect to $\theta$ is a continuous function on the $q$-dimensional parameter space $\Theta$.

The conclusion of the convergence theorem states that the stochastic sequence of parameter estimates $\tilde{\theta}(1), \tilde{\theta}(2), \ldots$ either (1) is not confined to a closed, bounded, and convex region, $\Theta$, of the parameter space with probability one or (2) converges to the set of critical points in $\Theta$ with probability one. For example, if the stochastic sequence of parameter estimates converges to a set of two critical points of $\ell$ such that it oscillates between these two points forever with probability one, then the sequence is said to converge to this set of two critical points with probability one:

- *Step 1: Identify the statistical environment.* A reactive statistical environment is modeled as a sequence of bounded, independent, and identically distributed $d$-dimensional random vectors $\tilde{x}(1), \tilde{x}(2), \ldots$ with common density $p_e(\cdot | \theta)$, where $\theta \in \mathbb{R}^q$. For passive statistical environments, the density $p_e$ is not functionally dependent on $\theta$.
- *Step 2: Check that $\ell$ is twice continuously differentiable with a lower bound.* Since $\{\tilde{x}(t)\}$ is assumed bounded and it will be assumed that $\{\tilde{\theta}(t)\}$ is a bounded stochastic sequence, this assumption is satisfied provided that $c$ and $p_e$ are twice continuously differentiable random functions defined such that for all $\theta \in \mathbb{R}^q$:
  $$\ell(\theta) = \int c(x, \theta)\, p_e(x | \theta)\, d\nu(x).$$
  That is, $\ell(\theta) = E\{c(\tilde{x}, \theta)\}$, where the expectation is taken with respect to $p_e(x | \theta)$. It is also assumed that $\ell$ has a lower bound on $\mathbb{R}^q$.
- *Step 3: Define the region of convergence.* Let $\Theta$ be a closed, bounded, and convex subset of $\mathbb{R}^q$.
- *Step 4: Check the annealing schedule.* Define a sequence of step sizes $\gamma_1, \gamma_2, \ldots$ that satisfies equations 5.1 and 5.2. In the context of adaptive learning, $\gamma_t$ corresponds to the adaptive learning algorithm's “learning rate.” For example, the step-size schedule
  $$\gamma_t = \gamma_0\, \frac{1 + (t/\tau_1)}{1 + (t/\tau_2)^2},$$
  where $0 < \tau_1 < \tau_2$ and $\gamma_0$ is positive, generates a sequence $\gamma_1, \gamma_2, \ldots$ that satisfies the constraints on the step-size sequence specified by equations 5.1 and 5.2. This particular step-size schedule initially increases the step size and then eventually decreases it. The constant $\tau_1$ should be chosen large enough that the learning algorithm observes a sufficiently rich sample of its statistical environment to support learning. The constant $\tau_2$ should be of the same order of magnitude as $\tau_1$. So, for example, if the learning machine observes $M$ distinct training stimuli with approximately equal probability and only one training stimulus is observed per iteration, then $\tau_1$ might be chosen to be $10M$ so that each training stimulus is observed approximately 10 times during both the search and the converge phases of the learning process.
- *Step 5: Identify the search direction function.* Let $d_t: \mathbb{R}^d \times \mathbb{R}^q \to \mathbb{R}^q$ be a piecewise continuous function on $\mathbb{R}^d \times \mathbb{R}^q$ for each $t \in \mathbb{N}$. Rewrite the learning rule for updating parameter estimates using the formula
  $$\tilde{\theta}(t+1) = \tilde{\theta}(t) + \gamma_t \tilde{d}_t,$$
  where the search direction random vector $\tilde{d}_t = d_t(\tilde{x}(t), \tilde{\theta}(t))$ and $\{\tilde{d}_t\}$ is a bounded stochastic sequence. A sufficient condition for $\{\tilde{d}_t\}$ to be a bounded stochastic sequence is that there exists a piecewise continuous function $d: \mathbb{R}^d \times \mathbb{R}^q \to \mathbb{R}^q$ on a finite partition of $\mathbb{R}^d \times \mathbb{R}^q$ such that $d_t = d$ for all $t \in \mathbb{N}$, since $\{\tilde{x}(t)\}$ and $\{\tilde{\theta}(t)\}$ are bounded stochastic sequences by assumption.
- *Step 6: Show the average search direction is downward.* Assume there exists a sequence of functions $\bar{d}_1, \bar{d}_2, \ldots$ such that
  $$\bar{d}_t(\theta) \equiv E\{d_t(\tilde{x}(t), \theta) \,|\, \theta\} = \int d_t(x, \theta)\, p_e(x | \theta)\, d\nu(x).$$
  Show that there exists a positive number $K$ such that
  $$\bar{d}_t(\theta)^T g(\theta) \le -K |g(\theta)|^2. \qquad (3.1)$$
  For example, choosing
  $$d_t(x, \theta) = -\frac{1}{p_e(x | \theta)} \frac{d\,[c(x, \theta)\, p_e(x | \theta)]}{d\theta} = -\frac{d c(x, \theta)}{d\theta} - c(x, \theta)\, \frac{d \log p_e(x | \theta)}{d\theta}$$
  yields the standard stochastic gradient descent direction
  $$\bar{d}_t(\theta) = \int d_t(x, \theta)\, p_e(x | \theta)\, d\nu(x) = -d\ell/d\theta,$$
  so that $\bar{d}_t(\theta)^T g(\theta) = -|g(\theta)|^2$.
- *Step 7: Investigate asymptotic behavior.* Let $H$ be the set of critical points in $\Theta$. Conclude that with probability one, either (1) the stochastic sequence does not remain in $\Theta$ for all $t > T$ for some positive integer $T$, or (2) $\tilde{\theta}(t) \to H$ as $t \to \infty$.
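As a quick numerical check of the annealing schedule in step 4, the schedule $\gamma_t = \gamma_0 (1 + t/\tau_1)/(1 + (t/\tau_2)^2)$ can be evaluated directly; the constants $\gamma_0$, $\tau_1$, and $\tau_2$ below are illustrative choices only:

```python
def step_size(t, gamma0=0.1, tau1=100.0, tau2=400.0):
    """Step-size schedule gamma_t = gamma0 * (1 + t/tau1) / (1 + (t/tau2)^2)
    with 0 < tau1 < tau2: rises during the search phase, then decays like 1/t."""
    return gamma0 * (1.0 + t / tau1) / (1.0 + (t / tau2) ** 2)

gammas = [step_size(t) for t in range(100000)]
# rises early (search phase) ...
assert gammas[50] > gammas[0]
# ... then eventually decreases toward zero (converge phase)
assert gammas[5000] > gammas[50000] > gammas[99999]
# for large t, gamma_t ~ c/t, so sum(gamma_t) diverges while sum(gamma_t^2)
# converges, matching the constraints of equations 5.1 and 5.2
```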

Consider the important special case where the Hessian of $\ell$ is positive definite on $\Theta$ even though $\ell$ is multimodal. The region $\Theta$ can contain no critical points, exactly one critical point, or multiple critical points. If $\Theta$ contains exactly one critical point in its interior, then that critical point is the unique global minimizer of $\ell$ on the interior of $\Theta$. The region $\Theta \subseteq \mathbb{R}^q$ may also contain one or more critical points of $\ell$ on its boundary corresponding to saddle points or local maximizers of $\ell$ on $\mathbb{R}^q$. For example, suppose that a smooth objective function $\ell$ has a strict local minimum at the point $\theta = 0$, a saddle point at $\theta = 5$, and a strict local maximum at the point $\theta = 10$. The Hessian of $\ell$ is positive definite on the set $\Theta_1 = [-3, -1]$, but no critical points exist in $\Theta_1$. The Hessian of $\ell$ is positive definite on the set $\Theta_2 = [-3, +3]$, and $\ell$ has a unique strict local minimizer at $\theta = 0$. The Hessian of $\ell$ is positive definite on the set $\Theta_3 = [-3, 5]$, and $\ell$ has two critical points located at $\theta = 0$ (strict local minimizer) and $\theta = 5$ (critical point on the boundary of $\Theta_3$).
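The one-dimensional example above can be made concrete with a hypothetical polynomial derivative chosen to have exactly the stated critical point structure (the particular polynomial is an illustrative assumption, not taken from the letter):

```python
def dloss(theta):
    """Derivative of a hypothetical smooth 1-D objective with a strict local
    minimum at 0, a saddle (inflection) point at 5, and a strict local
    maximum at 10: l'(theta) = -theta * (theta - 5)^2 * (theta - 10) / 1000."""
    return -theta * (theta - 5.0) ** 2 * (theta - 10.0) / 1000.0

# the derivative vanishes exactly at the three stated critical points
assert dloss(0.0) == dloss(5.0) == dloss(10.0) == 0.0
# sign pattern: decreasing before 0, increasing on (0,5) and (5,10),
# decreasing after 10 -- so 0 is a minimum, 5 a saddle, and 10 a maximum
assert dloss(-1.0) < 0 < dloss(2.0) and dloss(7.0) > 0 > dloss(12.0)
```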

## 4 Adaptive Learning Algorithm Applications

In this section, we discuss several examples of adaptive learning algorithms that can be analyzed using the stochastic approximation theorem for reactive environments presented in section 5.

### 4.1 Adaptive Learning in Passive Statistical Environments

In this section, some adaptive learning strategies for passive statistical environments are discussed. In such environments, the objective function is defined as in equation 1.4. It should be noted, however, that these adaptive learning strategies are applicable for reactive learning statistical environments as well where the objective function is defined as in equation 1.5.

Assume the observations $\tilde{x}(1), \tilde{x}(2), \ldots$ are independent and identically distributed with common density $p_e$.

The above methodology can also be used to implement different stochastic approximation variants of momentum, conjugate gradient, and limited-memory Broyden-Fletcher-Goldfarb-Shanno descent algorithms (Schraudolph et al., 2007; Jani et al., 2000; Paik et al., 2006), natural gradient descent methods (Schraudolph et al., 2007), and normalized gradient methods (Hazan et al., 2015).

In practice, one would set $\tilde{\mu}_k = 0$, yielding a gradient descent step, in situations where the magnitude of $\tilde{d}(k-1)^T \tilde{g}_k$ is less than some positive number $\epsilon$.
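A minimal sketch of this restart rule, assuming a hypothetical momentum-type direction $\tilde{d}_k = -\tilde{g}_k + \tilde{\mu}_k \tilde{d}_{k-1}$ (the function name and the two-dimensional test vectors are illustrative):

```python
def search_direction(g_k, d_prev, mu_k, eps=1e-3):
    """Momentum-type direction d_k = -g_k + mu_k * d_prev, with a restart:
    when |d_{k-1}^T g_k| < eps, set mu_k = 0, giving a pure gradient step."""
    if abs(sum(d * g for d, g in zip(d_prev, g_k))) < eps:
        mu_k = 0.0                    # restart: plain gradient descent step
    return [-g + mu_k * d for g, d in zip(g_k, d_prev)]

# gradient nearly orthogonal to the previous direction -> restart fires
d = search_direction([0.0, 1.0], [1.0, 0.0], mu_k=0.9)
print(d)   # [-0.0, -1.0]: the momentum term was dropped
```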

A random block coordinate descent algorithm (Razaviyayn, Hong, Luo, & Pang, 2014) can be realized within this proposed framework as well. Let $\odot$ denote the Hadamard product (element-by-element vector multiplication) operator. Let the set of $q$-dimensional binary vectors be denoted by $B \equiv \{0,1\}^q$. Let $m_t \in B$ be a $q$-dimensional binary vector whose $j$th element is one if the $j$th element of the $q$-dimensional random vector $\theta(t)$ is updated with information about training pattern $s(t)$ at learning trial $t$.
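A minimal sketch of such a masked update, under the assumption that the search direction is an ordinary stochastic gradient (the vectors and constants below are illustrative):

```python
import random

random.seed(1)

def masked_update(theta, grad, gamma, mask):
    """Random block coordinate descent step: only the coordinates selected by
    the binary mask m_t are updated (elementwise product of mask and grad)."""
    return [w - gamma * m * g for w, m, g in zip(theta, mask, grad)]

q = 4
theta = [1.0, 1.0, 1.0, 1.0]
grad = [0.5, 0.5, 0.5, 0.5]
mask = [random.randint(0, 1) for _ in range(q)]   # m_t in {0,1}^q
theta = masked_update(theta, grad, 0.1, mask)
# coordinates with m_t[j] = 0 are left unchanged
assert all(w == 1.0 for w, m in zip(theta, mask) if m == 0)
```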

### 4.2 Normalization Constants and Contrastive Divergence

Equation 4.12 cannot, however, be immediately used to derive a stochastic gradient descent algorithm that minimizes $\u2113$ for the following reasons. The first term on the right-hand side of equation 4.13 is usually relatively easy to evaluate. But the second term on the right-hand side of equation 4.13 is usually very difficult to evaluate because it involves a computationally intractable multidimensional integration.

Note that the statistical environment used to generate the data for the stochastic approximation algorithm in equation 4.16 is not a passive statistical environment since the parameters of the learning machine are updated at learning trial $k$ not only by the observation $\tilde{x}(k)$ but also by the observations $\tilde{y}_1, \ldots, \tilde{y}_m$, whose joint distribution is functionally dependent on the current parameter estimates $\theta(k)$. Thus, contrastive-divergence algorithms of this type can be analyzed approximately using the theorem presented in section 5.
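The reactive character of this scheme can be illustrated with a deliberately simple toy model, not a Markov random field: a Gaussian with unknown mean $\theta$, for which the “fantasy” observations $\tilde{y}_1, \ldots, \tilde{y}_m$ are drawn from the model at the current parameter estimate (the model and all constants below are illustrative assumptions):

```python
import random

random.seed(2)

def cd_learn(data_mean=3.0, m=10, num_iters=4000, gamma0=0.05, tau=500.0):
    """Contrastive-divergence-style update for a toy model with density
    proportional to exp(-(x - theta)^2 / 2): at each trial, compare the
    observed x(k) with m fantasy samples y_1..y_m drawn from the model at
    the current theta (a reactive statistical environment)."""
    theta = 0.0
    for k in range(num_iters):
        x = random.gauss(data_mean, 1.0)            # training stimulus
        fantasies = [random.gauss(theta, 1.0) for _ in range(m)]
        grad = x - sum(fantasies) / m               # data term minus model term
        gamma_k = gamma0 / (1.0 + k / tau)          # decaying step size
        theta += gamma_k * grad
    return theta

print(cd_learn())   # close to the data mean 3.0
```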

### 4.3 Missing Data, Hidden Variables, and the EM Algorithm

In this section, the problems of hidden variables and missing data are considered. The presence of hidden variables is not only a characteristic feature of latent variable models and deep learning architectures but can also be viewed as equivalent to the presence of data that are always missing.

Assume that $\tilde{x}_k$ is a $d$-dimensional *complete-data* random vector and $\tilde{m}_k$ is a $d$-dimensional *missing data indicator* binary random vector taking on values in $\{0,1\}^d$ for all $k \in \mathbb{N}$. The $j$th element of $\tilde{m}_k$ takes on the value of one if and only if the $j$th element of $\tilde{x}_k$ is observable.

For convenience, the $d$-dimensional random vector $\tilde{x}_k$ is partitioned such that $\tilde{x}_k = [\tilde{v}_k, \tilde{h}_k]$, where $\tilde{v}_k$ is the observable component of $\tilde{x}_k$ and $\tilde{h}_k$ is the unobservable component whose probability distribution is functionally dependent only on a realization of $\tilde{v}_k$. The elements of $\tilde{v}_k$ correspond to the visible random variables, while the elements of $\tilde{h}_k$ correspond to the hidden random variables or the missing data. Note that the dimensionalities of $\tilde{v}_k$ and $\tilde{h}_k$ will typically vary as a function of the positive integer index variable $k$.

Note that $m$ can be chosen equal to 1 or any positive integer. In the case where $m = \infty$, the resulting algorithm approximates the deterministic generalized expectation-maximization (GEM) algorithm (see McLachlan & Krishnan, 1996, for a formal definition of a GEM algorithm) in which the learning machine uses its current probabilistic model to compute the expected downhill search direction, takes a downhill step, updates its current probabilistic model, and then repeats this process in an iterative manner.
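A minimal Monte Carlo GEM-style sketch under illustrative assumptions: a toy two-component Gaussian mixture with known weights and variances and one unknown component mean $\theta$, where the hidden label $h$ is imputed $m$ times from $p(h \mid v, \theta)$ at the current estimate and one stochastic gradient step is taken on the resulting complete-data log-likelihood estimate:

```python
import math
import random

random.seed(3)

def normal_pdf(v, mu):
    return math.exp(-0.5 * (v - mu) ** 2) / math.sqrt(2.0 * math.pi)

def mcem_learn(true_mean=4.0, m=5, num_iters=8000, gamma0=0.2, tau=1000.0):
    """Monte Carlo EM-style stochastic gradient ascent for the toy mixture
    0.5*N(0,1) + 0.5*N(theta,1) with unknown theta. The hidden label h is
    imputed by sampling from p(h | v, theta) at the current estimate."""
    theta = 1.0
    for t in range(num_iters):
        h_true = random.random() < 0.5
        v = random.gauss(true_mean if h_true else 0.0, 1.0)  # visible datum
        # E-step (Monte Carlo): impute h ~ p(h = 1 | v, theta), m times
        p1 = 0.5 * normal_pdf(v, theta)
        post = p1 / (p1 + 0.5 * normal_pdf(v, 0.0))
        hs = [random.random() < post for _ in range(m)]
        # M-step: one stochastic gradient step on the complete-data score
        grad = sum(h * (v - theta) for h in hs) / m
        theta += gamma0 / (1.0 + t / tau) * grad
    return theta

print(mcem_learn())   # close to the true component mean 4.0
```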

### 4.4 Policy Gradient Reinforcement Learning

In this section, the stochastic approximation theorem developed here is applied to the problem of investigating the convergence of a class of reinforcement learning algorithms called *policy gradient reinforcement learning machines* (Williams, 1992; Sutton & Barto, 1998; Sugiyama, 2015). Suppose that a learning machine experiences a collection of episodes. The episodes $\tilde{u}(0), \tilde{u}(1), \ldots$ are assumed to be independent and identically distributed. In addition, the $k$th episode $u(k)$ is defined such that $u(k) \equiv [s_o(k), s_F(k)]$, where $s_o(k)$ is called the *initial state of episode* $u(k)$ and $s_F(k)$ is called the *final state of episode* $u(k)$. The probability density of $\tilde{u}(k)$ when the learning machine is embedded within a passive statistical environment is specified by the density $p_e(u) = p_e(s_o, s_F)$, where $p_e(u)$ specifies the likelihood that $u$ is observed by the learning machine in its statistical environment.

On the other hand, for a reactive learning environment, the probability that the learning machine selects action $a_j$ given the current state of the environment $s_o$ and the learning machine's current state of knowledge $\theta$ is expressed by the conditional probability mass function $p(a_j | s_o, \theta)$, $j = 1, \ldots, J$. The statistical environment of the learning machine is characterized by the probability density $p_e(s_o)$, specifying the likelihood of a given initial state of an episode, and the conditional density $p_e(s_F | a_j, s_o)$, which specifies the likelihood of a final state of an episode $s_F$ given the learning machine's action $a_j$ and the initial state of the episode $s_o$.
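A minimal policy-gradient sketch for episodes of the form $u = [s_o, s_F]$, under illustrative assumptions: a single initial state, two actions with hypothetical reward distributions, and a softmax policy $p(a_j | \theta)$ updated with a REINFORCE-style search direction (reward times the score of the policy):

```python
import math
import random

random.seed(4)

def reinforce_learn(num_episodes=5000, gamma0=0.1):
    """REINFORCE-style policy gradient: a softmax policy p(a_j | theta) over
    two actions; the environment returns a reward for the final state of each
    episode. Sampling actions from the current policy makes this reactive."""
    theta = [0.0, 0.0]                        # one logit per action
    mean_reward = [1.0, 0.0]                  # assumed expected rewards
    for t in range(num_episodes):
        z = sum(math.exp(w) for w in theta)
        probs = [math.exp(w) / z for w in theta]
        a = 0 if random.random() < probs[0] else 1
        r = random.gauss(mean_reward[a], 0.5)       # final-state reward
        # d log p(a | theta) / d theta_j = 1[a = j] - probs[j]
        gamma_t = gamma0 / (1.0 + t / 1000.0)       # decaying step size
        for j in range(2):
            theta[j] += gamma_t * r * ((1.0 if a == j else 0.0) - probs[j])
    z = sum(math.exp(w) for w in theta)
    return [math.exp(w) / z for w in theta]

print(reinforce_learn())   # probability mass shifts to the higher-reward action
```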

## 5 Formal Convergence Analysis of Learning

In this section, the stochastic approximation theorem for algorithms that minimize the reactive environment risk function in equation 1.5, as well as the passive environment risk function in equation 1.4, is formally stated and proved.

Although the specific theorem and proof presented here are novel, the obtained results and method of proof are very similar to many existing results in the literature. In particular, the statement and proof of the theorem follow a combination of arguments by Blum (1954), the appendix of Benveniste et al. (1990), and Sunehag et al. (2009) using the well-known Robbins-Siegmund lemma (Robbins & Siegmund, 1971; see Benveniste et al., 1990, appendix to part 2, or Douc, Moulines, & Stoffer, 2014, lemma C2, for relevant reviews).

The results presented here are similar to those obtained by Andrieu et al. (2005, theorem 2.3), Benveniste et al. (1990, appendix to part 2, pp. 344–347), Bertsekas and Tsitsiklis (1996, proposition 4.1, p. 141), Douc et al. (2014, theorem C.7), Kushner (1981, theorem 1), Kushner and Yin (1997, theorem 4.1), Mohri et al. (2012, theorems 14.7 and 14.8), and White (1989a, 1989b, theorem 3.1).

The terminology that a function $f: \mathbb{R}^d \times \mathbb{R}^q \to \mathbb{R}^q$ is *bounded* means that there exists a finite number $K$ such that $|f(x, \theta)| \le K$ for all $(x, \theta) \in \mathbb{R}^d \times \mathbb{R}^q$. The terminology that a stochastic sequence $\tilde{x}(0), \tilde{x}(1), \ldots$ is *bounded* means that there exists a finite number $K$ such that for all $t \in \mathbb{N}$, $|\tilde{x}(t)| \le K$ with probability one, where $\mathbb{N} \equiv \{0, 1, 2, \ldots\}$.

*piecewise continuous function on the finite partition* $D$.

Let $\Theta$ be a convex, closed, and bounded subset of $\mathbb{R}^q$. Let $\ell: \mathbb{R}^q \to \mathbb{R}$ be a twice continuously differentiable function.

Let the gradient of $\ell$ be denoted as $g \equiv (\nabla \ell)^T$. Let the Hessian of $\ell$ be denoted as $H \equiv \nabla^2 \ell$.

See Robbins and Siegmund (1971; also see Benveniste et al., 1990, p. 344, or Douc et al., 2014, lemma C2) for the statement and proof of the almost supermartingale lemma.

Let $\Theta$ be a closed, bounded, and convex subset of $\mathbb{R}^q$. Let $\ell: \mathbb{R}^q \to \mathbb{R}$ be a twice continuously differentiable function with a finite lower bound. Let $g \equiv (\nabla \ell)^T$. Let $H \equiv \nabla^2 \ell$.

- Assume $\tilde{x}_\theta$ has Radon-Nikodým density $p_e(\cdot | \theta): \mathbb{R}^d \to [0, \infty)$ with respect to a sigma-finite measure $\nu$ for each $\theta \in \Theta$.
- Assume a positive number $x_{\max}$ exists such that for all $\theta \in \Theta$, the random vector $\tilde{x}_\theta$ with density $p_e(\cdot | \theta)$ satisfies $|\tilde{x}_\theta| < x_{\max}$ with probability one.
- Let $\gamma_0, \gamma_1, \gamma_2, \ldots$ be a sequence of positive real numbers such that
  $$\sum_{t=0}^{\infty} \gamma_t^2 < \infty \qquad (5.1)$$
  and
  $$\sum_{t=0}^{\infty} \gamma_t = \infty. \qquad (5.2)$$
- Let $d_t: \mathbb{R}^d \times \mathbb{R}^q \to \mathbb{R}^q$ be a piecewise continuous function on a finite partition of $\mathbb{R}^d \times \mathbb{R}^q$ for all $t \in \mathbb{N}$. When it exists, let
  $$\bar{d}_t(\theta) = \int d_t(x, \theta)\, p_e(x | \theta)\, d\nu(x).$$
- Let $\tilde{\theta}(0)$ be a $q$-dimensional random vector. Let $\tilde{\theta}(1), \tilde{\theta}(2), \ldots$ be a sequence of $q$-dimensional random vectors defined such that for $t = 0, 1, 2, \ldots$,
  $$\tilde{\theta}(t+1) = \tilde{\theta}(t) + \gamma_t \tilde{d}_t, \qquad (5.3)$$
  where $\tilde{d}_t \equiv d_t(\tilde{x}_{\theta(t)}, \tilde{\theta}(t))$ such that $|\tilde{d}_t|$ is less than some finite number for $t = 0, 1, 2, \ldots$, and the distribution of $\tilde{x}_{\theta(t)}$ is specified by the conditional density $p_e(\cdot | \tilde{\theta}(t))$.
- Assume there exists a positive number $K$ such that for all $\theta \in \Theta$,
  $$\bar{d}_t(\theta)^T g(\theta) \le -K |g(\theta)|^2. \qquad (5.4)$$

If there exists a positive integer $T$ such that $\tilde{\theta}(t) \in \Theta$ for all $t \ge T$ with probability one, then $\tilde{\theta}(1), \tilde{\theta}(2), \ldots$ converges with probability one to the set of critical points of $\ell$ contained in $\Theta$.

**Proof.** Let $\tilde{\ell}_t \equiv \ell(\tilde{\theta}(t))$ with realization $\ell_t \equiv \ell(\theta(t))$. Let $\tilde{g}_t \equiv g(\tilde{\theta}(t))$ with realization $g_t \equiv g(\theta(t))$. Let $\tilde{H}_t \equiv H(\tilde{\theta}(t))$ with realization $H_t \equiv H(\theta(t))$.

*Step 1: Expand $\ell$ using a second-order mean value expansion.* Expand $\ell$ about $\tilde{\theta}(t)$ and evaluate at $\tilde{\theta}(t+1)$ using the mean value theorem to obtain
$$\tilde{\ell}_{t+1} = \tilde{\ell}_t + \tilde{g}_t^{\mathsf{T}}\left(\tilde{\theta}(t+1) - \tilde{\theta}(t)\right) + \gamma_t^2 \tilde{R}_t \tag{5.5}$$
with
$$\tilde{R}_t \equiv (1/2)\, \tilde{d}_t^{\mathsf{T}} H(\tilde{\zeta}_t)\, \tilde{d}_t, \tag{5.6}$$
where the random variable $\tilde{\zeta}_t$ can be defined as a point on the chord connecting $\tilde{\theta}(t)$ and $\tilde{\theta}(t+1)$. Substituting the relation $\gamma_t \tilde{d}_t = \tilde{\theta}(t+1) - \tilde{\theta}(t)$ into equation 5.5 gives
$$\tilde{\ell}_{t+1} = \tilde{\ell}_t + \gamma_t \tilde{g}_t^{\mathsf{T}} \tilde{d}_t + \gamma_t^2 \tilde{R}_t. \tag{5.7}$$

*Step 2: Identify conditions required for the remainder term of the expansion to be bounded.* Since, by assumption, $\{\tilde{\theta}(t)\}$ is a bounded stochastic sequence and $H$ is continuous, the stochastic sequence $\{H(\tilde{\zeta}_t)\}$ is bounded. In addition, by assumption, $\{\tilde{d}_t\}$ is a bounded stochastic sequence. This implies there exists a number $R_{\max}$ such that for all $t = 0, 1, 2, \ldots$,
$$|\tilde{R}_t| < R_{\max} \tag{5.8}$$
with probability one.

*Step 3: Show the expected value of the objective function decreases.* Taking the conditional expectation of both sides of equation 5.7 with respect to the conditional density $p_e$ and evaluating at $\theta(t)$ and $\gamma_t$ yields
$$E\{\tilde{\ell}_{t+1} \mid \theta(t)\} = \ell_t + \gamma_t g_t^{\mathsf{T}} \bar{d}_t + \gamma_t^2 E\{\tilde{R}_t \mid \theta(t)\}. \tag{5.9}$$
Substituting the assumption $\bar{d}_t(\theta)^{\mathsf{T}} g(\theta) \leq -K|g(\theta)|^2$ and the conclusion of step 2 that $|\tilde{R}_t| < R_{\max}$ with probability one into equation 5.9 gives
$$E\{\tilde{\ell}_{t+1} \mid \theta(t)\} \leq \ell_t - \gamma_t K |g_t|^2 + \gamma_t^2 R_{\max}. \tag{5.10}$$

*Step 4: Show a subsequence of $\{|\tilde{g}_t|^2\}$ converges to zero wp1.* Since $\ell$ has a lower bound, $K$ is a finite positive number, and equation 5.1 holds by assumption, the almost supermartingale lemma can be applied to equation 5.10 on the set where $\{\tilde{\theta}(t)\}$ and $\{\tilde{d}_t\}$ are bounded with probability one, to obtain the conclusion that
$$\sum_{t=0}^{\infty} \gamma_t |\tilde{g}_t|^2 < \infty \tag{5.11}$$
with probability one. For each positive integer $T$, let
$$\tilde{a}_T^* \equiv \inf\left\{|\tilde{g}_T|^2, |\tilde{g}_{T+1}|^2, \ldots\right\}.$$
The sequence $\tilde{a}_T^*, \tilde{a}_{T+1}^*, \ldots$ is nondecreasing with probability one and bounded from above (since $\{\tilde{\theta}(t)\}$ is bounded and $g$ is continuous), which implies that this sequence is convergent with probability one to a random variable $\tilde{a}^*$ (see theorem 5.1.1(vii); Rosenlicht, 1968, p. 50). Assume that $\tilde{a}^*$ is positive and not equal to zero. Then there exists a positive integer $T$ such that $\tilde{a}_T^* > 0$, and since $|\tilde{g}_t|^2 \geq \tilde{a}_T^*$ for all $t \geq T$, equation 5.2 implies
$$\sum_{t=T}^{\infty} \gamma_t |\tilde{g}_t|^2 \geq \tilde{a}_T^* \sum_{t=T}^{\infty} \gamma_t = \infty,$$
which contradicts equation 5.11. Thus, the sequence $\tilde{a}_T^*, \tilde{a}_{T+1}^*, \ldots$ is convergent with probability one to zero. Equivalently, a subsequence of $\{|\tilde{g}_t|^2\}$ is convergent with probability one to zero.

*Step 5: Show that the stochastic sequence $\{\tilde{\theta}(t)\}$ converges to a random variable wp1.* From conclusion (1) of the almost supermartingale lemma, the stochastic sequence $\ell(\tilde{\theta}(1)), \ell(\tilde{\theta}(2)), \ldots$ converges with probability one to some unknown random variable, which will be denoted as $\tilde{\ell}^*$. Since $\ell$ is continuous, this is equivalent to the assertion that $\tilde{\theta}(1), \tilde{\theta}(2), \ldots$ converges with probability one to some unknown random variable, which will be denoted as $\tilde{V}^*$, such that $\ell(\tilde{V}^*) = \tilde{\ell}^*$ with probability one. By the assumption that, with probability one, every trajectory $\tilde{\theta}(1), \tilde{\theta}(2), \ldots$ is confined to the closed, bounded, and convex set $\Theta$, it follows that $\tilde{V}^* \in \Theta$ with probability one.

*Step 6: Show the stochastic sequence $\{|\tilde{g}_t|^2\}$ converges to zero wp1.* Since $g$ is a continuous function, it follows that $|g(\tilde{\theta}(1))|^2, |g(\tilde{\theta}(2))|^2, \ldots$ converges with probability one to $|g(\tilde{V}^*)|^2$. This is equivalent to the statement that every subsequence of $\{|g(\tilde{\theta}(t))|^2\}$ converges to $|g(\tilde{V}^*)|^2$ with probability one. That is, for every possible sequence of positive integers $t_1, t_2, \ldots$, the stochastic subsequence $|g(\tilde{\theta}(t_1))|^2, |g(\tilde{\theta}(t_2))|^2, \ldots$ converges with probability one to $|g(\tilde{V}^*)|^2$. From step 4, there exists a sequence of positive integers $k_1, k_2, \ldots$ such that the stochastic subsequence $|g(\tilde{\theta}(k_1))|^2, |g(\tilde{\theta}(k_2))|^2, \ldots$ converges with probability one to zero. Thus, to avoid a contradiction, every subsequence of $\{|g(\tilde{\theta}(t))|^2\}$ converges with probability one to $|g(\tilde{V}^*)|^2$ with $|g(\tilde{V}^*)|^2 = 0$ with probability one; or equivalently, $\{|g(\tilde{\theta}(t))|^2\}$ converges to zero with probability one.

Since $|g|^2$ is a continuous function and $\tilde{V}^* \in \Theta$ with probability one, it follows that $\tilde{\theta}(1), \tilde{\theta}(2), \ldots$ converges with probability one to
$$\left\{\tilde{V}^* \in \Theta : |g(\tilde{V}^*)|^2 = 0\right\}.$$
That is, $\tilde{\theta}(1), \tilde{\theta}(2), \ldots$ converges with probability one to the set of critical points of $\ell$ in $\Theta$. $\square$
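The theorem guarantees convergence to the *set* of critical points rather than to a unique minimizer, which is the relevant regime for objectives with multiple strict local minimizers. The following sketch, not taken from the letter, applies the update rule 5.3 to an assumed one-dimensional nonconvex objective $\ell(\theta) = \theta^4/4 - \theta^2/2$, whose critical points are $\{-1, 0, +1\}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed nonconvex toy objective with critical points {-1, 0, +1}:
#   l(theta) = theta**4 / 4 - theta**2 / 2,  so  g(theta) = theta**3 - theta
def g(theta):
    return theta ** 3 - theta

theta = 0.5                           # theta(0)
for t in range(50000):
    gamma_t = 1.0 / (t + 10)          # annealed step sizes
    noise = 0.1 * rng.standard_normal()
    d_t = -(g(theta) + noise)         # conditional mean -g(theta), so (5.4) holds with K = 1
    theta = theta + gamma_t * d_t     # the update rule (5.3)

# |g(theta(t))|^2 approaches zero: the iterate settles near the critical-point set.
print(theta, g(theta) ** 2)
```

Which member of the critical-point set the trajectory settles near depends on the initial condition and the noise realization; the theorem asserts only that the squared gradient norm vanishes along the trajectory.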