Abstract
In this paper, we analyze the effects of depth and width on the quality of local minima, without strong overparameterization and simplification assumptions in the literature. Without any simplification assumption, for deep nonlinear neural networks with the squared loss, we theoretically show that the quality of local minima tends to improve toward the global minimum value as depth and width increase. Furthermore, with a locally induced structure on deep nonlinear neural networks, the values of local minima of neural networks are theoretically proven to be no worse than the globally optimal values of corresponding classical machine learning models. We empirically support our theoretical observation with a synthetic data set, as well as MNIST, CIFAR-10, and SVHN data sets. When compared to previous studies with strong overparameterization assumptions, the results in this letter do not require overparameterization and instead show the gradual effects of overparameterization as consequences of general results.
1 Introduction
Deep learning with neural networks has been a significant practical success in many fields, including computer vision, machine learning, and artificial intelligence. Along with its practical success, deep learning has been theoretically analyzed and shown to be attractive in terms of its expressive power. For example, neural networks with one hidden layer can approximate any continuous function (Leshno, Lin, Pinkus, & Schocken, 1993; Barron, 1993), and deeper neural networks enable us to approximate functions of certain classes with fewer parameters (Montufar, Pascanu, Cho, & Bengio, 2014; Livni, Shalev-Shwartz, & Shamir, 2014; Telgarsky, 2016). However, training deep learning models requires us to work with a seemingly intractable problem: nonconvex and high-dimensional optimization. Finding a global minimum of a general nonconvex function is NP-hard (Murty & Kabadi, 1987), and nonconvex optimization to train certain types of neural networks is also known to be NP-hard (Blum & Rivest, 1992). These hardness results pose a serious concern only for high-dimensional problems, because global optimization methods can efficiently approximate global minima without convexity in relatively low-dimensional problems (Kawaguchi, Kaelbling, & Lozano-Pérez, 2015).
A hope is that beyond the worst-case scenarios, practical deep learning allows some additional structure or assumption to make nonconvex high-dimensional optimization tractable. Recently, it has been shown with strong simplification assumptions that there are novel loss landscape structures in deep learning optimization that may play a role in making the optimization tractable (Dauphin et al., 2014; Choromanska, Henaff, Mathieu, Ben Arous, & LeCun, 2015; Kawaguchi, 2016). Another key observation is that if a neural network is strongly overparameterized so that it can memorize any data set of a fixed size, then all stationary points (including all local minima and saddle points) become global minima, with some nondegeneracy assumptions. This observation was explained by Livni et al. (2014) and further refined by Nguyen and Hein (2017, 2018). However, these previous results (Livni et al., 2014; Nguyen and Hein, 2017, 2018) require strong overparameterization by assuming not only that a network's width is larger than the data set size but also that optimizing only a single layer (the last layer or some hidden layer) can memorize any data set based on an assumed condition on the rank or nondegeneracy of other layers.
In this letter, we analyze the effects of depth and width on the values of local minima, without the strong overparameterization and simplification assumptions in the literature. As a result, we prove quantitative upper bounds on the quality of local minima, which show that the values of local minima of neural networks are guaranteed to be no worse than the globally optimal values of corresponding classical machine learning models, and the guarantee can improve as depth and width increase.
2 Preliminaries
This section defines the optimization problem considered in this letter and introduces the basic notation.
2.1 Problem Formulation
Let and be an input vector and a target vector, respectively. Let be a training data set of size . Given a set of matrices or vectors , define to be a block matrix of each column block being . Define the training data matrices as and .
2.2 Additional Notation
Define to be the orthogonal projection matrix onto the column space (or range space) of a matrix . Let be the orthogonal projection matrix onto the null space (or kernel space) of a matrix . For a matrix , we denote the standard vectorization of the matrix as .
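As a concrete illustration of this notation (not part of the original analysis), the following NumPy sketch computes the two projection matrices for a small matrix; the function names are ours and the construction via the pseudoinverse is one standard choice among several.

```python
import numpy as np

def proj_column_space(A):
    """Orthogonal projection matrix onto the column space (range) of A."""
    # One standard construction uses the Moore-Penrose pseudoinverse: A A^+.
    return A @ np.linalg.pinv(A)

def proj_null_space(A):
    """Orthogonal projection matrix onto the null space (kernel) of A."""
    # The null space of A is the orthogonal complement of the row space of A,
    # that is, of the column space of A^T.
    n = A.shape[1]
    return np.eye(n) - proj_column_space(A.T)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    P_col = proj_column_space(A)
    P_null = proj_null_space(A)
    # Sanity checks: projections are idempotent, and A annihilates
    # its null-space projection (up to numerical error).
    assert np.allclose(P_col @ P_col, P_col)
    assert np.allclose(A @ P_null, 0.0, atol=1e-10)
```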
3 Shallow Nonlinear Neural Networks with Scalar-Valued Output
Before presenting our main results for deep nonlinear neural networks, this section provides the results for shallow networks with a single hidden layer (or three-layer networks with the input and output layers) and scalar-valued output (i.e., ) to illustrate some of the ideas behind the discussed effects of the depth and width on local minima.
3.1 Analysis with ReLU Activations
Under this setting, proposition 1 provides an equation that holds at local minima and illustrates the effect of width for shallow ReLU neural networks.
Proposition 1 is an immediate consequence of our general result (see theorem 1) in the next section (the proof is provided in section A.1). In the rest of this section, we provide a proof sketch of proposition 1.
3.2 Probabilistic Bound
From equation 2.2 in proposition 1, the loss at differentiable local minima is expected to decrease as the width of the hidden layer increases. To further support this theoretical observation, this section derives a probabilistic upper bound on the loss for white noise data by fixing the activation patterns for and assuming that the data matrix is a random gaussian matrix, with each entry having mean zero and variance one.
This definition of generalizes the corresponding definition in section 3.1. Proposition 1 holds for this generalized activation pattern by simply replacing the previous definition of by this more general definition. This can be seen from the proof sketch in section 3.1 and is later formalized in the proof of theorem 1.
We denote the vector consisting of the diagonal entries of by for . Define the activation pattern matrix as . For any index set , let denote the submatrix of that consists of its rows of indices in . Let be the smallest singular value of .
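For concreteness, the sketch below (ours, not from the letter) forms an activation pattern matrix for a single ReLU hidden layer: each entry records whether a unit is active (derivative 1) or inactive (derivative 0) on a given training input. The variable names and shapes are our assumptions.

```python
import numpy as np

def relu_activation_pattern(X, W, b=None):
    """Return the m-by-n activation pattern matrix of a ReLU layer.

    X : (m, d) data matrix, one training input per row.
    W : (d, n) weight matrix of a hidden layer with n ReLU units.
    Entry (i, j) is 1 if unit j is active on input i, and 0 otherwise.
    """
    pre_activation = X @ W if b is None else X @ W + b
    return (pre_activation > 0).astype(float)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, d, n = 8, 3, 5
    X = rng.standard_normal((m, d))
    W = rng.standard_normal((d, n))
    D = relu_activation_pattern(X, W)
    # With the pattern fixed, the layer acts linearly: relu(XW) = D * (XW).
    assert np.allclose(np.maximum(X @ W, 0.0), D * (X @ W))
```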
Proposition 2 proves that in the regime , and in the regime , under the corresponding conditions on ; that is, for any index set such that in the regime , and in the regime . This supports our theoretical observation that increasing width helps improve the quality of local minima.
Fix the activation pattern matrix . Let be a random gaussian matrix, with each entry having mean zero and variance one. Then the loss as in equation 3.2 satisfies both of the following statements:
- i. If and for any index set with , then with probability at least .
- ii. If with and for any index set with , then with probability at least .
The proof of proposition 2 is provided in appendix B. In that proof, we first rewrite the loss as the projection of onto the null space of an matrix , with an explicit expression in terms of the activation pattern matrix and the data matrix . By our assumption, the data matrix is a random gaussian matrix. The projection matrix is also a random matrix. Proposition 2 then boils down to understanding the rank of the projection matrix , and we proceed to show that has the largest possible rank, , with high probability. In fact, we derive quantitative estimates on the smallest singular value of . The main difficulties are that the columns of the matrix are correlated and the variances of its entries vary. Our approach to obtaining quantitative estimates on the smallest singular value of combines an epsilon-net argument with an iterative argument.
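As a rough numerical illustration of this rank argument (not a substitute for the proof, and not necessarily the exact matrix used in appendix B), the sketch below builds a matrix of the same flavor, with each row formed from a fixed activation pattern and the corresponding Gaussian input, and checks that it typically attains the largest possible rank; the construction and sizes are our assumptions.

```python
import numpy as np

def masked_khatri_rao(D, X):
    """Row-wise Kronecker products: row i is kron(D[i], X[i]).

    D : (m, n) fixed activation pattern matrix (entries 0/1).
    X : (m, d) Gaussian data matrix.
    Returns an (m, n*d) matrix whose rank controls how much of the target
    survives the projection onto the null space of its transpose.
    """
    m = D.shape[0]
    return np.stack([np.kron(D[i], X[i]) for i in range(m)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, d, n = 200, 10, 40           # n*d = 400 > m: an overparameterized regime
    X = rng.standard_normal((m, d))
    D = (rng.standard_normal((m, n)) > 0).astype(float)   # a fixed pattern
    M = masked_khatri_rao(D, X)
    print("rank(M) =", np.linalg.matrix_rank(M), "out of at most", m)
    # When rank(M) = m, any target vector projects to zero on the null space
    # of M^T, so the corresponding loss value is zero.
```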
In the regime , results similar to proposition 2ii were obtained under certain diversity assumptions on the entries of the weight matrices in a previous study (Xie, Liang, & Song, 2017). When compared with the previous study (Xie et al., 2017), proposition 2 specifies precise relations between the size of the neural network and the size of the data set and also holds true in the regime . Moreover, our proof arguments for proposition 2ii are different. Xie et al. (2017), under the assumption that , show that is close to its expectation in the sense of spectral norm. As a consequence, the lower bound of the smallest eigenvalue of gives the lower bound for the smallest singular value of .
However, proposition 2 assumes a gaussian data matrix, which may be a substantial limitation. The proof of proposition 2 relies on the concentration properties of the gaussian distribution. Whereas a similar proof could extend proposition 2 to a nongaussian distribution with these properties (e.g., distributions with subgaussian tails), it would be challenging to apply such a proof to a general distribution without such properties.
4 Deep Nonlinear Neural Networks
Let be the number of hidden layers and be the width (or, equivalently, the number of units) of the th hidden layer. To theoretically analyze concrete phenomena, the rest of this letter focuses on fully connected feedforward networks with various depths and widths , using rectified linear units (ReLUs), leaky ReLUs, and absolute value activations, evaluated with the squared loss function. In the rest of this letter, the (finite) depth can be arbitrarily large and the (finite) widths can arbitrarily differ among different layers.
4.1 Model and Notation
This definition of generalizes the corresponding definition in section 3. Let be the identity matrix of size by . Define to be the Kronecker product of matrices and . Given a matrix , and denote the th column vector of and the th row vector of , respectively.
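The Kronecker product enters the analysis through the standard identity vec(AXB) = (B^T ⊗ A) vec(X). The short check below (ours) verifies it numerically with column-major vectorization, the usual column-stacking convention; the letter's own vectorization convention is defined in section 2 and may differ in orientation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
X = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 2))

def vec(M):
    # Column-major (column-stacking) vectorization.
    return M.flatten(order="F")

# Standard identity: vec(A X B) = (B^T kron A) vec(X).
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```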
4.2 Theoretical Result
For the standard deep nonlinear neural networks, theorem 1 provides an equation that holds at local minima and illustrates the effect of depth and width. Let for all and .
The complete proof of theorem 1 is provided in section A.1. Theorem 1 is a generalization of proposition 1. Accordingly, its proof follows the proof sketch presented in the previous section for proposition 1.
Unlike previous studies (Livni et al., 2014; Nguyen & Hein, 2017, 2018), theorem 1 requires no overparameterization such as . Instead, it provides quantitative gradual effects of depth and width on local minima, from no overparameterization to overparameterization. Notably, theorem 1 shows the effect of overparameterization in terms of depth as well as width, which also differs from the results of previous studies that consider overparameterization in terms of width (Livni et al., 2014; Nguyen & Hein, 2017, 2018).
The proof idea behind these previous studies with strong overparameterization is captured in the discussion after equation 3.3—with strong overparameterization such that and , is left-invertible and hence every local minimum is a global minimum with zero training error. Here, represents the rank of a matrix . The proof idea behind theorem 1 differs from those as shown in section 3.1. What is still missing in theorem 1 is the ability to provide a prior guarantee on without strong overparameterization, which is addressed in sections 3.2 and 5 for some special cases but left as an open problem for other cases.
4.3 Experiments
In the synthetic data set, the data points were randomly generated by a ground-truth, fully connected feedforward neural network with , for all , tanh activation function, and . MNIST (LeCun, Bottou, Bengio, & Haffner, 1998), a popular data set for recognizing handwritten digits, contains 28 × 28 gray-scale images. The CIFAR-10 (Krizhevsky & Hinton, 2009) data set consists of 32 × 32 color images that contain different types of objects such as “airplane,” “automobile,” and “cat.” The Street View House Numbers (SVHN) data set (Netzer et al., 2011) contains house digits collected by Google Street View, and we used the 32 × 32 color image version for the standard task of predicting the digits in the middle of these images. In order to reduce the computational cost, for the image data sets (MNIST, CIFAR-10, and SVHN), we center-cropped the images ( for MNIST and for CIFAR-10 and SVHN), then resized them to smaller gray-scale images ( for MNIST and for CIFAR-10 and SVHN), and used randomly selected subsets of the data sets with size as the training data sets.
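A preprocessing pipeline along these lines can be sketched with torchvision as below. The crop size, resize size, and subset size shown are placeholders rather than the values used in the letter (those values are elided in this copy of the text), and MNIST is used only as an example data set.

```python
# Sketch of the image preprocessing described above (PyTorch / torchvision).
import torch
from torchvision import datasets, transforms

CROP_SIZE = 24      # placeholder center-crop size
RESIZE_SIZE = 8     # placeholder size of the smaller resized image
SUBSET_SIZE = 1000  # placeholder training-subset size

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # gray-scale images
    transforms.CenterCrop(CROP_SIZE),             # center-crop
    transforms.Resize(RESIZE_SIZE),               # resize to a smaller image
    transforms.ToTensor(),
])

full_train = datasets.MNIST("./data", train=True, download=True,
                            transform=preprocess)
indices = torch.randperm(len(full_train))[:SUBSET_SIZE]
train_subset = torch.utils.data.Subset(full_train, indices.tolist())
```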
For all the data sets, the network architecture was fixed to be a fully connected feedforward network with the ReLU activation function. For each data set, the values of were computed with initial random weights drawn from a normal distribution with zero mean and normalized standard deviation () and with trained weights at the end of 40 training epochs. (Additional experimental details are presented in appendix C.)
Figure 1 shows the results with the synthetic data set, as well as the MNIST, CIFAR-10, and SVHN data sets. As can be seen, the values of tend to decrease toward zero (and hence the global minimum value) as the width or depth of neural networks increases. In theory, the values of may not improve as much as desired along depth and width if the representations corresponding to each unit and each layer are redundant in the sense of linear dependence of the columns of (see theorem 1). Intuitively, the randomness of the initial weights mitigates this redundancy, and hence a major concern is whether such redundancy arises and worsens during training. From Figure 1, it can also be noticed that the values of tend to decrease along with training. These empirical results partially support our theoretical observation that increasing the depth and width can improve the quality of local minima.
Figure 1: The values of for the training data sets ( are on the right-hand side of equation 4.1) with varying depth (-axis) and width for all (-axis). The heat map colors represent the values of . In all panels of this figure, the left heat map (initial) is computed with initial random weights and the right heat map (trained) is calculated after training. It can be seen that both depth and width helped improve the values of .
5 Deep Nonlinear Neural Networks with Local Structure
Given the scarcity of theoretical understanding of the optimality of deep neural networks, Goodfellow, Bengio, and Courville (2016) noted that it is valuable to theoretically study simplified models: deep linear neural networks. For example, Saxe, McClelland, and Ganguli (2014) empirically showed that in terms of optimization, deep linear networks exhibited several properties similar to those of deep nonlinear networks. Following these observations, the theoretical study of deep linear neural networks has become an active area of research (Kawaguchi, 2016; Hardt & Ma, 2017; Arora, Cohen, Golowich, & Hu, 2018; Arora, Cohen, & Hazan, 2018), as a step toward the goal of establishing the optimization theory of deep learning.
As another step toward the goal, this section discards the strong linearity assumption and considers a locally induced nonlinear-linear structure in deep nonlinear networks with piecewise linear activation functions such as ReLUs, leaky ReLUs, and absolute value activations.
5.1 Locally Induced Nonlinear-Linear Structure
In this section, we describe how a standard deep nonlinear neural network can induce nonlinear-linear structure. The nonlinear-linear structure considered in this letter is defined in definition 1: condition i simply defines the index subsets that pick out the relevant subset of units at each layer , condition ii requires the existence of linearly acting units, and condition iii imposes weak separability of edges.
A parameter vector is said to induce weakly separated linear units on a training input data set if there exist sets such that for all , the following three conditions hold:
- i. with .
- ii. for all .
- iii. for all if .
Given a training input data set , let be the set of all parameter vectors that induce weakly separated linear units on the training input data set that defines the total loss in equation 2.1. For standard deep nonlinear neural networks, all parameter vectors are in , and some parameter vectors are in for different values of . Figure 2a illustrates locally induced structures for . For a parameter to be in , definition 1 requires only that a portion of the units act linearly on the particular training data set merely at the particular . Thus, all units can be nonlinear, act nonlinearly on the training data set outside of some parameters , and always operate nonlinearly on other inputs—for example, in a test data set or a different training data set. The weak separability requires that the edges going from the units to the rest of the network are negligible; it does not require the units to be separated from the rest of the neural network.
Figure 2: Illustration of locally induced nonlinear-linear structures. (a) Simple examples of the structure with weakly separated edges considered in this section (see definition 1). (b) Examples of a simpler structure with strongly separated edges (see definition 2). The red nodes represent the linearly acting units on a training data set at a particular , and the white nodes are the remaining units. The black dashed edges represent standard edges without any assumptions. The red nodes are allowed to depend on all nodes from the previous layer in panel a, whereas they are not allowed in panel b except for the input layer. In both panels a and b, two examples of parameters are presented with the exact same network architecture (including activation functions and edges). Even if the network architecture (or parameterization) is identical, different parameters can induce different local structures. With , this local structure always holds in standard deep nonlinear networks with four hidden layers.
Here, a neural network with can be a standard deep nonlinear neural network (without any linear units in its architecture), a deep linear neural network (with all activation functions being linear), or a combination of these cases. Whereas a standard deep nonlinear neural network can naturally have parameters , it is possible to guarantee all parameters to be in with desired simply by using corresponding network architectures. For standard deep nonlinear neural networks, one can also restrict all relevant convergent solution parameters to be in by using some corresponding learning algorithms. Our theoretical results hold for all of these cases.
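One simple way a ReLU unit can act linearly on a training set, and hence be a candidate for the sets in definition 1, is for its pre-activations to be nonnegative on every training input. The sketch below (ours, with assumed shapes) checks this sufficient condition; it is not the definition itself, and such structure can also be guaranteed by architecture design or by a learning algorithm, as noted above.

```python
import numpy as np

def linearly_acting_relu_units(X, W, b):
    """Indices of ReLU units whose pre-activations are >= 0 on every input.

    On the given training inputs, such a unit outputs exactly its affine
    pre-activation, i.e., it acts linearly on this particular data set
    (it may still act nonlinearly on other inputs).
    X : (m, d) training inputs, W : (d, n) weights, b : (n,) biases.
    """
    pre_activation = X @ W + b
    return np.flatnonzero((pre_activation >= 0).all(axis=0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 4))
    W = rng.standard_normal((4, 6))
    b = rng.standard_normal(6) + 2.0   # shift biases so some units stay active
    print("linearly acting units:", linearly_acting_relu_units(X, W, b))
```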
5.2 Theoretical Result
We state our main theoretical result in theorem 2 and corollary 1; a simplified statement is presented in remark 1. Here, a classical machine learning method, basis function regression, is used as a baseline to be compared with neural networks. The global minimum value of basis function regression with an arbitrary basis matrix is , where the basis matrix does not depend on and can represent nonlinear maps, for example, by setting with any nonlinear basis functions and any finite . In theorem 2, the expression represents the projection of onto the null space of , which equals minus the projection of onto the column space of . Given matrices with a sequence , define to be a block matrix with columns being . Let denote a subsequence of .
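For reference, the global minimum value of basis function regression used as the baseline here is a linear least-squares value: the squared norm of the residual left after projecting the targets onto the column space of the basis matrix. The sketch below (ours) computes it for an assumed polynomial basis, up to the normalization of the loss used in the letter, which is not reproduced here.

```python
import numpy as np

def basis_regression_min_loss(Phi, Y):
    """Global minimum of min_W ||Phi W - Y||_F^2 for a fixed basis matrix Phi.

    Equals the squared Frobenius norm of the part of Y lying in the null
    space of Phi^T, i.e., Y minus its projection onto the column space of Phi.
    """
    P_col = Phi @ np.linalg.pinv(Phi)          # projection onto col(Phi)
    residual = Y - P_col @ Y                   # projection onto null(Phi^T)
    return float(np.sum(residual ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(50)
    Y = np.sin(x).reshape(-1, 1)
    Phi = np.vander(x, N=4, increasing=True)   # assumed polynomial basis, degree 3
    print("global minimum value:", basis_regression_min_loss(Phi, Y))
```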
From theorem 2 (or corollary 1), one can see the following properties of the loss landscape:
- i. Every differentiable local minimum has a loss value better than or equal to any global minimum value of basis function regression with any combination of the basis matrices in the set of fixed deep hierarchical representation matrices. In particular with , every differentiable local minimum has a loss value no worse than the global minimum values of standard basis function regression with the handcrafted basis matrix , and of basis function regression with the larger basis matrix .
- ii. As and increase (or, equivalently, as a neural network gets wider and deeper), the upper bound on the loss values of local minima can further improve.
The proof of theorem 2 is provided in section A.2. The proof combines the idea presented in section 3.1 with perturbations of a local minimum candidate. That is, if a is a local minimum, then the is a global minimum within a local region (i.e., a neighborhood of ). Thus, after perturbing as such that is sufficiently small (so that stays in the local region) and , the must still be a global minimum within the local region and, hence, the is also a local minimum. The proof idea of theorem 2 is to apply the proof sketch in section 3.1 to not only a local minimum candidate but also its perturbations .
In terms of overparameterization, theorem 2 states that local minima of deep neural networks are as good as global minima of the corresponding basis function regression even without overparameterization, and overparameterization helps to further improve the guarantee on local minima. The effect of overparameterization is captured in both the first and second terms on the right-hand side of equation 5.1. As depth and width increase, the second term tends to increase, and hence the guarantee on local minima can improve. Moreover, as depth and width increase (for some of th layers in theorem 2), the first term tends to decrease and the guarantee on local minima can also improve. For example, if has rank at least , then the first term is zero and, hence, every local minimum is a global minimum with zero loss value. As a special case of this example, since every is automatically in , if is forced to have rank at least , every local minimum becomes a global minimum for standard deep nonlinear neural networks, which coincides with the observation about overparameterization by Livni et al. (2014).
Without overparameterization, theorem 2 also recovers one of the main results in the literature of deep linear neural networks as a special case—that is, every local minimum is a global minimum. If , every local minimum for deep linear networks is differentiable and in , and hence theorem 1 yields that . Because is the global minimum value, this implies that every local minimum is a global minimum for deep linear neural networks.
Corollary 1 states that the same conclusion and discussions as in theorem 2 hold true even if we fix the edges in condition iii in definition 1 to be zero (by removing them as an architectural design or by forcing them to zero with a learning algorithm) and consider optimization problems only with the remaining edges.
The proof of corollary 1 is provided in section A.3 and follows the proof of theorem 2. Here, consists of training inputs in the arbitrary given feature space embedded in ; for example, given a raw input and any feature map (including identity as ), we write . Therefore, theorem 2 and corollary 1 state that every differentiable local minimum of deep neural networks can be guaranteed to be no worse than any given basis function regression model with a handcrafted basis taking values in with some finite , such as polynomial regression with a finite degree and radial basis function regression with a finite number of centers.
To illustrate an advantage of the notion of weakly separated edges in definition 1, one can consider the following alternative definition that requires strongly separated edges.
A parameter vector is said to induce strongly separated linear units on the training input data set if there exist sets such that for all , conditions i to iii in definition 1 hold and for all if .
Let be the set of all parameter vectors that induce strongly separated linear units on the particular training input data set that defines the total loss in equation 2.1. Figure 2 shows a comparison of weakly separated edges and strongly separated edges. Under this stronger restriction on the local structure, we can obtain corollary 2.
The proof of corollary 2 is provided in section A.4 and follows the proof of theorem 2. As a special case, corollary 2 also recovers the statement that every local minimum is a global minimum for deep linear neural networks in the same way as in theorem 2. When compared with theorem 2, one can see that the statement in corollary 2 is weaker, producing the upper bound only in terms of . This is because the restriction of strongly separated units forces neural networks to have less expressive power with fewer effective edges. This illustrates an advantage of the notion of weakly separated edges in definition 1.
A limitation in theorems 1 and 2 and corollary 1 is the lack of treatment of nondifferentiable local minima. The Lebesgue measure of nondifferentiable points is zero, but this does not imply that the appropriate measure of nondifferentiable points is small. For example, if , the Lebesgue measure of the nondifferentiable point () is zero, but the nondifferentiable point is the only local and global minimum. Thus, the treatment of nondifferentiable points in this context is a nonnegligible problem. The proofs of theorems 1 and 2 and corollary 1 are all based on the proof sketch in section 3.1, which heavily relies on the differentiability. Thus, the current proofs do not trivially extend to address this open problem.
6 Conclusion
In this letter, we have theoretically and empirically analyzed the effect of depth and width on the loss values of local minima, with and without a possible local nonlinear-linear structure. The local nonlinear-linear structure we have considered might naturally arise during training and is also guaranteed to emerge by using specific learning algorithms or architecture designs. With the local nonlinear-linear structure, we have proved that the values of local minima of neural networks are no worse than the global minimum values of corresponding basis function regression and can improve as depth and width increase. In the general case without the possible local structure, we have theoretically shown that increasing the depth and width can improve the quality of local minima, and we empirically supported this theoretical observation. Furthermore, without the local structure but with a shallow neural network and a gaussian data matrix, we have proven probabilistic bounds on the rates of the improvements on the local minimum values with respect to width. Moreover, we have discussed a major limitation of this letter: all of its results focus on the differentiable points on the loss surfaces. Additional treatments of the nondifferentiable points are left to future research.
Our results suggest that the values of local minima are not arbitrarily poor (unless one crafts a pathological worst-case example) and can be guaranteed to some desired degree in practice, depending on the degree of overparameterization, as well as the local or global structural assumption. Indeed, a structural assumption, namely the existence of an identity map, was recently used to analyze the quality of local minima (Shamir, 2018; Kawaguchi & Bengio, 2018). When compared with these previous studies (Shamir, 2018; Kawaguchi & Bengio, 2018), we have shown the effect of depth and width, as well as considered a different type of neural network without the explicit identity map.
In practice, we often “overparameterize” a hypothesis space in deep learning in a certain sense (e.g., in terms of expressive power). Theoretically, with strong overparameterization assumptions, we can show that every stationary point (including all local minima) with respect to a single layer is a global minimum with the zero training error and can memorize any data set. However, “overparameterization” in practice may not satisfy such strong overparameterization assumptions in the theoretical literature. In contrast, our results in this letter do not require overparameterization and show the gradual effects of overparameterization as consequences of general results.
Appendix A: Proofs for Nonprobabilistic Statements
Let be defined in theorem 2. Let and . Given a matrix-valued function , let be the partial derivative of with respect to . Let if . Let if . Let be the null space of a matrix . Let be an open ball of radius with the center at .
The following lemma decomposes the model output in terms of the weight matrix and that coincides with its derivatives at differentiable points.
Lemma 2 generalizes part of theorem A.45 in Rao, Toutenburg, Shalabh, and Heumann (2007) by discarding invertibility assumptions.
Lemma 3 decomposes a norm of a projected target vector into a form that clearly shows an effect of depth and width.
The following lemma plays a major role in the proof of theorem 2.
A.1 Proof of Theorem 1
A.2 Proof of Theorem 2
A.3 Proof of Corollary 1
A.4 Proof of Corollary 2
Appendix B: Proofs for Probabilistic Statements
In the following lemma, we rewrite equation 3.2 in terms of the activation pattern, and data matrices .
From equation B.1, we expect that the larger the rank of the projection matrix , the smaller the loss . In the following lemma, we prove that under conditions on the activation pattern matrix . In the regime , we have . In the regime , we have . As we show later, proposition 2 follows easily from the rank estimates of .
Fix the activation pattern matrix . Let be a random gaussian matrix, with each entry having mean zero and variance one. Then the matrix as defined in equation B.2 satisfies both of the following statements:
- i. If and for any index set with , then with probability at least .
- ii. If with and for any index set with , then with probability at least .
The following concentration inequalities for the square of gaussian random variables are from Laurent and Massart (2000).
Appendix C: Additional Experimental Details
By using the ground-truth network described in section 4.3, the synthetic data set was generated with i.i.d. random inputs and i.i.d. random weight matrices . Each input was randomly sampled from the standard normal distribution, and each entry of the weight matrix was randomly sampled from a normal distribution with zero mean and normalized standard deviation ().
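A generation procedure in this spirit can be sketched as follows. The input dimension, widths, depth, and data-set size below are placeholders, since the exact values of the ground-truth network are elided in this copy of the text; the "normalized standard deviation" is assumed here to be 1/sqrt(fan-in), and the linear output layer is likewise our assumption.

```python
import numpy as np

# Placeholder sizes; the ground-truth network in the letter fixes specific
# values for the input dimension, widths, depth, and data-set size.
M, D_IN, WIDTH, DEPTH, D_OUT = 1000, 10, 16, 3, 1
rng = np.random.default_rng(0)

def normalized_weight(fan_in, fan_out):
    # Zero-mean normal entries with "normalized" std, assumed to be 1/sqrt(fan_in).
    return rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))

X = rng.standard_normal((M, D_IN))             # i.i.d. standard normal inputs
H = X
dims = [D_IN] + [WIDTH] * DEPTH
for fan_in, fan_out in zip(dims[:-1], dims[1:]):
    H = np.tanh(H @ normalized_weight(fan_in, fan_out))   # tanh ground truth
Y = H @ normalized_weight(dims[-1], D_OUT)     # assumed linear output layer
```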
For training, we used a standard training procedure with mini-batch stochastic gradient descent (SGD) with momentum. The learning rate was set to 0.01. The momentum coefficient was set to 0.9 for the synthetic data set and 0.5 for the image data sets. The mini-batch size was set to 200 for the synthetic data set and 64 for the image data sets.
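This optimization setup translates into a standard PyTorch configuration as sketched below, using the stated learning rate, momentum coefficients, and mini-batch sizes; the network architecture shown is a placeholder, not the one used in the experiments.

```python
import torch

IS_IMAGE_DATA = True                     # switch between the two settings above
model = torch.nn.Sequential(             # placeholder fully connected ReLU network
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,                              # learning rate stated above
    momentum=0.5 if IS_IMAGE_DATA else 0.9,
)
batch_size = 64 if IS_IMAGE_DATA else 200
criterion = torch.nn.MSELoss()            # squared loss, as used in the letter
```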
From the proof of theorem 1, for all , which was used to numerically compute the values of . This is mainly because the form of in theorem 1 may accumulate positive numerical errors for each and in the sum in its second term, which may easily cause a numerical overestimation of the effect of depth and width. To compute the projections, we adopted a method of computing a numerical cutoff criterion on singular values from Press, Teukolsky, Vetterling, and Flannery (2007) as (the numerical cutoff criterion) = (maximum singular value of ) (machine precision of ) (), for a matrix of . We also confirmed that the reported experimental results remained qualitatively unchanged with two other different cutoff criteria: a criterion based on Golub and Van Loan (1996) as (the numerical cutoff criterion) = (machine precision of ) (where for a matrix of ), as well as another criterion based on Netlib Repository LAPACK documentation as (the numerical cutoff criterion) = (maximum singular value of ) (machine precision of ).
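Such a cutoff can be implemented as below (ours). The dimension-dependent factor in the Press et al. criterion is elided in this copy of the text, so the default factor shown is an assumption rather than the exact value used in the experiments.

```python
import numpy as np

def numerical_rank(A, factor=None):
    """Rank of A with singular values below a cutoff treated as zero.

    The cutoff has the form (max singular value) * (machine precision) * factor.
    The exact dimension-dependent factor used in the letter is not reproduced
    here; by default we use max(A.shape) as a common, assumed choice.
    """
    s = np.linalg.svd(A, compute_uv=False)
    eps = np.finfo(A.dtype).eps
    if factor is None:
        factor = max(A.shape)
    cutoff = s[0] * eps * factor
    return int(np.sum(s > cutoff)), cutoff

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 30))  # rank 5
    rank, cutoff = numerical_rank(A)
    print("numerical rank:", rank, "cutoff:", cutoff)
```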
Acknowledgments
We gratefully acknowledge support from NSF grants 1523767 and 1723381; AFOSR grant FA9550-17-1-0165; ONR grant N00014-18-1-2847; Honda Research; and the MIT-Sensetime Alliance on AI. Any opinions, findings, and conclusions or recommendations expressed in this material are our own and do not necessarily reflect the views of our sponsors.