
Transcript of Lower Bounds for Finding Stationary Points I

Yair Carmon John C. Duchi Oliver Hinder Aaron Sidford {yairc,jduchi,ohinder,sidford}@stanford.edu
Abstract
We prove lower bounds on the complexity of finding ε-stationary points (points x such that ‖∇f(x)‖ ≤ ε) of smooth, high-dimensional, and potentially non-convex functions f. We consider oracle-based complexity measures, where an algorithm is given access to the value and all derivatives of f at a query point x. We show that for any (potentially randomized) algorithm A, there exists a function f with Lipschitz pth order derivatives such that A requires at least ε−(p+1)/p queries to find an ε-stationary point. Our lower bounds are sharp to within constants, and they show that gradient descent, cubic-regularized Newton's method, and generalized pth order regularization are worst-case optimal within their natural function classes.
1 Introduction
Consider the optimization problem

minimize_{x∈Rd} f(x)
where f : Rd → R is smooth, but possibly non-convex. In general, it is intractable to even approximately minimize such f [35, 33], so—following an established line of research—we consider the problem of finding an ε-stationary point of f , meaning some x ∈ Rd such that
‖∇f(x)‖ ≤ ε. (1)
We prove lower bounds on the number of function and derivative evaluations required for algorithms to find a point x satisfying inequality (1). While for arbitrary smooth f, a near-stationary point (1) is certainly insufficient for any type of optimality, there are a number of reasons to study algorithms and complexity for finding stationary points. In several statistical and engineering problems, including regression models with non-convex penalties and objectives [30, 31], phase retrieval [12, 42], and non-convex (low-rank) reformulations of semidefinite programs and matrix completion [11, 27, 8], it is possible to show that all first- or second-order stationary points are (near) global minima. The strong empirical success of local search strategies for such problems, as well as for neural networks [28], motivates a growing body of work on algorithms with strong complexity guarantees for finding stationary points [40, 7, 15, 2, 13]. In contrast to this algorithmic progress, algorithm-independent lower bounds for finding stationary points are largely unexplored.
Even for non-convex functions f, it is possible to find ε-stationary points with a number of function and derivative evaluations polynomial in 1/ε and the dimension d of dom f. Of particular interest are methods for which the number of function and derivative evaluations does not depend on d, but instead depends on measures of f's regularity. The best-known method with such a dimension-free convergence guarantee is classical gradient descent: for every (non-convex) function f with L1-Lipschitz gradient satisfying f(x(0)) − infx f(x) ≤ Δ at the initial point x(0), gradient descent finds an ε-stationary point in at most 2L1Δε−2 iterations [37]. Under the additional assumption that f has Lipschitz continuous Hessian, our work [15] and Agarwal et al. [2] show how acceleration techniques improve this rate to ε−7/4 log(1/ε) (ignoring other problem-dependent constants). In subsequent work [13], we show a different deterministic accelerated gradient method that achieves dimension-free complexity ε−7/4 log(1/ε), and if f additionally has Lipschitz third derivatives, then ε−5/3 log(1/ε) iterations suffice to find an ε-stationary point. By evaluating higher order derivatives, such as the Hessian, it is possible to achieve better ε dependence. Nesterov and Polyak's cubic regularization of Newton's method [40, 16] guarantees ε-stationarity (1) in ε−3/2 iterations, but each iteration may be expensive when the dimension d is large. More generally, pth-order regularization methods iterate by sequentially minimizing models of f based on order-p Taylor approximations, and Birgin et al. [7] show that these methods converge in ε−(p+1)/p iterations. Each iteration requires finding an approximate stationary point of a high-dimensional, potentially non-convex, degree p+1 polynomial, which suggests that the methods will be practically challenging for p > 2. The methods nonetheless provide fundamental upper complexity bounds.
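To make the gradient descent guarantee above concrete, here is a minimal sketch (ours, with a toy objective not taken from the paper) that runs gradient descent with step size 1/L1 and stops at the first ε-stationary point, the process that the 2L1Δε−2 bound controls.

```python
import numpy as np

def gd_until_stationary(grad, x0, L1, eps, max_iters=10**6):
    """Gradient descent with step size 1/L1, stopped at the first eps-stationary point.

    For f with L1-Lipschitz gradient and f(x0) - inf f <= Delta, each step decreases
    f by at least ||grad||^2 / (2*L1), giving the classical 2*L1*Delta/eps^2 bound.
    """
    x = np.asarray(x0, dtype=float)
    for t in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, t  # t oracle queries sufficed
        x = x - g / L1
    return x, max_iters

# Toy non-convex objective f(x) = sum_i x_i^2/(1+x_i^2), whose gradient is 2x/(1+x^2)^2.
grad_f = lambda x: 2 * x / (1 + x ** 2) ** 2
x, t = gd_until_stationary(grad_f, x0=np.ones(5), L1=2.0, eps=1e-4)
```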
In this paper and its companion [14], we focus on the converse problem: providing dimension-free complexity lower bounds for finding ε-stationary points. We show fundamental limits on the best achievable ε dependence, as well as dependence on other problem parameters. Together with known upper bounds, our results shed light on the optimal rates of convergence for finding stationary points.
1.1 Related lower bounds
In the case of convex optimization, we have a deep understanding of the complexity of finding ε-suboptimal points, that is, x satisfying f(x) ≤ f(x⋆) + ε for some ε > 0, where x⋆ ∈ argminx f(x). Here we review only the dimension-free optimal rates, as those are most relevant for our results. Given a point x(0) satisfying ‖x(0) − x⋆‖ ≤ D < ∞, if f is convex with L1-Lipschitz gradient, Nesterov's accelerated gradient method finds an ε-suboptimal point in √L1 D ε−1/2 gradient evaluations, which is optimal even among randomized, higher-order algorithms [36, 35, 37, 45].1 For non-smooth problems, that is, when f is L0-Lipschitz, subgradient methods achieve the optimal rate of L0²D²/ε² subgradient evaluations (cf. [10, 35, 37]). In Part II of this paper [14], we consider the impact of convexity on the difficulty of finding stationary points using first-order methods.

Globally optimizing smooth non-convex functions is of course intractable: Nemirovski and Yudin [35, §1.6] show that for functions f : Rd → R with Lipschitz 1st through pth derivatives, and algorithms receiving all derivatives of f at the query point x, the worst-case complexity of finding ε-suboptimal points scales at least as (1/ε)^{d/p}. This exponential scaling in d shows that dimension-free guarantees for achieving near-optimality in smooth non-convex functions are impossible to obtain.

Less is known about lower bounds for finding stationary points for f : Rd → R. Nesterov [39] proposes lower bounds for finding stationary points under a box constraint, but his construction does not extend to the unconstrained case when f(x(0)) − infx f(x) is bounded. Vavasis [44] considers the complexity of finding ε-stationary points of functions with Lipschitz derivatives in a first-order (gradient and function-value) oracle model. For such problems, he proves a lower bound of ε−1/2 oracle queries that applies to any deterministic algorithm operating on certain two-dimensional functions. This appears to be the first algorithm-independent lower bound for approximating stationary points of non-convex functions, but it is unclear if the bound is tight, even for functions on R².
1 Higher-order methods can yield improvements under additional smoothness: if in addition f has L2-Lipschitz Hessian and ε ≤ L1^{7/3} L2^{−4/3} D^{2/3}, an accelerated Newton method achieves the (optimal) rate (L2D³/ε)^{2/7} [4, 32].
A related line of work considers algorithm-dependent lower bounds, describing functions that are challenging for common classes of algorithms, such as Newton's method and gradient descent. In this vein, Jarre [25] shows that the Chebyshev-Rosenbrock function is difficult to optimize, and that any algorithm that employs line search to determine the step size will require an exponential (in ε) number of iterations to find an ε-suboptimal point, even though the Chebyshev-Rosenbrock function has only a single stationary point. While this appears to contradict the polynomial complexity guarantees mentioned above, Cartis et al. [19] explain it by showing that the difficult Chebyshev-Rosenbrock instances have ε-stationary points with function values that are ω(ε)-suboptimal. Cartis et al. also develop algorithm-specific lower bounds on the iteration complexity of finding approximate stationary points. Their works [16, 17] show that the performance guarantees for gradient descent and cubic regularization of Newton's method are tight for two-dimensional functions they construct, and they also extend these results to certain structured classes of methods [18, 20].
1.2 The importance of high-dimensional constructions
To tightly characterize the algorithm- and dimension-independent complexity of finding ε-stationary points, one must construct hard instances whose domain has dimension that grows with 1/ε. The reason for this is simple: there exist algorithms whose complexity trades dependence on the dimension d in favor of better 1/ε dependence. Indeed, Vavasis [44] gives a grid-search method that, for functions with Lipschitz gradient, finds an ε-stationary point in max{2d, ε−2d/(d+2)} gradient and function evaluations. Moreover, Hinder [24] exhibits a cutting-plane method that, for functions with Lipschitz first and third derivatives, finds an ε-stationary point in d · ε−4/3 log(1/ε) gradient and function evaluations.
High-dimensional constructions are similarly unavoidable when developing lower bounds in convex optimization. There, the center-of-gravity cutting plane method (cf. [37]) finds an ε-suboptimal point in d log(1/ε) (sub)gradient evaluations, for any continuous convex function with bounded distance to optimality. Consequently, proofs of the dimension-free lower bound for convex optimization (as we cite in the previous section) all rely on constructions whose dimensionality grows polynomially in 1/ε.
Our paper continues this well-established practice, and our lower bounds apply in the following order of quantifiers: for all ε > 0, there exists a dimension d ∈ N such that for any d′ ≥ d and algorithm A, there is some f : Rd′ → R such that A requires at least T(ε) oracle queries to find an ε-stationary point of f. Our bounds on deterministic algorithms require dimension d = 1 + 2T(ε), while our bounds on all randomized algorithms require d = c · T(ε)² log T(ε) for a numerical constant c < ∞. In contrast, the results of Vavasis [44] and Cartis et al. [16, 17, 18, 20] hold with d = 2 independent of ε. Inevitably, they do so at a cost; the lower bound [44] is loose, while the lower bounds [16, 17, 18, 20] apply only to restricted algorithm classes that exclude the aforementioned grid-search and cutting-plane algorithms.
1.3 Our contributions
In this paper, we consider the class of all randomized algorithms that access the function f through an information oracle that returns the function value, gradient, Hessian and all higher-order derivatives of f at a queried point x. Our main result (Theorem 2 in Section 5) is as follows. Let p ∈ N and Δ, Lp, ε > 0. Then, for any randomized algorithm A based on the oracle described above, there exists a function f that has Lp-Lipschitz pth derivative, satisfies f(x(0)) − f(x⋆) ≤ Δ, and is such that, with high probability, A requires at least

cp · Δ Lp^{1/p} ε−(p+1)/p

oracle queries to find an ε-stationary point of f, where cp > 0 is a constant decreasing at most polynomially in p. As explained in the previous section, the domain of the constructed function f has dimension polynomial in 1/ε.
For every p, our lower bound matches (up to a constant) known upper bounds, thereby characterizing the optimal complexity of finding stationary points. For p = 1, our results imply that gradient descent [37, 39] is optimal among all methods (even randomized, high-order methods) operating on functions with Lipschitz continuous gradient and bounded initial sub-optimality. Therefore, to strengthen the guarantees of gradient descent one must introduce additional assumptions, such as convexity of f or Lipschitz continuity of ∇²f. Similarly, in the case p = 2 we establish that cubic regularization of Newton's method [40, 16] achieves the optimal rate ε−3/2, and for general p we show that pth order Taylor-approximation methods [7] are optimal.
These results say little about the potential of first-order methods on functions with higher-order Lipschitz derivatives, where first-order methods attain rates better than ε−2 [13]. In Part II of this series [14], we address this issue and show lower bounds for deterministic algorithms using only first-order information. The lower bounds exhibit a fundamental gap between first- and second-order methods, and nearly match the known upper bounds [13].
1.4 Our approach and paper organization
In Section 2 we introduce the classes of functions and algorithms we consider, as well as our notion of complexity. Then, in Section 3, we present the generic technique we use to prove lower bounds for deterministic algorithms in both this paper and Part II [14]. While essentially present in previous work, our technique abstracts away and generalizes the central arguments in many lower bounds [35, 34, 45, 4]. The technique applies to higher-order methods and provides lower bounds for general optimization goals, including finding stationary points (our main focus), approximate minimizers, and second-order stationary points. It is also independent of whether the functions under consideration are convex, applying to any function class with appropriate rotational invariance [35]. The key building blocks of the technique are Nesterov's notion of a "chain-like" function [37], which is difficult for a certain subclass of algorithms, and a "resisting oracle" [35, 37] reduction that turns a lower bound for this subclass into a lower bound for all deterministic algorithms.
In Section 4 we apply this generic method to produce lower bounds for deterministic methods (Theorem 1). The deterministic results underpin our analysis for randomized algorithms, which culminates in Theorem 2 in Section 5. Following Woodworth and Srebro [45], we consider random rotations of our deterministic construction, and show that for any algorithm such a randomly rotated function is, with high probability, difficult. For completeness, in Section 6 we provide lower bounds on finding stationary points of functions where ‖x(0) − x⋆‖ is bounded, rather than the function value gap f(x(0)) − f(x⋆); these bounds have the same ε dependence as their bounded-function-value counterparts.
Notation Before continuing, we provide the conventions we adopt throughout the paper. For a sequence of vectors, subscripts denote coordinate index, while parenthesized superscripts denote element index; e.g., x(i)j is the jth coordinate of the ith entry in the sequence x(1), x(2), . . .. For any p ≥ 1 and p times continuously differentiable f : Rd → R, we let ∇pf(x) denote the tensor of pth order partial derivatives of f at the point x, so ∇pf(x) is an order-p symmetric tensor with entries

[∇pf(x)]_{i1,...,ip} = ∂^p f(x) / (∂x_{i1} ∂x_{i2} · · · ∂x_{ip}).
Equivalently, we may write ∇pf(x) as a multilinear operator ∇pf(x) : (Rd)p → R, via

∇pf(x)[v(1), . . . , v(p)] = ⟨∇pf(x), v(1) ⊗ · · · ⊗ v(p)⟩,

where ⟨·, ·⟩ is the Euclidean inner product on tensors, defined for order-k tensors T and M by ⟨T, M⟩ = Σ_{i1,...,ik} T_{i1,...,ik} M_{i1,...,ik}, and ⊗ denotes the Kronecker product. We let R^{⊗k d} denote R^{d×···×d} (k times), so that T ∈ R^{⊗k d} is an order-k tensor. For a vector v ∈ Rd we let ‖v‖ := √⟨v, v⟩ denote the Euclidean norm of v. For a tensor T ∈ R^{⊗k d}, the Euclidean operator norm of T is

‖T‖op := sup_{‖v(1)‖=···=‖v(k)‖=1} |⟨T, v(1) ⊗ · · · ⊗ v(k)⟩|.
If T is a symmetric order-k tensor, meaning that T_{i1,...,ik} is invariant to permutations of the indices (for example, ∇kf(x) is always symmetric), then Zhang et al. [47, Thm. 2.1] show that

‖T‖op = sup_{‖v‖=1} |⟨T, v⊗k⟩|, where v⊗k = v ⊗ v ⊗ · · · ⊗ v (k times). (2)
For vectors, the Euclidean and operator norms are identical. For any n ∈ N, we let [n] := {1, . . . , n} denote the set of positive integers less than or equal to n. We let C∞ denote the set of infinitely differentiable functions. We denote the ith standard basis vector by e(i), and let Id ∈ Rd×d denote the d×d identity matrix; we drop the subscript d when it is clear from context. For any set S and functions g, h : S → [0,∞) we write g ≲ h or g = O(h) if there exists c > 0 such that g(s) ≤ c · h(s) for every s ∈ S. We write g = Õ(h) if g ≲ h log(h + 2).
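As a quick illustration of identity (2), the sketch below (ours, not from the paper) crudely estimates the operator norm of a symmetric tensor by Monte Carlo search over unit vectors; it is a sanity check rather than an efficient algorithm.

```python
import numpy as np

def symmetric_opnorm_estimate(T, num_samples=50000, seed=0):
    """Estimate ||T||_op via identity (2): sample unit vectors v and evaluate
    |<T, v x ... x v>| by contracting one tensor mode at a time."""
    rng = np.random.default_rng(seed)
    k, d = T.ndim, T.shape[0]
    best = 0.0
    for _ in range(num_samples):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        val = T
        for _ in range(k):  # successive contractions compute <T, v^{(x)k}>
            val = np.tensordot(val, v, axes=([0], [0]))
        best = max(best, abs(float(val)))
    return best  # a lower estimate of ||T||_op

# Example: for T = v0 x v0 x v0 (symmetric, rank one, order 3), ||T||_op = ||v0||^3.
v0 = np.array([3.0, 4.0])
T = np.einsum('i,j,k->ijk', v0, v0, v0)
print(symmetric_opnorm_estimate(T))  # close to 125 = ||v0||^3
```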
2 Preliminaries
We begin our development with definitions of the classes of functions (§ 2.1), classes of algorithms (§ 2.2), and notions of complexity (§ 2.3) that we study.
2.1 Function classes
Measures of function regularity are crucial for the design and analysis of optimization algorithms [37, 9, 35]. We focus on two types of regularity conditions: Lipschitzian properties of derivatives and bounds on function value.
We first list a few equivalent definitions of Lipschitz continuity. A function f : Rd → R has Lp-Lipschitz pth order derivatives if it is p times continuously differentiable, and for every x ∈ Rd and direction v ∈ Rd, ‖v‖ ≤ 1, the directional projection fx,v(t) := f(x + t·v) of f, defined for t ∈ R, satisfies

|f(p)x,v(t) − f(p)x,v(t′)| ≤ Lp |t − t′| for all t, t′ ∈ R,

where f(p)x,v(·) is the pth derivative of t ↦ fx,v(t). If f is p+1 times continuously differentiable, this is equivalent to requiring |f(p+1)x,v(0)| ≤ Lp, or ‖∇p+1f(x)‖op ≤ Lp, for all x, v ∈ Rd with ‖v‖ ≤ 1. We occasionally refer to a function with Lipschitz pth order derivatives as pth order smooth.
Complexity guarantees for finding stationary points of non-convex functions f typically depend on the function value bound f(x(0)) − infx f(x), where x(0) is a pre-specified point. Without loss of generality, we take the pre-specified point to be 0 for the remainder of the paper. With that in mind, we define the following classes of functions.
Definition 1. Let p ≥ 1, Δ > 0 and Lp > 0. Then the set

Fp(Δ, Lp)

denotes the union, over d ∈ N, of the collection of C∞ functions f : Rd → R with Lp-Lipschitz pth derivative and f(0) − infx f(x) ≤ Δ.
The function classes Fp(Δ, Lp) include functions on Rd for all d ∈ N, following the established study of "dimension-free" convergence guarantees [35, 37]. As explained in Section 1.2, we construct explicit functions f : Rd → R that are difficult to optimize, where the dimension d is finite, but our choice of d grows inversely in the desired accuracy of the solution.
For our results, we also require the following important invariance notion, proposed (in the context of optimization) by Nemirovski and Yudin [35, Ch. 7.2].
Definition 2 (Orthogonal invariance). A class of functions F is orthogonally invariant if for every f ∈ F, f : Rd → R, and every matrix U ∈ Rd′×d such that U⊤U = Id, the function fU : Rd′ → R defined by fU(x) = f(U⊤x) belongs to F.
Every function class we consider is orthogonally invariant, as f(0)− infx f(x) = fU (0)− infx fU (x) and fU has the same Lipschitz constants to all orders as f , as their collections of associated directional projections are identical.
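A small numerical sanity check (ours, with a toy f) of why such classes are invariant: by the chain rule, ∇fU(x) = U∇f(U⊤x), and U⊤U = Id implies ‖Uv‖ = ‖v‖, so gradient norms (and, similarly, the Lipschitz constants of all directional projections) are preserved.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_prime = 3, 7
U, _ = np.linalg.qr(rng.standard_normal((d_prime, d)))  # U in R^{d' x d}, U^T U = I_d
grad_f = lambda y: np.cos(y)                            # toy f(y) = sum_i sin(y_i)
x = rng.standard_normal(d_prime)
grad_fU = U @ grad_f(U.T @ x)                           # chain rule: grad f_U(x) = U grad f(U^T x)
assert np.isclose(np.linalg.norm(grad_fU), np.linalg.norm(grad_f(U.T @ x)))
```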
2.2 Algorithm classes
We also require careful definition of the classes of optimization algorithms we consider. For any dimension d ∈ N, an algorithm A (also referred to as a method) maps functions f : Rd → R to a sequence of iterates in Rd; that is, A is defined separately for every finite d. We let

A[f] = {x(t)}t∈N

denote the sequence x(t) ∈ Rd of iterates that A generates when operating on f.

To model the computational cost of an algorithm, we adopt the information-based complexity framework, which Nemirovski and Yudin [35] develop (see also [43, 1, 10]), and view every iterate x(t) as a query to an information oracle. Typically, one places restrictions on the information the oracle returns (e.g. only the function value and gradient at the query point) and makes certain assumptions on how the algorithm uses this information (e.g. deterministically). Our approach is syntactically different but semantically identical: we build the oracle restriction, along with any other assumption, directly into the structure of the algorithm. To formalize this, we define
∇(0,...,p)f(x) := {f(x),∇f(x),∇2f(x), . . . ,∇pf(x)}
as shorthand for the response of a pth order oracle to a query at point x. When p = ∞ this corresponds to an oracle that reveals all derivatives at x. Our algorithm classes follow.
Deterministic algorithms For any p ≥ 0, a pth-order deterministic algorithm A operating on f : Rd → R is one producing iterates of the form

x(i) = A(i)(∇(0,...,p)f(x(1)), . . . , ∇(0,...,p)f(x(i−1))) for i ∈ N,

where A(i) is a measurable mapping to Rd (the dependence on the dimension d is implicit). We denote the class of pth-order deterministic algorithms by A(p)det, and let Adet := A(∞)det denote the class of all deterministic algorithms based on derivative information.
As a concrete example, for any p ≥ 1 and L > 0 consider the algorithm REGp,L ∈ A(p)det that produces iterates by minimizing the sum of a pth order Taylor expansion and an order p+1 proximal term:

x(k+1) := argmin_x { Σ_{q=0}^{p} (1/q!) ⟨∇qf(x(k)), (x − x(k))^{⊗q}⟩ + (L/(p+1)!) ‖x − x(k)‖^{p+1} }. (3)

For p = 1, REGp,L is gradient descent with step-size 1/L, for p = 2 it is cubic-regularized Newton's method [40], and for general p it is a simplified form of the scheme that Birgin et al. [7] propose.
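The following sketch (ours) instantiates one step of the recursion (3) for p ∈ {1, 2}, using the L/(p+1)! normalization from our reconstruction of (3); for p = 2, a generic local solver from scipy stands in for an exact minimizer of the cubic-regularized model.

```python
import numpy as np
from scipy.optimize import minimize

def reg_step(x, f, grad, hess, p, L):
    """One iteration of REG_{p,L} (recursion (3)) for p in {1, 2}."""
    if p == 1:
        return x - grad(x) / L          # the model minimizer is a 1/L gradient step
    g, H = grad(x), hess(x)
    def model(y):                       # second-order Taylor model plus cubic proximal term
        d = y - x
        return f(x) + g @ d + 0.5 * d @ H @ d + (L / 6.0) * np.linalg.norm(d) ** 3
    return minimize(model, x).x         # local solver; exact schemes solve this globally

# Usage on a toy quadratic: iterates of the p = 2 step converge to the minimizer at 0.
f = lambda x: 0.5 * x @ x
x = np.ones(3)
for _ in range(5):
    x = reg_step(x, f, grad=lambda z: z, hess=lambda z: np.eye(3), p=2, L=1.0)
```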
Randomized algorithms (and function-informed processes) A pth-order randomized algorithm A is a distribution on pth-order deterministic algorithms. We can write any such algorithm as a deterministic algorithm given access to a uniform random variable on [0, 1] (i.e. infinitely many random bits). Thus the algorithm operates on f by drawing ξ ∼ Uni[0, 1] (independently of f), then producing iterates of the form

x(i) = A(i)(ξ, ∇(0,...,p)f(x(1)), . . . , ∇(0,...,p)f(x(i−1))) for i ∈ N, (4)

where the A(i) are measurable mappings into Rd. In this case, A[f] is a random sequence, and we call a random process {x(t)}t∈N informed by f if it has the same law as A[f] for some randomized algorithm A. We let A(p)rand denote the class of pth-order randomized algorithms, and Arand := A(∞)rand the class of randomized algorithms that use derivative-based information.
Zero-respecting sequences and algorithms While deterministic and randomized algorithms are the natural collections for which we prove lower bounds, it is useful to define an additional structurally restricted class. This class forms the backbone of our lower bound strategy (Sec. 3), as it is both ‘small’ enough to uniformly underperform on a single function, and ‘large’ enough to imply lower bounds on the natural algorithm classes.
For v ∈ Rd we let supp{v} := {i ∈ [d] | vi ≠ 0} denote the support (non-zero indices) of v. We extend this to tensors as follows. Let T ∈ R^{⊗k d} be an order-k tensor, and for i ∈ [d] let Ti ∈ R^{⊗(k−1) d} be the order-(k−1) tensor defined by [Ti]_{j1,...,jk−1} = T_{i,j1,...,jk−1}. With this notation, we define

supp{T} := {i ∈ [d] | Ti ≠ 0}.

Then for p ∈ N and any f : Rd → R, we say that the sequence x(1), x(2), . . . is pth order zero-respecting with respect to f if

supp{x(t)} ⊆ ∪_{q∈[p]} ∪_{s<t} supp{∇qf(x(s))} for each t ∈ N. (5)
The definition (5) says that x(t)i = 0 if all partial derivatives involving the ith coordinate of f (up to the pth order) are zero at the previous iterates. For p = 1, this definition is equivalent to the requirement that for every t and j ∈ [d], if ∇jf(x(s)) = 0 for all s < t, then x(t)j = 0. The requirement (5) implies that x(1) = 0.
An algorithm A ∈ Arand is pth order zero-respecting if for any f : Rd → R, the (potentially random) iterate sequence A[f] is pth order zero-respecting w.r.t. f. Informally, an algorithm is zero-respecting if it never explores coordinates which appear not to affect the function. When initialized at the origin, most common first- and second-order optimization methods are zero-respecting, including gradient descent (with and without Nesterov acceleration), conjugate gradient [23], BFGS and L-BFGS [29, 41],2 Newton's method (with and without cubic regularization [40]) and trust-region methods [22]. We denote the class of pth order zero-respecting algorithms by A(p)zr, and let Azr := A(∞)zr.
In the literature on lower bounds for first-order convex optimization, it is common to assume that methods only query points in the span of the gradients they observe [37, 3]. Our notion of zero-respecting algorithms generalizes this assumption to higher-order methods, but even first-order zero-respecting algorithms are slightly more general. For example, coordinate descent methods [38] are zero-respecting, but they generally do not remain in the span of the gradients.
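The first-order (p = 1) case of the definition is easy to check mechanically. Below is a minimal checker (ours; the tolerance handles floating point, and grad is any gradient oracle).

```python
import numpy as np

def is_first_order_zero_respecting(iterates, grad, tol=1e-12):
    """Check the p = 1 case of (5): x^{(t)}_j must vanish unless some earlier
    gradient had a nonzero jth entry. In particular, the first iterate must be 0."""
    revealed = set()  # coordinates with nonzero gradient entries at past queries
    for x in iterates:
        if any(abs(xj) > tol for j, xj in enumerate(x) if j not in revealed):
            return False
        revealed |= set(np.flatnonzero(np.abs(grad(x)) > tol))
    return True
```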
2.3 Complexity measures
With the definitions of function and algorithm classes in hand, we turn to formalizing our notion of complexity: what is the best performance an algorithm in class A can achieve for all functions in class F? As we consider finding stationary points of f, the natural performance measure is the number of iterations (oracle queries) required to find a point x such that ‖∇f(x)‖ ≤ ε. Thus for a deterministic sequence {x(t)}t∈N we define

Tε({x(t)}t∈N, f) := inf{t ∈ N | ‖∇f(x(t))‖ ≤ ε},

and refer to it as the complexity of {x(t)}t∈N on f. As we consider randomized algorithms as well, for a random process {x(t)}t∈N with probability distribution P, meaning that for a set A ⊂ (Rd)N the probability that {x(t)}t∈N ∈ A is P(A), we define

Tε(P, f) := inf{t ∈ N | P(Tε({x(s)}s∈N, f) > t) ≤ 1/2}. (6)
The quantity Tε(P, f) is also the median of the random variable Tε({x(t)}t∈N, f) for {x(t)}t∈N ∼ P. By Markov's inequality, definition (6) provides a lower bound on expectation-based alternatives, as

EP[Tε({x(t)}t∈N, f)] ≥ (1/2) · Tε(P, f).
(Here EP denotes expectation taken according to the distribution P.) To measure the performance of algorithm A on function f, we evaluate the iterates it produces from f and, with mild abuse of notation, define

Tε(A, f) := Tε(A[f], f)

as the complexity of A on f. With this setup, we define the complexity of algorithm class A on function class F as

Tε(A, F) := inf_{A∈A} sup_{f∈F} Tε(A, f). (7)

2 If the initial Hessian approximation is a diagonal matrix, as is typical.
Many algorithms guarantee "dimension independent" convergence [37] and thus provide upper bounds for the quantity (7). A careful tracing of constants in the analysis of Birgin et al. [7] implies that the generalized regularization scheme REGp,Lp defined by the recursion (3) guarantees

Tε(A(p)det, Fp(Δ, Lp)) ≤ sup_{f∈Fp(Δ,Lp)} Tε(REGp,Lp, f) ≲ Δ Lp^{1/p} ε−(1+p)/p (8)
for all p ∈ N. In this paper we prove these rates are sharp to within (p-dependent) constant factors.

While definition (7) is our primary notion of complexity, our proofs provide bounds on smaller quantities than (7) that also carry meaning. For zero-respecting algorithms, we exhibit a single function f and bound inf_{A∈Azr} Tε(A, f) from below, in effect interchanging the inf and sup in (7). This implies that all zero-respecting algorithms share a common vulnerability. For randomized algorithms, we exhibit a distribution P supported on functions of a fixed dimension d, and we lower bound the average inf_{A∈Arand} ∫ Tε(A, f) dP(f), bounding the distributional complexity [35, 10], which is never greater than the worst-case complexity (and is equal to it for randomized and deterministic algorithms). Even randomized algorithms share a common vulnerability: functions drawn from P.
3 Anatomy of a lower bound
In this section we present a generic approach to proving lower bounds for optimization algorithms. The basic techniques we use are well-known and applied extensively in the literature on lower bounds for convex optimization [35, 37, 45, 4]. However, here we generalize and abstract away these techniques, showing how they apply to high-order methods, non-convex functions, and various optimization goals (e.g. ε-stationarity, ε-optimality).
3.1 Zero-chains
Nesterov [37, Chapter 2.1.2] proves lower bounds for smooth convex optimization problems using the "chain-like" quadratic function

f(x) := (1/2)(x1 − 1)² + (1/2) Σ_{i=1}^{d−1} (xi − xi+1)², (9)

which he calls the "worst function in the world." The important property of f is that for every i ∈ [d], ∇if(x) = 0 whenever xi−1 = xi = xi+1 = 0 (with x0 := 1 and xd+1 := 0). Thus, if we "know" only the first t − 1 coordinates of f, i.e. are able to query only vectors x such that xt = xt+1 = · · · = xd = 0, then any x we query satisfies ∇sf(x) = 0 for s > t; we only "discover" a single new coordinate t. We generalize this chain structure to higher-order derivatives as follows.
Definition 3. For p ∈ N, a function f : Rd → R is a pth-order zero-chain if for every x ∈ Rd,

supp{x} ⊆ {1, . . . , i − 1} implies ∪_{q∈[p]} supp{∇qf(x)} ⊆ {1, . . . , i}.

We say f is a zero-chain if it is a pth-order zero-chain for every p ∈ N.
In our terminology, Nesterov's function (9) is a first-order zero-chain but not a second-order zero-chain, as supp{∇²f(0)} = [d]. Informally, at a point for which xi−1 = xi = · · · = xd = 0, a zero-chain appears constant in xi, xi+1, . . . , xd. Zero-chains structurally limit the rate at which zero-respecting algorithms acquire information from derivatives. We formalize this in the following observation, whose proof is a straightforward induction; see Table 1 for an illustration.
Observation 1. Let f : Rd → R be a pth order zero-chain and let x(1) = 0, x(2), . . . be a pth order zero-respecting sequence with respect to f. Then x(t)j = 0 for all j ≥ t and all t ≤ d.

Proof. We show by induction on k that supp{x(t)} ⊆ [t − 1] for every t ≤ k; the case k = d is the required result. The case k = 1 holds since x(1) = 0. If the hypothesis holds for some k < d, then by Definition 3 we have ∪_{q∈[p]} supp{∇qf(x(t))} ⊆ [t] for every t ≤ k. Thus, by the zero-respecting property (5), we have

supp{x(k+1)} ⊆ ∪_{q∈[p]} ∪_{t<k+1} supp{∇qf(x(t))} ⊆ [k],

completing the induction.
iteration    information      j = 1    2    3    4    · · ·    d−1    d
t = 0        x(0)               0      0    0    0    · · ·     0     0
             ∇f(x(0))           ∗      0    0    0    · · ·     0     0
t = 1        x(1)               ∗      0    0    0    · · ·     0     0
             ∇f(x(1))           ∗      ∗    0    0    · · ·     0     0
t = 2        x(2)               ∗      ∗    0    0    · · ·     0     0
             ∇f(x(2))           ∗      ∗    ∗    0    · · ·     0     0
  ⋮             ⋮
t = d−1      ∇f(x(d−1))         ∗      ∗    ∗    ∗    · · ·     ∗     ∗

Table 1: Illustration of Observation 1: a zero-respecting algorithm operating on a zero-chain. We indicate the nonzero entries of the iterates and the gradients by ∗.
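Observation 1 and Table 1 are easy to reproduce numerically. The sketch below (ours) runs gradient descent, a zero-respecting method, on our reconstruction of the chain function (9) and prints the support of each iterate; exactly one new coordinate appears per step.

```python
import numpy as np

d = 8
def chain_grad(x):
    # gradient of f(x) = 0.5*(x_1 - 1)^2 + 0.5*sum_{i<d} (x_i - x_{i+1})^2, cf. (9)
    g = np.zeros_like(x)
    g[0] = x[0] - 1.0
    diffs = x[:-1] - x[1:]
    g[:-1] += diffs
    g[1:] -= diffs
    return g

x = np.zeros(d)
for t in range(1, 5):
    x = x - 0.25 * chain_grad(x)                      # gradient descent is zero-respecting
    print(t, np.flatnonzero(np.abs(x) > 0).tolist())  # support within the first t coordinates
```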
3.2 A lower bound strategy
The preceding discussion shows that zero-respecting algorithms take many iterations to “discover” all the coordinates of a zero-chain. In the following observation, we formalize how finding a suitable zero-chain provides a lower bound on the performance of zero-respecting algorithms.
Observation 2. Consider ε > 0, a function class F, and p, T ∈ N. If f : RT → R satisfies

i. f is a pth-order zero-chain,

ii. f belongs to the function class, i.e. f ∈ F, and

iii. ‖∇f(x)‖ > ε for every x such that xT = 0,3

then Tε(A(p)zr, F) ≥ Tε(A(p)zr, {f}) > T.

3 We can readily adapt this property for lower bounds on other termination criteria, e.g. require f(x) − infy f(y) > ε for every x such that xT = 0.
Proof. For A ∈ A(p)zr and {x(t)}t∈N = A[f], we have by Observation 1 that x(t)T = 0 for all t ≤ T, and the large gradient property (iii) then implies ‖∇f(x(t))‖ > ε for all t ≤ T. Therefore Tε(A, f) > T, and since this holds for any A ∈ A(p)zr, we have

Tε(A(p)zr, F) ≥ Tε(A(p)zr, {f}) > T.
If f is a zero-chain, then so is the function x ↦ µf(x/σ) for any multiplier µ and scale parameter σ. This is useful for our development, as we construct zero-chains {gT}T∈N such that ‖∇gT(x)‖ > c for every x with xT = 0 and some constant c > 0. By setting f(x) = µgT(x/σ), then choosing T, µ, and σ to satisfy conditions (ii) and (iii), we obtain a lower bound. As our choice of T is also the final lower bound, it must grow to infinity as ε tends to zero. Thus, the hard functions we construct are fundamentally high-dimensional, making this strategy suitable only for dimension-free lower bounds.
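For concreteness, the scaling step behind this strategy can be recorded explicitly; the short derivation below is ours, under the setup f(x) = µ g(x/σ) just described.

```latex
% Rescaling a zero-chain g via f(x) = \mu\, g(x/\sigma) rescales every derivative:
\nabla^q f(x) = \frac{\mu}{\sigma^q}\, \nabla^q g(x/\sigma), \qquad q = 0, 1, 2, \dots
% So if \|\nabla g\| > c wherever x_T = 0, and \nabla^p g is \ell-Lipschitz, then
% \nabla^p f is (\mu \sigma^{-(p+1)} \ell)-Lipschitz and \|\nabla f\| > (\mu/\sigma)\, c there.
% Choosing \sigma so that \mu \sigma^{-(p+1)} \ell = L_p and \mu \sigma^{-1} c > \varepsilon
% meets conditions (ii) and (iii) of Observation 2; the value bound
f(0) - \inf_x f(x) = \mu \bigl( g(0) - \inf_x g(x) \bigr) \le \Delta
% then dictates how large T, and hence the final lower bound, may be taken.
```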
3.3 From deterministic to zero-respecting algorithms
Zero-chains allow us to generate strong lower bounds for zero-respecting algorithms. The following reduction shows that these lower bounds are valid for deterministic algorithms as well.
Proposition 1. Let p ∈ N ∪ {∞}, let F be an orthogonally invariant function class, and let ε > 0. Then

Tε(A(p)det, F) ≥ Tε(A(p)zr, F).
We also give a variant of Proposition 1 that is tailored to lower bounds constructed by means of Observation 2 and allows explicit accounting of dimensionality.
Proposition 2. Let p ∈ N ∪ {∞}, let F be an orthogonally invariant function class, let f ∈ F have domain of dimension d, and let ε > 0. If Tε(A(p)zr, {f}) ≥ T, then

Tε(A(p)det, {fU | U ∈ O(d + T, d)}) ≥ T,

where fU(z) := f(U⊤z) and O(d + T, d) is the set of (d + T) × d orthogonal matrices, so that {fU | U ∈ O(d + T, d)} contains only functions with domain of dimension d + T.
The proofs of Propositions 1 and 2, given in Appendix A, build on the classical notion of a resisting oracle [35, 37], which we briefly sketch here. Let A ∈ Adet, and let f ∈ F, f : Rd → R. We adversarially select an orthogonal matrix U ∈ Rd′×d (for some finite d′ > d) such that on the function fU := f(U⊤·) ∈ F the algorithm A behaves as if it were a zero-respecting algorithm. In particular, U is sequentially constructed such that for the function fU(z) the sequence U⊤A[fU] ⊂ Rd is zero-respecting with respect to f. Thus, there exists an algorithm ZA ∈ Adet ∩ Azr such that ZA[f] = U⊤A[fU], implying Tε(ZA, f) = Tε(A, fU), giving Proposition 1; Proposition 2 follows similarly, and for it we may take d′ = d + T.

The adversarial rotation argument that yields Propositions 1 and 2 is more or less apparent in the proofs of previous lower bounds in convex optimization [35, 45, 4] for deterministic algorithms. We believe it is instructive to separate the proof of lower bounds on Tε(Azr, F) from the reduction from deterministic to zero-respecting algorithms; the reduction relies on only two properties of the complexity measure Tε(·, ·):
1. Orthogonal invariance: for every f : Rd → R, every U ∈ Rd′×d such that U⊤U = Id, and every sequence {z(t)}t∈N ⊂ Rd′, we have

Tε({z(t)}t∈N, f(U⊤·)) = Tε({U⊤z(t)}t∈N, f).

2. "Stopping time" invariance: for any T0 ∈ N, if Tε({x(t)}t∈N, f) ≤ T0, then Tε({x̃(t)}t∈N, f) = Tε({x(t)}t∈N, f) for any sequence {x̃(t)}t∈N such that x̃(t) = x(t) for t ≤ T0.
These properties hold for the typical performance measures used in optimization. Examples include the time to ε-optimality, in which case Tε({x(t)}t∈N, f) = inf{t ∈ N | f(x(t)) − infx f(x) ≤ ε}, as well as the time to ε-stationarity (1), our focus in this paper.
3.4 Randomized algorithms
Propositions 1 and 2 do not apply to randomized algorithms, as they require the adversary (the maximizing choice of f) to simulate the action of A on f. To handle randomized algorithms, we strengthen the notion of a zero-chain as follows.

Definition 4. A function f : Rd → R is a robust zero-chain if for every x ∈ Rd and every i ∈ [d],

|xj| < 1/2 for all j ≥ i implies f(y) = f(y1, . . . , yi, 0, . . . , 0) for all y in a neighborhood of x.

A robust zero-chain is also an "ordinary" zero-chain. In Section 5 we replace the adversarial rotation U of § 3.3 with an orthogonal matrix drawn uniformly at random, and consider the random function fU(x) = f(U⊤x), where f is a robust zero-chain. We adapt a lemma by Woodworth and Srebro [45], and use it to show that for every A ∈ Arand, A[fU] satisfies an approximate form of Observation 1 (w.h.p.) whenever the iterates A[fU] have bounded norm. With further modification of fU to handle unbounded iterates, our zero-chain strategy yields a strong distributional complexity lower bound on Arand.
4 Lower bounds for zero-respecting and deterministic algorithms
For our first main results, we provide lower bounds on the complexity of all deterministic algorithms for finding stationary points of smooth, potentially non-convex functions. By Observation 2 and Proposition 1, to prove a lower bound on deterministic algorithms it is sufficient to construct a function that is difficult for zero-respecting algorithms. For fixed T ∈ N, we define the (unscaled) hard instance fT : Rd → R as

fT(x) = −Ψ(1)Φ(x1) + Σ_{i=2}^{T} [Ψ(−xi−1)Φ(−xi) − Ψ(xi−1)Φ(xi)], (10)

where the component functions are

Ψ(x) := 0 for x ≤ 1/2, Ψ(x) := exp(1 − 1/(2x − 1)²) for x > 1/2, and Φ(x) := √e ∫_{−∞}^{x} e^{−t²/2} dt.
Our construction, illustrated in Figure 1, has two key properties. First, fT is a zero-chain (Observation 3 in the sequel). Second, as we show in Lemma 2, ‖∇fT(x)‖ is large unless |xi| ≥ 1 for every i ∈ [T]. These properties make it hard for any zero-respecting method to find a stationary point of scaled versions of fT, and coupled with Proposition 1, this gives a lower bound for deterministic algorithms.
Figure 1: Hard instance for full derivative information. Left: the functions Ψ and Φ (top) and their derivatives (bottom). Right: Surface and contour plot of a two-dimensional cross-section of the hard instance fT .
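The component functions lend themselves to a direct numerical check. The sketch below (ours) implements our reconstructions of Ψ, Φ, and fT from (10), and verifies the "dead zone" behind Observation 3: a coordinate with |xj| < 1/2 whose predecessor is also small is invisible to fT.

```python
import numpy as np
from scipy.special import erf

def Psi(x):
    # Psi(x) = 0 for x <= 1/2 and exp(1 - 1/(2x-1)^2) for x > 1/2 (our reconstruction)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    pos = x > 0.5
    out[pos] = np.exp(1.0 - 1.0 / (2 * x[pos] - 1.0) ** 2)
    return out if out.size > 1 else float(out[0])

def Phi(x):
    # Phi(x) = sqrt(e) * int_{-inf}^x exp(-t^2/2) dt, in closed form via erf
    return np.sqrt(np.e) * np.sqrt(np.pi / 2) * (1.0 + erf(np.asarray(x, dtype=float) / np.sqrt(2.0)))

def f_T(x):
    # the unscaled hard instance (10)
    val = -Psi(1.0) * Phi(x[0])
    for i in range(1, len(x)):
        val += Psi(-x[i - 1]) * Phi(-x[i]) - Psi(x[i - 1]) * Phi(x[i])
    return float(val)

x = np.concatenate([2 * np.ones(3), np.zeros(5)])  # coordinates 4..8 in the dead zone
y = x.copy(); y[5] = 0.4                           # perturb a dead coordinate below 1/2
assert np.isclose(f_T(x), f_T(y))                  # f_T cannot see the perturbation
```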
4.1 Properties of the hard instance
Before turning to the main theorem of this section, we catalogue the important properties of the functions Ψ, Φ and fT .
Lemma 1. The functions Ψ and Φ satisfy the following.
i. For all x ≤ 1/2 and all k ∈ N, Ψ(k)(x) = 0.
ii. For all x ≥ 1 and |y| < 1, Ψ(x)Φ′(y) > 1.
iii. Both Ψ and Φ are infinitely differentiable, and for all k ∈ N we have

sup_x |Ψ(k)(x)| ≤ exp((5k/2) log(4k)).
iv. The functions and derivatives Ψ, Ψ′, Φ and Φ′ are non-negative and bounded, with

0 ≤ Ψ < e, 0 ≤ Ψ′ ≤ √(54/e), 0 < Φ < √(2πe), and 0 < Φ′ ≤ √e.
We prove Lemma 1 in Appendix B.1. The remainder of our development relies on Ψ and Φ only through Lemma 1. Therefore, the precise choice of Ψ, Φ is not particularly special; any two functions with properties similar to Lemma 1 would yield similar lower bounds.
The key consequence of Lemma 1.i is that the function fT is a robust zero-chain (see Definition 4), and consequently also a zero-chain (Definition 3):

Observation 3. For any j > 1, if |xj−1|, |xj| < 1/2, then fT(y) = fT(y1, . . . , yj−1, 0, yj+1, . . . , yT) for all y in a neighborhood of x.

Applying Observation 3 for j = i + 1, . . . , T shows that fT is a robust zero-chain by Definition 4. Taking derivatives of fT(x1, . . . , xi, 0, . . . , 0) with respect to xj, j > i, shows that fT is also a zero-chain by Definition 3. Thus, Observation 1 shows that any zero-respecting algorithm operating on fT requires T + 1 iterations to find a point where xT ≠ 0.
Next, we establish the "large gradient property": ‖∇fT(x)‖ must be large if any coordinate of x is near zero.
Lemma 2. If |xi| < 1 for some i ≤ T, then there exists j ≤ i such that |xj| < 1 and

‖∇fT(x)‖ ≥ |∂fT(x)/∂xj| > 1.
Proof. We take j ≤ i to be the smallest j for which |xj| < 1, so that |xj−1| ≥ 1 (where we use the shorthand x0 ≡ 1). Therefore, we have

∂fT(x)/∂xj = −Ψ(−xj−1)Φ′(−xj) − Ψ(xj−1)Φ′(xj) − Ψ′(−xj)Φ(−xj+1) − Ψ′(xj)Φ(xj+1)
  ≤(i) −Ψ(−xj−1)Φ′(−xj) − Ψ(xj−1)Φ′(xj) =(ii) −Ψ(|xj−1|)Φ′(xj sign(xj−1)) <(iii) −1.

In the chain of inequalities, inequality (i) follows because Ψ′(x)Φ(y) ≥ 0 for every x, y; equality (ii) follows because Ψ(x) = 0 for x ≤ 1/2; and inequality (iii) follows from Lemma 1.ii and the pairing of |xj| < 1 and |xj−1| ≥ 1.
Finally, we verify that fT meets the smoothness and boundedness requirements of the function classes we consider.
Lemma 3. The function fT satisfies the following.

i. We have fT(0) − infx fT(x) ≤ 12T.

ii. For all x ∈ Rd, ‖∇fT(x)‖ ≤ 23√T.

iii. For every p ≥ 1, the pth order derivatives of fT are ℓp-Lipschitz continuous, where ℓp ≤ exp((5/2) p log p + cp) for a numerical constant c < ∞.
The proof of Lemma 3 is technical, so we defer it to Appendix B.2. In the lemma, properties i and iii allow us to guarantee that appropriately scaled versions of fT are in Fp(Δ, Lp). Property ii is necessary for the analysis of the randomized construction in Section 5.
4.2 Lower bounds for zero-respecting and deterministic algorithms
We can now state and prove a lower bound for finding stationary points of pth order smooth functions using full derivative information and zero-respecting algorithms (the class Azr). Proposition 1 transforms this bound into one on all deterministic algorithms (the class Adet).
Theorem 1. There exist numerical constants 0 < c0, c1 < ∞ such that the following lower bound holds. Let p ∈ N, and let Δ, Lp, and ε be positive. Then

Tε(Adet, Fp(Δ, Lp)) ≥ c0 Δ (Lp/ℓp)^{1/p} ε−(1+p)/p,

where ℓp ≤ e^{(5/2) p log p + c1 p}. The lower bound holds even if we restrict Fp(Δ, Lp) to functions whose domain has dimension 1 + 2c0 Δ (Lp/ℓp)^{1/p} ε−(1+p)/p.
Before we prove the theorem, a few remarks are in order. First, our lower bound matches the upper bound (8) that pth-order regularization schemes achieve [7], up to a constant depending polynomially on p. Thus, although our lower bound applies to algorithms given access to ∇qf(x) for all q ∈ N, only the first p derivatives are necessary to achieve minimax optimal scaling in Δ, Lp, and ε.
Second, inspection of the proof shows that we actually bound smaller quantities than the complexity defined in Eq. (7). Indeed, we show that taking T ≳ Δ(Lp/ℓp)^{1/p} ε−(1+p)/p in the construction (10) and appropriately scaling fT yields a function f : RT → R that has Lp-Lipschitz continuous pth derivative, and for which any zero-respecting algorithm generates iterates such that ‖∇f(x(t))‖ > ε for every t ≤ T. That is,

inf_{A∈Azr} Tε(A, f) > T ≳ Δ(Lp/ℓp)^{1/p} ε−(1+p)/p,

which is stronger than a lower bound on Tε(Azr, Fp(Δ, Lp)). Combined with the reduction in Proposition 2, this implies that for any deterministic algorithm A ∈ Adet there exists an orthogonal U ∈ R^{(2T+1)×T} for which fU(x) = f(U⊤x) is difficult, i.e. Tε(A, f(U⊤·)) > T.
Finally, the scaling of ℓp with p may appear strange, or perhaps extraneous. We provide two viewpoints on this. First, one expects that the smoothness constants Lp should grow quickly as p grows; for C∞ functions such as φ(t) = e^{−t²} or φ(t) = log(1 + e^t), sup_t |φ(p)(t)| grows super-exponentially in p. Indeed, ℓp is the Lipschitz constant of the pth derivative of fT. Second, the cases of main practical interest are p ∈ {1, 2}, where ℓp^{1/p} ≲ p^{5/2} can be considered a numerical constant. This is because, for p ≥ 3, the only known methods with dimension-free rate of convergence ε−(p+1)/p [7] require full access to third derivatives, which is generally impractical. Therefore, a realistic discussion of the complexity of finding stationary points with smoothness of order p ≥ 3 must include additional restrictions on the algorithm class.
4.3 Proof of Theorem 1
To prove Theorem 1, we set up the hard instance f : RT → R for some integer T by appropriately scaling the function fT defined in Eq. (10):

f(x) := (Lp σ^{p+1}/ℓp) fT(x/σ),

for a scale parameter σ > 0 to be determined, where ℓp ≤ e^{(5/2) p log p + c1 p} is as in Lemma 3.iii. We wish to show that f satisfies the conditions of Observation 2. Observation 3 implies condition i (f is a zero-chain). Therefore it remains to show parts ii and iii of Observation 2. Consider any x ∈ RT such that xT = 0. Applying Lemma 2 guarantees that ‖∇fT(x/σ)‖ > 1, and therefore

‖∇f(x)‖ = (Lp σ^p/ℓp) ‖∇fT(x/σ)‖ > Lp σ^p/ℓp. (11)

It remains to choose T and σ based on ε such that ‖∇f(x)‖ > ε and f ∈ Fp(Δ, Lp). By the lower bound (11), the choice σ = (ℓp ε/Lp)^{1/p} guarantees ‖∇f(x)‖ > ε. We note that ∇^{p+1}f(x) = (Lp/ℓp)∇^{p+1}fT(x/σ), and therefore by Lemma 3.iii the pth order derivatives of f are Lp-Lipschitz continuous. Thus, to ensure f ∈ Fp(Δ, Lp) it suffices to show that f(0) − infx f(x) ≤ Δ. By the first part of Lemma 3 we have

f(0) − infx f(x) = (Lp σ^{p+1}/ℓp)(fT(0) − infx fT(x)) ≤ (Lp σ^{p+1}/ℓp) · 12T = 12 (ℓp/Lp)^{1/p} ε^{(p+1)/p} T,

where in the last transition we substituted σ = (ℓp ε/Lp)^{1/p}. We conclude that f ∈ Fp(Δ, Lp) for

T = ⌊(Δ/12)(Lp/ℓp)^{1/p} ε−(1+p)/p⌋,

and Observation 2, combined with Proposition 1, yields the claimed lower bound on Tε(Adet, Fp(Δ, Lp)) ≥ Tε(Azr, Fp(Δ, Lp)), where functions of dimension 2T + 1 suffice to establish it.
5 Lower bounds for randomized algorithms
With our lower bounds on the complexity of deterministic algorithms established, we turn to the class of all randomized algorithms. We provide strong distributional complexity lower bounds by exhibiting a distribution on functions such that a function drawn from it is "difficult" for any randomized algorithm, with high probability. We do this via the composition of a random orthogonal transformation with the function fT defined in (10).

The key steps in our deterministic bounds are (a) to show that any algorithm can "discover" at most one coordinate per iteration and (b) finding an approximate stationary point requires "discovering" T coordinates. In the context of randomized algorithms, we must elaborate this development in two ways. First, in Section 5.1 we provide a "robust" analogue of Observation 1 (step (a) above): we show that for a random orthogonal matrix U, any sequence of bounded iterates {x(t)}t∈N based on derivatives of fT(U⊤·) must (with high probability) satisfy |⟨x(t), u(j)⟩| ≤ 1/2 for all t and j ≥ t, so that by Lemma 2, ‖∇fT(U⊤x(t))‖ must be large (step (b)). Second, in Section 5.2 we further augment our construction to force boundedness of the iterates by composing fT(U⊤·) with a soft projection, so that an algorithm cannot "cheat" with unbounded iterates. Finally, we present our general lower bounds in Section 5.3.
5.1 Random rotations and bounded iterates
To transform our hard instance (10) into a hard instance distribution, we introduce an orthogonal matrix U ∈ Rd×T (with columns u(1), . . . , u(T)), and define

fT;U(x) := fT(U⊤x) = fT(⟨u(1), x⟩, . . . , ⟨u(T), x⟩). (12)

We assume throughout that U is chosen uniformly at random from the space of orthogonal matrices O(d, T) = {V ∈ Rd×T | V⊤V = IT}; unless otherwise stated, the probabilistic statements we give are with respect to this uniform U in addition to any randomness in the algorithm that produces the iterates. With this definition, we have the following extension of Observation 1 to randomized iterates, which we prove for fT but which is valid for any robust zero-chain (Definition 4). Recall that a sequence is informed by f if it has the same distribution as A[f] for some randomized algorithm A (with iteration (4)).
Lemma 4. Let δ > 0 and R ≥ √T, and let x(1), . . . , x(T) be informed by fT;U and bounded, so that ‖x(t)‖ ≤ R for each t ≤ T. If d ≥ 52 T R² log(2T²/δ), then with probability at least 1 − δ, for all t ≤ T and each j ∈ {t, . . . , T}, we have

|⟨u(j), x(t)⟩| < 1/2.
The result of Lemma 4 is identical (to constant factors) to an important result of Woodworth and Srebro [45, Lemma 7], but we must be careful with the sequential conditioning of randomness between the iterates x(t), the random orthogonal U , and how much information the sequentially computed derivatives may leak. Because of this additional care, we require a modification of their original proof,4 which we provide in Section B.3, giving a rough outline here. For a fixed t < T ,
4 In a recent note Woodworth and Srebro [46] independently provide a revision of their proof that is similar, but not identical, to the one we propose here.
assume that |⟨u(j), x(s)⟩| < 1/2 holds for every pair s ≤ t and j ∈ {s, . . . , T}; we argue that this (roughly) implies that |⟨u(j), x(t+1)⟩| < 1/2 for every j ∈ {t + 1, . . . , T} with high probability, completing the induction. When the assumption that |⟨u(j), x(s)⟩| < 1/2 holds, the robust zero-chain property of fT (Definition 4 and Observation 3) implies that for every s ≤ t we have

fT;U(y) = fT(⟨u(1), y⟩, . . . , ⟨u(s), y⟩, 0, . . . , 0)

for all y in a neighborhood of x(s). That is, we can compute all the derivatives of fT;U at x(s) from x(s) and u(1), . . . , u(s), as fT is known. Therefore, given u(1), x(1), . . . , u(t), x(t), it is possible to reconstruct all the information the algorithm has collected up to iteration t. This means that beyond possibly revealing u(1), . . . , u(t), these derivatives contain no additional information on u(t+1), . . . , u(T). Consequently, any component of x(t+1) outside the span of u(1), x(1), . . . , u(t), x(t) is a complete "shot in the dark."

To give "shot in the dark" a more precise meaning, let ũ(j) be the projection of u(j) onto the orthogonal complement of span{u(1), x(1), . . . , u(t), x(t)}. We show that, conditioned on u(1), . . . , u(t) and the induction hypothesis, ũ(j) has a rotationally symmetric distribution in that subspace and is independent of x(t+1). Therefore, by concentration of measure arguments on the sphere [5], we have |⟨ũ(j), x(t+1)⟩| ≲ ‖x(t+1)‖/√d ≤ R/√d with high probability. Using an appropriate induction hypothesis, this is sufficient to guarantee that for every t + 1 ≤ j ≤ T, |⟨u(j), x(t+1)⟩| ≲ R√((T log T)/d), which is bounded by 1/2 for sufficiently large d.
5.2 Handling unbounded iterates
In the deterministic case, the adversary (choosing the hard function f) can select the rotation matrix U to be exactly orthogonal to all past iterates; this is impossible for randomized algorithms. The construction (12) thus fails for unbounded random iterates: as long as x(t) and u(j) are not exactly orthogonal, their inner product will exceed 1/2 for sufficiently large ‖x(t)‖, thus breaching the "dead zone" of Ψ and providing the algorithm with information on u(j). To prevent this, we force the algorithm to access fT;U only at points with bounded norm, by first passing the iterates through a smooth mapping from Rd to a ball around the origin. We denote our final hard instance construction by f̂T;U : Rd → R, and define it as

f̂T;U(x) = fT;U(ρ(x)) + (1/10)‖x‖², where ρ(x) = x/√(1 + ‖x‖²/R²) and R = 230√T. (13)
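A two-line sketch (ours, with T = 10 as an assumed example) of the soft projection in (13): ρ maps all of Rd into the open ball of radius R, so composed queries can never leave the bounded regime Lemma 4 requires.

```python
import numpy as np

def rho(x, R):
    # soft projection from (13): ||rho(x)|| = ||x||/sqrt(1 + ||x||^2/R^2) < R for all x
    return x / np.sqrt(1.0 + (np.linalg.norm(x) / R) ** 2)

R = 230.0 * np.sqrt(10)                       # R = 230*sqrt(T) with T = 10
x = np.random.default_rng(1).standard_normal(50) * 1e8
assert np.linalg.norm(rho(x, R)) < R          # even enormous iterates stay inside the ball
```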
The quadratic term in f̂T;U guarantees that all points beyond a certain norm have a large gradient, which prevents the algorithm from trivially making the gradient small by increasing the norm of the iterates. The following lemma captures the hardness of f̂T;U for randomized algorithms.
Lemma 5. Let δ > 0, and let x(1), . . . , x(T) be informed by f̂T;U. If d ≥ 52 · 230² · T² log(2T²/δ), then, with probability at least 1 − δ,

‖∇f̂T;U(x(t))‖ > 1/2 for all t ≤ T.
Proof. For t ≤ T, set y(t) := ρ(x(t)). For every p ≥ 0 and t ∈ N, the quantity ∇p f̂T;U(x(t)) is measurable with respect to x(t) and {∇i fT;U(y(t))}_{i=0}^{p} (the chain rule shows it can be computed from these variables without additional dependence on U, as ρ is fixed). Therefore, the process y(1), . . . , y(T) is informed by fT;U (recall the defining iteration (4)). Since ‖y(t)‖ = ‖ρ(x(t))‖ ≤ R for every t, we may apply Lemma 4 with R = 230√T to obtain that with probability at least 1 − δ,

|⟨u(T), y(t)⟩| < 1/2 for every t ≤ T.

Therefore, by Lemma 2 with i = T, for each t there exists j ≤ T such that |⟨u(j), y(t)⟩| < 1 and

|⟨u(j), ∇fT;U(y(t))⟩| > 1. (14)

To show that ∇f̂T;U(x(t)) is also large, we consider separately the cases ‖x(t)‖ ≤ R/2 and ‖x(t)‖ ≥ R/2. For the first case, we use (∂ρ/∂x)(x) = (I − ρ(x)ρ(x)⊤/R²)/√(1 + ‖x‖²/R²) to write

⟨u(j), ∇f̂T;U(x(t))⟩ = [⟨u(j), ∇fT;U(y(t))⟩ − ⟨u(j), y(t)⟩⟨y(t), ∇fT;U(y(t))⟩/R²]/√(1 + ‖x(t)‖²/R²) + (1/5)⟨u(j), x(t)⟩.

For ‖x(t)‖ ≤ R/2 we have 1/√(1 + ‖x(t)‖²/R²) ≥ 2/√5 and |⟨u(j), x(t)⟩| = |⟨u(j), y(t)⟩| √(1 + ‖x(t)‖²/R²) ≤ √5/2. By Lemma 3.ii we have ‖∇fT;U(y(t))‖ ≤ 23√T = R/10, which combined with (14) and the above display yields

‖∇f̂T;U(x(t))‖ ≥ |⟨u(j), ∇f̂T;U(x(t))⟩| ≥ (2/√5)(1 − 1/10) − √5/10 > 1/2.

In the second case, ‖x(t)‖ ≥ R/2, we have for any x satisfying ‖x‖ ≥ R/2 and y = ρ(x) that

‖∇f̂T;U(x)‖ ≥ (1/5)‖x‖ − ‖(∂ρ/∂x)(x)‖op ‖∇fT;U(y)‖ ≥ R/10 − (2/√5)(R/10) > 1/2,

where we used ‖(∂ρ/∂x)(x)‖op ≤ 1/√(1 + ‖x‖²/R²) ≤ 2/√5 and ‖∇fT;U(y)‖ ≤ 23√T = R/10.
As our lower bounds repose on appropriately scaling the function f̂T;U, it remains to verify that f̂T;U satisfies the few boundedness properties we require. We do so in the following lemma.
Lemma 6. The function f̂T;U satisfies the following.

i. We have f̂T;U(0) − infx f̂T;U(x) ≤ 12T.

ii. For every p ≥ 1, the pth order derivatives of f̂T;U are ℓ̂p-Lipschitz continuous, where ℓ̂p ≤ exp(c p log p + c) for a numerical constant c < ∞.
We defer the (computationally involved) proof of this lemma to Section B.4.
5.3 Final lower bounds
With Lemmas 5 and 6 in hand, we can state our lower bound for all algorithms, randomized or otherwise, given access to all derivatives of a C∞ function. Note that our construction also implies an identical lower bound for (slightly) more general algorithms that use any local oracle [35, 10], meaning that the information the oracle returns about a function f when queried at a point x is identical to that it returns when a function g is queried at x whenever f(z) = g(z) for all z in a neighborhood of x.
Theorem 2. There exist numerical constants 0 < c0, c1, c2 < ∞ such that the following lower bound holds. Let p ∈ N, and let Δ, Lp, and ε be positive. Then

Tε(Arand, Fp(Δ, Lp)) ≥ c0 Δ (Lp/ℓ̂p)^{1/p} ε−(1+p)/p,

where ℓ̂p ≤ e^{c1 p log p + c1}. The lower bound holds even if we restrict Fp(Δ, Lp) to functions where the domain has dimension 1 + c2 q(Δ (Lp/ℓ̂p)^{1/p} ε−(1+p)/p), where q(x) = x² log(2x).
We return to the proof of Theorem 2 in Sec. 5.4, following the same outline as that of Theorem 1, and provide some commentary here. An inspection of the proof to come shows that we actually demonstrate a stronger result than that claimed in the theorem. For any δ ∈ (0, 1), let d ≥ ⌈52 · 230² · T² log(2T²/δ)⌉, where T = ⌊c0 Δ (Lp/ℓ̂p)^{1/p} ε−(1+p)/p⌋ is as in the claimed lower bound. In the proof we construct a probability measure µ on functions in Fp(Δ, Lp), of fixed dimension d, such that
inf_{A∈Arand} ∫ PA(Tε(A[f], f) > T) dµ(f) > 1 − δ, (16)

where the randomness in PA depends only on A. Therefore, by definition (6), for any A ∈ Arand a function f drawn from µ satisfies

Tε(A, f) > T with probability greater than 1 − 2δ, (17)

implying Theorem 2 for any δ ≤ 1/2. Thus, we exhibit a randomized procedure for finding hard instances for any randomized algorithm that requires no knowledge of the algorithm itself.
Theorem 2 is stronger than Theorem 1 in that it applies to the broad class of all randomized algorithms. Our probabilistic analysis requires that the functions constructed to prove Theorem 2 have dimension scaling proportional to T² log T, where T is the lower bound on the number of iterations. Contrast this to Theorem 1, which only requires dimension 2T + 1. A similar gap exists in complexity results for convex optimization [45, 46]. At present, it is unclear if these gaps are fundamental or a consequence of our specific constructions.
5.4 Proof of Theorem 2
We set up our hard instance distribution fU : Rd → R, indexed by a uniformly distributed orthogonal matrix U ∈ O(d, T), by appropriately scaling the function f̂T;U defined in (13):

fU(x) := (Lp σ^{p+1}/ℓ̂p) f̂T;U(x/σ),

where the integer T and the scale parameter σ > 0 are to be determined, d = ⌈52 · 230² · T² log(4T²)⌉, and the quantity ℓ̂p ≤ exp(c1 p log p + c1), for a numerical constant c1, is defined in Lemma 6.ii.

Fix A ∈ Arand and let x(1), x(2), . . . , x(T) be the iterates produced by A applied to fU. Since fU and f̂T;U differ only by scaling, the iterates x(1)/σ, x(2)/σ, . . . , x(T)/σ are informed by f̂T;U (recall Sec. 2.2), and therefore we may apply Lemma 5 with δ = 1/2 and our large enough choice of dimension d to conclude that
P_{A,U}( ‖∇f_{T;U}(x^(t)/σ)‖ > 1 for each t ≤ T ) > 1/2,

where the probability is taken over both the random orthogonal U and any randomness in A. As A is arbitrary, and since ∇f_U(x) = (L_p σ^p/ℓ̂_p) ∇f_{T;U}(x/σ), taking σ = (2ℓ̂_p ε/L_p)^{1/p} makes ‖∇f_U(x^(t))‖ > 2ε > ε for each t ≤ T on this event, so this inequality becomes the desired strong inequality (16) with δ = 1/2 and µ induced by the distribution of U. Thus, by (17), for every A ∈ A_rand there exists U_A ∈ O(d, T) such that T_ε(A, f_{U_A}) ≥ 1 + T.
It remains to choose T to guarantee that f_U belongs to the relevant function class (bounded and smooth) for every orthogonal U. By Lemma 6.ii, f_U has L_p-Lipschitz continuous pth order derivatives. By Lemma 6.i, we have

f_U(0) − inf_x f_U(x) ≤ (L_p σ^{p+1}/ℓ̂_p) · 12T ≤ 48 (ℓ̂_p/L_p)^{1/p} ε^{(1+p)/p} T,

where in the last transition we have substituted σ = (2ℓ̂_p ε/L_p)^{1/p} and used 12 · 2^{(1+p)/p} ≤ 48. Setting T = ⌊(Δ/48)(L_p/ℓ̂_p)^{1/p} ε^{−(1+p)/p}⌋ gives f_U(0) − inf_x f_U(x) ≤ Δ, and f_U ∈ F_p(Δ, L_p), yielding the theorem.
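To make the parameter choices in the proof concrete, the following small sketch computes (σ, T, d) from (p, ε, Δ, L_p). The function name is ours; hat_ell_p stands for the constant ℓ̂_p of Lemma 6.ii, and the explicit constants 48 and 52 · 230² follow the displays above.

```python
import math

# A sketch of the parameter choices in the proof of Theorem 2. The name
# hard_instance_params is ours; hat_ell_p is the constant of Lemma 6.ii.
def hard_instance_params(p, eps, Delta, L_p, hat_ell_p):
    sigma = (2 * hat_ell_p * eps / L_p) ** (1.0 / p)  # makes L_p sigma^p / hat_ell_p = 2 eps
    T = math.floor((Delta / 48.0) * (L_p / hat_ell_p) ** (1.0 / p) * eps ** (-(1.0 + p) / p))
    d = math.ceil(52 * 230**2 * T**2 * math.log(4 * T**2))  # dimension used with delta = 1/2
    return sigma, T, d

# For p = 1 the iteration lower bound T scales as eps^{-2}:
print(hard_instance_params(p=1, eps=1e-2, Delta=1.0, L_p=1.0, hat_ell_p=20.0))
```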
6 Distance-based lower bounds
We have so far considered finding approximate stationary points of smooth functions with bounded sub-optimality at the origin, i.e. f(0) − inf_x f(x) ≤ Δ. In convex optimization, it is common to consider instead functions with bounded distance between the origin and a global minimum. We may consider a similar restriction for non-convex functions; for p ≥ 1 and positive L_p, D, let
F^dist_p(D, L_p) be the class of C^∞ functions with L_p-Lipschitz pth order derivatives satisfying

sup_{x ∈ argmin f} ‖x‖ ≤ D,   (18)
that is, all global minima have bounded distance to the origin. In this section we give a lower bound on the complexity of this function class that has the same ε dependence as our bound for the class F_p(Δ, L_p). This is in sharp contrast to convex optimization, where distance-bounded functions enjoy significantly better ε dependence than their value-bounded counterparts (see Section 3 in the companion [14]). Qualitatively, the reason for this difference is that the lack of convexity allows us to "hide" global minima close to the origin that are difficult to find for any algorithm with local function access [35].
We postpone the construction and proof to Appendix C, and move directly to the final bound.
Theorem 3. There exist numerical constants 0 < c_0, c_1, c_2 < ∞ such that the following lower bound holds. For any p ≥ 1, let D, L_p, and ε be positive. Then

T_ε(A_rand, F^dist_p(D, L_p)) ≥ c_0 D^{1+p} (L_p/ℓ′_p)^{(1+p)/p} ε^{−(1+p)/p},

where ℓ′_p ≤ e^{c_1 p log p + c_1}. The lower bound holds even if we restrict F^dist_p(D, L_p) to functions with domain of dimension 1 + c_2 q(D^{1+p} (L_p/ℓ′_p)^{(1+p)/p} ε^{−(1+p)/p}), where q(x) = x² log(2x).
We remark that a lower-dimensional construction suffices for proving the lower bound for deterministic algorithms, similarly to Theorem 1.
While we do not have a matching upper bound for Theorem 3, we can match its ε dependence in the smaller function class

F^dist_{1,p}(D, L_1, L_p) = F^dist_1(D, L_1) ∩ F^dist_p(D, L_p),

due to the fact that for any f : ℝ^d → ℝ with L_1-Lipschitz continuous gradient and global minimizer x⋆, we have f(x) − f(x⋆) ≤ (L_1/2) ‖x − x⋆‖² for all x ∈ ℝ^d [cf. 9, Eq. (9.13)]. Hence F^dist_{1,p}(D, L_1, L_p) ⊂ F_p(Δ, L_p), with Δ := (1/2) L_1 D², and consequently by the bound (8) we have

T_ε(A^(p), F^dist_{1,p}(D, L_1, L_p)) ≲ D² L_1 L_p^{1/p} ε^{−(1+p)/p}.
7 Conclusion
This work provides the first algorithm-independent and tight lower bounds on the dimension-free complexity of finding stationary points. As a consequence, we have characterized the optimal rates of convergence to ε-stationarity, under the assumption of high dimension and an oracle that provides all derivatives. Yet, given the importance of high-dimensional problems, the picture is incomplete: high-order algorithms, even second-order methods, are often impractical in large-scale settings. We address this in the companion [14], which provides sharper lower bounds for the more restricted class of first-order methods. In [14] we also provide a full conclusion for this paper sequence, discussing in depth the implications and questions that arise from our results.
Acknowledgments
OH was supported by the PACCAR INC fellowship. YC and JCD were partially supported by the SAIL-Toyota Center for AI Research, NSF-CAREER award 1553086, and a Sloan Foundation Fellowship in Mathematics. YC was partially supported by the Stanford Graduate Fellowship and the Numerical Technologies Fellowship.
References
[1] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization. IEEE Transactions on Information Theory, 58(5):3235–3249, 2012.
[2] N. Agarwal, Z. Allen-Zhu, B. Bullins, E. Hazan, and T. Ma. Finding approximate local minima faster than gradient descent. In Proceedings of the Forty-Ninth Annual ACM Symposium on the Theory of Computing, 2017.
[3] Y. Arjevani, S. Shalev-Shwartz, and O. Shamir. On lower and upper bounds in smooth and strongly convex optimization. Journal of Machine Learning Research, 17(126):1–51, 2016.
[4] Y. Arjevani, O. Shamir, and R. Shiff. Oracle complexity of second-order methods for smooth convex optimization. arXiv:1705.07260 [math.OC], 2017.
[5] K. Ball. An elementary introduction to modern convex geometry. In S. Levy, editor, Flavors of Geometry, pages 1–58. MSRI Publications, 1997.
[6] D. Berend and T. Tassa. Improved bounds on Bell numbers and on moments of sums of random variables. Probability and Mathematical Statistics, 30(2):185–205, 2010.
[7] E. G. Birgin, J. L. Gardenghi, J. M. Martínez, S. A. Santos, and P. L. Toint. Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models. Mathematical Programming, 163(1–2):359–368, 2017.
[8] N. Boumal, V. Voroninski, and A. Bandeira. The non-convex Burer-Monteiro approach works on smooth semidefinite programs. In Advances in Neural Information Processing Systems 30, pages 2757–2765, 2016.
[9] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[10] G. Braun, C. Guzmán, and S. Pokutta. Lower bounds on the oracle complexity of nonsmooth convex optimization via information theory. IEEE Transactions on Information Theory, 63(7), 2017.
[11] S. Burer and R. D. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329–357, 2003.
[12] E. J. Candes, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
[13] Y. Carmon, J. C. Duchi, O. Hinder, and A. Sidford. Convex until proven guilty: dimension-free acceleration of gradient descent on non-convex functions. In Proceedings of the 34th International Conference on Machine Learning, 2017.
[14] Y. Carmon, J. C. Duchi, O. Hinder, and A. Sidford. Lower bounds for finding stationary points II: First-order methods. arXiv:1711.00841 [math.OC], 2017. URL https://arxiv.org/pdf/1711.00841.pdf.
[15] Y. Carmon, J. C. Duchi, O. Hinder, and A. Sidford. Accelerated methods for non-convex optimization. SIAM Journal on Optimization, 28(2):1751–1772, 2018.
[16] C. Cartis, N. I. Gould, and P. L. Toint. On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization problems. SIAM Journal on Optimization, 20(6):2833–2852, 2010.
[17] C. Cartis, N. I. Gould, and P. L. Toint. Complexity bounds for second-order optimality in unconstrained optimization. Journal of Complexity, 28(1):93–108, 2012.
[18] C. Cartis, N. I. M. Gould, and P. L. Toint. How much patience do you have? A worst-case perspective on smooth nonconvex optimization. Optima, 88, 2012.
[19] C. Cartis, N. I. Gould, and P. L. Toint. A note about the complexity of minimizing Nesterov's smooth Chebyshev-Rosenbrock function. Optimization Methods and Software, 28:451–457, 2013.
[20] C. Cartis, N. I. M. Gould, and P. L. Toint. Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization. arXiv:1709.07180 [math.OC], 2017.
[21] S. Chowla, I. N. Herstein, and W. K. Moore. On recursions connected with symmetric groups I. Canadian Journal of Mathematics, 3:328–334, 1951.
[22] A. R. Conn, N. I. M. Gould, and P. L. Toint. Trust Region Methods. MPS-SIAM Series on Optimization. SIAM, 2000.
[23] W. W. Hager and H. Zhang. A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization, 2(1):35–58, 2006.
[24] O. Hinder. Cutting plane methods can be extended into nonconvex optimization. In Proceedings of the Thirty-First Annual Conference on Computational Learning Theory, 2018.
[25] F. Jarre. On Nesterov’s smooth Chebyshev-Rosenbrock function. Optimization Methods and Software, 28(3):478–500, 2013.
[26] C. Jin, R. Ge, P. Netrapalli, S. M. Kakade, and M. I. Jordan. How to escape saddle points efficiently. In Proceedings of the 34th International Conference on Machine Learning, 2017.
[27] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057–2078, 2010.
[28] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[29] D. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503–528, 1989.
[30] P.-L. Loh and M. J. Wainwright. High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity. Annals of Statistics, 40(3):1637–1664, 2012.
[31] P.-L. Loh and M. J. Wainwright. Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. Journal of Machine Learning Research, 16:559–616, 2013.
[32] R. D. Monteiro and B. F. Svaiter. An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods. SIAM Journal on Optimization, 23(2):1092–1125, 2013.
[33] K. Murty and S. Kabadi. Some NP-complete problems in quadratic and nonlinear programming. Mathematical Programming, 39:117–129, 1987.
[34] A. Nemirovski. Efficient methods in convex programming. Technion: The Israel Institute of Technology, 1994.
[35] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, 1983.
[36] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k2). Soviet Mathematics Doklady, 27(2):372–376, 1983.
[37] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, 2004.
[38] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[39] Y. Nesterov. How to make the gradients small. Optima, 88, 2012.
[40] Y. Nesterov and B. Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, Series A, 108:177–205, 2006.
[41] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2006.
[42] J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. Foundations of Computational Mathematics, 18(5):1131–1198, 2018.
[43] J. Traub, H. Wasilkowski, and H. Wozniakowski. Information-Based Complexity. Academic Press, 1988.
[44] S. A. Vavasis. Black-box complexity of local minimization. SIAM Journal on Optimization, 3 (1):60–80, 1993.
[45] B. E. Woodworth and N. Srebro. Tight complexity bounds for optimizing composite objectives. In Advances in Neural Information Processing Systems 30, pages 3639–3647, 2016.
[46] B. E. Woodworth and N. Srebro. Lower bound for randomized first order convex optimization. arXiv:1709.03594 [math.OC], 2017.
[47] X. Zhang, C. Ling, and L. Qi. The best rank-1 approximation of a symmetric tensor and related spherical optimization problems. SIAM Journal on Matrix Analysis and Applications, 33(3):806–821, 2012.
A Proof of Propositions 1 and 2
The core of the proofs of Propositions 1 and 2 is the following construction.
Lemma 7. Let p ∈ ℕ ∪ {∞}, T_0 ∈ ℕ and A ∈ A^(p)_det. There exists an algorithm Z_A ∈ A^(p)_zr with the following property. For every f : ℝ^d → ℝ there exists an orthogonal matrix U ∈ ℝ^{(d+T_0)×d} such that, for every ε > 0,

T_ε(A, f_U) > T_0 or T_ε(A, f_U) ≥ T_ε(Z_A, f),

where f_U(x) := f(U^⊤x).
Proof. We explicitly construct Z_A with the following slightly stronger property. For every f : ℝ^d → ℝ there exists an orthogonal U ∈ ℝ^{(d+T_0)×d}, U^⊤U = I_d, such that f_U(x) := f(U^⊤x) satisfies that the first T_0 iterates in the sequences Z_A[f] and U^⊤A[f_U] are identical. (Recall the notation A[f] = {a^(t)}_{t∈ℕ}, where a^(t) are the iterates of A on f, and we use the obvious shorthand U^⊤{a^(t)}_{t∈ℕ} = {U^⊤a^(t)}_{t∈ℕ}.)
Before explaining the construction of Z_A, let us see how its defining property implies the lemma. If T_ε(A, f_U) ≤ T_0, then

T_ε(Z_A, f) = T_ε(Z_A[f], f) = T_ε(U^⊤A[f_U], f) =_(i) T_ε(A[f_U], f_U) = T_ε(A, f_U),   (19)

as required. The equality (i) follows because ‖Ug‖ = ‖g‖ for all orthogonal U, so for any sequence {a^(t)}_{t∈ℕ}

T_ε({a^(t)}_{t∈ℕ}, f_U) = inf{ t ∈ ℕ : ‖∇f_U(a^(t))‖ ≤ ε } = inf{ t ∈ ℕ : ‖∇f(U^⊤a^(t))‖ ≤ ε } = T_ε({U^⊤a^(t)}_{t∈ℕ}, f),

where we use ∇f_U(a) = U∇f(U^⊤a) and overload the notation T_ε(·, ·) to apply to sequences as well as algorithms. For the second equality in (19), note that by (i) we have T_ε(U^⊤A[f_U], f) = T_ε(A, f_U) ≤ T_0, so the first T_0 iterates of U^⊤A[f_U] determine T_ε(U^⊤A[f_U], f), and
these T_0 iterates are identical to the first T_0 iterates of Z_A[f] by assumption.
It remains to construct the zero-respecting algorithm Z_A with iterates matching those of A under appropriate rotation. We do this by describing its operation inductively on any given f : ℝ^d → ℝ, which we denote {z^(t)}_{t∈ℕ} = Z_A[f]. Letting d′ = d + T_0, the state of the algorithm Z_A at iteration t is determined by a support S_t ⊆ [d] and orthonormal vectors {u^(i)}_{i∈S_t} ⊂ ℝ^{d′} identified with this support. The support condition (5) defines the set S_t,

S_t = ∪_{q∈[p]} ∪_{s<t} supp{∇^q f(z^(s))},

so that ∅ = S_1 ⊆ S_2 ⊆ ⋯ and the collection {u^(i)}_{i∈S_t} grows with t. We let U ∈ ℝ^{d′×d} be the orthogonal matrix whose ith column is u^(i); even though U may not be completely determined throughout the runtime of Z_A, our partial knowledge of it will suffice to simulate the operation of A on f_U(a) = f(U^⊤a). Letting {a^(t)}_{t∈ℕ} = A[f_U], our requirements Z_A[f] = U^⊤A[f_U] and Z_A ∈ A_zr are equivalent to

z^(t) = U^⊤a^(t) and supp{z^(t)} ⊆ S_t   (20)

for every t ≤ T_0 (we set z^(i) = 0 for every i > T_0 without loss of generality).
for every t ≤ T0 (we set z(i) = 0 for every i > T0 without loss of generality). Let us proceed with the inductive argument. The iterate a(1) ∈ Rd′ is an arbitrary (but de-
terministic) vector in Rd′ . We thus satisfy (20) at t = 1 by requiring that u(j), a(1) = 0 for
25
every j ∈ [d], whence the first iterate of ZA satisfies z(1) = 0 ∈ Rd. Assume now the equality and containment (20) holds for every s < t, where t ≤ T0 (implying that ZA has emulated the iterates a(2), . . . , a(t−1) of A); we show how ZA can emulate a(t), the t’th iterate of A, and from it can construct z(t) that satisfies (20). To obtain a(t), note that for every q ≤ p, and every s < t, the derivatives ∇qfU (a(s)) are a function of ∇qf(z(s)) and orthonormal the vectors {u(i)}i∈Ss+1 ,
because supp{∇qf(z(s))} ⊆ Ss+1 and therefore the chain rule implies[ ∇qfU (a(s))
] j1,...,jq
jq .
Since A ∈ A(p) det is deterministic, a(t) is a function of ∇qf(z(s)) for q ∈ [p] and s ∈ [t− 1], and thus
ZA can simulate and compute it. To satisfy the support condition supp{z(t)} ⊆ St we require that u(j), a(t) = 0 for every j 6∈ St. This also means that to compute z(t) = U>a(t) we require only the columns of U indexed by the support St.
Finally, we need to show that after computing S_{t+1} we can find the vectors {u^(i)}_{i∈S_{t+1}\S_t} satisfying ⟨u^(j), a^(s)⟩ = 0 for every s ≤ t and j ∈ S_{t+1}\S_t, and additionally that U be orthogonal. Thus, we need to choose {u^(i)}_{i∈S_{t+1}\S_t} in the orthogonal complement of span{ a^(1), ..., a^(t), {u^(i)}_{i∈S_t} }. This orthogonal complement has dimension at least d′ − t − |S_t| = |S_t^c| + T_0 − t ≥ |S_t^c|. Since |S_{t+1}\S_t| ≤ |S_t^c|, there exist orthonormal vectors {u^(i)}_{i∈S_{t+1}\S_t} that meet the requirements. This completes the induction.
Finally, note that the arguments above hold unchanged for p =∞.
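To illustrate the induction, here is a minimal numerical sketch of Z_A for the first-order case p = 1 (gradient oracles only). All names are ours; the Gaussian draw in unit_vector_orthogonal_to stands in for an arbitrary deterministic choice from the orthogonal complement, and next_iterate plays the role of the deterministic algorithm A.

```python
import numpy as np

def unit_vector_orthogonal_to(vectors, dim, rng):
    """Return a unit vector orthogonal to span(vectors); assumes dim > len(vectors)."""
    v = rng.normal(size=dim)
    if vectors:
        Q, _ = np.linalg.qr(np.column_stack(vectors))  # orthonormal basis for the span
        v = v - Q @ (Q.T @ v)
    return v / np.linalg.norm(v)

def simulate_zero_respecting(grad_f, d, T0, next_iterate, rng):
    """Sketch of Z_A from Lemma 7 for p = 1: emulate a deterministic first-order
    method in dimension d' = d + T0 while choosing the columns of U on the fly,
    so that f is only ever queried at zero-respecting points z^(t) = U^T a^(t)."""
    d_prime = d + T0
    U = np.zeros((d_prime, d))         # column i holds u^(i) once revealed, else 0
    revealed = np.zeros(d, dtype=bool)
    queries, answers, zs = [], [], []  # a^(s), simulated grad f_U(a^(s)), z^(s)
    for _ in range(T0):
        a = next_iterate(queries, answers)  # deterministic in past oracle answers
        z = U.T @ a                    # unrevealed columns act as 0: supp(z) stays in S_t
        g = grad_f(z)
        for i in np.nonzero(g)[0]:     # reveal u^(i) for newly supported coordinates,
            if not revealed[i]:        # orthogonal to all queries and revealed columns
                span = [U[:, j] for j in np.nonzero(revealed)[0]] + queries + [a]
                U[:, i] = unit_vector_orthogonal_to(span, d_prime, rng)
                revealed[i] = True
        queries.append(a)
        answers.append(U @ g)          # chain rule: grad f_U(a) = U grad f(U^T a)
        zs.append(z)
    return zs, U

# e.g., simulating gradient descent on f(z) = ||z - 1||^2, whose gradient is 2(z - 1):
gd = lambda qs, ans: np.zeros(12) if not qs else qs[-1] - 0.25 * ans[-1]  # d' = 8 + 4
zs, U = simulate_zero_respecting(lambda z: 2 * (z - 1), d=8, T0=4,
                                 next_iterate=gd, rng=np.random.default_rng(0))
```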
With Lemma 7 in hand, the propositions follow easily.
Proposition 1. Let p ∈ ℕ ∪ {∞}, let F be an orthogonally invariant function class, and let ε > 0. Then

T_ε(A^(p)_det, F) ≥ T_ε(A^(p)_zr, F).
Proof. We may assume that T_ε(A^(p)_det, F) < T_0 for some integer T_0 < ∞, as otherwise we have T_ε(A^(p)_det, F) = ∞ and the result holds trivially. For any A ∈ A^(p)_det and the value T_0, we invoke Lemma 7 to construct Z_A ∈ A^(p)_zr such that T_ε(A, f_U) ≥ min{T_0, T_ε(Z_A, f)} for every f ∈ F and some orthogonal matrix U that depends on f and A. Consequently, we have

T_ε(A^(p)_det, F) = inf_{A∈A^(p)_det} sup_{f∈F} T_ε(A, f) ≥ inf_{A∈A^(p)_det} sup_{f∈F} min{ T_0, T_ε(Z_A, f) } ≥ min{ T_0, T_ε(A^(p)_zr, F) },

where the first inequality holds because F is orthogonally invariant, so that f_U ∈ F and sup_{f′∈F} T_ε(A, f′) ≥ T_ε(A, f_U) for every f ∈ F, and the second because Z_A ∈ A^(p)_zr by construction. As we chose T_0 for which T_ε(A^(p)_det, F) < T_0, we must have T_ε(A^(p)_det, F) ≥ T_ε(A^(p)_zr, F), concluding the proof.
Proposition 2. Let p ∈ ℕ ∪ {∞}, let F be an orthogonally invariant function class, let f ∈ F have domain of dimension d, and let ε > 0. If T_ε(A^(p)_zr, {f}) ≥ T, then

T_ε(A^(p)_det, {f_U | U ∈ O(d+T, d)}) ≥ T,

where f_U(z) := f(U^⊤z) and O(d+T, d) is the set of (d+T) × d orthogonal matrices, so that {f_U | U ∈ O(d+T, d)} contains only functions with domain of dimension d+T.
Proof. For any A ∈ A^(p)_det, we invoke Lemma 7 with T_0 = T to obtain Z_A ∈ A^(p)_zr and an orthogonal U′ ∈ O(d+T, d) such that

T_ε(A, f_{U′}) ≥ min{ T, T_ε(Z_A, f) } = T,

where the last equality is due to inf_{B∈A^(p)_zr} T_ε(B, f) ≥ T. Since f_{U′} ∈ {f_U | U ∈ O(d+T, d)}, we have

sup_{f′ ∈ {f_U | U∈O(d+T,d)}} T_ε(A, f′) ≥ T,

and taking the infimum over A ∈ A^(p)_det concludes the proof.
B Technical Results
B.1 Proof of Lemma 1

Lemma 1. The functions Ψ and Φ satisfy the following.

i. For all x ≤ 1/2 and all k ∈ ℕ, Ψ^(k)(x) = 0.

ii. For all x ≥ 1 and |y| < 1, Ψ(x)Φ′(y) > 1.

iii. Both Ψ and Φ are infinitely differentiable, and for all k ∈ ℕ we have

sup_x max{ |Ψ^(k)(x)|, |Φ^(k)(x)| } ≤ exp( (5k/2) log(4k) ).

iv. The functions and derivatives Ψ, Ψ′, Φ and Φ′ are non-negative and bounded, with

0 ≤ Ψ < e, 0 ≤ Ψ′ ≤ √(54/e), 0 < Φ < √(2πe), and 0 < Φ′ ≤ √e.
Each of the statements in the lemma is immediate except for part iii. To see this part, we require a few further calculations. We begin by providing bounds on the derivatives of Φ(x) = √e ∫_{−∞}^x e^{−t²/2} dt. To avoid annoyances with scaling factors, we define φ(t) = e^{−t²/2}.
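Parts ii and iv are easy to spot-check numerically before diving into these calculations; the sketch below (function names ours) verifies them on a grid.

```python
import numpy as np
from scipy import integrate

def Psi(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m = x > 0.5
    out[m] = np.exp(1.0 - (2.0 * x[m] - 1.0) ** -2)
    return out if out.size > 1 else out[0]

def Phi(x):  # Phi(x) = sqrt(e) * integral_{-inf}^{x} exp(-t^2/2) dt
    return np.sqrt(np.e) * integrate.quad(lambda t: np.exp(-t**2 / 2), -np.inf, x)[0]

def Phi_prime(y):
    return np.sqrt(np.e) * np.exp(-y**2 / 2)

# Part ii: Psi(x) * Phi'(y) > 1 whenever x >= 1 and |y| < 1
assert all(Psi(x) * Phi_prime(y) > 1
           for x in np.linspace(1, 10, 50) for y in np.linspace(-0.999, 0.999, 50))

# Part iv: 0 <= Psi < e and 0 < Phi < sqrt(2 * pi * e)
grid = np.linspace(-10, 10, 1001)
assert 0 <= Psi(grid).min() and Psi(grid).max() < np.e
assert 0 < Phi(-3.0) and Phi(3.0) < np.sqrt(2 * np.pi * np.e)
```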
Lemma 8. For all k ∈ ℕ, there exist constants c^(k)_i satisfying |c^(k)_i| ≤ (2 max{i, 1})^k such that

φ^(k)(t) = ( Σ_{i=0}^k c^(k)_i t^i ) φ(t).
Proof. We prove the result by induction. We have φ′(t) = −t e^{−t²/2} = −t φ(t), so that the base case of the induction is satisfied. Now, assume for our induction that

φ^(k)(t) = Σ_{i=0}^k c^(k)_i t^i φ(t),

where |c^(k)_i| ≤ 2^k (max{i, 1})^k. Then taking derivatives, we have

φ^(k+1)(t) = Σ_{i=1}^k [ i c^(k)_i t^{i−1} − c^(k)_i t^{i+1} ] φ(t) − c^(k)_0 t φ(t) = Σ_{i=0}^{k+1} c^(k+1)_i t^i φ(t),

where c^(k+1)_i = (i+1) c^(k)_{i+1} − c^(k)_{i−1} (with the convention c^(k)_i = 0 for i < 0 and i > k), so that |c^(k+1)_{k+1}| = |c^(k)_k| = 1. With the induction hypothesis that |c^(k)_i| ≤ (2 max{i, 1})^k, we obtain

|c^(k+1)_i| ≤ 2^k (i+1)(i+1)^k + 2^k (max{i, 1})^k ≤ 2^{k+1} (i+1)^{k+1}.

This gives the result.
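The recursion in this proof is also easy to verify symbolically. The following sketch tracks the polynomial factor p_k with φ^(k) = p_k φ via the recursion p_{k+1} = p_k′ − t p_k (our restatement of the derivative computation above) and checks the coefficient bound.

```python
import sympy as sp

t = sp.symbols('t')
phi = sp.exp(-t**2 / 2)
p = sp.Integer(1)                           # polynomial factor: phi^(0) = p * phi
for k in range(1, 8):
    p = sp.expand(sp.diff(p, t) - t * p)    # (p * phi)' = (p' - t*p) * phi
    coeffs = sp.Poly(p, t).all_coeffs()[::-1]   # coefficient of t^i at index i
    # Lemma 8 bound: |c_i^{(k)}| <= (2 max{i, 1})^k
    assert all(abs(c) <= (2 * max(i, 1)) ** k for i, c in enumerate(coeffs))
    # cross-check against direct differentiation
    assert sp.simplify(sp.diff(phi, t, k) - p * phi) == 0
```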
With this result, we find that for any k ≥ 1,

Φ^(k)(x) = √e φ^{(k−1)}(x) = √e ( Σ_{i=0}^{k−1} c^{(k−1)}_i x^i ) φ(x).

The function log(x^i φ(x)) = i log x − x²/2 is maximized at x = √i, so that x^i φ(x) ≤ exp( (i/2) log(i/e) ). We thus obtain the numerically verifiable upper bound

|Φ^(k)(x)| ≤ √e Σ_{i=0}^{k−1} (2 max{i, 1})^{k−1} exp( (i/2) log(i/e) ) ≤ exp (1.5k log(1.5k)) .
Now, we turn to considering the function Ψ(x). We assume w.l.o.g. that x > 1/2, as otherwise Ψ^(k)(x) = 0 for all k. Recall that Ψ(x) = exp( 1 − (2x−1)^{−2} ) for x > 1/2; we have the following lemma regarding its derivatives.
Lemma 9. For all k ∈ ℕ, there exist constants c^(k)_i satisfying |c^(k)_i| ≤ 6^k (2i + k)^k such that

Ψ^(k)(x) = ( Σ_{i=1}^k c^(k)_i (2x−1)^{−(k+2i)} ) Ψ(x).
Proof. We provide the proof by induction over k. For k = 1, we have that

Ψ′(x) = (4/(2x−1)³) exp( 1 − (2x−1)^{−2} ) = (4/(2x−1)³) Ψ(x),

which yields the base case of the induction. Now, assume that for some k, we have

Ψ^(k)(x) = ( Σ_{i=1}^k c^(k)_i (2x−1)^{−(k+2i)} ) Ψ(x),

with the convention c^(k)_0 = 0. Defining c^(1)_1 = 4 and c^(k+1)_i = 4 c^(k)_{i−1} − 2(k + 2i) c^(k)_i for i ≥ 1, differentiating the inductive hypothesis term by term shows that Ψ^{(k+1)} has the claimed form. Then, under the inductive hypothesis that |c^(k)_i| ≤ 6^k (2i + k)^k, we have

|c^(k+1)_i| ≤ 4 · 6^k (k − 2 + 2i)^k + 2 · 6^k (k + 2i)(k + 2i)^k ≤ 6^{k+1} (k + 2i)^{k+1} ≤ 6^{k+1} (k + 1 + 2i)^{k+1},

which gives the result.
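As with Lemma 8, we can spot-check the recursion and the coefficient bound symbolically; in the sketch below we store only the nonzero coefficients c^(k)_i.

```python
import sympy as sp

x = sp.symbols('x')
Psi = sp.exp(1 - (2*x - 1)**-2)        # valid symbolically away from x = 1/2
c = {1: 4}                             # c_i^{(1)}: Psi' = 4 (2x-1)^{-3} Psi
for k in range(1, 6):
    expr = sum(ci * (2*x - 1)**-(k + 2*i) for i, ci in c.items()) * Psi
    assert sp.simplify(sp.diff(Psi, x, k) - expr) == 0
    assert all(abs(ci) <= 6**k * (2*i + k)**k for i, ci in c.items())
    # recursion from the proof: c_i^{(k+1)} = 4 c_{i-1}^{(k)} - 2(k + 2i) c_i^{(k)}
    c = {i: 4 * c.get(i - 1, 0) - 2 * (k + 2*i) * c.get(i, 0) for i in range(1, k + 2)}
```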
As in the derivation immediately following Lemma 8, by replacing t = 1/(2x−1), we have that t^{k+2i} e^{−t²} is maximized by t = √((k+2i)/2), so that

(2x−1)^{−(k+2i)} Ψ(x) ≤ exp( 1 + ((k+2i)/2) log((k+2i)/(2e)) ),

and therefore

|Ψ^(k)(x)| ≤ Σ_{i=1}^k 6^k (2i+k)^k exp( 1 + ((k+2i)/2) log((k+2i)/(2e)) ) ≤ exp( (5k/2) log(4k) ),

the numerically verifiable bound claimed in Lemma 1.iii.
B.2 Proof of Lemma 3

Lemma 3. The function f_T satisfies the following.

i. We have f_T(0) − inf_x f_T(x) ≤ 12T.

ii. For all x ∈ ℝ^T, ‖∇f_T(x)‖ ≤ 23√T.

iii. For every p ≥ 1, the p-th order derivatives of f_T are ℓ_p-Lipschitz continuous, where ℓ_p ≤ exp( (5/2) p log p + cp ) for a numerical constant c < ∞.
Proof. Part i follows because f_T(0) < 0 and, since 0 ≤ Ψ(x) ≤ e and 0 ≤ Φ(x) ≤ √(2πe),

f_T(x) ≥ −Ψ(1)Φ(x_1) − Σ_{i=2}^T Ψ(x_{i−1}) Φ(x_i) > −T · e · √(2πe) ≥ −12T.
Part ii follows additionally from Ψ(x) = 0 for x ≤ 1/2, 0 ≤ Ψ′(x) ≤ √(54/e), and 0 ≤ Φ′(x) ≤ √e, which when substituted into

∂f_T/∂x_j (x) = −Ψ(−x_{j−1})Φ′(−x_j) − Ψ(x_{j−1})Φ′(x_j) − Ψ′(−x_j)Φ(−x_{j+1}) − Ψ′(x_j)Φ(x_{j+1})

yields |∂f_T/∂x_j (x)| ≤ 23 for each j: at most one element of each of the pairs {Ψ(−x_{j−1}), Ψ(x_{j−1})} and {Ψ′(−x_j), Ψ′(x_j)} is nonzero, so the right hand side is at most e√e + √(54/e) · √(2πe) < 23 in absolute value. Therefore ‖∇f_T(x)‖ ≤ 23√T.
To establish part iii, fix a point x ∈ ℝ^T and a unit vector v ∈ ℝ^T. Define the real function h_{x,v} : ℝ → ℝ by the directional projection of f_T, h_{x,v}(θ) := f_T(x + θv). The function θ ↦ h_{x,v}(θ) is infinitely differentiable for every x and v. Therefore, f_T has ℓ_p-Lipschitz p-th order derivatives if and only if |h^{(p+1)}_{x,v}(0)| ≤ ℓ_p for every x, v. Using the shorthand notation ∂_{i_1} ⋯ ∂_{i_k} for ∂^k/(∂x_{i_1} ⋯ ∂x_{i_k}), we have

h^{(p+1)}_{x,v}(0) = Σ_{i_1,...,i_{p+1} ∈ [T]} ∂_{i_1} ⋯ ∂_{i_{p+1}} f_T(x) · v_{i_1} ⋯ v_{i_{p+1}}.
Examining f_T, we see that ∂_{i_1} ⋯ ∂_{i_{p+1}} f_T is non-zero if and only if |i_j − i_k| ≤ 1 for every j, k ∈ [p+1]. Consequently, we can rearrange the above summation as

h^{(p+1)}_{x,v}(0) = Σ_{δ ∈ {0,1}^p ∪ {0,−1}^p} Σ_{i=1}^T ∂_{i+δ_1} ⋯ ∂_{i+δ_p} ∂_i f_T(x) · v_{i+δ_1} ⋯ v_{i+δ_p} v_i,
where we take v_0 := 0 and v_{T+1} := 0. Brief calculation shows that

sup_{x∈ℝ^T} max_{i∈[T]} max_δ |∂_{i+δ_1} ⋯ ∂_{i+δ_p} ∂_i f_T(x)| ≤ 2 max_{k∈[p+1]} { sup_x |Ψ^(k)(x)| · sup_x |Φ^{(p+1−k)}(x)| } ≤ 2√(2πe) · e^{2.5(p+1) log(4(p+1))} ≤ exp (2.5p log p + 4p + 9),

where the second inequality uses Lemma 1.iii, and Φ(x′) ≤ √(2πe) for the case k = p+1. Defining ℓ_p = 2^{p+1} e^{2.5p log p + 4p + 9} ≤ e^{2.5p log p + 5p + 10}, we thus have
|h^{(p+1)}_{x,v}(0)| ≤ Σ_{δ ∈ {0,1}^p ∪ {0,−1}^p} 2^{−(p+1)} ℓ_p · |Σ_{i=1}^T v_{i+δ_1} ⋯ v_{i+δ_p} v_i| ≤ Σ_{δ ∈ {0,1}^p ∪ {0,−1}^p} 2^{−(p+1)} ℓ_p ≤ ℓ_p,
where we have used |Σ_{i=1}^T v_{i+δ_1} ⋯ v_{i+δ_p} v_i| ≤ 1 for every δ ∈ {0,1}^p ∪ {0,−1}^p. To see that this last claim is true, recall that v is a unit vector and note that, writing s ∈ {+1, −1} for the common sign of the nonzero entries of δ and n = Σ_{j=1}^p |δ_j|,

Σ_{i=1}^T v_{i+δ_1} ⋯ v_{i+δ_p} v_i = Σ_{i=1}^T v_i^{p+1−n} v_{i+s}^n.

When n = 0, we have |Σ_{i=1}^T v_{i+δ_1} ⋯ v_{i+δ_p} v_i| = |Σ_{i=1}^T v_i^{p+1}| ≤ Σ_{i=1}^T v_i² = 1, since p ≥ 1 and |v_i| ≤ 1. When 1 ≤ Σ_{j=1}^p |δ_j| = n ≤ p, the Cauchy–Schwarz inequality implies

|Σ_{i=1}^T v_i^{p+1−n} v_{i+s}^n| ≤ √( Σ_{i=1}^T v_i^{2(p+1−n)} ) · √( Σ_{i=1}^T v_{i+s}^{2n} ) ≤ Σ_{i=1}^T v_i² = 1,

completing the proof of part iii.
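Parts i and ii lend themselves to a quick numerical sanity check. The sketch below implements the coordinate formula for ∂f_T/∂x_j displayed above, under our reading of the boundary conventions x_0 ≡ 1 and x_{T+1} ≡ 0 (an assumption on our part), and verifies the bound of part ii at random points.

```python
import numpy as np
from scipy.special import erf

def Psi(u):
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    m = u > 0.5
    out[m] = np.exp(1.0 - (2.0 * u[m] - 1.0) ** -2)
    return out

def Psi_prime(u):  # Lemma 9, base case: Psi'(u) = 4 (2u-1)^{-3} Psi(u)
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    m = u > 0.5
    out[m] = np.exp(1.0 - (2.0*u[m] - 1.0)**-2) * 4.0 * (2.0*u[m] - 1.0)**-3
    return out

def Phi(u):        # sqrt(e) * integral of exp(-t^2/2), written via the error function
    return np.sqrt(np.e) * np.sqrt(np.pi / 2) * (1.0 + erf(np.asarray(u) / np.sqrt(2)))

def Phi_prime(u):
    return np.sqrt(np.e) * np.exp(-np.asarray(u, dtype=float)**2 / 2)

def grad_fT(x):
    xm = np.concatenate(([1.0], x[:-1]))   # x_{j-1}, with x_0 := 1 (our convention)
    xp = np.concatenate((x[1:], [0.0]))    # x_{j+1}, with x_{T+1} := 0
    return (-Psi(-xm) * Phi_prime(-x) - Psi(xm) * Phi_prime(x)
            - Psi_prime(-x) * Phi(-xp) - Psi_prime(x) * Phi(xp))

rng = np.random.default_rng(0)
T = 25
for _ in range(200):                       # Lemma 3.ii: ||grad f_T(x)|| <= 23 sqrt(T)
    assert np.linalg.norm(grad_fT(3 * rng.normal(size=T))) <= 23 * np.sqrt(T)
```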
B.3 Proof of Lemma 4
The proof of Lemma 4 uses a number of auxiliary arguments, marked as Lemmas 4a, 4b, and 4c. Readers looking to gain a high-level view of the proof of Lemma 4 can safely skip the proofs of these sub-lemmas. In the following, recall that U ∈ ℝ^{d×T} is drawn from the uniform distribution over d×T orthogonal matrices (satisfying U^⊤U = I, as d > T), that the columns of U are denoted u^(1), ..., u^(T), and that f_{T;U}(x) = f_T(U^⊤x).
Lemma 4. Let δ > 0 and R ≥ √T, and let x^(1), ..., x^(T) be informed by f_{T;U} and bounded, so that ‖x^(t)‖ ≤ R for each t ≤ T. If d ≥ 52TR² log(2T²/δ), then with probability at least 1 − δ, for all t ≤ T and each j ∈ {t, ..., T}, we have

|⟨u^(j), x^(t)⟩| < 1/2.
For t ∈ ℕ, let P_t ∈ ℝ^{d×d} denote the projection operator onto the span of x^(1), u^(1), ..., x^(t), u^(t), and let P_t^⊥ = I − P_t denote the projection onto its orthogonal complement. We define the event G_t as

G_t = { max_{j∈{t,...,T}} |⟨u^(j), P_{t−1}^⊥ x^(t)⟩| ≤ α ‖P_{t−1}^⊥ x^(t)‖ }, where α = 1/(5R√T).   (21)

For every t, define G_{≤t} = ∩_{i≤t} G_i and G_{<t} = ∩_{i<t} G_i.
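The event G_t encodes the standard concentration phenomenon for a uniformly random orthonormal T-frame: for fixed x, each inner product ⟨u^(j), x⟩ is of order ‖x‖/√d. The following Monte Carlo sketch (sampling U via QR of a Gaussian matrix, a standard construction) illustrates the scale.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, trials = 4000, 10, 200
x = rng.normal(size=d)
x /= np.linalg.norm(x)
max_inner = []
for _ in range(trials):
    U, _ = np.linalg.qr(rng.normal(size=(d, T)))  # columns: random orthonormal T-frame
    max_inner.append(np.abs(U.T @ x).max())
# typical size of max_j |<u^(j), x>| vs the sub-Gaussian scale sqrt(2 log(2T) / d)
print(np.mean(max_inner), np.sqrt(2 * np.log(2 * T) / d))
```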
The following linear-algebraic result justifies the definition (21) of Gt.
Lemma 4a. For all t ≤ T, G_{≤t} implies |⟨u^(j), x^(s)⟩| < 1/2 for every s ∈ {1, ..., t} and every j ∈ {s, ..., T}.

Proof. First, notice that since G_{≤t} implies G_{≤s} for every s ≤ t, it suffices to show that G_{≤t} implies |⟨u^(j), x^(t)⟩| < 1/2 for every j ∈ {t, ..., T}. We will in fact prove a stronger statement:

For every t, G_{<t} implies ‖P_{t−1} u^(j)‖² ≤ 2α²(t − 1) for every j ∈ {t, ..., T},   (22)

where we recall that P_t ∈ ℝ^{d×d} is the projection operator onto the span of x^(1), u^(1), ..., x^(t), u^(t), P_t^⊥ = I_d − P_t, and α = 1/(5R√T). Before proving (22), let us show that it implies our result.
Fixing j ∈ {t, ..., T}, we have

|⟨u^(j), x^(t)⟩| ≤ |⟨u^(j), P_{t−1}^⊥ x^(t)⟩| + |⟨u^(j), P_{t−1} x^(t)⟩|.

Since G_t holds, its definition (21) implies |⟨u^(j), P_{t−1}^⊥ x^(t)⟩| ≤ α ‖P_{t−1}^⊥ x^(t)‖ ≤ α ‖x^(t)‖. Moreover, by Cauchy–Schwarz and the implication (22), we have |⟨u^(j), P_{t−1} x^(t)⟩| ≤ ‖P_{t−1} u^(j)‖ ‖x^(t)‖ ≤ √(2α²(t−1)) ‖x^(t)‖. Combining the two bounds, we obtain the result of the lemma,

|⟨u^(j), x^(t)⟩| ≤ ‖x^(t)‖ ( α + √(2α²(t−1)) ) < 1/2,

where we have used ‖x^(t)‖ ≤ R and α = 1/(5R√T).
We prove bound (22) by induction. The basis of the induction, t = 1, is trivial, as P_0 = 0. We shall assume (22) holds for s ∈ {1, ..., t−1} and show that it consequently holds for s = t as well. We may apply the Gram–Schmidt procedure to the sequence x^(1), u^(1), ..., x^(t−1), u^(t−1)