TY - GEN

T1 - On the iteration complexity of oblivious first-order optimization algorithms

AU - Arjevani, Yossi

AU - Shamir, Ohad

PY - 2016

Y1 - 2016

N2 - We consider a broad class of first-order optimization algorithms which are oblivious, in the sense that their step sizes are scheduled regardless of the function under consideration, except for limited side-information such as smoothness or strong convexity parameters. With the knowledge of these two parameters, we show that any such algorithm attains an iteration complexity lower bound of Ω(√(L/ε)) for L-smooth convex functions, and Ω(√(L/μ) ln(1/ε)) for L-smooth μ-strongly convex functions. These lower bounds are stronger than those in the traditional oracle model, as they hold independently of the dimension. To attain these, we abandon the oracle model in favor of a structure-based approach which builds upon a framework recently proposed in (Arjevani et al., 2015). We further show that without knowing the strong convexity parameter, it is impossible to attain an iteration complexity better than Ω(√(L/μ) ln(1/ε)). This result is then used to formalize an observation regarding L-smooth convex functions, namely, that the iteration complexity of algorithms employing time-invariant step sizes must be at least Ω(L/ε).

AB - We consider a broad class of first-order optimization algorithms which are oblivious, in the sense that their step sizes are scheduled regardless of the function under consideration, except for limited side-information such as smoothness or strong convexity parameters. With the knowledge of these two parameters, we show that any such algorithm attains an iteration complexity lower bound of Ω(√(L/ε)) for L-smooth convex functions, and Ω(√(L/μ) ln(1/ε)) for L-smooth μ-strongly convex functions. These lower bounds are stronger than those in the traditional oracle model, as they hold independently of the dimension. To attain these, we abandon the oracle model in favor of a structure-based approach which builds upon a framework recently proposed in (Arjevani et al., 2015). We further show that without knowing the strong convexity parameter, it is impossible to attain an iteration complexity better than Ω(√(L/μ) ln(1/ε)). This result is then used to formalize an observation regarding L-smooth convex functions, namely, that the iteration complexity of algorithms employing time-invariant step sizes must be at least Ω(L/ε).

UR - http://www.scopus.com/inward/record.url?scp=84998644019&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84998644019

T3 - 33rd International Conference on Machine Learning, ICML 2016

SP - 1433

EP - 1447

BT - 33rd International Conference on Machine Learning, ICML 2016

A2 - Balcan, Maria Florina

A2 - Weinberger, Kilian Q.

PB - International Machine Learning Society (IMLS)

T2 - 33rd International Conference on Machine Learning, ICML 2016

Y2 - 19 June 2016 through 24 June 2016

ER -