Abstract
The graduated optimization approach, also known as the continuation method, is a popular heuristic for solving non-convex problems that has received renewed interest over the last decade. Despite its popularity, very little is known about its theoretical convergence properties. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. We characterize a family of non-convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an ϵ-approximate solution within O(1/ϵ⁴) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of "zero-order optimization" and devise a variant of our algorithm which converges at a rate of O(d²/ϵ⁴).
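The abstract describes the algorithmic idea only at a high level. Below is a minimal, hypothetical Python sketch of the generic graduated-optimization template it refers to: run gradient steps on a sequence of progressively less-smoothed surrogates of the objective, warm-starting each stage from the previous stage's solution. The smoothing-by-ball-averaging, the halving schedule for the smoothing radius, and all function and parameter names are illustrative assumptions, not the paper's exact procedure or constants.

```python
import numpy as np

def graduated_optimization(grad, x0, delta0=1.0, stages=8,
                           steps_per_stage=500, lr=0.05, seed=0):
    """Illustrative graduated-optimization loop (assumed names/constants).

    Each stage runs stochastic gradient steps on a delta-smoothed surrogate
    of the objective, then halves delta and warm-starts the next stage
    from the current iterate.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(stages):
        for _ in range(steps_per_stage):
            # The delta-smoothed surrogate averages the objective over a ball
            # of radius delta; its gradient can be estimated without bias by
            # evaluating grad at a point drawn uniformly from that ball.
            u = rng.normal(size=x.shape)
            u *= delta * rng.random() ** (1.0 / x.size) / np.linalg.norm(u)
            x = x - lr * grad(x + u)
        delta *= 0.5  # sharpen the surrogate for the next stage
    return x

if __name__ == "__main__":
    # Toy non-convex objective f(x) = (x - 3)^2 + 2*sin(5x); its many local
    # minima can trap plain gradient descent started far from the optimum.
    f_grad = lambda x: 2.0 * (x - 3.0) + 10.0 * np.cos(5.0 * x)
    print(graduated_optimization(f_grad, x0=np.array([10.0])))
```

In the stochastic setting discussed in the abstract, `grad` would return a noisy but unbiased gradient estimate; in the zero-order setting it would be replaced by an estimate built from function values only.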
Original language | English |
---|---|
Title of host publication | 33rd International Conference on Machine Learning, ICML 2016 |
Editors | Kilian Q. Weinberger, Maria Florina Balcan |
Publisher | International Machine Learning Society (IMLS) |
Pages | 2726-2739 |
Number of pages | 14 |
ISBN (Electronic) | 9781510829008 |
State | Published - 2016 |
Event | 33rd International Conference on Machine Learning, ICML 2016 - New York City, United States; Duration: 19 Jun 2016 → 24 Jun 2016 |
Publication series
Name | 33rd International Conference on Machine Learning, ICML 2016 |
---|---|
Volume | 4 |
Conference
Conference | 33rd International Conference on Machine Learning, ICML 2016 |
---|---|
Country/Territory | United States |
City | New York City |
Period | 19/06/16 → 24/06/16 |
Bibliographical note
Publisher Copyright: © 2016 by the author(s).