Abstract
We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analysis and algorithms. We also derive specific instantiations of our method for commonly used regularization functions, such as ℓ1, mixed-norm, and trace-norm.
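For illustration only (this sketch is not the paper's own method): the first-order algorithms the abstract names share a composite update that takes a gradient step on the smooth loss and then applies the proximal operator of the regularizer. Below is a minimal NumPy sketch of one such step, forward-backward splitting with ℓ1 regularization, where the proximal operator reduces to elementwise soft-thresholding; the function names, problem instance, and step-size choice are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def composite_step(w, grad_f, lam, eta):
    # One forward-backward (proximal gradient) step for
    #   minimize_w  f(w) + lam * ||w||_1
    # forward: gradient step on the smooth part f;
    # backward: prox of the l1 regularizer with threshold eta * lam.
    return soft_threshold(w - eta * grad_f(w), eta * lam)

# Illustrative usage on a least-squares loss f(w) = 0.5 * ||Xw - y||^2.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 10)), rng.standard_normal(50)
grad_f = lambda w: X.T @ (X @ w - y)
eta = 1.0 / np.linalg.norm(X, 2) ** 2   # step size from the Lipschitz constant of grad_f
w = np.zeros(10)
for _ in range(200):
    w = composite_step(w, grad_f, lam=0.1, eta=eta)
```

Because the regularizer is handled through its prox rather than subgradients, iterates produced this way are exactly sparse, which is one reason this family of methods is preferred for ℓ1-type penalties.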
| Original language | English |
|---|---|
| Title of host publication | COLT 2010 - The 23rd Conference on Learning Theory |
| Pages | 14-26 |
| Number of pages | 13 |
| State | Published - 2010 |
| Event | 23rd Conference on Learning Theory, COLT 2010 - Haifa, Israel; Duration: 27 Jun 2010 → 29 Jun 2010 |
Publication series

| Name | COLT 2010 - The 23rd Conference on Learning Theory |
|---|---|
Conference

| Conference | 23rd Conference on Learning Theory, COLT 2010 |
|---|---|
| Country/Territory | Israel |
| City | Haifa |
| Period | 27/06/10 → 29/06/10 |