Composite objective mirror descent

John C. Duchi, Shai Shalev-Shwartz, Yoram Singer, Ambuj Tewari

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

182 Scopus citations


We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analysis and algorithms. We also derive specific instantiations of our method for commonly used regularization functions, such as ℓ1, mixed norm, and trace norm.
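As a concrete illustration of the kind of instantiation the abstract mentions, here is a minimal sketch of composite mirror descent specialized to the Euclidean Bregman divergence with an ℓ1 regularizer, in which case each step reduces to a gradient step followed by soft-thresholding. This is an illustrative example, not the paper's reference implementation; the function names, step size, and test problem are assumptions chosen for the demo.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (closed-form shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def comid_l1(grad, x0, lam=0.1, eta=0.01, steps=500):
    """Composite mirror descent with the Euclidean Bregman divergence
    and r(x) = lam * ||x||_1: each iteration takes a gradient step on
    the loss and then applies the l1 proximal map (soft-thresholding).
    `grad`, `lam`, `eta`, and `steps` are illustrative choices."""
    x = x0.copy()
    for _ in range(steps):
        x = soft_threshold(x - eta * grad(x), eta * lam)
    return x

# Toy usage: sparse least squares, min_x 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = comid_l1(lambda x: A.T @ (A @ x - b), np.zeros(20),
                 lam=0.5, eta=0.01, steps=500)
```

With other Bregman divergences or regularizers (mixed norms, the trace norm), the per-step minimization changes but the overall scheme is the same.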

Original language: American English
Title of host publication: COLT 2010 - The 23rd Conference on Learning Theory
Number of pages: 13
State: Published - 2010
Event: 23rd Conference on Learning Theory, COLT 2010 - Haifa, Israel
Duration: 27 Jun 2010 - 29 Jun 2010

Publication series

Name: COLT 2010 - The 23rd Conference on Learning Theory


Conference: 23rd Conference on Learning Theory, COLT 2010

