Composite objective mirror descent

John C. Duchi, Shai Shalev-Shwartz, Yoram Singer, Ambuj Tewari

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

203 Scopus citations

Abstract

We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analyses and algorithms. We also derive specific instantiations of our method for commonly used regularization functions, such as ℓ1, mixed-norm, and trace-norm regularization.
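As an illustrative sketch (not the paper's exact pseudocode), when the Bregman divergence in the mirror-descent step is the squared Euclidean distance and the regularizer is ℓ1, the composite update reduces to a gradient step followed by soft-thresholding, recovering the familiar forward-backward splitting (ISTA-style) iteration. The function name and step-size/regularization parameters below are assumptions for illustration:

```python
import numpy as np

def comid_l1_step(w, grad, eta, lam):
    """One composite-objective step with the squared-Euclidean Bregman
    divergence and an l1 regularizer (illustrative sketch).

    w    : current iterate (ndarray)
    grad : (sub)gradient of the loss at w
    eta  : step size
    lam  : l1 regularization strength
    """
    # Forward step: plain gradient step on the smooth loss term.
    z = w - eta * grad
    # Backward step: proximal operator of eta * lam * ||.||_1,
    # i.e. componentwise soft-thresholding at level eta * lam.
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)

# Example: one step from the origin; small-gradient coordinates are
# zeroed out by the threshold, yielding a sparse iterate.
w_next = comid_l1_step(np.zeros(2), np.array([-1.0, 0.05]), eta=0.5, lam=0.2)
```

Because the proximal step is applied exactly rather than linearizing the regularizer, iterates stay exactly sparse, which is a key practical motivation for composite-objective methods over plain subgradient descent.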

Original language: English
Title of host publication: COLT 2010 - The 23rd Conference on Learning Theory
Pages: 14-26
Number of pages: 13
State: Published - 2010
Event: 23rd Conference on Learning Theory, COLT 2010 - Haifa, Israel
Duration: 27 Jun 2010 → 29 Jun 2010

Publication series

Name: COLT 2010 - The 23rd Conference on Learning Theory

Conference

Conference: 23rd Conference on Learning Theory, COLT 2010
Country/Territory: Israel
City: Haifa
Period: 27/06/10 → 29/06/10
