Mind the duality gap: Logarithmic regret algorithms for online optimization

Sham M. Kakade*, Shai Shalev-Shwartz

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

50 Scopus citations

Abstract

We describe a primal-dual framework for the design and analysis of online strongly convex optimization algorithms. Our framework yields the tightest known logarithmic regret bounds for Follow-The-Leader and for the gradient descent algorithm proposed in Hazan et al. [2006]. We then show that one can interpolate between these two extreme cases. In particular, we derive a new algorithm that shares the computational simplicity of gradient descent but achieves lower regret in many practical situations. Finally, we further extend our framework to generalized strongly convex functions.
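
The paper itself does not include code. For orientation, here is a minimal Python sketch of the strongly convex online gradient descent baseline of Hazan et al. [2006] that the abstract refers to: with step size 1/(λt) on λ-strongly convex losses, this scheme attains O((G²/λ) log T) regret, where G bounds the gradient norms. The interface (a sequence of per-round gradient oracles), the function name, and the projection radius are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def online_gradient_descent(grad_fns, dim, lam, radius=1.0):
    """Online gradient descent for lam-strongly convex losses.

    grad_fns : iterable of callables; grad_fns[t](w) returns the
               gradient of the round-t loss at the point w.
    dim      : dimension of the decision variable.
    lam      : strong convexity parameter of each loss.
    radius   : radius of the Euclidean ball used as the feasible set
               (an illustrative choice, not from the paper).
    """
    w = np.zeros(dim)
    iterates = []
    for t, grad_fn in enumerate(grad_fns, start=1):
        iterates.append(w.copy())        # play w, then observe the loss
        g = grad_fn(w)                   # gradient of the round-t loss at w
        w = w - g / (lam * t)            # step size eta_t = 1 / (lam * t)
        norm = np.linalg.norm(w)
        if norm > radius:                # project back onto the feasible ball
            w *= radius / norm
    return iterates

# Toy usage: quadratic losses f_t(w) = (lam/2) * ||w - z_t||^2,
# which are lam-strongly convex with gradient lam * (w - z_t).
lam = 1.0
targets = [np.array([0.5]), np.array([-0.2]), np.array([0.9])]
grads = [lambda w, z=z: lam * (w - z) for z in targets]
print(online_gradient_descent(grads, dim=1, lam=lam))
```

The primal-dual framework of the paper recovers this algorithm and Follow-The-Leader as two extremes of one analysis; the interpolating algorithm it derives is not reproduced here.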

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference
Publisher: Neural Information Processing Systems
Pages: 1457-1464
Number of pages: 8
ISBN (Print): 9781605609492
State: Published - 2009
Externally published: Yes
Event: 22nd Annual Conference on Neural Information Processing Systems, NIPS 2008 - Vancouver, BC, Canada
Duration: 8 Dec 2008 - 11 Dec 2008

Publication series

Name: Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference

Conference

Conference: 22nd Annual Conference on Neural Information Processing Systems, NIPS 2008
Country/Territory: Canada
City: Vancouver, BC
Period: 8/12/08 - 11/12/08
