Online learning meets optimization in the dual

Shai Shalev-Shwartz*, Yoram Singer

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution · peer-review


Abstract

We describe a novel framework for the design and analysis of online learning algorithms based on the notion of duality in constrained optimization. We cast a sub-family of universal online bounds as an optimization problem. Using the weak duality theorem we reduce the process of online learning to the task of incrementally increasing the dual objective function. The amount by which the dual increases serves as a new and natural notion of progress. We are thus able to tie the primal objective value and the number of prediction mistakes using the increase in the dual. The end result is a general framework for designing and analyzing old and new online learning algorithms in the mistake bound model.
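The kernel Perceptron is a standard illustration of this dual view: each prediction mistake increases one dual variable, and the resulting growth of the dual objective bounds the total number of mistakes. The sketch below is an illustrative reconstruction of that idea, not the paper's own code; the variable names and the toy data are assumptions for the example.

```python
import numpy as np

def kernel_perceptron(X, y, kernel, epochs=1):
    """Perceptron viewed as incremental dual ascent: each mistake
    increases one dual variable alpha_i, which raises the dual
    objective and serves as the notion of progress."""
    n = X.shape[0]
    alpha = np.zeros(n)  # dual variables, all start at zero
    # precompute the Gram matrix K[i, j] = kernel(x_i, x_j)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    mistakes = 0
    for _ in range(epochs):
        for i in range(n):
            # the predictor is represented entirely by the dual variables
            margin = np.sum(alpha * y * K[:, i])
            if y[i] * margin <= 0:   # prediction mistake
                alpha[i] += 1.0      # dual coordinate increase
                mistakes += 1
    return alpha, mistakes

# linearly separable toy data (hypothetical example)
X = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.0, -2.0]])
y = np.array([1, 1, -1, -1])
alpha, m = kernel_perceptron(X, y, lambda a, b: a @ b, epochs=5)
```

Because every mistake contributes a guaranteed increase to the dual, and the dual is upper-bounded by the primal objective via weak duality, the mistake count is bounded; this is the mechanism the abstract refers to.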

Original language: English
Title of host publication: Learning Theory - 19th Annual Conference on Learning Theory, COLT 2006, Proceedings
Publisher: Springer Verlag
Pages: 423-437
Number of pages: 15
ISBN (Print): 3540352945, 9783540352945
DOIs
State: Published - 2006
Event: 19th Annual Conference on Learning Theory, COLT 2006 - Pittsburgh, PA, United States
Duration: 22 Jun 2006 - 25 Jun 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4005 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 19th Annual Conference on Learning Theory, COLT 2006
Country/Territory: United States
City: Pittsburgh, PA
Period: 22/06/06 - 25/06/06
