Abstract
Boosting algorithms build highly accurate prediction mechanisms from a collection of low-accuracy predictors. To do so, they employ the notion of weak learnability. The starting point of this paper is a proof which shows that weak learnability is equivalent to linear separability with ℓ1 margin. While this equivalence is a direct consequence of von Neumann's minimax theorem, we derive the equivalence directly using Fenchel duality. We then use our derivation to describe a family of relaxations of the weak-learnability assumption that readily translates into a family of relaxations of linear separability with margin. This alternative perspective sheds new light on known soft-margin boosting algorithms and also enables us to derive several new relaxations of the notion of linear separability. Last, we describe and analyze an efficient boosting framework that can be used to minimize the loss functions derived from our family of relaxations. In particular, we obtain efficient boosting algorithms for maximizing hard and soft versions of the ℓ1 margin.
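As a pointer to the statement behind the abstract, the following is a minimal sketch of the equivalence via von Neumann's minimax theorem. The notation (the edge matrix A and the simplices Δ^m, Δ^k) is ours and not taken from the paper, and the simplex constraint on w matches the ℓ1 margin under the common assumption that the base class is closed under negation.

```latex
% Sketch (notation ours): m training examples (x_i, y_i), k base hypotheses h_j,
% edge matrix A with entries A_{i,j} = y_i h_j(x_i),
% \Delta^m and \Delta^k the probability simplices over examples and hypotheses.
% Von Neumann's minimax theorem applied to the bilinear game d^\top A w gives
\[
  \min_{d \in \Delta^m} \; \max_{1 \le j \le k} \, (d^\top A)_j
  \;=\;
  \max_{w \in \Delta^k} \; \min_{1 \le i \le m} \, (A w)_i .
\]
```

Read loosely, the left-hand side is the best edge a weak learner can guarantee against every distribution over the examples (weak learnability asserts it is at least some γ > 0), and the right-hand side is the largest ℓ1 margin achievable by a convex combination of the base hypotheses (linear separability with margin γ), so the two quantities coincide.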
Original language | English
---|---
Pages | 311-321
Number of pages | 11
State | Published - 2008
Externally published | Yes
Event | 21st Annual Conference on Learning Theory, COLT 2008 - Helsinki, Finland. Duration: 9 Jul 2008 → 12 Jul 2008
Conference
Conference | 21st Annual Conference on Learning Theory, COLT 2008
---|---
Country/Territory | Finland
City | Helsinki
Period | 9/07/08 → 12/07/08
Bibliographical note
Funding Information: This work was partially supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under contract No. DE-FG03-88ER45375.