Learning optimally sparse support vector machines

Andrew Cotter, Nathan Srebro, Shai Shalev-Shwartz

Research output: Contribution to conference › Paper › peer-review


Abstract

We show how to train SVMs with an optimal guarantee on the number of support vectors (up to constants), and with sample complexity and training runtime bounds matching the best known for kernel SVM optimization (i.e. without any additional asymptotic cost beyond standard SVM training). Our method is simple to implement and works well in practice.
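The paper itself gives the optimal-sparsity training algorithm; it is not reproduced here. As context for why the number of support vectors matters, the following minimal pure-Python sketch (a standard kernelized dual perceptron, not the authors' method) shows that a kernel predictor's evaluation cost is proportional to the number of examples with nonzero dual weight, i.e. the support vectors. All names here (`rbf`, `kernel_perceptron`, the toy data) are illustrative assumptions, not from the paper.

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two points given as tuples.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_perceptron(X, y, epochs=20):
    # Dual (kernelized) perceptron: alpha[i] counts mistakes made on example i.
    # Only examples with alpha[i] > 0 contribute to predictions -- they play
    # the role of support vectors, so their count is the prediction cost.
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            s = sum(a * yj * rbf(xj, xi)
                    for a, (xj, yj) in zip(alpha, zip(X, y)) if a)
            if yi * s <= 0:          # mistake (or undecided): add this example
                alpha[i] += 1
    return alpha

# Toy data: two well-separated clusters.
X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
y = [-1, -1, 1, 1]
alpha = kernel_perceptron(X, y)
support = [i for i, a in enumerate(alpha) if a > 0]
print(len(support), "of", len(X), "examples act as support vectors")
```

On this toy problem only a subset of the training set ends up with nonzero dual weight; the paper's contribution is a training procedure whose support-vector count is optimal up to constants, with no asymptotic overhead relative to standard kernel SVM training.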

Original language: American English
Pages: 266-274
Number of pages: 9
State: Published - 2013
Event: 30th International Conference on Machine Learning, ICML 2013 - Atlanta, GA, United States
Duration: 16 Jun 2013 - 21 Jun 2013

Conference

Conference: 30th International Conference on Machine Learning, ICML 2013
Country/Territory: United States
City: Atlanta, GA
Period: 16/06/13 - 21/06/13
