SVM optimization: Inverse dependence on training set size

Shai Shalev-Shwartz*, Nathan Srebro

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

183 Scopus citations

Abstract

We discuss how the runtime of SVM optimization should decrease as the size of the training data increases. We present theoretical and empirical results demonstrating how a simple subgradient descent approach indeed displays such behavior, at least for linear kernels.
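The "simple subgradient descent approach" the abstract refers to can be illustrated with a minimal sketch. This is stochastic subgradient descent on the regularized hinge-loss objective for a linear SVM, using the well-known 1/(λt) step size; it is an assumed standard formulation for illustration, not necessarily the authors' exact procedure.

```python
import numpy as np

def svm_sgd(X, y, lam=0.1, n_iters=1000, seed=0):
    """Stochastic subgradient descent for a linear SVM (illustrative sketch).

    Minimizes (lam/2) * ||w||^2 + (1/m) * sum_i max(0, 1 - y_i <w, x_i>)
    by sampling one example per step and taking a subgradient step
    with step size 1 / (lam * t).
    """
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(m)            # pick a random training example
        eta = 1.0 / (lam * t)          # decreasing step size
        margin = y[i] * (X[i] @ w)
        # Subgradient of the regularized hinge loss at example i:
        # shrink w toward zero; also push toward x_i if the margin is violated.
        w = (1.0 - eta * lam) * w
        if margin < 1.0:
            w += eta * y[i] * X[i]
    return w
```

Because each step touches a single example, the per-iteration cost is independent of the training set size, which is the property that makes an inverse runtime dependence on data size possible in the first place.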

Original language: English
Title of host publication: Proceedings of the 25th International Conference on Machine Learning
Pages: 928-935
Number of pages: 8
State: Published - 2008
Externally published: Yes
Event: 25th International Conference on Machine Learning - Helsinki, Finland
Duration: 5 Jul 2008 - 9 Jul 2008

Publication series

Name: Proceedings of the 25th International Conference on Machine Learning

Conference

Conference: 25th International Conference on Machine Learning
Country/Territory: Finland
City: Helsinki
Period: 5/07/08 - 9/07/08
