Learning sparse low-threshold linear classifiers

Sivan Sabato, Shai Shalev-Shwartz, Nathan Srebro, Daniel Hsu, Tong Zhang

Research output: Contribution to journal › Article › peer-review


Abstract

We consider the problem of learning a non-negative linear classifier with an ℓ1-norm of at most k, and a fixed threshold, under the hinge loss. This problem generalizes the problem of learning a k-monotone disjunction. We prove that we can learn efficiently in this setting, at a rate which is linear in both k and the size of the threshold, and that this is the best possible rate. We provide an efficient online learning algorithm that achieves the optimal rate, and show that in the batch case, empirical risk minimization achieves this rate as well. The rates we show are tighter than the uniform convergence rate, which grows with k².
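
For intuition, the sketch below illustrates the setup described in the abstract: a non-negative weight vector w with ||w||_1 ≤ k, a fixed threshold θ, and the hinge loss max(0, 1 − y(⟨w, x⟩ − θ)). The projected-subgradient ERM routine, the rescaling step, and all parameter choices (step size, iteration count) are illustrative assumptions only; they are not the algorithm or analysis developed in the paper.

```python
# Minimal sketch of the learning setting: non-negative linear classifiers
# with ||w||_1 <= k, a fixed threshold theta, and the hinge loss.
# The learner below is a generic projected-subgradient illustration,
# not the method analyzed in the paper.
import numpy as np

def hinge_loss(w, theta, x, y):
    """Hinge loss of the classifier sign(<w, x> - theta) on example (x, y), y in {-1, +1}."""
    return max(0.0, 1.0 - y * (np.dot(w, x) - theta))

def clip_and_rescale(w, k):
    """Map w into {w : w >= 0, ||w||_1 <= k} by clipping negatives and rescaling.
    (A simple heuristic, not the exact Euclidean projection.)"""
    w = np.maximum(w, 0.0)
    s = w.sum()
    return w if s <= k else w * (k / s)

def erm_subgradient(X, y, k, theta, steps=1000, lr=0.1):
    """Hypothetical ERM-style learner: subgradient descent on the average hinge loss,
    keeping w in the constraint set after each step."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = np.zeros(d)
        for xi, yi in zip(X, y):
            if yi * (np.dot(xi, w) - theta) < 1.0:  # margin violated: subgradient is -y*x
                grad -= yi * xi
        w = clip_and_rescale(w - lr * grad / n, k)
    return w
```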

Original language: English
Pages (from-to): 1275-1304
Number of pages: 30
Journal: Journal of Machine Learning Research
Volume: 16
State: Published - Jul 2015

Bibliographical note

Publisher Copyright:
© 2015 Sivan Sabato, Shai Shalev-Shwartz, Nathan Srebro, Daniel Hsu, and Tong Zhang.

Keywords

  • Empirical risk minimization
  • Linear classifiers
  • Monotone disjunctions
  • Online learning
  • Uniform convergence
