Stochastic methods for ℓ1 regularized loss minimization

Shai Shalev-Shwartz*, Ambuj Tewari

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

79 Scopus citations

Abstract

We describe and analyze two stochastic methods for ℓ1 regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration, while the second updates the entire weight vector but uses only a single training example at each iteration. In both methods, the feature or example is chosen uniformly at random. Our theoretical runtime analysis suggests that the stochastic methods should outperform state-of-the-art deterministic approaches, including their deterministic counterparts, when the problem size is large. We demonstrate the advantage of the stochastic methods in experiments on synthetic and natural data sets.
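
As a rough illustration of the first method (one randomly chosen feature updated per iteration), here is a minimal Python sketch of stochastic coordinate descent for the Lasso objective (1/2m)||Xw - y||^2 + λ||w||_1. The function names and the exact per-coordinate soft-thresholding update for squared loss are assumptions made for illustration; the paper's algorithm and analysis cover general smooth losses and are not reproduced here.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of t * |.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def stochastic_coordinate_descent(X, y, lam, n_iters=10_000, seed=0):
    """Minimize (1/2m)||Xw - y||^2 + lam * ||w||_1 by exactly
    minimizing over one uniformly random coordinate per iteration.
    Illustrative sketch only; squared loss is assumed."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    residual = y.astype(float).copy()   # equals y - X @ w, maintained incrementally
    col_sq = (X ** 2).sum(axis=0) / m   # per-coordinate curvature (1/m)||x_j||^2
    for _ in range(n_iters):
        j = rng.integers(d)             # feature chosen uniformly at random
        if col_sq[j] == 0.0:
            continue                    # all-zero column: nothing to update
        # rho = (1/m) x_j^T (y - X w + x_j w_j): correlation with the partial residual
        rho = X[:, j] @ residual / m + col_sq[j] * w[j]
        w_new = soft_threshold(rho, lam) / col_sq[j]
        residual += X[:, j] * (w[j] - w_new)   # O(m) residual refresh
        w[j] = w_new
    return w
```

Each iteration touches a single column of X and so costs O(m) regardless of the dimension d, which is one reason such per-feature updates can be attractive on large problems, as the abstract suggests.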

Original language: English
Title of host publication: Proceedings of the 26th International Conference On Machine Learning, ICML 2009
Pages: 929-936
Number of pages: 8
State: Published - 2009
Externally published: Yes
Event: 26th International Conference On Machine Learning, ICML 2009 - Montreal, QC, Canada
Duration: 14 Jun 2009 - 18 Jun 2009

Publication series

Name: Proceedings of the 26th International Conference On Machine Learning, ICML 2009

Conference

Conference: 26th International Conference On Machine Learning, ICML 2009
Country/Territory: Canada
City: Montreal, QC
Period: 14/06/09 - 18/06/09

