TY - CPAPER
T1 - Stochastic methods for ℓ1 regularized loss minimization
AU - Shalev-Shwartz, Shai
AU - Tewari, Ambuj
PY - 2009
Y1 - 2009
AB - We describe and analyze two stochastic methods for ℓ1 regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration, while the second updates the entire weight vector but uses only a single training example per iteration. In both methods, the feature or example is chosen uniformly at random. Our theoretical runtime analysis suggests that the stochastic methods should outperform state-of-the-art deterministic approaches, including their deterministic counterparts, when the problem size is large. We demonstrate the advantage of the stochastic methods in experiments on synthetic and natural data sets.
UR - http://www.scopus.com/inward/record.url?scp=71149119963&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:71149119963
SN - 9781605585161
T3 - Proceedings of the 26th International Conference on Machine Learning, ICML 2009
SP - 929
EP - 936
BT - Proceedings of the 26th International Conference on Machine Learning, ICML 2009
T2 - 26th International Conference on Machine Learning, ICML 2009
Y2 - 14 June 2009 through 18 June 2009
ER -