TY - JOUR
T1 - More data speeds up training time in learning halfspaces over sparse vectors
AU - Daniely, Amit
AU - Linial, Nati
AU - Shalev-Shwartz, Shai
PY - 2013
Y1 - 2013
N2 - The increased availability of data in recent years has led several authors to ask whether it is possible to use data as a computational resource. That is, if more data is available, beyond the sample complexity limit, is it possible to use the extra examples to speed up the computation time required to perform the learning task? We give the first positive answer to this question for a natural supervised learning problem: we consider agnostic PAC learning of halfspaces over 3-sparse vectors in {-1, 1, 0}^n. This class is inefficiently learnable using O(n/ε^2) examples. Our main contribution is a novel, non-cryptographic methodology for establishing computational-statistical gaps, which allows us to show that, under a widely believed assumption that refuting random 3CNF formulas is hard, it is impossible to efficiently learn this class using only O(n/ε^2) examples. We further show that under stronger hardness assumptions, even O(n^1.499/ε^2) examples do not suffice. On the other hand, we show a new algorithm that learns this class efficiently using Ω̃(n^2/ε^2) examples. This formally establishes the tradeoff between sample and computational complexity for a natural supervised learning problem.
AB - The increased availability of data in recent years has led several authors to ask whether it is possible to use data as a computational resource. That is, if more data is available, beyond the sample complexity limit, is it possible to use the extra examples to speed up the computation time required to perform the learning task? We give the first positive answer to this question for a natural supervised learning problem: we consider agnostic PAC learning of halfspaces over 3-sparse vectors in {-1, 1, 0}^n. This class is inefficiently learnable using O(n/ε^2) examples. Our main contribution is a novel, non-cryptographic methodology for establishing computational-statistical gaps, which allows us to show that, under a widely believed assumption that refuting random 3CNF formulas is hard, it is impossible to efficiently learn this class using only O(n/ε^2) examples. We further show that under stronger hardness assumptions, even O(n^1.499/ε^2) examples do not suffice. On the other hand, we show a new algorithm that learns this class efficiently using Ω̃(n^2/ε^2) examples. This formally establishes the tradeoff between sample and computational complexity for a natural supervised learning problem.
UR - http://www.scopus.com/inward/record.url?scp=84899017109&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:84899017109
SN - 1049-5258
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 27th Annual Conference on Neural Information Processing Systems, NIPS 2013
Y2 - 5 December 2013 through 10 December 2013
ER -