Complexity theoretic limitations on learning DNF's

Research output: Contribution to journal › Conference article › peer-review



Using the recently developed framework of Daniely et al. (2014), we show that under a natural assumption on the complexity of random K-SAT, learning DNF formulas is hard. Furthermore, the same assumption implies the hardness of various learning problems, including intersections of ω(log(n)) halfspaces, agnostically learning conjunctions, as well as virtually all (distribution free) learning problems that were previously shown hard (under various complexity assumptions).
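To make the hypothesis class in question concrete: a DNF formula over n boolean variables is an OR of terms, each term an AND of literals (variables or their negations). The sketch below is illustrative only and is not taken from the paper; the representation (signed 1-based indices for literals) is an assumption made here for clarity.

```python
def eval_dnf(terms, x):
    """Evaluate a DNF formula on a 0/1 assignment x.

    `terms` is a list of terms; each term is a list of signed 1-based
    indices, where +i stands for the literal x_i and -i for NOT x_i.
    The formula is the OR of its terms; a term is the AND of its literals.
    (Hypothetical representation chosen for this sketch.)
    """
    for term in terms:
        # A term is satisfied when every literal in it is satisfied.
        if all(x[abs(lit) - 1] == (1 if lit > 0 else 0) for lit in term):
            return 1  # one satisfied term makes the whole OR true
    return 0

# Example: (x1 AND NOT x2) OR (x2 AND x3)
formula = [[1, -2], [2, 3]]
```

The hardness result concerns learning an unknown formula of this form from labeled examples, in the distribution-free PAC sense.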

Original language: English
Pages (from-to): 815-830
Number of pages: 16
Journal: Proceedings of Machine Learning Research
Issue number: June
State: Published - 6 Jun 2016
Event: 29th Conference on Learning Theory, COLT 2016 - New York, United States
Duration: 23 Jun 2016 – 26 Jun 2016

Bibliographical note

Funding Information:
Amit Daniely was a recipient of the Google Europe Fellowship in Learning Theory, and this research was supported in part by this Google Fellowship. Shai Shalev-Shwartz is supported by the Israeli Science Foundation grant number 590-10. We thank Uri Feige, Guy Kindler and Nati Linial for valuable discussions.

Publisher Copyright:
© 2016 A. Daniely & S. Shalev-Shwartz.


Keywords:

  • DNFs
  • Hardness of learning


