Abstract
Several recent works have shown separation results between deep neural networks and hypothesis classes with inferior approximation capacity, such as shallow networks or kernel classes. On the other hand, the fact that deep networks can efficiently express a target function does not mean that this target function can be learned efficiently by deep neural networks. In this work we study the intricate connection between learnability and approximation capacity. We show that learnability of a target function with deep networks depends on the ability of simpler classes to approximate the target. Specifically, we show that a necessary condition for a function to be learnable by gradient descent on deep neural networks is that the function can be approximated, at least in a weak sense, by shallow neural networks. We also show that a class of functions can be learned by an efficient statistical query algorithm if and only if it can be approximated in a weak sense by some kernel class. We give several examples of functions that demonstrate depth separation, and conclude that they cannot be efficiently learned, even by a hypothesis class that can efficiently approximate them.
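One way to read the "weak approximation" requirement mentioned above is as a non-negligible correlation condition; the sketch below is an illustrative formalization under this assumption, not necessarily the exact definition used in the paper. A hypothesis class $\mathcal{H}$ (e.g., shallow networks or a kernel class) weakly approximates a target $f$ over an input distribution $\mathcal{D}$ on inputs of dimension $d$ if

$$\exists\, h \in \mathcal{H} \ \text{such that} \ \mathbb{E}_{x \sim \mathcal{D}}\big[f(x)\,h(x)\big] \ \ge\ \frac{1}{\mathrm{poly}(d)},$$

i.e., some member of the class correlates with $f$ noticeably better than chance, even if no member approximates $f$ to small error. The necessary conditions stated in the abstract, for gradient-descent learnability (with $\mathcal{H}$ taken to be shallow networks) and for statistical-query learnability (with $\mathcal{H}$ a kernel class), can be read in this spirit.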
Original language | English |
---|---|
Pages (from-to) | 3265-3295 |
Number of pages | 31 |
Journal | Proceedings of Machine Learning Research |
Volume | 134 |
State | Published - 2021 |
Event | 34th Conference on Learning Theory, COLT 2021, Boulder, United States, 15–19 Aug 2021 |
Bibliographical note
Funding Information: This research is supported by the European Research Council (TheoryDL project) and by European Research Council (ERC) grant 754705.
Publisher Copyright:
© 2021 O. Shamir.