Abstract
It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical networks than with shallow ones. Despite the vast empirical evidence supporting this belief, theoretical justifications to date are limited. In particular, they do not account for the locality, sharing and pooling constructs of convolutional networks, the most successful deep learning architecture to date. In this work we derive a deep network architecture based on arithmetic circuits that inherently employs locality, sharing and pooling. An equivalence between the networks and hierarchical tensor factorizations is established. We show that a shallow network corresponds to CP (rank-1) decomposition, whereas a deep network corresponds to Hierarchical Tucker decomposition. Using tools from measure theory and matrix algebra, we prove that, besides a negligible set, all functions that can be implemented by a deep network of polynomial size require exponential size in order to be realized (or even approximated) by a shallow network. Since log-space computation transforms our networks into SimNets, the result applies directly to a deep learning architecture that has demonstrated promising empirical performance. The construction and theory developed in this paper shed new light on various practices and ideas employed by the deep learning community.
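For concreteness, the shallow case admits a compact closed form. The following is a minimal sketch in standard CP notation; the symbols $Z$, $\lambda_z$ and $\mathbf{a}^{z,i}$ are the conventional ones for a rank-$Z$ CP decomposition of an order-$N$ coefficient tensor, not notation fixed by this abstract:

```latex
% Shallow network <-> CP decomposition: the coefficient tensor factors
% as a sum of Z rank-1 terms (one per hidden unit).
\[
  \mathcal{A} \;=\; \sum_{z=1}^{Z} \lambda_z \,
  \mathbf{a}^{z,1} \otimes \mathbf{a}^{z,2} \otimes \cdots \otimes \mathbf{a}^{z,N}
\]
% The log-space identity underlying the SimNet connection:
% products in the arithmetic circuit become sums after taking logs.
\[
  \log \prod_{i=1}^{N} x_i \;=\; \sum_{i=1}^{N} \log x_i
\]
```

In the deep case, loosely speaking, this single sum of rank-1 terms is replaced by a tree of nested factorizations, which is what the Hierarchical Tucker decomposition formalizes.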
| Original language | American English |
| --- | --- |
| Pages (from-to) | 698-728 |
| Number of pages | 31 |
| Journal | Journal of Machine Learning Research |
| Volume | 49 |
| Issue number | June |
| State | Published - 6 Jun 2016 |
| Event | 29th Conference on Learning Theory, COLT 2016 - New York, United States. Duration: 23 Jun 2016 → 26 Jun 2016 |
Bibliographical note
Funding Information: Amnon Shashua would like to thank Tomaso Poggio and Shai S. Shwartz for illuminating discussions during the preparation of this manuscript. We would also like to thank Tomer Galanti, Tamir Hazan and Lior Wolf for commenting on draft versions of the paper. The work is partly funded by Intel grant ICRI-CI no. 9-2012-6133 and by ISF Center grant 1790/12. Nadav Cohen is supported by a Google Fellowship in Machine Learning.
Publisher Copyright:
© 2016 N. Cohen, O. Sharir & A. Shashua.
Keywords
- Arithmetic Circuits
- Deep Learning
- Expressive Power
- Tensor Decompositions