Hardness of learning neural networks with natural weights

Amit Daniely, Gal Vardi

Research output: Contribution to journal › Conference article › peer-review


Abstract

Neural networks are nowadays highly successful despite strong hardness results. The existing hardness results focus on the network architecture and assume that the network’s weights are arbitrary. A natural approach to settling this discrepancy is to assume that the network’s weights are “well-behaved” and possess some generic properties that may allow efficient learning. This approach is supported by the intuition that the weights in real-world networks are not arbitrary, but exhibit some “random-like” properties with respect to some “natural” distributions. We prove negative results in this regard, and show that for depth-2 networks and many “natural” weight distributions, such as the normal and the uniform distribution, most networks are hard to learn. Namely, there is no efficient learning algorithm that is provably successful for most weights and every input distribution. This implies that there is no generic property that holds with high probability in such random networks and allows efficient learning.
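
As a rough illustration of the setting (not taken from the paper’s formal definitions), the sketch below constructs a depth-2 ReLU network whose weights are drawn from a standard normal distribution, i.e., a network with “natural” random weights of the kind the hardness result concerns. The input dimension, hidden width, and ReLU activation are assumptions chosen for illustration only.

```python
import numpy as np

# Hypothetical illustration: a depth-2 network f(x) = sum_i u_i * ReLU(<w_i, x>)
# whose weights are sampled from a "natural" distribution (here: standard normal).
# Dimensions are arbitrary choices, not taken from the paper.
rng = np.random.default_rng(0)
d, k = 100, 50                 # input dimension and hidden width (assumed)
W = rng.normal(size=(k, d))    # hidden-layer weights ~ N(0, 1)
u = rng.normal(size=k)         # output-layer weights ~ N(0, 1)

def depth2_net(x):
    """Evaluate the random depth-2 ReLU network on input x."""
    return u @ np.maximum(W @ x, 0.0)

x = rng.normal(size=d)
print(depth2_net(x))
```

The hardness statement concerns learning such networks: with high probability over weights drawn this way, no efficient algorithm succeeds for every input distribution.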

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: 6 Dec 2020 – 12 Dec 2020

Bibliographical note

Publisher Copyright:
© 2020 Neural information processing systems foundation. All rights reserved.
