Most Neural Networks Are Almost Learnable

Amit Daniely, Nathan Srebro, Gal Vardi

Research output: Contribution to journal › Conference article › peer-review

Abstract

We present a PTAS for learning random constant-depth networks. We show that for any fixed ϵ > 0 and depth i, there is a poly-time algorithm that for any distribution on √d · S^(d−1) learns random Xavier networks of depth i, up to an additive error of ϵ. The algorithm runs in time and sample complexity of (d̄)^poly(1/ϵ), where d̄ is the size of the network. For some cases of sigmoid and ReLU-like activations the bound can be improved to (d̄)^polylog(1/ϵ), resulting in a quasi-poly-time algorithm for learning constant-depth random networks.
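To make the learning setup concrete, the following is a minimal sketch (not the paper's algorithm) of the target objects the abstract describes: a random constant-depth network with Xavier-initialized weights, evaluated on inputs drawn from the scaled sphere √d · S^(d−1). The widths, depth, and ReLU activation here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_network(widths, rng):
    """Sample one weight matrix per layer with Xavier-style initialization:
    entries i.i.d. N(0, 1/fan_in)."""
    return [rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_out, fan_in))
            for fan_in, fan_out in zip(widths[:-1], widths[1:])]

def forward(weights, x, act=lambda z: np.maximum(z, 0.0)):
    """Evaluate the network on input x, with ReLU activations on hidden layers."""
    for W in weights[:-1]:
        x = act(W @ x)
    return weights[-1] @ x

def sample_scaled_sphere(d, n, rng):
    """Draw n inputs from sqrt(d) * S^(d-1), i.e. vectors of norm sqrt(d)."""
    g = rng.normal(size=(n, d))
    return np.sqrt(d) * g / np.linalg.norm(g, axis=1, keepdims=True)

d = 16
weights = xavier_network([d, 32, 32, 1], rng)   # a random depth-3 target network
X = sample_scaled_sphere(d, 5, rng)             # 5 inputs on the scaled sphere
y = np.array([forward(weights, x)[0] for x in X])  # labels produced by the target
```

A learner in this model receives samples (x, y) of exactly this form and must output a predictor within additive error ϵ of the random target.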

Original language: American English
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 – 16 Dec 2023
Conference number: 37

Bibliographical note

Publisher Copyright:
© 2023 Neural information processing systems foundation. All rights reserved.
