Abstract
We present a PTAS for learning random constant-depth networks. We show that for any fixed ϵ > 0 and depth i, there is a poly-time algorithm that, for any distribution on √d · S^{d−1}, learns random Xavier networks of depth i up to an additive error of ϵ. The algorithm runs in time and sample complexity (d̄)^{poly(ϵ^{−1})}, where d̄ is the size of the network. For some cases of sigmoid and ReLU-like activations, the bound can be improved to (d̄)^{polylog(ϵ^{−1})}, resulting in a quasi-poly-time algorithm for learning constant-depth random networks.
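The setting described above (not the paper's learning algorithm itself) can be illustrated with a minimal sketch: inputs drawn from √d · S^{d−1}, i.e. the sphere of radius √d, fed through a network whose weights use Xavier initialization. All widths, the depth, and the tanh activation below are hypothetical choices for illustration.

```python
import numpy as np

def sample_sphere_input(d, rng):
    # A point on sqrt(d) * S^{d-1}: uniform on the sphere of radius sqrt(d)
    x = rng.standard_normal(d)
    return np.sqrt(d) * x / np.linalg.norm(x)

def xavier_network(widths, rng):
    # Random Xavier weights: entries ~ N(0, 1/fan_in) for each layer
    return [rng.standard_normal((m, n)) / np.sqrt(n)
            for n, m in zip(widths[:-1], widths[1:])]

def forward(weights, x, act=np.tanh):
    # Constant-depth network: activations on hidden layers, linear output
    for W in weights[:-1]:
        x = act(W @ x)
    return weights[-1] @ x

rng = np.random.default_rng(0)
d = 64
weights = xavier_network([d, 128, 128, 1], rng)  # depth-3 network (hypothetical sizes)
y = forward(weights, sample_sphere_input(d, rng))
```

With Xavier scaling, pre-activations at each layer stay O(1) in expectation for inputs of norm √d divided across d coordinates, which is the regime the result concerns.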
Original language | English
---|---
Journal | Advances in Neural Information Processing Systems |
Volume | 36 |
State | Published - 2023 |
Event | 37th Conference on Neural Information Processing Systems, NeurIPS 2023 (conference number: 37), New Orleans, United States, 10 Dec 2023 – 16 Dec 2023
Bibliographical note
Publisher Copyright: © 2023 Neural Information Processing Systems Foundation. All rights reserved.