Abstract
The lottery ticket hypothesis (Frankle and Carbin, 2018) states that a randomly-initialized network contains a small subnetwork that, when trained in isolation, can match the performance of the original network. We prove an even stronger hypothesis (as was also conjectured in Ramanujan et al., 2019), showing that for every bounded distribution and every target network with bounded weights, a sufficiently over-parameterized neural network with random weights contains a subnetwork with roughly the same accuracy as the target network, without any further training.
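The claim can be illustrated at toy scale: fix a target function, draw a random network, never train its weights, and search only over binary pruning masks. The sketch below is a hypothetical illustration (brute-force mask search, a made-up linear target, NumPy), not the paper's construction or proof; since the all-ones mask is one of the candidates, the best subnetwork can never do worse than the full random network.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: a simple linear function f(x) = w_star . x
w_star = np.array([0.5, -0.3])

# Random two-layer ReLU network g(x) = v . relu(W x); its weights are never trained.
W = rng.normal(size=(4, 2))
v = rng.normal(size=4)

X = rng.normal(size=(200, 2))  # evaluation points
y = X @ w_star                 # target outputs

def masked_mse(mw, mv):
    """MSE of the subnetwork selected by binary masks mw (on W) and mv (on v)."""
    h = np.maximum(X @ (W * mw.reshape(4, 2)).T, 0.0)
    return np.mean((h @ (v * mv) - y) ** 2)

# Brute-force search over all binary masks (feasible only at this toy scale).
best_err = np.inf
for bits_w in itertools.product([0.0, 1.0], repeat=8):
    mw = np.array(bits_w)
    for bits_v in itertools.product([0.0, 1.0], repeat=4):
        err = masked_mse(mw, np.array(bits_v))
        best_err = min(best_err, err)

full_err = masked_mse(np.ones(8), np.ones(4))
print(f"full network MSE: {full_err:.4f}, best subnetwork MSE: {best_err:.4f}")
```

In the theorem, over-parameterization guarantees that some mask yields low error; here the width is too small for a tight fit, but pruning alone still improves on the unpruned random network.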
| Original language | English |
|---|---|
| Journal | Proceedings of Machine Learning Research |
| Volume | 119 |
| State | Published - 2020 |
| Event | 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online |
| Duration | 13 Jul 2020 → 18 Jul 2020 |
Bibliographical note
Publisher Copyright: © 2020 by the author(s).