Abstract
The lottery ticket hypothesis (Frankle & Carbin, 2018) states that a randomly initialized network contains a small subnetwork that, when trained in isolation, can match the performance of the original network. We prove an even stronger hypothesis (as was also conjectured in Ramanujan et al., 2019), showing that for every bounded distribution and every target network with bounded weights, a sufficiently over-parameterized neural network with random weights contains a subnetwork with roughly the same accuracy as the target network, without any further training.
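A toy sketch of the intuition behind such "pruning only" results (the paper's actual construction differs; all names here are illustrative): a random vector of candidate weights plays the role of an over-parameterized network, and we search for a binary mask whose masked sum approximates a target weight, without ever training the weights.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Random "network" weights: with enough of them, some subset-sum
# lands close to almost any bounded target weight.
n = 12
candidates = rng.uniform(-1, 1, size=n)
target = 0.37  # a weight of the hypothetical "target network"

# Brute-force over all 2^n binary masks (pruning patterns); no
# candidate weight is ever updated, only kept or removed.
best_mask, best_err = None, float("inf")
for bits in itertools.product([0, 1], repeat=n):
    err = abs(np.dot(bits, candidates) - target)
    if err < best_err:
        best_mask, best_err = bits, err

print(best_err)  # approximation error of the pruned subnetwork
```

With only 12 random weights there are 4096 pruning patterns, and the best one already approximates the target closely; the theorem makes this quantitative for entire networks rather than single weights.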
Original language | English |
---|---|
Title of host publication | 37th International Conference on Machine Learning, ICML 2020 |
Editors | Hal Daumé III, Aarti Singh |
Publisher | International Machine Learning Society (IMLS) |
Pages | 6638-6647 |
Number of pages | 10 |
ISBN (Electronic) | 9781713821120 |
State | Published - 1 Mar 2020 |
Externally published | Yes |
Event | 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online (13 Jul 2020 → 18 Jul 2020) |
Publication series
Name | 37th International Conference on Machine Learning, ICML 2020 |
---|---|
Volume | PartF168147-9 |
Conference
Conference | 37th International Conference on Machine Learning, ICML 2020 |
---|---|
City | Virtual, Online |
Period | 13/07/20 → 18/07/20 |
Bibliographical note
Funding Information: This research was supported by the European Research Council (TheoryDL project) and by European Research Council (ERC) grant 754705.
Publisher Copyright:
© 2020 37th International Conference on Machine Learning, ICML 2020. All rights reserved.