Abstract
We provide a theoretical investigation of curriculum learning in the context of stochastic gradient descent when optimizing the convex linear regression loss. We prove that the rate of convergence of an ideal curriculum learning method decreases monotonically with the difficulty of the examples. Moreover, among all equally difficult points, convergence is faster when using points that incur a higher loss with respect to the current hypothesis. We then analyze curriculum learning in the context of training a CNN. We describe a method that infers the curriculum by way of transfer learning from another network, pre-trained on a different task. While this approach can only approximate the ideal curriculum, we empirically observe behavior similar to that predicted by the theory, namely a significant boost in convergence speed at the beginning of training. When the task is made more difficult, an improvement in generalization performance is also observed. Finally, curriculum learning exhibits robustness against unfavorable conditions such as excessive regularization.
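For intuition, the following is a minimal NumPy sketch of the theoretical setting the abstract describes: SGD on the convex linear regression loss, where each point's difficulty is approximated by its loss under the (estimated) optimal hypothesis, and a "curriculum" simply visits easier points first. This is an illustrative assumption-laden toy, not the paper's code: the data sizes, learning rate, and the one-shot easy-to-hard sort are placeholders (the paper's method paces a sampling distribution and, for CNNs, scores difficulty via a transfer-learned network).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X @ w_star + noise (sizes are illustrative).
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + rng.normal(scale=0.5, size=n)

# "Ideal" difficulty of a point: its loss under the optimal hypothesis,
# approximated here by the least-squares solution.
w_opt, *_ = np.linalg.lstsq(X, y, rcond=None)
difficulty = (X @ w_opt - y) ** 2

def sgd(order, lr=0.01, epochs=5):
    """Plain SGD on the squared loss, visiting points in `order` each epoch."""
    w = np.zeros(d)
    dists = []
    for _ in range(epochs):
        for i in order:
            grad = 2 * (X[i] @ w - y[i]) * X[i]  # gradient of (x_i . w - y_i)^2
            w -= lr * grad
            dists.append(np.linalg.norm(w - w_opt))
    return dists

curriculum = np.argsort(difficulty)   # easy -> hard ordering
random_order = rng.permutation(n)     # vanilla-SGD baseline

d_cur = sgd(curriculum)
d_rand = sgd(random_order)
print(f"dist to w_opt after 1 epoch: curriculum={d_cur[n-1]:.3f}, random={d_rand[n-1]:.3f}")
```

Under this toy setup, the easy-first ordering tends to track the optimal hypothesis more closely early in training, mirroring the early-training speedup the theory predicts.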
Original language | English
---|---
Title of host publication | 35th International Conference on Machine Learning, ICML 2018
Editors | Jennifer Dy, Andreas Krause
Publisher | International Machine Learning Society (IMLS)
Pages | 8331-8339
Number of pages | 9
ISBN (Electronic) | 9781510867963
State | Published - 2018
Event | 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden
Duration | 10 Jul 2018 → 15 Jul 2018
Publication series
Name | 35th International Conference on Machine Learning, ICML 2018
---|---
Volume | 12
Conference
Conference | 35th International Conference on Machine Learning, ICML 2018
---|---
Country/Territory | Sweden
City | Stockholm
Period | 10/07/18 → 15/07/18
Bibliographical note
Publisher Copyright: © 35th International Conference on Machine Learning, ICML 2018. All Rights Reserved.