Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks

Daphna Weinshall*, Gad Cohen, Dan Amir

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

98 Scopus citations

Abstract

We provide a theoretical investigation of curriculum learning in the context of stochastic gradient descent when optimizing the convex linear regression loss. We prove that the rate of convergence of an ideal curriculum learning method decreases monotonically with the difficulty of the examples; that is, training on easier examples yields faster convergence. Moreover, among all equally difficult points, convergence is faster when using points which incur a higher loss with respect to the current hypothesis. We then analyze curriculum learning in the context of training a CNN. We describe a method which infers the curriculum by way of transfer learning from another network, pre-trained on a different task. While this approach can only approximate the ideal curriculum, we empirically observe behavior similar to that predicted by the theory, namely, a significant boost in convergence speed at the beginning of training. When the task is made more difficult, an improvement in generalization performance is also observed. Finally, curriculum learning exhibits robustness against unfavorable conditions such as excessive regularization.
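The curriculum-by-transfer idea described above can be illustrated with a minimal sketch: a pre-trained "teacher" model scores each training example by how confidently it predicts the true label, and training then draws mini-batches from a pool that grows from the easiest examples to the full data set. This is only an illustrative sketch under assumed names (teacher_predict_proba, difficulty_scores, curriculum_batches, and the fixed-stage pacing schedule are hypothetical), not the authors' exact procedure, which derives difficulty from a classifier built on features of a network pre-trained on a different task.

```python
import numpy as np

def difficulty_scores(teacher_predict_proba, X, y):
    # Difficulty = 1 - confidence that a pre-trained "teacher" model
    # assigns to the true label (low score = easy example).
    probs = teacher_predict_proba(X)              # shape (n_examples, n_classes)
    true_class_conf = probs[np.arange(len(y)), y]
    return 1.0 - true_class_conf

def curriculum_batches(X, y, scores, n_stages=5, batch_size=64, seed=0):
    # Yield mini-batches from a pool of examples that grows from the
    # easiest fraction of the data up to the full (hardest) training set.
    rng = np.random.default_rng(seed)
    order = np.argsort(scores)                    # easiest first
    for stage in range(1, n_stages + 1):
        pool = order[: len(order) * stage // n_stages].copy()
        rng.shuffle(pool)                         # shuffle within the current pool
        for start in range(0, len(pool), batch_size):
            idx = pool[start:start + batch_size]
            yield X[idx], y[idx]
```

How quickly the pool grows (the pacing schedule) is a design choice; the sketch uses a simple fixed number of equal stages, whereas in practice the schedule is typically tuned alongside the learning rate.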

Original language: English
Pages (from-to): 5238-5246
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 80
State: Published - 2018
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018

Bibliographical note

Publisher Copyright:
© 2018 by the author(s).
