Neural networks exhibit good generalization behavior in the over-parameterized regime, where the number of network parameters exceeds the number of observations. Nonetheless, current generalization bounds for neural networks fail to explain this phenomenon. In an attempt to bridge this gap, we study the problem of learning a two-layer over-parameterized neural network, when the data is generated by a linearly separable function. In the case where the network has Leaky ReLU activations and only the first layer is trained, we provide both optimization and generalization guarantees for over-parameterized networks. Specifically, we prove convergence rates of SGD to a global minimum, and provide generalization guarantees for this global minimum that are independent of the network size. Therefore, our result clearly shows that the use of SGD for optimization both finds a global minimum and avoids overfitting despite the high capacity of the model. This is the first theoretical demonstration that SGD can avoid overfitting when learning over-specified neural network classifiers.
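For illustration only (this is not the authors' code), the following minimal sketch sets up the kind of experiment the abstract describes: a two-layer network with Leaky ReLU activations whose second layer is fixed, trained by SGD on the first layer only, with labels produced by a linearly separable ground-truth vector. The dimensions, learning rate, loss (hinge), and initialization scale are arbitrary choices for the demo, not values from the paper.

```python
# Sketch: SGD on an over-parameterized two-layer Leaky ReLU network,
# training only the first layer, on linearly separable data.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 512, 100           # input dim, hidden width (k*d >> n), sample size
alpha = 0.1                      # Leaky ReLU slope (assumed value)

# Linearly separable data: labels given by a fixed ground-truth vector w_star.
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)

# First-layer weights W are trained; second-layer weights v are fixed at +/-1.
W = rng.normal(scale=0.1, size=(k, d))
v = np.concatenate([np.ones(k // 2), -np.ones(k // 2)])

def leaky_relu(z):
    return np.where(z > 0, z, alpha * z)

def forward(x):
    return v @ leaky_relu(W @ x)

lr, epochs = 0.01, 50
for epoch in range(epochs):
    for i in rng.permutation(n):
        x, label = X[i], y[i]
        pre = W @ x
        out = v @ leaky_relu(pre)
        # Hinge loss max(0, 1 - y * f(x)); subgradient step on W only.
        if label * out < 1.0:
            act_grad = np.where(pre > 0, 1.0, alpha)            # Leaky ReLU derivative
            grad_W = -label * (v * act_grad)[:, None] * x[None, :]
            W -= lr * grad_W

train_acc = np.mean(np.sign([forward(x) for x in X]) == y)
print(f"training accuracy: {train_acc:.2f}")
```

A run of this sketch only checks that SGD drives the training error to zero in an over-parameterized setting; the generalization claim in the abstract is a theoretical statement about the global minimum found, not something this toy script verifies.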
|Original language||American English|
|State||Published - 2018|
|Event||6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada|
Duration: 30 Apr 2018 → 3 May 2018
|Conference||6th International Conference on Learning Representations, ICLR 2018|
|Period||30/04/18 → 3/05/18|
Bibliographical note
Funding Information:
This research is supported by the Blavatnik Computer Science Research Fund, ISF F.I.R.S.T. (Bikura) grant and the European Research Council (TheoryDL project).
© 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings. All rights reserved.