Abstract
Neural networks have recently re-emerged as a powerful hypothesis class, yielding impressive classification accuracy in multiple domains. However, their training is a non-convex optimization problem which poses theoretical and practical challenges. Here we address this difficulty by turning to “improper” learning of neural nets: we learn a classifier that is not itself a neural net but is competitive with the best neural net model, given a sufficient number of training examples. Our approach relies on a novel kernel construction scheme in which the kernel is the result of integrating over the set of all possible instantiations of neural models. It turns out that the corresponding integral can be evaluated in closed form via a simple recursion. Thus we translate the non-convex problem of learning a neural net into training an SVM with an appropriate kernel. We also provide sample complexity results that depend on the stability of the optimal neural net.
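The recipe the abstract describes can be illustrated with a short sketch. The paper's exact kernel recursion is not reproduced here; as an illustrative stand-in, the sketch uses the arc-cosine kernel of Cho & Saul (2009), which likewise arises in closed form from integrating a ReLU-type unit over Gaussian weights and composes recursively to emulate depth. The `depth` parameter and the toy data are assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import SVC

def arccos_kernel(X, Z, depth=2):
    """Degree-1 arc-cosine kernel between rows of X and Z, composed `depth` times.

    Stand-in for a kernel obtained by integrating a ReLU unit over Gaussian
    weights; NOT the paper's exact construction.
    """
    K = X @ Z.T                     # base case: k_0(x, z) = <x, z>
    Kxx = np.sum(X * X, axis=1)     # k_0(x, x) = ||x||^2
    Kzz = np.sum(Z * Z, axis=1)
    for _ in range(depth):
        norms = np.sqrt(np.outer(Kxx, Kzz))
        # Clip to guard arccos against floating-point noise.
        cos_t = np.clip(K / np.maximum(norms, 1e-12), -1.0, 1.0)
        t = np.arccos(cos_t)
        # Closed-form Gaussian integral for one ReLU layer:
        #   k_{l+1}(x, z) = (1/pi) * sqrt(k_l(x,x) k_l(z,z)) * (sin t + (pi - t) cos t)
        K = norms * (np.sin(t) + (np.pi - t) * cos_t) / np.pi
        # For degree 1 the diagonal is invariant (t = 0 gives k(x,x) = ||x||^2),
        # so Kxx and Kzz need no update between layers.
    return K

# Toy usage: the non-convex net-training problem is replaced by a convex
# SVM fit with the precomputed kernel.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, 0] * X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(50, 10))

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(arccos_kernel(X_train, X_train), y_train)
preds = clf.predict(arccos_kernel(X_test, X_train))  # (n_test, n_train) Gram matrix
```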
| Original language | English |
|---|---|
| Pages | 1159-1167 |
| Number of pages | 9 |
| State | Published - 2016 |
| Event | 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016 - Cadiz, Spain |
| Duration | 9 May 2016 → 11 May 2016 |
Conference
| Conference | 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016 |
|---|---|
| Country/Territory | Spain |
| City | Cadiz |
| Period | 9/05/16 → 11/05/16 |
Bibliographical note
Publisher Copyright: Copyright 2016 by the authors.