Improper deep kernels

Uri Heinemann, Roi Livni, Elad Eban, Gal Elidan, Amir Globerson

Research output: Contribution to conference › Paper › peer-review

8 Scopus citations


Neural networks have recently re-emerged as a powerful hypothesis class, yielding impressive classification accuracy in multiple domains. However, their training is a non-convex optimization problem that poses theoretical and practical challenges. Here we address this difficulty by turning to “improper” learning of neural nets: we learn a classifier that is not itself a neural net but is competitive with the best neural net model, given a sufficient number of training examples. Our approach relies on a novel kernel construction scheme in which the kernel results from integration over the set of all possible instantiations of neural models. It turns out that the corresponding integral can be evaluated in closed form via a simple recursion. Thus we translate the non-convex problem of learning a neural net into training an SVM with an appropriate kernel. We also provide sample complexity results that depend on the stability of the optimal neural net.
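The abstract does not spell out the paper's recursion, but a well-known instance of the same idea — a kernel obtained in closed form by integrating over random network weights, computed layer by layer — is the degree-1 arc-cosine kernel of Cho & Saul (2009). The sketch below is illustrative of that construction, not the paper's exact kernel; the function name and depth parameter are choices made here for the example.

```python
import numpy as np

def arc_cosine_kernel(X, Y, depth=3):
    """Degree-1 arc-cosine kernel, composed `depth` times.

    Each composition corresponds to integrating over the random weights of
    one more ReLU-like layer, so the kernel of an arbitrarily deep model is
    evaluated in closed form by a simple layerwise recursion. The resulting
    Gram matrix can be handed to any SVM solver that accepts a precomputed
    kernel.
    """
    K_xy = X @ Y.T                  # k_0(x, y) = <x, y>
    K_xx = np.sum(X * X, axis=1)    # k_0(x, x); invariant under the recursion
    K_yy = np.sum(Y * Y, axis=1)    # k_0(y, y); invariant under the recursion
    for _ in range(depth):
        norms = np.sqrt(np.outer(K_xx, K_yy))
        # Clip to guard against rounding drifting slightly outside [-1, 1].
        cos_t = np.clip(K_xy / np.maximum(norms, 1e-12), -1.0, 1.0)
        theta = np.arccos(cos_t)
        # Closed-form layer update (Cho & Saul, degree 1):
        # k_{n+1}(x, y) = (1/pi) * sqrt(k_n(x,x) k_n(y,y))
        #                 * (sin(theta) + (pi - theta) cos(theta))
        K_xy = norms * (np.sin(theta) + (np.pi - theta) * cos_t) / np.pi
    return K_xy

# The Gram matrix of a "deep" kernel, without ever training a network:
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
K = arc_cosine_kernel(X, X, depth=2)
```

Because `K` is a valid (symmetric, positive semidefinite) Gram matrix, the non-convex network-training problem is replaced by the convex SVM dual over `K`, which is the translation the abstract describes.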

Original language: American English
Number of pages: 9
State: Published - 2016
Event: 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016 - Cadiz, Spain
Duration: 9 May 2016 - 11 May 2016


Conference: 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016

Bibliographical note

Publisher Copyright:
Copyright 2016 by the authors.


