On the computational efficiency of training neural networks

Roi Livni, Shai Shalev-Shwartz, Ohad Shamir

Research output: Contribution to journal › Conference article › peer-review

Abstract

It is well known that neural networks are computationally hard to train. On the other hand, in practice, modern-day neural networks are trained efficiently using SGD and a variety of tricks, including different activation functions (e.g. ReLU), over-specification (i.e., training networks that are larger than needed), and regularization. In this paper we revisit the computational complexity of training neural networks from a modern perspective. We provide both positive and negative results, some of which yield new provably efficient and practical algorithms for training certain types of neural networks.
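The abstract names the practical ingredients of modern training: SGD, ReLU activations, over-specification, and regularization. As a loose illustration only, and not an algorithm from the paper, the sketch below wires these together in PyTorch: an over-specified one-hidden-layer ReLU network trained with plain SGD and weight decay on toy data. The layer widths, learning rate, and synthetic task are all assumptions made for the example.

```python
# Minimal sketch (not from the paper): over-specified ReLU net + SGD + weight decay.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary-classification data: 200 points in R^2 labeled by the sign of x1*x2.
X = torch.randn(200, 2)
y = ((X[:, 0] * X[:, 1]) > 0).float().unsqueeze(1)

# "Over-specification": far more hidden units than the target rule needs.
model = nn.Sequential(
    nn.Linear(2, 128),  # deliberately oversized hidden layer
    nn.ReLU(),          # ReLU activation, as in modern practice
    nn.Linear(128, 1),
)

# Plain SGD; weight_decay plays the role of the regularizer.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.4f}")
```

The point of the sketch is only to make the abstract's vocabulary concrete; the paper's contribution is the complexity analysis of when such training can be provably efficient, not this recipe.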

Original language: English
Pages (from-to): 855-863
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 1
Issue number: January
State: Published - 2014
Event: 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014 - Montreal, Canada
Duration: 8 Dec 2014 - 13 Dec 2014
