Abstract
It is well known that neural networks are computationally hard to train. In practice, however, modern neural networks are trained efficiently using SGD and a variety of tricks, including different activation functions (e.g. ReLU), over-specification (i.e., training networks that are larger than needed), and regularization. In this paper we revisit the computational complexity of training neural networks from a modern perspective. We provide both positive and negative results, some of which yield new provably efficient and practical algorithms for training certain types of neural networks.
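The training recipe the abstract alludes to can be made concrete with a small sketch: SGD on an over-specified one-hidden-layer ReLU network with L2 regularization. This is a minimal illustration of those three ingredients only, not the paper's algorithm; the data, network width, and hyperparameters below are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: d-dimensional inputs, scalar targets.
d, n = 5, 200
X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d))  # arbitrary smooth target

# Over-specified one-hidden-layer ReLU network: more hidden
# units (k) than the problem plausibly needs.
k = 128  # hypothetical width, chosen for illustration
W = rng.normal(scale=0.1, size=(k, d))
v = rng.normal(scale=0.1, size=k)

lr, lam, epochs = 0.01, 1e-4, 50  # step size, L2 penalty, passes

for epoch in range(epochs):
    for i in rng.permutation(n):     # SGD: one example at a time
        x, t = X[i], y[i]
        z = W @ x                    # pre-activations
        h = np.maximum(z, 0.0)       # ReLU activation
        err = v @ h - t              # squared-loss residual
        # Gradients of 0.5*err^2 + 0.5*lam*(||W||^2 + ||v||^2)
        gv = err * h + lam * v
        gW = err * np.outer(v * (z > 0), x) + lam * W
        v -= lr * gv
        W -= lr * gW

mse = np.mean((np.maximum(X @ W.T, 0.0) @ v - y) ** 2)
print(f"final train MSE: {mse:.4f}")
```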
| Original language | English |
| --- | --- |
| Pages (from-to) | 855-863 |
| Number of pages | 9 |
| Journal | Advances in Neural Information Processing Systems |
| Volume | 1 |
| Issue number | January |
| State | Published - 2014 |
| Event | 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014 - Montreal, Canada. Duration: 8 Dec 2014 → 13 Dec 2014 |