Abstract
In recent years, approaches based on machine learning have achieved state-of-the-art performance on image restoration problems. Successful approaches include both generative models of natural images and discriminative training of deep neural networks. Discriminative training of feed-forward architectures allows explicit control over the computational cost of performing restoration and therefore often yields better performance for the same run-time cost. In contrast, generative models have the advantage that they can be trained once and then adapted to any image restoration task through a simple application of Bayes' rule. In this paper we show how to combine the strengths of both approaches by training a discriminative, feed-forward architecture to predict the state of latent variables in a generative model of natural images. We apply this idea to the highly successful Gaussian Mixture Model (GMM) of natural images. We show that it is possible to achieve performance comparable to the original GMM with a two-orders-of-magnitude improvement in run time, while retaining the advantages of generative models.
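To make the core idea concrete, the following is a minimal sketch, not the authors' implementation: the toy GMM, the patch dimensions, and the one-layer softmax "gating" classifier are all illustrative assumptions. Exact inference under a GMM patch prior evaluates every Gaussian component for every patch; the discriminative shortcut trains a small feed-forward predictor to output the latent component assignment directly from the patch.

```python
# Minimal sketch (illustrative, not the paper's architecture): a GMM prior
# over image patches assigns each patch x a latent component k.  Exact
# inference evaluates every Gaussian; a small feed-forward "gating" model
# is trained to predict k from x in a single cheap pass.
import numpy as np

rng = np.random.default_rng(0)
D, K, N = 64, 10, 5000          # patch dim (e.g. 8x8), components, patches

# --- toy GMM prior (in practice learned from clean natural-image patches) ---
pi = np.full(K, 1.0 / K)                              # mixing weights
mu = rng.normal(size=(K, D))                          # component means
A = rng.normal(size=(K, D, D)) * 0.1
Sigma = A @ A.transpose(0, 2, 1) + np.eye(D)[None]    # SPD covariances

def exact_log_posterior(X):
    """Exact (slow) log p(k | x): one Gaussian evaluation per component."""
    logps = []
    for k in range(K):
        diff = X - mu[k]
        L = np.linalg.cholesky(Sigma[k])
        sol = np.linalg.solve(L, diff.T)              # whitened residuals
        quad = np.sum(sol ** 2, axis=0)               # Mahalanobis distance
        logdet = 2.0 * np.sum(np.log(np.diag(L)))
        logps.append(np.log(pi[k])
                     - 0.5 * (quad + logdet + D * np.log(2 * np.pi)))
    logp = np.stack(logps, axis=1)
    return logp - logp.max(axis=1, keepdims=True)

# --- sample patches; exact assignments serve as training labels ---
z = rng.integers(0, K, size=N)
X = np.stack([rng.multivariate_normal(mu[k], Sigma[k]) for k in z])
y = exact_log_posterior(X).argmax(axis=1)

# --- discriminative gating: a softmax layer trained to mimic exact inference ---
W = np.zeros((D, K)); b = np.zeros(K)
for _ in range(200):
    logits = X @ W + b
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    G = P.copy(); G[np.arange(N), y] -= 1.0           # cross-entropy gradient
    W -= 0.1 * (X.T @ G) / N
    b -= 0.1 * G.mean(axis=0)

pred = (X @ W + b).argmax(axis=1)
print("gating predictor agrees with exact inference on "
      f"{(pred == y).mean():.1%} of patches")
```

In the setting the abstract describes, restoration itself still follows Bayes' rule, with the posterior over the clean image proportional to the degradation likelihood times the GMM prior; only the latent-variable inference is replaced by the fast feed-forward prediction, which is where the run-time saving comes from.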
| Original language | English |
| --- | --- |
| Pages (from-to) | 2683-2691 |
| Number of pages | 9 |
| Journal | Advances in Neural Information Processing Systems |
| Volume | 2015-January |
| State | Published - 2015 |
| Event | 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada. Duration: 7 Dec 2015 → 12 Dec 2015 |
Bibliographical note
Funding Information: Support from the ISF, Intel ICRI-CI, and the Gatsby Foundation is gratefully acknowledged.