The return of the gating network: Combining generative models and discriminative training in natural image priors

Dan Rosenbaum, Yair Weiss

Research output: Contribution to journal › Conference article › peer-review

10 Scopus citations

Abstract

In recent years, approaches based on machine learning have achieved state-of-the-art performance on image restoration problems. Successful approaches include both generative models of natural images and discriminative training of deep neural networks. Discriminative training of feed-forward architectures allows explicit control over the computational cost of performing restoration and therefore often leads to better performance at the same run-time cost. In contrast, generative models have the advantage that they can be trained once and then adapted to any image restoration task by a simple use of Bayes' rule. In this paper we show how to combine the strengths of both approaches by training a discriminative, feed-forward architecture to predict the state of latent variables in a generative model of natural images. We apply this idea to the very successful Gaussian Mixture Model (GMM) of natural images. We show that it is possible to achieve performance comparable to the original GMM but with a two-orders-of-magnitude improvement in run time, while maintaining the advantage of generative models.
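The gating idea in the abstract can be illustrated with a toy sketch: under a GMM prior over image patches, restoration first requires choosing the mixture component responsible for a patch ("gating"). Exact gating evaluates all K Gaussian likelihoods; a feed-forward gate predicts the component from a cheap feature in one pass. The parameters below (isotropic components, energy thresholds) are hypothetical simplifications for illustration, not the paper's model, which trains a discriminative network for general GMMs.

```python
import numpy as np

K, D = 4, 64                             # components, patch dimension (8x8 patches)
scales = np.array([0.5, 1.0, 2.0, 4.0])  # hypothetical isotropic covariances s_k * I

def exact_gating(x):
    """Pick argmax_k log N(x; 0, s_k I) with uniform mixing weights.
    Cost is K full likelihood evaluations -- the expensive step."""
    energy = x @ x
    scores = -0.5 * (energy / scales + D * np.log(scales))
    return int(np.argmax(scores))

# A "gating network" in its simplest form: because these toy components are
# isotropic, the optimal gate reduces to K-1 precomputed thresholds on the
# patch energy, i.e. a single cheap feed-forward computation.
thresholds = (D * np.log(scales[1:] / scales[:-1])
              / (1.0 / scales[:-1] - 1.0 / scales[1:]))

def fast_gating(x):
    """One feed-forward pass: compare patch energy to the precomputed thresholds."""
    return int(np.searchsorted(thresholds, x @ x))

x = np.ones(D) * 2.0                     # deterministic test patch, ||x||^2 = 256
assert fast_gating(x) == exact_gating(x) == 3
```

Here the fast gate is exact because the components differ only in scale; for the full-covariance GMMs used as natural image priors, the paper replaces the threshold rule with a learned discriminative classifier that approximates the exact gating.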

Original language: American English
Pages (from-to): 2683-2691
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2015-January
State: Published - 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: 7 Dec 2015 - 12 Dec 2015

Bibliographical note

Funding Information:
Support by the ISF, Intel ICRI-CI and the Gatsby Foundation is gratefully acknowledged.

