Several methods have recently been proposed for Unsupervised Domain Mapping, the task of translating images between domains without prior knowledge of correspondences. Current approaches suffer from training instability because they rely on GANs, which are powerful but highly sensitive to hyper-parameters and prone to mode collapse. In addition, most methods rely heavily on "cycle" relationships between the domains, which enforce a one-to-one mapping. In this work, we introduce an alternative method: NAM. NAM relies on a pre-trained generative model of the source domain and aligns each target image with an image sampled from the source distribution while jointly optimizing the domain mapping function. Experiments are presented validating the effectiveness of our method.
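The core idea, joint optimization of a per-target latent code and a shared mapping function against a fixed pre-trained source generator, can be sketched in a toy setting. This is a hypothetical illustration, not the paper's implementation: a fixed linear map stands in for the pre-trained deep generator, a single matrix stands in for the mapping network, and all names (`A`, `W`, `Z`, `total_loss`) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained" source generator: a fixed linear map G(z) = A @ z.
# (Toy assumption; NAM uses a deep generative model of the source domain.)
d_z, d_y = 3, 5
A = rng.normal(size=(d_y, d_z)) / np.sqrt(d_z)

# Unknown ground-truth domain mapping, used here only to synthesize targets.
B_true = rng.normal(size=(d_y, d_y)) / np.sqrt(d_y)
targets = [B_true @ (A @ rng.normal(size=d_z)) for _ in range(16)]

# NAM-style alignment: one latent code z_i per target image plus one shared
# mapping W, jointly optimized by gradient descent on
#   sum_i || W @ G(z_i) - y_i ||^2.
Z = [rng.normal(size=d_z) for _ in targets]
W = np.eye(d_y)

def total_loss():
    return sum(float(np.sum((W @ (A @ z) - y) ** 2)) for z, y in zip(Z, targets))

initial_loss = total_loss()
lr = 0.05
for _ in range(3000):
    for i, y in enumerate(targets):
        x = A @ Z[i]                                # image sampled from the source model
        r = W @ x - y                               # alignment residual for this target
        Z[i] = Z[i] - lr * (A.T @ (W.T @ r))        # refine this target's latent code
        W = W - lr * np.outer(r, x) / len(targets)  # refine the shared mapping

final_loss = total_loss()
print(initial_loss, final_loss)
```

Note that the generator `A` stays frozen throughout: only the latent codes and the mapping are updated, which is what lets the method avoid adversarial training of the source model.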
|Original language||American English|
|State||Published - 2018|
|Event||6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada|
Duration: 30 Apr 2018 → 3 May 2018
|Bibliographical note||Publisher Copyright: © 6th International Conference on Learning Representations, ICLR 2018 - Workshop Track Proceedings. All rights reserved.|