COCO-FUNIT: Few-Shot Unsupervised Image Translation with a Content Conditioned Style Encoder
Kuniaki Saito, Kate Saenko, Ming-Yu Liu
Abstract
Unsupervised image-to-image translation aims to learn a mapping from an image in a given domain to an analogous image in a different domain, without explicit supervision of the mapping. Few-shot unsupervised image-to-image translation further attempts to generalize the model to an unseen domain by leveraging example images of the unseen domain provided at inference time. While remarkably successful, existing few-shot image-to-image translation models struggle to preserve the structure of the input image while emulating the appearance of the unseen domain, which we refer to as the "content loss" problem. This is particularly severe when the poses of the objects in the input and example images are very different. To address the issue, we propose a new few-shot image translation model, COCO-FUNIT, which computes the style embedding of the example images conditioned on the input image, together with a new module called the constant style bias. Through extensive experimental validation and comparison to the state of the art, we show that our model is effective in addressing the content loss problem. Code and pretrained models are available at https://nvlabs.github.io/COCO-FUNIT/.
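To make the key idea concrete, below is a minimal PyTorch sketch of a style encoder whose output depends on both the example (style) image and the input (content) image, mixed with a learned constant style bias. This is an illustration of the mechanism described in the abstract, not the authors' actual implementation; all module and parameter names (e.g., ContentConditionedStyleEncoder, fuse, style_bias, gate) are hypothetical.

```python
import torch
import torch.nn as nn


class ContentConditionedStyleEncoder(nn.Module):
    """Sketch: a style code computed from BOTH the style (example) image
    and the content (input) image, blended with a learned, input-independent
    'constant style bias'. Illustrative only; not the released COCO-FUNIT code."""

    def __init__(self, dim: int = 64, style_dim: int = 256):
        super().__init__()
        # Separate convolutional encoders for the content and style images.
        self.content_enc = nn.Sequential(
            nn.Conv2d(3, dim, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.style_enc = nn.Sequential(
            nn.Conv2d(3, dim, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fuse the two features so the style code is conditioned on the content image.
        self.fuse = nn.Linear(dim * 4, style_dim)
        # Learned constant style bias: the same vector for every input,
        # acting as a pose-invariant anchor for the style code.
        self.style_bias = nn.Parameter(torch.zeros(1, style_dim))
        # Per-channel mixing weights between the image-dependent code and the bias.
        self.gate = nn.Linear(style_dim, style_dim)

    def forward(self, content_img: torch.Tensor, style_img: torch.Tensor) -> torch.Tensor:
        c = self.content_enc(content_img).flatten(1)   # (B, 2*dim)
        s = self.style_enc(style_img).flatten(1)       # (B, 2*dim)
        z = self.fuse(torch.cat([c, s], dim=1))        # content-conditioned style code
        alpha = torch.sigmoid(self.gate(z))            # mixing weights in [0, 1]
        return alpha * z + (1 - alpha) * self.style_bias


# Usage: the resulting style code would condition the decoder (e.g., via AdaIN).
enc = ContentConditionedStyleEncoder()
style_code = enc(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
print(style_code.shape)  # torch.Size([2, 256])
```

The intuition behind conditioning on the content image is that the style code can then ignore pose-specific appearance cues in the example image, while the constant bias supplies a stable, input-independent component, which together reduce the content loss described above.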