EpipolarGAN: Omnidirectional Image Synthesis with Explicit Camera Control

Christopher May, Daniel Aliaga

Abstract


"In recent years, generative networks have achieved high quality results in 3D-aware image synthesis. However, most prior approaches focus on outside-in generation of a single object or face, as opposed to full inside-looking-out scenes. Those that do generate scenes typically require depth/pose information, or do not provide camera positioning control. We introduce EpipolarGAN, an omnidirectional Generative Adversarial Network for interior scene synthesis that does not need depth information, yet allows for direct control over the camera viewpoint. Rather than conditioning on an input position, we directly resample the input features to simulate a change of perspective. To reinforce consistency between viewpoints, we introduce an epipolar loss term that employs feature matching along epipolar arcs in the feature-rich intermediate layers of the network. We validate our results with comparisons to recent methods, and we formulate a generative reconstruction metric to evaluate multi-view consistency."
