FreeCompose: Generic Zero-Shot Image Composition with Diffusion Prior

Zhekai Chen, Wen Wang, Zhen Yang, Zeqing Yuan, Hao Chen*, Chunhua Shen*

Abstract


"[width=0.985]assets/teaser.pdf Figure 1: harnesses the generative prior of pre-trained diffusion models to achieve versatile image composition, such as appearance editing (image harmonization) and semantic editing (semantic image composition). Furthermore, it can be extended to various downstream applications, including object removal and multi-character customization. We offer a novel approach to image composition, which integrates multiple input images into a single, coherent image. Rather than concentrating on specific use cases such as appearance editing (image harmonization) or semantic editing (semantic image composition), we showcase the potential of utilizing the powerful generative prior inherent in large-scale pre-trained diffusion models to accomplish generic image composition applicable to both scenarios. We observe that the pre-trained diffusion models automatically identify simple copy-paste boundary areas as low-density regions during denoising. Building on this insight, we propose to optimize the composed image towards high-density regions guided by the diffusion prior. In addition, we introduce a novel mask-guided loss to further enable flexible semantic image composition. Extensive experiments validate the superiority of our approach in achieving generic zero-shot image composition. Additionally, our approach shows promising potential in various tasks, such as object removal and multi-concept customization. Project webpage: https://github.com/aim-uofa/FreeCompose"
