EditShield: Protecting Unauthorized Image Editing by Instruction-guided Diffusion Models
Ruoxi Chen, Haibo Jin, Yixin Liu, Jinyin Chen*, Haohan Wang, Lichao Sun
Abstract
Text-to-image diffusion models have emerged as a powerful tool for producing creative content in image synthesis. Building on the impressive generation abilities of these models, instruction-guided diffusion models can edit images with simple instructions and input images. While they empower users to obtain their desired edited images with ease, they have raised concerns about unauthorized image manipulation. Prior research has delved into the unauthorized use of personalized diffusion models; however, the same problem for instruction-guided diffusion models remains largely unexplored. In this paper, we first propose EditShield, a protection method against unauthorized modifications by such models. Specifically, EditShield works by adding imperceptible perturbations that shift the latent representation used in the diffusion process, tricking models into generating unrealistic images with mismatched subjects. Our extensive experiments demonstrate EditShield's effectiveness across synthetic and real-world datasets. Besides, we find that EditShield performs robustly against various manipulation settings across editing types and synonymous instruction phrases.
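The core mechanism described above can be sketched as a simple projected-gradient optimization. This is a hedged illustration under assumptions, not the authors' released implementation: a toy linear map `E` stands in for the diffusion model's latent encoder, and the perturbation `delta` is optimized to shift the latent representation while staying inside an L-infinity imperceptibility budget.

```python
import numpy as np

# Hedged sketch of EditShield-style protection (illustrative, not the paper's code):
# add a small perturbation delta to image x so that the latent representation the
# editing model works from is shifted, while keeping delta within an L-inf budget.
# E is a hypothetical stand-in for the model's latent encoder.

rng = np.random.default_rng(0)
E = rng.normal(size=(8, 16))        # toy "latent encoder" (8-dim latent)
x = rng.normal(size=16)             # flattened toy "image"

eps = 0.05                          # imperceptibility budget (L-inf)
alpha = 0.01                        # PGD step size
# small random start: at delta = 0 the gradient of the shift objective is zero
delta = rng.uniform(-eps, eps, size=x.shape)

z_clean = E @ x
initial_shift = float(np.linalg.norm(E @ (x + delta) - z_clean))

for _ in range(50):
    # for a linear encoder, grad of ||E(x+delta) - E(x)||^2 w.r.t. delta
    # is 2 * E^T (E(x+delta) - E x)
    grad = 2.0 * E.T @ (E @ (x + delta) - z_clean)
    # signed-gradient ascent step, then project back into the L-inf ball
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)

latent_shift = float(np.linalg.norm(E @ (x + delta) - z_clean))
budget_ok = bool(np.max(np.abs(delta)) <= eps + 1e-12)
```

Because the objective is convex in `delta` and each signed step moves every coordinate with the gradient, the latent shift grows monotonically while the perturbation stays imperceptibly small; against a real diffusion model, the encoder would be the model's VAE and gradients would come from automatic differentiation.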