IMMA: Immunizing text-to-image Models against Malicious Adaptation

Amber Yijia Zheng*, Raymond A. Yeh

Abstract


"Advancements in open-sourced text-to-image models and fine-tuning methods have led to the increasing risk of malicious adaptation, , fine-tuning to generate harmful/unauthorized content. Recent works, , Glaze or MIST, have developed data-poisoning techniques which protect the data against adaptation methods. In this work, we consider an alternative paradigm for protection. We propose to “immunize” the model by learning model parameters that are difficult for the adaptation methods when fine-tuning malicious content; in short IMMA. Specifically, IMMA should be applied before the release of the model weights to mitigate these risks. Empirical results show IMMA’s effectiveness against malicious adaptations, including mimicking the artistic style and learning of inappropriate/unauthorized content, over three adaptation methods: LoRA, Textual-Inversion, and DreamBooth. The code is available at https://github. com/amberyzheng/IMMA."
