CLIP-Guided Generative Networks for Transferable Targeted Adversarial Attacks

Hao Fang, Jiawei Kong, Bin Chen*, Tao Dai, Hao Wu, Shu-Tao Xia

Abstract


"Transferable targeted adversarial attacks aim to mislead models into outputting adversary-specified predictions in black-box scenarios. Recent studies have introduced single-target attacks that train a generator for each target class to generate highly transferable perturbations, resulting in substantial computational overhead when handling multiple classes. Multi-target attacks address this by training only one class-conditional generator for multiple classes. However, the generator simply uses class labels as conditions, failing to leverage the rich semantic information of the target class. To this end, we design a CLIP-guided Generative Network with Cross-attention modules (CGNC) to enhance multi-target attacks by incorporating textual knowledge of CLIP into the generator. Extensive experiments demonstrate that CGNC yields significant improvements over previous multi-target attacks, e.g., a 21.46% improvement in success rate from Res-152 to DenseNet-121. Moreover, we propose the masked fine-tuning to further strengthen our method in attacking a single class, which surpasses existing single-target methods."
