Improving Adversarial Transferability via Model Alignment

Avery Ma*, Amir-massoud Farahmand, Yangchen Pan, Philip Torr, Jindong Gu

Abstract


"Neural networks are susceptible to adversarial perturbations that are transferable across different models. In this paper, we introduce a novel model alignment technique aimed at improving a given source model’s ability in generating transferable adversarial perturbations. During the alignment process, the parameters of the source model are fine-tuned to minimize an alignment loss. This loss measures the divergence in the predictions between the source model and another, independently trained model, referred to as the witness model. To understand the effect of model alignment, we conduct a geometric analysis of the resulting changes in the loss landscape. Extensive experiments on the ImageNet dataset, using a variety of model architectures, demonstrate that perturbations generated from aligned source models exhibit significantly higher transferability than those from the original source model. Our source code is available at https://github.com/averyma/model-alignment."
