Robustness Preserving Fine-tuning using Neuron Importance

Guangrui Li, Rahul Duggal*, Aaditya Singh, Kaustav Kundu, Bing Shuai, Jonathan Wu

Abstract


Robust fine-tuning aims to adapt a vision-language model to downstream tasks while preserving its zero-shot capabilities on unseen data. Recent studies have introduced fine-tuning strategies that improve in-distribution (ID) performance on the downstream tasks while minimizing deterioration in out-of-distribution (OOD) performance on unseen data. This balance is achieved either by aligning the fine-tuned representations with the pre-trained ones or by constraining significant deviations of the fine-tuned weights from the pre-trained model. In the latter approach, the regularization term is applied uniformly to all parameters. Our work instead applies the regularization term selectively, based on the importance of each neuron to the fine-tuning dataset. To this end, we develop an importance-score metric that quantifies each neuron's importance to the downstream task, which we then leverage in two fine-tuning strategies: importance-guided selective fine-tuning and importance-guided regularization. Our approach can be used concurrently with representation-space-based methods, and it outperforms other parameter-space approaches. We achieve improvements over the state-of-the-art on standard robust fine-tuning benchmarks across various datasets, in both full-shot and low-shot settings.
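To make the idea of importance-guided regularization concrete, here is a minimal sketch. The paper's actual importance-score metric is not reproduced here; as a stand-in assumption, we use mean absolute gradient per neuron (a common saliency proxy), and we assume the regularizer anchors unimportant neurons to their pre-trained values while letting important neurons move more freely. All function names are illustrative, not the authors'.

```python
import numpy as np

def neuron_importance(grads):
    """Stand-in importance score: mean |gradient| per neuron over a batch.

    grads: array of shape (batch_size, n_neurons); higher score means the
    neuron matters more to the downstream task.
    """
    return np.abs(grads).mean(axis=0)

def importance_regularizer(w, w_pretrained, importance, strength=0.1):
    """Penalize deviation from pre-trained weights, scaled per neuron.

    Neurons deemed important for the downstream task receive a small
    penalty (free to adapt); unimportant neurons receive a large penalty
    (anchored to the pre-trained model, preserving zero-shot behavior).
    """
    inv = 1.0 / (importance + 1e-8)      # invert: important -> small penalty
    inv = inv / inv.sum() * inv.size     # normalize scales to mean 1
    return strength * np.sum(inv * (w - w_pretrained) ** 2)
```

In training, this penalty would simply be added to the task loss, replacing the uniform weight-space regularizer that the abstract contrasts against.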

Related Material


[pdf] [supplementary material] [DOI]