UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening

Siyuan Cheng*, Guangyu Shen, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Hanxi Guo, Shiqing Ma, Xiangyu Zhang

Abstract


"Deep neural networks (DNNs) have demonstrated effectiveness in various fields. However, DNNs are vulnerable to backdoor attacks, which inject a unique pattern, called trigger, into the input to cause misclassification to an attack-chosen target label. While existing works have proposed various methods to mitigate backdoor effects in poisoned models, they tend to be less effective against recent advanced attacks. In this paper, we introduce a novel post-training defense technique that can effectively eliminate backdoor effects for a variety of attacks. In specific, approximates a unique and tight activation distribution for each neuron in the model. It then proactively dispels substantially large activation values that exceed the approximated boundaries. Our experimental results demonstrate that outperforms 7 popular defense methods against 14 existing backdoor attacks, including 2 advanced attacks, using only 5% of clean training data. is also cost efficient. The code is accessible at https://github.com/Megum1/UNIT."
