Towards Robust Neural Networks via Random Self-ensemble
Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh; The European Conference on Computer Vision (ECCV), 2018, pp. 369-385
Abstract
Recent studies have revealed the vulnerability of deep neural networks: a small adversarial perturbation that is imperceptible to humans can easily make a well-trained deep neural network misclassify. This makes it unsafe to apply neural networks in security-critical applications. In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) by combining two important concepts: randomness and ensemble. To protect a targeted model, RSE adds random noise layers to the neural network to prevent strong gradient-based attacks, and ensembles the prediction over random noises to stabilize the performance. We show that our algorithm is equivalent to ensembling an infinite number of noisy models $f_\epsilon$ without any additional memory overhead, and that the proposed training procedure, based on noisy stochastic gradient descent, ensures the ensemble model has good predictive capability. Our algorithm significantly outperforms previous defense techniques on real datasets. For instance, on CIFAR-10 with a VGG network (which has 92% accuracy without any attack), under the strong C&W attack within a certain distortion tolerance, the accuracy of the unprotected model drops to less than 10% and the best previous defense technique achieves 48% accuracy, while our method still attains 86% prediction accuracy under the same level of attack. Finally, our method is simple and easy to integrate into any neural network.
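To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of noise layers that perturb activations at both training and test time, plus test-time averaging of predictions over several independent noise draws. The class NoiseLayer, the helper rse_predict, the toy architecture, and the noise magnitudes are illustrative assumptions for this sketch, not the authors' reference implementation.

import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    """Adds i.i.d. Gaussian noise to its input at both training and test time."""
    def __init__(self, sigma):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        # A fresh noise draw on every forward pass; this is what makes
        # each prediction a sample from a noisy model f_epsilon.
        return x + self.sigma * torch.randn_like(x)

# Illustrative model: noise layers inserted in front of conv blocks
# (the sigma values here are arbitrary placeholders).
model = nn.Sequential(
    NoiseLayer(sigma=0.2),
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    NoiseLayer(sigma=0.1),
    nn.Conv2d(64, 64, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),
)

@torch.no_grad()
def rse_predict(model, x, n_ensemble=10):
    """Average softmax outputs over several independent noise draws."""
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_ensemble)])
    return probs.mean(dim=0).argmax(dim=-1)

Because the noise layers also stay active during training, ordinary SGD on this model amounts to training the whole noisy ensemble at once; at test time, a larger n_ensemble trades inference cost for more stable predictions.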
Related Material
[pdf] [bibtex]
@InProceedings{Liu_2018_ECCV,
author = {Liu, Xuanqing and Cheng, Minhao and Zhang, Huan and Hsieh, Cho-Jui},
title = {Towards Robust Neural Networks via Random Self-ensemble},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}