Resilience of Entropy Model in Distributed Neural Networks
Milin Zhang*, Mohammad Abdi, Shahriar Rifat, Francesco Restuccia
Abstract
"Distributed have emerged as a key technique to reduce communication overhead without sacrificing performance in edge computing systems. Recently, entropy coding has been introduced to further reduce the communication overhead. The key idea is to train the distributed jointly with an entropy model, which is used as side information during inference time to adaptively encode latent representations into bit streams with variable length. To the best of our knowledge, the resilience of entropy models is yet to be investigated. As such, in this paper we formulate and investigate the resilience of entropy models to intentional interference (, adversarial attacks) and unintentional interference (, weather changes and motion blur). Through an extensive experimental campaign with 3 different architectures, 2 entropy models and 4 rate-distortion trade-off factors, we demonstrate that the entropy attacks can increase the communication overhead by up to 95%. By separating compression features in frequency and spatial domain, we propose a new defense mechanism that can reduce the transmission overhead of the attacked input by about 9% compared to unperturbed data, with only about 2% accuracy loss. Importantly, the proposed defense mechanism is a standalone approach which can be applied in conjunction with approaches such as adversarial training to further improve robustness. Code is available at https://github.com/Restuccia-Group/EntropyR."
Related Material
[pdf]
[DOI]