BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation
Zekai Xu, Kang You, Qinghai Guo, Xiang Wang, Zhezhi He*
Abstract
"Spiking neural networks (SNNs), which mimic biological neural systems to convey information via discrete spikes, are well-known as brain-inspired models with excellent computing efficiency. By utilizing the surrogate gradient estimation for discrete spikes, learning-based SNN training methods that can achieve ultra-low inference latency (, number of time-step) have emerged recently. Nevertheless, due to the difficulty of deriving precise gradient for discrete spikes in learning-based methods, a distinct accuracy gap persists between SNNs and their artificial neural networks (ANNs) counterparts. To address the aforementioned issue, we propose a blurred knowledge distillation (BKD) technique, which leverages randomly blurred SNN features to restore and imitate the ANN features. Note that, our BKD is applied upon the feature map right before the last layer of SNNs, which can also mix with prior logits-based knowledge distillation for maximal accuracy boost. In the category of learning-based methods, our work achieves state-of-the-art performance for training SNNs on both static and neuromorphic datasets. On the ImageNet dataset, BKDSNN outperforms prior best results by 4.51% and 0.93% with the network topology of CNN and Transformer, respectively."