MTKD: Multi-Teacher Knowledge Distillation for Image Super-Resolution
Yuxuan Jiang*, Chen Feng, Fan Zhang, David Bull
Abstract
"Knowledge distillation (KD) has emerged as a promising technique in deep learning, typically employed to enhance a compact student network through learning from their high-performance but more complex teacher variant. When applied in the context of image super-resolution, most KD approaches are modified versions of methods developed for other computer vision tasks, which are based on training strategies with a single teacher and simple loss functions. In this paper, we propose a novel Multi-Teacher Knowledge Distillation (MTKD) framework specifically for image super-resolution. It exploits the advantages of multiple teachers by combining and enhancing the outputs of these teacher models, which then guides the learning process of the compact student network. To achieve more effective learning performance, we have also developed a new wavelet-based loss function for MTKD, which can better optimize the training process by observing differences in both the spatial and frequency domains. We fully evaluate the effectiveness of the proposed method by comparing it to five commonly used KD methods for image super-resolution based on three popular network architectures. The results show that the proposed MTKD method achieves evident improvements in super-resolution performance, up to 0.46 dB (based on PSNR), over state-of-the-art KD approaches across different network structures. The source code of MTKD will be made available https: //github.com/YuxuanJJ/MTKD for public evaluation."
Related Material
[pdf]
[DOI]
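The abstract describes two main ingredients: aggregating the outputs of several teacher super-resolution models into a single target for the student, and a wavelet-based loss that compares student and target in both the spatial and frequency domains. The following is a minimal, hypothetical sketch of those ideas in PyTorch; the function names (`haar_dwt2`, `mtkd_loss`), the plain averaging of teacher outputs, and the loss weights are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Hedged sketch of a multi-teacher, wavelet-based distillation loss.
# Assumptions (not from the paper): teachers are aggregated by simple averaging,
# a single-level Haar DWT is used for the frequency-domain term, and both terms
# use L1 distance with weights alpha/beta.
import torch
import torch.nn.functional as F


def haar_dwt2(x: torch.Tensor):
    """Single-level 2D Haar DWT; x has shape (B, C, H, W) with even H and W."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    lh = (-a - b + c + d) / 2  # horizontal detail
    hl = (-a + b - c + d) / 2  # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh


def mtkd_loss(student_sr, teacher_srs, alpha=1.0, beta=1.0):
    """Spatial + wavelet-domain L1 loss against an aggregated multi-teacher target.

    student_sr:  (B, C, H, W) student super-resolved output
    teacher_srs: list of (B, C, H, W) teacher super-resolved outputs
    """
    # Placeholder aggregation of teacher outputs (the paper combines and
    # enhances them; a plain average is used here only for illustration).
    target = torch.stack(teacher_srs, dim=0).mean(dim=0).detach()

    # Spatial-domain difference.
    spatial = F.l1_loss(student_sr, target)

    # Frequency-domain difference summed over all Haar subbands.
    freq = sum(F.l1_loss(s, t)
               for s, t in zip(haar_dwt2(student_sr), haar_dwt2(target)))

    return alpha * spatial + beta * freq


if __name__ == "__main__":
    student = torch.rand(1, 3, 64, 64, requires_grad=True)
    teachers = [torch.rand(1, 3, 64, 64) for _ in range(3)]
    loss = mtkd_loss(student, teachers)
    loss.backward()
    print(float(loss))
```

The design intent, as suggested by the abstract, is that the spatial term preserves overall fidelity while the subband term penalizes missing high-frequency detail, which PSNR-oriented pixel losses alone tend to under-weight.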