Learning Compression from Limited Unlabeled Data
Xiangyu He, Jian Cheng; The European Conference on Computer Vision (ECCV), 2018, pp. 752-769
Abstract
Convolutional neural networks (CNNs) have dramatically advanced the state of the art in a number of domains. However, most models are both computation and memory intensive, which has spurred interest in network compression. While existing compression methods achieve good performance, they suffer from three limitations: 1) the inevitable retraining with enormous amounts of labeled data; 2) the massive GPU hours required for retraining; 3) the training tricks needed for model compression. In particular, the requirement of retraining on the original dataset makes these methods difficult to apply in many real-world scenarios where the training data are not publicly available. In this paper, we reveal that re-normalization is a practical and effective way to alleviate the above limitations. Through quantization or pruning, most methods compress a large number of parameters but overlook a core cause of performance degradation: the Gaussian conjugate prior induced by batch normalization. By employing re-estimated statistics in batch normalization, we significantly improve the accuracy of compressed CNNs. Extensive experiments on ImageNet show that our method outperforms the baselines by a large margin and is comparable to label-based methods. Moreover, the fine-tuning process takes less than 5 minutes on a CPU, using 1000 unlabeled images.
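The re-normalization idea can be illustrated with a minimal sketch. The snippet below is not the authors' code; it is a hypothetical PyTorch-style helper (the name reestimate_bn_stats and the assumption that the loader yields image batches without labels are illustrative) that resets the batch-normalization running statistics of an already compressed model and re-estimates them with forward passes over a small set of unlabeled images, with no labels and no backpropagation.

```python
import torch
import torch.nn as nn

def reestimate_bn_stats(model: nn.Module, unlabeled_loader, device="cpu"):
    """Re-estimate BatchNorm running statistics of a compressed model
    using a small batch of unlabeled images (no labels, no backprop).
    Sketch only; the loader is assumed to yield image tensors."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()   # drop pre-compression statistics
            m.momentum = None         # cumulative moving average over all batches
    model.train()                     # BN updates running stats only in train mode
    with torch.no_grad():             # no gradients are needed
        for images in unlabeled_loader:
            model(images.to(device))
    model.eval()
    return model
```

Under the paper's setting, a loader over roughly 1000 unlabeled images would suffice, which is why the procedure runs in minutes on a CPU.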
Related Material
[pdf] [bibtex]
@InProceedings{He_2018_ECCV,
author = {He, Xiangyu and Cheng, Jian},
title = {Learning Compression from Limited Unlabeled Data},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}