DεpS: Delayed ε-Shrinking for Faster Once-For-All Training

Aditya Annavajjala*, Alind Khare*, Animesh Agrawal, Igor Fedorov, Hugo M Latapie, Myungjin Lee, Alexey Tumanov

Abstract


"CNNs are increasingly deployed across different hardware, dynamic environments, and low-power embedded devices. This has led to the design and training of CNN architectures with the goal of maximizing accuracy subject to such variable deployment constraints. As the number of deployment scenarios grows, there is a need to find scalable solutions to design and train specialized CNNs. Once-for-all training has emerged as a scalable approach that jointly co-trains many models (subnets) at once with a constant training cost and finds specialized CNNs later. The scalability is achieved by training the full model and simultaneously reducing it to smaller subnets that share model weights (weight-shared shrinking). However, existing once-for-all training approaches incur huge training costs reaching 1200 GPU hours. We argue this is because they either start the process of shrinking the full model too early or too late. Hence, we propose Delayed ϵ-Shrinking () that starts the process of shrinking the full model when it is partially trained (∼ 50%), which leads to training cost improvement and better in-place knowledge distillation to smaller models. The proposed approach also consists of novel heuristics that dynamically adjust subnet learning rates incrementally (ϵ), leading to improved weight-shared knowledge distillation from larger to smaller subnets as well. As a result, outperforms state-of-the-art once-for-all training techniques across different datasets including CIFAR10/100, ImageNet-100, and ImageNet-1k on accuracy and cost. It achieves 1.83% higher ImageNet-1k top1 accuracy or the same accuracy with 1.3x reduction in FLOPs and 2.5x drop in training cost (GPU*hrs). Code is released at https://github.com/gatech-sysml/deps."
