Dynamic Data Selection for Efficient SSL via Coarse-to-Fine Refinement

Aditay Tripathi*, Pradeep Shenoy, Anirban Chakraborty

Abstract


Self-supervised learning (SSL) is critical for learning high-quality representations from unlabeled images at scale. Earlier efforts at reducing the compute requirements of SSL have focused on identifying static subsets of training data that are sufficient for training; in addition, these methods require small amounts of labeled data for scoring instances. In this work, we design a new family of algorithms that exploits the training dynamics of SSL methods and adjusts the selected subset throughout the training process. Our proposal has two key components: a) a coarse-to-fine refinement schedule for training data, where initial training rounds are performed on larger subsets of data and the selected subset shrinks over the course of training, and b) an unsupervised proxy model that dynamically selects training instances based on their informativeness for the model’s current state. We also use the proxy model to speed up initial learning by aligning the representations of the primary and proxy models with an additional regularization loss. We validate our method on CIFAR100, CIFAR10, Tiny ImageNet, and STL10 and document significant gains in the compute-accuracy tradeoff compared to previous approaches. Notably, we show a 31.6% reduction in computational load (including both training the target model and data subset selection) on Tiny ImageNet with similar classification accuracy.
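
The two components described above can be illustrated with a small sketch. This is not the authors' implementation: the linear shrinkage schedule, the function names, and the use of raw proxy scores as the informativeness signal are all illustrative assumptions; the paper leaves these details to the main text.

```python
import numpy as np

def coarse_to_fine_sizes(n_total, n_rounds, final_frac=0.2):
    """Hypothetical coarse-to-fine schedule: the selected subset
    shrinks (here, linearly) from the full dataset down to
    final_frac * n_total over the training rounds."""
    fracs = np.linspace(1.0, final_frac, n_rounds)
    return [int(round(f * n_total)) for f in fracs]

def select_subset(proxy_scores, k):
    """Dynamic selection step: keep the k instances the proxy
    model currently scores as most informative (higher = more
    informative in this toy convention)."""
    return np.argsort(proxy_scores)[::-1][:k]

# Toy run: 1000 unlabeled instances, 5 training rounds.
sizes = coarse_to_fine_sizes(1000, 5, final_frac=0.2)
rng = np.random.default_rng(0)
proxy_scores = rng.random(1000)       # stand-in for proxy informativeness
subset = select_subset(proxy_scores, sizes[-1])
```

In the actual method, `proxy_scores` would be recomputed as training progresses, so the subset both shrinks and changes composition across rounds.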
