Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation
Marco Mistretta*, Alberto Baldrati, Marco Bertini, Andrew D. Bagdanov
Abstract
"Vision-Language Models (VLMs) demonstrate remarkable zero-shot generalization to unseen tasks, but fall short of the performance of supervised methods in generalizing to downstream tasks with limited data. Prompt learning is emerging as a parameter-efficient method for adapting VLMs, but state-of-the-art approaches require annotated samples. In this paper we propose a novel approach to prompt learning based on unsupervised knowledge distillation from more powerful models. Our approach, which we call (), can be integrated into existing prompt learning techniques and eliminates the need for labeled examples during adaptation. Our experiments on more than ten standard benchmark datasets demonstrate that is very effective at improving generalization of learned prompts for zero-shot domain generalization, zero-shot cross-dataset generalization, and zero-shot base-to-novel class generalization problems. requires no ground-truth labels for adaptation, and moreover we show that even in the absence of any knowledge of training class names () can be used to effectively transfer knowledge. The code is publicly available at https://github.com/miccunifi/KDPL."
Related Material
[pdf]
[supplementary material]
[DOI]