MVP: Multimodality-Guided Visual Pre-training
Longhui Wei, Lingxi Xie, Wengang Zhou, Houqiang Li, Qi Tian
Abstract
"Recently, masked image modeling (MIM) has become a promising direction for visual pre-training. In the context of vision transformers, MIM learns effective visual representation by aligning the token-level features with a pre-defined space (e,g,, BEIT used a d-VAE trained on a large image corpus as the tokenizer). In this paper, we go one step further by introducing guidance from other modalities and validating that such additional knowledge leads to impressive gains for visual pre-training. The proposed approach is named Multimodality-guided Visual Pre-training (MVP), in which we replace the tokenizer with the vision branch of CLIP, a vision-language model pre-trained on 400 million image-text pairs. We demonstrate the effectivenss of MVP by performing standard experiments, i.e., pre-training the ViT models on ImageNet and fine-tuning them on a series of downstream visual recognition tasks. In particular, pre-training ViT-Base/16 for 300 epochs, MVP reports a 52.4% mIOU on ADE-20K, surpassing BEIT (the baseline and previous state-of-the-art) with an impressive margin of 6.8%."