RAVE: Residual Vector Embedding for CLIP-Guided Backlit Image Enhancement
Tatiana Gaintseva*, Martin Benning, Gregory Slabaugh*
Abstract
"In this paper we propose a novel modification of Contrastive Language-Image Pre-Training (CLIP) guidance for the task of backlit image enhancement. Our work builds on the state-of-the-art CLIP-LIT approach, which learns a prompt pair by constraining the text-image similarity between a prompt (negative/positive sample) and a corresponding image (backlit image/well-lit image) in the CLIP embedding space. Learned prompts then guide an image enhancement network. Based on the CLIP-LIT framework, we propose two novel methods for CLIP guidance. First, we show that instead of tuning prompts in the space of text embeddings, it is possible to directly tune their embeddings in the latent space without any loss in quality. This accelerates training and potentially enables the use of additional encoders that do not have a text encoder. Second, we propose a novel approach that does not require any prompt tuning. Instead, based on CLIP embeddings of backlit and well-lit images from training data, we compute the residual vector in the embedding space as a simple difference between the mean embeddings of the well-lit and backlit images. This vector then guides the enhancement network during training, pushing a backlit image towards the space of well-lit images. This approach further dramatically reduces training time, stabilizes training and produces high quality enhanced images without artifacts. Additionally, we show that residual vectors can be interpreted, revealing biases in training data, and thereby enabling potential bias correction. Code is available at https://github.com/Atmyre/RAVE"