Deep Reward Supervisions for Tuning Text-to-Image Diffusion Models

Xiaoshi Wu, Yiming Hao, Manyuan Zhang*, Keqiang Sun, Zhaoyang Huang, Guanglu Song, Yu Liu, Hongsheng Li*

Abstract


"Optimizing a text-to-image diffusion model with a given reward function is an important but underexplored research area. In this study, we propose Deep Reward Tuning (DRTune), an algorithm that directly supervises the final output image of a text-to-image diffusion model and back-propagates through the iterative sampling process to the input noise. We find that training earlier steps in the sampling process is crucial for low-level rewards, and deep supervision can be achieved efficiently and effectively by stopping the gradient of the denoising network input. DRTune is extensively evaluated on various reward models. It consistently outperforms other algorithms, particularly for low-level control signals, where all shallow supervision methods fail. Additionally, we fine-tune Stable Diffusion XL 1.0 (SDXL 1.0) model via DRTune to optimize Human Preference Score v2.1, resulting in the Favorable Diffusion XL 1.0 (FDXL 1.0) model. FDXL 1.0 significantly enhances image quality compared to SDXL 1.0 and reaches comparable quality compared with Midjourney v5.2. 1 1 Authors with all_papers.txt decode_tex_noligatures.sh decode_tex_noligatures.sh~ decode_tex.sh decode_tex.sh~ ECCV_abstracts.csv ECCV_abstracts_good.csv ECCV.csv ECCV.csv~ ECCV_new.csv generate_list.sh generate_list.sh~ generate_overview.sh gen.sh gen.sh~ HOWTO HOWTO~ pdflist pdflist.copied RCS snippet.html contributed equally to this work."
