SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models

Yuwei Guo, Ceyuan Yang*, Anyi Rao, Maneesh Agrawala, Dahua Lin*, Bo Dai*

Abstract


"The development of text-to-video (T2V), i.e., generating videos with a given text prompt, has been significantly advanced in recent years. However, relying solely on text prompts often results in ambiguous frame composition due to spatial uncertainty. The research community thus leverages the dense structure signals, e.g., per-frame depth/edge sequences to enhance controllability, whose collection accordingly increases the burden of inference. In this work, we present to enable flexible structure control with temporally sparse signals, requiring only one or few inputs, as shown in fig:teaser. It incorporates an additional condition encoder to process these sparse signals while leaving the pre-trained T2V model untouched. The proposed approach is compatible with various modalities, including sketches, depth, and RGB images, providing more practical control for video generation and promoting applications such as storyboarding, depth rendering, keyframe animation, and interpolation. Extensive experiments demonstrate the generalization of on both original and personalized T2V generators2 . 2 Project page: https://guoyww.github.io/projects/SparseCtrl"

Related Material


[pdf] [supplementary material] [DOI]