Controllable Contextualized Image Captioning: Directing the Visual Narrative through User-Defined Highlights

Shunqi Mao*, Chaoyi Zhang, Hang Su, Hwanjun Song, Igor Shalyminov, Weidong Cai

Abstract


"(CIC) evolves traditional image captioning into a more complex domain, necessitating the ability for multimodal reasoning. It aims to generate image captions given specific contextual information. This paper further introduces a novel domain of (). Unlike CIC, which solely relies on broad context, accentuates a user-defined highlight, compelling the model to tailor captions that resonate with the highlighted aspects of the context. We present two approaches, Prompting-based Controller () and Recalibration-based Controller (), to generate focused captions. conditions the model generation on highlight by prepending captions with highlight-driven prefixes, whereas tunes the model to selectively recalibrate the encoder embeddings for highlighted tokens. Additionally, we design a GPT-4V empowered evaluator to assess the quality of the controlled captions alongside standard assessment methods. Extensive experimental results demonstrate the efficient and effective controllability of our method, charting a new direction in achieving user-adaptive image captioning. Code is avaliable at https://github.com/ShunqiM/Ctrl-CIC."
