View Selection for 3D Captioning via Diffusion Ranking

Tiange Luo, Justin Johnson, Honglak Lee

Abstract


"Scalable annotation approaches are crucial for constructing extensive 3D-text datasets, facilitating a broader range of applications. However, existing methods sometimes lead to the generation of hallucinated captions, compromising caption quality. This paper explores the issue of hallucination in 3D object captioning, with a focus on Cap3D [?] method, which renders 3D objects into 2D views for captioning using pre-trained models. We pinpoint a major challenge: certain rendered views of 3D objects are atypical, deviating from the training data of standard image captioning models and causing hallucinations. To tackle this, we present DiffuRank, a method that leverages a pre-trained text-to-3D model to assess the alignment between 3D objects and their 2D rendered views, where the view with high alignment closely represent the object’s characteristics. By ranking all rendered views and feeding the top-ranked ones into GPT4-Vision, we enhance the accuracy and detail of captions, enabling the correction of 200k captions in the Cap3D dataset and extending it to 1 million captions across the entire Objaverse dataset and a portion of the Objaverse-XL high-quality subset. Additionally, our dataset includes 20 rendered images per caption, providing both intrinsic and extrinsic camera details, depth data, and masks, resulting in a total of 60 million PNG images. Beyond datasets, we showcase the adaptability of DiffuRank by applying it to pre-trained text-to-image models for a Visual Question Answering task, where it outperforms the CLIP model."
