G3R: Gradient Guided Generalizable Reconstruction

Yun Chen, Jingkang Wang, Ze Yang, Sivabalan Manivasagam, Raquel Urtasun

Abstract


Large-scale 3D scene reconstruction is important for applications such as virtual reality and simulation. Existing neural rendering approaches (e.g., NeRF, 3DGS) achieve realistic reconstructions of large scenes, but they optimize per scene, which is expensive and slow, and they exhibit noticeable artifacts under large view changes due to overfitting. Generalizable approaches, or large reconstruction models, are fast, but they primarily work for small scenes/objects and often produce lower-quality renderings. In this work, we introduce G3R, a generalizable reconstruction approach that can efficiently predict high-quality 3D scene representations for large scenes. We propose to learn a reconstruction network that takes the gradient feedback signals from differentiable rendering and iteratively updates a 3D scene representation, combining the high photorealism of per-scene optimization with the data-driven priors of fast feed-forward prediction methods. Experiments on urban-driving and drone datasets show that G3R generalizes across diverse large scenes, accelerates reconstruction by at least 10× while achieving comparable or better realism than 3DGS, and is more robust to large view changes. Please visit our project page for more results: https://waabi.ai/g3r.
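The core loop the abstract describes (render, compute the rendering gradient with respect to the scene representation, feed that gradient to a network that updates the scene, repeat) can be sketched in miniature. This is a toy illustration, not the authors' implementation: the "renderer" is a per-element product, the "scene" is a flat vector, and the learned reconstruction network is replaced by a plain gradient step, all assumptions for the sake of a runnable example.

```python
import numpy as np

def render(scene, view):
    # Toy differentiable "renderer": element-wise modulation of the
    # scene parameters by a view vector (stand-in assumption).
    return scene * view

def render_grad(scene, view, target):
    # Gradient of the photometric loss 0.5 * ||render - target||^2
    # with respect to the scene parameters.
    return (render(scene, view) - target) * view

def update_network(scene, grad, lr=0.5):
    # Stand-in for the learned reconstruction network: here just a
    # fixed gradient step; in G3R this mapping is learned from data.
    return scene - lr * grad

# Toy setup: recover a small "scene" vector from one observed view.
rng = np.random.default_rng(0)
gt_scene = rng.normal(size=8)
view = np.ones(8)
target = render(gt_scene, view)

scene = np.zeros(8)
for _ in range(16):  # iterative refinement, as in the abstract
    g = render_grad(scene, view, target)
    scene = update_network(scene, g)

err = np.abs(scene - gt_scene).max()
```

With this linear toy renderer each step halves the residual, so 16 iterations drive the reconstruction error far below the initial scale; the point is only to show the shape of the gradient-guided loop, in which a learned network would amortize many optimization steps into a few refinement passes.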
