NeRF-XL: NeRF at Any Scale with Multi-GPU

Ruilong Li*, Sanja Fidler, Angjoo Kanazawa, Francis Williams

Abstract


We present NeRF-XL, a principled method for distributing Neural Radiance Fields (NeRFs) across multiple GPUs, thus enabling the training and rendering of NeRFs with arbitrarily large capacity. We begin by revisiting existing multi-GPU approaches, which decompose large scenes into multiple independently trained NeRFs [?, ?, ?], and identify several fundamental issues with these methods that hinder improvements in reconstruction quality as additional computational resources (GPUs) are used in training. NeRF-XL remedies these issues and enables the training and rendering of NeRFs with an arbitrary number of parameters by simply using more hardware. At the core of our method lies a novel distributed training and rendering formulation, which is mathematically equivalent to the classic single-GPU case and minimizes communication between GPUs. By unlocking NeRFs with arbitrarily large parameter counts, our approach is the first to reveal multi-GPU scaling laws for NeRFs, showing improvements in reconstruction quality with larger parameter counts and speed improvements with more GPUs. We demonstrate the effectiveness of NeRF-XL on a wide variety of datasets, including the largest open-source dataset to date, MatrixCity [?], containing 258K images covering a 25 km² city area. Visit our webpage at https://research.nvidia.com/labs/toronto-ai/nerfxl/ for code and videos.
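The claim that the distributed formulation is mathematically equivalent to single-GPU rendering rests on a standard property of volume rendering: compositing along a ray is associative, so a ray can be split into per-GPU segments, each segment rendered locally, and the partial results combined in depth order. The sketch below illustrates that identity only; the function names (`render_ray`, `render_segment`, `composite_segments`) are illustrative and not from the paper's codebase.

```python
import numpy as np

def render_ray(sigmas, deltas, colors):
    """Single-pass volume rendering: C = sum_i T_i * alpha_i * c_i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # T_i is the transmittance accumulated before sample i.
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return np.sum((T * alphas)[:, None] * colors, axis=0)

def render_segment(sigmas, deltas, colors):
    """Render one ray segment locally (as one GPU would), returning its
    partial color and the transmittance through the whole segment."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    C_seg = np.sum((T * alphas)[:, None] * colors, axis=0)
    T_seg = np.prod(1.0 - alphas)
    return C_seg, T_seg

def composite_segments(segments):
    """Combine per-segment partial results in front-to-back depth order."""
    C, T = np.zeros(3), 1.0
    for C_seg, T_seg in segments:
        C += T * C_seg   # attenuate by transmittance of earlier segments
        T *= T_seg
    return C
```

Because each worker only exchanges a per-ray color and scalar transmittance, inter-GPU communication stays small, which matches the abstract's claim of minimal communication.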
