ArticulatedFusion: Real-time Reconstruction of Motion, Geometry and Segmentation Using a Single Depth Camera
Chao Li, Zheheng Zhao, Xiaohu Guo; The European Conference on Computer Vision (ECCV), 2018, pp. 317-332
Abstract
This paper proposes a real-time dynamic scene reconstruction method capable of reproducing motion, geometry, and segmentation simultaneously given a live depth stream from a single RGB-D camera. Our approach fuses geometry frame by frame and uses a segmentation-enhanced node graph structure to drive the deformation of the geometry in the registration step. A two-level node motion optimization is proposed. The optimization space of node motions and the range of physically plausible deformations are greatly reduced by taking advantage of the articulated motion prior, which is obtained by an efficient node graph segmentation method. Compared to previous fusion-based dynamic scene reconstruction methods, our experiments show robust and improved reconstruction results for tangential and occluded motions.
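To make the node-graph-driven deformation mentioned in the abstract concrete, the sketch below shows the generic warp used in fusion-style pipelines: a surface point is deformed by blending the rigid transforms of its nearest graph nodes with radial weights. This is a minimal illustrative example, not the paper's exact formulation; the function name, the Gaussian weighting, and the parameters `sigma` and `k` are assumptions.

```python
import numpy as np

def warp_point(p, node_positions, node_rotations, node_translations,
               sigma=0.05, k=4):
    """Deform point p by blending the k nearest nodes' rigid transforms.
    (Hypothetical sketch of a node-graph warp field; weighting scheme
    and parameters are assumptions, not the authors' implementation.)"""
    d = np.linalg.norm(node_positions - p, axis=1)
    idx = np.argsort(d)[:k]                       # k nearest graph nodes
    w = np.exp(-d[idx] ** 2 / (2 * sigma ** 2))   # radial influence weights
    w /= w.sum()
    warped = np.zeros(3)
    for wi, i in zip(w, idx):
        # each node applies a local rigid transform about its own position
        warped += wi * (node_rotations[i] @ (p - node_positions[i])
                        + node_positions[i] + node_translations[i])
    return warped

# Toy usage: two nodes with identity rotations and small translations.
nodes = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
R = np.stack([np.eye(3), np.eye(3)])
t = np.array([[0.0, 0.01, 0.0], [0.0, 0.02, 0.0]])
print(warp_point(np.array([0.05, 0.0, 0.0]), nodes, R, t, k=2))
```

In this setting, an articulated motion prior amounts to grouping nodes into near-rigid segments so that nodes within a segment share (or stay close to) one transform, which is how the abstract's segmentation step shrinks the optimization space.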
Related Material
[pdf] [bibtex]
@InProceedings{Li_2018_ECCV,
author = {Li, Chao and Zhao, Zheheng and Guo, Xiaohu},
title = {ArticulatedFusion: Real-time Reconstruction of Motion, Geometry and Segmentation Using a Single Depth Camera},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}