Within the Dynamic Context: Inertia-aware 3D Human Modeling with Pose Sequence
Yutong Chen, Yifan Zhan, Zhihang Zhong*, Wei Wang, Xiao Sun*, Yu Qiao, Yinqiang Zheng
Abstract
Neural rendering techniques have significantly advanced 3D human body modeling. However, previous approaches overlook dynamics induced by factors such as motion inertia, leading to challenges in scenarios where the pose remains static while the appearance changes, such as abrupt stops after spinning. This limitation arises from conditioning on a single pose, which introduces ambiguity in mapping one pose to multiple appearances. In this study, we show that variations in human appearance depend not only on the current frame's pose condition but also on past pose states. We introduce Dyco, a novel method that leverages the delta pose sequence to effectively model temporal appearance variations. To mitigate overfitting to the delta pose sequence, we further propose a localized dynamic context encoder that reduces unnecessary inter-body-part dependencies. To validate the effectiveness of our approach, we collect a novel dataset named I3D-Human, focused on capturing temporal changes in clothing appearance under similar poses. Dyco significantly outperforms baselines on I3D-Human and achieves comparable results on ZJU-MoCap. Furthermore, our inertia-aware 3D human method can, for the first time, simulate appearance changes caused by inertia at different velocities. The code, data, and model are available at our project website: https://ai4sports.opengvlab.com/Dyco.
Related Material
[pdf]
[supplementary material]
[DOI]