3D Hand Pose Estimation in Everyday Egocentric Images

Aditya Prakash, Ruisen Tu, Matthew Chang, Saurabh Gupta

Abstract


"3D hand pose estimation in everyday egocentric images is challenging for several reasons: poor visual signal (occlusion from the object of interaction, low resolution & motion blur), large perspective distortion (hands are close to the camera), and lack of 3D annotations outside of controlled settings. While existing methods often use hand crops as input to focus on fine-grained visual information to deal with poor visual signal, the challenges arising from perspective distortion and lack of 3D annotations in the wild have not been systematically studied. We focus on this gap and explore the impact of different practices, crops as input, incorporating camera information, auxiliary supervision, scaling up datasets. We provide several insights that are applicable to both convolutional and transformer models, leading to better performance. Based on our findings, we also present , a system for 3D hand pose estimation in everyday egocentric images. Zero-shot evaluation on 4 diverse datasets (H2O, , , ) demonstrate the effectiveness of our approach across 2D and 3D metrics, where we beat past methods by 7.4% – 66%. In system level comparisons, achieves the best 3D hand pose on egocentric split, outperforms FrankMocap across all metrics and HaMeR on 3 out of 6 metrics while being 10× smaller and trained on 5× less data."
