Put Myself in Your Shoes: Lifting the Egocentric Perspective from Exocentric Videos
Mi Luo*, Zihui Xue, Alex Dimakis, Kristen Grauman
Abstract
We investigate exocentric-to-egocentric cross-view translation, which aims to generate a first-person (egocentric) view of an actor from a video recording that captures the actor from a third-person (exocentric) perspective. To this end, we propose a generative framework called Exo2Ego that decouples the translation process into two stages: a high-level structure transformation, which explicitly encourages cross-view correspondence between the exocentric and egocentric views, and a diffusion-based pixel-level hallucination, which incorporates a hand layout prior to enhance the fidelity of the generated egocentric view. To pave the way for future advancements in this field, we curate a comprehensive exo-to-ego cross-view translation benchmark focused on hand-object manipulations. It consists of a diverse collection of synchronized ego-exo video pairs from four public datasets: H2O, Aria Pilot, Assembly101, and Ego-Exo4D. Experimental results validate that Exo2Ego delivers photorealistic video results with clear hand-manipulation details and outperforms several baselines in terms of both synthesis quality and generalization to new actions.
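The two-stage decoupling described above can be sketched schematically. The following is a minimal, purely illustrative Python sketch, not the paper's actual architecture: the function names, the downsampling stand-in for the structure network, and the averaging "diffusion" loop are all hypothetical placeholders for the learned components.

```python
import numpy as np

def structure_transform(exo_frame):
    """Stage 1 (hypothetical stand-in): map the exocentric frame to a
    coarse egocentric structure map encoding cross-view correspondence.
    A learned network would predict this; here we simply downsample."""
    return exo_frame[::4, ::4]

def pixel_hallucination(structure_map, hand_layout_prior, steps=4):
    """Stage 2 (hypothetical stand-in): a diffusion-style refinement that
    starts from noise and is conditioned on the structure map and a
    hand layout prior."""
    rng = np.random.default_rng(0)
    ego = rng.standard_normal(structure_map.shape)
    cond = 0.5 * (structure_map + hand_layout_prior)
    for _ in range(steps):
        # Each step pulls the noisy estimate toward the conditioning signal.
        ego = 0.5 * ego + 0.5 * cond
    return ego

def exo2ego(exo_frame, hand_layout_prior):
    """Full pipeline: structure transformation, then pixel hallucination."""
    structure = structure_transform(exo_frame)
    return pixel_hallucination(structure, hand_layout_prior)
```

The key design point the sketch illustrates is the decoupling: stage 2 never sees the exocentric pixels directly, only the egocentric structure map and the hand prior, which is what encourages explicit cross-view correspondence before any appearance is hallucinated.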