VideoMamba: State Space Model for Efficient Video Understanding
Kunchang Li*, Xinhao Li, Yi Wang*, Yinan He, Yali Wang*, Limin Wang*, Yu Qiao*
Abstract
"Addressing the dual challenges of local redundancy and global dependencies in video understanding, this work innovatively adapts the Mamba to the video domain. The proposed overcomes the limitations of existing 3D convolution neural networks (CNNs) and video transformers. Its linear-complexity operator enables efficient long-term modeling, which is crucial for high-resolution long video understanding. Extensive evaluations reveal ’s four core abilities: (1) Scalability in the visual domain without extensive dataset pretraining, thanks to a novel self-distillation technique; (2) Sensitivity for recognizing short-term actions even with fine-grained motion differences; (3) Superiority in long-term video understanding, showcasing significant advancements over traditional feature-based models; and (4) Compatibility with other modalities, demonstrating robustness in multi-modal contexts. Through these advantages, sets a new benchmark, offering a scalable and efficient solution for comprehensive video understanding."