DDGCN: A Dynamic Directed Graph Convolutional Network for Action Recognition
Matthew Korban, Xin Li
Abstract
We propose a Dynamic Directed Graph Convolutional Network (DDGCN) to model spatial and temporal features of human actions from their skeletal representations. The DDGCN consists of three new feature modeling modules: (1) Dynamic Convolutional Sampling (DCS), (2) Dynamic Convolutional Weight (DCW) assignment, and (3) Directed Graph Spatial-Temporal (DGST) feature extraction. Comprehensive experiments show that the DDGCN outperforms existing state-of-the-art action recognition approaches on multiple benchmark datasets. Our source code and model will be released at http://www.ece.lsu.edu/xinli/ActionModeling/index.html
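To make the graph-convolutional setting concrete, below is a minimal, hypothetical sketch of a spatial-temporal graph convolution block over skeleton joints, assuming a PyTorch setup. The directed adjacency, the learnable offset, and all class and parameter names are illustrative assumptions; this is only the generic idea of combining a directed skeleton graph with a learnable adjustment and temporal convolution, not the authors' DCS, DCW, or DGST modules.

```python
# Hypothetical sketch of a directed spatial-temporal graph convolution
# over skeleton data. It does NOT reproduce the DDGCN's DCS/DCW/DGST modules.
import torch
import torch.nn as nn


class DirectedGraphConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, num_joints, a_dir):
        super().__init__()
        # Fixed directed adjacency (e.g., parent -> child bone directions).
        self.register_buffer("a_dir", a_dir)              # (V, V)
        # Learnable adjacency offset: lets the graph adapt during training.
        self.b = nn.Parameter(torch.zeros(num_joints, num_joints))
        # 1x1 convolution transforms joint features before graph aggregation.
        self.theta = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Temporal convolution aggregates each joint's features over frames.
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (N, C, T, V) -- batch, channels, frames, joints.
        a = self.a_dir + self.b                            # adapted directed graph
        y = self.theta(x)                                  # (N, C_out, T, V)
        # Aggregate features from neighboring joints along directed edges.
        y = torch.einsum("nctv,vw->nctw", y, a)
        y = self.temporal(y)                               # temporal modeling
        return self.relu(y)


# Usage on random skeleton data: 2 samples, 3-D joints, 50 frames, 25 joints.
if __name__ == "__main__":
    V = 25
    a_dir = torch.eye(V)                                   # placeholder adjacency
    block = DirectedGraphConvBlock(3, 64, V, a_dir)
    out = block(torch.randn(2, 3, 50, V))
    print(out.shape)                                       # torch.Size([2, 64, 50, 25])
```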