
Detailed record

Combining channel-wise joint attention and temporal attention in graph convolutional networks for skeleton-based action recognition

Article written by: Sun, Zhonghua; Wang, Tianyi; Dai, Meng

Abstract: Graph convolutional networks (GCNs) have proven effective for skeleton-based action recognition, as graph topology naturally represents the connectivity of the human body. Nevertheless, it remains challenging to model the human joints effectively in both space and time, and existing methods lack attention mechanisms for critical temporal frames and important skeletal joints. In this work, we propose a novel GCN that combines channel-wise joint attention and temporal attention for skeleton-based action recognition. Our temporal attention module captures long-term temporal dependencies and enhances the temporal semantics of key frames. In addition, we design a channel-wise attention module that fuses multi-channel joint weights with the topological map to capture the attention of nodes under different actions along the channel dimension. We further propose to concatenate joint and bone features along the channel dimension as a joint & bone (J&B) modality, which extracts hybrid action patterns under the coalition of channel-wise joint attention. We demonstrate the strong spatio-temporal modeling capability of our model on three widely used datasets: NTU RGB+D, NTU RGB+D 120, and Northwestern-UCLA. Compared with leading GCN-based methods, we achieve performance comparable to the state of the art.
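The abstract mentions two ingredients that lend themselves to a brief illustration: the joint & bone (J&B) modality built by concatenating joint coordinates and bone vectors along the channel dimension, and a channel-wise joint attention that re-weights features per channel and per joint. The sketch below is a minimal, hypothetical PyTorch rendering of these ideas; the module name `ChannelWiseJointAttention`, the helper `joint_bone_modality`, the sigmoid gating, and all tensor shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ChannelWiseJointAttention(nn.Module):
    """Hypothetical per-channel, per-joint re-weighting of skeleton features."""

    def __init__(self, channels: int, num_joints: int):
        super().__init__()
        # Learnable attention map over (channel, joint); shape is an assumption.
        self.att = nn.Parameter(torch.zeros(1, channels, 1, num_joints))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, joints)
        # Residual re-weighting: gated features added back to the input.
        return x + x * torch.sigmoid(self.att)


def joint_bone_modality(joints: torch.Tensor, bone_pairs) -> torch.Tensor:
    """Concatenate joint coordinates and bone vectors along the channel dimension.

    joints: (batch, 3, frames, num_joints) xyz coordinates.
    bone_pairs: iterable of (child, parent) joint indices defining bones
                (the skeleton layout is dataset-specific and assumed here).
    """
    bones = torch.zeros_like(joints)
    for child, parent in bone_pairs:
        bones[..., child] = joints[..., child] - joints[..., parent]
    # J&B modality: (batch, 6, frames, num_joints)
    return torch.cat([joints, bones], dim=1)
```

As a usage example, a tensor of shape (batch, 3, frames, 25) for NTU RGB+D skeletons would become (batch, 6, frames, 25) after `joint_bone_modality`, and `ChannelWiseJointAttention(6, 25)` would then re-weight each channel-joint pair before the graph convolution layers.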


Language: English