
Detailed record

Adversarial Multi-view Networks for Activity Recognition

Article written by: Bai, Lei; Kanhere, Salil S.; Yu, Zhiwen; Yao, Lina; Guo, Bin; Wang, Xianzhi

Abstract: Human activity recognition (HAR) plays an irreplaceable role in various applications and has been a prosperous research topic for years. Recent studies show significant progress in feature extraction (i.e., data representation) using deep learning techniques. However, they face significant challenges in capturing multi-modal spatial-temporal patterns from the sensory data, and they commonly overlook the variations between subjects. We propose a Discriminative Adversarial MUlti-view Network (DAMUN) to address the above issues in sensor-based HAR. We first design a multi-view feature extractor to obtain representations of sensory data streams from temporal, spatial, and spatio-temporal views using convolutional networks. Then, we fuse the multi-view representations into a robust joint representation through a trainable Hadamard fusion module, and finally employ a Siamese adversarial network architecture to decrease the variations between the representations of different subjects. We have conducted extensive experiments under an iterative leave-one-subject-out setting on three real-world datasets and demonstrated both the effectiveness and robustness of our approach.
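The abstract's most concrete architectural detail is the trainable Hadamard fusion of the three view representations. The sketch below illustrates one plausible reading of such a module, assuming each view has already been encoded into a fixed-size vector; the per-view linear projections, the dimensionality, and the class name are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a trainable Hadamard-product fusion of multi-view
# representations (hypothetical design; the paper's exact module may differ).
import torch
import torch.nn as nn

class HadamardFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # One learnable projection per view (assumed, not specified in the abstract).
        self.proj_temporal = nn.Linear(dim, dim)
        self.proj_spatial = nn.Linear(dim, dim)
        self.proj_spatiotemporal = nn.Linear(dim, dim)

    def forward(self, h_t, h_s, h_st):
        # Element-wise (Hadamard) product of the projected view representations
        # yields the joint representation.
        return (self.proj_temporal(h_t)
                * self.proj_spatial(h_s)
                * self.proj_spatiotemporal(h_st))

# Usage with toy tensors: three 128-dimensional view representations, batch of 4.
fusion = HadamardFusion(dim=128)
h_t, h_s, h_st = (torch.randn(4, 128) for _ in range(3))
joint = fusion(h_t, h_s, h_st)
print(joint.shape)  # torch.Size([4, 128])
```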


Language: English