
Detailed record

Hardest and semi-hard negative pairs mining for text-based person search with visual–textual attention

Article written by: Ge, Jing; Wang, Qianxiang; Gao, Guangyu

Abstract: Searching for persons in large-scale image databases with a natural-language query is a practical and important application in video surveillance. Intuitively, the core issue for person search is the visual-textual association, which remains an extremely challenging task due to the gap between the high abstraction of textual descriptions and the intuitive expression of visual images. In this paper, aiming for more consistent visual-textual features and better inter-class discriminative ability, we propose a text-based person search approach with visual-textual attention built on hardest and semi-hard negative pair mining. First, for the visual and textual attention, we design a Smoothed Global Maximum Pooling (SGMP) to extract more concentrated visual features, together with a memory attention based on the LSTM cell unit for stricter correspondence matching. Second, since only positive pairs are labeled, more valuable negative pairs are mined by defining cross-modality hardest and semi-hard negative pairs. We then train the whole network by combining a triplet loss on the single modality with the hardest negative pairs and a cross-entropy loss across modalities with both the hardest and semi-hard negative pairs. Finally, to evaluate the effectiveness and feasibility of the proposed approach, we conduct extensive experiments on the typical person search dataset, CUHK-PEDES, on which our approach achieves satisfactory top-1 accuracy. We also evaluate the semi-hard pair mining method on the COCO caption dataset and validate its effectiveness and complementarity.
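The mining strategy summarized above can be illustrated with a short sketch. The following is a minimal example, not the authors' implementation, of cross-modal hardest and semi-hard negative mining combined with a triplet loss, assuming precomputed L2-normalised image and text embeddings and identity labels. The function name, the margin value, and the semi-hard criterion used here (negatives less similar than the positive but within the margin) are illustrative assumptions rather than details taken from the paper.

# Minimal sketch (illustrative only) of cross-modal hardest / semi-hard negative
# mining with a triplet loss over matched image-text embedding pairs.
import torch
import torch.nn.functional as F

def mine_and_triplet_loss(img_emb, txt_emb, labels, margin=0.2):
    """img_emb, txt_emb: (B, D) L2-normalised embeddings; row i of each is a matched pair.
    labels: (B,) identity labels used to exclude same-identity pairs from the negatives.
    Returns a batch-averaged triplet loss on the hardest negatives and a boolean mask
    of semi-hard negatives that could feed an auxiliary cross-entropy term."""
    sim = img_emb @ txt_emb.t()                          # (B, B) cosine similarities
    pos = sim.diag()                                     # similarities of labeled positive pairs
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)
    neg_sim = sim.masked_fill(same_id, float('-inf'))    # mask out same-identity pairs

    # Hardest negatives: most similar non-matching text per image, and vice versa.
    hardest_i2t, _ = neg_sim.max(dim=1)
    hardest_t2i, _ = neg_sim.max(dim=0)

    # Semi-hard negatives (assumed definition): less similar than the positive,
    # but within the margin of it.
    semi_hard_mask = (neg_sim > pos.unsqueeze(1) - margin) & (neg_sim < pos.unsqueeze(1))

    loss_i2t = F.relu(margin + hardest_i2t - pos).mean()
    loss_t2i = F.relu(margin + hardest_t2i - pos).mean()
    return loss_i2t + loss_t2i, semi_hard_mask

if __name__ == "__main__":
    torch.manual_seed(0)
    img = F.normalize(torch.randn(8, 256), dim=1)
    txt = F.normalize(torch.randn(8, 256), dim=1)
    ids = torch.tensor([0, 1, 2, 3, 0, 1, 2, 3])
    loss, semi_hard = mine_and_triplet_loss(img, txt, ids)
    print(loss.item(), semi_hard.sum().item())

In a full pipeline, the hardest negatives would drive the single-modality triplet term while the semi-hard mask selects additional cross-modal pairs for the cross-entropy term; how those pairs are weighted is specific to the paper and not reproduced here.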


Language: English