Detailed record

HCMSL

Hybrid Cross-modal Similarity Learning for Cross-modal Retrieval

Article written by: Zhang, Chengyuan ; Zhang, Shichao ; Zhu, Xiaofeng ; Song, Jiayu ; Zhu, Lei

Abstract: The purpose of cross-modal retrieval is to find the relationship between samples of different modalities and, given a sample of one modality, to retrieve semantically similar samples of other modalities. Because data of different modalities present heterogeneous low-level features and semantically related high-level features, the main problem of cross-modal retrieval is how to measure the similarity between different modalities. In this paper, we present a novel cross-modal retrieval method, named the Hybrid Cross-Modal Similarity Learning model (HCMSL for short). It aims to capture sufficient semantic information from both labeled and unlabeled cross-modal pairs, as well as from intra-modal pairs with the same classification label. Specifically, coupled deep fully-connected networks are used to map cross-modal feature representations into a common subspace. A weight-sharing strategy is applied between the two network branches to diminish cross-modal heterogeneity. Furthermore, two Siamese CNN models are employed to learn intra-modal similarity from samples of the same modality. Comprehensive experiments on real datasets clearly demonstrate that our proposed technique achieves substantial improvements over state-of-the-art cross-modal retrieval techniques.
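The weight-sharing idea in the abstract can be illustrated with a minimal toy sketch in plain Python. All names, dimensions, and values below (`project`, `cosine_similarity`, the matrix `W`, and the feature vectors) are hypothetical and for illustration only; they are not the paper's learned networks. The point is only that one shared projection maps features from both modalities into a common subspace, where cosine similarity becomes comparable across modalities:

```python
import math

# Illustrative sketch (toy values, NOT the paper's trained model): a single
# shared projection W plays the role of the weight-shared branch, mapping
# image features and text features into the same common subspace.

def project(features, W):
    """Map a feature vector into the common subspace via the shared weights W."""
    return [sum(w * f for w, f in zip(row, features)) for row in W]

def cosine_similarity(a, b):
    """Cosine similarity of two vectors in the common subspace."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Shared 2x3 projection, identical for both branches (weight sharing).
W = [[0.5, 0.1, 0.0],
     [0.0, -0.6, 0.7]]

image_feat    = [1.0, 0.0, 2.0]   # e.g. a CNN feature of an image (toy values)
matching_text = [0.9, 0.1, 1.8]   # a text feature with similar semantics
other_text    = [0.0, 2.0, 0.1]   # a semantically unrelated text feature

img = project(image_feat, W)
sim_match = cosine_similarity(img, project(matching_text, W))
sim_other = cosine_similarity(img, project(other_text, W))
assert sim_match > sim_other  # retrieval would rank the matching text first
```

Because both branches apply the same weights, the two modalities land in one space where a single similarity measure suffices; in the actual HCMSL model this role is played by coupled deep fully-connected networks trained end to end.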


Language: English