
Detailed record

SANTM: Efficient Self-attention-driven Network for Text Matching

Article written by: Kumar Jaiswal, Amit; Garg, Sahil; You, Ilsun; Tiwari, Prayag

Abstract: Self-attention mechanisms have recently been embraced for a broad range of text-matching applications. A self-attention model takes only one sentence as input with no extra information, i.e., one can utilize the final hidden state or pooling. However, text-matching problems can be interpreted in either symmetrical or asymmetrical scopes. For instance, paraphrase detection is a symmetrical task, while textual entailment classification and question-answer matching are considered asymmetrical tasks. In this article, we leverage the attractive properties of the self-attention mechanism and propose an attention-based network that incorporates three key components for inter-sequence attention: global pointwise features, preceding attentive features, and contextual features, while updating the rest of the components. We evaluate our model on two benchmark datasets covering the tasks of textual entailment and question-answer matching. The proposed efficient Self-attention-driven Network for Text Matching outperforms the state of the art on the Stanford Natural Language Inference and WikiQA datasets with far fewer parameters.
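
The abstract names three inter-sequence attention components but does not spell out the architecture, so the sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' SANTM implementation. Every name in it (ToySelfAttentionMatcher, the 64-unit dimension, the mean-pooling and concatenation-difference-product matching head) is an illustrative assumption.

```python
# A minimal sketch of a self-attention-driven text matcher, assuming (not
# taken from the paper) that the abstract's three components map to:
#   - global pointwise features   : a per-token feed-forward projection
#   - preceding attentive features: cross-attention over the other sequence
#   - contextual features         : self-attention within the same sequence
import torch
import torch.nn as nn


class ToySelfAttentionMatcher(nn.Module):
    """Hypothetical illustration; not the authors' released model."""

    def __init__(self, vocab_size: int, dim: int = 64, num_classes: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.pointwise = nn.Linear(dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(4 * dim, num_classes)

    def encode(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        e, o = self.embed(x), self.embed(other)
        pointwise = torch.relu(self.pointwise(e))      # global pointwise features
        attentive, _ = self.cross_attn(e, o, o)        # inter-sequence attention
        contextual, _ = self.self_attn(e, e, e)        # intra-sequence context
        return (pointwise + attentive + contextual).mean(dim=1)  # pooled vector

    def forward(self, premise: torch.Tensor, hypothesis: torch.Tensor) -> torch.Tensor:
        p = self.encode(premise, hypothesis)
        h = self.encode(hypothesis, premise)
        # Common matching head: concatenation, absolute difference, product.
        return self.classifier(torch.cat([p, h, (p - h).abs(), p * h], dim=-1))


# Usage: 3-way entailment logits for a batch of two sentence pairs.
model = ToySelfAttentionMatcher(vocab_size=1000)
premise = torch.randint(0, 1000, (2, 7))     # batches of token-id sequences
hypothesis = torch.randint(0, 1000, (2, 5))
print(model(premise, hypothesis).shape)      # torch.Size([2, 3])
```

Encoding each sentence against the other, rather than pooling them interchangeably, reflects the symmetrical/asymmetrical distinction drawn in the abstract: for an asymmetrical task such as entailment, the order of the two encodings is preserved in the matching head.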


Language: English
Decimal index: 621. Applied physics (electrical engineering, civil engineering, mechanical engineering, applied engineering, physical principles in engineering)
Subject: Computer science

Keywords:
Deep learning
Text matching
Attention mechanism


Contents