1 | The "Fat Face" illusion: A robust adaptation for processing pairs of faces
    In: Vision Research, Elsevier, 2022, 195, pp. 108015. ISSN/EISSN: 0042-6989. ⟨10.1016/j.visres.2022.108015⟩. https://hal.archives-ouvertes.fr/hal-03579276
    BASE
3 | SLOGAN: Handwriting Style Synthesis for Arbitrary-Length and Out-of-Vocabulary Text ...
4 | Fostering student engagement with feedback: an integrated approach
5 | O desenho de uma aplicação de MAVL em PLE destinado a aprendentes chineses [The design of a MAVL application for Portuguese as a Foreign Language aimed at Chinese learners]
    In: Entrepalavras, v. 11, n. 11esp (2022): Dicionário, léxico e ensino de línguas [Dictionary, lexicon and language teaching], pp. 313-339
6 | Making Better Use of Bilingual Information for Cross-Lingual AMR Parsing ...
7 | Speaking clearly improves speech segmentation by statistical learning under optimal listening conditions
    In: Laboratory Phonology: Journal of the Association for Laboratory Phonology, Vol. 12, No. 1 (2021), article 14. ISSN 1868-6354
9 | Pushing Paraphrase Away from Original Sentence: A Multi-Round Paraphrase Generation Approach ...
10 | EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets ...
11 | Modeling Endorsement for Multi-Document Abstractive Summarization ...
12 | Context-Aware Interaction Network for Question Matching ...
    Abstract: Impressive milestones have been achieved in text matching by adopting a cross-attention mechanism to capture pertinent semantic connections between two sentence representations. However, regular cross-attention focuses on word-level links between the two input sequences, neglecting the importance of contextual information. We propose a context-aware interaction network (COIN) to properly align two sequences and infer their semantic relationship. Specifically, each interaction block includes (1) a context-aware cross-attention mechanism to effectively integrate contextual information when aligning two sequences, and (2) a gate fusion layer to flexibly interpolate aligned representations. We apply multiple stacked interaction blocks to produce alignments at different levels and gradually refine the attention results. Experiments on two question matching datasets and detailed analyses demonstrate the effectiveness of our model. ...
    Anthology paper link: https://aclanthology.org/2021.emnlp-main.312/
    Keywords: Computational Linguistics; Deep Learning; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
    URL: https://underline.io/lecture/37579-context-aware-interaction-network-for-question-matching
    DOI: https://dx.doi.org/10.48448/7kvt-gs89
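The COIN abstract above describes interaction blocks that combine context-aware cross-attention with a gate-fusion layer. A rough, purely illustrative NumPy sketch of one such block follows; all function names, the mean-pooled "context" stand-in, and the weight shapes are our assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interaction_block(a, b, w_ctx, w_gate):
    """One hypothetical interaction block: align `a` (len_a, d) against `b` (len_b, d).

    `w_ctx` (d, d) mixes contextual information into the attention query;
    `w_gate` (2*d, d) parameterises the gate that interpolates between the
    original and the aligned representation.
    """
    # Context-aware scores: each token of `a` attends over `b`, with the
    # mean of `a` (a crude stand-in for sequence context) added to the query.
    context = a.mean(axis=0, keepdims=True)            # (1, d)
    query = a + context @ w_ctx                        # (len_a, d)
    scores = query @ b.T / np.sqrt(a.shape[1])         # (len_a, len_b)
    aligned = softmax(scores) @ b                      # (len_a, d)
    # Gate fusion: a sigmoid gate decides, per dimension, how much of the
    # original representation to keep versus the aligned one.
    gate = 1 / (1 + np.exp(-np.concatenate([a, aligned], axis=1) @ w_gate))
    return gate * a + (1 - gate) * aligned             # (len_a, d)

rng = np.random.default_rng(0)
d = 8
a, b = rng.normal(size=(5, d)), rng.normal(size=(7, d))
out = interaction_block(a, b, rng.normal(size=(d, d)) * 0.1,
                        rng.normal(size=(2 * d, d)) * 0.1)
print(out.shape)  # (5, 8)
```

Stacking several such blocks, as the abstract suggests, would simply feed each block's output back in as the next block's `a`.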
15 | Weakly Supervised Named Entity Tagging with Learnable Logical Rules ...
16 | Additional file 3 of Could graph neural networks learn better molecular representation for drug discovery? A comparison study of descriptor-based and graph-based models ...
19 | Injecting Semantic Concepts into End-to-End Image Captioning ...
20 | Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance