Search results (source: BASE)

1. MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset
2. From Examples to Rules: Neural Guided Rule Synthesis for Information Extraction
3. Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations
5. LoL: A Comparative Regularization Loss over Query Reformulation Losses for Pseudo-Relevance Feedback
6. Improving Word Translation via Two-Stage Contrastive Learning
7. nigam@COLIEE-22: Legal Case Retrieval and Entailment using Cascading of Lexical and Semantic-based Models
8. Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction
9. LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval
11. Zero-Shot Open Information Extraction using Question Generation and Reading Comprehension
12. Boosting Low-Resource Biomedical QA via Entity-Aware Masking Strategies
14. Personalized Transformer for Explainable Recommendation
15. Improving Authorship Verification using Linguistic Divergence
16. On the Calibration and Uncertainty of Neural Learning to Rank Models
17. A Comparison of Latent Semantic Analysis and Correspondence Analysis of Document-Term Matrices
18. Leveraging Multilingual Transformers for Hate Speech Detection
19. NewsEmbed: Modeling News through Pre-trained Document Representations (accepted at SIGKDD 2021)

Abstract: Effectively modeling text-rich, fresh content such as news articles at the document level is a challenging problem. To ensure that a content-based model generalizes well to a broad range of applications, it is critical to have a training dataset that is large beyond the scale of human labels while achieving the desired quality. In this work, we address these two challenges by proposing a novel approach to mine semantically relevant fresh documents, and their topic labels, with little human supervision. Meanwhile, we design a multitask model called NewsEmbed that alternately trains contrastive learning with multi-label classification to derive a universal document encoder. We show that the proposed approach can provide billions of high-quality organic training examples and can be naturally extended to a multilingual setting where texts in different languages are encoded in the same semantic space. We experimentally demonstrate NewsEmbed's competitive performance across multiple natural language understanding tasks, both ...

Keywords: Computation and Language (cs.CL); Information Retrieval (cs.IR); Machine Learning (cs.LG); FOS: Computer and information sciences

URL: https://arxiv.org/abs/2106.00590 (DOI: https://dx.doi.org/10.48550/arxiv.2106.00590)
20. A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models