1 | MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset ...

2 | From Examples to Rules: Neural Guided Rule Synthesis for Information Extraction ...

3 | Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations ...

5 | LoL: A Comparative Regularization Loss over Query Reformulation Losses for Pseudo-Relevance Feedback ...

6 | Improving Word Translation via Two-Stage Contrastive Learning ...

7 | nigam@COLIEE-22: Legal Case Retrieval and Entailment using Cascading of Lexical and Semantic-based models ...

8 | Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction ...

9 | LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval ...

11 | Zero-Shot Open Information Extraction using Question Generation and Reading Comprehension ...

12 | Boosting Low-Resource Biomedical QA via Entity-Aware Masking Strategies ...

13 | How to Query Language Models? ...

Abstract:
Large pre-trained language models (LMs) can recover not only linguistic but also factual and commonsense knowledge. To access the knowledge stored in mask-based LMs, we can pose cloze-style questions and let the model fill in the blank. This flexibility advantage over structured knowledge bases comes with the drawback of having to find the right query for a given information need. Inspired by how humans disambiguate questions, we propose to query LMs by example. To clarify the ambiguous question "Who does Neuer play for?", a successful strategy is to demonstrate the relation using another subject, e.g., "Ronaldo plays for Portugal. Who does Neuer play for?". We apply this approach of querying by example to the LAMA probe and obtain substantial improvements of up to 37.8% for BERT-large on the T-REx data when providing only 10 demonstrations, even outperforming a baseline that queries the model with up to 40 paraphrases of the question. The examples are provided through the model's context and ...

Keywords:
Computation and Language (cs.CL); FOS: Computer and information sciences; Information Retrieval (cs.IR); Machine Learning (cs.LG)

URL: https://dx.doi.org/10.48550/arxiv.2108.01928
URL: https://arxiv.org/abs/2108.01928

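The query-by-example idea described in the abstract above is easy to try with an off-the-shelf masked LM. Below is a minimal sketch, assuming the HuggingFace transformers fill-mask pipeline; the prompts are illustrative stand-ins, not the paper's exact LAMA/T-REx templates.

```python
# Sketch of "querying by example" (arXiv:2108.01928), assuming the HuggingFace
# `transformers` fill-mask pipeline. Prompts are illustrative, not the paper's
# exact LAMA/T-REx templates.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-large-cased")
mask = fill.tokenizer.mask_token  # "[MASK]" for BERT

# Plain cloze query: the relation "plays for" is ambiguous
# (club vs. national team).
plain = f"Neuer plays for {mask}."

# Query by example: one demonstration with another subject
# signals the intended relation.
demonstrated = f"Ronaldo plays for Portugal. Neuer plays for {mask}."

for prompt in (plain, demonstrated):
    print(prompt)
    for candidate in fill(prompt, top_k=3):
        print(f"  {candidate['token_str']!r}  score={candidate['score']:.3f}")
```

With the demonstration prepended, the fill-in distribution should shift toward country names, mirroring the disambiguation effect the abstract describes; the demonstrations enter purely through the model's context, with no fine-tuning.
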
14 | Personalized Transformer for Explainable Recommendation ...

15 | Improving Authorship Verification using Linguistic Divergence ...

16 | On the Calibration and Uncertainty of Neural Learning to Rank Models ...

17 | A Comparison of Latent Semantic Analysis and Correspondence Analysis of Document-Term Matrices ...

18 | Leveraging Multilingual Transformers for Hate Speech Detection ...

19 | NewsEmbed: Modeling News through Pre-trained Document Representations ...

20 | A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models ...