
Search in the Catalogues and Directories

Hits 1 – 20 of 45

1. Delving Deeper into Cross-lingual Visual Question Answering
2. Cross-Lingual Dialogue Dataset Creation via Outline-Based Generation
3. Improving Word Translation via Two-Stage Contrastive Learning
4. Towards Zero-shot Language Modeling
5. Multilingual and Cross-Lingual Intent Detection from Spoken Data
6. Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems
7. Modelling Latent Translations for Cross-Lingual Transfer
8. Prix-LM: Pretraining for Multilingual Knowledge Base Construction
9. Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking
10. xGQA: Cross-Lingual Visual Question Answering
11. On Cross-Lingual Retrieval with Multilingual Text Encoders
12. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models
13. Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual Retrieval
14. AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples
15. Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders
Abstract: Pretrained Masked Language Models (MLMs) have revolutionised NLP in recent years. However, previous work has indicated that off-the-shelf MLMs are not effective as universal lexical or sentence encoders without further task-specific fine-tuning on NLI, sentence similarity, or paraphrasing tasks using annotated task data. In this work, we demonstrate that it is possible to turn MLMs into effective universal lexical and sentence encoders even without any additional data and without any supervision. We propose an extremely simple, fast and effective contrastive learning technique, termed Mirror-BERT, which converts MLMs (e.g., BERT and RoBERTa) into such encoders in 20-30 seconds without any additional external knowledge. Mirror-BERT relies on fully identical or slightly modified string pairs as positive (i.e., synonymous) fine-tuning examples, and aims to maximise their similarity during identity fine-tuning. We report huge gains over off-the-shelf MLMs with Mirror-BERT in both lexical-level and sentence-level ...
Note: EMNLP 2021 camera-ready version
Keywords: Artificial Intelligence cs.AI; Computation and Language cs.CL; FOS Computer and information sciences; Machine Learning cs.LG
URL: https://arxiv.org/abs/2104.08027
DOI: https://dx.doi.org/10.48550/arxiv.2104.08027
(A minimal code sketch of the identity fine-tuning objective described here appears after the results list.)
16. XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
17. Emergent Communication Pretraining for Few-Shot Machine Translation
18. Orthogonal Language and Task Adapters in Zero-Shot Cross-Lingual Transfer
19. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer
20. How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models
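The Mirror-BERT abstract in record 15 describes contrastive identity fine-tuning: the same string is encoded twice, the two views are pulled together, and the other strings in the batch serve as negatives. The following is a minimal sketch of that objective in PyTorch with the HuggingFace transformers library. It is an illustration under stated assumptions, not the authors' released implementation: the model name, example strings, mean pooling, and the temperature of 0.05 are assumptions, and it uses dropout alone as the source of the two differing views, whereas the paper also allows slightly modified (e.g., partially masked) input strings as positives.

# Sketch of Mirror-BERT-style identity fine-tuning (illustrative, not the
# paper's released code): model name, strings, pooling, and temperature
# are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.train()  # keep dropout active: two passes over the same input differ
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["cross-lingual transfer", "sentence encoders", "word translation"]
batch = tok(texts, padding=True, return_tensors="pt")

def encode(batch):
    hidden = model(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)   # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)    # mean pooling over tokens

z1 = F.normalize(encode(batch), dim=-1)  # first stochastic view
z2 = F.normalize(encode(batch), dim=-1)  # second view of the identical strings

# InfoNCE: each string's two views are positives (the diagonal); the other
# strings in the batch act as negatives.
logits = z1 @ z2.T / 0.05
labels = torch.arange(len(texts))
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2

opt.zero_grad()
loss.backward()
opt.step()

In practice this update would run over mini-batches of unlabeled strings; the abstract reports that the whole conversion takes only 20-30 seconds and needs no annotated data or external knowledge.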


All 45 hits are open access documents; the catalogues, bibliographies, Linked Open Data catalogues, and other online resources return no hits.