1. Welcome to the Modern World of Pronouns: Identity-Inclusive Natural Language Processing beyond Gender ...

3. MultiCite: Modeling realistic citations requires moving beyond the single-sentence single-label setting ...

4. RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models ...

6. AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings ...

7. Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing ...

8. Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing ...

9. Creating a Domain-diverse Corpus for Theory-based Argument Quality Assessment ...
10. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers ...

Abstract: Massively multilingual transformers (MMTs) pretrained via language modeling (e.g., mBERT, XLM-R) have become a default paradigm for zero-shot language transfer in NLP, offering unmatched transfer performance. Current evaluations, however, verify their efficacy in transfers (a) to languages with sufficiently large pretraining corpora, and (b) between close languages. In this work, we analyze the limitations of downstream language transfer with MMTs, showing that, much like cross-lingual word embeddings, they are substantially less effective in resource-lean scenarios and for distant languages. Our experiments, encompassing three lower-level tasks (POS tagging, dependency parsing, NER) and two high-level tasks (NLI, QA), empirically correlate transfer performance with linguistic proximity between source and target languages, but also with the size of target language corpora used in MMT pretraining. Most importantly, we demonstrate that the inexpensive few-shot transfer (i.e., additional fine-tuning on a few ...

URL: https://www.repository.cam.ac.uk/handle/1810/315103
URL: https://dx.doi.org/10.17863/cam.62210
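The few-shot transfer this abstract points to is a two-stage recipe: fine-tune the multilingual transformer on source-language data as usual, then continue fine-tuning on a handful of labeled target-language examples before evaluating. The following is a minimal sketch of that recipe using Hugging Face transformers and datasets; the dataset (XNLI), the language pair (English to Swahili), the few-shot budget k, and all hyperparameters are illustrative assumptions, not the paper's exact setup.

# Sketch of two-stage few-shot cross-lingual transfer. Assumptions (not from
# the paper): XNLI as the task, English -> Swahili, k = 10, hyperparameters.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

def tokenize(batch):
    # Pad to a fixed length so the default data collator can batch examples.
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

# Stage 1 data: the full source-language (English) training set.
source = load_dataset("xnli", "en")["train"].map(tokenize, batched=True)

# Stage 2 data: only k labeled target-language (Swahili) examples.
k = 10
target = (load_dataset("xnli", "sw")["train"]
          .shuffle(seed=0).select(range(k)).map(tokenize, batched=True))

def fit(dataset, output_dir, epochs):
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=epochs,
                             per_device_train_batch_size=32)
    Trainer(model=model, args=args, train_dataset=dataset).train()

fit(source, "stage1-source", epochs=3)    # evaluating here = zero-shot transfer
fit(target, "stage2-fewshot", epochs=10)  # inexpensive few-shot adaptation

Evaluating the model after stage 1 gives the zero-shot baseline; evaluating after stage 2 measures the few-shot gain the abstract describes, at the cost of annotating only k target-language examples.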
11. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers ...

12. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...

13. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...
14. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity

Lauscher, Anne; Vulić, Ivan; Ponti, Edoardo. In: Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020). International Committee on Computational Linguistics, 2020. URL: https://www.aclweb.org/anthology/2020.coling-main.118
15. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers

16. Specializing unsupervised pretraining models for word-level semantic similarity

17. AraWEAT: Multidimensional analysis of biases in Arabic word embeddings

18. Common sense or world knowledge? Investigating adapter-based knowledge injection into pretrained transformers

19. From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers

20. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...