
Search in the Catalogues and Directories

Hits 1 – 19 of 19

1. RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models
2. AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings
3. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers
Lauscher, Anne; Ravishankar, Vinit; Vulic, Ivan. - Apollo - University of Cambridge Repository, 2020
4. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers
5. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
6. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
Lauscher, Anne; Vulic, Ivan; Ponti, Edoardo. - Apollo - University of Cambridge Repository, 2020
7. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
Lauscher, Anne; Vulic, Ivan; Ponti, Edoardo. - International Committee on Computational Linguistics, Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020), 2020. https://www.aclweb.org/anthology/2020.coling-main.118
8. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers
Ravishankar, Vinit; Glavas, Goran; Lauscher, Anne; Vulic, Ivan. - Association for Computational Linguistics, Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), 2020
Abstract: Massively multilingual transformers (MMTs) pretrained via language modeling (e.g., mBERT, XLM-R) have become a default paradigm for zero-shot language transfer in NLP, offering unmatched transfer performance. Current evaluations, however, verify their efficacy in transfers (a) to languages with sufficiently large pretraining corpora, and (b) between close languages. In this work, we analyze the limitations of downstream language transfer with MMTs, showing that, much like cross-lingual word embeddings, they are substantially less effective in resource-lean scenarios and for distant languages. Our experiments, encompassing three lower-level tasks (POS tagging, dependency parsing, NER) and two high-level tasks (NLI, QA), empirically correlate transfer performance with linguistic proximity between source and target languages, but also with the size of target language corpora used in MMT pretraining. Most importantly, we demonstrate that the inexpensive few-shot transfer (i.e., additional fine-tuning on a few target-language instances) is surprisingly effective across the board, warranting more research efforts reaching beyond the limiting zero-shot conditions.
URL: https://doi.org/10.17863/CAM.62210
https://www.repository.cam.ac.uk/handle/1810/315103
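The abstract's central finding is that a few labeled target-language instances go a long way. A minimal sketch of that few-shot transfer step, assuming a HuggingFace-style setup; the model name, example sentences, labels, and hyperparameters are illustrative assumptions, not values from the paper:

```python
# Sketch of the "few-shot transfer" idea from the abstract above: after the
# usual fine-tuning on source-language data, continue fine-tuning the same
# multilingual transformer on a handful of target-language examples.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # mBERT, one MMT named in the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical few-shot set: a couple of labeled German instances.
few_shot_texts = ["Der Film war großartig.", "Das Essen war schrecklich."]
few_shot_labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

batch = tokenizer(few_shot_texts, padding=True, truncation=True, return_tensors="pt")

# A few gradient steps of "additional fine-tuning on a few target-language
# instances", per the abstract.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a handful of passes over the tiny few-shot set
    optimizer.zero_grad()
    out = model(**batch, labels=few_shot_labels)
    out.loss.backward()
    optimizer.step()
```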
9. Specializing unsupervised pretraining models for word-level semantic similarity
Ponti, Edoardo Maria; Korhonen, Anna; Vulić, Ivan. - Association for Computational Linguistics, ACL, 2020
10. AraWEAT: Multidimensional analysis of biases in Arabic word embeddings
Lauscher, Anne; Takieddin, Rafik; Ponzetto, Simone Paolo. - Association for Computational Linguistics, 2020
11. Common sense or world knowledge? Investigating adapter-based knowledge injection into pretrained transformers
Lauscher, Anne; Majewska, Olga; Ribeiro, Leonardo F. R. - Association for Computational Linguistics, 2020
12. From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers
Ravishankar, Vinit; Glavaš, Goran; Lauscher, Anne. - Association for Computational Linguistics, 2020
13. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
14. Informing unsupervised pretraining with external linguistic knowledge
Lauscher, Anne; Vulić, Ivan; Ponti, Edoardo Maria. - Cornell University, 2019
15. Are we consistently biased? Multidimensional analysis of biases in distributional word vectors
Lauscher, Anne; Glavaš, Goran. - Association for Computational Linguistics, 2019
16. ArguminSci: a tool for analyzing argumentation and rhetorical aspects in scientific writing
Glavaš, Goran; Lauscher, Anne; Eckert, Kai. - Association for Computational Linguistics, 2018
17. An argument-annotated corpus of scientific publications
Ponzetto, Simone Paolo; Lauscher, Anne; Glavaš, Goran. - Association for Computational Linguistics, 2018
18. Investigating the role of argumentation in the rhetorical analysis of scientific publications with neural multi-task learning models
Ponzetto, Simone Paolo; Eckert, Kai; Lauscher, Anne. - Association for Computational Linguistics, 2018
19. University of Mannheim @ CLSciSumm-17: Citation-Based Summarization of Scientific Articles Using Semantic Textual Similarity

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 19