
Search in the Catalogues and Directories

Hits 1 – 4 of 4

1
PROST: Physical Reasoning about Objects through Space and Time ...
BASE
2
Don't Rule Out Monolingual Speakers: A Method For Crowdsourcing Machine Translation Data ...
BASE
3
What Would a Teacher Do? Predicting Future Talk Moves ...
BASE
4
How to Adapt Your Pretrained Multilingual Model to 1600 Languages ...
Read paper: https://www.aclanthology.org/2021.acl-long.351
Abstract: Pretrained multilingual models (PMMs) enable zero-shot learning via cross-lingual transfer, performing best for languages seen during pretraining. While methods exist to improve performance for unseen languages, they have almost exclusively been evaluated using amounts of raw text only available for a small fraction of the world's languages. In this paper, we evaluate the performance of existing methods to adapt PMMs to new languages using a resource available for close to 1600 languages: the New Testament. This is challenging for two reasons: (1) the small corpus size, and (2) the narrow domain. While performance drops for all approaches, we surprisingly still see gains of up to 17.69% accuracy for part-of-speech tagging and 6.29 F1 for NER on average over all languages as compared to XLM-R. Another unexpected finding is that continued pretraining, the simplest approach, performs best. Finally, we perform a case study to disentangle the ...
Keyword: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://underline.io/lecture/26066-how-to-adapt-your-pretrained-multilingual-model-to-1600-languages
https://dx.doi.org/10.48448/xj48-9d02
BASE
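The abstract above reports that continued pretraining, the simplest adaptation method, performs best on the small, narrow-domain New Testament corpus. Below is a minimal sketch of that idea, assuming the Hugging Face transformers and datasets libraries: continued masked-language-model training of xlm-roberta-base on a small monolingual text file. The file name new_testament.txt and all hyperparameters are illustrative assumptions, not the setup used in the paper.

# Minimal sketch of continued pretraining (MLM) of XLM-R on a small
# monolingual corpus. Paths and hyperparameters are assumptions for
# illustration only, not the authors' configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "xlm-roberta-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# One verse or sentence per line in a plain-text file (hypothetical path).
raw = load_dataset("text", data_files={"train": "new_testament.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking of 15% of tokens, as in standard MLM pretraining.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="xlmr-continued-pretraining",
    per_device_train_batch_size=16,
    num_train_epochs=40,          # small corpus, so many epochs (assumed value)
    learning_rate=2e-5,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()

# The adapted encoder can then be fine-tuned on part-of-speech tagging or NER,
# or used zero-shot via cross-lingual transfer, as evaluated in the paper.

The sketch keeps the original XLM-R architecture unchanged and simply continues its pretraining objective on the new-language text, which is what makes this the simplest of the adaptation methods the abstract compares.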

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 4