Catalogue search
Hits 1 – 3 of 3
1. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer ...
   Pfeiffer, Jonas; Vulić, Ivan; Gurevych, Iryna. arXiv, 2020 (BASE)
2. How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models ...
   Rust, Phillip; Pfeiffer, Jonas; Vulić, Ivan. arXiv, 2020 (BASE)
3. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts ...
   Pfeiffer, Jonas; Vulić, Ivan; Gurevych, Iryna; Ruder, Sebastian. arXiv, 2020 (BASE)

   Abstract:
   Massively multilingual language models such as multilingual BERT offer state-of-the-art cross-lingual transfer performance on a range of NLP tasks. However, due to limited capacity and large differences in pretraining data sizes, there is a profound performance gap between resource-rich and resource-poor target languages. The ultimate challenge is dealing with under-resourced languages not covered at all by the models and written in scripts unseen during pretraining. In this work, we propose a series of novel data-efficient methods that enable quick and effective adaptation of pretrained multilingual models to such low-resource languages and unseen scripts. Relying on matrix factorization, our methods capitalize on the existing latent knowledge about multiple languages already available in the pretrained model's embedding matrix. Furthermore, we show that learning of the new dedicated embedding matrix in the target language can be improved by leveraging a small number of vocabulary items (i.e., the so-called ... : EMNLP 2021 ...

   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

   URL:
   https://arxiv.org/abs/2012.15562
   https://dx.doi.org/10.48550/arxiv.2012.15562
© 2013 - 2024 Lin|gu|is|tik