
Search in the Catalogues and Directories

Hits 1 – 20 of 22

1
Observation of new excited $B^0_s$ states
In: Eur. Phys. J. C 81 (7), p. 601 (2021). ⟨10.1140/epjc/s10052-021-09305-3⟩. https://hal.archives-ouvertes.fr/hal-03010999
2
Infective Endocarditis in Patients on Chronic Hemodialysis
In: Journal of the American College of Cardiology (ISSN 0735-1097), Elsevier, 2021, 77 (13), pp. 1629-1640. ⟨10.1016/j.jacc.2021.02.014⟩. https://hal.archives-ouvertes.fr/hal-03369871
3
Plan-then-Generate: Controlled Data-to-Text Generation via Planning ...
4
Plan-then-Generate: Controlled Data-to-Text Generation via Planning ...
5
Prix-LM: Pretraining for Multilingual Knowledge Base Construction ...
6
Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking ...
7
MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
8
Language and ethnobiological skills decline precipitously in Papua New Guinea, the world's most linguistically diverse nation
Ibalim, Sentiko; Saulei, Simon; Novotny, Vojtech. National Academy of Sciences, 2021
9
MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
Liu, Qianchu; Liu, Fangyu; Collier, Nigel. Apollo - University of Cambridge Repository, 2021
10
Visually Grounded Reasoning across Languages and Cultures ...
11
Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking ...
12
MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
Abstract: Recent work indicated that pretrained language models (PLMs) such as BERT and RoBERTa can be transformed into effective sentence and word encoders even via simple self-supervised techniques. Inspired by this line of work, in this paper we propose a fully unsupervised approach to improving word-in-context (WiC) representations in PLMs, achieved via a simple and efficient WiC-targeted fine-tuning procedure: MirrorWiC. The proposed method leverages only raw texts sampled from Wikipedia, assuming no sense-annotated data, and learns context-aware word representations within a standard contrastive learning setup. We experiment with a series of standard and comprehensive WiC benchmarks across multiple languages. Our proposed fully unsupervised MirrorWiC models obtain substantial gains over off-the-shelf PLMs across all monolingual, multilingual and cross-lingual setups. Moreover, on some standard WiC benchmarks, MirrorWiC is even on-par with supervised models fine-tuned with in-task data and sense labels. ...
Keyword: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://underline.io/lecture/39862-mirrorwic-on-eliciting-word-in-context-representations-from-pretrained-language-models
https://dx.doi.org/10.48448/hs20-qq06
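
The abstract above describes a fully unsupervised, contrastive fine-tuning objective for word-in-context (WiC) representations. Below is a minimal sketch of how such an objective can be set up, not the authors' implementation: two encodings of the same target word in the same sentence (with dropout active, so the two forward passes differ) are treated as a positive pair, and the other words in the batch act as in-batch negatives. The checkpoint name, the helper functions word_in_context_embedding and mirror_contrastive_loss, and the temperature value are illustrative assumptions; the published method may also apply further augmentations not shown here.

# Minimal sketch (not the authors' code) of a MirrorWiC-style contrastive
# objective: two dropout-noised encodings of the same word-in-context form a
# positive pair; other words in the batch serve as in-batch negatives.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint
enc = AutoModel.from_pretrained("bert-base-uncased")
enc.train()  # keep dropout active so two passes yield two different "views"

def word_in_context_embedding(sentences, char_spans):
    """Mean-pool the subword vectors that cover each target word span."""
    batch = tok(sentences, return_tensors="pt", padding=True, truncation=True)
    hidden = enc(**batch).last_hidden_state              # (B, T, H)
    vecs = []
    for i, (start, end) in enumerate(char_spans):
        # collect the token positions whose character span lies inside the target word
        idx = [t for t in range(hidden.size(1))
               if batch.token_to_chars(i, t) is not None
               and batch.token_to_chars(i, t).start >= start
               and batch.token_to_chars(i, t).end <= end]
        vecs.append(hidden[i, idx].mean(dim=0))
    return torch.stack(vecs)                              # (B, H)

def mirror_contrastive_loss(sentences, char_spans, temperature=0.05):
    """InfoNCE over two dropout-augmented views of the same word occurrences."""
    z1 = F.normalize(word_in_context_embedding(sentences, char_spans), dim=-1)
    z2 = F.normalize(word_in_context_embedding(sentences, char_spans), dim=-1)
    sims = z1 @ z2.t() / temperature                      # (B, B) similarity matrix
    labels = torch.arange(sims.size(0))                   # i-th view matches i-th view
    return F.cross_entropy(sims, labels)

For example, mirror_contrastive_loss(["The bank approved the loan.", "She sat on the river bank."], [(4, 8), (21, 25)]) returns a scalar loss that can be minimised with any standard optimiser; in this toy batch the two senses of "bank" act as negatives for each other.
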
13
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
Liu, Fangyu; Vulić, I.; Korhonen, Anna-Leena. Apollo - University of Cambridge Repository, 2021
14
Visually Grounded Reasoning across Languages and Cultures ...
15
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
16
Visually Grounded Reasoning across Languages and Cultures ...
17
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
18
Self-Alignment Pretraining for Biomedical Entity Representations
Liu, Fangyu; Shareghi, Ehsan; Meng, Zaiqiao. Association for Computational Linguistics, 2021. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
19
Language and ethnobiological skills decline precipitously in Papua New Guinea, the world’s most linguistically diverse nation
In: Proc Natl Acad Sci U S A (2021)
20
Current challenges and future perspectives in oral absorption research: an opinion of the UNGAP network


All 22 hits are open access documents (source: BASE); no matches were found in the catalogues, bibliographies, Linked Open Data catalogues, or other online resources.