
Search in the Catalogues and Directories

Hits 21 – 34 of 34

21
Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking ...
BASE
22
MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
BASE
23
Multilingual and Cross-Lingual Intent Detection from Spoken Data ...
BASE
24
Semantic Data Set Construction from Human Clustering and Spatial Arrangement ...
Majewska, Olga; McCarthy, Diana; Van Den Bosch, Jasper JF. Apollo - University of Cambridge Repository, 2021
BASE
25
AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples ...
BASE
26
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
BASE
27
Parameter space factorization for zero-shot learning across tasks and languages
In: Transactions of the Association for Computational Linguistics, 9 (2021)
BASE
28
AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples ...
BASE
29
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
BASE
30
LexFit: Lexical Fine-Tuning of Pretrained Language Models ...
Read paper: https://www.aclanthology.org/2021.acl-long.410
Abstract: Transformer-based language models (LMs) pretrained on large text collections implicitly store a wealth of lexical semantic knowledge, but it is non-trivial to extract that knowledge effectively from their parameters. Inspired by prior work on semantic specialization of static word embedding (WE) models, we show that it is possible to expose and enrich lexical knowledge from the LMs, that is, to specialize them to serve as effective and universal "decontextualized" word encoders even when fed input words "in isolation" (i.e., without any context). Their transformation into such word encoders is achieved through a simple and efficient lexical fine-tuning procedure (termed LexFit) based on dual-encoder network structures. Further, we show that LexFit can yield effective word encoders even with limited lexical supervision and, via cross-lingual transfer, in different languages without any readily available external knowledge. Our evaluation ...
Keyword: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/2skf-gv34
https://underline.io/lecture/25829-lexfit-lexical-fine-tuning-of-pretrained-language-models
BASE
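The abstract above describes turning a pretrained LM into a "decontextualized" word encoder via dual-encoder lexical fine-tuning. The paper's actual objectives, supervision sources, and hyperparameters are not reproduced here; the following is only a minimal sketch of that general idea, assuming a BERT-style backbone loaded with the Hugging Face transformers library, mean-pooled subword embeddings, a toy set of synonym pairs, and an in-batch contrastive loss. All of these concrete choices are illustrative assumptions, not the authors' implementation.

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Assumption: any BERT-style checkpoint can serve as the dual-encoder backbone
# (both sides of the dual encoder share the same weights here).
MODEL_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

def encode_words(words):
    # Words are encoded "in isolation", i.e. without any sentence context.
    batch = tokenizer(words, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state            # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (batch, tokens, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean over subword tokens
    return nn.functional.normalize(pooled, dim=-1)

# Hypothetical toy lexical supervision: (word, synonym) pairs.
pairs = [("car", "automobile"), ("happy", "glad"), ("big", "large"), ("fast", "quick")]

optimizer = torch.optim.AdamW(encoder.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()
encoder.train()

for epoch in range(3):
    left = encode_words([w for w, _ in pairs])
    right = encode_words([s for _, s in pairs])
    # In-batch contrastive objective: each word should be most similar to its
    # own synonym and dissimilar to the other words in the same batch.
    logits = left @ right.T / 0.05                          # temperature-scaled cosine similarities
    labels = torch.arange(len(pairs))
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

# After fine-tuning, encode_words() can embed arbitrary words for lexical tasks.

This only illustrates the dual-encoder fine-tuning idea in miniature; the LexFit paper itself explores several fine-tuning objectives, larger lexical constraint sets, and cross-lingual transfer, for which see the linked paper.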
31
Verb Knowledge Injection for Multilingual Event Processing ...
BASE
32
A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters ...
BASE
33
Is supervised syntactic parsing beneficial for language understanding tasks? An empirical investigation
Glavaš, Goran; Vulić, Ivan. Association for Computational Linguistics, 2021
BASE
34
Evaluating multilingual text encoders for unsupervised cross-lingual retrieval
BASE


Catalogues: 3
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 31