
Search in the Catalogues and Directories

Hits 1–20 of 113 (all open access documents, indexed in BASE)

1. EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification
2. Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval
3. IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages
4. Cross-Lingual Dialogue Dataset Creation via Outline-Based Generation
5. Improving Word Translation via Two-Stage Contrastive Learning
6. On cross-lingual retrieval with multilingual text encoders
   Litschko, Robert; Vulić, Ivan; Ponzetto, Simone Paolo. Springer Science + Business Media, 2022
7. SimLex-999 Slovenian translation SimLex-999-sl 1.0
   Pollak, Senja; Vulić, Ivan; Pelicon, Andraž. University of Ljubljana, 2021
8. Towards Zero-shot Language Modeling
9. Multilingual and Cross-Lingual Intent Detection from Spoken Data
10. Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems
11. Modelling Latent Translations for Cross-Lingual Transfer
12. Prix-LM: Pretraining for Multilingual Knowledge Base Construction
13. Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking
14. xGQA: Cross-Lingual Visual Question Answering
15. On Cross-Lingual Retrieval with Multilingual Text Encoders
16. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models
17. Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual Retrieval
    Abstract: Pretrained multilingual text encoders based on neural Transformer architectures, such as multilingual BERT (mBERT) and XLM, have achieved strong performance on a myriad of language understanding tasks. Consequently, they have been adopted as a go-to paradigm for multilingual and cross-lingual representation learning and transfer, rendering cross-lingual word embeddings (CLWEs) effectively obsolete. However, questions remain as to what extent this finding generalizes 1) to unsupervised settings and 2) to ad-hoc cross-lingual IR (CLIR) tasks. In this work we therefore present a systematic empirical study of the suitability of state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a large number of language pairs. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR, a setup with no relevance judgments for IR-specific fine-tuning, pretrained encoders fail to significantly outperform models ...
    Note: accepted at ECIR'21 (preprint)
    Keywords: Computation and Language (cs.CL); Information Retrieval (cs.IR); FOS: Computer and information sciences; ACM classes: H.3.3; I.2.7
    URL: https://dx.doi.org/10.48550/arxiv.2101.08370
         https://arxiv.org/abs/2101.08370
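    The setup this abstract describes, ranking documents in one language against a query in another with no relevance judgments and no IR-specific fine-tuning, reduces to embedding both sides with a pretrained multilingual encoder and ranking by vector similarity. Below is a minimal sketch assuming the sentence-transformers library; the model checkpoint and the toy query/documents are illustrative, not taken from the paper.

    # Minimal sketch of unsupervised cross-lingual retrieval (CLIR) with a
    # pretrained multilingual encoder, as in the setup the abstract describes.
    # Assumption: sentence-transformers is installed; the model name is an
    # illustrative multilingual checkpoint, not the one evaluated in the paper.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    # English query against a German document collection (toy data).
    query = "effects of multilingual encoders on document retrieval"
    docs = [
        "Mehrsprachige Encoder verbessern die Dokumentensuche.",
        "Das Wetter in Berlin war heute sonnig.",
        "Cross-linguales Retrieval ohne Relevanzurteile ist schwierig.",
    ]

    # Encode query and documents into the shared multilingual vector space;
    # normalized embeddings make the dot product equal cosine similarity.
    q_emb = model.encode([query], normalize_embeddings=True)
    d_embs = model.encode(docs, normalize_embeddings=True)

    # Rank documents by cosine similarity to the query, highest first.
    scores = (d_embs @ q_emb.T).ravel()
    for rank, idx in enumerate(np.argsort(-scores), start=1):
        print(f"{rank}. score={scores[idx]:.3f}  {docs[idx]}")

    The paper's point is that this zero-supervision pipeline, while convenient, does not automatically beat earlier CLIR baselines at the document level; the sketch only shows the mechanics of the setup being evaluated.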
18. RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models
19. Parameter space factorization for zero-shot learning across tasks and languages
20. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models
    Liu, Qianchu; Liu, Fangyu; Collier, Nigel. Apollo - University of Cambridge Repository, 2021

