
Search in the Catalogues and Directories

Hits 1 – 20 of 182

1. EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification ...
2. Delving Deeper into Cross-lingual Visual Question Answering ...
3. Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval ...
4. IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages ...
5. Cross-Lingual Dialogue Dataset Creation via Outline-Based Generation ...
6. Improving Word Translation via Two-Stage Contrastive Learning ...
7. On Cross-Lingual Retrieval with Multilingual Text Encoders
   Litschko, Robert; Vulić, Ivan; Ponzetto, Simone Paolo. - : Springer Science + Business Media, 2022
8. SimLex-999 Slovenian translation SimLex-999-sl 1.0
   Pollak, Senja; Vulić, Ivan; Pelicon, Andraž. - : University of Ljubljana, 2021
9. Towards Zero-shot Language Modeling ...
10. Multilingual and Cross-Lingual Intent Detection from Spoken Data ...
11. Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems ...
12. Modelling Latent Translations for Cross-Lingual Transfer ...
Abstract: While achieving state-of-the-art results in multiple tasks and languages, translation-based cross-lingual transfer is often overlooked in favour of massively multilingual pre-trained encoders. Arguably, this is due to its main limitations: 1) translation errors percolating to the classification phase and 2) the insufficient expressiveness of the maximum-likelihood translation. To remedy this, we propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model, by treating the intermediate translations as a latent random variable. As a result, 1) the neural machine translation system can be fine-tuned with a variant of Minimum Risk Training where the reward is the accuracy of the downstream task classifier. Moreover, 2) multiple samples can be drawn to approximate the expected loss across all possible translations during inference. We evaluate our novel latent translation-based model on a series of multilingual NLU tasks, including commonsense ...
Keyword: Computation and Language cs.CL; FOS Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2107.11353
https://arxiv.org/abs/2107.11353
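The abstract of item 12 describes inference as drawing multiple sample translations to approximate the expected loss over all possible latent translations. As a rough illustration of that Monte-Carlo step only, the sketch below uses toy stand-ins: `translate_sample` and `classifier_loss` are hypothetical placeholders, not the paper's NMT system or task classifier.

```python
import random

def translate_sample(x: str) -> str:
    # Toy sampler standing in for t ~ p(t | x): picks one of three
    # candidate "translations" uniformly at random (hypothetical).
    candidates = [f"{x} (variant {i})" for i in range(3)]
    return random.choice(candidates)

def classifier_loss(t: str, y: int) -> float:
    # Toy 0/1 loss standing in for the downstream classifier:
    # zero loss iff the sampled variant index matches the label.
    return 0.0 if t.endswith(f"(variant {y})") else 1.0

def expected_loss(x: str, y: int, num_samples: int = 1000) -> float:
    # Monte-Carlo estimate of E_{t ~ p(t|x)}[L(f(t), y)]:
    # average the classifier loss over sampled latent translations.
    samples = [translate_sample(x) for _ in range(num_samples)]
    return sum(classifier_loss(t, y) for t in samples) / num_samples

random.seed(0)
loss = expected_loss("input sentence", y=1)
# With a uniform toy sampler over three variants, the estimate
# should be close to 2/3.
```

The same averaging idea applies whatever the real translation model and classifier are; only the sampling distribution and loss change.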
13. Prix-LM: Pretraining for Multilingual Knowledge Base Construction ...
14. Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking ...
15. xGQA: Cross-Lingual Visual Question Answering ...
16. On Cross-Lingual Retrieval with Multilingual Text Encoders ...
17. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
18. Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual Retrieval ...
19. RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models ...
20. Parameter Space Factorization for Zero-Shot Learning across Tasks and Languages ...


Hits by source type: all 182 hits are open access documents; no hits in catalogues, bibliographies, Linked Open Data catalogues, or online resources.
© 2013 – 2024 Lin|gu|is|tik