
Search in the Catalogues and Directories

Page 1 of 4 · Hits 1 – 20 of 79

1. IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages ... (BASE)
2. Cross-Lingual Dialogue Dataset Creation via Outline-Based Generation ... (BASE)
3. Specializing unsupervised pretraining models for word-level semantic similarity
   Lauscher, Anne [author]; Vulic, Ivan [author]; Ponti, Edoardo Maria [author]. - Mannheim: Universitätsbibliothek Mannheim, 2021 (DNB subject category: Language)
4. Towards Zero-shot Language Modeling ... (BASE)
5. Differentiable Generative Phonology ... (BASE)
6. Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems ... (BASE)
7. Modelling Latent Translations for Cross-Lingual Transfer ... (BASE)
8. Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ... (BASE)
9. Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ... (BASE)
10. Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ... (BASE)
11. Parameter space factorization for zero-shot learning across tasks and languages ... (BASE)
    Abstract: Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task–language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task–language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields ... (A toy sketch of this factorization idea follows the hit list below.)
    In: Transactions of the Association for Computational Linguistics, 9 ...
    URL: https://dx.doi.org/10.3929/ethz-b-000498270
    URL: http://hdl.handle.net/20.500.11850/498270
12. Visually Grounded Reasoning across Languages and Cultures ... (BASE)
13. Visually Grounded Reasoning across Languages and Cultures ... (BASE)
14. AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples ... (BASE)
15. Parameter space factorization for zero-shot learning across tasks and languages (BASE)
    In: Transactions of the Association for Computational Linguistics, 9 (2021)
16. Visually Grounded Reasoning across Languages and Cultures ... (BASE)
17. AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples ... (BASE)
18. LexFit: Lexical Fine-Tuning of Pretrained Language Models ... (BASE)
19. Verb Knowledge Injection for Multilingual Event Processing ... (BASE)
20. Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity ... (BASE)
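The abstract in hit 11 describes factorizing the space of neural parameters into latent variables for each task and each language, inferring posteriors over those latents from seen task–language combinations, and composing them to make predictions for unseen combinations. The sketch below is a heavily simplified, hypothetical illustration of that compositional idea, not the paper's implementation: classifier weights for a task–language pair are assumed to be the sum of a task latent and a language latent, the latents are fit as point estimates by gradient descent instead of variational posteriors, a third language ("en") is added so the factors are identifiable without the paper's priors, and all data are simulated.

```python
# Toy sketch of parameter-space factorization for zero-shot task-language transfer.
# Assumptions (not from the paper): additive composition of latents, point estimates
# instead of variational posteriors, and simulated binary-classification data.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
tasks, langs = ["ner", "pos"], ["vi", "wo", "en"]

# Ground-truth latents, used only to simulate labelled data for this sketch.
true_task = {t: rng.normal(size=dim) for t in tasks}
true_lang = {l: rng.normal(size=dim) for l in langs}

def simulate(task, lang, n=400):
    """Binary data whose true decision boundary is the sum of task and language latents."""
    w = true_task[task] + true_lang[lang]
    X = rng.normal(size=(n, dim))
    y = (X @ w > 0).astype(float)
    return X, y

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-np.clip(a, -30.0, 30.0)))  # clipped to avoid overflow

# Seen task-language combinations; (ner, wo) is held out as the zero-shot target.
seen_pairs = [(t, l) for t in tasks for l in langs if (t, l) != ("ner", "wo")]
seen_data = {pair: simulate(*pair) for pair in seen_pairs}

# Learnable per-task and per-language latents (point estimates).
z_task = {t: np.zeros(dim) for t in tasks}
z_lang = {l: np.zeros(dim) for l in langs}

lr = 0.5
for _ in range(500):
    for (t, l), (X, y) in seen_data.items():
        w = z_task[t] + z_lang[l]        # compose latents into classifier weights
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)    # logistic-loss gradient w.r.t. the composed weights
        z_task[t] -= lr * grad           # the same gradient flows into both factors
        z_lang[l] -= lr * grad

# Zero-shot prediction: compose the latents of the never-seen (ner, wo) combination.
X_test, y_test = simulate("ner", "wo", n=1000)
w_zero_shot = z_task["ner"] + z_lang["wo"]
acc = ((sigmoid(X_test @ w_zero_shot) > 0.5) == y_test.astype(bool)).mean()
print(f"Zero-shot accuracy on the unseen (NER, Wolof) combination: {acc:.2f}")
```

The point of the toy is only to show why data from seen combinations such as (NER, Vietnamese) and (POS, Wolof) constrains an unseen combination like (NER, Wolof) once the parameter space is assumed to factorize; in the paper the latents are random variables over full network parameters and their posteriors are obtained by variational inference.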


Results by source: Catalogues 1 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 78