
Search in the Catalogues and Directories

Hits 61–80 of 116

61. Universal Phone Recognition with a Multilingual Allophone System
62. The Return of Lexical Dependencies: Neural Lexicalized PCFGs
63. XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
64. X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models
65. AlloVera: a multilingual allophone database
In: LREC 2020: 12th Language Resources and Evaluation Conference, European Language Resources Association, May 2020, Marseille, France. https://halshs.archives-ouvertes.fr/halshs-02527046
66. How Can We Know What Language Models Know?
In: Transactions of the Association for Computational Linguistics, Vol. 8, pp. 423–438 (2020)
67. Improving Candidate Generation for Low-resource Cross-lingual Entity Linking
In: Transactions of the Association for Computational Linguistics, Vol. 8, pp. 109–124 (2020)
68. A Bilingual Generative Transformer for Semantic Sentence Embedding
69. Generalized Data Augmentation for Low-Resource Translation
70. Improving Robustness of Machine Translation with Synthetic Noise
71. Cross-lingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework
Wang, Zirui; Xie, Jiateng; Xu, Ruochen. arXiv, 2019
72. Towards Zero-resource Cross-lingual Entity Linking
73. Target Conditioned Sampling: Optimizing Data Selection for Multilingual Neural Machine Translation
Wang, Xinyi; Neubig, Graham. arXiv, 2019
Abstract: To improve low-resource Neural Machine Translation (NMT) with multilingual corpora, training only on the most closely related high-resource language is often more effective than using all available data (Neubig and Hu, 2018). However, an intelligent data selection strategy may further improve low-resource NMT with data from other auxiliary languages. In this paper, we seek to construct a sampling distribution over all multilingual data that minimizes the training loss of the low-resource language. Based on this formulation, we propose an efficient algorithm, Target Conditioned Sampling (TCS), which first samples a target sentence and then conditionally samples its source sentence. Experiments show that TCS brings significant gains of up to 2 BLEU on three of the four languages we test, with minimal training overhead. (Accepted at ACL 2019.)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.1905.08212
https://arxiv.org/abs/1905.08212
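The abstract above describes TCS as a two-stage sampling procedure: first draw a target sentence, then conditionally draw one of its source-side translations from the auxiliary languages. As a rough illustration only (not the authors' implementation; the data and conditional weights below are hypothetical placeholders), the shape of that two-stage sampling might look like this minimal Python sketch:

```python
import random

# Hypothetical toy data: each target-language sentence maps to candidate
# source sentences from auxiliary languages, each with an illustrative
# conditional weight. In TCS these distributions are constructed so that
# training minimizes the low-resource language's loss; here they are
# placeholders chosen only to show the two-stage sampling structure.
parallel_data = {
    "target sentence 1": [("source in lang A", 0.7), ("source in lang B", 0.3)],
    "target sentence 2": [("source in lang A", 0.5), ("source in lang C", 0.5)],
}

def sample_training_pair(data):
    # Stage 1: sample a target sentence (uniform here for simplicity).
    target = random.choice(list(data.keys()))
    # Stage 2: conditionally sample a source sentence given that target.
    sources, weights = zip(*data[target])
    source = random.choices(sources, weights=weights, k=1)[0]
    return source, target

src, tgt = sample_training_pair(parallel_data)
print(src, "->", tgt)
```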
74. Pushing the Limits of Low-Resource Morphological Inflection
75. Self-Attentional Models for Lattice Inputs
76. Multilingual Neural Machine Translation With Soft Decoupled Encoding
77. Beyond BLEU: Training Neural Machine Translation with Semantic Similarity
78. Domain Adaptation of Neural Machine Translation by Lexicon Induction
79. Should All Cross-Lingual Embeddings Speak English?
80. DIRE: A Neural Approach to Decompiled Identifier Naming


Hit counts by source type: Catalogues 3 · Bibliographies 1 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 113