
Search in the Catalogues and Directories

Hits 1 – 18 of 18

1. Optimizing segmentation granularity for neural machine translation [Journal]
Salesky, Elizabeth [author]; Runge, Andrew [author]; Coda, Alex [author]
DNB Subject Category Language
2. A set of recommendations for assessing human-machine parity in language translation
In: Läubli, Samuel; Castilho, Sheila; Neubig, Graham; Sennrich, Rico; Shen, Qinlan; Toral, Antonio (2020). Journal of Artificial Intelligence Research, 67, pp. 653-672. ISSN 1076-9757
BASE
3. Speech technology for unwritten languages
In: IEEE/ACM Transactions on Audio, Speech and Language Processing, Institute of Electrical and Electronics Engineers, 2020. ISSN 2329-9290; EISSN 2329-9304. DOI: 10.1109/TASLP.2020.2973896. https://hal.inria.fr/hal-02480675
BASE
4. AlloVera: a multilingual allophone database
In: LREC 2020: 12th Language Resources and Evaluation Conference, European Language Resources Association, May 2020, Marseille, France. https://halshs.archives-ouvertes.fr/halshs-02527046 ; https://lrec2020.lrec-conf.org/
BASE
5. AlloVera: A Multilingual Allophone Database ...
BASE
6. Explicit Alignment Objectives for Multilingual Bidirectional Encoders ...
BASE
7. Balancing Training for Multilingual Neural Machine Translation ...
BASE
8. Automatic Extraction of Rules Governing Morphological Agreement ...
BASE
9. A Summary of the First Workshop on Language Technology for Language Documentation and Revitalization ...
BASE
10. A Set of Recommendations for Assessing Human-Machine Parity in Language Translation ...
BASE
11. Improving Target-side Lexical Transfer in Multilingual Neural Machine Translation ...
Gao, Luyu; Wang, Xinyi; Neubig, Graham. arXiv, 2020
BASE
12. Universal Phone Recognition with a Multilingual Allophone System ...
BASE
13. The Return of Lexical Dependencies: Neural Lexicalized PCFGs ...
BASE
14. XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization ...
BASE
15. X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models ...
BASE
16. AlloVera: a multilingual allophone database
In: LREC 2020: 12th Language Resources and Evaluation Conference, European Language Resources Association, May 2020, Marseille, France. https://halshs.archives-ouvertes.fr/halshs-02527046 ; https://lrec2020.lrec-conf.org/
BASE
17. How Can We Know What Language Models Know?
In: Transactions of the Association for Computational Linguistics, Vol 8, pp. 423-438 (2020)
Abstract: Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a __ by profession”. These prompts are usually manually created and quite possibly sub-optimal; another prompt such as “Obama worked as a __” may result in more accurately predicting the correct profession. Given an inappropriate prompt, we might therefore fail to retrieve facts that the LM does know, so any given prompt provides only a lower-bound estimate of the knowledge contained in an LM. In this paper, we attempt to estimate the knowledge contained in LMs more accurately by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA
Keyword: Computational linguistics. Natural language processing; P98-98.5
URL: https://doaj.org/article/861ecb5d6ec2467287cf263aa94e6a75
https://doi.org/10.1162/tacl_a_00324
BASE
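The ensembling idea described in this abstract can be sketched in miniature: query the same relation with several paraphrased prompts and combine the per-answer scores. The following is a minimal sketch in plain Python, not the paper's implementation; the prompts and probabilities are hypothetical stand-ins for real LM outputs.

```python
# Sketch of answer ensembling across prompts: each prompt yields an
# answer distribution, and distributions are combined by weighted
# averaging. All names and numbers below are illustrative.

def ensemble_answers(prompt_dists, weights=None):
    """Average answer distributions from several prompts.

    prompt_dists: list of {answer: probability} dicts, one per prompt.
    weights: optional per-prompt weights (default: uniform).
    """
    if weights is None:
        weights = [1.0 / len(prompt_dists)] * len(prompt_dists)
    combined = {}
    for dist, w in zip(prompt_dists, weights):
        for answer, p in dist.items():
            combined[answer] = combined.get(answer, 0.0) + w * p
    return combined

# Two hypothetical prompts querying the same relation:
dists = [
    {"politician": 0.4, "lawyer": 0.35, "teacher": 0.25},  # "X is a __ by profession"
    {"lawyer": 0.5, "politician": 0.3, "writer": 0.2},     # "X worked as a __"
]
combined = ensemble_answers(dists)
best = max(combined, key=combined.get)
print(best)  # answer with the highest averaged probability
```

A single sub-optimal prompt can rank the wrong answer first; averaging over diverse prompts, as above, is one simple way to make the retrieved fact less sensitive to any one phrasing.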
18. Improving Candidate Generation for Low-resource Cross-lingual Entity Linking
In: Transactions of the Association for Computational Linguistics, Vol 8, pp. 109-124 (2020)
BASE

Sources: Catalogues (1) · Open access documents (17)
© 2013 - 2024 Lin|gu|is|tik