
Search in the Catalogues and Directories

Hits 1 – 11 of 11

1. Text mining at multiple granularity: leveraging subwords, words, phrases, and sentences (BASE)
2. Classification-based Quality Estimation: Small and Efficient Models for Real-world Applications ... (BASE)
3. As Easy as 1, 2, 3: Behavioural Testing of NMT Systems for Numerical Translation ... (BASE)
4. Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning ... (BASE)
5. XLEnt: Mining a Large Cross-lingual Entity Dataset with Lexical-Semantic-Phonetic Word Alignment ... (BASE)
6. Adapting High-resource NMT Models to Translate Low-resource Related Languages without Parallel Data ... (BASE)
7. XLEnt: Mining a Large Cross-lingual Entity Dataset with Lexical-Semantic-Phonetic Word Alignment ... (BASE)
8. Massively Multilingual Document Alignment with Cross-lingual Sentence-Mover's Distance ... (BASE)
9. Beyond English-Centric Multilingual Machine Translation ... (BASE)
10. An exploratory study on multilingual quality estimation (BASE)
In: Proceedings of AACL-IJCNLP 2020, pp. 366–377 (2020)
Abstract: This is an accepted manuscript of an article published by ACL, available online at https://www.aclweb.org/anthology/2020.aacl-main.39; the accepted version may differ from the final published version. Predicting the quality of machine translation has traditionally been addressed with language-specific models, under the assumption that the quality label distribution or linguistic features exhibit traits that are not shared across languages. An obvious disadvantage of this approach is the need for labelled data for each given language pair. We challenge this assumption by exploring different approaches to multilingual Quality Estimation (QE), including using scores from translation models. We show that these outperform single-language models, particularly in less balanced quality label distributions and low-resource settings. In the extreme case of zero-shot QE, we show that it is possible to accurately predict quality for any given new language from models trained on other languages. Our findings indicate that state-of-the-art neural QE models based on powerful pre-trained representations generalise well across languages, making them more applicable in real-world settings.
Keywords: machine translation; multilingual; multitask learning; quality estimation; zero-shot learning
URL: http://hdl.handle.net/2436/623698
11. Incorporating World Knowledge to Document Clustering via Heterogeneous Information Networks (BASE)

© 2013 – 2024 Lin|gu|is|tik