
Search in the Catalogues and Directories

Hits 1 – 19 of 19

1
Neural MT and Human Post-editing : a Method to Improve Editorial Quality
In: Interlingüística (ISSN 1134-8941), Alacant [Spain]: Universitat Autònoma de Barcelona, 2022, pp. 15-36; https://halshs.archives-ouvertes.fr/halshs-03603590
BASE
2
Human evaluation of three machine translation systems : from quality to attitudes by professional translators
BASE
3
Quantitative Fine-grained Human Evaluation of Machine Translation Systems: a Case Study on English to Croatian
In: Articles (2018)
BASE
4
Human-Guided Evolutionary-Based Linguistics Approach For Automatic Story Generation ...
Wang, Kun. - : UNSW Sydney, 2013
BASE
5
MSEE: Stochastic Cognitive Linguistic Behavior Models for Semantic Sensing
In: DTIC (2013)
BASE
6
English → Russian MT evaluation campaign
In: ACL 2013 - 51st Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (2013)
BASE
7
Human-Guided Evolutionary-Based Linguistics Approach For Automatic Story Generation
Wang, Kun. - : University of New South Wales - UNSW Canberra, Engineering & Information Technology, 2013
BASE
8
Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation
Bloodgood, Michael; Callison-Burch, Chris. - : Association for Computational Linguistics, 2010
BASE
9
Task muddiness, intelligence metrics, and the necessity of autonomous mental development
In: http://www.cse.msu.edu/~cse841/papers/MuddyTasks.pdf (2009)
BASE
10
A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping ...
Bloodgood, Michael; Vijay-Shanker, K. - : Digital Repository at the University of Maryland, 2009
BASE
11
A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping
Bloodgood, Michael; Vijay-Shanker, K. - : Association for Computational Linguistics, 2009
BASE
12
Automating Convoy Training Assessment to Improve Soldier Performance
In: DTIC (2008)
BASE
13
Differential Effect of Correct Name Translation on Human and Automated Judgments of Translation Acceptability: A Pilot Study
In: DTIC (2008)
BASE
14
Automatic Computation of ...
In: http://www.informatik.uni-freiburg.de/~ksimon/papers/CIKM-06-Proximity.pdf (2006)
BASE
15
Toward Joint Segmentation and Classification of Dialog Acts in Multiparty Meetings
In: DTIC (2005)
BASE
16
Symposium on Speech Communication Metrics and Human Performance.
In: DTIC AND NTIS (1995)
BASE
17
Structural analysis of hypertexts: Identifying hierarchies and useful metrics
In: http://www.cs.technion.ac.il/~ehudr/publications/pdf/BotafogoRS92a.pdf (1992)
BASE
18
Natural Language Processing Systems Evaluation Workshop Held in Berkeley, California on 18 June 1991
In: DTIC AND NTIS (1991)
BASE
19
Metrics for MT evaluation: Evaluating reordering
In: http://homepages.inf.ed.ac.uk/miles/papers/mt09.pdf
Abstract: Translating between dissimilar languages requires an account of the use of divergent word orders when expressing the same semantic content. Reordering poses a serious problem for statistical machine translation systems and has generated a considerable body of research aimed at meeting its challenges. Direct evaluation of reordering requires automatic metrics that explicitly measure the quality of word order choices in translations. Current metrics, such as BLEU, only evaluate reordering indirectly. We analyse the ability of current metrics to capture reordering performance. We then introduce permutation distance metrics as a direct method for measuring word order similarity between translations and reference sentences. By correlating all metrics with a novel method for eliciting human judgements of reordering quality, we show that current metrics are largely influenced by lexical choice, and that they are not able to distinguish between different reordering scenarios. Also, we show that permutation distance metrics correlate very well with human judgements, and are impervious to lexical differences.
Keyword: BLEU; Human Evaluation; Machine Translation; METEOR; Metrics; Permutation Distances; Reordering; TER
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.650.9610
http://homepages.inf.ed.ac.uk/miles/papers/mt09.pdf
BASE
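The permutation distance metrics described in this abstract can be illustrated with a small sketch. The code below is a hypothetical illustration, not taken from the paper: it computes a normalized Kendall tau similarity for a permutation that maps each source word position to its position in the translation, counting discordant pairs so that a monotone order scores 1.0 and a full reversal scores 0.0.

```python
from itertools import combinations

def kendall_tau_similarity(perm):
    """Normalized Kendall tau similarity for a permutation of 0..n-1.

    perm[i] is the output position of the word at input position i.
    Returns 1.0 for identical word order, 0.0 for fully reversed order.
    """
    n = len(perm)
    if n < 2:
        return 1.0
    # Count pairs (i, j) with i < j whose relative order is inverted.
    discordant = sum(1 for i, j in combinations(range(n), 2) if perm[i] > perm[j])
    # Normalize by the total number of pairs, n*(n-1)/2.
    return 1.0 - discordant / (n * (n - 1) / 2)

print(kendall_tau_similarity([0, 1, 2, 3]))  # monotone order -> 1.0
print(kendall_tau_similarity([3, 2, 1, 0]))  # full reversal -> 0.0
print(kendall_tau_similarity([0, 2, 1, 3]))  # one swapped pair -> ~0.833
```

Because the score depends only on the permutation, not on the words themselves, it is unaffected by lexical choice, which is the property the abstract contrasts with BLEU.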

© 2013 - 2024 Lin|gu|is|tik