1 | SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics
2 | Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension
3 | SHAPE: Shifted Absolute Position Embedding for Transformers
4 | Incorporating Residual and Normalization Layers into Analysis of Masked Language Models
5 | Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution
6 | Exploring Methods for Generating Feedback Comments for Writing Learning
7 | Transformer-based Lexically Constrained Headline Generation
9 | Topicalization in Language Models: A Case Study on Japanese
12 | An Empirical Study of Contextual Data Augmentation for Japanese Zero Anaphora Resolution
13 | PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents

Abstract: Neural Machine Translation (NMT) has shown drastic improvement in its quality when translating clean input, such as text from the news domain. However, existing studies suggest that NMT still struggles with considerably noisy input, such as User-Generated Contents (UGC) on the Internet. To make better use of NMT for cross-cultural communication, one of the most promising directions is to develop a model that correctly handles such expressions. Although the importance of this problem has been recognized, it is still unclear what creates the large gap in performance between the translation of clean input and that of UGC. To answer this question, we present a new dataset, PheMT, for evaluating the robustness of MT systems against specific linguistic phenomena in Japanese-English translation. Our experiments with the created dataset revealed that not only our in-house models but even widely used off-the-shelf systems are greatly disturbed by the presence of certain phenomena.

Comment: 15 pages, 4 figures, accepted at COLING 2020

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://dx.doi.org/10.48550/arxiv.2011.02121 | https://arxiv.org/abs/2011.02121
14 | Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension
15 | Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese
16 | Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction
17 | Attention is Not Only a Weight: Analyzing Transformers with Vector Norms
18 | Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness
19 | Modeling Event Salience in Narratives via Barthes' Cardinal Functions
20 | Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?