1 | Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice ...
Source: BASE
2 | Slangvolution: A Causal Analysis of Semantic Change and Frequency Dynamics in Slang ...
3 | Similarity between person roles in a card sorting experiment ...
4 | SPT-Code: Sequence-to-Sequence Pre-Training for Learning Source Code Representations ...
6 | Ensemble of Opinion Dynamics Models to Understand the Role of the Undecided in the Vaccination Debate ...
9 | Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation ...
10 | Pirá: A Bilingual Portuguese-English Dataset for Question-Answering about the Ocean ...
11 | A comparative study of several parameterizations for speaker recognition ...
12 | A Neural Pairwise Ranking Model for Readability Assessment ...
13 | A bilingual approach to specialised adjectives through word embeddings in the karstology domain ...
14 | Speaker verification in mismatch training and testing conditions ...
15 | Universal Conditional Masked Language Pre-training for Neural Machine Translation ...
16 | SMDT: Selective Memory-Augmented Neural Document Translation ...
17 | Learning How to Translate North Korean through South Korean ...
18 | When do Contrastive Word Alignments Improve Many-to-many Neural Machine Translation? ...
Abstract: Word alignment has proven to benefit many-to-many neural machine translation (NMT). However, previous methods relied on high-quality ground-truth bilingual dictionaries for pre-editing, which are unavailable for most language pairs. Meanwhile, a contrastive objective can implicitly exploit automatically learned word alignments, an approach not yet explored in many-to-many NMT. This work proposes a word-level contrastive objective that leverages word alignments for many-to-many NMT. Empirical results show that this yields gains of 0.8 BLEU for several language pairs. Analyses reveal that in many-to-many NMT, the encoder's sentence retrieval performance correlates strongly with translation quality, which explains when the proposed method improves translation. This motivates future work on improving the encoder's sentence retrieval performance in many-to-many NMT. ... : NAACL 2022 findings ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2204.12165 https://arxiv.org/abs/2204.12165
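The abstract above describes a word-level contrastive objective over automatically learned word alignments. As a rough illustration only (not the paper's actual implementation; the function name, temperature value, and NumPy formulation are assumptions), an InfoNCE-style loss that pulls aligned word representations together while pushing apart unaligned ones might look like:

```python
import numpy as np

def word_contrastive_loss(src_states, tgt_states, align_pairs, temperature=0.1):
    """Illustrative word-level contrastive (InfoNCE-style) loss.

    src_states: (S, d) source-side encoder states
    tgt_states: (T, d) target-side encoder states
    align_pairs: list of (i, j) word-alignment index pairs
    Each aligned pair is a positive; all other target words act as negatives.
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    src = normalize(src_states[[i for i, _ in align_pairs]])  # (P, d)
    tgt = normalize(tgt_states)                               # (T, d)
    logits = src @ tgt.T / temperature                        # (P, T) cosine similarities
    # Cross-entropy with the aligned target word as the positive class.
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    tgt_idx = [j for _, j in align_pairs]
    return -np.mean(log_probs[np.arange(len(tgt_idx)), tgt_idx])
```

Minimizing this loss drives each source word's representation toward its aligned target word and away from the other target words, so the alignment signal is used implicitly, without a bilingual dictionary.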
19 | Conditional Bilingual Mutual Information Based Adaptive Training for Neural Machine Translation ...