
Search in the Catalogues and Directories

Hits 1 – 17 of 17

1
Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation ...
BASE
2
TexSmart: A System for Enhanced Natural Language Understanding ...
BASE
3
Tail-to-Tail Non-Autoregressive Sequence Prediction for Chinese Grammatical Error Correction ...
BASE
4
Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation ...
BASE
5
On the Copying Behaviors of Pre-Training for Neural Machine Translation ...
BASE
6
deepQuest-py: large and distilled models for quality estimation
Alva-Manchego, Fernando; Obamuyide, Abiola; Gajbhiye, Amit. Association for Computational Linguistics, 2021
BASE
7
deepQuest-py: large and distilled models for quality estimation
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 382–389 (2021)
BASE
8
On the Inference Calibration of Neural Machine Translation ...
Wang, Shuo; Tu, Zhaopeng; Shi, Shuming. arXiv, 2020
BASE
9
Assessing the Bilingual Knowledge Learned by Neural Machine Translation Models ...
He, Shilin; Wang, Xing; Shi, Shuming. arXiv, 2020
BASE
10
Neuron Interaction Based Representation Composition for Neural Machine Translation ...
Li, Jian; Wang, Xing; Yang, Baosong. arXiv, 2019
BASE
11
Multi-Granularity Self-Attention for Neural Machine Translation ...
Abstract: Current state-of-the-art neural machine translation (NMT) uses a deep multi-head self-attention network with no explicit phrase information. However, prior work on statistical machine translation has shown that extending the basic translation unit from words to phrases has produced substantial improvements, suggesting the possibility of improving NMT performance by explicitly modeling phrases. In this work, we present multi-granularity self-attention (MG-SA): a neural network that combines multi-head self-attention and phrase modeling. Specifically, we train several attention heads to attend to phrases in either n-gram or syntactic formalism. Moreover, we exploit interactions among phrases to enhance the strength of structure modeling - a commonly-cited weakness of self-attention. Experimental results on WMT14 English-to-German and NIST Chinese-to-English translation tasks show the proposed approach consistently improves performance. Targeted linguistic analysis reveals that MG-SA indeed captures useful ... : EMNLP 2019 ... (A hedged illustrative sketch of this multi-granularity attention idea follows the result list below.)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1909.02222
https://dx.doi.org/10.48550/arxiv.1909.02222
BASE
12
Towards Understanding Neural Machine Translation with Word Importance ...
He, Shilin; Tu, Zhaopeng; Wang, Xing. arXiv, 2019
BASE
13
Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons ...
Hao, Jie; Wang, Xing; Shi, Shuming. arXiv, 2019
BASE
14
Translating pro-drop languages with reconstruction models
In: Wang, Longyue; Tu, Zhaopeng; Shi, Shuming; Zhang, Tong; Graham, Yvette; Liu, Qun (2018): Translating pro-drop languages with reconstruction models. In: Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 2–7 Feb 2018, New Orleans, LA, USA. ISBN 978-1-57735-800-8
BASE
15
Translating pro-drop languages with reconstruction models
In: Wang, Longyue; Tu, Zhaopeng; Shi, Shuming; Zhang, Tong; Graham, Yvette; Liu, Qun (2018): Translating pro-drop languages with reconstruction models. In: 32nd AAAI Conference on Artificial Intelligence (AAAI 2018), 2–7 Feb 2018, New Orleans, LA, USA. ISBN 978-1-57735-800-8
BASE
16
Translating Pro-Drop Languages with Reconstruction Models ...
BASE
17
Exploiting Deep Representations for Neural Machine Translation ...
Dou, Zi-Yi; Tu, Zhaopeng; Wang, Xing. arXiv, 2018
BASE
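Note on hit 11: it is the only record above shown with its full abstract, so a minimal sketch of the multi-granularity self-attention idea it describes is given here. Some attention heads attend over token-level keys and values while "phrase heads" attend over representations pooled from n-grams. The function names, the mean-pooling phrase composition, the fixed head split, and the dimensions below are illustrative assumptions only, not the formulation of the paper; see arXiv:1909.02222 for the actual MG-SA model.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over whatever keys/values are supplied.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def pool_ngrams(x, n):
    # Compose non-overlapping n-grams into phrase vectors by mean pooling
    # (an assumption; the paper also considers syntactic phrases and
    # richer composition and interaction functions).
    return np.stack([x[i:i + n].mean(axis=0) for i in range(0, len(x), n)])

def multi_granularity_self_attention(x, n_heads=4, phrase_heads=2, ngram=3, seed=0):
    # x: (seq_len, d_model). The first `phrase_heads` heads attend to n-gram
    # phrase keys/values; the remaining heads attend to token keys/values.
    rng = np.random.default_rng(seed)
    d_model = x.shape[-1]
    d_head = d_model // n_heads
    outputs = []
    for h in range(n_heads):
        wq, wk, wv = (rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
                      for _ in range(3))
        kv = pool_ngrams(x, ngram) if h < phrase_heads else x
        outputs.append(attention(x @ wq, kv @ wk, kv @ wv))
    return np.concatenate(outputs, axis=-1)  # (seq_len, d_model)

# Tiny smoke test: 10 tokens, model dimension 16 -> output shape (10, 16).
tokens = np.random.default_rng(1).standard_normal((10, 16))
print(multi_granularity_self_attention(tokens).shape)

The point of the split is that phrase heads see a shorter, coarser key/value sequence, so they model relations between multi-word units rather than individual tokens; in the published model this is learned end-to-end rather than fixed as it is in this sketch.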

Results by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 17
© 2013 – 2024 Lin|gu|is|tik