
Search in the Catalogues and Directories

Page: 1 2 3 4 5 6
Hits 1 – 20 of 110

1. On Homophony and Rényi Entropy
2. Backtranslation in Neural Morphological Inflection
3. Rule-based Morphological Inflection Improves Neural Terminology Translation
4. Translating Headers of Tabular Data: A Pilot Study of Schema Translation
5. An Information-Theoretic Characterization of Morphological Fusion
6. Analyzing the Surprising Variability in Word Embedding Stability Across Languages
7. Neural Machine Translation with Heterogeneous Topic Knowledge Embeddings
8. STaCK: Sentence Ordering with Temporal Commonsense Knowledge
9. Wikily Supervised Neural Translation Tailored to Cross-Lingual Tasks
10. Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation
11. Rethinking Data Augmentation for Low-Resource Neural Machine Translation: A Multi-Task Learning Approach
12. Sequence Length is a Domain: Length-based Overfitting in Transformer Models
13. Speechformer: Reducing Information Loss in Direct Speech Translation
14. Data and Parameter Scaling Laws for Neural Machine Translation
15. A Simple Geometric Method for Cross-Lingual Linguistic Transformations with Pre-trained Autoencoders
16. Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy
    Abstract: Simultaneous machine translation (SiMT) generates a translation before reading the entire source sentence and hence must trade off translation quality against latency. To meet the differing quality and latency requirements of practical applications, previous methods usually train multiple SiMT models for different latency levels, incurring large computational costs. In this paper, we propose a universal SiMT model with a Mixture-of-Experts Wait-k Policy that achieves the best translation quality under arbitrary latency with only one trained model. Specifically, our method employs multi-head attention to realize the mixture of experts: each head is treated as a wait-k expert with its own number of waiting words, and given a test latency and the source inputs, the weights of the experts are adjusted accordingly to produce the best translation. Experiments on three datasets show that our method ...
    Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Machine translation; Natural Language Processing
    Anthology: https://aclanthology.org/2021.emnlp-main.581/
    URL: https://underline.io/lecture/37449-universal-simultaneous-machine-translation-with-mixture-of-experts-wait-k-policy
    DOI: https://dx.doi.org/10.48448/sg3t-st28
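The core idea in item 16 — weighting per-head wait-k experts by how well their latency matches the requested one — can be illustrated with a toy sketch. This is not the authors' implementation; the function name, the head wait-k values, and the distance-based softmax scoring are all illustrative assumptions, shown only to make the "one model, arbitrary latency" mechanism concrete.

```python
import numpy as np

def expert_weights(head_ks, target_k, temperature=1.0):
    """Toy sketch (hypothetical, not the paper's code): weight each
    wait-k expert head by how close its k is to the requested test
    latency, via a softmax over negative distances."""
    head_ks = np.asarray(head_ks, dtype=float)
    scores = -np.abs(head_ks - target_k) / temperature
    exp = np.exp(scores - scores.max())  # stable softmax
    return exp / exp.sum()

# Hypothetical example: four heads acting as wait-k experts with
# k = 1, 3, 5, 7; a test-time latency of k = 3 is requested.
w = expert_weights([1, 3, 5, 7], target_k=3)
# The k=3 expert receives the largest weight; the others still
# contribute, so a single trained model covers any latency.
```

At lower requested latencies the weight mass shifts toward small-k experts, and at higher latencies toward large-k ones, which is the behavior the abstract describes.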
17. Learning to Rewrite for Non-Autoregressive Neural Machine Translation
18. Towards Making the Most of Dialogue Characteristics for Neural Chat Translation
19. Improving the Quality Trade-Off for Neural Machine Translation Multi-Domain Adaptation
20. Sometimes We Want Ungrammatical Translations


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 110
© 2013 – 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings