
Search in the Catalogues and Directories

Hits 1 – 20 of 110

1. On Homophony and Rényi Entropy
2. Backtranslation in Neural Morphological Inflection
3. Rule-based Morphological Inflection Improves Neural Terminology Translation
4. Translating Headers of Tabular Data: A Pilot Study of Schema Translation
5. An Information-Theoretic Characterization of Morphological Fusion
6. Analyzing the Surprising Variability in Word Embedding Stability Across Languages
7. Neural Machine Translation with Heterogeneous Topic Knowledge Embeddings
8. STaCK: Sentence Ordering with Temporal Commonsense Knowledge
9. Wikily Supervised Neural Translation Tailored to Cross-Lingual Tasks
10. Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation
Abstract: We study the power of cross-attention in the Transformer architecture within the context of transfer learning for machine translation, and extend the findings of studies into cross-attention when training from scratch. We conduct a series of experiments through fine-tuning a translation model on data where either the source or target language has changed. These experiments reveal that fine-tuning only the cross-attention parameters is nearly as effective as fine-tuning all parameters (i.e., the entire translation model). We provide insights into why this is the case and observe that limiting fine-tuning in this manner yields cross-lingually aligned embeddings. The implications of this finding for researchers and practitioners include a mitigation of catastrophic forgetting, the potential for zero-shot translation, and the ability to extend machine translation models to several new language pairs with reduced parameter storage ...
Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Machine translation; Natural Language Processing
URL: https://aclanthology.org/2021.emnlp-main.132/
https://dx.doi.org/10.48448/pgv2-cn55
https://underline.io/lecture/37970-cross-attention-is-all-you-need-adapting-pretrained-transformers-for-machine-translation
(A code sketch of this cross-attention-only fine-tuning setup follows the results list.)
11. Rethinking Data Augmentation for Low-Resource Neural Machine Translation: A Multi-Task Learning Approach
12. Sequence Length is a Domain: Length-based Overfitting in Transformer Models
13. Speechformer: Reducing Information Loss in Direct Speech Translation
14. Data and Parameter Scaling Laws for Neural Machine Translation
15. A Simple Geometric Method for Cross-Lingual Linguistic Transformations with Pre-trained Autoencoders
16. Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy
17. Learning to Rewrite for Non-Autoregressive Neural Machine Translation
18. Towards Making the Most of Dialogue Characteristics for Neural Chat Translation
19. Improving the Quality Trade-Off for Neural Machine Translation Multi-Domain Adaptation
20. Sometimes We Want Ungrammatical Translations
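
Hit 10 reports that fine-tuning only the cross-attention parameters of a pretrained encoder-decoder Transformer is nearly as effective as fine-tuning the entire translation model. Below is a minimal sketch of that setup, not the authors' released code: it assumes a HuggingFace MarianMT checkpoint (Helsinki-NLP/opus-mt-en-de is used as an example) whose decoder layers expose the cross-attention module under the name encoder_attn. The paper's actual experiments, which swap the source or target language and handle new embeddings, go beyond this illustration.

```python
# Sketch: freeze a pretrained translation model except its cross-attention
# parameters, then take one ordinary training step. Assumes the HuggingFace
# transformers library and a MarianMT checkpoint whose decoder layers name
# their cross-attention module "encoder_attn".
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # example checkpoint, not from the paper
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
model.train()

# Freeze everything, then unfreeze cross-attention weights. The substring
# match also keeps the cross-attention layer norms ("encoder_attn_layer_norm")
# trainable; filter them out if you want strictly the attention projections.
for name, param in model.named_parameters():
    param.requires_grad = "encoder_attn" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.1f}%)")

# One standard fine-tuning step over the unfrozen parameters only.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=5e-5
)
batch = tokenizer(["A sample source sentence."], return_tensors="pt")
labels = tokenizer(text_target=["Ein Beispielsatz."], return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Freezing via requires_grad also keeps the optimizer state small, since only the unfrozen tensors receive gradients and momentum buffers.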
