2. Tackling data scarcity in speech translation using zero-shot multilingual machine translation techniques
4. Maastricht University’s Multilingual Speech Translation System for IWSLT 2021
6. Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder
7. The Karlsruhe Institute of Technology Systems for the News Translation Task in WMT 2017

8. Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation

In: Transactions of the Association for Computational Linguistics, 7, 313–325; ISSN: 2307-387X (2022)

Abstract:
Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of end-to-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multi-task–trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.

Keyword:
DATA processing & computer science; ddc:004; info:eu-repo/classification/ddc/004

URL:
https://publikationen.bibliothek.kit.edu/1000145064
https://publikationen.bibliothek.kit.edu/1000145064/148663710
https://doi.org/10.5445/IR/1000145064
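
The abstract sketches a two-stage architecture: a first attention aligns source speech to source text, a second aligns source text to target text, and the attention-passing idea has the second stage consume the first stage's soft context vectors rather than a committed transcript. The following is a minimal PyTorch sketch of that structure under teacher forcing; the class name TwoStageST, the dot-product attention, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-attention, attention-passing speech translation model.
# Illustrative only: names, sizes, and decoding scheme are assumptions.
import torch
import torch.nn as nn

def dot_attention(query, memory):
    # query: (B, Tq, d), memory: (B, Tm, d) -> context vectors: (B, Tq, d)
    scores = torch.bmm(query, memory.transpose(1, 2)) / memory.size(-1) ** 0.5
    return torch.bmm(scores.softmax(dim=-1), memory)

class TwoStageST(nn.Module):
    def __init__(self, n_mels=80, d=256, src_vocab=8000, tgt_vocab=8000):
        super().__init__()
        self.speech_enc = nn.LSTM(n_mels, d // 2, batch_first=True,
                                  bidirectional=True)      # outputs (B, T, d)
        self.src_emb = nn.Embedding(src_vocab, d)
        self.asr_dec = nn.LSTM(d, d, batch_first=True)
        self.asr_out = nn.Linear(2 * d, src_vocab)          # state + speech context
        self.tgt_emb = nn.Embedding(tgt_vocab, d)
        self.mt_dec = nn.LSTM(d, d, batch_first=True)
        self.mt_out = nn.Linear(2 * d, tgt_vocab)           # state + passed context

    def forward(self, speech, src_in, tgt_in):
        mem, _ = self.speech_enc(speech)                    # (B, T, d)
        # Stage 1: attention over speech encoder states (speech -> source text).
        h1, _ = self.asr_dec(self.src_emb(src_in))          # teacher-forced queries
        ctx1 = dot_attention(h1, mem)                       # attention-1 contexts
        asr_logits = self.asr_out(torch.cat([h1, ctx1], -1))  # auxiliary ASR loss
        # Stage 2: attention over the *passed* context vectors, not a discrete
        # transcript, so recognition errors propagate softly (source -> target).
        h2, _ = self.mt_dec(self.tgt_emb(tgt_in))
        ctx2 = dot_attention(h2, ctx1)                      # attention 2
        mt_logits = self.mt_out(torch.cat([h2, ctx2], -1))
        return asr_logits, mt_logits

# Shape check with dummy inputs:
model = TwoStageST()
speech = torch.randn(2, 300, 80)                 # (batch, frames, mel bins)
src = torch.randint(0, 8000, (2, 20))            # teacher-forced source tokens
tgt = torch.randint(0, 8000, (2, 25))            # teacher-forced target tokens
asr_logits, mt_logits = model(speech, src, tgt)
```

A joint objective, e.g. the sum of cross-entropy losses over asr_logits and mt_logits, would let the recognition and translation sub-networks also be trained on ASR-only or MT-only corpora, which is the multi-task advantage over direct models that the abstract argues for.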

11. Adapting End-to-End Speech Recognition for Readable Subtitles
13. KIT Lecture Translator: Multilingual Speech Translation with One-Shot Learning
14. Robust and Scalable Differentiable Neural Computer for Question Answering
15. Lecture Translator: Speech translation framework for simultaneous lecture translation
16. The Universität Karlsruhe Translation System for the EACL-WMT 2009
18. Incremental processing of noisy user utterances in the spoken language understanding task
19. Improving Zero-shot Translation with Language-Independent Constraints
20. Lexical Translation Model Using A Deep Neural Network Architecture