Search results (source: BASE):

1. Boker, Udi. Between Deterministic and Nondeterministic Quantitative Automata (Invited Talk). In: LIPIcs - Leibniz International Proceedings in Informatics, 30th EACSL Annual Conference on Computer Science Logic (CSL 2022), 2022.
3. Latin Lemmatization & POS Tagging. Issues, Resources, Tools ...
5. Multilingualism and Translanguaging in Migration Studies: Some Methodological Reflections. In: Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 23(1) (2022). ISSN 1438-5627.
6. A Corpus-Based Sentence Classifier for Entity–Relationship Modelling. In: Electronics 11(6), 889 (2022).
7. Text Data Augmentation for the Korean Language. In: Applied Sciences 12(7), 3425 (2022).
8. Connecting Text Classification with Image Classification: A New Preprocessing Method for Implicit Sentiment Text Classification. In: Sensors 22(5), 1899 (2022).
9. FedQAS: Privacy-Aware Machine Reading Comprehension with Federated Learning. In: Applied Sciences 12(6), 3130 (2022).
10. eHealth Engagement on Facebook during COVID-19: Simplistic Computational Data Analysis. In: International Journal of Environmental Research and Public Health 19(8), 4615 (2022).
11. A Novel Method of Generating Geospatial Intelligence from Social Media Posts of Political Leaders. In: Information 13(3), 120 (2022).
12. Multilingualism and Translanguaging in Migration Studies: Some Methodological Reflections. In: Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 23(1) (2022).
13. Detecting weak and strong Islamophobic hate speech on social media.
15. Maastricht University’s Multilingual Speech Translation System for IWSLT 2021.
17. Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder.
18. The Karlsruhe Institute of Technology Systems for the News Translation Task in WMT 2017.
19. Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation. In: Transactions of the Association for Computational Linguistics 7, 313–325 (2022). ISSN 2307-387X.
Abstract:
Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of end-to-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multi-task–trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.
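The two-stage attention the abstract describes can be illustrated with a minimal NumPy sketch. This is a toy, not the authors' implementation: all names, dimensions, and the random inputs are invented for illustration. Stage 1 attends from recognition-decoder states over encoded speech; stage 2 attends over the first stage's continuous context vectors ("attention-passing") rather than a discrete transcript.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    # Scaled dot-product attention: one context vector per query row.
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
d = 8
speech_enc = rng.normal(size=(50, d))  # encoded source-speech frames (invented)
asr_states = rng.normal(size=(12, d))  # recognition-decoder states (invented)
mt_states = rng.normal(size=(10, d))   # translation-decoder states (invented)

# Stage 1: source speech -> source text alignment (recognition attention).
asr_context = attend(asr_states, speech_enc, speech_enc)

# Stage 2 (attention-passing): the translation decoder attends over the
# first stage's continuous context vectors instead of a hard transcript,
# so recognition errors are not committed before translation.
mt_context = attend(mt_states, asr_context, asr_context)

print(mt_context.shape)  # one context vector per translation-decoder state
```

Because the second attention consumes soft context vectors rather than argmax transcripts, both stages stay differentiable and can be trained jointly in a multi-task fashion, which is the property the abstract emphasizes.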

Keywords: Data processing & computer science; DDC 004.

URL: https://publikationen.bibliothek.kit.edu/1000145064
https://publikationen.bibliothek.kit.edu/1000145064/148663710
https://doi.org/10.5445/IR/1000145064