
Search in the Catalogues and Directories

Hits 1 – 20 of 34

1. Mandarin-English Code-switching Speech Recognition with Self-supervised Speech Representation Models ... (BASE)
2. Improving Cross-Lingual Reading Comprehension with Self-Training ... (BASE)
3. Investigating the Reordering Capability in CTC-based Non-Autoregressive End-to-End Speech Translation ... (BASE)
4. S2VC: A Framework for Any-to-Any Voice Conversion with Self-Supervised Pretrained Representations ... (BASE)
5. Mitigating Biases in Toxic Language Detection through Invariant Rationalization ... (BASE)
6. Looking for Clues of Language in Multilingual BERT to Improve Cross-lingual Generalization ... (BASE)
7. DARTS-ASR: Differentiable Architecture Search for Multilingual Speech Recognition and Adaptation ... (BASE)
8. What makes multilingual BERT multilingual? ... (BASE)
9. A Study of Cross-Lingual Ability and Language-specific Information in Multilingual BERT ... (BASE)
10. Pretrained Language Model Embryology: The Birth of ALBERT ... (BASE)
11. AGAIN-VC: A One-shot Voice Conversion using Activation Guidance and Adaptive Instance Normalization ... (BASE)
12. VQVC+: One-Shot Voice Conversion by Vector Quantization and U-Net architecture ... Wu, Da-Yi; Chen, Yen-Hao; Lee, Hung-Yi. arXiv, 2020. (BASE)
13. Defending Your Voice: Adversarial Attack on Voice Conversion ... (BASE)
14. FragmentVC: Any-to-Any Voice Conversion by End-to-End Extracting and Fusing Fine-Grained Voice Fragments With Attention ... (BASE)
15. Training a code-switching language model with monolingual data ... (BASE)
16. Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model ... (BASE)
Abstract: Because it is not feasible to collect training data for every language, there is growing interest in cross-lingual transfer learning. In this paper, we systematically explore zero-shot cross-lingual transfer learning on reading comprehension tasks with a language representation model pre-trained on a multi-lingual corpus. The experimental results show that zero-shot learning is feasible with the pre-trained language representation, and that translating the source data into the target language is not necessary and even degrades performance. We further explore what the model learns in the zero-shot setting. (A minimal code sketch of this zero-shot setup follows the hit list below.)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG); Machine Learning (stat.ML)
URL: https://arxiv.org/abs/1909.09587
DOI: https://dx.doi.org/10.48550/arxiv.1909.09587
17. Towards Unsupervised Speech Recognition and Synthesis with Quantized Speech Representation Learning ... (BASE)
18. From Semi-supervised to Almost-unsupervised Speech Recognition with Very-low Resource by Jointly Learning Phonetic Structures from Audio and Text Embeddings ... (BASE)
19. Improved Speech Separation with Time-and-Frequency Cross-domain Joint Embedding and Clustering ... (BASE)
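
The abstract of hit 16 describes fine-tuning a multilingual pre-trained representation model on source-language reading comprehension data and then evaluating it directly on another language, with no translation step. Below is a minimal sketch of that setup, assuming the Hugging Face `transformers` library and `bert-base-multilingual-cased` as the multilingual encoder; the record does not name the paper's exact model or data, so these names are illustrative, not the authors' released code.

```python
# Hedged sketch of zero-shot cross-lingual reading comprehension transfer.
# Assumptions (not from the catalog record): Hugging Face `transformers`,
# `bert-base-multilingual-cased` as the multilingual encoder, and an
# English SQuAD-style fine-tuning step that is elided below.
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

MODEL_NAME = "bert-base-multilingual-cased"  # pre-trained on ~100 languages

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)

# ... fine-tune `model` here on English (source-language) QA pairs with a
# standard span-extraction training loop; the target language is never seen.

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Zero-shot evaluation: question and passage are in the target language,
# and no translation into or out of English is performed.
pred = qa(
    question="主角出生在哪裡？",  # "Where was the protagonist born?"
    context="小明出生於台北，後來搬到高雄讀大學。",  # target-language passage
)
print(pred["answer"], pred["score"])
```

The point the abstract makes is that the shared multilingual representation alone carries the transfer: translating the source training data into the target language is reported to be unnecessary and even harmful.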

Hits by source type: Catalogues 1; Bibliographies 1; Linked Open Data catalogues 0; Online resources 0; Open access documents 33