1 | Mandarin-English Code-switching Speech Recognition with Self-supervised Speech Representation Models ...
2 | Improving Cross-Lingual Reading Comprehension with Self-Training ...
3 | Investigating the Reordering Capability in CTC-based Non-Autoregressive End-to-End Speech Translation ...
4 | S2VC: A Framework for Any-to-Any Voice Conversion with Self-Supervised Pretrained Representations ...
5 | Mitigating Biases in Toxic Language Detection through Invariant Rationalization ...
7 | Looking for Clues of Language in Multilingual BERT to Improve Cross-lingual Generalization ...
8 | DARTS-ASR: Differentiable Architecture Search for Multilingual Speech Recognition and Adaptation ...
10 | A Study of Cross-Lingual Ability and Language-specific Information in Multilingual BERT ...
11 | Pretrained Language Model Embryology: The Birth of ALBERT ...
12 | AGAIN-VC: A One-shot Voice Conversion using Activation Guidance and Adaptive Instance Normalization ...
13 | VQVC+: One-Shot Voice Conversion by Vector Quantization and U-Net architecture ...
14 | Defending Your Voice: Adversarial Attack on Voice Conversion ...
15 | FragmentVC: Any-to-Any Voice Conversion by End-to-End Extracting and Fusing Fine-Grained Voice Fragments With Attention ...
16 | Training a code-switching language model with monolingual data ...
17 | Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model ...
18 | Towards Unsupervised Speech Recognition and Synthesis with Quantized Speech Representation Learning ...
19 | From Semi-supervised to Almost-unsupervised Speech Recognition with Very-low Resource by Jointly Learning Phonetic Structures from Audio and Text Embeddings ...
Abstract:
Producing a large amount of annotated speech data for training ASR systems remains difficult for more than 95% of languages all over the world which are low-resourced. However, we note human babies start to learn the language by the sounds (or phonetic structures) of a small number of exemplar words, and "generalize" such knowledge to other words without hearing a large amount of data. We initiate some preliminary work in this direction. Audio Word2Vec is used to learn the phonetic structures from spoken words (signal segments), while another autoencoder is used to learn the phonetic structures from text words. The relationships among the above two can be learned jointly, or separately after the above two are well trained. This relationship can be used in speech recognition with very low resource. In the initial experiments on the TIMIT dataset, only 2.1 hours of speech data (in which 2500 spoken words were annotated and the rest unlabeled) gave a word error rate of 44.6%, and this number can be reduced to ...
Keyword:
Audio and Speech Processing eess.AS; Computation and Language cs.CL; FOS Computer and information sciences; FOS Electrical engineering, electronic engineering, information engineering; Sound cs.SD
URL: https://arxiv.org/abs/1904.05078 https://dx.doi.org/10.48550/arxiv.1904.05078
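The abstract above describes learning a relationship between two embedding spaces — one trained on spoken words (Audio Word2Vec) and one on text words — which can then be used for very-low-resource recognition. A minimal numpy sketch of the "learn the relationship separately, after both are trained" variant: the embeddings here are synthetic toy data and the linear least-squares map is an illustrative stand-in for the paper's actual models, not its method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two embedding spaces from the abstract:
# rows are words; in the paper these would come from Audio Word2Vec
# (spoken segments) and a text autoencoder, respectively.
n_words, d_audio, d_text = 50, 16, 8
audio_emb = rng.normal(size=(n_words, d_audio))
true_map = rng.normal(size=(d_audio, d_text))
text_emb = audio_emb @ true_map + 0.01 * rng.normal(size=(n_words, d_text))

# Learn the cross-modal relationship after both spaces exist, as a
# least-squares linear map from audio embeddings to text embeddings.
M, *_ = np.linalg.lstsq(audio_emb, text_emb, rcond=None)

# "Recognize" a spoken word: project its audio embedding into the text
# space and return the index of the nearest text-word embedding.
def recognize(audio_vec):
    proj = audio_vec @ M
    dists = np.linalg.norm(text_emb - proj, axis=1)
    return int(np.argmin(dists))

accuracy = np.mean([recognize(audio_emb[i]) == i for i in range(n_words)])
```

Because the toy text embeddings are a near-linear function of the audio embeddings, nearest-neighbor lookup in the mapped space recovers almost every word; the point is only to illustrate how a separately learned cross-space mapping can drive recognition.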
20 | Improved Speech Separation with Time-and-Frequency Cross-domain Joint Embedding and Clustering ...