1 |
Training a code-switching language model with monolingual data ...
Abstract:
A lack of code-switching (CS) data complicates the training of CS language models. We propose an approach to train such CS language models on monolingual data only. By constraining and normalizing the output projection matrix in RNN-based language models, we bring the embeddings of different languages closer to each other. Numerical and visualization results show that the proposed approaches substantially improve the performance of CS language models trained only on monolingual data. The proposed approaches are comparable to, or even better than, training CS language models with artificially generated CS data. We additionally use unsupervised bilingual word translation to analyze whether semantically equivalent words in different languages are mapped together. Accepted as an oral presentation at ICASSP 2020.
Keyword:
Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1911.06003 https://dx.doi.org/10.48550/arxiv.1911.06003
BASE
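The key idea of the first result, constraining and normalizing the output projection matrix so that word embeddings of both languages lie in a shared space, can be sketched as follows. This is a minimal NumPy illustration of output-embedding normalization, not the paper's actual implementation; all names and dimensions here are hypothetical.

```python
import numpy as np

def normalized_logits(hidden, W_out):
    # L2-normalize each row of the output projection matrix so every
    # word embedding, regardless of language, lies on the unit sphere.
    W_norm = W_out / np.linalg.norm(W_out, axis=1, keepdims=True)
    # Logits then reduce to dot products (cosine-like similarities)
    # between the RNN hidden state and each normalized word embedding.
    return hidden @ W_norm.T

rng = np.random.default_rng(0)
hidden = rng.normal(size=4)        # toy RNN hidden state (dim 4)
W_out = rng.normal(size=(10, 4))   # toy output projection: 10 words x 4 dims

logits = normalized_logits(hidden, W_out)
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over vocabulary
```

Because every output embedding has unit norm, no language's words can dominate the logits purely through embedding magnitude, which is one way to encourage a shared cross-lingual space.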
2 |
Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model ...
3 |
Towards Unsupervised Speech Recognition and Synthesis with Quantized Speech Representation Learning ...
4 |
From Semi-supervised to Almost-unsupervised Speech Recognition with Very-low Resource by Jointly Learning Phonetic Structures from Audio and Text Embeddings ...
5 |
Improved Speech Separation with Time-and-Frequency Cross-domain Joint Embedding and Clustering ...