1 | Deep Neural Convolutive Matrix Factorization for Articulatory Representation Decomposition ...
2 | Cross-Lingual Text-to-Speech Using Multi-Task Learning and Speaker Classifier Joint Training ...
4 | Improving the fusion of acoustic and text representations in RNN-T ...
6 | Automatic Depression Detection: An Emotional Audio-Textual Corpus and a GRU/BiLSTM-based Model ...
8 | Separate What You Describe: Language-Queried Audio Source Separation ...
9 | Chain-based Discriminative Autoencoders for Speech Recognition ...
10 | Unsupervised word-level prosody tagging for controllable speech synthesis ...
11 | gTLO: A Generalized and Non-linear Multi-Objective Deep Reinforcement Learning Approach ...
12 | Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales ...
13 | Improving End-To-End Modeling for Mispronunciation Detection with Effective Augmentation Mechanisms ...
Abstract:
Recently, end-to-end (E2E) models, which take spectral vector sequences of L2 (second-language) learners' utterances as input and produce the corresponding phone-level sequences as output, have attracted much research attention in the development of mispronunciation detection (MD) systems. However, owing to the lack of sufficient labeled speech data from L2 speakers for model estimation, E2E MD models are more prone to overfitting than conventional ones built on DNN-HMM acoustic models. To alleviate this critical issue, we propose two modeling strategies to enhance the discrimination capability of E2E MD models; they implicitly leverage, respectively, the phonetic and phonological traits encoded in a pretrained acoustic model and those contained in the reference transcripts of the training data. The first is input augmentation, which aims to distill knowledge about phonetic discrimination from a DNN-HMM acoustic model. The second is label augmentation, which manages to ... : 7 pages, 2 figures, 4 tables, accepted to the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2021) ...
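The "input augmentation" idea in the abstract above — distilling phonetic discrimination from a pretrained teacher acoustic model into a student E2E model — can be illustrated with a generic knowledge-distillation objective. The sketch below is an illustrative assumption based on standard distillation practice, not the paper's actual implementation; all function names, shapes, and the temperature value are made up for the example.

```python
# Generic knowledge-distillation sketch (NOT the paper's implementation):
# a student E2E model is trained to match the temperature-softened phone
# posteriors of a pretrained teacher (e.g. DNN-HMM) acoustic model.
import math

def softmax(logits, temperature=1.0):
    """Softmax over one frame's phone logits, softened by a temperature."""
    z = [x / temperature for x in logits]
    m = max(z)                            # subtract max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Frame-averaged KL(teacher || student) over softened phone posteriors,
    the usual knowledge-distillation objective."""
    total = 0.0
    for s_frame, t_frame in zip(student_logits, teacher_logits):
        p_t = softmax(t_frame, temperature)
        p_s = softmax(s_frame, temperature)
        total += sum(pt * (math.log(pt) - math.log(ps))
                     for pt, ps in zip(p_t, p_s))
    return total / len(student_logits)

# Tiny usage example: 2 frames, 3 hypothetical phone classes.
teacher = [[2.0, 0.5, -1.0], [0.0, 1.5, 0.3]]
student = [[1.0, 1.0, 0.0], [0.2, 0.2, 0.2]]
print(distillation_loss(student, teacher))  # positive when the models disagree
print(distillation_loss(teacher, teacher))  # zero when they agree exactly
```

Minimizing this loss pushes the student's frame-level phone posteriors toward the teacher's, which is one common way to transfer a pretrained acoustic model's phonetic discrimination into an E2E model.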
Keywords:
Artificial Intelligence (cs.AI); Audio and Speech Processing (eess.AS); Sound (cs.SD); FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering
URL: https://arxiv.org/abs/2110.08731
DOI: https://dx.doi.org/10.48550/arxiv.2110.08731
14 | An Improved StarGAN for Emotional Voice Conversion: Enhancing Voice Quality and Data Augmentation ...
16 | Speech2Slot: An End-to-End Knowledge-based Slot Filling from Speech ...
17 | NIST SRE CTS Superset: A large-scale dataset for telephony speaker recognition ...
18 | Interpreting intermediate convolutional layers of CNNs trained on raw speech ...
19 | A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images ...
20 | A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images ...