1. Emotion Intensity and its Control for Emotional Voice Conversion ...
2. Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training ...
3. Identity Conversion for Emotional Speakers: A Study for Disentanglement of Emotion Style and Speaker Identity ...
4. CRSLab: An Open-Source Toolkit for Building Conversational Recommender System ...
5. Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models ...
6. VAW-GAN for Disentanglement and Recomposition of Emotional Elements in Speech ...
7. Seen and Unseen Emotional Style Transfer for Voice Conversion with a New Emotional Speech Dataset ...
8. Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice Conversion ...
9. Transforming Spectrum and Prosody for Emotional Voice Conversion with Non-Parallel Training Data ...
Abstract:
Emotional voice conversion aims to convert the spectrum and prosody so as to change the emotional pattern of speech while preserving the speaker identity and linguistic content. Many studies require parallel speech data between different emotional patterns, which is not practical in real life. Moreover, they often model the conversion of the fundamental frequency (F0) with a simple linear transform. As F0 is a key aspect of intonation that is hierarchical in nature, we believe it is more appropriate to model F0 at different temporal scales using wavelet transforms. We propose a CycleGAN network that finds an optimal pseudo pair from non-parallel training data by learning forward and inverse mappings simultaneously with adversarial and cycle-consistency losses. We also study the use of the continuous wavelet transform (CWT) to decompose F0 into ten temporal scales, which describe speech prosody at different time resolutions, for effective F0 conversion. Experimental results show that our proposed framework outperforms ...
Note: accepted by Speaker Odyssey 2020 in Tokyo, Japan ...
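As a rough illustration of the cycle-consistency idea described in the abstract (a sketch, not the authors' code): with a forward mapping G_xy and an inverse mapping G_yx trained simultaneously, the cycle loss penalizes the L1 reconstruction error after a round trip through both mappings. The function names and the weight `lam` are assumptions for illustration.

```python
import numpy as np

def cycle_consistency_loss(x, y, G_xy, G_yx, lam=10.0):
    """L1 cycle loss: a round trip x -> G_xy -> G_yx should reconstruct x,
    and y -> G_yx -> G_xy should reconstruct y. `lam` weights this term
    against the adversarial losses (the value here is an assumption)."""
    x_rec = G_yx(G_xy(x))  # x mapped to the target domain and back
    y_rec = G_xy(G_yx(y))  # y mapped to the source domain and back
    return lam * (np.abs(x_rec - x).mean() + np.abs(y_rec - y).mean())
```

With mappings that invert each other exactly the loss is zero; during training it is added to the adversarial losses so that the learned forward and inverse mappings stay mutually consistent on non-parallel data.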
Keywords: Audio and Speech Processing (eess.AS); Computation and Language (cs.CL); Sound (cs.SD); FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering
URL: https://arxiv.org/abs/2002.00198
DOI: https://dx.doi.org/10.48550/arxiv.2002.00198
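A minimal sketch of the CWT decomposition the abstract describes: a (log-)F0 contour analyzed at ten dyadic temporal scales with a Mexican-hat (Ricker) wavelet, a common choice in prosody modeling. This is not the authors' implementation; the wavelet choice, `base_scale`, and the normalization step are assumptions.

```python
import numpy as np

def ricker(length, sigma):
    """Mexican-hat (Ricker) wavelet of width sigma, sampled at `length` points."""
    t = np.arange(length) - (length - 1) / 2.0
    a = 2.0 / (np.sqrt(3.0 * sigma) * np.pi ** 0.25)
    return a * (1.0 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2.0 * sigma ** 2))

def cwt_f0(log_f0, num_scales=10, base_scale=2.0):
    """Decompose a log-F0 contour into `num_scales` dyadic temporal scales.
    Returns an array of shape (num_scales, len(log_f0)): fine scales capture
    syllable-level F0 movement, coarse scales capture phrase-level trends."""
    z = (log_f0 - log_f0.mean()) / (log_f0.std() + 1e-8)  # zero mean, unit variance
    coeffs = []
    for k in range(num_scales):
        sigma = base_scale * 2.0 ** k
        length = min(len(z), 10 * int(sigma) + 1)
        if length % 2 == 0:  # keep the kernel symmetric around its center
            length -= 1
        coeffs.append(np.convolve(z, ricker(length, sigma), mode="same"))
    return np.stack(coeffs)
```

Each row of the result is one temporal scale; converting the scales separately and resynthesizing F0 from them is what allows the framework to treat intonation hierarchically rather than with a single linear transform.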
10. VAW-GAN for Singing Voice Conversion with Non-parallel Training Data ...