
Search in the Catalogues and Directories

Hits 1 – 10 of 10

1
Emotion Intensity and its Control for Emotional Voice Conversion ...
Zhou, Kun; Sisman, Berrak; Rana, Rajib. - : arXiv, 2022
BASE
2
Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training ...
Zhou, Kun; Sisman, Berrak; Li, Haizhou. - : arXiv, 2021
BASE
3
Identity Conversion for Emotional Speakers: A Study for Disentanglement of Emotion Style and Speaker Identity ...
BASE
4
CRSLab: An Open-Source Toolkit for Building Conversational Recommender System ...
BASE
5
Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models ...
BASE
6
VAW-GAN for Disentanglement and Recomposition of Emotional Elements in Speech ...
Zhou, Kun; Sisman, Berrak; Li, Haizhou. - : arXiv, 2020
BASE
7
Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset ...
Zhou, Kun; Sisman, Berrak; Liu, Rui. - : arXiv, 2020
BASE
8
Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice Conversion ...
Abstract: Emotional voice conversion aims to convert the emotion of speech from one state to another while preserving the linguistic content and speaker identity. Prior studies on emotional voice conversion are mostly carried out under the assumption that emotion is speaker-dependent. We consider that there is a common code between speakers for emotional expression in a spoken language; therefore, a speaker-independent mapping between emotional states is possible. In this paper, we propose a speaker-independent emotional voice conversion framework that can convert anyone's emotion without the need for parallel data. We propose a VAW-GAN-based encoder-decoder structure to learn the spectrum and prosody mapping. We perform prosody conversion by using the continuous wavelet transform (CWT) to model temporal dependencies. We also investigate the use of F0 as an additional input to the decoder to improve emotion conversion performance. Experiments show that the proposed speaker-independent framework achieves ... (Accepted by Interspeech 2020.)
Keywords: Artificial Intelligence (cs.AI); Audio and Speech Processing (eess.AS); Computation and Language (cs.CL); FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Sound (cs.SD)
URL: https://arxiv.org/abs/2005.07025
https://dx.doi.org/10.48550/arxiv.2005.07025
BASE
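(A toy code sketch of the CWT-based F0 decomposition mentioned in this abstract appears after the hit list below.)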
9
Transforming Spectrum and Prosody for Emotional Voice Conversion with Non-Parallel Training Data ...
Zhou, Kun; Sisman, Berrak; Li, Haizhou. - : arXiv, 2020
BASE
10
VAW-GAN for Singing Voice Conversion with Non-parallel Training Data ...
Lu, Junchen; Zhou, Kun; Sisman, Berrak. - : arXiv, 2020
BASE
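To make the prosody-modelling step in the abstract of hit 8 concrete, the following is a minimal, hypothetical sketch of decomposing a log-F0 contour with a continuous wavelet transform. It is not the authors' implementation; the Mexican-hat wavelet, the ten dyadic scales, and the PyWavelets library are assumptions chosen purely for illustration.

```python
# Minimal, hypothetical sketch (not the authors' code): decomposing a log-F0
# contour with a continuous wavelet transform (CWT), the prosody-modelling step
# named in the abstract of hit 8. The Mexican-hat wavelet, the ten dyadic
# scales, and the PyWavelets library are illustrative assumptions.
import numpy as np
import pywt


def cwt_decompose_f0(log_f0: np.ndarray, num_scales: int = 10) -> np.ndarray:
    """Return a (num_scales, T) matrix of CWT coefficients for a log-F0 contour."""
    # Dyadic scales 1, 2, 4, ... frames: small scales track fast, accent-level
    # F0 movement, large scales track slow, phrase-level trends.
    scales = 2.0 ** np.arange(num_scales)
    coeffs, _freqs = pywt.cwt(log_f0, scales, "mexh")
    return coeffs


if __name__ == "__main__":
    # Toy contour: a slowly rising phrase curve plus a faster accent-like ripple.
    t = np.linspace(0.0, 1.0, 200)
    log_f0 = 5.0 + 0.2 * t + 0.05 * np.sin(2.0 * np.pi * 8.0 * t)
    coeffs = cwt_decompose_f0(log_f0)
    print(coeffs.shape)  # (10, 200): one coefficient track per temporal scale
```

In CWT-based prosody conversion pipelines, a conversion model would then operate on these multi-scale coefficients rather than on the raw F0 contour; the exact scales and wavelet used by the paper are not stated in this record.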

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 10
© 2013 - 2024 Lin|gu|is|tik