
Search in the Catalogues and Directories

Hits 1 – 10 of 10

1. Emotion Intensity and its Control for Emotional Voice Conversion ...
Zhou, Kun; Sisman, Berrak; Rana, Rajib. - : arXiv, 2022
BASE

2. Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training ...
Zhou, Kun; Sisman, Berrak; Li, Haizhou. - : arXiv, 2021
BASE

3. Identity Conversion for Emotional Speakers: A Study for Disentanglement of Emotion Style and Speaker Identity ...
BASE

4. CRSLab: An Open-Source Toolkit for Building Conversational Recommender System ...
BASE

5. Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models ...
BASE

6. VAW-GAN for Disentanglement and Recomposition of Emotional Elements in Speech ...
Zhou, Kun; Sisman, Berrak; Li, Haizhou. - : arXiv, 2020
BASE

7. Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset ...
Zhou, Kun; Sisman, Berrak; Liu, Rui. - : arXiv, 2020
BASE

8. Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice Conversion ...
BASE

9. Transforming Spectrum and Prosody for Emotional Voice Conversion with Non-Parallel Training Data ...
Zhou, Kun; Sisman, Berrak; Li, Haizhou. - : arXiv, 2020
BASE
10. VAW-GAN for Singing Voice Conversion with Non-parallel Training Data ...
Abstract: Singing voice conversion aims to convert a singer's voice from source to target without changing the singing content. Parallel training data is typically required for training a singing voice conversion system, which is, however, not practical in real-life applications. Recent encoder-decoder structures, such as the variational autoencoding Wasserstein generative adversarial network (VAW-GAN), provide an effective way to learn a mapping from non-parallel training data. In this paper, we propose a singing voice conversion framework based on VAW-GAN. We train an encoder to disentangle singer identity and singing prosody (F0 contour) from phonetic content. By conditioning on singer identity and F0, the decoder generates output spectral features with an unseen target singer identity and improves the F0 rendering. Experimental results show that the proposed framework achieves better performance than the baseline frameworks. (Accepted to APSIPA ASC 2020)
Keywords: Audio and Speech Processing eess.AS; Computation and Language cs.CL; FOS Computer and information sciences; FOS Electrical engineering, electronic engineering, information engineering; Sound cs.SD
URL: https://arxiv.org/abs/2008.03992
https://dx.doi.org/10.48550/arxiv.2008.03992
BASE
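
The abstract above describes an encoder-decoder in which the decoder is conditioned on singer identity and the F0 contour. Below is a minimal, hypothetical PyTorch sketch of that conditioning pattern only; it is not the paper's VAW-GAN implementation (which additionally uses a variational objective and a Wasserstein GAN discriminator), and all module names, layer sizes, and the per-frame conditioning scheme are illustrative assumptions.

# Sketch (not the authors' code): an encoder maps spectral frames to a latent
# code intended to carry phonetic content, and a decoder reconstructs spectral
# features conditioned on a singer embedding and the per-frame F0 value.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Maps per-frame spectral features to a latent phonetic code."""
    def __init__(self, n_mels=80, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mels, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, mel):              # mel: (batch, frames, n_mels)
        return self.net(mel)             # (batch, frames, latent_dim)

class ConditionalDecoder(nn.Module):
    """Reconstructs spectral frames from the latent code, a singer
    embedding, and the F0 value of each frame."""
    def __init__(self, n_mels=80, latent_dim=64, n_singers=10, singer_dim=16):
        super().__init__()
        self.singer_embed = nn.Embedding(n_singers, singer_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + singer_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, n_mels),
        )

    def forward(self, z, singer_id, f0):
        # z: (batch, frames, latent_dim); singer_id: (batch,); f0: (batch, frames)
        frames = z.size(1)
        s = self.singer_embed(singer_id).unsqueeze(1).expand(-1, frames, -1)
        cond = torch.cat([z, s, f0.unsqueeze(-1)], dim=-1)
        return self.net(cond)            # (batch, frames, n_mels)

# Conversion at inference time: encode the source frames, then decode with the
# *target* singer's identity (and, if desired, an adjusted F0 contour).
encoder, decoder = FrameEncoder(), ConditionalDecoder()
mel_src = torch.randn(1, 200, 80)        # dummy source spectrogram
f0_src = torch.rand(1, 200)              # dummy normalised F0 contour
target_singer = torch.tensor([3])        # hypothetical target singer index
mel_converted = decoder(encoder(mel_src), target_singer, f0_src)
print(mel_converted.shape)               # torch.Size([1, 200, 80])

Decoding the source latent code with the target singer's embedding is the mechanism the abstract relies on to change singer identity without altering the phonetic content.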

Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 10