
Search in the Catalogues and Directories

Hits 1 – 20 of 27

1
Learning and controlling the source-filter representation of speech with a variational autoencoder
In: https://hal.archives-ouvertes.fr/hal-03650569 (2022)
BASE
2
Learning and controlling the source-filter representation of speech with a variational autoencoder ...
BASE
3
Repeat after me: Self-supervised learning of acoustic-to-articulatory mapping by vocal imitation ...
BASE
4
High-resolution speaker counting in reverberant rooms using CRNN with Ambisonics features
In: EUSIPCO 2020 - 28th European Signal Processing Conference (EUSIPCO), Jan 2021, Amsterdam, Netherlands, pp. 71-75, ⟨10.23919/Eusipco47968.2020.9287637⟩ ; https://hal.archives-ouvertes.fr/hal-03537323 (2021)
BASE
5
Alternate Endings: Improving Prosody for Incremental Neural TTS with Predicted Future Text Input
In: Interspeech 2021 - 22nd Annual Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic, pp. 3865-3869, ⟨10.21437/Interspeech.2021-275⟩ ; https://hal.archives-ouvertes.fr/hal-03372802 (2021)
BASE
6
Learning robust speech representation with an articulatory-regularized variational autoencoder
In: Proceedings of Interspeech 2021 - 22nd Annual Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic ; https://hal.archives-ouvertes.fr/hal-03373252 (2021)
BASE
7
Learning robust speech representation with an articulatory-regularized variational autoencoder ...
BASE
8
Towards an articulatory-driven neural vocoder for speech synthesis
In: ISSP 2020 - 12th International Seminar on Speech Production, Dec 2020, Providence (virtual), United States ; https://hal.archives-ouvertes.fr/hal-03184762 (2020)
BASE
9
Evaluating the Potential Gain of Auditory and Audiovisual Speech-Predictive Coding Using Deep Learning
In: Neural Computation (ISSN: 0899-7667, EISSN: 1530-888X), MIT Press, 2020, 32 (3), pp. 596-625, ⟨10.1162/neco_a_01264⟩ ; https://hal.archives-ouvertes.fr/hal-03016083 (2020)
BASE
10
Deeppredspeech: Computational Models Of Predictive Speech Coding Based On Deep Learning ...
BASE
11
DeepPredSpeech: computational models of predictive speech coding based on deep learning ...
BASE
12
DeepPredSpeech: computational models of predictive speech coding based on deep learning ...
BASE
13
Extending the Cascaded Gaussian Mixture Regression Framework for Cross-Speaker Acoustic-Articulatory Mapping
In: IEEE/ACM Transactions on Audio, Speech and Language Processing (ISSN: 2329-9290, EISSN: 2329-9304), IEEE, 2017, 25 (3), pp. 662-673, ⟨10.1109/TASLP.2017.2651398⟩ ; https://hal.archives-ouvertes.fr/hal-01485540 (2017)
BASE
14
Automatic animation of an articulatory tongue model from ultrasound images of the vocal tract
In: Speech Communication (ISSN: 0167-6393, EISSN: 1872-7182), Elsevier, 2017, 93, pp. 63-75, ⟨10.1016/j.specom.2017.08.002⟩ ; https://hal.archives-ouvertes.fr/hal-01578315 (2017)
BASE
15
Voice Activity Detection Based on Statistical Likelihood Ratio With Adaptive Thresholding
In: IWAENC 2016 - International Workshop on Acoustic Signal Enhancement (IWAENC), Sep 2016, Xi'an, China, pp. 1-5, ⟨10.1109/IWAENC.2016.7602911⟩ ; https://hal.inria.fr/hal-01349776 (2016)
BASE
16
Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces
In: PLoS Computational Biology (ISSN: 1553-734X, EISSN: 1553-7358), Public Library of Science, 2016, 12 (11), e1005119, ⟨10.1371/journal.pcbi.1005119⟩ ; https://hal.archives-ouvertes.fr/hal-01459706 (2016)
Abstract: Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real-time for future BCI applications. This synthesizer converts movements of the main speech articulators (tongue, jaw, velum, and lips) into intelligible speech. The articulatory-to-acoustic mapping is performed using a deep neural network (DNN) trained on electromagnetic articulography (EMA) data recorded on a reference speaker synchronously with the produced speech signal. This DNN is then used in both offline and online modes to map the position of sensors glued on different speech articulators into acoustic parameters that are further converted into an audio signal using a vocoder. In offline mode, highly intelligible speech could be obtained, as assessed by a perceptual evaluation performed by 12 listeners. Then, to anticipate future BCI applications, we further assessed the real-time control of the synthesizer by both the reference speaker and new speakers, in a closed-loop paradigm using EMA data recorded in real time. A short calibration period was used to compensate for differences in sensor positions and articulatory differences between new speakers and the reference speaker. We found that real-time synthesis of vowels and consonants was possible with good intelligibility. In conclusion, these results open the way to future speech BCI applications using such an articulatory-based speech synthesizer.
Keyword: [SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing
URL: https://hal.archives-ouvertes.fr/hal-01459706
https://doi.org/10.1371/journal.pcbi.1005119
BASE
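The abstract above describes the core architecture: a frame-wise DNN regression from EMA sensor positions to acoustic parameters that a vocoder turns into audio. The following is a minimal, purely illustrative sketch of that kind of mapping in Python/PyTorch; the feature dimensions, layer sizes, and names are assumptions made for the example, not values or code from the paper.

# Illustrative sketch (assumptions, not the authors' implementation):
# a feed-forward network mapping one frame of EMA articulator coordinates
# (tongue, jaw, velum, lips) to acoustic parameters for a vocoder.
import torch
import torch.nn as nn

N_EMA = 18        # assumed: e.g. 9 sensors x 2 coordinates per frame
N_ACOUSTIC = 25   # assumed: e.g. spectral coefficients expected by a vocoder

model = nn.Sequential(
    nn.Linear(N_EMA, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_ACOUSTIC),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(ema_frames, acoustic_frames):
    # One supervised step on synchronized EMA / acoustic frames, both tensors
    # of shape [batch, features]; the paper's actual training setup may differ.
    optimizer.zero_grad()
    loss = loss_fn(model(ema_frames), acoustic_frames)
    loss.backward()
    optimizer.step()
    return loss.item()

# At run time, each incoming EMA frame is mapped to acoustic parameters;
# a vocoder (not shown) then converts these into an audio signal.
with torch.no_grad():
    acoustic = model(torch.randn(1, N_EMA))   # placeholder input frame

In the closed-loop setting described in the abstract, the same mapping would run frame by frame on live sensor data after a short calibration step compensating for sensor placement and speaker differences.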
17
By2014 Articulatory-Acoustic Dataset ...
BASE
18
Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces
Bocquelet, Florent; Hueber, Thomas; Girin, Laurent. Public Library of Science, 2016
BASE
19
Log-Rayleigh distribution: a simple and efficient statistical representation of log-spectral coefficients
In: IEEE Transactions on Audio, Speech, and Language Processing (IEEE, New York, NY), 15 (2007) 3, pp. 796-802
BLLDB
20
Perceptual long-term variable-rate sinusoidal modeling of speech
In: IEEE Transactions on Audio, Speech, and Language Processing (IEEE, New York, NY), 15 (2007) 3, pp. 851-861
BLLDB


Hit counts by source type:
Catalogues: 2
Bibliographies: 7
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 19