
Search in the Catalogues and Directories

Hits 21 – 40 of 47

21
Phonetic convergence in interaction
Lelong, Amélie. - : HAL CCSD, 2012
In: https://tel.archives-ouvertes.fr/tel-00822871 ; Other. Université de Grenoble, 2012. French. ⟨NNT : 2012GRENT079⟩ (2012)
22
Bayesian Speaker Adaptation Based on a New Hierarchical Probabilistic Model
In: Electrical and Computer Engineering Faculty Research and Publications (2012)
23
Using articulatory adjustment to compensate for hypernasality - a modeling study based on measures of electromagnetic articulography (EMA)
Rong, Panying. - 2012
Abstract: The speech of individuals with velopharyngeal incompetency (VPI) is characterized by hypernasality, a speech quality related to excessive emission of acoustic energy through the nose caused by failure of velopharyngeal closure. In an attempt to reduce hypernasality and, in turn, improve the quality of VPI-related hypernasal speech, this study develops an approach that uses speech-dependent articulatory adjustments to counteract the hypernasality caused by excessive velopharyngeal opening (VPO). A preliminary study derived such articulatory adjustments for hypernasal /i/ vowels from simulations of an articulatory model (Speech Processing and Synthesis Toolboxes, Childers (2000)). Nasal /i/ vowels both with and without articulatory adjustments were synthesized by the model. Spectral analysis showed that the nasal acoustic features were attenuated and the oral formant structures restored after articulatory adjustment, and comparisons of perceptual nasality ratings between the two types of nasal vowels showed that the model-generated adjustments significantly reduced the perceived nasality of nasal /i/ vowels. These adjustments for nasal /i/ follow two patterns: (1) a consistent adjustment pattern, corresponding to an expansion at the velopharynx, and (2) speech-dependent fine-tuning patterns, including adjustments in the lip area and the upper pharynx. The long-term goal of this work is to apply the approach as a therapeutic tool in clinical speech treatment, detecting and correcting, on an individual basis, the maladaptive articulatory behaviors that speakers with VPI develop spontaneously.
This study constructed a speaker-adaptive articulatory model within the framework of Childers's vocal tract model to simulate articulatory adjustments that compensate for the acoustic consequences of velopharyngeal opening and reduce nasality. To construct the speaker-adaptive model, (1) an articulatory-acoustic-aerodynamic database was recorded with electromagnetic articulography and aerodynamic instruments to provide point-wise articulatory data to be fitted into the framework of Childers's standard vocal tract model; (2) the length and transverse dimension of the vocal tract were adjusted to fit each speaker by minimizing, with a simulated annealing algorithm, the acoustic discrepancy between the model simulation and the target derived from the acoustic signal in the database; and (3) the articulatory space of the model was adjusted to individual articulatory features by adapting the movement ranges of all articulators. With the speaker-adaptive articulatory model, the articulatory configurations of the oral and nasal vowels in the database were simulated and synthesized. Given the acoustic targets derived from the oral vowels in the database, speech-dependent articulatory adjustments were simulated to compensate for the acoustic consequences of VPO. The resulting articulatory configurations correspond to nasal vowels with articulatory adjustment, which were synthesized to serve as perceptual stimuli for a nasality-rating listening task; oral and nasal vowels synthesized from the oral and nasal vowel targets in the database also served as stimuli. The results show both acoustic and perceptual effects of the model-generated articulatory adjustment on the nasal vowels /a/, /i/ and /u/.
Acoustically, the articulatory adjustment (1) restores the formant structures altered by nasal coupling, including shifted formant frequencies, attenuated formant intensities and expanded formant bandwidths, and (2) attenuates the peaks and zeros introduced by the nasal resonances. Perceptually, the articulatory adjustment generated by the speaker-adaptive model significantly reduces the perceived nasality of all three vowels (/a/, /i/, /u/). These acoustic and perceptual effects indicate that both the acoustic goal of compensating for the acoustic discrepancy caused by VPO and the auditory goal of reducing the perception of nasality were achieved. This finding is consistent with motor equivalence (Hughes and Abbs, 1976; Maeda, 1990), by which inter-articulator coordination compensates for deviations from the acoustic/auditory goal caused by the shifted position of an articulator. The articulatory adjustment responsible for these effects was decomposed into a set of empirical orthogonal modes (Story and Titze, 1998); both gross articulatory patterns and fine-tuning adjustments appear in the principal orthogonal modes, which account for the acoustic compensation and the reduction of nasality. For /a/ and /i/, a direct relationship was found among the acoustic features, nasality, and articulatory adjustment patterns: the adjustments indicated by the principal orthogonal modes of the adjusted nasal /a/ and /i/ were directly correlated with the attenuation of the acoustic cues of nasality (i.e., shifts of the F1 and F2 frequencies) and with the reduction of the nasality rating. For /u/, this relationship was less prominent, suggesting additional acoustic correlates of nasality beyond F1 and F2. These findings demonstrate, through model simulation, that articulatory adjustment can reduce the perception of nasality. A speaker-adaptive articulatory model can simulate individual articulatory adjustment strategies to serve, in clinical settings, as articulatory targets for correcting the maladaptive articulatory behaviors developed spontaneously by speakers with hypernasal speech, and it offers speakers with VPI an intuitive means of articulatory learning and self-training through model-speaker interaction. (A rough code sketch of the vocal-tract fitting and the orthogonal-mode decomposition follows this record.)
Keyword: Articulatory adjustment; Articulatory modeling; Electromagnetic Articulography; Hypernasality; Speaker adaptation
URL: http://hdl.handle.net/2142/42157
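The abstract above describes two algorithmic steps concretely enough to sketch: the simulated-annealing search over vocal-tract length and transverse dimension that minimizes the acoustic discrepancy with a speaker's targets, and the decomposition of articulatory adjustments into empirical orthogonal modes. The Python sketch below is not taken from the thesis; the function names, the synthesize_formants callable standing in for Childers's vocal-tract synthesizer, and all numeric settings are assumptions made purely for illustration.

# Minimal sketch (not the author's code) of two steps named in the abstract:
#   1) fit_vocal_tract(): simulated annealing over a length factor and a
#      transverse (cross-sectional) scale factor, minimizing the mismatch
#      between model-predicted and target formant frequencies.
#   2) empirical_orthogonal_modes(): decomposition of articulatory adjustment
#      configurations into orthogonal modes via SVD (PCA), in the spirit of
#      Story and Titze (1998).
# `synthesize_formants` is a hypothetical stand-in for the articulatory synthesizer.

import numpy as np


def empirical_orthogonal_modes(configs: np.ndarray, n_modes: int = 2):
    """Return the mean configuration, the first n_modes orthogonal modes,
    and each configuration's weight on those modes.

    configs: (n_samples, n_parameters) array of articulatory configurations,
    e.g. adjusted-minus-unadjusted articulator positions.
    """
    mean = configs.mean(axis=0)
    centered = configs - mean
    # SVD of the centered data: rows of vt are the orthogonal modes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]
    weights = centered @ modes.T  # projection of each sample onto each mode
    return mean, modes, weights


def fit_vocal_tract(target_formants, synthesize_formants, rng=None,
                    n_iter=2000, t0=1.0, cooling=0.995):
    """Simulated annealing over (length_factor, transverse_factor).

    synthesize_formants(length_factor, transverse_factor) must return the
    model-predicted formant frequencies (Hz) for the current scaling.
    """
    rng = rng or np.random.default_rng(0)
    x = np.array([1.0, 1.0])  # start from the standard (unscaled) model
    cost = np.sum((synthesize_formants(*x) - target_formants) ** 2)
    best_x, best_cost, t = x.copy(), cost, t0
    for _ in range(n_iter):
        cand = x + rng.normal(scale=0.02, size=2)  # small random perturbation
        cand = np.clip(cand, 0.7, 1.3)             # keep scalings plausible
        c = np.sum((synthesize_formants(*cand) - target_formants) ** 2)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c < cost or rng.random() < np.exp((cost - c) / t):
            x, cost = cand, c
            if c < best_cost:
                best_x, best_cost = cand.copy(), c
        t *= cooling  # geometric cooling schedule
    return best_x, best_cost


# Toy usage with a fake synthesizer (two "formants" scaling with the factors):
# fake = lambda L, T: np.array([500.0, 1500.0]) / L * T
# params, err = fit_vocal_tract(np.array([480.0, 1440.0]), fake)

In this framing, the weights returned by empirical_orthogonal_modes are what one would inspect to separate a gross pattern such as the velopharyngeal expansion reported for /i/ from the speech-dependent fine-tuning adjustments.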
24
Speaker similarity evaluation of foreign-accented speech synthesis using HMM-based speaker adaptation
In: http://www.cstr.inf.ed.ac.uk/downloads/publications/2011/wester_icassp_2011.pdf (2011)
25
Computational differences between whispered and non-whispered speech
Lim, Boon Pang. - 2011
26
Automatic Speech Recognition for ageing voices
Vipperla, Ravichander. - : The University of Edinburgh, 2011
27
Vocal Attractiveness Of Statistical Speech Synthesisers
28
Speaker similarity evaluation of foreign-accented speech synthesis using HMM-based speaker adaptation
Wester, Mirjam; Karhila, Reima. - : IEEE, 2011
29
Unsupervised intralingual and cross-lingual speaker adaptation for HMM-based speech synthesis using two-pass decision tree construction
Gibson, Matthew; Byrne, William. - : IEEE Transactions on Audio, Speech, and Language Processing, 2010
30
Thousands of Voices for HMM-Based Speech Synthesis – Analysis and Application of TTS Systems Built on Various ASR Corpora
31
Thousands of Voices for HMM-Based Speech Synthesis – Analysis and Application of TTS Systems Built on Various ASR Corpora
32
Speaker adaptation and the evaluation of speaker similarity in the EMIME speech-to-speech translation project
Wester, Mirjam; Dines, John; Gibson, Matthew. - : 7th ISCA Speech Synthesis Workshop, 2010
33
Two-pass decision tree construction for unsupervised adaptation of HMM-based synthesis models
Gibson, Matthew. - 2009
34
Cross-lingual speaker adaptation for HMM-based speech synthesis
In: http://isca-speech.org/archive_open/archive_papers/iscslp2008/009.pdf (2008)
35
Cross-Lingual Speaker Adaptation for HMM-Based Speech Synthesis
Yi Jian Wu; Simon King; Keiichi Tokuda. - : Institute of Electrical and Electronics Engineers, 2008
36
Speaker Adaptation of Language Models for Automatic Dialog Act Segmentation of Meetings
In: DTIC (2007)
37
Nonparallel Training for Voice Conversion Based on a Parameter Adaptation Approach
In: Departmental Papers (ESE) (2006)
38
Non-Parallel Training for Voice Conversion by Maximum Likelihood Constrained Adaptation
In: Departmental Papers (ESE) (2004)
39
Speech Recognition Using Dynamical Model of Speech Production
Iso, Ken-ichi
In: http://reports-archive.adm.cs.cmu.edu/anon/1992/CMU-CS-92-187.ps (1992)
40
Speech Recognition Using Dynamical Model of Speech Production
In: ftp://reports.adm.cs.cmu.edu/usr/anon/1992/CMU-CS-92-187.ps (1992)
