Search results (source: BASE):

21. Discrimination of multiple coronal stop contrasts in Wubuy (Australia): a natural referent consonant account
22. Frequency in the input affects perception of phonological contrasts for native speakers
23. Articulatory basis of the apical/laminal distinction: tongue tip/body coordination in the Wubuy 4-way coronal stop contrast
24. Exploring nonlinear relationships between speech face motion and tongue movements using Mutual Information
27. Now you see it, now you don't: frequency distribution of articulatory information reflected in speech face motion
29. Speech articulator movements recorded from facing talkers using two electromagnetic articulometer systems simultaneously
30. Multimodal speech animation from electromagnetic articulography data
Gibert, Guillaume; Attina, Virginie; Tiede, Mark; Bundgaard-Nielsen, Rikke L.; Kroos, Christian; Kasisopa, Benjawan; Vatikiotis-Bateson, Eric; Best, Catherine T. IEEE, 2012.

Abstract: Virtual humans have become part of our everyday life (movies, the internet, computer games). Although they are increasingly realistic, their speech capabilities are usually limited and often neither coherent nor synchronous with the corresponding acoustic signal. We describe a method to convert a virtual human avatar (animated through key frames and interpolation) into a more naturalistic talking head. Speech capabilities were added to the avatar using real speech-production data: electromagnetic articulography (EMA) recordings provided lip, jaw and tongue trajectories of a speaker engaged in face-to-face communication. An articulatory model driving jaw, lip and tongue movements was built, and, by constraining the key-frame values, a corresponding high-definition tongue articulatory model was developed. The resulting avatar produces visible and partly occluded facial speech movements coherent and synchronous with the acoustic signal.

Keywords: 170204 - Linguistic Processes (incl. Speech Production and Comprehension); 970120 - Expanding Knowledge in Languages; avatars (virtual reality); Communication and Culture; speech synthesis

URL: http://www.eurasip.org/Proceedings/Eusipco/Eusipco2012/Conference/index.html ; http://handle.uws.edu.au:8081/1959.7/521451
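The record above describes building an articulatory model from EMA coil trajectories. The abstract gives no implementation details; such linear articulatory models are, however, commonly derived by principal component analysis over the recorded marker coordinates. A minimal sketch of that approach, assuming a NumPy array of frames of flattened 3-D coil positions (all function and variable names here are hypothetical, not from the paper):

```python
import numpy as np

def build_articulatory_model(ema_frames: np.ndarray, n_components: int = 6):
    """PCA-style linear model: each frame ~ mean + weights @ basis.

    ema_frames: (n_frames, n_coils * 3) flattened EMA coil coordinates.
    Returns (mean, basis), where basis has shape (n_components, n_coils * 3).
    """
    mean = ema_frames.mean(axis=0)
    centered = ema_frames - mean
    # SVD of the centered data; rows of vt are the principal movement
    # directions (e.g. jaw opening, lip rounding) ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def animate_frame(mean, basis, weights):
    """Reconstruct one frame of articulator positions from component weights."""
    return mean + weights @ basis
```

Driving an avatar then reduces to streaming low-dimensional weight vectors (one per video frame) and reconstructing coil positions with `animate_frame`, which is what makes such a model convenient for key-frame animation pipelines.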
31. Vowel acoustics reliably differentiate three coronal stops of Wubuy across prosodic contexts
32. Second language learners' vocabulary expansion is associated with improved second language vowel intelligibility
33. Vowel acoustics reliably differentiate three coronal stops of Wubuy across prosodic contexts
35. Vocabulary size matters: the assimilation of second-language Australian English vowels to first-language Japanese vowel categories
36. Vocabulary size is associated with second-language vowel perception performance in adult learners
38. Tongue body position differences in the coronal stop consonants of Wubuy
39. A kinematic analysis of temporal differentiation of the four-way coronal stop contrast in Wubuy (Australia)