Using deep neural networks to estimate tongue movements from speech face motion
BASE
Abstract:
This study concludes a tripartite investigation into the indirect visibility of the moving tongue in human speech as reflected in co-occurring changes of the facial surface. We were particularly interested in how the shared information is distributed over the range of contributing frequencies. In the current study, we examine the degree to which tongue movements during speech can be reliably estimated from face motion using artificial neural networks. We simultaneously acquired data for both movement types: tongue movements were measured with Electromagnetic Articulography (EMA), and face motion with a passive marker-based motion capture system. A multiresolution analysis using wavelets provided the desired decomposition into frequency subbands. In the two earlier studies of the project, we established linear and non-linear relations between lingual and facial speech motion, as predicted by and compatible with previous research on auditory-visual speech. The results of the current study, using a Deep Neural Network (DNN) for prediction, show that a substantial amount of variance can be recovered (between 13.9% and 33.2%, depending on the speaker and tongue sensor location). Importantly, however, the recovered variance values and the root mean squared error values of the Euclidean distances between the measured and predicted tongue trajectories are in the range of the linear estimations of our earlier study.
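The two evaluation measures named in the abstract (recovered variance and the root mean squared error of the Euclidean distances between measured and predicted trajectories) can be sketched as follows. This is a minimal illustration on hypothetical toy data, not the authors' actual pipeline, which used EMA sensor trajectories and a DNN predictor:

```python
import numpy as np

def recovered_variance(measured, predicted):
    """Proportion of variance in the measured trajectory explained by the
    prediction (1 - SS_res / SS_tot), pooled over samples and coordinates."""
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse_euclidean(measured, predicted):
    """Root mean squared error of the per-sample Euclidean distances
    between measured and predicted trajectories."""
    dists = np.linalg.norm(measured - predicted, axis=1)
    return float(np.sqrt(np.mean(dists ** 2)))

# Hypothetical toy trajectories (rows = time samples, columns = coordinates).
measured = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
predicted = measured + np.array([0.0, 1.0])  # constant 1-unit offset in y

print(rmse_euclidean(measured, predicted))     # 1.0: every sample is 1 unit off
print(recovered_variance(measured, measured))  # 1.0: a perfect prediction
```

A systematic offset inflates the RMSE while a perfect prediction recovers all variance, which is why the study reports both measures side by side.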
Keyword:
speech processing systems
URL: http://avsp2017.loria.fr/proceedings/
URL: http://handle.westernsydney.edu.au:8081/1959.7/uws:44755
Consonantal timing and release burst acoustics distinguish multiple coronal stop place distinctions in Wubuy (Australia)
Message vs. messenger effects on cross-modal matching for spoken phrases
Discrimination of Multiple Coronal Stop Contrasts in Wubuy (Australia): A Natural Referent Consonant Account
Singing emotionally: a study of pre-production, production, and post-production facial expressions
Articulatory basis of the apical/laminal distinction: tongue tip/body coordination in the Wubuy 4-way coronal stop contrast
Exploring nonlinear relationships between speech face motion and tongue movements using Mutual Information
Now you see it, now you don’t - frequency distribution of articulatory information reflected in speech face motion