1. Can you ‘read’ tongue movements? Evaluation of the contribution of tongue display to speech understanding
In: http://www.gipsa-lab.fr/%7Epierre.badin/PublisPDF_Badin_Endnote/Tarabalka_badin_Elisei_Bailly_TongueReading_ASSISTH_2007.pdf (2010)
BASE

2. Can you “read tongue movements”? (Author manuscript, published in Interspeech, Brisbane, Australia, 2008)
In: http://hal.archives-ouvertes.fr/docs/00/33/36/88/PDF/pb_IS08.pdf (2008)

3. Can you “read tongue movements”?
In: http://www.gipsa-lab.fr/%7Epierre.badin/PublisPDF_Badin_Endnote/Badin_Tarabalka_Elisei_Bailly_TongueReading_Interspeech_2008.pdf (2008)

4. Can you “read tongue movements”?
In: http://www.gipsa-lab.grenoble-inp.fr/%7Egerard.bailly/publis/synthese/_pbadin/pb_IS08.pdf (2008)

5. Speaking with smile or disgust: data and models (Author manuscript, published in Auditory-Visual Speech Processing (AVSP), Moreton Island, Australia, 2008)
In: http://hal.archives-ouvertes.fr/docs/00/33/36/73/PDF/gb_AVSP08.pdf (2008)

6. Degrees of freedom of facial movements in face-to-face conversational speech
In: http://www.gipsa-lab.fr/%7Epierre.badin/PublisPDF_Badin_Endnote/Bailly_Elisei_Badin_Savariaux_DoFFacialMovements_MMC_2006.pdf (2006)

7. Degrees of freedom of facial movements in face-to-face conversational speech
In: http://www.gipsa-lab.grenoble-inp.fr/%7Echristophe.savariaux/PDF/LREC_2006.pdf (2006)

8. Mother: A new generation of talking heads providing a flexible articulatory control for video-realistic speech animation
In: http://hal.inria.fr/docs/00/38/93/62/PDF/icslp00.pdf (2000)

9. Towards the Use of a Virtual Talking Head and of Speech Mapping tools for pronunciation training
In: http://www.icp.grenet.fr/ICP/publis/acoustique/_pb/STiLL98.ps (1998)

10. Can you “read tongue movements”?
In: http://halshs.archives-ouvertes.fr/docs/00/33/36/88/PDF/pb_IS08.pdf

11. Virtual Talking Heads and audiovisual articulatory synthesis
In: http://www.icp.inpg.fr/ICP/publis/synthese/_autres/vth_pb_03.pdf

12. Visual articulatory feedback for phonetic correction in second language learning
In: http://www.gavo.t.u-tokyo.ac.jp/L2WS2010/papers/L2WS2010_P1-10.pdf

13. Cross-speaker Acoustic-to-Articulatory Inversion using Phone-based Trajectory HMM for Pronunciation Training
In: http://www.cstr.ed.ac.uk/downloads/publications/2012/Hueber_etal_IS2012.pdf

14. Toward a multi-speaker visual articulatory feedback system (Interspeech 2011)
In: http://www.cstr.ed.ac.uk/downloads/publications/2011/BenYoussef-etal_IS11.pdf

Abstract:
In this paper, we present recent developments on the HMM-based acoustic-to-articulatory inversion approach that we are developing for a “visual articulatory feedback” system. In this approach, multi-stream phoneme HMMs are trained jointly on synchronous streams of acoustic and articulatory data, acquired by electromagnetic articulography (EMA). Acoustic-to-articulatory inversion is achieved in two steps. Phonetic and state decoding is performed first. Then articulatory trajectories are inferred from the decoded phone and state sequence using the maximum-likelihood parameter generation algorithm (MLPG). We introduce here a new procedure for the re-estimation of the HMM parameters, based on the Minimum Generation Error (MGE) criterion. We also investigate the use of model adaptation techniques based on maximum likelihood linear regression (MLLR), as a first step toward a multi-speaker visual articulatory feedback system.

Keyword:
Acoustic-articulatory inversion

URL: http://www.cstr.ed.ac.uk/downloads/publications/2011/BenYoussef-etal_IS11.pdf
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.395.8518

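The second step of the inversion pipeline described in entry 14's abstract is MLPG: given the static and delta feature means (and variances) read off the decoded HMM state sequence, it solves for the smoothest trajectory consistent with both streams. As a rough illustration only (not the authors' implementation), here is a minimal one-dimensional MLPG solve using a first-difference delta window; the function name `mlpg_1d` and the toy frame targets are invented for the example:

```python
import numpy as np

def mlpg_1d(mu_s, mu_d, prec_s, prec_d):
    """Toy maximum-likelihood parameter generation (MLPG) for one
    articulatory channel with a first-difference delta window.

    mu_s, mu_d   : per-frame static / delta means taken from the
                   decoded HMM state sequence (length T each)
    prec_s/prec_d: corresponding diagonal precisions (1 / variance)

    Returns the trajectory c minimizing
        sum_t prec_s[t] * (c[t] - mu_s[t])**2
      + sum_t prec_d[t] * (dc[t] - mu_d[t])**2,  dc[t] = c[t] - c[t-1].
    """
    T = len(mu_s)
    # Stack the static and delta windows into one matrix W (2T x T).
    W = np.zeros((2 * T, T))
    W[:T, :] = np.eye(T)              # static part: c itself
    for t in range(1, T):             # delta part (dc[0] left unconstrained)
        W[T + t, t] = 1.0
        W[T + t, t - 1] = -1.0
    mu = np.concatenate([mu_s, mu_d])
    P = np.diag(np.concatenate([prec_s, prec_d]))
    # Closed-form weighted least squares: c = (W'PW)^-1 W'P mu
    return np.linalg.solve(W.T @ P @ W, W.T @ P @ mu)

# Toy targets: state-wise constant static means, delta means of zero
# ("stay still") with a high delta precision, so the jump from 0 to 1
# is smoothed out across frames.
mu_s = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
mu_d = np.zeros(5)
traj = mlpg_1d(mu_s, mu_d, np.ones(5), 10.0 * np.ones(5))
```

Raising the delta precisions trades frame-wise accuracy against smoothness, which is the role the delta stream plays in trajectory-HMM synthesis; a full system would use the paper's multi-dimensional EMA features and per-state variances rather than these toy values.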
15. Speaking with smile or disgust: data and models
In: http://isca-speech.org/archive_open/archive_papers/avsp08/av08_111.pdf

16. Mother: A new generation of talking heads providing a flexible articulatory control for video-realistic speech animation
In: http://www-evasion.imag.fr/people/Lionel.Reveret/publis/icslp00.pdf

17. Vision of Tongue in Augmented Speech: Contribution to Speech Comprehension and Visual Tracking Strategies
In: http://www.icp.inpg.fr/~dohen/face2face/Proceedings/SubmittedContributions/BadinEliseiHuangTarabalkaBailly.pdf

18. Acoustic-to-articulatory inversion in speech based on statistical models
In: http://www.cstr.ed.ac.uk/downloads/publications/2010/BenYoussef_Badin_Bailly_AVSP2010.pdf

19. Hearing By Eyes Thanks To The "Labiophone": Exchanging Speech Movements
In: http://www.icp.inpg.fr/~bailly/publis/synthese/_gb/labiophone_gb_COST00.ps