
Search in the Catalogues and Directories

Hits 1 – 19 of 19

1
Can you ‘read’ tongue movements? Evaluation of the contribution of tongue display to speech understanding
In: http://www.gipsa-lab.fr/%7Epierre.badin/PublisPDF_Badin_Endnote/Tarabalka_badin_Elisei_Bailly_TongueReading_ASSISTH_2007.pdf (2010)
BASE
2
Author manuscript, published in "Interspeech, Brisbane, Australia (2008)": Can you “read tongue movements”?
In: http://hal.archives-ouvertes.fr/docs/00/33/36/88/PDF/pb_IS08.pdf (2008)
BASE
3
Can you “read tongue movements”?
In: http://www.gipsa-lab.fr/%7Epierre.badin/PublisPDF_Badin_Endnote/Badin_Tarabalka_Elisei_Bailly_TongueReading_Interspeech_2008.pdf (2008)
BASE
4
Can you “read tongue movements”?
In: http://www.gipsa-lab.grenoble-inp.fr/%7Egerard.bailly/publis/synthese/_pbadin/pb_IS08.pdf (2008)
BASE
5
Author manuscript, published in "Auditory-Visual Speech Processing (AVSP), Moreton Island, Australia (2008)": Speaking with smile or disgust: data and models
In: http://hal.archives-ouvertes.fr/docs/00/33/36/73/PDF/gb_AVSP08.pdf (2008)
BASE
6
Degrees of freedom of facial movements in face-to-face conversational speech
In: http://www.gipsa-lab.fr/%7Epierre.badin/PublisPDF_Badin_Endnote/Bailly_Elisei_Badin_Savariaux_DoFFacialMovements_MMC_2006.pdf (2006)
BASE
7
Degrees of freedom of facial movements in face-to-face conversational speech
In: http://www.gipsa-lab.grenoble-inp.fr/%7Echristophe.savariaux/PDF/LREC_2006.pdf (2006)
BASE
8
Mother: A new generation of talking heads providing a flexible articulatory control for video-realistic speech animation
In: http://hal.inria.fr/docs/00/38/93/62/PDF/icslp00.pdf (2000)
BASE
9
Towards the Use of a Virtual Talking Head and of Speech Mapping tools for pronunciation training
In: http://www.icp.grenet.fr/ICP/publis/acoustique/_pb/STiLL98.ps (1998)
BASE
10
Can you “read tongue movements”?
In: http://halshs.archives-ouvertes.fr/docs/00/33/36/88/PDF/pb_IS08.pdf
BASE
11
Virtual Talking Heads and audiovisual articulatory synthesis
In: http://www.icp.inpg.fr/ICP/publis/synthese/_autres/vth_pb_03.pdf
BASE
12
Visual articulatory feedback for phonetic correction in second language learning
In: http://www.gavo.t.u-tokyo.ac.jp/L2WS2010/papers/L2WS2010_P1-10.pdf
BASE
13
Cross-speaker Acoustic-to-Articulatory Inversion using Phone-based Trajectory HMM for Pronunciation Training
In: http://www.cstr.ed.ac.uk/downloads/publications/2012/Hueber_etal_IS2012.pdf
BASE
14
Toward a multi-speaker visual articulatory feedback system (INTERSPEECH 2011)
In: http://www.cstr.ed.ac.uk/downloads/publications/2011/BenYoussef-etal_IS11.pdf
BASE
15
Speaking with smile or disgust: data and models
In: http://isca-speech.org/archive_open/archive_papers/avsp08/av08_111.pdf
BASE
16
Mother: A new generation of talking heads providing a flexible articulatory control for video-realistic speech animation
In: http://www-evasion.imag.fr/people/Lionel.Reveret/publis/icslp00.pdf
BASE
17
Vision of Tongue in Augmented Speech: Contribution to Speech Comprehension and Visual Tracking Strategies
In: http://www.icp.inpg.fr/~dohen/face2face/Proceedings/SubmittedContributions/BadinEliseiHuangTarabalkaBailly.pdf
BASE
18
Acoustic-to-articulatory inversion in speech based on statistical models
In: http://www.cstr.ed.ac.uk/downloads/publications/2010/BenYoussef_Badin_Bailly_AVSP2010.pdf
Abstract: Two speech inversion methods are implemented and compared. In the first, multistream Hidden Markov Models (HMMs) of phonemes are jointly trained from synchronous streams of articulatory data acquired by EMA and speech spectral parameters; an acoustic recognition system uses the acoustic part of the HMMs to deliver a phoneme chain and the state durations; this information is then used by a trajectory formation procedure based on the articulatory part of the HMMs to resynthesise the articulatory movements. In the second, Gaussian Mixture Models (GMMs) are trained on these streams to directly associate articulatory frames with acoustic frames in context, using Maximum Likelihood Estimation. Over a corpus of 17 minutes uttered by a French speaker, the RMS error was 1.62 mm with the HMMs and 2.25 mm with the GMMs.
Keyword: ElectroMagnetic Articulography (EMA); Gaussian Mixture Model (GMM); Hidden Markov Model (HMM); Maximum Likelihood; Speech inversion
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.395.9994
http://www.cstr.ed.ac.uk/downloads/publications/2010/BenYoussef_Badin_Bailly_AVSP2010.pdf
BASE
19
Hearing By Eyes Thanks To The "Labiophone": Exchanging Speech Movements
In: http://www.icp.inpg.fr/~bailly/publis/synthese/_gb/labiophone_gb_COST00.ps
BASE
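The abstract of hit 18 describes a GMM method that directly associates articulatory frames with acoustic frames. A minimal sketch of that idea follows, using synthetic stand-in data rather than the paper's EMA corpus, and scikit-learn for the joint GMM (my choice of toolchain, not necessarily the authors'): a GMM is fitted to concatenated [acoustic; articulatory] frames, and inversion takes the responsibility-weighted conditional expectation of the articulatory part given the acoustic part.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins: "acoustic" frames x and "articulatory" frames y
# related by a noisy nonlinear mapping (hypothetical data, not EMA).
x = rng.uniform(-1, 1, size=(2000, 2))
y = np.stack([np.sin(3 * x[:, 0]), x[:, 0] * x[:, 1]], axis=1)
y += 0.05 * rng.standard_normal(y.shape)

# Train one joint GMM over concatenated [acoustic; articulatory] frames.
z = np.hstack([x, y])
gmm = GaussianMixture(n_components=16, covariance_type="full",
                      random_state=0).fit(z)

def invert(x_new, gmm, dx):
    """MMSE inversion: E[y|x] = sum_k p(k|x) (mu_y_k + S_yx_k S_xx_k^-1 (x - mu_x_k))."""
    mu_x, mu_y = gmm.means_[:, :dx], gmm.means_[:, dx:]
    S_xx = gmm.covariances_[:, :dx, :dx]
    S_yx = gmm.covariances_[:, dx:, :dx]
    K = len(mu_x)
    # Responsibilities p(k|x) from the marginal GMM over the acoustic part.
    logp = np.stack([multivariate_normal.logpdf(x_new, mu_x[k], S_xx[k])
                     for k in range(K)], axis=1) + np.log(gmm.weights_)
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # Mix the per-component linear regressions by responsibility.
    y_hat = np.zeros((len(x_new), mu_y.shape[1]))
    for k in range(K):
        A = S_yx[k] @ np.linalg.inv(S_xx[k])
        y_hat += resp[:, [k]] * (mu_y[k] + (x_new - mu_x[k]) @ A.T)
    return y_hat

x_test = rng.uniform(-1, 1, size=(200, 2))
y_true = np.stack([np.sin(3 * x_test[:, 0]),
                   x_test[:, 0] * x_test[:, 1]], axis=1)
rmse = np.sqrt(np.mean((invert(x_test, gmm, dx=2) - y_true) ** 2))
```

Each Gaussian component acts as a local linear regressor from acoustic to articulatory space; the paper's minimum-mean-square and maximum-likelihood mapping variants differ in how these components are combined, but this conditional-expectation form captures the core of the technique.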

Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 19
© 2013 - 2024 Lin|gu|is|tik