Search in the Catalogues and Directories

Hits 1 – 8 of 8

1
Rejection Based on A Posteriori Probability Estimated by MLP with Application
In: http://www.princeton.edu/~lzhong/publications/ICASSP2000.pdf (2000)
BASE
2
Efficient Embedded Speech Recognition for Very Large Vocabulary Mandarin Car-Navigation Systems
In: http://speechlab.eece.mu.edu/johnson/papers/qian_ce09.pdf
BASE
3
Real-Time Viterbi Searching for Practical Telephone Speech Recognition Systems
In: http://isca-speech.org/archive_open/archive_papers/iscslp2002/clp2_104.pdf
BASE
4
Improving Task Independent Utterance Verification Based on On-Line Garbage Phoneme Likelihood
In: http://www.ruf.rice.edu/~mobile/publications/report-UV-2000.pdf
BASE
5
Perception of Face Parts and Face Configurations: An fMRI Study
In: http://web.mit.edu/bcs/nklab/media/pdfs/Liu.Harris.Kanwisher.JOCN2010.pdf
BASE
6
Perception of Face Parts and Face Configurations: An fMRI Study
In: http://web.mit.edu/bcs/nklab/media/pdfs/Liu.Perception.2009.pdf
BASE
7
Real-time Speech-driven Animation of Expressive Talking Faces
In: http://levis.tongji.edu.cn/gzli/pub/ijgs/ijgs7-zju.pdf
Abstract: In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level models the relationship between the acoustic features of frames and the audio labels of phonemes. Using certain constraints, the predicted emotion labels of the speech are adjusted to obtain facial expression labels, which are combined with the sub-phonemic labels. The combinations are mapped into Facial Action Units (FAUs), and audio-visually synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and that the synthesized facial sequences reach a comparatively convincing quality.
Keyword: audio-visual mapping
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.332.2928
http://levis.tongji.edu.cn/gzli/pub/ijgs/ijgs7-zju.pdf
BASE
8
The Part Task of the Part-Spacing Paradigm Is Not a Pure Measurement of Part-Based Information of Faces
In: ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/c3/a4/PLoS_One_2009_Jul_15_4(7)_e6239.tar.gz
BASE
© 2013 – 2024 Lin|gu|is|tik