
Search in the Catalogues and Directories

Hits 1 – 20 of 27

1
Multistream neural architectures for cued-speech recognition using a pre-trained visual feature extractor and constrained CTC decoding
In: ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing ; https://hal.archives-ouvertes.fr/hal-03578503 ; ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing, May 2022, Singapore (2022)
BASE
4
Re-synchronization using the Hand Preceding Model for Multi-modal Fusion in Automatic Continuous Cued Speech Recognition
In: ISSN: 1520-9210 ; IEEE Transactions on Multimedia ; https://hal.archives-ouvertes.fr/hal-02433830 ; IEEE Transactions on Multimedia, Institute of Electrical and Electronics Engineers, 2021, 23, pp.292-305. ⟨10.1109/TMM.2020.2976493⟩ (2021)
Abstract: Cued Speech (CS) is lip reading augmented by hand coding, and it is very helpful to deaf people. Automatic CS recognition can help communication between deaf people and others. Because lip and hand movements are asynchronous, fusing them in automatic CS recognition is a challenging problem. In this work, we propose a novel re-synchronization procedure for multi-modal fusion, which aligns the hand features with the lip features. It is realized by delaying hand position and hand shape by their optimal hand preceding times, which are derived by investigating the temporal organization of hand position and hand shape movements in CS. This re-synchronization procedure is incorporated into a practical continuous CS recognition system that combines a convolutional neural network (CNN) with a multi-stream hidden Markov model (MSHMM). A significant improvement of about 4.6% has been achieved, reaching 76.6% CS phoneme recognition correctness compared with the state-of-the-art architecture (72.04%), which did not take the asynchrony of multi-modal fusion in CS into account. To our knowledge, this is the first work to tackle asynchronous multi-modal fusion in automatic continuous CS recognition. (A minimal sketch of the re-synchronization step follows this entry.)
Keyword: [SPI.ACOU]Engineering Sciences [physics]/Acoustics [physics.class-ph]; [SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing; Automatic CS recognition; CNN; Cued Speech; MSHMM; Multi-modal fusion; Re-synchronization procedure
URL: https://hal.archives-ouvertes.fr/hal-02433830
https://hal.archives-ouvertes.fr/hal-02433830/file/A%20New%20Re-synchronization%20Method%20based%20Multi-modal%20Fusion%20for%20Automatic%20Continuous%20Cued%20Speech%20Recognition.pdf
https://hal.archives-ouvertes.fr/hal-02433830/document
https://doi.org/10.1109/TMM.2020.2976493
BASE
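The re-synchronization procedure summarized in the abstract above boils down to delaying the hand-position and hand-shape feature streams by their respective hand preceding times, so that each frame's hand features line up with the lip features they anticipate before the streams are fused. Below is a minimal Python sketch of that alignment step, assuming per-frame NumPy feature matrices and hypothetical preceding times in frames; the function name, the offsets, and the plain concatenation at the end are illustrative assumptions, not the paper's implementation (which feeds the aligned streams into a CNN front end and an MSHMM).

import numpy as np

def resynchronize(lips, hand_pos, hand_shape,
                  pos_preceding_frames=10, shape_preceding_frames=15):
    """Delay the hand streams by their (hypothetical) preceding times so each
    frame's hand features align with the lip features they anticipate."""
    T = lips.shape[0]

    def delay(stream, k):
        # Shift the stream k frames later in time by padding the start with
        # copies of the first frame, then truncate to the lip-stream length.
        pad = np.repeat(stream[:1], k, axis=0)
        return np.vstack([pad, stream])[:T]

    hand_pos_aligned = delay(hand_pos, pos_preceding_frames)
    hand_shape_aligned = delay(hand_shape, shape_preceding_frames)

    # After alignment the three modalities can be fused frame by frame;
    # here they are simply concatenated along the feature axis.
    return np.hstack([lips, hand_pos_aligned, hand_shape_aligned])

if __name__ == "__main__":
    T = 100
    fused = resynchronize(np.random.randn(T, 8),   # lip features
                          np.random.randn(T, 2),   # hand position (x, y)
                          np.random.randn(T, 5))   # hand shape features
    print(fused.shape)  # -> (100, 15)

The per-stream delays stand in for the optimal hand preceding times that the paper estimates from the temporal organization of CS production; in a real system they would be measured from data rather than fixed constants.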
5
Auditory and Audiovisual Close-shadowing in Post-Lingually Deaf Cochlear-Implanted Patients and Normal-Hearing Elderly Adults
In: ISSN: 0196-0202 ; Ear and Hearing ; https://hal.archives-ouvertes.fr/hal-01546756 ; Ear and Hearing, Lippincott, Williams & Wilkins, 2018, 39 (1), pp.139-149. ⟨10.1097/AUD.0000000000000474⟩ (2018)
BASE
6
Csf18 ...
Liu, Li; Hueber, Thomas; Feng, Gang. - : Zenodo, 2018
BASE
8
The shadow of a doubt? Evidence for perceptuo-motor linkage during auditory and audiovisual close-shadowing
Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc. - : Frontiers Media S.A., 2014
BASE
9
Physical modeling of bilabial plosives production
In: ICA 2013 - Acoustics 2013 - 21st International Congress on Acoustics - 165th Meeting of the Acoustical Society of America ; https://hal.archives-ouvertes.fr/hal-00868345 ; ICA 2013 - Acoustics 2013 - 21st International Congress on Acoustics - 165th Meeting of the Acoustical Society of America, Jun 2013, Montréal, Canada. 035047 (9p.), ⟨10.1121/1.4799466⟩ (2013)
BASE
10
Temporal organization of Cued Speech production
Beautemps, Denis; Cathiard, Marie-Agnès; Attina, Virginie. - : U.K., Cambridge University Press, 2012
BASE
11
Cued Speech automatic recognition in normal-hearing and deaf subjects
In: Speech communication. - Amsterdam [u.a.] : Elsevier 52 (2010) 6, 504-512
BLLDB
OLC Linguistik
12
Image and Video for hearing impaired people
In: ISSN: 1687-5176 ; EISSN: 1687-5281 ; EURASIP Journal on Image and Video Processing ; https://hal.archives-ouvertes.fr/hal-00275029 ; EURASIP Journal on Image and Video Processing, Springer, 2007, 2007, pp.ID 45641. ⟨10.1155/2007/45641⟩ (2007)
BASE
13
Analysis and synthesis of the 3D movements of the head, face and hand of a speaker using cued speech
In: ISSN: 0001-4966 ; EISSN: 1520-8524 ; Journal of the Acoustical Society of America ; https://hal.archives-ouvertes.fr/hal-00143622 ; Journal of the Acoustical Society of America, Acoustical Society of America, 2005, 118 (2), pp.1144-1153 (2005)
BASE
14
La langue française Parlée Complétée (LPC) : sa coproduction avec la parole et l'organisation temporelle de sa perception [French Cued Speech (LPC): its co-production with speech and the temporal organization of its perception]
In: Revue parole. - Mons : Univ. (2004) 31-32, 255-280
BLLDB
OLC Linguistik
15
A pilot study of temporal organization in Cued Speech production of French syllables: rules for a Cued Speech synthesizer
In: Speech communication. - Amsterdam [u.a.] : Elsevier 44 (2004) 1-4, 197-214
BLLDB
OLC Linguistik
17
Characterizing and classifying Cued Speech vowels from labial parameters
In: 8th International Conference on Spoken Language Processing (ICSLP'04 or InterSpeech'04) ; https://hal.archives-ouvertes.fr/hal-00328134 ; 8th International Conference on Spoken Language Processing (ICSLP'04 or InterSpeech'04), 2004, Jeju, South Korea (2004)
BASE
18
Linear degrees of freedom in speech production : analysis of cineradio- and labio-film data and articulatory-acoustic modeling
In: Acoustical Society of America. The journal of the Acoustical Society of America. - Melville, NY : AIP 109 (2001) 5,1, 2165-2180
BLLDB
19
Deriving vocal-tract area functions from midsagittal profiles and formant frequencies : a new model for vowels and fricative consonants based on experimental data
In: Institut de la communication parlée <Grenoble>. Cahiers de l'ICP. Rapport de recherche. - Grenoble 5 (1996-1997), 33-56
BLLDB
20
Recovery of vocal tract geometry from formants for vowels and fricative consonants : using a midsagittal-to-area function conversion model
In: Institut de la communication parlée <Grenoble>. Cahiers de l'ICP. Rapport de recherche. - Grenoble 5 (1996-1997), 57-65
BLLDB
