
Search in the Catalogues and Directories

Hits 1 – 20 of 27

1
Multistream neural architectures for cued-speech recognition using a pre-trained visual feature extractor and constrained CTC decoding
In: ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing, May 2022, Singapore. https://hal.archives-ouvertes.fr/hal-03578503
BASE
4
Re-synchronization using the Hand Preceding Model for Multi-modal Fusion in Automatic Continuous Cued Speech Recognition
In: IEEE Transactions on Multimedia, Institute of Electrical and Electronics Engineers, 2021, 23, pp. 292-305. ISSN 1520-9210. ⟨10.1109/TMM.2020.2976493⟩. https://hal.archives-ouvertes.fr/hal-02433830
BASE
5
Auditory and Audiovisual Close-shadowing in Post-Lingually Deaf Cochlear-Implanted Patients and Normal-Hearing Elderly Adults
In: Ear and Hearing, Lippincott, Williams & Wilkins, 2018, 39 (1), pp. 139-149. ISSN 0196-0202. ⟨10.1097/AUD.0000000000000474⟩. https://hal.archives-ouvertes.fr/hal-01546756
BASE
6
Csf18 ...
Liu, Li; Hueber, Thomas; Feng, Gang. - Zenodo, 2018
BASE
8
The shadow of a doubt? Evidence for perceptuo-motor linkage during auditory and audiovisual close-shadowing
Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc. - Frontiers Media S.A., 2014
Abstract: One classical argument in favor of a functional role of the motor system in speech perception comes from the close-shadowing task, in which a subject has to identify and repeat an auditory speech stimulus as quickly as possible. The fact that close-shadowing can occur very rapidly and much faster than manual identification of the speech target is taken to suggest that perceptually induced speech representations are already shaped in a motor-compatible format. Another argument is provided by audiovisual interactions often interpreted as referring to a multisensory-motor framework. In this study, we attempted to combine these two paradigms by testing whether the visual modality could speed motor response in a close-shadowing task. To this aim, both oral and manual responses were evaluated during the perception of auditory and audiovisual speech stimuli, clear or embedded in white noise. Overall, oral responses were faster than manual ones, but it also appeared that they were less accurate in noise, which suggests that motor representations evoked by the speech input could be rough at a first processing stage. In the presence of acoustic noise, the audiovisual modality led to both faster and more accurate responses than the auditory modality. No interaction was observed, however, between modality and response. Altogether, these results are interpreted within a two-stage sensory-motor framework, in which the auditory and visual streams are integrated together and with internally generated motor representations before a final decision becomes available.
Keyword: Psychology
URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4068292
http://www.ncbi.nlm.nih.gov/pubmed/25009512
https://doi.org/10.3389/fpsyg.2014.00568
BASE
9
Physical modeling of bilabial plosives production
In: ICA 2013 - Acoustics 2013 - 21st International Congress on Acoustics - 165th Meeting of the Acoustical Society of America, Jun 2013, Montréal, Canada, 035047 (9 p.). ⟨10.1121/1.4799466⟩. https://hal.archives-ouvertes.fr/hal-00868345
BASE
10
Temporal organization of Cued Speech production
Beautemps, Denis; Cathiard, Marie-Agnes; Attina, Virginie. - U.K. : Cambridge University Press, 2012
BASE
11
Cued Speech automatic recognition in normal-hearing and deaf subjects
In: Speech communication. - Amsterdam [et al.] : Elsevier 52 (2010) 6, 504-512
BLLDB
OLC Linguistik
12
Image and Video for hearing impaired people
In: EURASIP Journal on Image and Video Processing, Springer, 2007, article ID 45641. ISSN 1687-5176, EISSN 1687-5281. ⟨10.1155/2007/45641⟩. https://hal.archives-ouvertes.fr/hal-00275029
BASE
13
Analysis and synthesis of the 3D movements of the head, face and hand of a speaker using cued speech
In: Journal of the Acoustical Society of America, Acoustical Society of America, 2005, 118 (2), pp. 1144-1153. ISSN 0001-4966, EISSN 1520-8524. https://hal.archives-ouvertes.fr/hal-00143622
BASE
14
La langue française Parlée Complétée (LPC) : sa coproduction avec la parole et l'organisation temporelle de sa perception [French Cued Speech (LPC): its co-production with speech and the temporal organization of its perception]
In: Revue parole. - Mons : Univ. (2004) 31-32, 255-280
BLLDB
OLC Linguistik
15
A pilot study of temporal organization in Cued Speech production of French syllables: rules for a Cued Speech synthesizer
In: Speech communication. - Amsterdam [et al.] : Elsevier 44 (2004) 1-4, 197-214
OLC Linguistik
16
A pilot study of temporal organization in Cued Speech production of French syllables: rules for a Cued Speech synthesizer
In: Speech communication. - Amsterdam [et al.] : Elsevier 44 (2004) 1-4, 197-214
BLLDB
17
Characterizing and classifying Cued Speech vowels from labial parameters
In: 8th International Conference on Spoken Language Processing (ICSLP'04 or InterSpeech'04), 2004, Jeju, South Korea. https://hal.archives-ouvertes.fr/hal-00328134
BASE
18
Linear degrees of freedom in speech production: analysis of cineradio- and labio-film data and articulatory-acoustic modeling
In: The Journal of the Acoustical Society of America. - Melville, NY : AIP 109 (2001) 5,1, 2165-2180
BLLDB
19
Deriving vocal-tract area functions from midsagittal profiles and formant frequencies: a new model for vowels and fricative consonants based on experimental data
In: Institut de la communication parlée <Grenoble>. Cahiers de l'ICP. Rapport de recherche. - Grenoble 5 (1996-1997), 33-56
BLLDB
20
Recovery of vocal tract geometry from formants for vowels and fricative consonants: using a midsagittal-to-area function conversion model
In: Institut de la communication parlée <Grenoble>. Cahiers de l'ICP. Rapport de recherche. - Grenoble 5 (1996-1997), 57-65
BLLDB


Catalogues: 3
Bibliographies: 13
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 13