
Search in the Catalogues and Directories

Hits 1 – 20 of 50,634

1
Fall 2021
In: Scientia (2021-10-15)
BASE
2
A guide to school services in speech-language pathology
Seidel, Courtney L.; Schraeder, Trici. - San Diego : Plural Publishing, 2022
BLLDB
UB Frankfurt Linguistik
3
Using Automatic Speech Recognition to Optimize Hearing-Aid Time Constants
In: ISSN: 1662-4548 ; EISSN: 1662-453X ; Frontiers in Neuroscience ; https://hal.archives-ouvertes.fr/hal-03627441 ; Frontiers in Neuroscience, Frontiers, 2022, 16 (779062), ⟨10.3389/fnins.2022.779062⟩ ; https://www.frontiersin.org/articles/10.3389/fnins.2022.779062/full (2022)
BASE
4
Retrieving speaker information from personalized acoustic models for speech recognition
In: IEEE ICASSP 2022 ; https://hal.archives-ouvertes.fr/hal-03539741 ; IEEE ICASSP 2022, 2022, Singapore (2022)
BASE
5
Emotional Speech Recognition Using Deep Neural Networks
In: ISSN: 1424-8220 ; Sensors ; https://hal.archives-ouvertes.fr/hal-03632853 ; Sensors, MDPI, 2022, 22 (4), pp.1414. ⟨10.3390/s22041414⟩ (2022)
Abstract: The expression of emotions in human communication plays a very important role in the information that needs to be conveyed to the partner. The forms of expression of human emotions are very rich. It could be body language, facial expressions, eye contact, laughter, and tone of voice. The languages of the world’s peoples are different, but even without understanding a language in communication, people can almost understand part of the message that the other partner wants to convey with emotional expressions as mentioned. Among the forms of human emotional expression, the expression of emotions through voice is perhaps the most studied. This article presents our research on speech emotion recognition using deep neural networks such as CNN, CRNN, and GRU. We used the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus for the study with four emotions: anger, happiness, sadness, and neutrality. The feature parameters used for recognition include the Mel spectral coefficients and other parameters related to the spectrum and the intensity of the speech signal. The data augmentation was used by changing the voice and adding white noise. The results show that the GRU model gave the highest average recognition accuracy of 97.47%. This result is superior to existing studies on speech emotion recognition with the IEMOCAP corpus.
Keyword: [INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]; CNN; CRNN; data augmentation; emotion; GRU; IEMOCAP; recognition; speech
URL: https://hal.archives-ouvertes.fr/hal-03632853/document
https://hal.archives-ouvertes.fr/hal-03632853
https://doi.org/10.3390/s22041414
https://hal.archives-ouvertes.fr/hal-03632853/file/sensors-22-01414-v2.pdf
BASE
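The abstract above outlines a pipeline of Mel spectral features, a GRU classifier over the four IEMOCAP emotion classes, and data augmentation by adding white noise. The short Python sketch below illustrates what such a pipeline can look like; it is not the authors' implementation, and the sampling rate, network sizes, and noise level are assumed placeholder values.

# Illustrative sketch only (assumed parameters): Mel-spectrogram features feeding a
# bidirectional GRU classifier for four emotion classes, with white-noise augmentation.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 16_000                       # assumed sampling rate
N_MELS = 64                                # assumed number of Mel bands
EMOTIONS = ["anger", "happiness", "sadness", "neutral"]   # classes named in the abstract

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=400, hop_length=160, n_mels=N_MELS)
to_db = torchaudio.transforms.AmplitudeToDB()

def add_white_noise(waveform: torch.Tensor, noise_level: float = 0.005) -> torch.Tensor:
    # Augmentation mentioned in the abstract: mix in low-level white noise.
    return waveform + noise_level * torch.randn_like(waveform)

class GRUEmotionClassifier(nn.Module):
    def __init__(self, n_mels: int = N_MELS, hidden: int = 128, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.gru = nn.GRU(input_size=n_mels, hidden_size=hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) -> log-Mel spectrogram: (batch, n_mels, frames)
        feats = to_db(mel(waveform)).transpose(1, 2)   # (batch, frames, n_mels)
        out, _ = self.gru(feats)
        return self.head(out.mean(dim=1))              # average over time, then classify

if __name__ == "__main__":
    model = GRUEmotionClassifier()
    batch = add_white_noise(torch.randn(2, SAMPLE_RATE * 3))   # two fake 3-second clips
    print(model(batch).shape)                                  # torch.Size([2, 4])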
6
The Impact of Removing Head Movements on Audio-visual Speech Enhancement
In: ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing ; https://hal.inria.fr/hal-03551610 ; ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE Signal Processing Society, May 2022, Singapore, Singapore. pp.1-5 (2022)
BASE
7
Regional variation in British English voice quality
In: English world-wide. - Amsterdam [etc.] : Benjamins 43 (2022) 1, 96-123
BLLDB
8
Efficient localization of the cortical language network and its functional neuroanatomy in dyslexia
Lee, Jayden J. - 2022
BASE
9
How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing
In: ISSN: 0093-934X ; EISSN: 1090-2155 ; Brain and Language ; https://hal.archives-ouvertes.fr/hal-03472191 ; Brain and Language, Elsevier, 2022, 225, ⟨10.1016/j.bandl.2021.105058⟩ (2022)
BASE
10
Multistream neural architectures for cued-speech recognition using a pre-trained visual feature extractor and constrained CTC decoding
In: ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing ; https://hal.archives-ouvertes.fr/hal-03578503 ; ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing, May 2022, Singapore (2022)
BASE
11
Multistream neural architectures for cued-speech recognition using a pre-trained visual feature extractor and constrained CTC decoding
In: ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing ; https://hal.archives-ouvertes.fr/hal-03578503 ; ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing, May 2022, Singapore (2022)
BASE
12
Hippocampal and auditory contributions to speech segmentation
In: ISSN: 0010-9452 ; Cortex ; https://hal.archives-ouvertes.fr/hal-03604957 ; Cortex, Elsevier, 2022, ⟨10.1016/j.cortex.2022.01.017⟩ (2022)
BASE
13
Speech Perception and Implementation in a Virtual Medical Assistant
In: 6. ICAART – 14th International Conference on Agents and Artificial Intelligence ; https://hal.archives-ouvertes.fr/hal-03621550 ; 6. ICAART – 14th International Conference on Agents and Artificial Intelligence, Feb 2022, Vienna, Austria (2022)
BASE
14
Évaluation de la perception des sons de parole chez les populations pédiatriques : réflexion sur les épreuves existantes [Assessing speech-sound perception in pediatric populations: reflections on existing tests]
In: ISSN: 0298-6477 ; EISSN: 2117-7155 ; Glossa ; https://hal.archives-ouvertes.fr/hal-03646757 ; Glossa, UNADREO - Union Nationale pour le Développement de la Recherche en Orthophonie, 2022, 132, pp.1-27 ; https://www.glossa.fr/index.php/glossa/article/view/1043 (2022)
BASE
15
Automatic generation of the complete vocal tract shape from the sequence of phonemes to be articulated
In: ISSN: 0167-6393 ; EISSN: 1872-7182 ; Speech Communication ; https://hal.univ-lorraine.fr/hal-03650212 ; Speech Communication, Elsevier : North-Holland, 2022, ⟨10.1016/j.specom.2022.04.004⟩ (2022)
BASE
16
Cross-lingual few-shot hate speech and offensive language detection using meta learning
In: ISSN: 2169-3536 ; EISSN: 2169-3536 ; IEEE Access ; https://hal.archives-ouvertes.fr/hal-03559484 ; IEEE Access, IEEE, 2022, 10, pp.14880-14896. ⟨10.1109/ACCESS.2022.3147588⟩ (2022)
BASE
17
Fine-tuning pre-trained models for Automatic Speech Recognition: experiments on a fieldwork corpus of Japhug (Trans-Himalayan family)
In: https://halshs.archives-ouvertes.fr/halshs-03647315 ; 2022 (2022)
BASE
18
Intelligibility and comprehensibility: A Delphi consensus study
In: ISSN: 1368-2822 ; EISSN: 1460-6984 ; International Journal of Language and Communication Disorders ; https://hal.archives-ouvertes.fr/hal-03543198 ; International Journal of Language and Communication Disorders, Wiley, 2022, 57 (1), pp.21 - 41. ⟨10.1111/1460-6984.12672⟩ ; https://onlinelibrary.wiley.com/doi/10.1111/1460-6984.12672 (2022)
BASE
19
Vocal size exaggeration may have contributed to the origins of vocalic complexity
In: ISSN: 0962-8436 ; EISSN: 1471-2970 ; Philosophical Transactions of the Royal Society B: Biological Sciences ; https://hal.archives-ouvertes.fr/hal-03501105 ; Philosophical Transactions of the Royal Society B: Biological Sciences, Royal Society, The, 2022, 377 (1841), ⟨10.1098/rstb.2020.0401⟩ (2022)
BASE
20
Investigating the locus of transposed-phoneme effects using cross-modal priming
In: ISSN: 0001-6918 ; EISSN: 1873-6297 ; Acta Psychologica ; https://hal.archives-ouvertes.fr/hal-03619856 ; Acta Psychologica, Elsevier, 2022, 226, pp.103578. ⟨10.1016/j.actpsy.2022.103578⟩ (2022)
BASE


Facets: Catalogues, Bibliographies, Linked Open Data catalogues, Online resources, Open access documents