
Search in the Catalogues and Directories

Hits 1 – 20 of 83

1
Voice quality : the laryngeal articulator model
Esling, John H.; Benner, Allison; Crevier-Buchman, Lise. - Cambridge, United Kingdom : Cambridge University Press, 2019
BLLDB
UB Frankfurt Linguistik
2
The Airbus Air Traffic Control speech recognition 2018 challenge: towards ATC automatic transcription and call sign detection
In: Proceedings of INTERSPEECH 2019, 20th Annual Conference of the International Speech Communication Association, Sep 2019, Graz, Austria, pp. 2993-2997 (2019). https://hal.archives-ouvertes.fr/hal-02419437
BASE
3
Challenges in Audio Processing of Terrorist-Related Data
In: International Conference on Multimedia Modeling, Springer, Jan 2019, Thessaloniki, Greece (2019). https://hal.archives-ouvertes.fr/hal-02415176
BASE
4
Char+CV-CTC: Combining Graphemes and Consonant/Vowel Units for CTC-Based ASR Using Multitask Learning
In: Proceedings of INTERSPEECH 2019, 20th Annual Conference of the International Speech Communication Association, Sep 2019, Graz, Austria, pp. 1611-1615 (2019). https://hal.archives-ouvertes.fr/hal-02419431
BASE
5
Interests of using Automatic Speech recognition for Speech-Language Therapists
In: World Congress of the International Association of Logopedics and Phoniatrics (IALP), Aug 2019, Taipei, Taiwan, electronic medium (2019). https://hal.archives-ouvertes.fr/hal-03012571 ; http://www.ialptaipei2019.org/
BASE
6
Challenges in Audio Processing of Terrorist-Related Data
In: International Conference on Multimedia Modeling, Springer, Jan 2019, Thessaloniki, Greece (2019). https://hal.archives-ouvertes.fr/hal-02387373
BASE
7
Extractive Text-Based Summarization of Arabic videos: Issues, Approaches and Evaluations
In: ICALP: International Conference on Arabic Language Processing, Oct 2019, Nancy, France, pp. 65-78, ⟨10.1007/978-3-030-32959-4_5⟩ (2019). https://hal.archives-ouvertes.fr/hal-02314238
BASE
8
Adapting a FrameNet Semantic Parser for Spoken Language Understanding Using Adversarial Learning
In: Interspeech 2019, Sep 2019, Graz, Austria, pp. 799-803, ⟨10.21437/Interspeech.2019-2732⟩ (2019). https://hal.archives-ouvertes.fr/hal-02298417
BASE
9
Deception in Spoken Dialogue: Classification and Individual Differences
BASE
10
A Perceptual Study of CV Syllables in both Spoken and Whistled Speech: a Tashlhiyt Berber Perspective
In: Interspeech 2019 - 20th Annual Conference of the International Speech Communication Association, Sep 2019, Graz, Austria, ⟨10.21437/Interspeech.2019-2251⟩ (2019). https://hal.archives-ouvertes.fr/hal-02371794
BASE
11
Comparing unsupervised speech learning directly to human performance in speech perception
In: Proceedings of the Annual Conference of the Cognitive Science Society (CogSci 2019 - 41st Annual Meeting of the Cognitive Science Society), Jul 2019, Montréal, Canada (2019). https://hal.archives-ouvertes.fr/hal-02274499
BASE
12
Issues in L2 phonological processing ; Questions sur le traitement phonologique en langue seconde
Melnik, Gerda Ana. - : HAL CCSD, 2019
In: https://tel.archives-ouvertes.fr/tel-02304656 ; Linguistics. Université Paris sciences et lettres, 2019. English. ⟨NNT : 2019PSLEE007⟩ (2019)
BASE
13
Using automatic speech recognition for the prediction of impaired speech identification
In: 11th Speech in Noise Workshop (SPiN 2019), Jan 2019, Ghent, Belgium (2019). https://hal.archives-ouvertes.fr/hal-02976603 ; https://spin2019.be/?p=program&id=88
BASE
14
Learning representations of speech from the raw waveform ; Apprentissage de représentations de la parole à partir du signal brut
Zeghidour, Neil. - : HAL CCSD, 2019
In: https://tel.archives-ouvertes.fr/tel-02278616 ; Machine Learning [cs.LG]. Université Paris sciences et lettres, 2019. English. ⟨NNT : 2019PSLEE004⟩ (2019)
BASE
15
Phonetic lessons from automatic phonemic transcription: preliminary reflections on Na (Sino-Tibetan) and Tsuut’ina (Dene) data
In: ICPhS XIX (19th International Congress of Phonetic Sciences), Aug 2019, Melbourne, Australia (2019). https://halshs.archives-ouvertes.fr/halshs-02059313 ; https://icphs2019.org/icphs2019-fullpapers/
BASE
16
The role of working memory for syntactic formulation in language production.
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, iss. 10 (2019)
BASE
17
Privacy-Preserving Adversarial Representation Learning in ASR: Reality or Illusion?
In: INTERSPEECH 2019 - 20th Annual Conference of the International Speech Communication Association, Sep 2019, Graz, Austria (2019). https://hal.inria.fr/hal-02166434
BASE
18
Summarizing videos into a target language: Methodology, architectures and evaluation
In: Journal of Intelligent and Fuzzy Systems (ISSN: 1064-1246; EISSN: 1875-8967), IOS Press, 2019, 1, pp. 1-12, ⟨10.3233/JIFS-179350⟩ (2019). https://hal.archives-ouvertes.fr/hal-02271287
BASE
19
Automatic speech recognition in Laryngology & Phoniatrics practice
In: Congress of European ORL-HNS, 2019, Bruxelles, Belgium (2019). https://hal.archives-ouvertes.fr/hal-02172743
BASE
20
Speech Emotion Recognition: Recurrent Neural Networks compared to SVM and Linear Regression
In: Alessandra Lintas; Stefano Rovetta; Paul F.M.J. Verschure; Alessandro E.P. Villa (eds.): Artificial Neural Networks and Machine Learning – ICANN 2017, Lecture Notes in Computer Science 10613, Springer International Publishing, pp. 451-453, 2019. https://hal.archives-ouvertes.fr/hal-02432632 ; https://link.springer.com/book/10.1007%2F978-3-319-68600-4?page=2#toc
Abstract: Proceedings of the 26th International Conference on Artificial Neural Networks, Alghero, Italy, September 11-14, 2017 ; International audience ; Emotion recognition in spoken dialogues has been gaining increasing interest in recent years. Speech emotion recognition (SER) is a challenging research area in the field of Human-Computer Interaction (HCI). It refers to the ability to detect the current emotional state of a human being from his or her voice. SER has potentially wide applications, such as interfaces with robots, banking, call centers, in-car systems, computer games, etc. In our research we are interested in how emotion recognition can enhance the quality of teaching, both for classroom orchestration and for e-learning. Integrating SER into a computer-aided teaching system can guide the teacher in deciding which subjects to teach and in developing strategies for managing emotions within the learning environment. In linguistic activity, information about students' emotional state can be extracted from their interaction and articulation, which is why the learner's emotional state should be considered in the language classroom. In general, SER is a computational task consisting of two major parts: feature extraction and machine classification of emotion. The questions that arise here are: Which acoustic features are needed for the most robust automatic recognition of a speaker's emotion? Which method is most appropriate for classification? How does the database used influence the recognition of emotion in speech?
Keyword: [INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]; [INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]; [INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE]; Linear Regression; MFCC; Modulation Spectral Features; Recurrent Neural Networks; Speech Emotion Recognition; SVM
URL: https://hal.archives-ouvertes.fr/hal-02432632
https://hal.archives-ouvertes.fr/hal-02432632/file/ICANN2017_KL.pdf
https://hal.archives-ouvertes.fr/hal-02432632/document
BASE
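The abstract above describes SER as a two-stage pipeline: acoustic feature extraction followed by machine classification, with MFCC and SVM among the record's keywords. As a minimal illustrative sketch of that pipeline (not the implementation of the cited paper), the Python snippet below extracts MFCC features with librosa and classifies them with a scikit-learn SVM; the file names, emotion labels, and the mean/standard-deviation pooling of frame-level MFCCs into one vector per utterance are assumptions made for the example.

```python
# Minimal SER pipeline sketch: MFCC feature extraction + SVM classification.
# Illustrative only -- not the implementation from the cited paper.
# Assumes librosa and scikit-learn are installed; file names and labels are hypothetical.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(wav_path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """Load one utterance and summarize its frame-level MFCCs as a fixed-size vector."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, n_frames)
    # Mean and standard deviation over time give a simple utterance-level descriptor.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training data: paths to labeled utterances and their emotion labels.
train_files = ["happy_01.wav", "angry_01.wav", "neutral_01.wav"]
train_labels = ["happy", "angry", "neutral"]

X_train = np.stack([utterance_features(f) for f in train_files])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, train_labels)

# Classify a new (hypothetical) utterance.
print(clf.predict(utterance_features("test_utterance.wav").reshape(1, -1)))
```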


Hits by source type:
Catalogues: 1
Bibliographies: 1
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 82