1 | Contribution d'informations syntaxiques aux capacités de généralisation compositionelle des modèles seq2seq convolutifs [Contribution of syntactic information to the compositional generalization abilities of convolutional seq2seq models]
In: Actes de la 28e Conférence sur le Traitement Automatique des Langues Naturelles (TALN), volume 1 : conférence principale, 2021, Lille, France. pp. 134-141. https://hal.archives-ouvertes.fr/hal-03265890

2 | Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually Grounded Speech
In: Conference on Natural Language Learning (CoNLL), Nov 2020, Virtual, France. https://hal.archives-ouvertes.fr/hal-02962275

3 | MaSS: A Large and Clean Multilingual Corpus of Sentence-aligned Spoken Utterances Extracted from the Bible
In: Proceedings of The 12th Language Resources and Evaluation Conference (LREC), May 2020, Marseille, France. pp. 6486-6493. https://hal.archives-ouvertes.fr/hal-02611059

Abstract:
The CMU Wilderness Multilingual Speech Dataset (Black, 2019) is a newly published multilingual speech dataset based on recorded readings of the New Testament. It provides data to build Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) models for potentially 700 languages. However, the fact that the source content (the Bible) is the same for all the languages has not been exploited to date. This article therefore proposes to add multilingual links between speech segments in different languages, and shares a large and clean dataset of 8,130 parallel spoken utterances across 8 languages (56 language pairs). We name this corpus MaSS (Multilingual corpus of Sentence-aligned Spoken utterances). The covered languages (Basque, English, Finnish, French, Hungarian, Romanian, Russian and Spanish) allow research on speech-to-speech alignment as well as on translation for typologically different language pairs. The quality of the final corpus is attested by a human evaluation performed on a corpus subset (100 utterances, 8 language pairs). Lastly, we showcase the usefulness of the final product on a bilingual speech retrieval task.

Keywords:
[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]; [INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]; [SCCO.LING]Cognitive science/Linguistics; [SHS.LANGUE]Humanities and Social Sciences/Linguistics; multilingual alignment; parallel speech corpus; speech retrieval; speech-to-speech alignment; speech-to-speech translation

URL: https://hal.archives-ouvertes.fr/hal-02611059/file/2020.lrec-1.799.pdf

5 | Word Recognition, Competition, and Activation in a Model of Visually Grounded Speech
In: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), Nov 2019, Hong Kong, China. pp. 339-348. DOI: 10.18653/v1/K19-1032. https://hal.archives-ouvertes.fr/hal-02359540

6 | Models of Visually Grounded Speech Signal Pay Attention to Nouns: A Bilingual Experiment on English and Japanese
In: International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2019, Brighton, United Kingdom. pp. 8618-8622. DOI: 10.1109/ICASSP.2019.8683069. https://hal.archives-ouvertes.fr/hal-02013984

7 | MaSS - Multilingual corpus of Sentence-aligned Spoken utterances ...

11 | Emergence of attention in a neural model of visually grounded speech
In: Learning Language in Humans and in Machines 2018 conference, Jul 2018, Paris, France. https://hal.archives-ouvertes.fr/hal-01970514

16 | SPEECH-COCO: 600k Visually Grounded Spoken Captions Aligned to MSCOCO Data Set ...