1. Investigating alignment interpretability for low-resource NMT
In: Machine Translation, Springer Verlag, 2021. ISSN 0922-6567; EISSN 1573-0573. DOI: 10.1007/s10590-020-09254-w. https://hal.archives-ouvertes.fr/hal-03139744

2. Impact of Encoding and Segmentation Strategies on End-to-End Simultaneous Speech Translation
In: INTERSPEECH 2021, Aug 2021, Brno, Czech Republic. https://hal.archives-ouvertes.fr/hal-03372487

3. Alternate Endings: Improving Prosody for Incremental Neural TTS with Predicted Future Text Input
In: Interspeech 2021 - 22nd Annual Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic. pp. 3865-3869. DOI: 10.21437/Interspeech.2021-275. https://hal.archives-ouvertes.fr/hal-03372802

4. LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech
In: INTERSPEECH 2021: Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic. https://hal.archives-ouvertes.fr/hal-03317730

7. Contribution of syntactic information to the compositional generalization abilities of convolutional seq2seq models (original French title: "Contribution d'informations syntaxiques aux capacités de généralisation compositionnelle des modèles seq2seq convolutifs")
In: Actes de la 28e Conférence sur le Traitement Automatique des Langues Naturelles, Volume 1: conférence principale, 2021, Lille, France. pp. 134-141. https://hal.archives-ouvertes.fr/hal-03265890

8. Lightweight Adapter Tuning for Multilingual Speech Translation
In: The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Aug 2021, Bangkok (virtual), Thailand. https://hal.archives-ouvertes.fr/hal-03294912

9. Do Multilingual Neural Machine Translation Models Contain Language Pair Specific Attention Heads?
In: Findings of ACL 2021, Aug 2021, Bangkok (virtual), Thailand. https://hal.archives-ouvertes.fr/hal-03299010

10. User-friendly automatic transcription of low-resource languages: Plugging ESPnet into Elpis
In: ComputEL-4: Fourth Workshop on the Use of Computational Methods in the Study of Endangered Languages, Mar 2021, Hawai‘i, United States. https://halshs.archives-ouvertes.fr/halshs-03030529

12. Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually Grounded Speech
In: Conference on Natural Language Learning (CoNLL), Nov 2020, Virtual, France. https://hal.archives-ouvertes.fr/hal-02962275

13. Investigating Language Impact in Bilingual Approaches for Computational Language Documentation
In: Proceedings of the 1st Joint SLTU and CCURL Workshop (SLTU-CCURL 2020), LREC 2020, May 2020, Marseille, France. https://hal.archives-ouvertes.fr/hal-02895907

14. The Zero Resource Speech Challenge 2020: Discovering discrete subword and word units
In: Interspeech 2020 - Conference of the International Speech Communication Association, Oct 2020, Shanghai / Virtual, China. https://hal.archives-ouvertes.fr/hal-02962224

15. Speech technology for unwritten languages
In: IEEE/ACM Transactions on Audio, Speech and Language Processing, Institute of Electrical and Electronics Engineers, 2020. ISSN 2329-9290; EISSN 2329-9304. DOI: 10.1109/TASLP.2020.2973896. https://hal.inria.fr/hal-02480675

16. MaSS: A Large and Clean Multilingual Corpus of Sentence-aligned Spoken Utterances Extracted from the Bible
In: Proceedings of The 12th Language Resources and Evaluation Conference, May 2020, Marseille, France. pp. 6486-6493. https://hal.archives-ouvertes.fr/hal-02611059

17. Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
In: COLING 2020 (long paper), Dec 2020, Virtual, Spain. https://hal.archives-ouvertes.fr/hal-02991564

18. ON-TRAC Consortium for End-to-End and Simultaneous Speech Translation Challenge Tasks at IWSLT 2020
In: Proceedings of the 17th International Conference on Spoken Language Translation, Jul 2020, Seattle, WA, United States. pp. 35-43. DOI: 10.18653/v1/2020.iwslt-1.2. https://hal.archives-ouvertes.fr/hal-02895893

19. A Data-Efficient End-to-End Spoken Language Understanding Architecture
In: International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020, Barcelona, Spain. https://hal.archives-ouvertes.fr/hal-03094850

Abstract:
End-to-end architectures have recently been proposed for spoken language understanding (SLU) and semantic parsing. Given large amounts of data, these models jointly learn acoustic and linguistic-sequential features. While such architectures give very good results for domain, intent and slot detection, applying them to more complex semantic chunking and tagging tasks is harder; in many cases, models are therefore combined with an external language model to enhance their performance. In this paper we introduce a data-efficient system that is trained end-to-end, with no additional pre-trained external module. One key feature of our approach is an incremental training procedure in which acoustic, language and semantic models are trained sequentially, one after the other. The proposed model has a reasonable size and achieves results competitive with the state of the art while using a small training dataset. In particular, we reach a 24.02% Concept Error Rate (CER) on MEDIA/test while training on MEDIA/train without any additional data.

Keywords: data efficiency; End-to-End SLU; joint learning; MEDIA corpus; sequence-to-sequence models

URL: https://hal.archives-ouvertes.fr/hal-03094850 https://hal.archives-ouvertes.fr/hal-03094850/document https://hal.archives-ouvertes.fr/hal-03094850/file/2020_ICASSP_EndToEndSLU.pdf

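The incremental training procedure this abstract describes (acoustic, language and semantic models trained sequentially, one after the other) can be sketched schematically. The toy below is an assumption-laden illustration of the general stage-wise idea, not the paper's actual architecture or code: each "model" is a single linear map, targets are random placeholders, and at each stage only the newest component's weights are updated while earlier stages stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the three components named in the abstract.
W_acoustic = 0.1 * rng.normal(size=(40, 32))  # "acoustic model"
W_language = 0.1 * rng.normal(size=(32, 32))  # "language model"
W_semantic = 0.1 * rng.normal(size=(32, 8))   # "semantic model"

def mse_grad_step(W, inp, target, lr=0.01):
    """One gradient-descent step on the MSE loss, updating W only."""
    err = inp @ W - target               # (batch, out)
    grad = inp.T @ err / len(inp)        # gradient w.r.t. W
    return W - lr * grad

x  = rng.normal(size=(16, 40))           # placeholder acoustic features
t1 = rng.normal(size=(16, 32))           # placeholder stage targets
t2 = rng.normal(size=(16, 32))
t3 = rng.normal(size=(16, 8))

# Stage 1: train the acoustic map alone.
for _ in range(20):
    W_acoustic = mse_grad_step(W_acoustic, x, t1)
frozen = W_acoustic.copy()

# Stage 2: acoustic map frozen; only the language map is updated on top of it.
h1 = x @ W_acoustic
for _ in range(20):
    W_language = mse_grad_step(W_language, h1, t2)

# Stage 3: both earlier stages frozen; only the semantic map is updated.
h2 = h1 @ W_language
for _ in range(20):
    W_semantic = mse_grad_step(W_semantic, h2, t3)

# Earlier stages are untouched by later stages.
assert np.array_equal(W_acoustic, frozen)
```

The point of the sketch is only the control flow: each stage reuses the frozen output of the previous one as its input, so later components never disturb what earlier components learned.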
20. Online Versus Offline NMT Quality: An In-depth Analysis on English–German and German–English
In: COLING 2020 - 28th International Conference on Computational Linguistics, Dec 2020, Virtual, Spain. pp. 5047-5058. DOI: 10.18653/v1/2020.coling-main.443. https://hal.archives-ouvertes.fr/hal-02991539