
Search in the Catalogues and Directories

Hits 1 – 20 of 28

1. Segmentation en mots faiblement supervisée pour la documentation automatique des langues [Weakly supervised word segmentation for automatic language documentation]. In: https://hal.archives-ouvertes.fr/hal-03477475 (2021). (BASE)
2. Do Multilingual Neural Machine Translation Models Contain Language Pair Specific Attention Heads? ... (BASE)
3. Lightweight Adapter Tuning for Multilingual Speech Translation ... Le, Hang; Pino, Juan; Wang, Changhan. arXiv, 2021. (BASE)
4. Multilingual Unsupervised Neural Machine Translation with Denoising Adapters ... (BASE)
5. Unsupervised Word Segmentation from Discrete Speech Units in Low-Resource Settings ... (BASE)
6. User-friendly automatic transcription of low-resource languages: Plugging ESPnet into Elpis. In: ComputEL-4: Fourth Workshop on the Use of Computational Methods in the Study of Endangered Languages (2020). https://halshs.archives-ouvertes.fr/halshs-03030529 ; https://computel-workshop.org/ (BASE)
7. A Data Efficient End-To-End Spoken Language Understanding Architecture ... (BASE)
8. Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually Grounded Speech ... (BASE)
9. Investigating Language Impact in Bilingual Approaches for Computational Language Documentation ... (BASE)
10. Controlling Utterance Length in NMT-based Word Segmentation with Attention ... (BASE)
11. MaSS - Multilingual corpus of Sentence-aligned Spoken utterances ... (BASE)
12. MaSS - Multilingual corpus of Sentence-aligned Spoken utterances ... (BASE)
13. How Does Language Influence Documentation Workflow? Unsupervised Word Discovery Using Translations in Multiple Languages ... (BASE)
14. Word Recognition, Competition, and Activation in a Model of Visually Grounded Speech ...
Abstract: In this paper, we study how word-like units are represented and activated in a recurrent neural model of visually grounded speech. The model used in our experiments is trained to project an image and its spoken description in a common representation space. We show that a recurrent model trained on spoken sentences implicitly segments its input into word-like units and reliably maps them to their correct visual referents. We introduce a methodology originating from linguistics to analyse the representation learned by neural networks -- the gating paradigm -- and show that the correct representation of a word is only activated if the network has access to the first phoneme of the target word, suggesting that the network does not rely on a global acoustic pattern. Furthermore, we find that not all speech frames (MFCC vectors in our case) play an equal role in the final encoded representation of a given word; some frames have a crucial effect on it. Finally, we suggest that word representation could be ... : Accepted at CoNLL2019 ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)
URL: https://dx.doi.org/10.48550/arxiv.1909.08491 ; https://arxiv.org/abs/1909.08491 (BASE)
(A minimal illustrative sketch of the shared representation space described in this abstract follows the hit list below.)
15. Models of Visually Grounded Speech Signal Pay Attention To Nouns: a Bilingual Experiment on English and Japanese ... (BASE)
16. Synthetically Spoken STAIR ... (BASE)
17. Synthetically Spoken STAIR ... (BASE)
18. Linguistic unit discovery from multi-modal inputs in unwritten languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop ... (BASE)
19. Unsupervised Word Segmentation from Speech with Attention ... (BASE)
20. SPEECH-COCO ... (BASE)
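
The abstract of hit 14 describes a model trained to project an image and its spoken description (a sequence of MFCC frames) into a common representation space. Below is a minimal, illustrative sketch of that idea, assuming a PyTorch-style bi-encoder trained with a margin-based ranking loss; the module names, dimensions, and exact loss form are assumptions for illustration, not the authors' implementation.

# Minimal sketch (not the authors' code): a bi-encoder that maps an image
# feature vector and a spoken caption (MFCC frames) into one embedding space,
# trained with a margin-based ranking loss. All sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeechEncoder(nn.Module):
    """Recurrent encoder over MFCC frames -> fixed-size utterance embedding."""
    def __init__(self, n_mfcc=13, hidden=512, emb=256):
        super().__init__()
        self.rnn = nn.GRU(n_mfcc, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, emb)

    def forward(self, mfcc):                 # mfcc: (batch, frames, n_mfcc)
        _, h = self.rnn(mfcc)                # h: (1, batch, hidden)
        return F.normalize(self.proj(h[-1]), dim=-1)


class ImageEncoder(nn.Module):
    """Projects precomputed image features (e.g. CNN activations) into the space."""
    def __init__(self, feat=2048, emb=256):
        super().__init__()
        self.proj = nn.Linear(feat, emb)

    def forward(self, feats):                # feats: (batch, feat)
        return F.normalize(self.proj(feats), dim=-1)


def ranking_loss(speech_emb, image_emb, margin=0.2):
    """Hinge loss pushing matching speech/image pairs above mismatched ones."""
    sims = speech_emb @ image_emb.t()        # (batch, batch) cosine similarities
    pos = sims.diag().unsqueeze(1)           # similarity of the true pairs
    cost_s = (margin + sims - pos).clamp(min=0)      # wrong image per caption
    cost_i = (margin + sims - pos.t()).clamp(min=0)  # wrong caption per image
    mask = torch.eye(sims.size(0), dtype=torch.bool)
    return (cost_s.masked_fill(mask, 0).mean()
            + cost_i.masked_fill(mask, 0).mean())


if __name__ == "__main__":
    speech_enc, image_enc = SpeechEncoder(), ImageEncoder()
    mfcc = torch.randn(8, 200, 13)           # 8 captions, 200 frames of 13 MFCCs
    img = torch.randn(8, 2048)               # 8 matching image feature vectors
    loss = ranking_loss(speech_enc(mfcc), image_enc(img))
    loss.backward()                          # gradients flow to both encoders
    print(float(loss))

In a setup like this, the gating analysis mentioned in the abstract amounts to feeding progressively longer prefixes of the MFCC sequence to the speech encoder and checking at which prefix the matching image becomes the nearest neighbour in the shared space.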


Hits by source type: Catalogues: 0; Bibliographies: 0; Linked Open Data catalogues: 0; Online resources: 0; Open access documents: 28