
Search in the Catalogues and Directories

Hits 1 – 4 of 4

1
Libri-Light: A Benchmark for ASR with Limited or No Supervision
In: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, May 2020, Barcelona / Virtual, Spain, pp. 7669-7673, ⟨10.1109/ICASSP40776.2020.9052942⟩ (2020); https://hal.archives-ouvertes.fr/hal-02959460
Abstract: We introduce a new collection of spoken English audio suitable for training speech recognition systems under limited or no supervision. It is derived from open-source audio books from the LibriVox project. It contains over 60K hours of audio, which is, to our knowledge, the largest freely available corpus of speech. The audio has been segmented using voice activity detection and is tagged with SNR, speaker ID and genre descriptions. Additionally, we provide baseline systems and evaluation metrics working under three settings: (1) the zero-resource/unsupervised setting (ABX), (2) the semi-supervised setting (PER, CER) and (3) the distant supervision setting (WER). Settings (2) and (3) use limited textual resources (10 minutes to 10 hours) aligned with the speech. Setting (3) uses large amounts of unaligned text. They are evaluated on the standard LibriSpeech dev and test sets for comparison with the supervised state of the art. Index Terms: unsupervised and semi-supervised learning, distant supervision, dataset, zero- and low-resource ASR.
Keyword: [INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]; [INFO.INFO-SD]Computer Science [cs]/Sound [cs.SD]
URL: https://doi.org/10.1109/ICASSP40776.2020.9052942
https://hal.archives-ouvertes.fr/hal-02959460/document
https://hal.archives-ouvertes.fr/hal-02959460
https://hal.archives-ouvertes.fr/hal-02959460/file/1912.07875.pdf
BASE
2
Data Augmenting Contrastive Learning of Speech Representations in the Time Domain
In: SLT 2020 - IEEE Spoken Language Technology Workshop, Dec 2020, Shenzhen / Virtual, China (2020); https://hal.archives-ouvertes.fr/hal-03070321
BASE
3
Unsupervised pretraining transfers well across languages
In: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, May 2020, Barcelona / Virtual, Spain, pp. 7414-7418, ⟨10.1109/ICASSP40776.2020.9054548⟩ (2020); https://hal.archives-ouvertes.fr/hal-02959418
BASE
4
Unsupervised pretraining transfers well across languages ...
BASE

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 4