1 |
XTREME-S: Evaluating Cross-lingual Speech Representations ...

2 |
Self-supervised Learning with Random-projection Quantizer for Speech Recognition ...

3 |
Unsupervised Data Selection via Discrete Speech Representation for ASR ...

4 |
mSLAM: Massively multilingual joint pre-training for speech and text ...

5 |
Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations ...

6 |
MAESTRO: Matched Speech Text Representations through Modality Matching ...

7 |
LPInsider: a webserver for lncRNA–protein interaction extraction from the literature
In: BMC Bioinformatics (2022)

8 |
Observation of new excited $B^0_s$ states
In: Eur. Phys. J. C 81 (7), 601 (2021). ⟨10.1140/epjc/s10052-021-09305-3⟩ ; https://hal.archives-ouvertes.fr/hal-03010999

9 |
Parental use of relational language with 3-year-olds in math and spatial activities: A cross-cultural perspective
Zhang, Yu. eScholarship, University of California, 2021

10 |
Joint Unsupervised and Supervised Training for Multilingual ASR ...

11 |
Scaling End-to-End Models for Large-Scale Multilingual ASR ...

12 |
Distantly-Supervised Named Entity Recognition with Noise-Robust Learning and Language Model Augmented Self-Training ...

13 |
Improving Confidence Estimation on Out-of-Domain Data for End-to-End Speech Recognition ...

Abstract:
As end-to-end automatic speech recognition (ASR) models reach promising performance, various downstream tasks rely on good confidence estimators for these systems. Recent research has shown that model-based confidence estimators have a significant advantage over using the output softmax probabilities. If the input data to the speech recogniser comes from mismatched acoustic and linguistic conditions, the ASR performance and the corresponding confidence estimators may exhibit severe degradation. Since confidence models are often trained on the same in-domain data as the ASR, generalising to out-of-domain (OOD) scenarios is challenging. Keeping the ASR model untouched, this paper proposes two approaches to improve model-based confidence estimators on OOD data: using pseudo transcriptions and an additional OOD language model. With an ASR model trained on LibriSpeech, experiments show that the proposed methods can greatly improve the confidence metrics on the TED-LIUM and Switchboard datasets while preserving ...
Comment: Accepted as a conference paper at ICASSP 2022 ...

Keywords:
Audio and Speech Processing (eess.AS); Machine Learning (cs.LG); FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering

URL: https://arxiv.org/abs/2110.03327 ; https://dx.doi.org/10.48550/arxiv.2110.03327
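
The softmax-probability baseline that the abstract contrasts with model-based estimators can be sketched as follows. This is an illustration of that baseline only, not the paper's proposed method; the logit values and function names are hypothetical, and a common convention (utterance confidence as the mean of per-token peak softmax probabilities) is assumed.

```python
import math

def softmax(logits):
    # Numerically stable softmax over one decoding step's logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_confidence(step_logits):
    # Baseline utterance confidence: mean of the per-step
    # maximum softmax probability (the "peak" for each token).
    peaks = [max(softmax(step)) for step in step_logits]
    return sum(peaks) / len(peaks)

# Hypothetical decoder logits for a two-token hypothesis.
logits = [[2.0, 0.1, -1.0],
          [0.5, 0.4, 0.3]]
score = softmax_confidence(logits)
```

A peaked distribution (one dominant logit) yields a score near 1, while a flat distribution yields a score near 1/vocabulary size, which is why this baseline tends to be overconfident and motivates trained confidence models.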
15 |
Book Review: Data Collection Research Methods in Applied Linguistics
In: Front Psychol (2021)

19 |
Offline Handwritten Chinese Text Recognition with Convolutional Neural Networks ...

20 |
Is POS Tagging Necessary or Even Helpful for Neural Dependency Parsing? ...