
Search in the Catalogues and Directories

Page: 1 2 3 4 5
Hits 1 – 20 of 89

1
Universal Conditional Masked Language Pre-training for Neural Machine Translation
Li, Pengfei; Li, Liangyou; Zhang, Meng. - : arXiv, 2022
2
Compilable Neural Code Generation with Compiler Feedback
Wang, Xin; Wang, Yasheng; Wan, Yao. - : arXiv, 2022
3
Sub-Character Tokenization for Chinese Pretrained Language Models
4
Training Multilingual Pre-trained Language Model with Byte-level Subwords
Wei, Junqiu; Liu, Qun; Guo, Yinpeng. - : arXiv, 2021
5
Multilingual Speech Translation with Unified Transformer: Huawei Noah's Ark Lab at IWSLT 2021
Zeng, Xingshan; Li, Liangyou; Liu, Qun. - : arXiv, 2021
Abstract: This paper describes the system submitted to the IWSLT 2021 Multilingual Speech Translation (MultiST) task from Huawei Noah's Ark Lab. We use a unified transformer architecture for our MultiST model, so that the data from different modalities (i.e., speech and text) and different tasks (i.e., Speech Recognition, Machine Translation, and Speech Translation) can be exploited to enhance the model's ability. Specifically, speech and text inputs are first fed to different feature extractors to extract acoustic and textual features, respectively. Then, these features are processed by a shared encoder--decoder architecture. We apply several training techniques to improve the performance, including multi-task learning, task-level curriculum learning, and data augmentation. Our final system achieves significantly better results than bilingual baselines on supervised language pairs and yields reasonable results on zero-shot language pairs.
Keyword: Audio and Speech Processing eess.AS; Computation and Language cs.CL; FOS Computer and information sciences; FOS Electrical engineering, electronic engineering, information engineering; Sound cs.SD
URL: https://dx.doi.org/10.48550/arxiv.2106.00197
https://arxiv.org/abs/2106.00197
6
JABER and SABER: Junior and Senior Arabic BERT
7
Learning Multilingual Representation for Natural Language Understanding with Enhanced Cross-Lingual Supervision
Guo, Yinpeng; Li, Liangyou; Jiang, Xin. - : arXiv, 2021
8
LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation
9
Uncertainty-Aware Balancing for Multilingual and Multi-Domain Neural Machine Translation Training
Wu, Minghao; Li, Yitong; Zhang, Meng. - : arXiv, 2021
10
Improving Unsupervised Question Answering via Summarization-Informed Question Generation
11
CCA-MDD: A Coupled Cross-Attention based Framework for Streaming Mispronunciation detection and diagnosis
12
A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering
13
HyKnow: End-to-End Task-Oriented Dialog Modeling with Hybrid Knowledge Management
14
AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models
15
TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models
16
Two Parents, One Child: Dual Transfer for Low-Resource Neural Machine Translation
17
RealTranS: End-to-End Simultaneous Speech Translation with Convolutional Weighted-Shrinking Transformer
18
Uncertainty-Aware Balancing for Multilingual and Multi-Domain Neural Machine Translation Training
19
DyLex: Incorporating Dynamic Lexicons into BERT for Sequence Labeling
Wang, Baojun; Zhang, Zhao; Xu, Kun. - : arXiv, 2021
20
Document Graph for Neural Machine Translation


Catalogues
Bibliographies
Linked Open Data catalogues
Online resources
Open access documents: 79
© 2013 - 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings