
Search in the Catalogues and Directories

Hits 1 – 15 of 15

1
Wukong: 100 Million Large-scale Chinese Cross-modal Pre-training Dataset and A Foundation Framework ...
Gu, Jiaxi; Meng, Xiaojun; Lu, Guansong. - : arXiv, 2022
2
Compilable Neural Code Generation with Compiler Feedback ...
Wang, Xin; Wang, Yasheng; Wan, Yao. - : arXiv, 2022
3
Leveraging Part-of-Speech Tagging Features and a Novel Regularization Strategy for Chinese Medical Named Entity Recognition
In: Mathematics; Volume 10; Issue 9; Pages: 1386 (2022)
4
Training Multilingual Pre-trained Language Model with Byte-level Subwords ...
Wei, Junqiu; Liu, Qun; Guo, Yinpeng. - : arXiv, 2021
5
JABER and SABER: Junior and Senior Arabic BERt ...
6
Learning Multilingual Representation for Natural Language Understanding with Enhanced Cross-Lingual Supervision ...
Guo, Yinpeng; Li, Liangyou; Jiang, Xin. - : arXiv, 2021
7
LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation ...
8
Improving Unsupervised Question Answering via Summarization-Informed Question Generation ...
9
CCA-MDD: A Coupled Cross-Attention based Framework for Streaming Mispronunciation Detection and Diagnosis ...
10
AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models ...
11
DyLex: Incorporating Dynamic Lexicons into BERT for Sequence Labeling ...
Wang, Baojun; Zhang, Zhao; Xu, Kun. - : arXiv, 2021
12
Zero-Shot Paraphrase Generation with Multilingual Language Models ...
Abstract: Leveraging multilingual parallel texts to automatically generate paraphrases has drawn much attention, as the size of high-quality paraphrase corpora is limited. Round-trip translation, also known as the pivoting method, is a typical approach to this end. However, we notice that the pivoting process involves multiple machine translation models and is likely to incur semantic drift during the two-step translations. In this paper, inspired by Transformer-based language models, we propose a simple and unified paraphrasing model, which is purely trained on multilingual parallel data and can conduct zero-shot paraphrase generation in one step. Compared with the pivoting approach, paraphrases generated by our model are more semantically similar to the input sentence. Moreover, since our model shares the same architecture as GPT (Radford et al., 2018), we are able to pre-train the model on a large-scale non-parallel corpus, which further improves the fluency of the output sentences. In addition, we introduce the mechanism ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1911.03597
https://dx.doi.org/10.48550/arxiv.1911.03597
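(A minimal sketch contrasting the pivoting and one-step approaches described in this abstract follows the hit list below.)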
13
Decomposable Neural Paraphrase Generation ...
Li, Zichao; Jiang, Xin; Shang, Lifeng. - : arXiv, 2019
14
Affective Neural Response Generation ...
15
Deep Active Learning for Dialogue Generation ...
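
Hit 12's abstract contrasts round-trip (pivoting) paraphrase generation, which chains two machine translation models, with a single multilingual model that paraphrases in one decoding step. The minimal Python sketch below illustrates only that contrast; the translate/generate callables, the "<en>" language tag, and the stand-in lambdas are hypothetical placeholders, not the paper's interface or implementation.

```python
# Minimal sketch: round-trip pivoting vs. one-step multilingual paraphrasing.
# All model callables here are hypothetical placeholders, not the paper's code.
from typing import Callable


def pivot_paraphrase(sentence: str,
                     src_to_pivot: Callable[[str], str],
                     pivot_to_src: Callable[[str], str]) -> str:
    """Two MT models, two decoding steps; drift from the first step
    propagates into the second (the semantic-drift issue)."""
    pivot = src_to_pivot(sentence)      # e.g. en -> fr
    return pivot_to_src(pivot)          # fr -> en


def one_step_paraphrase(sentence: str,
                        generate: Callable[[str, str], str],
                        language_tag: str = "<en>") -> str:
    """One multilingual model; setting the target-language tag to the
    source language turns 'translation' into paraphrasing in one step."""
    return generate(language_tag, sentence)


if __name__ == "__main__":
    # Trivial stand-ins so the sketch runs; real models would replace these.
    en_to_fr = lambda s: f"[fr] {s}"
    fr_to_en = lambda s: s.replace("[fr] ", "")
    multilingual = lambda tag, s: f"{tag} paraphrase of: {s}"

    src = "The size of high-quality paraphrase corpora is limited."
    print(pivot_paraphrase(src, en_to_fr, fr_to_en))
    print(one_step_paraphrase(src, multilingual))
```

Any error introduced in the first translation step of pivot_paraphrase is carried into the second, which is the semantic drift the abstract mentions; the one-step variant avoids the intermediate language entirely.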

Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 15