1. SMDT: Selective Memory-Augmented Neural Document Translation ...

2. StableMoE: Stable Routing Strategy for Mixture of Experts ...

4. Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt ...

5. On the Representation Collapse of Sparse Mixture of Experts ...

6. A Unified Strategy for Multilingual Grammatical Error Correction with Pre-trained Cross-Lingual Language Model ...

7. Towards Making the Most of Multilingual Pretraining for Zero-Shot Neural Machine Translation ...

8. Zero-shot Cross-lingual Transfer of Neural Machine Translation with Multilingual Pretrained Encoders ...

9. MT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs ...

10. Multilingual Machine Translation Systems from Microsoft for WMT21 Shared Task ...

11. DeltaLM: Encoder-Decoder Pre-training for Language Generation and Translation by Augmenting Pretrained Multilingual Encoders ...

12. XLM-E: Cross-lingual Language Model Pre-training via ELECTRA ...

13. How Does Distilled Data Complexity Impact the Quality and Confidence of Non-Autoregressive Machine Translation? ...

14. XLM-T: Scaling up Multilingual Machine Translation with Pretrained Cross-lingual Transformer Encoders ...

15. Deconvolution-Based Global Decoding for Neural Machine Translation ...
16. A Semantic Relevance Based Neural Network for Text Summarization and Text Simplification ...

Abstract:
Text summarization and text simplification are two major ways to simplify text for poor readers, including children, non-native speakers, and the functionally illiterate. Text summarization produces a brief summary of the main ideas of a text, while text simplification aims to reduce its linguistic complexity while retaining the original meaning. Recently, most approaches to text summarization and text simplification have been based on the sequence-to-sequence model, which has achieved considerable success in many text generation tasks. However, although the generated simplified texts are literally similar to the source texts, they often have low semantic relevance. In this work, our goal is to improve the semantic relevance between source texts and simplified texts for text summarization and text simplification. We introduce a Semantic Relevance Based neural model to encourage high semantic similarity between texts and summaries. In our model, the source text is represented by a gated attention encoder, while the ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://arxiv.org/abs/1710.02318
DOI: https://dx.doi.org/10.48550/arxiv.1710.02318
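Since entry 16 is the only expanded record, a minimal sketch of the idea its abstract describes may be useful: a sequence-to-sequence model whose training loss adds a semantic-relevance term that rewards cosine similarity between the source and summary representations. This is an illustrative assumption, not the authors' implementation; the class name SemanticRelevanceSeq2Seq, the plain GRU standing in for the paper's gated attention encoder, and the trade-off weight lam are all hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SemanticRelevanceSeq2Seq(nn.Module):
        def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, src, tgt_in):
            # Encode the source; the final hidden state serves as the
            # sentence-level source representation (the paper's gated
            # attention encoder is replaced by a plain GRU here).
            _, src_h = self.encoder(self.embed(src))
            dec_out, tgt_h = self.decoder(self.embed(tgt_in), src_h)
            logits = self.out(dec_out)
            # Also return sentence vectors for the relevance term.
            return logits, src_h[-1], tgt_h[-1]

    def semantic_relevance_loss(logits, tgt_out, src_vec, sum_vec, lam=0.5):
        # Token-level cross-entropy plus a term that rewards high cosine
        # similarity between source and summary vectors; lam is an
        # assumed trade-off weight, not a value from the paper.
        ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                             tgt_out.reshape(-1))
        relevance = F.cosine_similarity(src_vec, sum_vec, dim=-1).mean()
        return ce - lam * relevance

    # Usage with dummy data: shifted target tokens as decoder input.
    model = SemanticRelevanceSeq2Seq(vocab_size=10000)
    src = torch.randint(0, 10000, (8, 40))   # source token ids
    tgt = torch.randint(0, 10000, (8, 12))   # summary token ids
    logits, src_vec, sum_vec = model(src, tgt[:, :-1])
    loss = semantic_relevance_loss(logits, tgt[:, 1:], src_vec, sum_vec)
    loss.backward()

Subtracting the relevance term from the cross-entropy pushes the decoder's summary representation toward the source representation during training, which is one simple way to realize the abstract's stated goal of raising semantic relevance between source and generated text.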