
Search in the Catalogues and Directories

Hits 1 – 20 of 31

1. The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
   In: Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), Aug 2021, Online, France, pp. 96–120. ⟨10.18653/v1/2021.gem-1.10⟩. https://hal.archives-ouvertes.fr/hal-03466171 (2021)
2. BiSECT: Learning to Split and Rephrase Sentences with Bitexts ...
3. Neural semi-Markov CRF for Monolingual Word Alignment ...
   Lan, Wuwei; Jiang, Chao; Xu, Wei. arXiv, 2021
4. Pre-train or Annotate? Domain Adaptation with a Constrained Budget ...
5. Neural semi-Markov CRF for Monolingual Word Alignment ...
6. BiSECT: Learning to Split and Rephrase Sentences with Bitexts ...
7. Sample data for "Design and Collection Challenges of Building an Academic Email Corpus for Linguistics and Computational Research" ...
   Romero Diaz, Damian Yukio; Jia, Hanyu; Xu, Wei. University of Arizona Research Data Repository, 2021
8. Sample data for "Design and Collection Challenges of Building an Academic Email Corpus for Linguistics and Computational Research" ...
   Romero Diaz, Damian Yukio; Jia, Hanyu; Xu, Wei. University of Arizona Research Data Repository, 2021
9. Controllable Text Simplification with Explicit Paraphrasing ...
   Alva-Manchego, Fernando; Maddela, Mounica. Underline Science Inc., 2021 (NAACL 2021)
10. The effectiveness of the problem-based learning in medical cell biology education: A systematic meta-analysis
    In: Medicine (Baltimore) (2021)
11. Controllable text simplification with explicit paraphrasing
    Maddela, Mounica; Alva-Manchego, Fernando; Xu, Wei. Association for Computational Linguistics, 2021
12. An Empirical Study of Pre-trained Transformers for Arabic Information Extraction ...
    Lan, Wuwei; Chen, Yang; Xu, Wei. arXiv, 2020
13. Controllable Text Simplification with Explicit Paraphrasing ...
14. Interactive Grounded Language Acquisition and Generalization in a 2D World ...
    Yu, Haonan; Zhang, Haichao; Xu, Wei. arXiv, 2018
15. Interactive Language Acquisition with One-shot Visual Concept Learning through a Conversational Game ...
    Zhang, Haichao; Yu, Haonan; Xu, Wei. arXiv, 2018
16. Guided Feature Transformation (GFT): A Neural Language Grounding Module for Embodied Agents ...
17. A Word-Complexity Lexicon and A Neural Readability Ranking Model for Lexical Simplification ...
    Maddela, Mounica; Xu, Wei. arXiv, 2018
18. A Deep Compositional Framework for Human-like Language Acquisition in Virtual Environment ...
    Yu, Haonan; Zhang, Haichao; Xu, Wei. arXiv, 2017
19. A Continuously Growing Dataset of Sentential Paraphrases ...
    Lan, Wuwei; Qiu, Siyu; He, Hua; Xu, Wei. arXiv, 2017
    Abstract: A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method for collecting large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity: it removes the need for the classifier or human in the loop that previous work required to select data before annotation and before applying paraphrase identification algorithms. We present the largest human-labeled paraphrase corpus to date, with 51,524 sentence pairs, and the first cross-domain benchmark for automatic paraphrase identification. In addition, we show that more than 30,000 new sentential paraphrases can be captured easily and continuously every month at ~70% precision, and we demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. We make our code and data freely available. (11 pages, accepted to EMNLP 2017)
    Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
    URL: https://dx.doi.org/10.48550/arxiv.1708.00391
    https://arxiv.org/abs/1708.00391
20. Spectral Entropy Can Predict Changes of Working Memory Performance Reduced by Short-Time Training in the Delayed-Match-to-Sample Task
    Tian, Yin; Zhang, Huiling; Xu, Wei. Frontiers Media S.A., 2017


Hit counts by source: Catalogues: 3 · Bibliographies: 2 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 28
© 2013 – 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings