1 | Neural Natural Language Processing for Unstructured Data in Electronic Health Records: a Review
BASE
2 | CLICKER: A Computational LInguistics Classification Scheme for Educational Resources
3 | CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning
5 | ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining
6 | R-VGAE: Relational-variational Graph Autoencoder for Unsupervised Prerequisite Chain Learning
7 | Improving Low-Resource Cross-lingual Document Retrieval by Reranking with Deep Bilingual Representations
8 | ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks
10 | Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model
11 | The CL-SciSumm Shared Task 2018: Results and Key Insights
12 | Editing-Based SQL Query Generation for Cross-Domain Context-Dependent Questions
13 | TutorialBank: A Manually-Collected Corpus for Prerequisite Chains, Survey Extraction and Resource Recommendation
14 | Selecting and Generating Computational Meaning Representations for Short Texts
15 | Robust Multilingual Part-of-Speech Tagging via Adversarial Training (NAACL 2018)

Abstract: Adversarial training (AT) is a powerful regularization method for neural networks, aiming to achieve robustness to input perturbations. Yet, the specific effects of the robustness obtained from AT are still unclear in the context of natural language processing. In this paper, we propose and analyze a neural POS tagging model that exploits AT. In our experiments on the Penn Treebank WSJ corpus and the Universal Dependencies (UD) dataset (27 languages), we find that AT not only improves the overall tagging accuracy, but also 1) prevents over-fitting in low-resource languages and 2) boosts tagging accuracy for rare/unseen words. We also demonstrate that 3) the improved tagging performance from AT contributes to the downstream task of dependency parsing, and that 4) AT helps the model learn cleaner word representations. 5) The proposed AT model is generally effective across different sequence labeling tasks. These positive results motivate further use of AT for natural language tasks.

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)

URL: https://dx.doi.org/10.48550/arxiv.1711.04903 ; https://arxiv.org/abs/1711.04903
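The AT recipe the abstract above describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a single linear-softmax tagger over one word embedding (the paper uses a BiLSTM over sentences), and uses the standard fast-gradient-method perturbation of norm eps on the embedding, then sums the clean and adversarial losses as the training objective. All names (`loss_and_grad`, `adversarial_perturbation`) and the toy dimensions are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(v, W, y):
    """Cross-entropy loss of a linear tagger, plus its gradient w.r.t. the embedding v."""
    p = softmax(W.T @ v)                      # tag distribution, shape (k,)
    loss = -np.log(p[y])
    grad_v = W @ (p - np.eye(len(p))[y])      # d loss / d v
    return loss, grad_v

def adversarial_perturbation(v, W, y, eps=0.1):
    """Fast-gradient-method perturbation: step of size eps along the normalized loss gradient."""
    _, g = loss_and_grad(v, W, y)
    return eps * g / (np.linalg.norm(g) + 1e-12)

rng = np.random.default_rng(0)
d, k = 8, 4                    # embedding dim, number of POS tags (toy sizes)
W = rng.normal(size=(d, k))    # tagger weights
v = rng.normal(size=d)         # word embedding
y = 2                          # gold tag index

clean_loss, _ = loss_and_grad(v, W, y)
r = adversarial_perturbation(v, W, y, eps=0.1)
adv_loss, _ = loss_and_grad(v + r, W, y)
total_loss = clean_loss + adv_loss   # AT objective: fit both clean and perturbed inputs
```

Because the perturbation follows the loss gradient, `adv_loss` is at least `clean_loss` here (the softmax cross-entropy is convex in `v`), so minimizing `total_loss` pushes the model to stay accurate under worst-case small shifts of the embedding, which is the regularization effect the abstract credits for the rare-word gains.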
16 | Classifying Syntactic Regularities for Hundreds of Languages
17 | Predicting the impact of scientific concepts using full-text features
18 | Sentence simplification, compression, and disaggregation for summarization of sophisticated documents