1. TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing

2. SpanNER: Named Entity Re-/Recognition as Span Prediction

3. Align Voting Behavior with Public Statements for Legislator Representation Learning

4. fastHan: A BERT-based Multi-Task Toolkit for Chinese NLP

5. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters

6. Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble

7. Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP
   In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)

8. Classifying Dyads for Militarized Conflict Analysis
   In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)

9. Efficient Sampling of Dependency Structure
   In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)

10. Searching for More Efficient Dynamic Programs
    In: Findings of the Association for Computational Linguistics: EMNLP 2021 (2021)

11. “Let Your Characters Tell Their Story”: A Dataset for Character-Centric Narrative Understanding
    In: Findings of the Association for Computational Linguistics: EMNLP 2021 (2021)

12. A Bayesian Framework for Information-Theoretic Probing
    In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)

13. Improving Dialogue State Tracking with Turn-based Loss Function and Sequential Data Augmentation

14. Come hither or go away? Recognising pre-electoral coalition signals in the news

15. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters
    Wang, Ruize; Tang, Duyu; Duan, Nan; Wei, Zhongyu; Huang, Xuanjing; Ji, Jianshu; Cao, Guihong; Jiang, Daxin; Zhou, Ming. arXiv, 2020

Abstract:
We study the problem of injecting knowledge into large pre-trained models like BERT and RoBERTa. Existing methods typically update the original parameters of pre-trained models when injecting knowledge. However, when multiple kinds of knowledge are injected, the historically injected knowledge would be flushed away. To address this, we propose K-Adapter, a framework that keeps the original parameters of the pre-trained model fixed and supports the development of versatile knowledge-infused models. Taking RoBERTa as the backbone model, K-Adapter has a neural adapter for each kind of infused knowledge, like a plug-in connected to RoBERTa. There is no information flow between different adapters, so multiple adapters can be trained efficiently in a distributed way. As a case study, we inject two kinds of knowledge in this work: (1) factual knowledge obtained from automatically aligned text triplets on Wikipedia and Wikidata, and (2) linguistic knowledge obtained via dependency parsing. Results on ...

Keywords: Computation and Language (cs.CL); Machine Learning (cs.LG); FOS: Computer and information sciences

URL: https://arxiv.org/abs/2002.01808
DOI: https://dx.doi.org/10.48550/arxiv.2002.01808
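
The adapter architecture the abstract describes — a frozen backbone with independent, per-knowledge adapter modules — can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: all shapes, weight names, and the bottleneck-with-residual form are assumptions.

```python
import numpy as np

# Sketch of the K-Adapter idea: the backbone's parameters stay frozen,
# and each kind of knowledge (factual, linguistic) gets its own small
# bottleneck adapter. No information flows between adapters, so each
# could be trained independently. Shapes and names are illustrative.

rng = np.random.default_rng(0)
HIDDEN, BOTTLENECK = 16, 4

def backbone(x, w_frozen):
    """Stand-in for a frozen pre-trained encoder layer (never updated)."""
    return np.tanh(x @ w_frozen)

def adapter(h, w_down, w_up):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add."""
    return h + np.maximum(h @ w_down, 0.0) @ w_up

w_frozen = rng.normal(size=(HIDDEN, HIDDEN))       # frozen backbone weights
factual = (rng.normal(size=(HIDDEN, BOTTLENECK)),  # trainable, per adapter
           rng.normal(size=(BOTTLENECK, HIDDEN)))
linguistic = (rng.normal(size=(HIDDEN, BOTTLENECK)),
              rng.normal(size=(BOTTLENECK, HIDDEN)))

x = rng.normal(size=(2, HIDDEN))                   # a toy batch of inputs
h = backbone(x, w_frozen)
h_factual = adapter(h, *factual)         # factual-knowledge features
h_linguistic = adapter(h, *linguistic)   # linguistic-knowledge features
# A downstream task would combine the per-adapter feature views:
features = np.concatenate([h_factual, h_linguistic], axis=-1)
```

Because the two adapters share no parameters and never read each other's outputs, updating one leaves the other's features untouched, which is what makes the distributed training the abstract mentions possible.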
17. A Graph-based Model for Joint Chinese Word Segmentation and Dependency Parsing
    In: Transactions of the Association for Computational Linguistics, Vol. 8, pp. 78-92 (2020)

19. GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge

20. Distantly Supervised Named Entity Recognition using Positive-Unlabeled Learning