1. Enhancing Cross-lingual Prompting with Mask Token Augmentation ...
2. Cross-lingual Aspect-based Sentiment Analysis with Aspect Term Code-Switching ...
3. Towards Multi-Sense Cross-Lingual Alignment of Contextual Embeddings ...
5. MELM: Data Augmentation with Masked Entity Language Modeling for Low-Resource NER ...
6. Multilingual AMR Parsing with Noisy Knowledge Distillation ...
7. GlobalWoZ: Globalizing MultiWoZ to Develop Multilingual Task-Oriented Dialogue Systems ...
8. Multi-perspective Coherent Reasoning for Helpfulness Prediction of Multimodal Reviews ...
9. On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation ...

The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing 2021; Bing, Lidong; Cheng, Liying; Ding, Bosheng; He, Ruidan; Liu, Linlin; Low, Jia-Wei; Si, Luo; Tan, Qingyu; Ye, Hai. - : Underline Science Inc., 2021

Abstract:
Read paper: https://www.aclanthology.org/2021.acl-long.172
Adapter-based tuning has recently arisen as an alternative to fine-tuning. It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the parameters of the adapter modules when learning a downstream task. As such, it adds only a few trainable parameters per new task, allowing a high degree of parameter sharing. Prior studies have shown that adapter-based tuning often achieves results comparable to fine-tuning. However, existing work focuses only on the parameter-efficient aspect of adapter-based tuning and lacks further investigation of its effectiveness. In this paper, we study the latter. We first show that adapter-based tuning mitigates forgetting better than fine-tuning, since it yields representations that deviate less from those generated by the initial PrLM. We then empirically compare the two tuning methods on several downstream NLP tasks and settings. We demonstrate that 1) ...

Keywords:
Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics

URL: https://dx.doi.org/10.48448/hke2-0g48
https://underline.io/lecture/25492-on-the-effectiveness-of-adapter-based-tuning-for-pretrained-language-model-adaptation
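The adapter mechanism described in the abstract above can be sketched as a bottleneck inserted after a frozen pretrained layer: down-project, apply a nonlinearity, up-project, and add a residual connection. The NumPy toy below is only an illustration of that pattern (the dimensions, ReLU choice, and near-zero initialisation are my own illustrative assumptions, not taken from the paper):

```python
import numpy as np

def adapter(h, W_down, W_up):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add.

    Only W_down and W_up would be trained; the surrounding pretrained
    layers stay frozen. Per layer this adds 2*d*r trainable parameters,
    versus d*d for a full dense layer.
    """
    z = np.maximum(h @ W_down, 0.0)  # down-projection (d -> r) + ReLU
    return h + z @ W_up              # up-projection (r -> d) + residual

rng = np.random.default_rng(0)
d, r = 8, 2                          # hidden size, bottleneck size (r << d)
h = rng.standard_normal((1, d))      # a stand-in hidden representation
W_down = rng.standard_normal((d, r)) * 0.01
W_up = rng.standard_normal((r, d)) * 0.01

out = adapter(h, W_down, W_up)
# With near-zero initialisation the adapter starts close to the identity,
# so representations deviate little from those of the initial model --
# consistent with the forgetting-mitigation observation in the abstract.
print(np.allclose(out, h, atol=1e-2))
```

Because the residual branch starts near zero, training can gradually move the adapted representation away from the pretrained one only as far as the task requires.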
11. Argument Pair Extraction via Attention-guided Multi-Layer Multi-Cross Encoding ...
12. Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction ...
13. MulDA: A Multilingual Data Augmentation Framework for Low-Resource Cross-Lingual NER ...
14. Unsupervised Cross-lingual Adaptation for Sequence Tagging and Beyond ...
16. Transferable End-to-End Aspect-based Sentiment Analysis with Selective Adversarial Learning ...
17. A knowledge regularized hierarchical approach for emotion cause analysis
18. Neural Rating Regression with Abstractive Tips Generation for Recommendation ...
19. Reader-Aware Multi-Document Summarization via Sparse Coding ...
20. Abstractive Multi-Document Summarization via Phrase Selection and Merging ...