81. The Dark Side of the Language: Pre-trained Transformers in the DarkNet
82. Discontinuous Constituency and BERT: A Case Study of Dutch
83. Cross-Platform Difference in Facebook and Text Messages Language Use: Illustrated by Depression Diagnosis
84. Improving Word Translation via Two-Stage Contrastive Learning
85. nigam@COLIEE-22: Legal Case Retrieval and Entailment using Cascading of Lexical and Semantic-based models
86. Learning grammar with a divide-and-concur neural network
87. Self-Supervised Representation Learning for Speech Using Visual Grounding and Masked Language Modeling
88. Accurate Online Posterior Alignments for Principled Lexically-Constrained Decoding
89. Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction
90. Can Rationalization Improve Robustness?

Abstract: A growing line of work has investigated the development of neural NLP models that can produce rationales, i.e., subsets of the input that explain the model's predictions. In this paper, we ask whether such rationale models can also provide robustness to adversarial attacks, in addition to being interpretable. Since these models must first generate a rationale ("rationalizer") before making a prediction ("predictor"), they have the potential to ignore noise or adversarially added text by simply masking it out of the generated rationale. To this end, we systematically generate various types of 'AddText' attacks for both token- and sentence-level rationalization tasks, and perform an extensive empirical evaluation of state-of-the-art rationale models across five different tasks. Our experiments reveal that rationale models show promise in improving robustness, but struggle in certain scenarios, such as when the rationalizer is sensitive to positional bias or to the lexical choices of the attack text. Further, ...

Note: Accepted to NAACL 2022

Keywords: Computation and Language (cs.CL); Cryptography and Security (cs.CR); Machine Learning (cs.LG); FOS: Computer and information sciences

URL: https://arxiv.org/abs/2204.11790
DOI: https://dx.doi.org/10.48550/arxiv.2204.11790
91. Improving Time Sensitivity for Question Answering over Temporal Knowledge Graphs
92. HistBERT: A Pre-trained Language Model for Diachronic Lexical Semantic Analysis
93. Towards Explainable Evaluation Metrics for Natural Language Generation
94. ASL Video Corpora & Sign Bank: Resources Available through the American Sign Language Linguistic Research Project (ASLLRP)
95. How do lexical semantics affect translation? An empirical study
97. How Effective is Incongruity? Implications for Code-mix Sarcasm Detection
98. Learning Meta Word Embeddings by Unsupervised Weighted Concatenation of Source Embeddings
99. COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics
100. LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval