1 | WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
2 | Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection
4 | Specializing Multilingual Language Models: An Empirical Study
5 | Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?
8 | Sentence Bottleneck Autoencoders from Transformer Language Models
9 | All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text
10 | Measuring Association Between Labels and Free-Text Rationales
Anthology paper link: https://aclanthology.org/2021.emnlp-main.804/
Abstract: In interpretable NLP, we require faithful rationales that reflect the model’s decision-making process for an explained instance. While prior work focuses on extractive rationales (a subset of the input words), we investigate their less-studied counterpart: free-text natural language rationales. We demonstrate that pipelines, models for faithful rationalization on information-extraction style tasks, do not work as well on “reasoning” tasks requiring free-text rationales. We turn to models that jointly predict and rationalize, a class of widely used high-performance models for free-text rationalization. We investigate the extent to which the labels and rationales predicted by these models are associated, a necessary property of faithful explanation. Via two tests, robustness equivalence and feature importance agreement, we find that state-of-the-art T5-based joint models exhibit desirable properties for explaining commonsense ...
Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Generation; Natural Language Processing
URL: https://underline.io/lecture/37462-measuring-association-between-labels-and-free-text-rationales
DOI: https://dx.doi.org/10.48448/a2z2-ha04
11 | Promoting Graph Awareness in Linearized Graph-to-Text Generation
12 | Shortformer: Better Language Modeling using Shorter Inputs
13 | DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts
14 | Specializing Multilingual Language Models: An Empirical Study
15 | Challenges in Automated Debiasing for Toxic Language Detection
16 | NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics
17 | Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent
18 | Competency Problems: On Finding and Removing Artifacts in Language Data