1 | WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation ... (BASE)
2 | Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection ...
4 | Specializing Multilingual Language Models: An Empirical Study ...
5 | Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? ...
8 | Sentence Bottleneck Autoencoders from Transformer Language Models ...
9 | All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text ...
10 | Measuring Association Between Labels and Free-Text Rationales ...
11 | Promoting Graph Awareness in Linearized Graph-to-Text Generation ...
12 | Shortformer: Better Language Modeling using Shorter Inputs ...
13 | DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts ...
15 | Challenges in Automated Debiasing for Toxic Language Detection ...
16 | NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics ...
17 | Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent ...
18 | Competency Problems: On Finding and Removing Artifacts in Language Data ...

Anthology paper link: https://aclanthology.org/2021.emnlp-main.135/

Abstract:
Much recent work in NLP has documented dataset artifacts, bias, and spurious correlations between input features and output labels. However, how to tell which features have "spurious" instead of legitimate correlations is typically left unspecified. In this work we argue that for complex language understanding tasks, all simple feature correlations are spurious, and we formalize this notion into a class of problems which we call competency problems. For example, the word "amazing" on its own should not give information about a sentiment label independent of the context in which it appears, which could include negation, metaphor, sarcasm, etc. We theoretically analyze the difficulty of creating data for competency problems when human bias is taken into account, showing that realistic datasets will increasingly deviate from competency problems as dataset size increases. This analysis gives us a simple statistical test for dataset ...

Keywords:
Language Models; Natural Language Processing; Semantic Evaluation; Sociolinguistics

URL: https://underline.io/lecture/37929-competency-problems-on-finding-and-removing-artifacts-in-language-data
DOI: https://dx.doi.org/10.48448/xnpn-5692