
Search in the Catalogues and Directories

Hits 1–20 of 33

1. WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation (BASE)
2. COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics (BASE)
3. Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection (BASE)
4. PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World (BASE)
5. Contrastive Explanations for Model Interpretability (BASE)
6. PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World (BASE)
7. Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models (BASE)
8. Conversational Multi-Hop Reasoning with Neural Commonsense Knowledge and Symbolic Logic Rules (BASE)
9. Edited Media Understanding Frames: Reasoning About the Intent and Implications of Visual Misinformation (BASE)
10. GO FIGURE: A Meta Evaluation of Factuality in Summarization (BASE)
11. CLIPScore: A Reference-free Evaluation Metric for Image Captioning (BASE)
12. Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences (BASE)
13. DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts (BASE)
14. Challenges in Automated Debiasing for Toxic Language Detection (BASE)
15. NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics. Lu, Ximing; Welleck, Sean; West, Peter. arXiv, 2021 (BASE)
16. From Physical to Social Commonsense: Natural Language and the Natural World. Forbes, Maxwell. 2021 (BASE)
17. Positive AI with Social Commonsense Models. Sap, Maarten. 2021 (BASE)
18. Generative Data Augmentation for Commonsense Reasoning (BASE)
Abstract: Recent advances in commonsense reasoning depend on large-scale human-annotated training data to achieve peak performance. However, manual curation of training examples is expensive and has been shown to introduce annotation artifacts that neural models can readily exploit and overfit to. We investigate G-DAUG^C, a novel generative data augmentation method that aims to achieve more accurate and robust learning in the low-resource setting. Our approach generates synthetic examples using pretrained language models, and selects the most informative and diverse set of examples for data augmentation. In experiments with multiple commonsense reasoning benchmarks, G-DAUG^C consistently outperforms existing data augmentation methods based on back-translation, and establishes a new state of the art on WinoGrande, CODAH, and CommonsenseQA. Further, in addition to improvements in in-distribution accuracy, G-DAUG^C-augmented training also enhances out-of-distribution generalization, showing greater robustness against ...
Published in: Findings of the Association for Computational Linguistics: EMNLP 2020
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2004.11546
DOI: https://dx.doi.org/10.48550/arxiv.2004.11546
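
The abstract above outlines a two-stage recipe: sample synthetic training examples from a pretrained language model, then keep only an informative and diverse subset for augmentation. The following is a minimal sketch of that idea, not the authors' implementation: it assumes GPT-2 via the HuggingFace `transformers` pipeline as the generator, and uses a greedy token-overlap heuristic as an illustrative stand-in for the paper's informativeness and diversity selection.

```python
# Minimal sketch (not the G-DAUG^C authors' code) of generative data
# augmentation: 1) sample synthetic examples from a pretrained LM,
# 2) greedily keep a diverse subset. Model choice ("gpt2") and the
# Jaccard-overlap selection heuristic are assumptions for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def generate_candidates(prompt, n=20):
    """Sample n candidate continuations from the pretrained LM."""
    outputs = generator(prompt, max_new_tokens=40, do_sample=True,
                        top_p=0.9, num_return_sequences=n)
    # generated_text includes the prompt; strip it off.
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

def jaccard(a, b):
    """Token-level Jaccard overlap between two strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def select_diverse(candidates, k=5):
    """Greedy max-min selection: each pick minimizes its worst-case
    overlap with the examples already chosen."""
    chosen = [candidates[0]]
    pool = list(candidates[1:])
    while pool and len(chosen) < k:
        best = min(pool, key=lambda c: max(jaccard(c, s) for s in chosen))
        chosen.append(best)
        pool.remove(best)
    return chosen

# Usage: augment a commonsense QA prompt with diverse synthetic answers.
prompt = "Q: Why might someone carry an umbrella on a sunny day? A:"
synthetic = select_diverse(generate_candidates(prompt))
print(synthetic)
```

In the paper the selected examples are added to the low-resource training set before fine-tuning; the sketch stops at selection, since the downstream task model is independent of the augmentation step.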
19. NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints (BASE)
20. Understanding Natural Language with Commonsense Knowledge Representation, Reasoning, and Simulation (BASE)


Results by source type: Catalogues 0, Bibliographies 0, Linked Open Data catalogues 0, Online resources 0, Open access documents 33.