
Search in the Catalogues and Directories

Hits 1 – 20 of 143

1. WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
2. Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection
3. Probing Across Time: What Does RoBERTa Know and When?
4. Specializing Multilingual Language Models: An Empirical Study. Chau, Ethan C.; Smith, Noah A. arXiv, 2021
5. Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand?
6. Finetuning Pretrained Transformers into RNNs
7. Green NLP panel
8. Sentence Bottleneck Autoencoders from Transformer Language Models
9. All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text
10. Measuring Association Between Labels and Free-Text Rationales
11. Promoting Graph Awareness in Linearized Graph-to-Text Generation
12. Shortformer: Better Language Modeling using Shorter Inputs
13. DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts
Abstract: Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DExperts: Decoding-time Experts, a decoding-time method for controlled text generation that combines a pretrained language model with "expert" LMs and/or "anti-expert" LMs in a product of experts. Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts and unlikely by the anti-experts. We apply DExperts to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations. Moreover, because DExperts operates only on the output of the pretrained LM, it is effective with (anti-)experts of smaller size, including when operating on GPT-3. Our work highlights the promise of tuning small LMs on text with (un)desirable attributes for efficient decoding-time steering.
Paper: https://www.aclanthology.org/2021.acl-long.522
URL: https://dx.doi.org/10.48448/12f3-6592
https://underline.io/lecture/25748-dexperts-decoding-time-controlled-text-generation-with-experts-and-anti-experts
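
The abstract above describes the product-of-experts combination precisely enough to sketch the decoding step. Below is a minimal Python/PyTorch illustration, assuming the logit-space form the abstract implies (base logits plus a weighted difference of expert and anti-expert logits); the alpha value, model roles, and toy vocabulary are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def dexperts_logits(base, expert, anti, alpha=2.0):
    # Product-of-experts steering in logit space: boost tokens the
    # expert favours and the anti-expert disfavours, while the base
    # LM keeps the overall distribution fluent.
    return base + alpha * (expert - anti)

# Toy next-token logits over a 5-word vocabulary (made-up values).
base   = torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0])   # pretrained LM
expert = torch.tensor([1.5, 2.5, -0.5, 0.0, -2.0])  # e.g. non-toxic LM
anti   = torch.tensor([-1.0, 0.5, 2.0, 0.0, 1.0])   # e.g. toxic LM

probs = F.softmax(dexperts_logits(base, expert, anti), dim=-1)
print(probs)  # sampling or argmax over these probabilities steers generation

Because the combination happens only on output logits, the (anti-)experts can be much smaller than the base model, which is the efficiency argument the abstract makes.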
14. Specializing Multilingual Language Models: An Empirical Study
15. Challenges in Automated Debiasing for Toxic Language Detection
16. NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics. Lu, Ximing; Welleck, Sean; West, Peter. arXiv, 2021
17. Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent
18. Competency Problems: On Finding and Removing Artifacts in Language Data
19. Infusing Finetuning with Semantic Dependencies
20. Extracting and Inferring Personal Attributes from Dialogue. Wang, Zhilin. 2021


Hits by source: Catalogues 4, Bibliographies 7, Linked Open Data catalogues 0, Online resources 0, Open access documents 135