
Search in the Catalogues and Directories

Hits 41–60 of 1,643

41
What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
In: Computational Linguistics, Vol 46, Iss 4, Pp 763-784 (2021)
42
Efficient Computation of Expectations under Spanning Tree Distributions
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 675-690 (2021)
43
Revisiting Multi-Domain Machine Translation
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 17-35 (2021)
44
Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve
In: Computational Linguistics, Vol 47, Iss 1, Pp 117-140 (2021)
45
Semantic Data Set Construction from Human Clustering and Spatial Arrangement
In: Computational Linguistics, Vol 47, Iss 1, Pp 69-116 (2021)
46
WikiAsp: A Dataset for Multi-domain Aspect-based Summarization
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 211-225 (2021)
47
Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 195-210 (2021)
48
Aligning Faithful Interpretations with their Social Attribution
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 294-310 (2021)
49
Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1408-1424 (2021)
Abstract: ⚠ This paper contains prompts and model outputs that are offensive in nature. When trained on large, unfiltered crawls from the Internet, language models pick up and reproduce all kinds of undesirable biases that can be found in the data: They often generate racist, sexist, violent, or otherwise toxic language. As large models require millions of training examples to achieve good performance, it is difficult to completely prevent them from being exposed to such content. In this paper, we first demonstrate a surprising finding: Pretrained language models recognize, to a considerable degree, their undesirable biases and the toxicity of the content they produce. We refer to this capability as self-diagnosis. Based on this finding, we then propose a decoding algorithm that, given only a textual description of the undesired behavior, reduces the probability of a language model producing problematic text. We refer to this approach as self-debiasing. Self-debiasing does not rely on manually curated word lists, nor does it require any training data or changes to the model’s parameters. While we by no means eliminate the issue of language models generating biased text, we believe our approach to be an important step in this direction.
Keywords: Computational linguistics; Natural language processing; P98-98.5
URL: https://doaj.org/article/7865d581bc554481bb1d3d28fe5f98e4
https://doi.org/10.1162/tacl_a_00434
50
Evaluating Document Coherence Modeling
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 621-640 (2021)
51
Comparing Knowledge-Intensive and Data-Intensive Models for English Resource Semantic Parsing
In: Computational Linguistics, Vol 47, Iss 1, Pp 43-68 (2021)
52
Model Compression for Domain Adaptation through Causal Effect Estimation
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1355-1373 (2021)
53
Planning with Learned Entity Prompts for Abstractive Summarization
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1475-1492 (2021)
54
Supertagging the Long Tail with Tree-Structured Decoding of Complex Categories
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 243-260 (2021)
55
Lexically Aware Semi-Supervised Learning for OCR Post-Correction
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1285-1302 (2021)
56
Supervised and Unsupervised Neural Approaches to Text Readability
In: Computational Linguistics, Vol 47, Iss 1, Pp 141-179 (2021)
57
A Graph-Based Framework for Structured Prediction Tasks in Sanskrit
In: Computational Linguistics, Vol 46, Iss 4, Pp 785-845 (2021)
58
What Helps Transformers Recognize Conversational Structure? Importance of Context, Punctuation, and Labels in Dialog Act Recognition
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1163-1179 (2021)
59
Efficient Outside Computation
In: Computational Linguistics, Vol 46, Iss 4, Pp 745-762 (2021)
60
There Once Was a Really Bad Poet, It Was Automated but You Didn’t Know It
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 605-620 (2021)
