
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1. Revisiting the Uniform Information Density Hypothesis ... (BASE)
2. Conditional Poisson Stochastic Beams ... (BASE)
3. Language Model Evaluation Beyond Perplexity ... (BASE)
4. Differentiable Subset Pruning of Transformer Heads ... (BASE)
5. A Bayesian Framework for Information-Theoretic Probing ... (BASE)
6. Classifying Dyads for Militarized Conflict Analysis ... (BASE)
7. A surprisal–duration trade-off across and within the world's languages ... (BASE)
8. Determinantal Beam Search ... (BASE)
9. Is Sparse Attention more Interpretable? ... (BASE)
10. On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs ... (BASE)
11. A Cognitive Regularizer for Language Modeling ... (BASE)
Read paper: https://www.aclanthology.org/2021.acl-long.404
Abstract: The uniform information density (UID) hypothesis, which posits that speakers behaving optimally tend to distribute information uniformly across a linguistic signal, has gained traction in psycholinguistics as an explanation for certain syntactic, morphological, and prosodic choices. In this work, we explore whether the UID hypothesis can be operationalized as an inductive bias for statistical language modeling. Specifically, we augment the canonical MLE objective for training language models with a regularizer that encodes UID. In experiments on ten languages spanning five language families, we find that using UID regularization consistently improves perplexity in language models, having a larger effect when training data is limited. Moreover, via an analysis of generated sequences, we find that UID-regularized language models have other desirable properties, e.g., they generate text that is more lexically diverse. Our results not only ...
Keywords: Cognitive Linguistics; Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://underline.io/lecture/25822-a-cognitive-regularizer-for-language-modeling
https://dx.doi.org/10.48448/y299-yz80
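
The abstract above describes augmenting the canonical MLE objective with a regularizer that encodes UID. A minimal sketch of one plausible operationalization, penalizing the variance of per-token surprisals so that information is spread evenly across the sequence, is given below; the function name, the weight beta, and the batch-level variance computation are illustrative assumptions, not the authors' released code.

    import torch
    import torch.nn.functional as F

    def uid_regularized_loss(logits, targets, beta=0.1, pad_id=0):
        """MLE (cross-entropy) loss plus a UID term: the variance of
        per-token surprisals. beta and pad_id are illustrative choices."""
        # logits: (batch, seq_len, vocab); targets: (batch, seq_len)
        surprisal = F.cross_entropy(
            logits.transpose(1, 2), targets,
            reduction="none", ignore_index=pad_id,
        )  # per-token -log p(w_t | w_<t), zero at padding positions
        mask = (targets != pad_id).float()
        n = mask.sum()
        nll = (surprisal * mask).sum() / n                   # canonical MLE term
        var = (((surprisal - nll) ** 2) * mask).sum() / n    # UID regularizer
        return nll + beta * var

With beta = 0 this reduces to ordinary maximum-likelihood training; larger beta penalizes uneven per-token information rates, which is the inductive bias the paper investigates.
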
12. Are All Languages Equally Hard to Language-Model? (BASE)
In: Proceedings of the Society for Computation in Linguistics (2019)
13. Rethinking Phonotactic Complexity (BASE)
In: Proceedings of the Society for Computation in Linguistics (2019)
14. Quantifying the Trade-off Between Two Types of Morphological Complexity (BASE)
In: Proceedings of the Society for Computation in Linguistics (2018)

Source facets: Open access documents: 14; Catalogues, Bibliographies, Linked Open Data catalogues, and Online resources: 0.