
Search in the Catalogues and Directories

Hits 1 – 20 of 119

1. Probing for the Usage of Grammatical Number ... (BASE)
2. Estimating the Entropy of Linguistic Distributions ... (BASE)
3. A Latent-Variable Model for Intrinsic Probing ... (BASE)
4. On Homophony and Rényi Entropy ... (BASE)
5. On Homophony and Rényi Entropy ... (BASE)
6. Searching for Search Errors in Neural Morphological Inflection ... (BASE)
7. Applying the Transformer to Character-level Transduction ... (BASE)
   Wu, Shijie; Cotterell, Ryan; Hulden, Mans. - ETH Zurich, 2021
8. Revisiting the Uniform Information Density Hypothesis ... (BASE)
9. Revisiting the Uniform Information Density Hypothesis ... (BASE)
10. Conditional Poisson Stochastic Beams ... (BASE)
11. Examining the Inductive Bias of Neural Language Models with Artificial Languages ... (BASE)
12. Modeling the Unigram Distribution ... (BASE)
13. Language Model Evaluation Beyond Perplexity ... (BASE)
14. Differentiable Subset Pruning of Transformer Heads ... (BASE)
15. A Bayesian Framework for Information-Theoretic Probing ... (BASE)
    Anthology paper link: https://aclanthology.org/2021.emnlp-main.229/
    Abstract: Pimentel et al. (2020) recently analysed probing from an information-theoretic perspective. They argue that probing should be seen as approximating a mutual information. This led to the rather unintuitive conclusion that representations encode exactly the same information about a target task as the original sentences. The mutual information, however, assumes the true probability distribution of a pair of random variables is known, leading to unintuitive results in settings where it is not. This paper proposes a new framework to measure what we term Bayesian mutual information, which analyses information from the perspective of Bayesian agents -- allowing for more intuitive findings in scenarios with finite data. For instance, under Bayesian MI we have that data can add information, processing can help, and information can hurt, which makes it more intuitive for machine learning applications. Finally, we apply our framework to ...
    Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
    URL: https://underline.io/lecture/37413-a-bayesian-framework-for-information-theoretic-probing
    https://dx.doi.org/10.48448/gnht-ez32
16. Classifying Dyads for Militarized Conflict Analysis ... (BASE)
17. Higher-order Derivatives of Weighted Finite-state Machines ... (BASE)
18. On Finding the K-best Non-projective Dependency Trees ... (BASE)
19. A surprisal–duration trade-off across and within the world's languages ... (BASE)
20. Determinantal Beam Search ... (BASE)


© 2013 – 2024 Lin|gu|is|tik