Catalogue search
Refine your search:
Keyword: Computational Linguistics (3); Machine Learning (3); Machine Learning and Data Mining (3); Natural Language Processing (3); Cognitive Modeling (2); Language Models (1)
Creator / Publisher: Cotterell, Ryan (3); Pimentel, Tiago (3); The 2021 Conference on Empirical Methods in Natural Language Processing 2021 (3); Meister, Clara (2); Blasi, Damián (1); Haller, Patrick (1); Jäger, Lena (1); Levy, Roger (1); Salesky, Elizabeth (1); Teufel, Simone (1)
BLLDB-Access: free (3); subject to license (0)
Hits 1 – 3 of 3
1. Revisiting the Uniform Information Density Hypothesis ...
The 2021 Conference on Empirical Methods in Natural Language Processing 2021; Cotterell, Ryan; Haller, Patrick. - Underline Science Inc., 2021 (BASE)
2. A Bayesian Framework for Information-Theoretic Probing ...
The 2021 Conference on Empirical Methods in Natural Language Processing 2021; Cotterell, Ryan; Pimentel, Tiago. - Underline Science Inc., 2021
Abstract:
Anthology paper link: https://aclanthology.org/2021.emnlp-main.229/
Pimentel et al. (2020) recently analysed probing from an information-theoretic perspective. They argue that probing should be seen as approximating a mutual information. This led to the rather unintuitive conclusion that representations encode exactly the same information about a target task as the original sentences. The mutual information, however, assumes the true probability distribution of a pair of random variables is known, leading to unintuitive results in settings where it is not. This paper proposes a new framework to measure what we term Bayesian mutual information, which analyses information from the perspective of Bayesian agents -- allowing for more intuitive findings in scenarios with finite data. For instance, under Bayesian MI we have that data can add information, processing can help, and information can hurt, which makes it more intuitive for machine learning applications. Finally, we apply our framework to ...
Keyword: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL:
https://underline.io/lecture/37413-a-bayesian-framework-for-information-theoretic-probing
https://dx.doi.org/10.48448/gnht-ez32
(BASE)
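For orientation, the quantity the abstract contrasts against is the standard mutual information, which is defined over the true joint distribution and is therefore unavailable in finite-data settings. The sketch below uses generic information-theory notation (representations R, labels Y), not necessarily the paper's own symbols:

```latex
% Standard (point-estimate) mutual information between representations R
% and labels Y; computing it requires the true joint distribution p(r, y).
I(R; Y) = \sum_{r, y} p(r, y) \log \frac{p(r, y)}{p(r)\, p(y)}
        = H(Y) - H(Y \mid R)
```

Per the abstract, the proposed Bayesian MI instead evaluates information relative to a Bayesian agent's beliefs rather than the true distribution, which is why, under that framework, observing data can add information and processing can help.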
3. A surprisal--duration trade-off across and within the world's languages ...
The 2021 Conference on Empirical Methods in Natural Language Processing 2021; Blasi, Damián; Cotterell, Ryan. - Underline Science Inc., 2021 (BASE)
© 2013 - 2024 Lin|gu|is|tik