2 | Quantifying the Task-Specific Information in Text-Based Classifications ...
3 | How is BERT surprised? Layerwise detection of linguistic anomalies ...
4 | Semantic coordinates analysis reveals language changes in the AI field ...
5 | An information theoretic view on selecting linguistic probes ...

Abstract:
There is increasing interest in assessing the linguistic knowledge encoded in neural representations. A popular approach is to attach a diagnostic classifier -- or "probe" -- to perform supervised classification from internal representations. However, how to select a good probe is in debate. Hewitt and Liang (2019) showed that a high performance on diagnostic classification itself is insufficient, because it can be attributed to either "the representation being rich in knowledge", or "the probe learning the task", which Pimentel et al. (2020) challenged. We show this dichotomy is valid information-theoretically. In addition, we find that the methods to construct and select good probes proposed by the two papers, *control task* (Hewitt and Liang, 2019) and *control function* (Pimentel et al., 2020), are equivalent -- the errors of their approaches are identical (modulo irrelevant terms). Empirically, these two selection criteria lead to results that highly agree with each other.

Venue: EMNLP 2020
Keyword:
Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2009.07364
DOI: https://dx.doi.org/10.48550/arxiv.2009.07364
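The probing setup this abstract describes -- training a diagnostic classifier on frozen internal representations and comparing it against a control -- can be sketched as follows. This is a minimal illustration with synthetic data: the array `X` is a random stand-in for real encoder hidden states (e.g. one BERT layer), `y` stands in for linguistic labels, and the shuffled-label baseline is only a crude analogue of Hewitt and Liang's control task, not their word-type construction.

```python
import numpy as np

# Synthetic stand-ins: X would be frozen hidden states from a pretrained
# encoder; y would be linguistic labels (e.g. a binary syntactic property).
rng = np.random.default_rng(0)
n, d = 200, 16
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))            # "representations"
y = (X @ w_true > 0).astype(float)     # "linguistic property"

def train_linear_probe(X, y, lr=0.5, steps=300):
    """Logistic-regression probe trained by plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Probe accuracy on the real task.
w, b = train_linear_probe(X, y)
acc = np.mean(((X @ w + b) > 0) == (y == 1))

# Crude control: same probe, randomly shuffled labels. A large gap
# (high "selectivity") suggests the accuracy reflects the representation
# rather than the probe memorizing the task.
y_control = rng.permutation(y)
wc, bc = train_linear_probe(X, y_control)
acc_control = np.mean(((X @ wc + bc) > 0) == (y_control == 1))

print(f"probe accuracy: {acc:.2f}, control accuracy: {acc_control:.2f}")
```

The paper's point is that this control-task criterion and Pimentel et al.'s control-function criterion select probes equivalently, up to irrelevant terms.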
6 | Examining the rhetorical capacities of neural language models ...
7 | Detecting cognitive impairments by agreeing on interpretations of linguistic features ...
8 | Deconfounding age effects with fair representation learning when assessing dementia ...