DirectProbe: Studying Representations without Classifiers ...
Abstract:
Understanding how linguistic structure is encoded in contextualized embeddings could help explain their impressive performance across NLP. Existing approaches for probing them usually call for training classifiers and use the accuracy, mutual information, or complexity as a proxy for the representation's goodness. In this work, we argue that doing so can be unreliable because different representations may need different classifiers. We develop a heuristic, DirectProbe, that directly studies the geometry of a representation by building upon the notion of a version space for a task. Experiments with several linguistic tasks and contextualized embeddings show that, even without training classifiers, DirectProbe can shine light into how an embedding space represents labels and also anticipate classifier performance for the representation.

Read the paper at the following link: https://www.aclweb.org/anthology/2021.naacl-main.401/
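The abstract's core idea, probing a representation's geometry directly instead of training a classifier, can be illustrated with a minimal sketch. This is not the actual DirectProbe algorithm (which builds label-pure clusters via a version-space construction); it is a simplified, hypothetical proxy that measures the smallest distance between points of different labels. The names `emb` and `min_interlabel_distance` and the toy two-label data are assumptions for illustration.

```python
import numpy as np

# Toy "embeddings": two labels with points in 2-D space.
# A simplified, hypothetical sketch of geometry-based probing,
# not the actual DirectProbe algorithm.
rng = np.random.default_rng(0)
emb = {
    "NOUN": rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2)),
    "VERB": rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2)),
}

def min_interlabel_distance(embeddings):
    """Smallest distance between points with different labels --
    a crude, classifier-free proxy for label separability."""
    labels = list(embeddings)
    best = float("inf")
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            # All pairwise distances between the two label groups,
            # computed via broadcasting.
            diff = embeddings[a][:, None, :] - embeddings[b][None, :, :]
            best = min(best, float(np.sqrt((diff ** 2).sum(-1)).min()))
    return best

print(min_interlabel_distance(emb))
```

A large margin between label groups suggests the task is easy for almost any classifier on this representation; overlapping groups suggest no classifier will do well, which is the kind of classifier-free signal the paper argues for.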
Keywords:
Artificial Intelligence; Computer Science and Engineering; Intelligent System; Natural Language Processing
DOI: https://dx.doi.org/10.48448/vyzv-j336
Talk: https://underline.io/lecture/19706-directprobe-studying-representations-without-classifiers
Source: BASE