
Search in the Catalogues and Directories

Hits 1 – 5 of 5

1
Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing ...
Abstract: Analysing whether neural language models encode linguistic information has become popular in NLP. One method of doing so, which is frequently cited to support the claim that models like BERT encode syntax, is called probing; probes are small supervised models trained to extract linguistic information from another model's output. If a probe is able to predict a particular structure, it is argued that the model whose output it is trained on must have implicitly learnt to encode it. However, drawing a generalisation about a model's knowledge of a specific phenomenon based on what a probe is able to learn may be problematic: in this work, we show that semantic cues in training data mean that syntactic probes do not properly isolate syntax. We generate a new corpus of semantically nonsensical but syntactically well-formed Jabberwocky sentences, which we use to evaluate two probes trained on normal data. We train the probes on several popular language models (BERT, GPT, and RoBERTa), and find that in ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)
URL: https://arxiv.org/abs/2106.02559
https://dx.doi.org/10.48550/arxiv.2106.02559
BASE
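To make the probing setup described in the abstract concrete, here is a minimal sketch of a linear probe trained on a frozen BERT's hidden states, using PyTorch and Hugging Face transformers. The toy task (detecting whether the first word is a determiner), the labels, and the training loop are illustrative assumptions, not the probes evaluated in the paper.

```python
# Sketch of a "probe": a small supervised model trained on a frozen
# language model's output. The task and data below are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")
lm.eval()  # the language model stays frozen; only the probe is trained

# Toy labelled data: is the first word a determiner? (assumed task)
sentences = ["the cat sleeps", "dogs bark loudly"]
labels = torch.tensor([1, 0])  # 1 = determiner, 0 = not

# Linear probe: a single layer mapping hidden states to tag logits.
probe = torch.nn.Linear(lm.config.hidden_size, 2)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

batch = tokenizer(sentences, return_tensors="pt", padding=True)
for _ in range(10):  # a few training steps
    with torch.no_grad():            # no gradients flow into BERT
        hidden = lm(**batch).last_hidden_state
    logits = probe(hidden[:, 1, :])  # first word piece after [CLS]
    loss = torch.nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

If the probe learns this task well, the probing argument concludes that BERT's hidden states encode the relevant distinction; the paper's point is that with naturalistic training data, such a probe may be picking up semantic cues rather than syntax alone.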
2
SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection ...
BASE
3
Information-Theoretic Probing for Linguistic Structure ...
BASE
4
Metaphor Detection Using Context and Concreteness
In: Proceedings of the Second Workshop on Figurative Language Processing (2020)
BASE
5
A Tale of a Probe and a Parser ...
BASE

Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 5