
Search in the Catalogues and Directories

Hits 1 – 20 of 50

1. Probing for the Usage of Grammatical Number ... (BASE)
2. Estimating the Entropy of Linguistic Distributions ... (BASE)
3. A Latent-Variable Model for Intrinsic Probing ... (BASE)
4. On Homophony and Rényi Entropy ... (BASE)
5. Towards Zero-shot Language Modeling ... (BASE)
6. Differentiable Generative Phonology ... (BASE)
7. Finding Concept-specific Biases in Form–Meaning Associations ... (BASE)
8. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ... (BASE)
9. Probing as Quantifying Inductive Bias ... (BASE)
10. Revisiting the Uniform Information Density Hypothesis ... (BASE)
11. How (Non-)Optimal is the Lexicon? ... (BASE)
12. Disambiguatory Signals are Stronger in Word-initial Positions ... (BASE)
13. A Cognitive Regularizer for Language Modeling ... (BASE)
14. Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing ... (BASE)
Abstract: Analysing whether neural language models encode linguistic information has become popular in NLP. One method of doing so, frequently cited to support the claim that models like BERT encode syntax, is called probing; probes are small supervised models trained to extract linguistic information from another model's output. If a probe is able to predict a particular structure, it is argued that the model whose output it is trained on must have implicitly learnt to encode it. However, drawing a generalisation about a model's linguistic knowledge of a specific phenomenon based on what a probe is able to learn may be problematic: in this work, we show that semantic cues in training data mean that syntactic probes do not properly isolate syntax. We generate a new corpus of semantically nonsensical but syntactically well-formed Jabberwocky sentences, which we use to evaluate two probes trained on normal data. We train the probes on several popular language models (BERT, GPT, and RoBERTa), and find that in ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)
URL: https://arxiv.org/abs/2106.02559
DOI: https://dx.doi.org/10.48550/arxiv.2106.02559
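
The abstract above sketches the standard probing recipe: freeze a language model, extract its hidden representations, and fit a small supervised classifier on top of them. The following is a minimal illustrative sketch of that recipe, not the paper's actual setup: it assumes bert-base-uncased as the probed model, part-of-speech tags as the target structure (the paper probes syntactic structure proper), and a hypothetical two-sentence toy training set.

# Minimal sketch of a probing classifier, as described in the abstract above.
# Assumptions (not from the paper): bert-base-uncased as the probed model,
# part-of-speech tagging as the probed structure, and a toy labelled corpus.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Toy training data: (sentence, per-word POS tags). A real probe would use
# a treebank; a Jabberwocky evaluation would swap in nonsense content words.
train = [
    ("the cat sleeps", ["DET", "NOUN", "VERB"]),
    ("a dog barks", ["DET", "NOUN", "VERB"]),
]

def word_vectors(sentence):
    """Return one frozen hidden-state vector per word (first subtoken)."""
    words = sentence.split()
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    vecs, seen = [], set()
    for idx, wid in enumerate(enc.word_ids(0)):  # None for [CLS]/[SEP]
        if wid is not None and wid not in seen:
            seen.add(wid)
            vecs.append(hidden[idx].numpy())
    return vecs

X = [v for sent, _ in train for v in word_vectors(sent)]
y = [tag for _, tags in train for tag in tags]

# The probe itself: a small supervised model over frozen representations.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(word_vectors("the bird sings")))

An evaluation along the paper's lines would then call probe.predict on representations of semantically nonsensical but syntactically well-formed sentences and compare accuracy against normal test sentences, to check whether the probe relies on semantic cues.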
15. On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs ... (BASE)
16. Investigating Cross-Linguistic Adjective Ordering Tendencies with a Latent-Variable Model ... (BASE)
17. SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection ... (BASE)
18. Intrinsic Probing through Dimension Selection ... (BASE)
19. SIGTYP 2020 Shared Task: Prediction of Typological Features ... (BASE)
20. Information-Theoretic Probing for Linguistic Structure ... (BASE)


Result counts by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 50