
Search in the Catalogues and Directories

Hits 1–20 of 600 (page 1 of 30)

1. Priorless Recurrent Networks Learn Curiously ...
2. Character Alignment in Morphologically Complex Translation Sets for Related Languages ...
3. Composing Byte-Pair Encodings for Morphological Sequence Classification ...
4. Variation in Universal Dependencies annotation: A token based typological case study on adpossessive constructions ...
5. Corpus evidence for word order freezing in Russian and German ...
6. An analysis of language models for metaphor recognition ...
7. Noise Isn't Always Negative: Countering Exposure Bias in Sequence-to-Sequence Inflection Models ...
8. Exhaustive Entity Recognition for Coptic - Challenges and Solutions ...
9. Imagining Grounded Conceptual Representations from Perceptual Information in Situated Guessing Games ...
10. Attentively Embracing Noise for Robust Latent Representation in BERT ...
11. Catching Attention with Automatic Pull Quote Selection ...
12. Opening Ceremony ...
13. Classifier Probes May Just Learn from Linear Context Features ...
Abstract: "Classifiers trained on auxiliary probing tasks are a popular tool to analyze the representations learned by neural sentence encoders such as BERT and ELMo. While many authors are aware of the difficulty of distinguishing between "extracting the linguistic structure encoded in the representations" and "learning the probing task," the validity of probing methods calls for further research. Using a neighboring word identity prediction task, we show that the token embeddings learned by neural sentence encoders contain a significant amount of information about the exact linear context of the token, and hypothesize that, with such information, learning standard probing tasks may be feasible even without additional linguistic structure. We develop this hypothesis into a framework in which analysis efforts can be scrutinized and argue that, with current models and baselines, conclusions that representations contain linguistic structure are not well-founded. Current probing methodology, such as restricting the ...
Keywords: Computer and Information Science; Natural Language Processing; Neural Network
URL: https://dx.doi.org/10.48448/ydk3-v029
https://underline.io/lecture/6288-classifier-probes-may-just-learn-from-linear-context-features
(A minimal sketch of the neighboring-word-identity probe described here follows the hit list below.)
14. Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension ...
15. Part 6 - Cross-linguistic Studies ...
16. Manifold Learning-based Word Representation Refinement Incorporating Global and Local Information ...
17. HMSid and HMSid2 at PARSEME Shared Task 2020: Computational Corpus Linguistics and unseen-in-training MWEs ...
18. Multi-dialect Arabic BERT for Country-level Dialect Identification ...
19. Autoencoding Improves Pre-trained Word Embeddings ...
20. Exploring End-to-End Differentiable Natural Logic Modeling ...
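
The neighboring word identity prediction task described in the abstract of hit 13 can be made concrete with a small probe: a linear classifier trained to predict the identity of the next token from a frozen encoder's embedding of the current token. The sketch below is a minimal, hypothetical illustration under assumed choices (the Hugging Face transformers library, the "bert-base-uncased" checkpoint, a single linear layer as probe, toy sentences, and Adam with lr=1e-3); it is not the paper's code.

import torch
from transformers import AutoModel, AutoTokenizer

# Frozen sentence encoder whose token embeddings we probe
# ("bert-base-uncased" is an illustrative choice, not the paper's setup).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased").eval()

# Linear probe: token embedding -> identity of the *next* token.
probe = torch.nn.Linear(encoder.config.hidden_size, tokenizer.vocab_size)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

# Toy training data; a real probe would be trained on a proper corpus.
sentences = ["The cat sat on the mat.",
             "Probing classifiers are a popular analysis tool."]

for epoch in range(10):
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        with torch.no_grad():  # only the probe is trained; the encoder stays frozen
            hidden = encoder(**enc).last_hidden_state[0]  # (seq_len, hidden_size)
        ids = enc["input_ids"][0]
        # Features: embedding of token i. Target: identity of token i+1.
        feats, targets = hidden[:-1], ids[1:]
        loss = torch.nn.functional.cross_entropy(probe(feats), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

High probe accuracy on such a task would indicate that the embeddings encode the token's exact linear context, which is the paper's premise for questioning what standard probing classifiers actually learn.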


Hits by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 600 (all via BASE)