
Search in the Catalogues and Directories

Page: 1 2 3 4 5...30
Hits 1 – 20 of 600

1. Priorless Recurrent Networks Learn Curiously ... (BASE)
2. Character Alignment in Morphologically Complex Translation Sets for Related Languages ... (BASE)
3. Composing Byte-Pair Encodings for Morphological Sequence Classification ... (BASE)
4. Variation in Universal Dependencies annotation: A token based typological case study on adpossessive constructions ... (BASE)
5. Corpus evidence for word order freezing in Russian and German ... (BASE)
6. An analysis of language models for metaphor recognition ... (BASE)
7. Noise Isn't Always Negative: Countering Exposure Bias in Sequence-to-Sequence Inflection Models ... (BASE)
Abstract: Morphological inflection, like many sequence-to-sequence tasks, sees strong performance from recurrent neural architectures when data is plentiful, but performance falls off sharply in lower-data settings. We investigate one aspect of neural seq2seq models that we hypothesize contributes to overfitting: teacher forcing. Because it creates a mismatch between training and test conditions, teacher forcing induces exposure bias, increasing the likelihood that a system models its training data too closely. Experiments show that teacher-forced models struggle to recover once they enter unknown territory. However, a simple modification to the training algorithm that more closely mimics test conditions yields models that generalize better to unseen environments. ...
Keywords: Computer and Information Science; Natural Language Processing; Neural Networks
URL: https://dx.doi.org/10.48448/3zab-w963
https://underline.io/lecture/6227-noise-isn't-always-negative-countering-exposure-bias-in-sequence-to-sequence-inflection-models
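The abstract above describes modifying training so the decoder sees conditions closer to test time. One common realization of that idea is scheduled sampling: at each decoding step, with some probability, feed the model its own previous prediction instead of the gold token. The sketch below is an assumption for illustration, not the paper's exact algorithm; the `predict_next` callback and the `sample_prob` rate are hypothetical stand-ins for one decoder step and its sampling schedule.

```python
import random

def decoder_inputs(gold, predict_next, sample_prob, rng=random):
    """Build the decoder input sequence for one training example.

    With probability `sample_prob`, a step is fed the model's own
    previous prediction (mimicking test-time decoding); otherwise it
    is fed the gold token (standard teacher forcing).
    `predict_next(prev_token)` stands in for one decoder step.
    """
    inputs = ["<s>"]                        # begin-of-sequence marker
    for t in range(1, len(gold)):
        if rng.random() < sample_prob:      # feed the model's own guess
            inputs.append(predict_next(inputs[t - 1]))
        else:                               # feed the gold history
            inputs.append(gold[t - 1])
    return inputs

# With sample_prob = 0.0 this reduces to pure teacher forcing;
# with sample_prob = 1.0 the decoder runs on its own outputs,
# exactly the condition it faces at test time.
gold = ["w", "a", "l", "k", "e", "d"]
print(decoder_inputs(gold, lambda tok: "?", 0.0))  # ['<s>', 'w', 'a', 'l', 'k', 'e']
```

Annealing `sample_prob` upward over training lets the model start from stable gold histories and gradually learn to recover from its own mistakes.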
8. Exhaustive Entity Recognition for Coptic - Challenges and Solutions ... (BASE)
9. Imagining Grounded Conceptual Representations from Perceptual Information in Situated Guessing Games ... (BASE)
10. Attentively Embracing Noise for Robust Latent Representation in BERT ... (BASE)
11. Catching Attention with Automatic Pull Quote Selection ... (BASE)
12. Opening Ceremony ... (BASE)
13. Classifier Probes May Just Learn from Linear Context Features ... (BASE)
14. Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension ... (BASE)
15. Part 6 - Cross-linguistic Studies ... (BASE)
16. Manifold Learning-based Word Representation Refinement Incorporating Global and Local Information ... (BASE)
17. HMSid and HMSid2 at PARSEME Shared Task 2020: Computational Corpus Linguistics and unseen-in-training MWEs ... (BASE)
18. Multi-dialect Arabic BERT for Country-level Dialect Identification ... (BASE)
19. Autoencoding Improves Pre-trained Word Embeddings ... (BASE)
20. Exploring End-to-End Differentiable Natural Logic Modeling ... (BASE)


© 2013 - 2024 Lin|gu|is|tik