
Search in the Catalogues and Directories

Hits 1 – 20 of 515

1. Priorless Recurrent Networks Learn Curiously ...
2. Composing Byte-Pair Encodings for Morphological Sequence Classification ...
3. Variation in Universal Dependencies annotation: A token based typological case study on adpossessive constructions ...
4. Corpus evidence for word order freezing in Russian and German ...
5. An analysis of language models for metaphor recognition ...
6. Noise Isn't Always Negative: Countering Exposure Bias in Sequence-to-Sequence Inflection Models ...
7. Exhaustive Entity Recognition for Coptic - Challenges and Solutions ...
8. Imagining Grounded Conceptual Representations from Perceptual Information in Situated Guessing Games ...
9. Attentively Embracing Noise for Robust Latent Representation in BERT ...
10. Catching Attention with Automatic Pull Quote Selection ...
11. Classifier Probes May Just Learn from Linear Context Features ...
12. Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension ...
13. Part 6 - Cross-linguistic Studies ...
14. Manifold Learning-based Word Representation Refinement Incorporating Global and Local Information ...
15. HMSid and HMSid2 at PARSEME Shared Task 2020: Computational Corpus Linguistics and unseen-in-training MWEs ...
Abstract: This paper is a system description of HMSid, officially submitted to the PARSEME Shared Task 2020 for one language (French) in the open track. It also describes HMSid2, sent to the workshop organizers after the deadline, which uses the same methodology but competes in the closed track. Neither system relies on machine learning; both are based on computational corpus linguistics. Their scores for unseen MWEs are very promising, especially in the case of HMSid2, which would have received the best score for unseen MWEs in the French closed track. ...
Keywords: Computer and Information Science; Natural Language Processing; Neural Network
URL: https://dx.doi.org/10.48448/z9st-5y85
https://underline.io/lecture/6491-hmsid-and-hmsid2-at-parseme-shared-task-2020-computational-corpus-linguistics-and-unseen-in-training-mwes-
16. Autoencoding Improves Pre-trained Word Embeddings ...
17. Exploring End-to-End Differentiable Natural Logic Modeling ...
18. AutoMeTS: The Autocomplete for Medical Text Simplification ...
19. A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English ...
20. SemEval Task 6: DeftEval ...


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 515