
Search in the Catalogues and Directories

Hits 1 – 9 of 9

1. Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? ... (BASE)
2. Green NLP panel ... (BASE)
3. Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent ... (BASE)
4. LSTMs Exploit Linguistic Attributes of Data ... (BASE)
5. Annotation Artifacts in Natural Language Inference Data ... (BASE)
6. The Effect of Different Writing Tasks on Linguistic Style: A Case Study of the ROC Story Cloze Task ... (BASE)
7. Automatic Selection of Context Configurations for Improved Class-Specific Word Representations ...
   Vulić, Ivan; Schwartz, Roy; Rappoport, Ari. Apollo - University of Cambridge Repository, 2017. (BASE)
8. Automatic Selection of Context Configurations for Improved Class-Specific Word Representations
   Rappoport, Ari; Reichart, Roi; Korhonen, Anna-Leena. Association for Computational Linguistics: Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), 2017. https://arxiv.org/pdf/1608.05528.pdf (BASE)
9. Automatic Selection of Context Configurations for Improved Class-Specific Word Representations ...
   Abstract: This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word class, we show that our framework is both effective and efficient. Particularly, it improves the Spearman's rho correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) rho points. With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in a reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup ... (CoNLL 2017)
   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
   URL: https://dx.doi.org/10.48550/arxiv.1608.05528
   https://arxiv.org/abs/1608.05528
   (BASE)
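The abstract of hit 9 describes searching a space of dependency-based context configurations with an adapted beam search. A minimal sketch of that idea follows; the relation inventory, the beam variant, and especially the `score` function are illustrative placeholders (the paper scores a configuration by training word vectors with those contexts and evaluating on SimLex-999), not the authors' code.

```python
# Sketch: beam search over subsets of dependency relations ("context
# configurations"). All names and the toy scoring are assumptions.

# Candidate universal-dependency relations; a configuration is a subset.
RELATIONS = ["amod", "nsubj", "dobj", "advmod", "conj", "compound"]

def score(config):
    """Placeholder objective. The real objective would be the Spearman
    correlation of a model trained with these contexts; here we use an
    arbitrary toy score (relation weights minus a size penalty) so the
    sketch runs end to end."""
    weights = {"amod": 3.0, "nsubj": 2.5, "dobj": 2.0,
               "advmod": 1.0, "conj": 1.0, "compound": 0.5}
    return sum(weights[r] for r in config) - 0.5 * len(config) ** 2

def beam_search(relations, beam_width=2):
    # Start from single-relation configurations; repeatedly extend each
    # beam member by one relation, keeping the top `beam_width` candidates.
    beam = sorted((frozenset([r]) for r in relations), key=score, reverse=True)
    beam = beam[:beam_width]
    best = max(beam, key=score)
    while True:
        candidates = {c | {r} for c in beam for r in relations if r not in c}
        if not candidates:
            break
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
        top = max(beam, key=score)
        if score(top) <= score(best):
            break  # no improvement from growing the configuration: stop
        best = top
    return best

print(sorted(beam_search(RELATIONS)))  # → ['amod', 'nsubj']
```

The size penalty in the toy score mirrors the paper's efficiency motivation: larger configurations mean more training contexts, so the search should prefer small subsets that still score well.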

Results by source type: Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 9
© 2013 – 2024 Lin|gu|is|tik