81. XCOPA: A multilingual dataset for causal commonsense reasoning
82. Improving bilingual lexicon induction with unsupervised post-processing of monolingual word vector spaces
83. SemEval-2020 Task 2: Predicting multilingual and cross-lingual (graded) lexical entailment

84. Modeling Language Variation and Universals: A Survey on Typological Linguistics for Natural Language Processing
In: Computational Linguistics, 45(3), pp. 559-601, MIT Press, 2019. ISSN: 0891-2017; EISSN: 1530-9312. DOI: 10.1162/coli_a_00357. https://hal.archives-ouvertes.fr/hal-02425462 ; https://www.mitpressjournals.org/doi/abs/10.1162/coli_a_00357

85. Modeling Language Variation and Universals: A Survey on Typological Linguistics for Natural Language Processing ...
86. Show Some Love to Your n-grams: A Bit of Progress and Stronger n-gram Language Modeling Baselines ...

87. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...

Abstract:
Unsupervised pretraining models have been shown to facilitate a wide range of downstream NLP applications. These models, however, retain some of the limitations of traditional static word embeddings. In particular, they encode only the distributional knowledge available in raw text corpora, incorporated through language modeling objectives. In this work, we complement such distributional knowledge with external lexical knowledge, that is, we integrate the discrete knowledge on word-level semantic similarity into pretraining. To this end, we generalize the standard BERT model to a multi-task learning setting where we couple BERT's masked language modeling and next sentence prediction objectives with an auxiliary task of binary word relation classification. Our experiments suggest that our "Lexically Informed" BERT (LIBERT), specialized for the word-level semantic similarity, yields better performance than the lexically blind "vanilla" BERT on several language understanding tasks. Concretely, LIBERT ...

Keywords:
Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://arxiv.org/abs/1909.02339 ; https://dx.doi.org/10.48550/arxiv.1909.02339
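
The abstract above describes LIBERT's training setup: BERT's standard objectives coupled with an auxiliary binary word-relation classification task. As a rough illustration of that auxiliary task only, here is a minimal sketch assuming PyTorch and the Hugging Face transformers library; it is not the authors' released code, the function and variable names are hypothetical, and the joint masked-language-modeling and next-sentence-prediction losses from the paper are omitted for brevity.

    # Minimal sketch of a LIBERT-style auxiliary objective: a binary
    # word-relation classifier on top of a pretrained BERT encoder.
    # Illustrative only; not the authors' released implementation.
    import torch
    from torch import nn
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    encoder = BertModel.from_pretrained("bert-base-uncased")

    # Binary head: does the word pair stand in a true lexical relation
    # (e.g. synonymy) or not?
    relation_head = nn.Linear(encoder.config.hidden_size, 2)

    def relation_logits(word_a: str, word_b: str) -> torch.Tensor:
        # Encode the pair as a two-segment input: [CLS] a [SEP] b [SEP]
        batch = tokenizer(word_a, word_b, return_tensors="pt")
        pooled = encoder(**batch).pooler_output  # [CLS]-based summary vector
        return relation_head(pooled)

    # One hypothetical training step on a labeled pair (label 1 = related):
    loss = nn.CrossEntropyLoss()(
        relation_logits("car", "automobile"), torch.tensor([1])
    )
    loss.backward()  # gradients also flow into BERT, "lexically informing" it

In the paper this classification loss is added to the pretraining objectives in a multi-task fashion, so the encoder itself absorbs the external lexical knowledge rather than only the head.
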
88. Do We Really Need Fully Unsupervised Cross-Lingual Embeddings? ...
89. A neural classification method for supporting the creation of BioVerbNet ...
90. A neural classification method for supporting the creation of BioVerbNet ...
91. Investigating cross-lingual alignment methods for contextualized embeddings with Token-level evaluation ...
92. A neural classification method for supporting the creation of BioVerbNet ...
93. Second-order contexts from lexical substitutes for few-shot learning of word representations ...
94. A Neural Classification Method for Supporting the Creation of BioVerbNet ...
95. Enhancing biomedical word embeddings by retrofitting to verb clusters ...
97. A Neural Classification Method for Supporting the Creation of BioVerbNet
98. Second-order contexts from lexical substitutes for few-shot learning of word representations
99. Investigating cross-lingual alignment methods for contextualized embeddings with Token-level evaluation
100. A neural classification method for supporting the creation of BioVerbNet