3. Listening to Affected Communities to Define Extreme Speech: Dataset and Experiments (source: BASE)
4. Differentiable Multi-Agent Actor-Critic for Multi-Step Radiology Report Summarization
9. Graph Algorithms for Multiparallel Word Alignment
   In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic. https://hal.archives-ouvertes.fr/hal-03424044 ; https://2021.emnlp.org/
10. Static Embeddings as Efficient Knowledge Bases?
    Abstract: Recent research investigates factual knowledge stored in large pretrained language models (PLMs). Instead of structured knowledge base (KB) queries, masked sentences such as "Paris is the capital of [MASK]" are used as probes. The good performance on this analysis task has been interpreted as a sign that PLMs are becoming potential repositories of factual knowledge. In experiments across ten linguistically diverse languages, we study the knowledge contained in static embeddings. We show that, when the output space is restricted to a candidate set, simple nearest-neighbor matching using static embeddings performs better than PLMs; e.g., static embeddings perform 1.6 percentage points better than BERT while using just 0.3% of the energy for training. One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary. In contrast, BERT exploits its more sophisticated, but expensive, ability to compose meaningful representations from a much smaller subword vocabulary.
    Comment: NAACL2021 CRV; first two authors contributed equally.
    Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
    URL: https://dx.doi.org/10.48550/arxiv.2104.07094 ; https://arxiv.org/abs/2104.07094
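The abstract above describes candidate-restricted nearest-neighbor matching with static embeddings. A minimal sketch of that idea, with toy 3-dimensional vectors and made-up entity names standing in for real large-vocabulary embeddings such as fastText:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest_candidate(query_vec, candidates):
    # Restricted output space: only entries in `candidates` can be returned,
    # mirroring the candidate-set restriction described in the abstract.
    return max(candidates, key=lambda c: cosine(query_vec, candidates[c]))

# Toy "static embeddings" (illustrative only; real experiments would use
# pretrained vectors over a large vocabulary).
emb = {
    "France":  [0.9, 0.1, 0.0],
    "Germany": [0.1, 0.9, 0.0],
    "Japan":   [0.0, 0.1, 0.9],
}
query = [0.8, 0.2, 0.1]  # stand-in for a vector derived from the probe subject
print(nearest_candidate(query, emb))  # prints "France"
```

The restriction to a candidate set is what makes this simple matcher competitive: the model never has to rank the full vocabulary, only the type-compatible answers.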
11. Does He Wink or Does He Nod? A Challenging Benchmark for Evaluating Word Understanding of Language Models
12. Superbizarre Is Not Superb: Derivational Morphology Improves BERT's Interpretation of Complex Words
14. ParCourE: A Parallel Corpus Explorer for a Massively Multilingual Corpus
15. Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models
16. Wine is Not v i n. -- On the Compatibility of Tokenizations Across Languages
18. Locating Language-Specific Information in Contextualized Embeddings
20. Measuring and Improving Consistency in Pretrained Language Models