
Search in the Catalogues and Directories

Hits 1 – 20 of 121

1. Graph Neural Networks for Multiparallel Word Alignment ... (BASE)
2. CaMEL: Case Marker Extraction without Labels ... (BASE)
3. Geographic Adaptation of Pretrained Language Models ... (BASE)
4. Graph Algorithms for Multiparallel Word Alignment (BASE)
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic. https://hal.archives-ouvertes.fr/hal-03424044 ; https://2021.emnlp.org/ (2021)
5. Static Embeddings as Efficient Knowledge Bases? ... (BASE)
6. Does He Wink or Does He Nod? A Challenging Benchmark for Evaluating Word Understanding of Language Models ... (BASE)
7. Discrete and Soft Prompting for Multilingual Models ... (BASE)
Zhao, Mengjie; Schütze, Hinrich. arXiv, 2021
8. ParCourE: A Parallel Corpus Explorer for a Massively Multilingual Corpus ... (BASE)
9. Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models ... (BASE)
10. Wine is Not v i n. -- On the Compatibility of Tokenizations Across Languages ... (BASE)
11. Graph Algorithms for Multiparallel Word Alignment ... (BASE)
12. Locating Language-Specific Information in Contextualized Embeddings ... (BASE)
13. Measuring and Improving Consistency in Pretrained Language Models ... (BASE)
Abstract: Consistency of a model, that is, the invariance of its behavior under meaning-preserving alternations of its input, is a highly desirable property in natural language processing. In this paper we study the question: are pretrained language models (PLMs) consistent with respect to factual knowledge? To this end, we create PARAREL, a high-quality resource of English paraphrases of cloze-style queries. It contains a total of 328 paraphrases for 38 relations. Using PARAREL, we show that the consistency of all PLMs we experiment with is poor, though with high variance between relations. Our analysis of the representational spaces of PLMs suggests that they are poorly structured and currently not suitable for representing knowledge robustly. Finally, we propose a method for improving model consistency and experimentally demonstrate its effectiveness.
Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://underline.io/lecture/38196-measuring-and-improving-consistency-in-pretrained-language-models
https://dx.doi.org/10.48448/rb2w-8959
(A minimal illustrative sketch of this consistency check appears after this list.)
14. Static Embeddings as Efficient Knowledge Bases? ... (BASE)
Dufter, Philipp; Kassner, Nora. Underline Science Inc., 2021 (NAACL 2021)
15. BUSINESS MEETING ... (BASE)
16. Discrete and Soft Prompting for Multilingual Models ... (BASE)
17. Continuous Entailment Patterns for Lexical Inference in Context ... (BASE)
18. Language Models for Lexical Inference in Context ... (BASE)
19. Continuous Entailment Patterns for Lexical Inference in Context ... (BASE)
20. SimAlign: High Quality Word Alignments Without Parallel Training Data Using Static and Contextualized Embeddings (BASE)
In: EMNLP 2020, Association for Computational Linguistics, Nov 2020, Online, pp. 1627–1643. https://hal.archives-ouvertes.fr/hal-03013194 (2020)
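The abstract of hit 13 describes checking whether a pretrained language model gives the same answer to meaning-preserving paraphrases of a cloze-style query. The following is a minimal, hypothetical sketch of that idea using the Hugging Face fill-mask pipeline; the relation and paraphrase pair are invented for illustration, and this is not the authors' PARAREL evaluation.

# Minimal sketch (not the PARAREL evaluation): query a masked LM with two
# paraphrases of the same cloze-style relation and check whether the top
# prediction is identical. The paraphrases below are hypothetical examples.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Two meaning-preserving paraphrases of a "capital of" query for one subject.
paraphrases = [
    "The capital of Finland is [MASK].",
    "Finland's capital city is [MASK].",
]

top_predictions = [fill_mask(p)[0]["token_str"].strip() for p in paraphrases]
consistent = len(set(top_predictions)) == 1
print(top_predictions, "consistent:", consistent)

A consistency score over a resource such as PARAREL would presumably aggregate many such pairwise comparisons across paraphrases and relations rather than a single pair as shown here.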

