
Search in the Catalogues and Directories

Hits 1 – 20 of 225

1. Differentiable Multi-Agent Actor-Critic for Multi-Step Radiology Report Summarization
2. Graph Algorithms for Multiparallel Word Alignment
   In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic. https://hal.archives-ouvertes.fr/hal-03424044 ; https://2021.emnlp.org/
3. Superbizarre Is Not Superb: Derivational Morphology Improves BERT's Interpretation of Complex Words
4. Dynamic Contextualized Word Embeddings
5. Measuring and Improving Consistency in Pretrained Language Models
6. Static Embeddings as Efficient Knowledge Bases?
   Dufter, Philipp; Kassner, Nora. - : Underline Science Inc., 2021 (NAACL 2021)
7. BUSINESS MEETING
8. ParCourE: A Parallel Corpus Explorer for a Massively Multilingual Corpus
9. Discrete and Soft Prompting for Multilingual Models
10. Graph Algorithms for Multiparallel Word Alignment
11. Continuous Entailment Patterns for Lexical Inference in Context
   Abstract: Combining a pretrained language model (PLM) with textual patterns has been shown to help in both zero- and few-shot settings. For zero-shot performance, it makes sense to design patterns that closely resemble the text seen during self-supervised pretraining, because the model has never seen anything else. Supervised training allows for more flexibility. If we allow for tokens outside the PLM's vocabulary, patterns can be adapted more flexibly to a PLM's idiosyncrasies. Contrasting patterns where a "token" can be any continuous vector with those where a discrete choice between vocabulary elements has to be made, we call our method CONtinuous pAtterNs (CONAN). We evaluate CONAN on two established benchmarks for lexical inference in context (LIiC), a.k.a. predicate entailment, a challenging natural language understanding task with relatively small training sets. In a direct comparison with discrete patterns, CONAN consistently leads to ...
   Anthology: https://aclanthology.org/2021.emnlp-main.556/
   Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
   URL: https://dx.doi.org/10.48448/yxps-pp51
   https://underline.io/lecture/37504-continuous-entailment-patterns-for-lexical-inference-in-context
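The CONAN abstract above contrasts discrete patterns, where every slot must be a vocabulary token, with continuous patterns, where a slot can be any vector in embedding space. The distinction can be sketched in a few lines; this is a toy illustration under stated assumptions, not the paper's code, and every name here (`VOCAB`, `EMB_TABLE`, `embed_pattern`) is a hypothetical stand-in for a real PLM's tokenizer and frozen input-embedding matrix.

```python
# Toy sketch of discrete vs. continuous patterns (cf. CONAN).
# Assumption: a tiny vocabulary and random embedding table stand in
# for a real PLM's tokenizer and input embeddings.
import random

EMB_DIM = 4
VOCAB = {"[X]": 0, "is": 1, "a": 2, "kind": 3, "of": 4, "[Y]": 5}

random.seed(0)
EMB_TABLE = [[random.uniform(-1, 1) for _ in range(EMB_DIM)]
             for _ in VOCAB]  # frozen lookup table, one row per token

def embed_pattern(slots):
    """Map pattern slots to input vectors.

    A slot is either a vocabulary string (discrete: its embedding is
    looked up and tied to the vocab) or a list of floats (continuous:
    the vector itself would be a trainable parameter, free to move
    anywhere in embedding space).
    """
    out = []
    for slot in slots:
        if isinstance(slot, str):
            out.append(EMB_TABLE[VOCAB[slot]])  # discrete choice
        else:
            out.append(list(slot))              # unconstrained vector
    return out

# Discrete pattern: every slot is a real token.
discrete = embed_pattern(["[X]", "is", "a", "kind", "of", "[Y]"])

# Continuous pattern: two slots are free vectors that training could
# place anywhere, not just on vocabulary rows.
soft = [0.1] * EMB_DIM
continuous = embed_pattern(["[X]", soft, soft, "[Y]"])
```

In the continuous case only the free vectors would receive gradients; the PLM and its embedding table stay frozen, which is what makes such patterns cheap to fit on the small LIiC training sets the abstract mentions.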
12. A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters
13. Measuring and Improving Consistency in Pretrained Language Models
   In: Transactions of the Association for Computational Linguistics, Vol 9, pp. 1012-1031 (2021)
14. Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
   In: Transactions of the Association for Computational Linguistics, Vol 9, pp. 1408-1424 (2021)
15. SimAlign: High Quality Word Alignments Without Parallel Training Data Using Static and Contextualized Embeddings
   In: EMNLP 2020, Association for Computational Linguistics, Nov 2020, Online, pp. 1627-1643. https://hal.archives-ouvertes.fr/hal-03013194
16. Combining Word Embeddings with Bilingual Orthography Embeddings for Bilingual Dictionary Induction
   Severini, Silvia; Hangya, Viktor; Fraser, Alexander. - : Universitätsbibliothek der Ludwig-Maximilians-Universität München, 2020
17. Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
   Schütze, Hinrich; Kassner, Nora. - : Ludwig-Maximilians-Universität München, 2020
18. BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance
19. Combining Word Embeddings with Bilingual Orthography Embeddings for Bilingual Dictionary Induction
20. Predicting the Growth of Morphological Families from Social and Linguistic Factors
   Hofmann, Valentin; Schütze, Hinrich; Pierrehumbert, Janet. - : Ludwig-Maximilians-Universität München, 2020


Hits by source type:
Catalogues: 0, 1, 23, 0, 0, 0, 2
Bibliographies: 42, 2, 2, 2, 0, 0, 0, 0, 4
Linked Open Data catalogues: 0
Online resources: 0, 0, 0, 0
Open access documents: 160, 0, 0, 0, 0
© 2013 - 2024 Lin|gu|is|tik