
Search in the Catalogues and Directories

Hits 1–20 of 90 (page 1 of 5)

1. ANLIzing the Adversarial Natural Language Inference Dataset. In: Proceedings of the Society for Computation in Linguistics (2022). (BASE)
2. Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning (BASE)
3. FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging (BASE)
4. Open Aspect Target Sentiment Classification with Natural Language Prompts (BASE)
5. Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings (BASE)
6. ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts (BASE)
7. IndoNLI: A Natural Language Inference Dataset for Indonesian (BASE)
8. Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI (BASE)
9. Don't Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques (BASE)
10. Continual Few-Shot Learning for Text Classification (BASE)
11. Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics (BASE)
12. Scheduled Sampling Based on Decoding Steps for Neural Machine Translation (BASE)
13. Pairwise Supervised Contrastive Learning of Sentence Representations (BASE)
14. Finding a Balanced Degree of Automation for Summary Evaluation (BASE)
15. A Multilingual Benchmark for Probing Negation-Awareness with Minimal Pairs (BASE)
16. Universal Sentence Representation Learning with Conditional Masked Language Model (BASE)
17. BARThez: a Skilled Pretrained French Sequence-to-Sequence Model (BASE)
18. Nearest Neighbour Few-Shot Learning for Cross-lingual Classification (BASE)
Anthology paper link: https://aclanthology.org/2021.emnlp-main.131/
Abstract: Even though large pre-trained multilingual models (e.g. mBERT, XLM-R) have led to significant performance gains on a wide range of cross-lingual NLP tasks, success on many downstream tasks still relies on the availability of sufficient annotated data. Traditional fine-tuning of pre-trained models using only a few target samples can cause over-fitting. This can be quite limiting, as most languages in the world are under-resourced. In this work, we investigate cross-lingual adaptation using a simple nearest neighbor few-shot (<15 samples) inference technique for classification tasks. We experiment with a total of 16 distinct languages across two NLP tasks: XNLI and PAWS-X. Our approach consistently improves over traditional fine-tuning using only a handful of labeled samples in target locales. We also demonstrate its generalization capability across tasks. (An illustrative sketch of this nearest-neighbour setup follows the result list.)
Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Inference; Natural Language Processing
URL: https://dx.doi.org/10.48448/bprr-5m31
https://underline.io/lecture/37955-nearest-neighbour-few-shot-learning-for-cross-lingual-classification
19. Subword Mapping and Anchoring across Languages (BASE)
20. Hy-NLI: a Hybrid system for state-of-the-art Natural Language Inference (BASE)
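
Entry 18 above describes nearest-neighbour few-shot inference over a pre-trained multilingual encoder. The following is a minimal sketch of that general idea, not the paper's exact method: it assumes an XLM-R base encoder ("xlm-roberta-base"), mean pooling, and cosine similarity, and the example sentences and labels are hypothetical.

import torch
from transformers import AutoModel, AutoTokenizer

# Pre-trained multilingual encoder (assumed choice; the abstract also mentions mBERT).
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")
encoder.eval()

def embed(texts):
    # Mean-pool the final hidden states over non-padding tokens.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (B, H)

def nearest_neighbour_predict(support_texts, support_labels, query_texts):
    # Assign each query the label of its most cosine-similar support example.
    support = torch.nn.functional.normalize(embed(support_texts), dim=-1)
    queries = torch.nn.functional.normalize(embed(query_texts), dim=-1)
    sims = queries @ support.T                             # (Q, S)
    return [support_labels[i] for i in sims.argmax(dim=-1).tolist()]

# Few-shot support set (<15 labeled target-language samples) and a query; all hypothetical.
support_texts = ["Das ist großartig.", "Das war enttäuschend."]
support_labels = ["positive", "negative"]
print(nearest_neighbour_predict(support_texts, support_labels, ["Ich fand es wunderbar."]))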

Hits by source type:
Catalogues: 6
Bibliographies: 6
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 83