Search in the Catalogues and Directories

Hits 1 – 20 of 90

1. ANLIzing the Adversarial Natural Language Inference Dataset. In: Proceedings of the Society for Computation in Linguistics (2022)
2. Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning
3. FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
4. Open Aspect Target Sentiment Classification with Natural Language Prompts
5. Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings
6. ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts
7. IndoNLI: A Natural Language Inference Dataset for Indonesian
8. Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI
9. Don't Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques
10. Continual Few-Shot Learning for Text Classification
11. Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics
12. Scheduled Sampling Based on Decoding Steps for Neural Machine Translation
Anthology paper link: https://aclanthology.org/2021.emnlp-main.264/
Abstract: Scheduled sampling is widely used to mitigate the exposure bias problem in neural machine translation. Its core motivation is to simulate the inference scene during training by replacing ground-truth tokens with predicted tokens, thus bridging the gap between training and inference. However, vanilla scheduled sampling depends only on the training step and treats all decoding steps equally: it simulates an inference scene with uniform error rates, whereas in real inference later decoding steps usually have higher error rates due to error accumulation. To reduce this discrepancy, we propose scheduled sampling methods based on decoding steps, which increase the chance of selecting predicted tokens as the decoding step grows. Consequently, we can more realistically simulate the inference scene during training and thus better bridge the gap between training and inference. Moreover, we ...
Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Machine translation; Natural Language Inference; Natural Language Processing
URL: https://dx.doi.org/10.48448/924y-b333
https://underline.io/lecture/37571-scheduled-sampling-based-on-decoding-steps-for-neural-machine-translation
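
The step-dependent schedule described in this abstract lends itself to a brief sketch. The code below is a minimal illustration only: the exponential form of the schedule and the `tau` parameter are assumptions made here for concreteness, not the paper's exact formulation or hyperparameters.

```python
import math
import random

def predicted_token_prob(step: int, tau: float = 10.0) -> float:
    # Probability of feeding back the model's own prediction at decoding
    # step `step`. It grows with the step index, mirroring the higher
    # error accumulation at later decoding steps. The exponential form
    # and `tau` are illustrative assumptions, not the paper's schedule.
    return 1.0 - math.exp(-step / tau)

def next_decoder_input(step: int, gold_token, predicted_token, tau: float = 10.0):
    # At each decoding step during training, sample whether the decoder
    # sees the ground-truth token (teacher forcing) or its own prediction.
    if random.random() < predicted_token_prob(step, tau):
        return predicted_token
    return gold_token
```

By contrast, vanilla scheduled sampling would make this probability depend only on the global training step, applying the same mixing rate at every position in the output sequence.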
13. Pairwise Supervised Contrastive Learning of Sentence Representations
14. Finding a Balanced Degree of Automation for Summary Evaluation
15. A Multilingual Benchmark for Probing Negation-Awareness with Minimal Pairs
16. Universal Sentence Representation Learning with Conditional Masked Language Model
17. BARThez: a Skilled Pretrained French Sequence-to-Sequence Model
18. Nearest Neighbour Few-Shot Learning for Cross-lingual Classification
19. Subword Mapping and Anchoring across Languages
20. Hy-NLI: a Hybrid system for state-of-the-art Natural Language Inference


Hits by source type: Catalogues 6 · Bibliographies 6 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 83