5. deepQuest-py: large and distilled models for quality estimation.
   In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 382–389 (2021). Source: BASE.

6. Knowledge distillation for quality estimation.
   In: pp. 5091–5099 (2021). Source: BASE.

7. Bilinear Fusion of Commonsense Knowledge with Attention-Based NLI Models ...

   Abstract: We consider the task of incorporating real-world commonsense knowledge into deep Natural Language Inference (NLI) models. Existing external knowledge incorporation methods are limited to lexical-level knowledge and lack generalization across NLI models, datasets, and commonsense knowledge sources. To address these issues, we propose BiCAM, a novel NLI model-independent neural framework that incorporates real-world commonsense knowledge into NLI models. Combined with convolutional feature detectors and bilinear feature fusion, BiCAM provides a conceptually simple mechanism that generalizes well. Quantitative evaluations with two state-of-the-art NLI baselines on the SNLI and SciTail datasets, in conjunction with the ConceptNet and Aristo Tuple knowledge graphs, show that BiCAM considerably improves the accuracy of the incorporated NLI baselines. For example, our BiECAM model, an instance of BiCAM, improves the accuracy of incorporated baselines on the challenging SciTail dataset by 7.0% with ConceptNet and 8.0% with Aristo Tuple ...

   Published in: Lecture Notes in Computer Science, Springer International Publishing ...
   Keywords: Computation and Language (cs.CL); Machine Learning (cs.LG); FOS: Computer and information sciences
   URL: https://arxiv.org/abs/2010.11562
   DOI: https://dx.doi.org/10.48550/arxiv.2010.11562
   Source: BASE.

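The bilinear feature fusion mentioned in the abstract above can be illustrated with a small sketch: each fused output unit mixes every pair of (sentence, knowledge) features through a bilinear weight tensor. This is a toy illustration under assumed dimensions and variable names, not the authors' BiCAM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: sentence encoding, knowledge encoding, fused output.
d_s, d_k, d_out = 8, 6, 4
W = rng.normal(size=(d_out, d_s, d_k))  # bilinear weight tensor
b = np.zeros(d_out)                     # bias

def bilinear_fuse(s, k):
    # f_i = s^T W_i k + b_i : every (sentence, knowledge) feature pair
    # contributes to each fused unit i.
    return np.einsum("s,isk,k->i", s, W, k) + b

s = rng.normal(size=d_s)  # e.g. an attention-based NLI sentence encoding
k = rng.normal(size=d_k)  # e.g. a knowledge-graph-derived concept embedding
f = bilinear_fuse(s, k)
print(f.shape)  # (4,)
```

In a full model, `f` would feed the downstream entailment classifier; the bilinear form is what lets fusion capture multiplicative interactions that simple concatenation misses.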
8. Enhancing the Reasoning Capabilities of Natural Language Inference Models with Attention Mechanisms and External Knowledge.
   Source: BASE.