
Search in the Catalogues and Directories

Hits 41 – 60 of 600

41. Neural Coreference Resolution for Arabic ... (BASE)
42. Improving Abstractive Dialogue Summarization with Graph Structures and Topic Words ... (BASE)
43. Choosing a Feature Set for Generating Referring Expressions in Context ... (BASE)
44. Aspect-Similarity-Aware Historical Influence Modeling for Rating Prediction ... (BASE)
45. Information Extraction from Federal Open Market Committee Statements ... (BASE)
46. Social Event: Music for COLING'2020 ... (BASE)
47. Sentiment Forecasting in Dialog ... (BASE)
48. Exploiting Microblog Conversation Structures to Detect Rumors ... (BASE)
49. Individual corpora predict fast memory retrieval during reading ... (BASE)
50. Improving Document-Level Sentiment Analysis with User and Product Context ... (BASE)
51. Knowledge-Enhanced Natural Language Inference Based on Knowledge Graphs ... (BASE)
52. Measuring Alignment to Authoritarian State Media as Framing Bias ... (BASE)
53. Hitachi at SemEval-2020 Task 7: Stacking at Scale with Heterogeneous Language Models for Humor Recognition ... (BASE)
54. Exploring the Zero Shot Limit of FewRel ... (BASE)
55. ASR for Non-standardised Languages with Dialectal Variation: the case of Swiss German ... (BASE)
56. DisenE: Disentangling Knowledge Graph Embeddings ... (BASE)
57. IIE-NLP-NUT at SemEval-2020 Task 4: Guiding PLM with Prompt Template Reconstruction Strategy for ComVE ... (BASE)
58. Query Distillation: BERT-based Distillation for Ensemble Ranking ... (BASE)
    Abstract: Recent years have witnessed substantial progress in the development of neural ranking networks, but also an increasingly heavy computational burden due to growing numbers of parameters and the adoption of model ensembles. Knowledge Distillation (KD) is a common solution to balance the effectiveness and efficiency. However, it is not straightforward to apply KD to ranking problems. Ranking Distillation (RD) has been proposed to address this issue, but only shows effectiveness on recommendation tasks. We present a novel two-stage distillation method for ranking problems that allows a smaller student model to be trained while benefitting from the better performance of the teacher model, providing better control of the inference latency and computational burden. We design a novel BERT-based ranking model structure for list-wise ranking to serve as our student model. All ranking candidates are fed to the BERT model simultaneously, such that the self-attention mechanism can enable joint inference to rank the ...
    Keywords: Computer and Information Science; Natural Language Processing; Neural Network
    URL: https://underline.io/lecture/6113-query-distillation-bert-based-distillation-for-ensemble-ranking
    DOI: https://dx.doi.org/10.48448/neg7-rd75
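
    The abstract above describes a list-wise student model: all ranking candidates for a query are fed to BERT in a single pass and the student is trained against a stronger teacher's scores. As a rough illustration of that general idea only (a minimal sketch assuming PyTorch and HuggingFace Transformers; the class, the candidate pooling, and the loss below are illustrative assumptions, not the paper's actual architecture), a list-wise student and a score-distillation loss could look like this:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from transformers import BertModel

    class ListwiseBertStudent(nn.Module):
        """Scores all candidates of one query jointly in a single BERT pass."""
        def __init__(self, model_name="bert-base-uncased"):
            super().__init__()
            self.bert = BertModel.from_pretrained(model_name)
            self.scorer = nn.Linear(self.bert.config.hidden_size, 1)

        def forward(self, input_ids, attention_mask, candidate_mask):
            # input_ids:      (batch, seq_len)         query and all candidates packed into one sequence
            # candidate_mask: (batch, n_cand, seq_len) 1 where a token belongs to candidate i
            hidden = self.bert(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
            # Mean-pool each candidate's tokens (self-attention has already let the
            # candidates attend to one another), then score every candidate.
            mask = candidate_mask.float()
            cand_repr = torch.einsum("bns,bsh->bnh", mask, hidden)
            cand_repr = cand_repr / mask.sum(-1, keepdim=True).clamp(min=1.0)
            return self.scorer(cand_repr).squeeze(-1)  # (batch, n_cand)

    def distillation_loss(student_scores, teacher_scores, temperature=2.0):
        """KL divergence between the teacher's and the student's ranking distributions."""
        teacher_probs = F.softmax(teacher_scores / temperature, dim=-1)
        student_logp = F.log_softmax(student_scores / temperature, dim=-1)
        return F.kl_div(student_logp, teacher_probs, reduction="batchmean") * temperature ** 2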
59. Complaint Identification in Social Media with Transformer Networks ... (BASE)
60. AraBench: Benchmarking Dialectal Arabic-English Machine Translation ... (BASE)
