
Search in the Catalogues and Directories

Hits 21–40 of 1,423

21. HIT - A Hierarchically Fused Deep Attention Network for Robust Code-mixed Language Representation ... (BASE)
22. Minimally-Supervised Morphological Segmentation using Adaptor Grammars with Linguistic Priors ... (BASE)
23. Bridging Subword Gaps in Pretrain-Finetune Paradigm for Natural Language Generation ... (BASE)
24. LearnDA: Learnable Knowledge-Guided Data Augmentation for Event Causality Identification ... (BASE)
25. Quotation Recommendation and Interpretation Based on Transformation from Queries to Quotations ... (BASE)
26. How Did This Get Funded?! Automatically Identifying Quirky Scientific Achievements ... (BASE)
27. Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ... (BASE)
28. CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding ... (BASE)
29. Towards Protecting Vital Healthcare Programs by Extracting Actionable Knowledge from Policy ... (BASE)
30. DYPLOC: Dynamic Planning of Content Using Mixed Language Models for Text Generation ... (BASE)
31. Automated Concatenation of Embeddings for Structured Prediction ... (BASE)
32. QASR: QCRI Aljazeera Speech Resource A Large Scale Annotated Arabic Speech Corpus ... (BASE)
33. Code Generation from Natural Language with Less Prior Knowledge and More Monolingual Data ... (BASE)
34. On the Distribution, Sparsity, and Inference-time Quantization of Attention Values in Transformers ... (BASE)
35. Learning Disentangled Latent Topics for Twitter Rumour Veracity Classification ... (BASE)
36. Sequence Models for Computational Etymology of Borrowings ... (BASE)
37. Scaling Within Document Coreference to Long Texts ... (BASE)
Abstract: State-of-the-art end-to-end coreference resolution models use expensive span representations and antecedent prediction mechanisms. These approaches are expensive both in their memory requirements and in compute time, and are particularly ill-suited for long documents. In this paper, we propose an approximation to end-to-end models which scales gracefully to documents of any length. Replacing span representations with token representations, we reduce the time/memory complexity via token windows and nearest-neighbor sparsification methods for more efficient antecedent prediction. We show that our approach reduces training and inference time compared to state-of-the-art methods, with only a minimal loss in accuracy. ...
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
Paper: https://www.aclanthology.org/2021.findings-acl.343
URL: https://dx.doi.org/10.48448/yb32-xp75
https://underline.io/lecture/26434-scaling-within-document-coreference-to-long-texts
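
The abstract above describes replacing span representations with token representations and pruning each token's candidate antecedents with a token window plus nearest-neighbor sparsification. A minimal sketch of that candidate-pruning idea in Python follows; it is an illustration under assumptions, not the paper's implementation. The function name, the dot-product similarity, and the `window`/`k` parameters are all hypothetical choices.

```python
# Illustrative sketch of windowed, kNN-sparsified antecedent candidate
# selection over token representations (assumption-based, not the paper's code).
import numpy as np

def antecedent_candidates(embs: np.ndarray, window: int, k: int) -> list[list[int]]:
    """For each token i, keep the k most similar tokens among the
    preceding `window` tokens as candidate antecedents."""
    n = embs.shape[0]
    candidates = []
    for i in range(n):
        lo = max(0, i - window)       # token window: only look back `window` tokens
        prev = embs[lo:i]             # representations of the allowed antecedents
        if prev.shape[0] == 0:        # the first token has no antecedents
            candidates.append([])
            continue
        sims = prev @ embs[i]         # dot-product similarity to token i (assumed scorer)
        topk = np.argsort(-sims)[:k]  # nearest-neighbor sparsification: keep top k
        candidates.append([lo + int(j) for j in topk])
    return candidates

# Toy usage: 6 tokens with 4-dimensional representations.
rng = np.random.default_rng(0)
embs = rng.normal(size=(6, 4)).astype(np.float32)
print(antecedent_candidates(embs, window=3, k=2))
```

The motivation for combining the two mechanisms is complexity: scoring every preceding token as an antecedent is quadratic in document length, whereas a fixed window of size w with top-k pruning bounds the per-token candidate set, so total cost grows linearly with the number of tokens.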
38. How to Split: the Effect of Word Segmentation on Gender Bias in Speech Translation ... (BASE)
39. Prefix-Tuning: Optimizing Continuous Prompts for Generation ... (BASE)
40. Chase: A Large-Scale and Pragmatic Chinese Dataset for Cross-Database Context-Dependent Text-to-SQL ... (BASE)


Facet counts: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 1,423