
Search in the Catalogues and Directories

Hits 81–100 of 1,343

81
The Dark Side of the Language: Pre-trained Transformers in the DarkNet ...
82
Discontinuous Constituency and BERT: A Case Study of Dutch ...
83
Cross-Platform Difference in Facebook and Text Messages Language Use: Illustrated by Depression Diagnosis ...
84
Improving Word Translation via Two-Stage Contrastive Learning ...
85
nigam@COLIEE-22: Legal Case Retrieval and Entailment using Cascading of Lexical and Semantic-based models ...
86
Learning grammar with a divide-and-concur neural network ...
Deyo, Sean; Elser, Veit. arXiv, 2022
87
Self-Supervised Representation Learning for Speech Using Visual Grounding and Masked Language Modeling ...
Peng, Puyuan; Harwath, David. arXiv, 2022
88
Accurate Online Posterior Alignments for Principled Lexically-Constrained Decoding ...
89
Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction ...
Abstract: Recent progress in neural information retrieval has demonstrated large gains in effectiveness, while often sacrificing the efficiency and interpretability of the neural model compared to classical approaches. This paper proposes ColBERTer, a neural retrieval model using contextualized late interaction (ColBERT) with enhanced reduction. Along the effectiveness Pareto frontier, ColBERTer's reductions dramatically lower ColBERT's storage requirements while simultaneously improving the interpretability of its token-matching scores. To this end, ColBERTer fuses single-vector retrieval, multi-vector refinement, and optional lexical matching components into one model. For its multi-vector component, ColBERTer reduces the number of stored vectors per document by learning unique whole-word representations for the terms in each document and learning to identify and remove word representations that are not essential to effective scoring. We employ an explicit multi-task, multi-stage training to facilitate using very ...
Keywords: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); FOS: Computer and information sciences; Information Retrieval (cs.IR); Machine Learning (cs.LG)
URL: https://arxiv.org/abs/2203.13088
https://dx.doi.org/10.48550/arxiv.2203.13088
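The abstract above describes contextualized late interaction (the ColBERT scoring scheme that ColBERTer builds on): each query token vector is compared against every document token vector, the maximum similarity per query token is taken, and these maxima are summed. A minimal illustrative sketch in NumPy follows; the function name and array shapes are assumptions for illustration, not the authors' code, and the whole-word reduction described in the abstract is not modeled here.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """ColBERT-style late interaction (MaxSim).

    query_vecs: (num_query_tokens, dim) token embeddings for the query.
    doc_vecs:   (num_doc_tokens, dim) token embeddings for the document.
    Returns the sum, over query tokens, of each token's maximum cosine
    similarity to any document token.
    """
    # L2-normalize rows so that dot products equal cosine similarities.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T  # shape: (num_query_tokens, num_doc_tokens)
    # For each query token, keep only its best-matching document token.
    return float(sim.max(axis=1).sum())
```

Because each query token contributes only its single best match, per-token scores are directly inspectable, which is the interpretability property the abstract refers to; storing fewer document vectors (e.g., one per whole word) shrinks the index while leaving this scoring rule unchanged.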
90
Can Rationalization Improve Robustness? ...
91
Improving Time Sensitivity for Question Answering over Temporal Knowledge Graphs ...
Shang, Chao; Wang, Guangtao; Qi, Peng. arXiv, 2022
92
HistBERT: A Pre-trained Language Model for Diachronic Lexical Semantic Analysis ...
Qiu, Wenjun; Xu, Yang. arXiv, 2022
93
Towards Explainable Evaluation Metrics for Natural Language Generation ...
94
ASL Video Corpora & Sign Bank: Resources Available through the American Sign Language Linguistic Research Project (ASLLRP) ...
95
How do lexical semantics affect translation? An empirical study ...
96
Compositionality as Lexical Symmetry ...
Akyürek, Ekin; Andreas, Jacob. arXiv, 2022
97
How Effective is Incongruity? Implications for Code-mix Sarcasm Detection ...
98
Learning Meta Word Embeddings by Unsupervised Weighted Concatenation of Source Embeddings ...
Bollegala, Danushka. arXiv, 2022
99
COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics ...
100
LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval ...
Xu, Canwen; Guo, Daya; Duan, Nan. arXiv, 2022


Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 1,343
© 2013–2024 Lin|gu|is|tik