Page: 1 2 3 4 5 6 7 8 9... 68
81 | The Dark Side of the Language: Pre-trained Transformers in the DarkNet ...
82 | Discontinuous Constituency and BERT: A Case Study of Dutch ...
83 | Cross-Platform Difference in Facebook and Text Messages Language Use: Illustrated by Depression Diagnosis ...
84 | Improving Word Translation via Two-Stage Contrastive Learning ...
85 | nigam@COLIEE-22: Legal Case Retrieval and Entailment using Cascading of Lexical and Semantic-based models ...
86 | Learning grammar with a divide-and-concur neural network ...
87 | Self-Supervised Representation Learning for Speech Using Visual Grounding and Masked Language Modeling ...
88 | Accurate Online Posterior Alignments for Principled Lexically-Constrained Decoding ...
89 | Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction ...
91 | Improving Time Sensitivity for Question Answering over Temporal Knowledge Graphs ...
92 | HistBERT: A Pre-trained Language Model for Diachronic Lexical Semantic Analysis ...
93 | Towards Explainable Evaluation Metrics for Natural Language Generation ...
94 | ASL Video Corpora & Sign Bank: Resources Available through the American Sign Language Linguistic Research Project (ASLLRP) ...
95 | How do lexical semantics affect translation? An empirical study ...
97 | How Effective is Incongruity? Implications for Code-mix Sarcasm Detection ...
98 | Learning Meta Word Embeddings by Unsupervised Weighted Concatenation of Source Embeddings ...
Abstract:
Given multiple source word embeddings learnt using diverse algorithms and lexical resources, meta word embedding learning methods attempt to learn more accurate and wide-coverage word embeddings. Prior work on meta-embedding has repeatedly found simple vector concatenation of the source embeddings to be a competitive baseline. However, it remains unclear why and when simple vector concatenation produces accurate meta-embeddings. We show that weighted concatenation can be seen as a spectrum matching operation between each source embedding and the meta-embedding, minimising the pairwise inner-product loss. Following this theoretical analysis, we propose two unsupervised methods to learn the optimal concatenation weights for creating meta-embeddings from a given set of source embeddings. Experimental results on multiple benchmark datasets show that the proposed weighted concatenated meta-embedding methods outperform previously proposed meta-embedding learning methods.

Published in: Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI-2022)

Keywords: Artificial Intelligence cs.AI; Computation and Language cs.CL; FOS Computer and information sciences; Machine Learning cs.LG

URL: https://arxiv.org/abs/2204.12386
DOI: https://dx.doi.org/10.48550/arxiv.2204.12386
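The abstract above describes building a meta-embedding by concatenating source embeddings, each scaled by a learned weight. A minimal sketch of the concatenation step, assuming a shared vocabulary across sources (the weights and toy matrices here are illustrative placeholders, not the paper's learned values or its unsupervised weight-learning methods):

```python
import numpy as np

def weighted_concat_meta_embedding(sources, weights):
    """Concatenate source embedding matrices, each scaled by a scalar weight.

    sources: list of (vocab_size, dim_i) arrays over a shared vocabulary.
    weights: one scalar weight per source.
    Returns a (vocab_size, sum(dim_i)) meta-embedding matrix.
    """
    scaled = [w * E for w, E in zip(weights, sources)]
    return np.concatenate(scaled, axis=1)

# Toy sources over a 4-word shared vocabulary (random, for illustration only).
rng = np.random.default_rng(0)
E1 = rng.standard_normal((4, 3))  # e.g. embeddings from one algorithm
E2 = rng.standard_normal((4, 5))  # e.g. embeddings from another

meta = weighted_concat_meta_embedding([E1, E2], weights=[0.7, 0.3])
print(meta.shape)  # (4, 8)
```

With equal weights this reduces to the plain-concatenation baseline the abstract mentions; the paper's contribution is choosing the weights without supervision.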
|
99 | COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics ...
100 | LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval ...
|