1 | Learning to Borrow -- Relation Representation for Without-Mention Entity-Pairs for Knowledge Graph Completion ...
BASE
2 | Learning Meta Word Embeddings by Unsupervised Weighted Concatenation of Source Embeddings ...
3 | Sense Embeddings are also Biased -- Evaluating Social Biases in Static and Contextualised Sense Embeddings
4 | I Wish I Would Have Loved This One, But I Didn't -- A Multilingual Dataset for Counterfactual Detection in Product Reviews ...
5 | Detect and Classify – Joint Span Detection and Classification for Health Outcomes ...
6 | Unsupervised Abstractive Opinion Summarization by Generating Sentences with Tree-Structured Topic Guidance ...
7 | Fine-Tuning Word Embeddings for Hierarchical Representation of Data Using a Corpus and a Knowledge Base for Various Machine Learning Applications
In: Comput Math Methods Med (2021)
8 | RelWalk - A Latent Variable Model Approach to Knowledge Graph Embedding.
14 | Autoencoding Improves Pre-trained Word Embeddings ...

Abstract:
Prior work investigating the geometry of pre-trained word embeddings has shown that word embeddings are distributed in a narrow cone, and that by centering and projecting them using principal component vectors one can increase the accuracy of a given set of pre-trained word embeddings. However, theoretically, this post-processing step is equivalent to applying a linear autoencoder that minimises the squared l2 reconstruction error. This result contradicts prior work (Mu and Viswanath, 2018) that proposed to remove the top principal components from pre-trained embeddings. We experimentally verify our theoretical claims and show that retaining the top principal components is indeed useful for improving pre-trained word embeddings, without requiring access to additional linguistic resources or labelled data. ...

Keyword:
Computer and Information Science; Natural Language Processing; Neural Network

URL: https://underline.io/lecture/6158-autoencoding-improves-pre-trained-word-embeddings
https://dx.doi.org/10.48448/x54c-4398
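The abstract's central claim, that centering embeddings and projecting them onto top principal components is exactly the optimal rank-k linear autoencoder under squared l2 reconstruction error, can be illustrated numerically. A minimal sketch using a synthetic stand-in embedding matrix (the matrix E, its dimensions, and the random comparison projection are illustrative assumptions, not the paper's data or method):

```python
import numpy as np

# Toy stand-in for pre-trained embeddings: 1000 "words", 50 dimensions.
rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 50))

# Centre the embeddings, as in the post-processing the abstract describes.
mu = E.mean(axis=0)
X = E - mu

# Rank-k PCA via SVD. By the Eckart-Young theorem, this is the optimal
# linear autoencoder under squared l2 reconstruction error.
k = 10
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:k]            # encoder: top-k principal directions
Z = X @ W.T           # low-dimensional codes
X_hat = Z @ W         # linear decoder: reconstruction

pca_err = np.sum((X - X_hat) ** 2)

# Any other rank-k linear projection does no better, e.g. a random one:
Q, _ = np.linalg.qr(rng.normal(size=(50, k)))   # random orthonormal basis
rand_err = np.sum((X - (X @ Q) @ Q.T) ** 2)

assert pca_err <= rand_err
```

Retaining the top components (rather than removing them, as Mu and Viswanath, 2018 suggest) corresponds to keeping exactly the subspace this linear autoencoder learns to preserve.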
15 | Graph Convolution over Multiple Dependency Sub-graphs for Relation Extraction ...
16 | Language-Independent Tokenisation Rivals Language-Specific Tokenisation for Word Similarity Prediction ...
19 | Learning to Compose Relational Embeddings in Knowledge Graphs