Injecting Inductive Biases into Distributed Representations of Text
Abstract:
Distributed real-valued vector representations of text (a.k.a. embeddings), learned by neural networks, encode various kinds of (linguistic) knowledge. The common approach to encoding this knowledge into embeddings is to train a large neural network on large corpora. There is, however, a growing concern about the sustainability and rationality of pursuing this approach further. We depart from this mainstream trend and instead use inductive biases to incorporate the desired properties into embeddings. First, we use Knowledge Graphs (KGs) as a data-based inductive bias to derive semantic representations of words and sentences. The explicit semantics encoded in the structure of a KG allows us to acquire these representations without employing large amounts of text. We use graph embedding techniques to learn the semantic representations of words and a sequence-to-sequence model to learn those of sentences. We demonstrate the efficacy of the inductive bias for ...
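To illustrate the "graph embedding techniques" mentioned in the abstract, below is a minimal sketch of one common approach, a TransE-style translational model in which a triple (head, relation, tail) is scored by how well head + relation ≈ tail holds in vector space. The toy triples, dimensionality, and hyperparameters here are illustrative assumptions and not the setup actually used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph: (head, relation, tail) triples.
triples = [
    ("dog", "is_a", "animal"),
    ("cat", "is_a", "animal"),
    ("dog", "related_to", "cat"),
]
entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
relations = sorted({r for _, r, _ in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

dim, lr, margin = 16, 0.05, 1.0
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity vectors
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation vectors

def score(h, r, t):
    """Squared L2 distance ||E[h] + R[r] - E[t]||^2; lower = more plausible."""
    d = E[h] + R[r] - E[t]
    return float(d @ d)

for _ in range(200):
    for h_s, r_s, t_s in triples:
        h, r, t = e_idx[h_s], r_idx[r_s], e_idx[t_s]
        t_neg = int(rng.integers(len(entities)))  # corrupt the tail at random
        if t_neg == t:
            continue
        # Margin ranking loss: update only while the corrupted triple is
        # not at least `margin` worse than the true one.
        if score(h, r, t) + margin > score(h, r, t_neg):
            g_pos = 2 * (E[h] + R[r] - E[t])      # pulls the true triple together
            g_neg = 2 * (E[h] + R[r] - E[t_neg])  # pushes the corrupted one apart
            E[h] -= lr * (g_pos - g_neg)
            R[r] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg

# After training, the true triple typically scores lower (is more
# plausible) than a corrupted one.
pos = score(e_idx["dog"], r_idx["is_a"], e_idx["animal"])
neg = score(e_idx["dog"], r_idx["is_a"], e_idx["cat"])
```

The learned entity vectors can then serve as word representations derived purely from the KG structure, with no text corpus involved, which is the sense in which the KG acts as a data-based inductive bias.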
Keywords:
Distributed Representations of Text; Inductive Biases; Knowledge Graphs; Sentence Embeddings; Variational Autoencoders; Word Embeddings
URL: https://www.repository.cam.ac.uk/handle/1810/330972
DOI: https://dx.doi.org/10.17863/cam.78416
Source: BASE (Bielefeld Academic Search Engine)