Search in the Catalogues and Directories

Hits 1 – 20 of 289

1
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
2
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
3
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
4
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
5
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
6
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
7
Analyzing COVID-19 Medical Papers Using Artificial Intelligence: Insights for Researchers and Medical Professionals
In: Big Data and Cognitive Computing; Volume 6; Issue 1; Pages: 4 (2022)
BASE
8
Towards a theoretical understanding of word and relation representation
Allen, Carl S.. - : The University of Edinburgh, 2022
BASE
9
Representation of Explanations of Possibilistic Inference Decisions
In: Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2021: European Conference on Symbolic and Quantitative Approaches with Uncertainty), Sep 2021, Prague, Czech Republic, pp. 513-527, ⟨10.1007/978-3-030-86772-0_37⟩ (2021). https://hal-cea.archives-ouvertes.fr/cea-03406884
BASE
10
Injecting Inductive Biases into Distributed Representations of Text ...
Prokhorov, Victor. - : Apollo - University of Cambridge Repository, 2021
Abstract: Distributed real-valued vector representations of text (a.k.a. embeddings), learned by neural networks, encode various kinds of (linguistic) knowledge. The common approach to encoding this knowledge into the embeddings is to train a large neural network on large corpora. There is, however, growing concern about the sustainability and rationality of pursuing this approach further. We depart from the mainstream trend and instead use inductive biases to incorporate the desired properties into embeddings. First, we use Knowledge Graphs (KGs) as a data-based inductive bias to derive the semantic representation of words and sentences. The explicit semantics encoded in the structure of a KG allows us to acquire semantic representations without needing to employ large amounts of text. We use graph embedding techniques to learn the semantic representation of words and a sequence-to-sequence model to learn the semantic representation of sentences. We demonstrate the efficacy of the inductive bias for ...
Keyword: Distributed Representations of Text; Inductive Biases; Knowledge Graphs; Sentence Embeddings; Variational Autoencoders; Word Embeddings
URL: https://www.repository.cam.ac.uk/handle/1810/330972
https://dx.doi.org/10.17863/cam.78416
BASE
11
Querying knowledge graphs in natural language ...
BASE
12
Graphs, Computation, and Language ...
Ustalov, Dmitry. - : Zenodo, 2021
BASE
13
DWUG SV: Diachronic Word Usage Graphs for Swedish ...
BASE
14
DWUG SV: Diachronic Word Usage Graphs for Swedish ...
BASE
15
DWUG DE: Diachronic Word Usage Graphs for German ...
BASE
16
RefWUG: Diachronic Reference Word Usage Graphs for German ...
BASE
17
DWUG EN: Diachronic Word Usage Graphs for English ...
BASE
18
Sign language recognition using deep machine learning techniques ... : Deep learning based sign language recognition ...
Parelli, Maria. - : National Technological University of Athens, 2021
BASE
19
Graphs, Computation, and Language ...
Ustalov, Dmitry. - : Zenodo, 2021
BASE
20
RefWUG: Diachronic Reference Word Usage Graphs for German ...
BASE
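Note on hit 10 above: its abstract mentions using graph embedding techniques to learn word representations from a Knowledge Graph. As an illustration only, and not the method of that thesis, the following minimal TransE-style sketch in Python shows one common way such embeddings are trained; the toy triples, dimensionality, and hyperparameters are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge-graph triples (head, relation, tail) -- invented for illustration.
triples = [
    ("dog", "is_a", "animal"),
    ("cat", "is_a", "animal"),
    ("animal", "has_part", "heart"),
]
entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
relations = sorted({r for _, r, _ in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

dim, lr, margin, epochs = 16, 0.05, 1.0, 200
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity (word) embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def score(h, r, t):
    # TransE plausibility: smaller ||h + r - t|| means the triple fits better.
    return np.linalg.norm(E[h] + R[r] - E[t])

for _ in range(epochs):
    for h, r, t in triples:
        h_i, r_i, t_i = e_idx[h], r_idx[r], e_idx[t]
        t_neg = int(rng.integers(len(entities)))   # corrupted tail = negative sample
        pos, neg = score(h_i, r_i, t_i), score(h_i, r_i, t_neg)
        if pos + margin > neg:                     # margin-ranking loss is violated
            d_pos = (E[h_i] + R[r_i] - E[t_i]) / (pos + 1e-9)
            d_neg = (E[h_i] + R[r_i] - E[t_neg]) / (neg + 1e-9)
            E[h_i] -= lr * (d_pos - d_neg)
            R[r_i] -= lr * (d_pos - d_neg)
            E[t_i] += lr * d_pos
            E[t_neg] -= lr * d_neg

print({e: np.round(E[e_idx[e]][:4], 2).tolist() for e in entities})

The loop implements the standard margin-based ranking objective max(0, margin + d(h, r, t) - d(h, r, t')) with a randomly corrupted tail t'; production systems would use dedicated knowledge-graph embedding libraries rather than this hand-rolled update.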
