Search in the Catalogues and Directories

Hits 1–20 of 31

1. Natural Language Descriptions of Deep Visual Features
2. Compositionality as Lexical Symmetry
   Akyürek, Ekin; Andreas, Jacob. arXiv, 2022
3. Language as a bootstrap for compositional visual reasoning
   In: Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 43, Iss. 43 (2021)
4. Compositional Models for Few Shot Sequence Learning
   Akyürek, Ekin. Massachusetts Institute of Technology, 2021
5. Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales
6. Implicit Representations of Meaning in Neural Language Models
7. Implicit Representations of Meaning in Neural Language Models
8. How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction
9. Value-Agnostic Conversational Semantic Parsing
10. Lexicon Learning for Few-Shot Neural Sequence Modeling
    Akyürek, Ekin; Andreas, Jacob. arXiv, 2021
11. What Context Features Can Transformer Language Models Use?
    O'Connor, Joe; Andreas, Jacob. arXiv, 2021
12. Quantifying Adaptability in Pre-trained Language Models with 500 Tasks
13. One-Shot Lexicon Learning for Low-Resource Machine Translation
14. Lexicon Learning for Few-Shot Sequence Modeling
    Abstract: Sequence-to-sequence transduction is the core problem in language processing applications as diverse as semantic parsing, machine translation, and instruction following. The neural network models that provide the dominant solution to these problems are brittle, especially in low-resource settings: they fail to generalize correctly or systematically from small datasets. Past work has shown that many failures of systematic generalization arise from neural models’ inability to disentangle lexical phenomena from syntactic ones. To address this, we augment neural decoders with a lexical translation mechanism that generalizes existing copy mechanisms to incorporate learned, decontextualized, token-level translation rules. We describe how to initialize this mechanism using a variety of lexicon learning algorithms, and show that it improves systematic generalization on a diverse set of sequence modeling tasks drawn from cognitive science, formal ...
    Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
    Read paper: https://www.aclanthology.org/2021.acl-long.382
    URL: https://underline.io/lecture/25704-lexicon-learning-for-few-shot-sequence-modeling
    DOI: https://dx.doi.org/10.48448/pje6-2v96
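    Note: entry 14's abstract describes augmenting a neural decoder with a lexical translation mechanism that generalizes copy mechanisms via learned, decontextualized, token-level translation rules. The sketch below is a minimal illustration of that idea, not the paper's implementation; the names (decode_step, lexicon, p_copy) and shapes are assumptions. It mixes the decoder's ordinary generation distribution with a distribution obtained by routing attended source tokens through a token-to-token translation table.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def decode_step(gen_logits, attention, lexicon, src_tokens, p_copy):
        # gen_logits : (V,)  decoder scores over the target vocabulary
        # attention  : (S,)  attention weights over source positions (sums to 1)
        # lexicon    : (V_src, V) row-stochastic token-to-token translation rules
        # src_tokens : (S,)  source-side token ids
        # p_copy     : gate in [0, 1] weighting lexical translation vs. generation
        p_gen = softmax(gen_logits)               # ordinary generation distribution
        p_lex = attention @ lexicon[src_tokens]   # translate attended source tokens
        return (1.0 - p_copy) * p_gen + p_copy * p_lex

    # Toy usage: 5 source token types, 7 target types, 3 source positions.
    rng = np.random.default_rng(0)
    lexicon = rng.random((5, 7))
    lexicon /= lexicon.sum(axis=1, keepdims=True)   # make each row a distribution
    attention = softmax(rng.random(3))
    probs = decode_step(rng.random(7), attention, lexicon,
                        np.array([0, 2, 4]), p_copy=0.5)
    assert np.isclose(probs.sum(), 1.0)             # still a valid distribution

    Per the abstract, the paper initializes this table with lexicon learning algorithms rather than at random, which is what it credits for the gains in systematic generalization.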
15. What Context Features Can Transformer Language Models Use?
16. The Low-Dimensional Linear Geometry of Contextualized Word Representations
    Hernandez, Evan; Andreas, Jacob. arXiv, 2021
17. The Low-Dimensional Linear Geometry of Contextualized Word Representations
18. Experience Grounds Language
19. Compositional Explanations of Neurons
    Mu, Jesse; Andreas, Jacob. arXiv, 2020
20. A Benchmark for Systematic Generalization in Grounded Language Understanding

Hits by source: all 31 are open access documents; no hits in catalogues, bibliographies, Linked Open Data catalogues, or online resources.