
Search in the Catalogues and Directories

Hits 1 – 20 of 31

1. Natural Language Descriptions of Deep Visual Features ...
2. Compositionality as Lexical Symmetry ...
Akyürek, Ekin; Andreas, Jacob. - : arXiv, 2022
3. Language as a bootstrap for compositional visual reasoning
In: Proceedings of the Annual Meeting of the Cognitive Science Society, vol 43, iss 43 (2021)
4. Compositional Models for Few Shot Sequence Learning
Akyurek, Ekin. - : Massachusetts Institute of Technology, 2021
5. Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales ...
6. Implicit Representations of Meaning in Neural Language Models ...
7. Implicit Representations of Meaning in Neural Language Models ...
8. How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction ...
9. Value-Agnostic Conversational Semantic Parsing ...
10. Lexicon Learning for Few-Shot Neural Sequence Modeling ...
Akyürek, Ekin; Andreas, Jacob. - : arXiv, 2021
11. What Context Features Can Transformer Language Models Use? ...
O'Connor, Joe; Andreas, Jacob. - : arXiv, 2021
12. Quantifying Adaptability in Pre-trained Language Models with 500 Tasks ...
13. One-Shot Lexicon Learning for Low-Resource Machine Translation ...
14. Lexicon Learning for Few Shot Sequence Modeling ...
15. What Context Features Can Transformer Language Models Use? ...
Read paper: https://www.aclanthology.org/2021.acl-long.70
Abstract: Transformer-based language models benefit from conditioning on contexts of hundreds to thousands of previous tokens. What aspects of these contexts contribute to accurate model prediction? We describe a series of experiments that measure usable information by selectively ablating lexical and structural information in transformer language models trained on English Wikipedia. In both mid- and long-range contexts, we find that several extremely destructive context manipulations, including shuffling word order within sentences and deleting all words other than nouns, remove less than 15% of the usable information. Our results suggest that long contexts, but not their detailed syntactic and propositional content, are important for the low perplexity of current transformer language models. ...
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/sqvk-y475
https://underline.io/lecture/25433-what-context-features-can-transformer-language-models-usequestion
(A short illustrative sketch of the context ablations described in this abstract appears after the results list below.)
16. The Low-Dimensional Linear Geometry of Contextualized Word Representations ...
Hernandez, Evan; Andreas, Jacob. - : arXiv, 2021
17. The Low-Dimensional Linear Geometry of Contextualized Word Representations ...
18. Experience Grounds Language ...
19. Compositional Explanations of Neurons ...
Mu, Jesse; Andreas, Jacob. - : arXiv, 2020
20. A Benchmark for Systematic Generalization in Grounded Language Understanding ...
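The abstract of hit 15 describes two of the destructive context ablations used in that paper: shuffling word order within each sentence and deleting every word except nouns. The sketch below only illustrates what such ablations could look like; it is not the authors' code, and the function names, the toy input, and the assumption that part-of-speech tags are supplied by an external tagger are all hypothetical.

```python
import random

# Illustrative sketch (not the authors' code) of two context ablations
# mentioned in the abstract of hit 15: shuffling word order within each
# sentence, and deleting all words other than nouns. POS tags are assumed
# to come from some external tagger; here they are written by hand.

def shuffle_within_sentences(sentences, seed=0):
    """Shuffle the words inside each sentence while keeping sentence order."""
    rng = random.Random(seed)
    shuffled = []
    for words in sentences:
        words = list(words)  # copy so the input sentence is left untouched
        rng.shuffle(words)
        shuffled.append(words)
    return shuffled

def keep_only_nouns(tagged_sentences, noun_tags=("NN", "NNS", "NNP", "NNPS")):
    """Keep only words whose (externally supplied) Penn Treebank tag is a noun tag."""
    return [[word for word, tag in sent if tag in noun_tags]
            for sent in tagged_sentences]

# Toy usage with hand-tagged input.
sentences = [["the", "cat", "sat", "on", "the", "mat"]]
tagged = [[("the", "DT"), ("cat", "NN"), ("sat", "VBD"),
           ("on", "IN"), ("the", "DT"), ("mat", "NN")]]
print(shuffle_within_sentences(sentences))  # word order scrambled within the sentence
print(keep_only_nouns(tagged))              # [['cat', 'mat']]
```

Either ablated context can then be fed back to a language model and its loss compared against the loss on the intact context, which is the spirit of the "usable information" measurement the abstract describes.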


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 31