
Search in the Catalogues and Directories

Hits 1 – 20 of 22

1. Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality
2. ANLIzing the Adversarial Natural Language Inference Dataset
In: Proceedings of the Society for Computation in Linguistics (2022)
3. Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection
4. FLAVA: A Foundational Language And Vision Alignment Model
5. I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling
6. Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation
7. Reservoir Transformers
8. Gradient-based Adversarial Attacks against Text Transformers
9. DynaSent: A Dynamic Benchmark for Sentiment Analysis
10. On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study
11. Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
12. Deep Artificial Neural Networks Reveal a Distributed Cortical Network Encoding Propositional Sentence-Level Meaning
In: J Neurosci (2021)
Abstract: Understanding how and where in the brain sentence-level meaning is constructed from words presents a major scientific challenge. Recent advances have begun to explain brain activation elicited by sentences using vector models of word meaning derived from patterns of word co-occurrence in text corpora. These studies have helped map out semantic representation across a distributed brain network spanning temporal, parietal, and frontal cortex. However, it remains unclear whether activation patterns within regions reflect unified representations of sentence-level meaning, as opposed to superpositions of context-independent component words. This is because models have typically represented sentences as “bags-of-words” that neglect sentence-level structure. To address this issue, we interrogated fMRI activation elicited as 240 sentences were read by 14 participants (9 female, 5 male), using sentences encoded by a recurrent deep artificial neural network trained on a sentence inference task (InferSent). Recurrent connections and nonlinear filters enable InferSent to transform sequences of word vectors into unified “propositional” sentence representations suitable for evaluating intersentence entailment relations. Using voxelwise encoding modeling, we demonstrate that InferSent predicts elements of fMRI activation that cannot be predicted by bag-of-words models and sentence models using grammatical rules to assemble word vectors. This effect occurs throughout a distributed network, which suggests that propositional sentence-level meaning is represented within and across multiple cortical regions rather than at any single site. In follow-up analyses, we place results in the context of other deep network approaches (ELMo and BERT) and estimate the degree of unpredicted neural signal using an “experiential” semantic model and cross-participant encoding.

SIGNIFICANCE STATEMENT: A modern-day scientific challenge is to understand how the human brain transforms word sequences into representations of sentence meaning. A recent approach, emerging from advances in functional neuroimaging, big data, and machine learning, is to computationally model meaning, and use models to predict brain activity. Such models have helped map a cortical semantic information-processing network. However, how unified sentence-level information, as opposed to word-level units, is represented throughout this network remains unclear. This is because models have typically represented sentences as unordered “bags-of-words.” Using a deep artificial neural network that recurrently and nonlinearly combines word representations into unified propositional sentence representations, we provide evidence that sentence-level information is encoded throughout a cortical network, rather than in a single region.
Keyword: Research Articles
URL: https://doi.org/10.1523/JNEUROSCI.1152-20.2021
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8176751/
http://www.ncbi.nlm.nih.gov/pubmed/33753548
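A minimal, hypothetical Python sketch of the voxelwise encoding modeling described in the abstract above, assuming precomputed sentence embeddings: the random arrays stand in for real InferSent vectors and BOLD responses, and the ridge penalty, fold count, and voxel count are illustrative choices, not details taken from the paper.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sentences, emb_dim, n_voxels = 240, 4096, 1000    # 240 sentences, as in the study
embeddings = rng.standard_normal((n_sentences, emb_dim))  # stand-in for InferSent sentence vectors
bold = rng.standard_normal((n_sentences, n_voxels))       # stand-in for per-sentence fMRI responses

# Cross-validated encoding model: one regularized linear map from embedding
# space to all voxels, scored per voxel by predicted-observed correlation.
scores = np.zeros(n_voxels)
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for train, test in kfold.split(embeddings):
    model = Ridge(alpha=1.0).fit(embeddings[train], bold[train])
    pred = model.predict(embeddings[test])
    for v in range(n_voxels):
        scores[v] += np.corrcoef(pred[:, v], bold[test, v])[0, 1] / kfold.n_splits

Under the paper's logic, running the same procedure with a bag-of-words feature space and comparing the per-voxel scores is what isolates activation that only the sentence-level model predicts.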
13. Emergent Linguistic Phenomena in Multi-Agent Communication Games
14. Inferring concept hierarchies from text corpora via hyperbolic embeddings
15. Inferring concept hierarchies from text corpora via hyperbolic embeddings
In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) (2019)
16. Countering Language Drift via Visual Grounding
17. Emergent Translation in Multi-Agent Communication
18. Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns
Anderson, AJ; Kiela, Douwe; Clark, Stephen. Apollo - University of Cambridge Repository, 2017
19. Virtual Embodiment: A Scalable Long-Term Strategy for Artificial Intelligence Research
20. HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment
