Search in the Catalogues and Directories

Hits 1–5 of 5

1. Generalized Quantifiers as a Source of Error in Multilingual NLU Benchmarks (BASE)
2. Challenges and Strategies in Cross-Cultural NLP (BASE)
3. Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color (BASE)
4. A Multilingual Benchmark for Probing Negation-Awareness with Minimal Pairs (BASE)
5. Does injecting linguistic structure into language models lead to better alignment with brain recordings? (BASE)
Abstract: Neuroscientists evaluate deep neural networks for natural language processing as candidate models of how language is processed in the brain. These models are often trained without explicit linguistic supervision, yet they have been shown to learn some linguistic structure in the absence of such supervision (Manning et al., 2020), potentially calling into question the relevance of symbolic linguistic theories in modeling such cognitive processes (Warstadt and Bowman, 2020). We evaluate, across two fMRI datasets, whether language models align better with brain recordings when their attention is biased by annotations from syntactic or semantic formalisms. Using structure from dependency or minimal recursion semantics annotations, we find that alignment improves significantly for one of the datasets; for the other, results are more mixed. We present an extensive analysis of these results. Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain, expanding ...
Keywords: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2101.12608
https://arxiv.org/abs/2101.12608
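The abstract outlines an encoding-model style evaluation: model representations are mapped to fMRI responses and alignment is scored on held-out data. As a hypothetical illustration only, not the authors' actual pipeline, the sketch below assumes per-stimulus model activations `X` and voxel responses `Y`, fits a ridge regression, and reports mean voxel-wise Pearson correlation; the synthetic data and all names are placeholders.

```python
# Minimal encoding-model sketch (hypothetical; the paper's exact
# procedure may differ). Fit a ridge regression from model activations
# to fMRI voxel responses and score alignment on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 stimuli, 768-dim activations, 1000 voxels.
n_stimuli, n_features, n_voxels = 500, 768, 1000
X = rng.standard_normal((n_stimuli, n_features))        # model activations
W = 0.1 * rng.standard_normal((n_features, n_voxels))   # fake ground truth map
Y = X @ W + rng.standard_normal((n_stimuli, n_voxels))  # noisy "fMRI" responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

encoder = Ridge(alpha=1.0)       # one linear map, all voxels fit jointly
encoder.fit(X_tr, Y_tr)
Y_pred = encoder.predict(X_te)

def voxelwise_pearson(a, b):
    """Pearson r per voxel (column) between two (samples x voxels) arrays."""
    az = (a - a.mean(axis=0)) / a.std(axis=0)
    bz = (b - b.mean(axis=0)) / b.std(axis=0)
    return (az * bz).mean(axis=0)

print(f"mean held-out alignment r: {voxelwise_pearson(Y_pred, Y_te).mean():.3f}")
```

In the setting the abstract describes, `X` would presumably come from a language model whose attention has been biased toward dependency or minimal recursion semantics annotations, with the same score compared between biased and unbiased variants.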

Hit distribution: Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 5