Catalogue search
Hits 1–5 of 5
1. Cui, Ruixiang; Hershcovich, Daniel; Søgaard, Anders: Generalized Quantifiers as a Source of Error in Multilingual NLU Benchmarks ... arXiv, 2022. [BASE]
2. Hershcovich, Daniel; Frank, Stella; Lent, Heather: Challenges and Strategies in Cross-Cultural NLP ... arXiv, 2022. [BASE]
3. The 2021 Conference on Empirical Methods in Natural Language Processing 2021; Abdou, Mostafa; Frank, Stella: Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color ... Underline Science Inc., 2021. [BASE]
4. The 2021 Conference on Empirical Methods in Natural Language Processing 2021; de Lhoneux, Miryam; Hartmann, Mareike: A Multilingual Benchmark for Probing Negation-Awareness with Minimal Pairs ... Underline Science Inc., 2021. [BASE]
5. Abdou, Mostafa; Gonzalez, Ana Valeria; Toneva, Mariya; Hershcovich, Daniel; Søgaard, Anders: Does injecting linguistic structure into language models lead to better alignment with brain recordings? ... arXiv, 2021.
Abstract:
Neuroscientists evaluate deep neural networks for natural language processing as possible candidate models for how language is processed in the brain. These models are often trained without explicit linguistic supervision but have been shown to learn some linguistic structure in the absence of such supervision (Manning et al., 2020), potentially questioning the relevance of symbolic linguistic theories in modeling such cognitive processes (Warstadt and Bowman, 2020). We evaluate across two fMRI datasets whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms. Using structure from dependency or minimal recursion semantics annotations, we find alignments improve significantly for one of the datasets. For another dataset, we see more mixed results. We present an extensive analysis of these results. Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain, expanding ...
Keyword:
Artificial Intelligence (cs.AI); Computation and Language (cs.CL); FOS: Computer and information sciences
URL:
https://dx.doi.org/10.48550/arxiv.2101.12608
https://arxiv.org/abs/2101.12608
[BASE]
© 2013–2024 Lin|gu|is|tik