Catalogue search
Hits 1 – 4 of 4
1. Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ...
   Yamaguchi, Atsuki; Chrysostomou, George; Margatina, Katerina. - arXiv, 2021
   Source: BASE
2. Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification ...
   The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing 2021; Aletras, Nikolaos; Chrysostomou, George. - Underline Science Inc., 2021
Abstract:
Read paper: https://www.aclanthology.org/2021.acl-long.40
Neural network architectures in natural language processing often use attention mechanisms to produce probability distributions over input token representations. Attention has empirically been demonstrated to improve performance in various tasks, while its weights have been extensively used as explanations for model predictions. Recent studies (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019) have shown that attention cannot generally be considered a faithful explanation (Jacovi and Goldberg, 2020) across encoders and tasks. In this paper, we seek to improve the faithfulness of attention-based explanations for text classification. We achieve this by proposing a new family of Task-Scaling (TaSc) mechanisms that learn task-specific non-contextualised information to scale the original attention weights. Evaluation tests for explanation faithfulness show that the three proposed variants of TaSc improve attention-based ...
Keyword: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL:
https://underline.io/lecture/25394-improving-the-faithfulness-of-attention-based-explanations-with-task-specific-information-for-text-classification
https://dx.doi.org/10.48448/q32p-7d89
Source: BASE
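The abstract above describes rescaling attention weights with learned, task-specific, non-contextualised information. As a rough illustrative sketch of that idea only (not the authors' implementation: shapes, names, and the random stand-in values are all assumptions), one variant could look like per-vocabulary-entry scalars applied to an attention distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, seq_len = 50, 6
token_ids = rng.integers(0, vocab_size, size=seq_len)

# Original attention distribution over the input tokens (softmax-normalised).
logits = rng.normal(size=seq_len)
attn = np.exp(logits) / np.exp(logits).sum()

# Hypothetical task-specific, NON-contextualised parameters: one scalar per
# vocabulary entry. Random here, standing in for trained values.
task_scale = rng.normal(size=vocab_size)

# Scale each token's original attention weight by its learned scalar,
# then renormalise so the result is again a probability distribution.
scores = attn * task_scale[token_ids]
scaled_attn = np.exp(scores) / np.exp(scores).sum()
```

The renormalisation step keeps the scaled weights usable as an explanation over tokens; how the scalars are trained and combined is exactly what the paper's three TaSc variants differ on.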
3. Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience ...
   The 2021 Conference on Empirical Methods in Natural Language Processing 2021; Aletras, Nikolaos; Chrysostomou, George. - Underline Science Inc., 2021
   Source: BASE
4. Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ...
   The 2021 Conference on Empirical Methods in Natural Language Processing 2021; Aletras, Nikolaos; Chrysostomou, George. - Underline Science Inc., 2021
   Source: BASE
© 2013 - 2024 Lin|gu|is|tik