
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing ...
Gui, Tao; Wang, Xiao; Zhang, Qi. - : arXiv, 2021
BASE
2
SpanNER: Named Entity Re-/Recognition as Span Prediction ...
BASE
3
Align Voting Behavior with Public Statements for Legislator Representation Learning ...
BASE
4
fastHan: A BERT-based Multi-Task Toolkit for Chinese NLP ...
BASE
5
K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters ...
BASE
6
Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble ...
BASE
7
Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
BASE
8
Classifying Dyads for Militarized Conflict Analysis
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
BASE
9
Efficient Sampling of Dependency Structure
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
BASE
10
Searching for More Efficient Dynamic Programs
In: Findings of the Association for Computational Linguistics: EMNLP 2021 (2021)
BASE
11
“Let Your Characters Tell Their Story”: A Dataset for Character-Centric Narrative Understanding
In: Findings of the Association for Computational Linguistics: EMNLP 2021 (2021)
BASE
12
A Bayesian Framework for Information-Theoretic Probing
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
Abstract: Pimentel et al. (2020) recently analysed probing from an information-theoretic perspective. They argue that probing should be seen as approximating a mutual information. This led to the rather unintuitive conclusion that representations encode exactly the same information about a target task as the original sentences. The mutual information, however, assumes the true probability distribution of a pair of random variables is known, leading to unintuitive results in settings where it is not. This paper proposes a new framework to measure what we term Bayesian mutual information, which analyses information from the perspective of Bayesian agents—allowing for more intuitive findings in scenarios with finite data. For instance, under Bayesian MI we have that data can add information, processing can help, and information can hurt, which makes it more intuitive for machine learning applications. Finally, we apply our framework to probing where we believe Bayesian mutual information naturally operationalises ease of extraction by explicitly limiting the available background knowledge to solve a task.
URL: https://doi.org/10.3929/ethz-b-000518995
https://hdl.handle.net/20.500.11850/518995
BASE
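A minimal sketch of the contrast drawn in the abstract above (hit 12). The first formula is the standard textbook definition of mutual information, which presupposes that the true joint distribution p(x, y) is known; the second is only a schematic paraphrase of the Bayesian variant the abstract describes, with the notation (q, D, I_B) assumed here for illustration rather than taken from the paper:

% Standard mutual information: requires the true joint distribution p(x, y).
\[
  \mathrm{I}(X; Y) \;=\; \mathbb{E}_{p(x,y)}\!\left[ \log \frac{p(x, y)}{p(x)\, p(y)} \right]
  \;=\; \mathrm{H}(Y) - \mathrm{H}(Y \mid X)
\]
% Schematic of the Bayesian variant (assumed notation): entropies are taken
% under an agent's posterior predictive q(. | D), formed from finite data D and
% its limited background knowledge, instead of the unknown true distribution.
\[
  \mathrm{I}_{\mathcal{B}}(X \to Y) \;=\; \mathrm{H}_{q(\cdot \mid D)}(Y) - \mathrm{H}_{q(\cdot \mid D)}(Y \mid X)
\]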
13
Improving Dialogue State Tracking with Turn-based Loss Function and Sequential Data Augmentation
BASE
14
Come hither or go away? Recognising pre-electoral coalition signals in the news
Rehbein, Ines; Ponzetto, Simone Paolo; Adendorf, Anna. - : Association for Computational Linguistics, 2021
BASE

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 14