2. Quantifying the Task-Specific Information in Text-Based Classifications ... [BASE]
3. How is BERT surprised? Layerwise detection of linguistic anomalies ... [BASE]
4. Semantic coordinates analysis reveals language changes in the AI field ... [BASE]
5. An information theoretic view on selecting linguistic probes ... [BASE]
6. Examining the rhetorical capacities of neural language models ... [BASE]
   Abstract: Recently, neural language models (LMs) have demonstrated impressive abilities in generating high-quality discourse. While many recent papers have analyzed the syntactic aspects encoded in LMs, there has been no analysis to date of their inter-sentential, rhetorical knowledge. In this paper, we propose a method that quantitatively evaluates the rhetorical capacities of neural LMs. We examine how well neural LMs understand the rhetoric of discourse by evaluating their ability to encode a set of linguistic features derived from Rhetorical Structure Theory (RST). Our experiments show that BERT-based LMs outperform other Transformer LMs, revealing richer discourse knowledge in their intermediate-layer representations. In addition, GPT-2 and XLNet apparently encode less rhetorical knowledge, and we suggest an explanation drawing from linguistic philosophy. Our method shows an avenue towards quantifying the rhetorical capacities of neural LMs.
   EMNLP 2020 BlackboxNLP Workshop
   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
   URL: https://arxiv.org/abs/2010.00153
   DOI: https://dx.doi.org/10.48550/arxiv.2010.00153
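The abstract above describes probing intermediate-layer representations for linguistic features. As a rough illustration of that general idea (not the paper's actual method), the following sketch trains a linear logistic-regression probe on each "layer" and compares probe accuracies; the layer representations, feature labels, and signal strengths here are all synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer_reps(n=200, d=32, signal=1.0):
    """Synthetic stand-in for one layer's hidden states: a binary
    feature label is linearly recoverable with strength `signal`."""
    w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    logits = signal * X @ w_true + rng.normal(size=n)  # noisy linear signal
    y = (logits > 0).astype(float)
    return X, y

def train_linear_probe(X, y, lr=0.1, epochs=200):
    """Logistic-regression probe trained with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * np.mean(p - y)                # gradient step on bias
    return w, b

def probe_accuracy(X, y, w, b):
    return float(np.mean(((X @ w + b) > 0) == y))

if __name__ == "__main__":
    # Hypothetical layers whose feature "signal" differs; a stronger
    # signal should yield a more accurate probe.
    for layer, signal in [(1, 0.2), (6, 1.0), (12, 0.5)]:
        X, y = make_layer_reps(signal=signal)
        w, b = train_linear_probe(X, y)
        print(f"layer {layer}: probe accuracy = {probe_accuracy(X, y, w, b):.2f}")
```

In a real experiment the synthetic `make_layer_reps` would be replaced by actual hidden states from a pretrained LM and labels derived from RST annotations; the probe-training loop itself would stay essentially the same.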
7. Detecting cognitive impairments by agreeing on interpretations of linguistic features ... [BASE]
8. Deconfounding age effects with fair representation learning when assessing dementia ... [BASE]