2. On the Use of Linguistic Features for the Evaluation of Generative Dialogue Systems

3. TorontoCL at CMCL 2021 Shared Task: RoBERTa with Multi-Stage Fine-Tuning for Eye-Tracking Prediction

4. Quantifying the Task-Specific Information in Text-Based Classifications

5. An Evaluation of Disentangled Representation Learning for Texts

6. How is BERT surprised? Layerwise detection of linguistic anomalies

7. Comparing Pre-trained and Feature-Based Models for Prediction of Alzheimer's Disease Based on Speech
   In: Front Aging Neurosci (2021)

8. Identification of primary and collateral tracks in stuttered speech
   In: LREC 2020, 12th Conference on Language Resources and Evaluation, May 2020, Marseille, France. https://hal.archives-ouvertes.fr/hal-02959454

9. Semantic coordinates analysis reveals language changes in the AI field

11. To BERT or Not To BERT: Comparing Speech and Language-based Approaches for Alzheimer's Disease Detection

12. An information theoretic view on selecting linguistic probes

13. Examining the rhetorical capacities of neural language models
   Abstract: Recently, neural language models (LMs) have demonstrated impressive abilities in generating high-quality discourse. While many recent papers have analyzed the syntactic aspects encoded in LMs, there has been no analysis to date of the inter-sentential, rhetorical knowledge. In this paper, we propose a method that quantitatively evaluates the rhetorical capacities of neural LMs. We examine the capacities of neural LMs in understanding the rhetoric of discourse by evaluating their abilities to encode a set of linguistic features derived from Rhetorical Structure Theory (RST). Our experiments show that BERT-based LMs outperform other Transformer LMs, revealing the richer discourse knowledge in their intermediate layer representations. In addition, GPT-2 and XLNet apparently encode less rhetorical knowledge, and we suggest an explanation drawing from linguistic philosophy. Our method shows an avenue towards quantifying the rhetorical capacities of neural LMs.
   Note: EMNLP 2020 BlackboxNLP Workshop
   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
   URL: https://arxiv.org/abs/2010.00153 ; DOI: https://dx.doi.org/10.48550/arxiv.2010.00153

14. A textual analysis of US corporate social responsibility reports

15. Lexical Features Are More Vulnerable, Syntactic Features Have More Predictive Power

16. Representation Learning for Discovering Phonemic Tone Contours

18. The Effect of Heterogeneous Data for Alzheimer's Disease Detection from Speech

19. Detecting cognitive impairments by agreeing on interpretations of linguistic features

20. Deconfounding age effects with fair representation learning when assessing dementia