
Search in the Catalogues and Directories

Hits 821–830 of 830 (page 42 of 42)

821
Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction ...
BASE
822
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little ...
BASE
823
A Data Bootstrapping Recipe for Low-Resource Multilingual Relation Classification ...
BASE
824
SOM-NCSCM : An Efficient Neural Chinese Sentence Compression Model Enhanced with Self-Organizing Map ...
BASE
825
The Low-Dimensional Linear Geometry of Contextualized Word Representations ...
BASE
826
Data Collection vs. Knowledge Graph Completion: What is Needed to Improve Coverage? ...
BASE
827
Linguistic Dependencies and Statistical Dependence ...
BASE
828
Does referent predictability affect the choice of referential form? A computational approach using masked coreference resolution ...
BASE
829
Coreference-aware Surprisal Predicts Brain Response ...
BASE
830
Pragmatic competence of pre-trained language models through the lens of discourse connectives ...
Abstract: As pre-trained language models (LMs) continue to dominate NLP, it is increasingly important that we understand the depth of language capabilities in these models. In this paper, we target pre-trained LMs' competence in pragmatics, with a focus on pragmatics relating to discourse connectives. We formulate cloze-style tests using a combination of naturally-occurring data and controlled inputs drawn from psycholinguistics. We focus on testing models' ability to use pragmatic cues to predict discourse connectives, models' ability to understand implicatures relating to connectives, and the extent to which models show humanlike preferences regarding temporal dynamics of connectives. We find that although models predict connectives reasonably well in the context of naturally-occurring data, when we control contexts to isolate high-level pragmatic cues, model sensitivity is much lower. Models also do not show substantial humanlike temporal preferences. Overall, the findings suggest that at present, dominant ...
Keyword: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://underline.io/lecture/39856-pragmatic-competence-of-pre-trained-language-models-through-the-lens-of-discourse-connectives
https://dx.doi.org/10.48448/x840-9k05
BASE
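A minimal sketch of the cloze-style connective test this abstract describes, assuming a HuggingFace fill-mask setup; the model choice (bert-base-uncased), the two contexts, and the candidate connectives are illustrative assumptions, not the paper's actual materials:

# Cloze-style probe: mask a discourse connective and score candidate
# fillers. Model, contexts, and candidates are illustrative assumptions,
# not the materials used in the paper.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Two contexts whose pragmatic cue should favor different connectives:
# the first suggests an expected consequence, the second a contrast.
contexts = [
    "The road was icy, [MASK] the drive took twice as long.",
    "The road was icy, [MASK] we arrived right on time.",
]
candidates = ["so", "but", "and", "because"]

for text in contexts:
    print(text)
    # Restrict scoring to the candidate connectives (each is a single
    # token in the BERT vocabulary) and print them ranked by probability.
    for r in fill(text, targets=candidates):
        print(f"  {r['token_str']}: {r['score']:.3f}")

If the model tracks the pragmatic cue, the preferred connective should flip between the two contexts (roughly, toward "so" in the first and "but" in the second); the paper's finding is that such sensitivity drops once contexts are controlled to isolate high-level pragmatic cues.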


Results by source: Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 830