
Search in the Catalogues and Directories

Hits 1 – 20 of 307

1
Le modèle Transformer: un « couteau suisse » pour le traitement automatique des langues [The Transformer model: a "Swiss Army knife" for natural language processing]
In: Techniques de l'Ingénieur, ⟨10.51257/a-v1-in195⟩; https://hal.archives-ouvertes.fr/hal-03619077; https://www.techniques-ingenieur.fr/base-documentaire/innovation-th10/innovations-en-electronique-et-tic-42257210/transformer-des-reseaux-de-neurones-pour-le-traitement-automatique-des-langues-in195/ (2022)
BASE
2
Structured, flexible, and robust: comparing linguistic plans and explanations generated by humans and large language models ...
Wei, Megan. Open Science Framework, 2022
Abstract: How much can be learned about the structure of thinking from the statistics of language alone? Large language models -- neural models trained on next-word prediction tasks over large corpora of text -- have made striking advances in modeling the statistical distribution of language. Sufficiently large corpora contain language in which humans describe their beliefs and intentions, their goals and plans, and their stories about occurrences in real and imaginary worlds. Richly structured cognitive processes underlie the language we produce; can such structure be captured when modeling the distributional co-occurrence of words alone? Is language modeling alone sufficiently flexible, accurate, and robust to generate language for novel, out-of-distribution queries, or are model-based approaches needed? In this study, we compare human and large-language-model performance on two domains that draw on structured, model-based thinking: 1) goal-based planning, and 2) explanation generation for causal ...
Keyword: Artificial Intelligence and Robotics; Computer Sciences; GPT-3; Language Models; Natural Language Processing; Physical Sciences and Mathematics; Planning
URL: https://dx.doi.org/10.17605/osf.io/cy72b
https://osf.io/cy72b/
BASE
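The abstract above describes large language models as neural networks trained on next-word prediction over large text corpora. Purely as an illustrative sketch (not part of the record), the snippet below queries such a model for its next-word distribution, assuming the HuggingFace transformers library and the small, freely available gpt2 checkpoint; the study itself lists GPT-3, which is only reachable through OpenAI's API and is not used here.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative stand-in: the cited study evaluates GPT-3; the freely
# downloadable gpt2 checkpoint is used here only for demonstration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# A planning-flavoured prompt, loosely echoing the study's goal-based planning domain.
prompt = "To plan a trip to the grocery store, first you"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token given the prompt.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_word_probs.topk(5)
for prob, token_id in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {prob:.3f}")

Running this prints the five most probable continuations of the prompt with their probabilities; it does not reproduce the study's planning or explanation-generation tasks.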
3
Easy-to-use combination of POS and BERT model for domain-specific and misspelled terms
In: NL4IA Workshop Proceedings, Nov 2021, Milan, Italy; https://hal.archives-ouvertes.fr/hal-03474696 (2021)
BASE
4
Globalizing BERT-based Transformer Architectures for Long Document Summarization
In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Association for Computational Linguistics, Apr 2021, Online, France; https://hal.univ-grenoble-alpes.fr/hal-03367913 (2021)
BASE
5
Automatic Error Type Annotation for Arabic ...
BASE
6
Navigating the Kaleidoscope of COVID-19 Misinformation Using Deep Learning ...
BASE
7
HittER: Hierarchical Transformers for Knowledge Graph Embeddings ...
BASE
8
Detecting Gender Bias using Explainability ...
BASE
9
HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization ...
BASE
10
Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification ...
BASE
11
Contrastive Code Representation Learning ...
BASE
12
Unsupervised Multi-View Post-OCR Error Correction With Language Models ...
BASE
13
AttentionRank: Unsupervised Keyphrase Extraction using Self and Cross Attentions ...
BASE
14
Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning ...
BASE
15
Towards the Early Detection of Child Predators in Chat Rooms: A BERT-based Approach ...
BASE
16
Semantic Categorization of Social Knowledge for Commonsense Question Answering ...
BASE
17
Pre-train or Annotate? Domain Adaptation with a Constrained Budget ...
BASE
18
Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? ...
BASE
19
CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization ...
BASE
20
Automatic Text Evaluation through the Lens of Wasserstein Barycenters ...
BASE


Facets: Open access documents: 307; Catalogues, Bibliographies, Linked Open Data catalogues, and Online resources: 0