
Search in the Catalogues and Directories

Hits 1 – 20 of 62

1
Multilingual Hate Speech Detection ...
Λαυρεντιάδου, Βασιλική Γεωργίου. - : Aristotle University of Thessaloniki, 2022
2
Methods, Models and Tools for Improving the Quality of Textual Annotations
In: Modelling; Volume 3; Issue 2; Pages: 224-242 (2022)
3
Models of diachronic semantic change using word embeddings
Montariol, Syrielle. - : HAL CCSD, 2021
In: https://tel.archives-ouvertes.fr/tel-03199801 ; Document and Text Processing. Université Paris-Saclay, 2021. English. ⟨NNT : 2021UPASG006⟩ (2021)
4
Teachers of Color's Perception on Identity and Academic Success: A Reflective Narrative
In: All Antioch University Dissertations & Theses (2021)
5
A Survey on Multilingual Hate Speech Detection and Classification by Machine Learning Techniques ...
6
A Survey on Multilingual Hate Speech Detection and Classification by Machine Learning Techniques ...
7
APiCS-Ligt: Towards Semantic Enrichment of Interlinear Glossed Text ...
Ionov, Maxim. - : Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021
8
Towards Learning Terminological Concept Systems from Multilingual Natural Language Text ...
Wachowiak, Lennart; Lang, Christian; Heinisch, Barbara. - : Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021
9
Improving Multilingual Models for the Swedish Language: Exploring Cross-Lingual Transferability and Stereotypical Biases
Katsarou, Styliani. - : KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021
10
NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish
In: Applied Sciences ; Volume 11 ; Issue 21 (2021)
Abstract: Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the widely used ROUGE measure but also more semantic ones such as BertScore, do not make it possible to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries: the rearrangement of the original content. We carried out exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.
Keyword: abstractive summarization; monolingual models; multilingual models; transfer learning; transformer models
URL: https://doi.org/10.3390/app11219872
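
Note: The record above introduces a "content reordering" metric for abstractive summaries; its exact definition is given in the paper linked by the DOI. The sketch below is only a hypothetical proxy, not the authors' formulation: it aligns each summary sentence to its most similar source sentence by token overlap and reports the fraction of sentence pairs whose order is reversed relative to the source. The naive sentence splitter, the Jaccard overlap, and the pair-based normalization are all assumptions introduced here for illustration.

    # Hypothetical sketch of a "content reordering" style score. This is NOT the
    # metric defined in the NASca/NASes paper, only one plausible way to quantify
    # how much a summary rearranges the source content.
    from itertools import combinations
    import re


    def split_sentences(text: str) -> list[str]:
        """Naive sentence splitter; a real system would use a proper tokenizer."""
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


    def token_overlap(a: str, b: str) -> float:
        """Jaccard overlap of lower-cased word sets."""
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


    def content_reordering(source: str, summary: str) -> float:
        """Return a score in [0, 1]; higher means the summary presents the
        source content in a more shuffled order (a proxy, not the paper's metric)."""
        src_sents = split_sentences(source)
        sum_sents = split_sentences(summary)
        if len(sum_sents) < 2 or not src_sents:
            return 0.0
        # Align each summary sentence to its most similar source sentence.
        positions = [
            max(range(len(src_sents)), key=lambda i: token_overlap(src_sents[i], s))
            for s in sum_sents
        ]
        # Fraction of summary sentence pairs whose matched source positions are out of order.
        pairs = list(combinations(range(len(positions)), 2))
        discordant = sum(1 for i, j in pairs if positions[i] > positions[j])
        return discordant / len(pairs)


    if __name__ == "__main__":
        source = ("The council approved the budget on Monday. "
                  "Schools will receive extra funding. "
                  "The vote passed by a narrow margin.")
        summary = ("Schools get more money after a narrow vote. "
                   "The budget was approved by the council on Monday.")
        print(f"content reordering (proxy): {content_reordering(source, summary):.2f}")

In the toy example the single matched sentence pair comes out reversed, so the proxy reports 1.00; an alignment based on embedding similarity and a Kendall-tau style normalization would be natural refinements.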
11
Inductive Bias and Modular Design for Sample-Efficient Neural Language Learning ...
Ponti, Edoardo. - : Apollo - University of Cambridge Repository, 2021
12
Analyzing Non-Textual Content Elements to Detect Academic Plagiarism
13
Learning to scale multilingual representations for vision-language tasks
Burns, Andrea; Kim, Donghyun; Wijaya, Derry. - : Springer, 2020
14
MT models for multilingual CLuBS engine (en-de-fr-es) ...
15
MT models for multilingual CLuBS engine (en-de-fr-es) ...
16
Inductive Bias and Modular Design for Sample-Efficient Neural Language Learning
Ponti, Edoardo. - : University of Cambridge, 2020. : St John's, 2020
17
SberQuAD – Russian Reading Comprehension Dataset: Description and Analysis
In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (2020)
18
Assessing English Writing in Multilingual Writers in Higher Education: A Longitudinal Study
In: Applied Linguistics and English as a Second Language Dissertations (2019)
19
Character language models for generalization of multilingual named entity recognition
Yu, Xiaodong. - 2019
20
Multilingual Information Access (MLIA) Tools on Google and WorldCat: Bi/Multilingual University Students’ Experience and Perceptions
In: FIMS Publications (2019)


Results by source: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 62