
Search in the Catalogues and Directories

Hits 21 – 40 of 158

21
The Language of Dreams: Application of Linguistics-Based Approaches for the Automated Analysis of Dream Experiences
In: Clocks & Sleep; Volume 3; Issue 3; Pages 35-514 (2021)
BASE
22
Completing WordNets with Sememe Knowledge
In: Electronics; Volume 11; Issue 1; Pages: 79 (2021)
BASE
23
Combination of Time Series Analysis and Sentiment Analysis for Stock Market Forecasting
Chou, Hsiao-Chuan. - : Digital Commons @ University of South Florida, 2021
In: Graduate Theses and Dissertations (2021)
BASE
24
Comparing vector document representation methods for authorship identification
Quintanilla, Pamela Rosy Revuelta. - : Biblioteca Digital de Teses e Dissertações da USP; Universidade de São Paulo, Instituto de Matemática e Estatística, 2021
BASE
25
Sulle tracce dell’espressione dell’interiorità: analisi diacronica di un corpus di narrativa italiana del XIX-XX secolo [On the trail of the expression of interiority: a diachronic analysis of a corpus of 19th–20th-century Italian fiction]
Sciandra, Andrea; Trevisani, Matilde; Tuzzi, Arjuna. - : EUT Edizioni Università di Trieste, 2021
BASE
26
Improved GloVe Word Embedding Using Linear Weighting Scheme for Word Similarity Tasks
Lu, Qinglan. - : University of Windsor, 2021
In: Electronic Theses and Dissertations (2021)
BASE
27
On The Role of Machine Learning in A Human Learning Process
In: Teaching Culturally and Linguistically Diverse International Students in Open or Online Learning Environments: A Research Symposium (2021)
BASE
28
Detection of Hate Speech Spreaders using Convolutional Neural Networks
Siino, Marco; Di Nuovo, Elisa; Tinnirello, Ilenia; La Cascia, Marco. - : CEUR, Aachen, Germany, 2021
BASE
29
Whatever it takes to understand a central banker: Embedding their words using neural networks
Zahner, Johannes; Baumgärtner, Martin. - : Marburg: Philipps-University Marburg, School of Business and Economics, 2021
BASE
30
Complete Variable-Length Codes: An Excursion into Word Edit Operations
In: LATA 2020, Mar 2020, Milan, Italy; https://hal.archives-ouvertes.fr/hal-02389403 (2020)
BASE
31
Apprentissage de plongements de mots sur des corpus en langue de spécialité : une étude d’impact [Learning word embeddings on specialized-language corpora: an impact study]
In: Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition), Volume 3 : Rencontre des Étudiants Chercheurs en Informatique pour le TAL, Jun 2020, Nancy, France, pp. 164-178; https://hal.archives-ouvertes.fr/hal-02786198 (2020)
BASE
32
Unsupervised cross-lingual representation modeling for variable length phrases
Liu, Jingshu. - : HAL CCSD, 2020
In: Computation and Language [cs.CL], Université de Nantes, 2020, English; https://hal.archives-ouvertes.fr/tel-02938554 (2020)
BASE
33
Word Representations Concentrate and This is Good News!
In: CoNLL 2020 - 24th Conference on Computational Natural Language Learning, Association for Computational Linguistics (ACL), Nov 2020, Online, France, pp. 325-334, ⟨10.18653/v1/2020.conll-1.25⟩; https://hal.univ-grenoble-alpes.fr/hal-03356609 (2020)
BASE
34
Implementing Eco’s Model Reader with Word Embeddings. An Experiment on Facebook Ideological Bots
In: JADT - Journées d'analyse des données textuelles, Jun 2020, Toulouse, France; https://hal.archives-ouvertes.fr/hal-03144105 (2020)
BASE
35
Natural language understanding in argumentative dialogue systems ...
Shigehalli, Pavan Rajashekhar. - : Universität Ulm, 2020
BASE
36
Automatic Creation of Correspondence Table of Meaning Tags from Two Dictionaries in One Language Using Bilingual Word Embedding
Teruo Hirabayashi; Kanako Komiya; Masayuki Asahara. - : European Language Resources Association, 2020
BASE
37
ArAutoSenti: Automatic annotation and new tendencies for sentiment classification of Arabic messages
BASE
38
French AXA Insurance Word Embeddings: Effects of Fine-tuning BERT and Camembert on AXA France’s data
Zouari, Hend. - : KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020
Abstract: In this study we explore the state-of-the-art Natural Language Processing technologies that transform textual data into numerical representations, covering the theory of both the traditional methods and the most recent ones. The thesis focuses on recent advances in Natural Language Processing built on the Transformer model. One of the most relevant innovations was the release of a deep bidirectional encoder called BERT, which broke several state-of-the-art results. BERT uses transfer learning to improve the modelling of language dependencies in text. BERT is used for several languages, and other specialized models have been released, such as the French BERT, CamemBERT. This thesis compares the language models of these different pre-trained models and their capability to support domain adaptation. Using the multilingual and French pre-trained versions of BERT and a dataset from AXA France’s emails, client messages, legal documents, and insurance documents containing over 60 million words, we fine-tuned the language models to adapt them to AXA’s French insurance context and create a French AXA Insurance BERT model. We evaluate the performance of this model on the language model’s capability to predict a masked token from its context. Without fine-tuning, BERT performs better, modelling the French AXA insurance text better than CamemBERT; however, with this small amount of data, CamemBERT is more capable of adapting to the specific insurance domain.
Keyword: BERT; camemBERT; Computer and Information Sciences; Data- och informationsvetenskap; Language model; NLP; Word embedding
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284108
BASE
39
Entropy-Based Approach for the Detection of Changes in Arabic Newspapers’ Content
In: Entropy; Volume 22; Issue 4 (2020)
BASE
40
A Framework for Word Embedding Based Automatic Text Summarization and Evaluation
In: Information; Volume 11; Issue 2 (2020)
BASE

