
Search in the Catalogues and Directories

Hits 1–20 of 1,584

1
RETRIEVING SPEAKER INFORMATION FROM PERSONALIZED ACOUSTIC MODELS FOR SPEECH RECOGNITION
In: IEEE ICASSP 2022 ; https://hal.archives-ouvertes.fr/hal-03539741 ; IEEE ICASSP 2022, 2022, Singapore (2022)
BASE
2
From FreEM to D'AlemBERT: a Large Corpus and a Language Model for Early Modern French
In: Proceedings of the 13th Language Resources and Evaluation Conference ; https://hal.inria.fr/hal-03596653 ; Proceedings of the 13th Language Resources and Evaluation Conference, European Language Resources Association, Jun 2022, Marseille, France (2022)
BASE
3
Le modèle Transformer : un « couteau suisse » pour le traitement automatique des langues [The Transformer model: a "Swiss army knife" for natural language processing]
In: Techniques de l'Ingénieur ; https://hal.archives-ouvertes.fr/hal-03619077 ; Techniques de l'Ingénieur, 2022, ⟨10.51257/a-v1-in195⟩ ; https://www.techniques-ingenieur.fr/base-documentaire/innovation-th10/innovations-en-electronique-et-tic-42257210/transformer-des-reseaux-de-neurones-pour-le-traitement-automatique-des-langues-in195/ (2022)
BASE
4
Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost
In: ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics ; https://hal.archives-ouvertes.fr/hal-03613101 ; ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, May 2022, Dublin, Ireland (2022)
BASE
5
Imputing out-of-vocabulary embeddings with LOVE makes language models robust with little cost
In: ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics ; https://hal.archives-ouvertes.fr/hal-03613101 ; ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, May 2022, Dublin, Ireland (2022)
BASE
6
Structured, flexible, and robust: comparing linguistic plans and explanations generated by humans and large language models ...
Wei, Megan. - : Open Science Framework, 2022
BASE
7
On the Transferability of Pre-trained Language Models for Low-Resource Programming Languages ...
Chen, Fuxiang. - : Federated Research Data Repository / dépôt fédéré de données de recherche, 2022
BASE
8
Sentence Level Embedding Detoxification via Toxic Component Removal ...
: University of Virginia, 2022
BASE
9
MIss RoBERTa WiLDe: Metaphor Identification Using Masked Language Model with Wiktionary Lexical Definitions
In: Applied Sciences; Volume 12; Issue 4; Pages: 2081 (2022)
BASE
10
Considering Commonsense in Solving QA: Reading Comprehension with Semantic Search and Continual Learning
In: Applied Sciences; Volume 12; Issue 9; Pages: 4099 (2022)
BASE
11
Analysis of the Full-Size Russian Corpus of Internet Drug Reviews with Complex NER Labeling Using Deep Learning Neural Networks and Language Models
In: Applied Sciences; Volume 12; Issue 1; Pages: 491 (2022)
Abstract: The paper presents the full-size Russian corpus of Internet users’ reviews on medicines with complex named entity recognition (NER) labeling of pharmaceutically relevant entities. We evaluate the accuracy levels reached on this corpus by a set of advanced deep learning neural networks for extracting mentions of these entities. The corpus markup includes mentions of the following entities: medication (33,005 mentions), adverse drug reaction (1778), disease (17,403), and note (4490). Two of them—medication and disease—include a set of attributes. A part of the corpus has a coreference annotation with 1560 coreference chains in 300 documents. A multi-label model based on a language model and a set of features has been developed for recognizing entities of the presented corpus. We analyze how the choice of different model components affects the entity recognition accuracy. Those components include methods for vector representation of words, types of language models pre-trained for the Russian language, ways of text normalization, and other pre-processing methods. The sufficient size of our corpus allows us to study the effects of particularities of annotation and entity balancing. We compare our corpus to existing ones by the occurrences of entities of different types and show that balancing the corpus by the number of texts with and without adverse drug event (ADR) mentions improves the ADR recognition accuracy with no notable decline in the accuracy of detecting entities of other types. As a result, the state of the art for the pharmacological entity extraction task for the Russian language is established on a full-size labeled corpus. For the ADR entity type, the accuracy achieved is 61.1% by the F1-exact metric, which is on par with the accuracy level for other language corpora with similar characteristics and ADR representativeness. 
The accuracy of the coreference relation extraction evaluated on our corpus is 71%, which is higher than the results achieved on the other Russian-language corpora.
Keyword: adverse drug events; annotated corpus; coreference relation extraction; deep learning; information extraction; language models; machine learning; MESHRUS; named entity recognition; neural networks; pharmacovigilance; social media; UMLS
URL: https://doi.org/10.3390/app12010491
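The abstract above reports ADR extraction accuracy of 61.1% by the F1-exact metric. As a minimal sketch (not the paper's code), exact-match entity F1 counts a predicted entity as correct only when its type and span both match a gold entity exactly; the entity tuples below are hypothetical examples, not data from the corpus:

```python
def f1_exact(gold, pred):
    """F1 over exact entity matches.

    gold, pred: collections of (entity_type, start, end) tuples.
    A prediction is a true positive only if the identical tuple
    appears in the gold set (same type, same span boundaries).
    """
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold and predicted annotations: one exact hit ("ADR"),
# one prediction whose span is off by one character ("Medication").
gold = {("ADR", 10, 18), ("Medication", 0, 7), ("Disease", 25, 33)}
pred = {("ADR", 10, 18), ("Medication", 0, 6)}
print(round(f1_exact(gold, pred), 3))  # 0.4  (precision 1/2, recall 1/3)
```

Note that under exact matching the off-by-one "Medication" span earns no credit at all, which is why F1-exact is a stricter figure than partial-overlap variants.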
BASE
12
Commonsense Knowledge-Aware Prompt Tuning for Few-Shot NOTA Relation Classification
In: Applied Sciences; Volume 12; Issue 4; Pages: 2185 (2022)
BASE
13
Transformer-Based Abstractive Summarization for Reddit and Twitter: Single Posts vs. Comment Pools in Three Languages
In: Future Internet; Volume 14; Issue 3; Pages: 69 (2022)
BASE
14
Correcting Diacritics and Typos with a ByT5 Transformer Model
In: Applied Sciences; Volume 12; Issue 5; Pages: 2636 (2022)
BASE
15
Language Competition and Language Shift in Friuli-Venezia Giulia: Projection and Trajectory for the Number of Friulian Speakers to 2050
In: Sustainability; Volume 14; Issue 6; Pages: 3319 (2022)
BASE
16
An Information Theoretic Approach to Symbolic Learning in Synthetic Languages
In: Entropy; Volume 24; Issue 2; Pages: 259 (2022)
BASE
17
Comparison of Text Mining Models for Food and Dietary Constituent Named-Entity Recognition
In: Machine Learning and Knowledge Extraction; Volume 4; Issue 1; Pages: 254-275 (2022)
BASE
18
Regression modeling for linguistic data ...
Sonderegger, Morgan. - : Open Science Framework, 2022
BASE
19
Language and vision in conceptual processing: Multilevel analysis and statistical power ...
Bernabeu, Pablo. - : Open Science Framework, 2022
BASE
20
Exploring the Representations of Individual Entities in the Brain Combining EEG and Distributional Semantics.
Bruera, A; Poesio, M. - 2022
BASE


Catalogues: 21, 0, 3, 0, 0, 2, 0
Bibliographies: 63, 0, 0, 0, 0, 0, 0, 10, 17
Linked Open Data catalogues: 0
Online resources: 0, 0, 0, 0
Open access documents: 1,492, 0, 0, 0, 0
© 2013 - 2024 Lin|gu|is|tik