
Search in the Catalogues and Directories

Hits 1 – 8 of 8

1
Analyse en dépendances du français avec des plongements contextualisés [Dependency parsing of French with contextualized embeddings]
In: 28e Conférence sur le Traitement Automatique des Langues Naturelles, Jun 2021, Lille (virtual), France. https://hal.archives-ouvertes.fr/hal-03223424 (2021)
2
Analyse en dépendances du français avec des plongements contextualisés [Dependency parsing of French with contextualized embeddings]
In: Actes de la 28e Conférence sur le Traitement Automatique des Langues Naturelles, Volume 1 : conférence principale, 2021, Lille, France, pp. 106-114. https://hal.archives-ouvertes.fr/hal-03265893 (2021)
3
Contrasting distinct structured views to learn sentence embeddings
In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, 2021, Kyiv, Ukraine. https://hal.archives-ouvertes.fr/hal-03601428 (2021)
4
How Many Layers and Why? An Analysis of the Model Depth in Transformers
In: Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop, 2021, Bangkok, Thailand. https://hal.archives-ouvertes.fr/hal-03601412 (2021)
Abstract: In this study, we investigate the role of the multiple layers in deep transformer models. We design a variant of ALBERT that dynamically adapts the number of layers for each token of the input. The key specificity of ALBERT is that weights are tied across layers; the stack of encoder layers therefore iteratively repeats the application of the same transformation function on the input. We interpret this repetition as an iterative process in which the tokens' contextualized representations are progressively refined. We analyze this process at the token level during pretraining, fine-tuning, and inference. We show that tokens do not require the same number of iterations and that tokens that are difficult or crucial for the task are subject to more iterations.
Keyword: [INFO.INFO-AI] Computer Science [cs] / Artificial Intelligence [cs.AI]
URL: https://hal.archives-ouvertes.fr/hal-03601412
https://hal.archives-ouvertes.fr/hal-03601412/file/2021.acl-srw.23.pdf
https://hal.archives-ouvertes.fr/hal-03601412/document
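
The abstract above describes the mechanism in prose: a single weight-tied encoder layer is applied repeatedly, and each token may receive a different number of refinement iterations. The following minimal PyTorch sketch only illustrates that idea; the class name, the sigmoid-threshold halting rule, and all hyperparameters are hypothetical stand-ins, not the authors' implementation.

import torch
import torch.nn as nn

class TiedDepthAdaptiveEncoder(nn.Module):
    # Hypothetical sketch: ALBERT-style weight tying with per-token
    # adaptive depth. Not the paper's actual code.
    def __init__(self, d_model=64, n_heads=4, max_layers=12, threshold=0.99):
        super().__init__()
        # One layer reused at every depth: weights are tied across layers.
        self.layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.halt = nn.Linear(d_model, 1)  # per-token halting score (assumed)
        self.max_layers = max_layers
        self.threshold = threshold

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        halted = torch.zeros(x.shape[:2], dtype=torch.bool, device=x.device)
        depths = torch.zeros(x.shape[:2], dtype=torch.long, device=x.device)
        for _ in range(self.max_layers):
            y = self.layer(x)             # same transformation each iteration
            active = ~halted              # tokens still being refined
            x = torch.where(active.unsqueeze(-1), y, x)
            depths += active.long()
            p = torch.sigmoid(self.halt(x)).squeeze(-1)
            halted |= p > self.threshold  # token-level stopping decision
            if bool(halted.all()):
                break
        return x, depths                  # iterations applied per token

# Usage: per the abstract, "difficult or crucial" tokens should
# accumulate larger depth counts than easy ones.
enc = TiedDepthAdaptiveEncoder()
out, depths = enc(torch.randn(2, 10, 64))
print(depths)

In this toy version a token is frozen once its halting score crosses the threshold, so the returned depths tensor records the per-token iteration counts, the quantity the paper analyzes.
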
5
Deep Sequoia corpus - PARSEME-FR corpus - FrSemCor
6
Word order in French: the role of animacy
In: Glossa: a journal of general linguistics, Vol. 6, No. 1 (2021), Art. 55. ISSN 2397-1835.
7
Are Transformers a Modern Version of ELIZA? Observations on French Object Verb Agreement ...
8
Can RNNs learn Recursive Nested Subject-Verb Agreements? ...

Results by source type: Catalogues 0, Bibliographies 0, Linked Open Data catalogues 0, Online resources 0, Open access documents 8.