
Search in the Catalogues and Directories

Hits 1 – 20 of 1,582

1
The Role of Sensorimotor and Linguistic Distributional Information in Categorisation ...
Van Hoef, Rens. - : Lancaster University, 2022
BASE
2
Representations of refugees in the Greek online press: A case study, April 2020 ; Αναπαραστάσεις προσφύγων στο Ελληνικό ηλεκτρονικό τύπο, Μελέτη περίπτωσης Απρίλιος 2020 ...
Κορμπύλα, Βασιλική Κωνσταντίνου. - : Aristotle University of Thessaloniki, 2022
BASE
3
Visual generics: How children understand generic language with different visualizations ...
Menendez, David. - : Open Science Framework, 2022
BASE
4
Attention-Language Interface in Multilingual Assessment Instrument for Narratives (MAIN) ...
Sekerina, Irina. - : Open Science Framework, 2022
BASE
5
Attention-Language Interface in Multilingual Assessment Instrument for Narratives (MAIN) ...
Sekerina, Irina. - : Open Science Framework, 2022
BASE
6
Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models
In: Applied Sciences; Volume 12; Issue 9; Pages: 4522 (2022)
BASE
7
The Semantics of Natural Objects and Tools in the Brain: A Combined Behavioral and MEG Study
In: Brain Sciences; Volume 12; Issue 1; Pages: 97 (2022)
BASE
8
Hebrew Transformed: Machine Translation of Hebrew Using the Transformer Architecture
Crater, David T. - 2022
BASE
9
Phonological Contrast and Conflict in Dutch Vowels: Neurobiological and Psycholinguistic Evidence from Children and Adults ...
Rue, N.P.W.D. De. - : Data Archiving and Networked Services (DANS), 2022
BASE
10
Linguistic Representations of Black Characters in Cuban Fiction of the New Millennium: A tale about continuity and subversion
In: Caribbean Quilt; Vol. 6 No. 1 (2021): Resiliency ; 97-110 ; 1929-235X ; 1925-5829 ; 10.33137/cq.v6i1 (2022)
BASE
11
The case of the Indian detective: Native American mystery novels
BASE
12
The polarity effect of evaluative language
BASE
13
How to train your self-supervised NLP model: Investigating pre-training objectives, data, and scale
Joshi, Mandar. - 2022
Abstract: Thesis (Ph.D.)--University of Washington, 2022 ; A robust language processing machine should be able to encode linguistic and factual knowledge across a wide variety of domains, languages, and even modalities. The paradigm of pre-training self-supervised models on large text corpora has driven much of the recent progress toward this goal. In spite of this large-scale pre-training, the best-performing models must still be fine-tuned on downstream tasks -- often containing hundreds of thousands of examples -- to achieve state-of-the-art performance. The aim of this thesis is twofold: (a) to design efficient, scalable pre-training methods which capture different kinds of linguistic and world knowledge, and (b) to enable better downstream performance with fewer human-labeled examples. The first part of the thesis focuses on self-supervised objectives for reasoning about relationships between pairs of words. In NLI, for example, given the premise "golf is prohibitively expensive", inferring that the hypothesis "golf is a cheap pastime" is a contradiction requires one to know that expensive and cheap are antonyms. We show that, with the right kind of self-supervised objectives, such knowledge can be learned with word pair vectors (pair2vec) directly from text, without using curated knowledge bases and ontologies. The second part of the thesis seeks to build models which encode knowledge beyond word pair relations into model parameters. We present SpanBERT, a scalable pre-training method that is designed to better represent and predict spans of text. Span-based pre-training objectives efficiently encode a wider variety of knowledge and improve the state of the art for a range of NLP tasks. The third part of the thesis focuses on integrating dynamically retrieved textual knowledge. Specifically, even large-scale representations cannot preserve all the factual knowledge they have "read" during pre-training, due to the long tail of entity- and event-specific information. We show that training models to integrate background knowledge during pre-training is especially useful for downstream tasks which require reasoning over this long tail. The last part of the thesis targets a major weakness of self-supervised models -- while such models require no explicit human supervision during pre-training, they still need large amounts of human-labeled downstream task data. We seek to remedy this by mining input-output pairs (and thus obtaining direct task-level supervision) from corpora, using supervision from very few labeled examples. Overall, this thesis presents a range of ideas required for effective pre-training and fine-tuning -- (a) self-supervised objectives, (b) model scale, and (c) new types of data. As we show in the following chapters, self-supervised objectives can have a large influence on the form of knowledge that is acquired during pre-training. Moreover, efficient objectives directly enable model scale, both in terms of data and parameters. Finally, the training data, and the kind of supervision derived from it, dictate how well a model can learn different kinds of downstream tasks.
Keyword: Computer science; Computer science and engineering; nlp; pretraining; representations; self supervised
URL: http://hdl.handle.net/1773/48474
BASE
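The abstract in entry 13 describes SpanBERT's span-based objective in prose: mask whole contiguous spans of text and train the model to predict the original tokens. The following is a minimal illustrative sketch of that masking scheme, assuming plain string tokens; the function names and parameters (mask_budget, p, max_len) are illustrative assumptions, not the thesis's actual code, and a real implementation would operate on subword ids inside a training pipeline.

    import random

    MASK = "[MASK]"

    def sample_span_length(p=0.2, max_len=10):
        """Sample a span length from a clipped geometric distribution:
        short spans are most likely, longer ones increasingly rare."""
        length = 1
        while random.random() > p and length < max_len:
            length += 1
        return length

    def mask_spans(tokens, mask_budget=0.15, p=0.2, max_len=10):
        """Mask whole contiguous spans until roughly mask_budget of the
        tokens are masked. Returns the corrupted sequence and a
        {position: original token} map the model is trained to recover."""
        tokens = list(tokens)
        budget = max(1, int(len(tokens) * mask_budget))
        targets = {}
        while len(targets) < budget:
            length = min(sample_span_length(p, max_len), budget - len(targets))
            start = random.randrange(len(tokens) - length + 1)
            for i in range(start, start + length):
                targets.setdefault(i, tokens[i])  # keep the original token once
                tokens[i] = MASK
        return tokens, targets

    if __name__ == "__main__":
        sentence = ("span based pre training objectives mask contiguous "
                    "spans of text and predict the original tokens").split()
        corrupted, targets = mask_spans(sentence)
        print(" ".join(corrupted))
        print(targets)

SpanBERT additionally predicts each masked token from the span's boundary tokens (its span-boundary objective), which this sketch omits for brevity.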
14
Behind language, social positionings: Making one's voice heard in a second language ; Derrière la langue, les positionnements sociaux. Pouvoir faire entendre sa voix en langue seconde
In: Nouvelle Revue Synergies Canada; No. 15 (2022): The notion of "voice" in sociolinguistics and the social sciences ; 2292-2261 (2022)
BASE
15
SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking
In: SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval ; https://hal.sorbonne-universite.fr/hal-03290774 ; SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Jul 2021, Virtual Event, Canada. pp.2288-2292, ⟨10.1145/3404835.3463098⟩ (2021)
BASE
16
Social Representations of e-Mental Health Among the Actors of the Health Care System: Free-Association Study
In: ISSN: 2368-7959 ; JMIR Mental Health ; https://hal.archives-ouvertes.fr/hal-03240149 ; JMIR Mental Health, JMIR Publications, 2021, 8 (5), pp.e25708. ⟨10.2196/25708⟩ ; https://mental.jmir.org/2021/5/e25708/ (2021)
BASE
17
Social representations of the undernourished child and health‐seeking behaviour in Nepal: From othering to different types of otherness
In: ISSN: 1052-9284 ; EISSN: 1099-1298 ; Journal of Community and Applied Social Psychology ; https://hal.univ-lyon2.fr/hal-03416642 ; Journal of Community and Applied Social Psychology, Wiley, In press, ⟨10.1002/casp.2576⟩ (2021)
BASE
18
From Parisian sojourn to French tour: travel writings of Soviet authors (1922-1991) ; Du voyage à Paris au tour de France : les récits de voyages des écrivains soviétiques (1922-1991)
Kharatyan, Tatevik. - : HAL CCSD, 2021
In: https://tel.archives-ouvertes.fr/tel-03537805 ; Linguistics. Normandie Université, 2021. French. ⟨NNT : 2021NORMC030⟩ (2021)
BASE
19
Dysphonia among school teachers: perception and representations ; La dysphonie chez les professeures des écoles : perception et représentations
Pettirossi, Amelia. - : HAL CCSD, 2021
In: https://tel.archives-ouvertes.fr/tel-03564756 ; Linguistics. Université de la Sorbonne nouvelle - Paris III, 2021. French. ⟨NNT : 2021PA030011⟩ (2021)
BASE
20
Dysphonia among school teachers: perception and representations ; La dysphonie chez les professeures des écoles : perception et représentations
Pettirossi, Amelia. - : HAL CCSD, 2021
In: https://tel.archives-ouvertes.fr/tel-03152574 ; Linguistics. Université Sorbonne Nouvelle, 2021. French (2021)
BASE


Hits by source: Open access documents 1,581 · Bibliographies 1 · Catalogues 0 · Linked Open Data catalogues 0 · Online resources 0