
Search in the Catalogues and Directories

Hits 1 – 20 of 26

1
Morphological Processing of Low-Resource Languages: Where We Are and What's Next ...
BASE
2
Match the Script, Adapt if Multilingual: Analyzing the Effect of Multilingual Pretraining on Cross-lingual Transferability ...
BASE
3
Don't Rule Out Monolingual Speakers: A Method For Crowdsourcing Machine Translation Data ...
BASE
4
Findings of the LoResMT 2021 Shared Task on COVID and Sign Language for Low-resource Languages ...
BASE
5
How to Adapt Your Pretrained Multilingual Model to 1600 Languages ...
Ebrahimi, Abteen; Kann, Katharina. - : arXiv, 2021
BASE
6
Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas ...
Mager, Manuel; Oncevay, Arturo; Ebrahimi, Abteen. - : Association for Computational Linguistics, 2021
BASE
7
PROST: Physical Reasoning about Objects through Space and Time ...
BASE
8
Don't Rule Out Monolingual Speakers: A Method For Crowdsourcing Machine Translation Data ...
BASE
9
What Would a Teacher Do? Predicting Future Talk Moves ...
BASE
10
How to Adapt Your Pretrained Multilingual Model to 1600 Languages ...
BASE
11
AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages ...
BASE
12
CLiMP: A Benchmark for Chinese Language Model Evaluation ...
Abstract: Linguistically informed analyses of language models (LMs) contribute to the understanding and improvement of these models. Here, we introduce the corpus of Chinese linguistic minimal pairs (CLiMP), which can be used to investigate what knowledge Chinese LMs acquire. CLiMP consists of sets of 1,000 minimal pairs (MPs) for 16 syntactic contrasts in Mandarin, covering 9 major Mandarin linguistic phenomena. The MPs are semi-automatically generated, and human agreement with the labels in CLiMP is 95.8%. We evaluated 11 different LMs on CLiMP, covering n-grams, LSTMs, and Chinese BERT. We find that classifier-noun agreement and verb complement selection are the phenomena that models generally perform best at. However, models struggle the most with the ba construction, binding, and filler-gap dependencies. Overall, Chinese BERT achieves an 81.8% average accuracy, while the performances of LSTMs and 5-grams are only moderately above chance level. ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2101.11131
https://dx.doi.org/10.48550/arxiv.2101.11131
BASE
13
Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas
In: Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas. Edited by: Mager, Manuel; Oncevay, Arturo; Rios, Annette; Meza Ruiz, Ivan Vladimir; Palmer, Alexis; Neubig, Graham; Kann, Katharina. Online: Association for Computational Linguistics (2021)
BASE
14
Unsupervised Morphological Paradigm Completion ...
Jin, Huiming; Cai, Liwei; Peng, Yihui. - : arXiv, 2020
BASE
15
Learning to Learn Morphological Inflection for Resource-Poor Languages ...
BASE
16
Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge
In: Proceedings of the Society for Computation in Linguistics (2020)
BASE
17
Neural sequence-to-sequence models for low-resource morphology
Kann, Katharina [Author]; Schütze, Hinrich [Academic supervisor]. - München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2019
DNB Subject Category: Language
18
Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings
Yaghoobzadeh, Yadollah; Kann, Katharina; Hazen, Timothy. - : Ludwig-Maximilians-Universität München, 2019
BASE
19
Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings ...
Yaghoobzadeh, Yadollah; Kann, Katharina; Hazen, Timothy. - : Association for Computational Linguistics, 2019
BASE
20
Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge ...
Kann, Katharina. - : arXiv, 2019
BASE


Hits by source:
Catalogues: 1
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 25