
Search in the Catalogues and Directories

Hits 1 – 20 of 79

1. A Quantitative and Qualitative Analysis of Schizophrenia Language ... (BASE)
2. Towards Responsible Natural Language Annotation for the Varieties of Arabic ... Bergman, A. Stevie; Diab, Mona T. arXiv, 2022. (BASE)
3. Gender Bias Amplification During Speed-Quality Optimization in Neural Machine Translation ... (BASE)
4. Few-shot Learning with Multilingual Language Models ... (BASE)
5. AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization ... (BASE)
6. Green NLP panel ... (BASE)
7. Detecting Hallucinated Content in Conditional Neural Sequence Generation ... (BASE)
8. Gender bias amplification during Speed-Quality optimization in Neural Machine Translation ... (BASE)
9. Adapting High-resource NMT Models to Translate Low-resource Related Languages without Parallel Data ... (BASE)
10. Discrete Cosine Transform as Universal Sentence Encoder ... Almarwani, Nada; Diab, Mona. arXiv, 2021. (BASE)
11. Discrete Cosine Transform as Universal Sentence Encoder ... (BASE)
12. Detecting Urgency Status of Crisis Tweets: A Transfer Learning Approach for Low Resource Languages ... (BASE)
13. DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking ... (BASE)
14. Multitask Learning for Cross-Lingual Transfer of Semantic Dependencies ... (BASE)
Abstract: We describe a method for developing broad-coverage semantic dependency parsers for languages for which no semantically annotated resource is available. We leverage a multitask learning framework coupled with an annotation projection method. We transfer supervised semantic dependency parse annotations from a rich-resource language to a low-resource language through parallel data, and train a semantic parser on projected data. We make use of supervised syntactic parsing as an auxiliary task in a multitask learning framework, and show that with different multitask learning settings, we consistently improve over the single-task baseline. In the setting in which English is the source, and Czech is the target language, our best multitask model improves the labeled F1 score over the single-task baseline by 1.8 in the in-domain SemEval data (Oepen et al., 2015), as well as 2.5 in the out-of-domain test set. Moreover, we observe that syntactic and semantic dependency direction match is an important factor in ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2004.14961
DOI: https://dx.doi.org/10.48550/arxiv.2004.14961
(A minimal code sketch of this multitask setup appears after the result list.)
15. Overview for the Second Shared Task on Language Identification in Code-Switched Data ... (BASE)
16. WASA: A Web Application for Sequence Annotation ... AlGhamdi, Fahad; Diab, Mona. arXiv, 2019. (BASE)
17. Creating a Large Multi-Layered Representational Repository of Linguistic Code Switched Arabic Data ... (BASE)
18. Identifying Nuances in Fake News vs. Satire: Using Semantic and Linguistic Cues ... Levi, Or; Hosseini, Pedram; Diab, Mona. arXiv, 2019. (BASE)
19. Part of speech tagging for code switched data ... (BASE)
20. Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task ... (BASE)
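The abstract of hit 14 describes projecting semantic dependency annotations through parallel data and training the semantic parser jointly with supervised syntactic parsing as an auxiliary task. The code below is a minimal, hypothetical sketch of such a multitask setup, not the authors' implementation: the PyTorch model, the bilinear arc scorers, the one-head-per-token simplification of the semantic graph, and the 0.5 auxiliary loss weight are all assumptions made for illustration.

import torch
import torch.nn as nn

class MultitaskDependencyParser(nn.Module):
    """Toy model: a shared encoder feeding one arc scorer per task."""

    def __init__(self, vocab_size=10000, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared sentence encoder used by both tasks.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Task-specific bilinear scorers over (dependent, head) pairs.
        self.semantic_arc = nn.Bilinear(2 * hidden_dim, 2 * hidden_dim, 1)
        self.syntactic_arc = nn.Bilinear(2 * hidden_dim, 2 * hidden_dim, 1)

    def _arc_scores(self, scorer, states):
        # states: (batch, seq_len, 2 * hidden). Score every head candidate
        # for every token.
        b, n, d = states.shape
        dep = states.unsqueeze(2).expand(b, n, n, d).reshape(-1, d)   # token i at [b, i, j]
        head = states.unsqueeze(1).expand(b, n, n, d).reshape(-1, d)  # token j at [b, i, j]
        # scores[b, i, j] = score that token j is the head of token i.
        return scorer(dep, head).view(b, n, n)

    def forward(self, tokens):
        states, _ = self.encoder(self.embed(tokens))
        return (self._arc_scores(self.semantic_arc, states),
                self._arc_scores(self.syntactic_arc, states))

# Toy training step: projected semantic heads are the main signal, supervised
# syntactic heads the auxiliary one. The tensors below are random stand-ins
# for real data, and each token gets exactly one head (a simplification of
# semantic dependency graphs, which allow multiple heads per token).
model = MultitaskDependencyParser()
tokens = torch.randint(0, 10000, (2, 7))   # batch of 2 sentences, 7 tokens each
sem_heads = torch.randint(0, 7, (2, 7))    # projected semantic head indices
syn_heads = torch.randint(0, 7, (2, 7))    # supervised syntactic head indices
sem_scores, syn_scores = model(tokens)
criterion = nn.CrossEntropyLoss()
loss = (criterion(sem_scores.reshape(-1, 7), sem_heads.reshape(-1))
        + 0.5 * criterion(syn_scores.reshape(-1, 7), syn_heads.reshape(-1)))
loss.backward()

Sharing the encoder between the two heads is what lets the supervised syntactic signal regularize the noisier projected semantic signal, which is the intuition behind the multitask gains reported in the abstract.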


Hits by source type: Catalogues 2, Bibliographies 3, Linked Open Data catalogues 0, Online resources 0, Open access documents 75.