1. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts ...
Source: BASE
2. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
3. Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
5. Emergent Communication Pretraining for Few-Shot Machine Translation ...
6. Manual Clustering and Spatial Arrangement of Verbs for Multilingual Evaluation and Typology Analysis ...
7. SemEval-2020 Task 2: Predicting Multilingual and Cross-Lingual (Graded) Lexical Entailment ...
8. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...
9. Modeling Language Variation and Universals: A Survey on Typological Linguistics for Natural Language Processing
In: https://hal.archives-ouvertes.fr/hal-01856176 (2018)
11. A deep learning approach to bilingual lexicon induction in the biomedical domain.

Abstract:
BACKGROUND: Bilingual lexicon induction (BLI) is an important task in the biomedical domain as translation resources are usually available for general language usage, but are often lacking in domain-specific settings. In this article we consider BLI as a classification problem and train a neural network composed of a combination of recurrent long short-term memory and deep feed-forward networks in order to obtain word-level and character-level representations. RESULTS: The results show that the word-level and character-level representations each improve state-of-the-art results for BLI and biomedical translation mining. The best results are obtained by exploiting the synergy between these word-level and character-level representations in the classification model. We evaluate the models both quantitatively and qualitatively. CONCLUSIONS: Translation of domain-specific biomedical terminology benefits from the character-level representations compared to relying solely on word-level representations. It is beneficial to take a deep learning approach and learn character-level representations rather than relying on handcrafted representations that are typically used. Our combined model captures the semantics at the word level while also taking into account that specialized terminology often originates from a common root form (e.g., from Greek or Latin).
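The classification view described in this abstract can be illustrated with a minimal sketch (not the authors' code): word-level and character-level features of a candidate (source, target) pair are concatenated and scored by a feed-forward layer with a sigmoid output. The hashed character-trigram counts and random weights below are toy stand-ins for the learned LSTM representations and trained parameters.

```python
import math
import random

random.seed(0)

EMB_DIM = 32    # toy word-embedding size
CHAR_DIM = 16   # toy character-feature size

def char_ngram_features(word, n=3, dim=CHAR_DIM):
    """Hashed character-trigram counts: a crude stand-in for the
    learned character-level LSTM representation in the paper."""
    vec = [0.0] * dim
    padded = f"<{word}>"
    for i in range(len(padded) - n + 1):
        ngram = padded[i:i + n]
        vec[sum(ord(c) for c in ngram) % dim] += 1.0
    return vec

def pair_features(src, tgt, word_emb):
    """Concatenate word-level embeddings and character-level features
    of a candidate (source, target) translation pair."""
    return (word_emb[src] + word_emb[tgt]
            + char_ngram_features(src) + char_ngram_features(tgt))

# Toy word embeddings; in the paper these are learned representations.
vocab = ["cardiomyopathy", "cardiomyopathie", "fever", "fièvre"]
word_emb = {w: [random.gauss(0, 1) for _ in range(EMB_DIM)] for w in vocab}

# Single-layer feed-forward scorer: sigmoid(w . x + b), read as the
# probability that the pair is a translation.  Weights are random here;
# the paper trains the classifier on known translation pairs.
x = pair_features("cardiomyopathy", "cardiomyopathie", word_emb)
w = [random.gauss(0, 0.01) for _ in range(len(x))]
b = 0.0
score = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
print(f"feature dim = {len(x)}, score = {score:.3f}")
```

Note how cognate pairs such as "cardiomyopathy"/"cardiomyopathie" share character trigrams, which is exactly the signal the abstract credits for handling terminology with common Greek or Latin roots.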
Keywords: Data Mining; Deep Learning; Humans; Knowledge Bases; Multilingualism; Natural Language Processing; Semantics

URL: https://www.repository.cam.ac.uk/handle/1810/288980
DOI: https://doi.org/10.17863/CAM.36243
12. Bio-SimVerb and Bio-SimLex: wide-coverage evaluation sets of word similarity in biomedicine.