1 | One model for the learning of language

In: Proceedings of the National Academy of Sciences of the United States of America, vol. 119, iss. 5 (2022)

BASE
3 | Developing an Automatic System for Classifying Chatter About Health Services on Twitter: Case Study for Medicaid

In: J Med Internet Res (2021)

BASE
4 | Self-reported COVID-19 symptoms on Twitter: an analysis and a research resource

In: J Am Med Inform Assoc (2020)

BASE
5 | A Light-Weight Text Summarization System for Fast Access to Medical Evidence

In: Front Digit Health (2020)

BASE
6 | Entwicklung interkultureller Handlungskompetenz: ein didaktisches Konzept für den Wirtschaftsdeutschunterricht in China am Beispiel des Einsatzes von Lernvideos [Developing Intercultural Action Competence: A Didactic Concept for Business German Instruction in China, Using Learning Videos as an Example]

Yang, Yuan [author]. München: Iudicium, 2019

DNB Subject Category Language
8 | Punctuation and Parallel Corpus Based Word Embedding Model for Low-Resource Languages

In: Information, vol. 11, iss. 1 (2019)

Abstract:
To overcome data sparseness in word embeddings trained on low-resource languages, we propose a punctuation and parallel corpus based word embedding model. In particular, we generate the global word-pair co-occurrence matrix with a punctuation-based distance attenuation function, and integrate it with the intermediate word vectors generated from a small-scale bilingual parallel corpus to train the word embeddings. Experimental results show that, compared with widely used baseline models such as GloVe and Word2vec, our model significantly improves word embedding quality for low-resource languages. Trained on a restricted-scale English-Chinese corpus, our model improves by 0.71 percentage points on the word analogy task and achieves the best results on all of the word similarity tasks.

Keyword:
distance attenuation function; GloVe; word alignment probability; word embedding; Word2vec

URL: https://doi.org/10.3390/info11010024

BASE
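The punctuation-based distance attenuation that this abstract describes can be sketched roughly as follows. This is a minimal illustration only: the exponential form exp(-decay · n_punct), the `decay` parameter, and the fixed co-occurrence window are assumptions for the sketch, not details given by the paper.

```python
from collections import defaultdict
import math

PUNCT = {",", ".", ";", ":", "!", "?"}

def weighted_cooccurrence(tokens, window=5, decay=0.5):
    """Build a word-pair co-occurrence table in which each pair's count is
    attenuated by the number of punctuation marks lying between the two words.
    The exponential decay exp(-decay * n_punct) is an illustrative assumption;
    the paper's exact attenuation function may differ."""
    counts = defaultdict(float)
    # keep word tokens with their original positions, skipping punctuation
    words = [(i, t) for i, t in enumerate(tokens) if t not in PUNCT]
    for a in range(len(words)):
        for b in range(a + 1, min(a + 1 + window, len(words))):
            i, w1 = words[a]
            j, w2 = words[b]
            # punctuation marks between the two word positions
            n_punct = sum(1 for t in tokens[i + 1 : j] if t in PUNCT)
            weight = math.exp(-decay * n_punct)
            counts[(w1, w2)] += weight
            counts[(w2, w1)] += weight
    return counts

tokens = "the cat sat , the dog ran .".split()
cooc = weighted_cooccurrence(tokens, window=2)
```

Pairs separated by a clause boundary (here, the comma) contribute less to the matrix than adjacent pairs within a clause, which is the intuition the abstract appeals to before the matrix is combined with the bilingual parallel-corpus vectors.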