2. Data for Training and Evaluating Metadata Extraction Models based on 15 Thousand Cyrillic Script Publications ...

3. Token-level Multilingual Epidemic Dataset for Event Extraction ...

7. HTLinker: A Head-to-Tail Linker for Nested Named Entity Recognition
   In: Symmetry, Volume 13, Issue 9 (2021)

|
8. A Deep Neural Network-Based Model for Named Entity Recognition for Hindi Language
   In: ETSU Faculty Works (2020)

   Abstract: The aim of this work is to develop an efficient named entity recognition (NER) model that in turn improves the performance of systems that use natural language processing (NLP). The performance of IoT-based devices such as Alexa and Cortana depends significantly on an efficient NLP model, and NER tools play an important role in increasing these devices' ability to comprehend natural language. In general, NER is a two-step process: first, proper nouns are identified in the text, and then they are classified into predefined categories of entities such as person, location, measure, organization and time. NER is often performed as a subtask of natural language processing and increases the accuracy of the overall NLP task. In this paper, we propose a deep neural network architecture for NER in the resource-scarce language Hindi, based on a convolutional neural network (CNN), a bidirectional long short-term memory (Bi-LSTM) network and a conditional random field (CRF). In the proposed approach, we first use the skip-gram word2vec model and the GloVe model to represent words as semantic vectors, which are then used in different deep neural network-based architectures. We represent the text with both character- and word-level embeddings, which capture information at a fine-grained level; owing to the character-level embeddings, the proposed model is robust to out-of-vocabulary words. Experimental results show that the combination of Bi-LSTM, CNN and CRF outperforms baseline methods such as a recurrent neural network, an LSTM and a Bi-LSTM used individually.
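   The abstract above describes a Bi-LSTM-CNN-CRF tagger. The inference step of the CRF layer is Viterbi decoding over per-token emission scores and tag-transition scores. The following is a minimal, dependency-free sketch of that step only; the tag names and scores are illustrative assumptions, and in the full model the emission scores would come from the Bi-LSTM outputs rather than being hand-set.

```python
def viterbi_decode(emissions, transitions, tags):
    """Return the highest-scoring tag sequence for one sentence.

    emissions:   list of dicts, one per token, mapping tag -> emission score
    transitions: dict mapping (prev_tag, tag) -> transition score
    tags:        list of possible tag names
    """
    # score[t] = best score of any path ending in tag t at the current token
    score = {t: emissions[0][t] for t in tags}
    back = []  # back[i][t] = best previous tag for tag t at token i+1
    for em in emissions[1:]:
        new_score, pointers = {}, {}
        for t in tags:
            prev = max(tags, key=lambda p: score[p] + transitions[(p, t)])
            new_score[t] = score[prev] + transitions[(prev, t)] + em[t]
            pointers[t] = prev
        score, back = new_score, back + [pointers]
    # Trace the best path backwards from the highest-scoring final tag
    best = max(tags, key=score.get)
    path = [best]
    for pointers in reversed(back):
        best = pointers[best]
        path.append(best)
    return list(reversed(path))
```

   The transition scores are what let the CRF enforce label consistency (e.g. penalizing an I- tag that does not follow a matching B- tag), which is why the abstract reports the CRF on top of the Bi-LSTM outperforming the recurrent models used individually.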
   Keywords: Bi-LSTM; computing; convolutional neural network; deep learning; machine learning; neural networks; sequence labeling
   URL: https://doi.org/10.1007/s00521-020-04881-z ; https://dc.etsu.edu/etsu-works/9092

|
BASE
|
|
Hide details
|
|
9. NAT: Noise-Aware Training for Robust Neural Sequence Labeling
   In: Fraunhofer IAIS (2020)

|
10. Modeling a Label Global Context for Sequence Tagging in Recurrent Neural Networks
    In: Journée commune AFIA-ATALA sur le Traitement Automatique des Langues et l'Intelligence Artificielle, 11th Plate-Forme Intelligence Artificielle (PFIA 2018), Jul 2018, Nancy, France ; https://hal.archives-ouvertes.fr/hal-02002111 ; https://pfia2018.loria.fr/journee-tal/ (2018)

|
|
11. A Simple and Effective biLSTM Approach to Aspect-Based Sentiment Analysis in Social Media Customer Feedback
    In: Clematide, Simon (2018). In: Barbaresi, Adrien; Biber, Hanno; Neubarth, Friedrich; Osswald, Rainer (eds.). 14th Conference on Natural Language Processing (KONVENS 2018). Vienna: Verlag der Österreichischen Akademie der Wissenschaften, 29-33.

|
|
12. Semi-Markov Models for Sequence Segmentation
    In: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007) ; http://www.aclweb.org/anthology-new/D/D07/D07-1.pdf (2015)

|
|
13. Elephant: Sequence Labeling for Word and Sentence Segmentation
    In: EMNLP 2013, Oct 2013, Seattle, United States ; https://hal.archives-ouvertes.fr/hal-01344500 (2013)

|
|
14. Unsupervised Large-Vocabulary Word Sense Disambiguation with Graph-based Algorithms for Sequence Data Labeling
    In: Joint Conference on Human Language Technology / Empirical Methods in Natural Language Processing (HLT/EMNLP), 2005, Vancouver, British Columbia, Canada (2005)

|
|
15. Feature-Rich Information Extraction for the Technical Trend-Map Creation
    In: http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings8/NTCIR/04-NTCIR8-PATMN-NishiyamaR.pdf