1. Hippocampal ensembles represent sequential relationships among an extended sequence of nonspatial events.
In: Nature Communications, vol. 13, iss. 1 (2022)
BASE
2. Are multi-modal lexical representations forgotten in an all-or-none manner? ...
3. Canary Song Decoder: Transduction and Implicit Segmentation with ESNs and LTSMs
In: https://hal.inria.fr/hal-03203374 (2021)
4. Learning emotions latent representation with CVAE for Text-Driven Expressive AudioVisual Speech Synthesis
In: Neural Networks (ISSN 0893-6080), Elsevier, 2021, vol. 141, pp. 315-329. ⟨10.1016/j.neunet.2021.04.021⟩. https://hal.inria.fr/hal-03204193
5. Canary Song Decoder: Transduction and Implicit Segmentation with ESNs and LTSMs
In: ICANN 2021 - 30th International Conference on Artificial Neural Networks, Sep 2021, Bratislava, Slovakia, pp. 71-82. ⟨10.1007/978-3-030-86383-8_6⟩. https://hal.inria.fr/hal-03203374 ; https://link.springer.com/chapter/10.1007/978-3-030-86383-8_6
6. Files to support: "Prior Experience with Unlabeled Actions Promotes 3-Year-Old Children’s Verb Learning" ...
7. Explanatory Item Response Analysis of the CERAD List Learning Test ...
12. The Role of Working Memory in Statistical Word Learning ...
Li, Ye. Open Science Framework, 2021
13. Motoric and Language Systems Associated with Note-taking: Going Beyond Handwriting Speed ...
14. Do adults with dyslexia have syntactic processing difficulties that affect their word learning through reading as they read syntactically complex passages? ...
15. Are multi-modal lexical representations retrieved in an all-or-none manner? ...
17. Complex vocal learning and three-dimensional mating environments
18. Danger changes the way the brain processes innocuous information: a study of sensory preconditioning in rats
19. Generating Effective Sentence Representations: Deep Learning and Reinforcement Learning Approaches
In: Electronic Thesis and Dissertation Repository (2021)
Abstract:
Natural language processing (NLP) is one of the most important technologies of the information age, and understanding complex language utterances is a crucial part of artificial intelligence. Many natural language applications are powered by machine learning models performing a large variety of underlying tasks. Recently, deep learning approaches have achieved very high performance across many NLP tasks. To reach this level of performance, it is crucial for computers to have an appropriate representation of sentences. The tasks addressed in the thesis are best approached with shallow semantic representations: vectors embedded in a semantic space. We present a variety of novel deep learning approaches to NLP for generating effective sentence representations in this space. These semantic representations can be either general or task-specific. We focus on learning task-specific sentence representations, where the tasks often overlap considerably. We design a set of general-purpose and task-specific sentence encoders combining word-level semantic knowledge with word- and sentence-level syntactic information. For the former, we perform an intelligent amalgamation of word vectors using modern deep learning modules. For the latter, we use word-level knowledge, such as parts of speech, spelling, and suffix features, together with sentence-level information drawn from natural language parse trees, which provide the hierarchical structure of a sentence along with the grammatical relations between its words. Further expertise is added with reinforcement learning, which guides a machine learning model through a reward-penalty game. Rather than striving only for good performance, we always try to design models that are transparent and explainable, and we provide an intuitive explanation of each model's design and how it makes its decisions.
Our extensive experiments show that these models achieve competitive performance compared with the currently available state-of-the-art generalized and task-specific sentence encoders. All but one of the tasks dealt with English-language texts; the multilingual semantic similarity task required creating a multilingual corpus, for which we provide a novel semi-supervised approach to generating artificial negative samples when only positive samples are available.
Keywords:
Artificial Intelligence and Robotics; Attention; Computer Sciences; Deep Learning; Long Short Term Memory; Reinforcement Learning; Sentence Representation; Transformer
URL: https://ir.lib.uwo.ca/etd/7780 https://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=10295&context=etd
20. Kesan emosi terhadap penerimaan bantuan dan hubungannya dengan kesejahteraan hidup dalam kalangan asnaf: satu kajian di Sabah [The effect of emotions on receiving aid and its relationship with life well-being among asnaf: a study in Sabah]