1 | Injecting Inductive Biases into Distributed Representations of Text ...
BASE
2 | Injecting Inductive Biases into Distributed Representations of Text
3 | Towards High-End Scalability on Bio-Inspired Computational Models
In: Computer Science: Faculty Publications and Other Works (2020)
4 | Exploring Explicit and Implicit Feature Spaces in Natural Language Processing Using Self-Enrichment and Vector Space Analysis
In: Electronic Thesis and Dissertation Repository (2020)
Abstract:
Machine Learning in Natural Language Processing (NLP) deals directly with distributed representations of words and sentences. Words are transformed into vectors of real values, called embeddings, and used as inputs to machine learning models. These architectures are then used to solve NLP tasks such as Sentiment Analysis and Natural Language Inference. While solving these tasks, many models produce word embeddings and sentence embeddings as outputs. We are interested in how we can transform and analyze these output embeddings, and modify our models, both to improve the task result and to give us an understanding of the spaces. To this end we introduce the notion of explicit features, the actual values of the embeddings, and implicit features, information encoded into the space of vectors by solving the task, and we hypothesize an idealized space in which implicit features directly create the explicit features by means of basic linear algebra and set theory. To test whether our output spaces resemble this ideal space, we vary the model and, motivated by Transformer architectures, introduce the notion of Self-Enriching layers. We also create idealized spaces and run task experiments to see whether the patterns of results can give us insight into the output spaces, and we run transfer learning experiments to see what kinds of information are being represented by our models. Finally, we run a direct analysis of the vectors of the word and sentence outputs for comparison.
Keywords: Artificial Intelligence and Robotics; distributed representations; natural language inference; natural language processing; sentence embeddings; sentiment analysis; word embeddings
URL: https://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=9954&context=etd
URL: https://ir.lib.uwo.ca/etd/7471
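The abstract's distinction between explicit features (the raw embedding values) and implicit features (information read off the geometry of the vector space) can be illustrated with a minimal sketch. The vectors and words below are toy values chosen for illustration, not embeddings or methods from the thesis itself:

```python
import numpy as np

# Toy word embeddings: the explicit features are simply the raw
# vector components. These numbers are illustrative only.
emb = {
    "good":  np.array([0.9, 0.1, 0.3]),
    "great": np.array([0.8, 0.2, 0.4]),
    "bad":   np.array([-0.7, 0.1, 0.2]),
}

def cosine(u, v):
    """Cosine similarity: an implicit feature recovered from the
    geometry of the space, not from any single coordinate."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Explicit feature: a raw coordinate of one embedding.
first_coord = emb["good"][0]

# Implicit feature: in a well-trained space, "good" sits closer
# to "great" than to "bad", even though no coordinate says so directly.
sim_great = cosine(emb["good"], emb["great"])
sim_bad = cosine(emb["good"], emb["bad"])
print(sim_great > sim_bad)  # True
```

Vector space analyses of the kind the thesis describes probe whether such implicit relations can be expressed through basic linear algebra over the explicit values.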
5 | neurophon/neurophon: Minor update to ensure future updates include Zenodo author/keyword metadata ...
6 | neurophon/neurophon: A Computational Theory for the Emergence of Grammatical Categories in Cortical Dynamics ...
8 | neurophon/neurophon: A Computational Theory for the Emergence of Grammatical Categories in Cortical Dynamics ...
9 | Neurocomputational cortical memory for spectro-temporal phonetic abstraction.
In: Computer Science: Faculty Publications and Other Works (2019)
10 | Datasets used to train and test the Cortical Spectro-Temporal Model (CSTM). ...
11 | Datasets used to train and test the Cortical Spectro-Temporal Model (CSTM). ...
12 | A Systematic Study of Knowledge Graph Analysis for Cross-language Plagiarism Detection
13 | Language variety identification using distributed representations of words and documents
14 | What Machines Understand about Personality Words after Reading the News
In: http://rave.ohiolink.edu/etdc/view?acc_num=wright1404902086 (2014)
15 | What Machines Understand about Personality Words after Reading the News
In: Browse all Theses and Dissertations (2014)
16 | Large-Scale Acquisition of Feature-Based Conceptual Representations from Textual Corpora
In: Proceedings of the Annual Meeting of the Cognitive Science Society, United States, 6 p. (2010); https://hal.archives-ouvertes.fr/hal-00507103
17 | Modelling the effects of semantic ambiguity in word recognition
In: Cognitive Science, 28(1), 89-104 (2004)
18 | Modelling the effects of semantic ambiguity in word recognition
In: http://csl.psychol.cam.ac.uk/publications/04_Rodd_CogSci.pdf (2003)
19 | Making sense of semantic ambiguity: Semantic competition in lexical access
In: Journal of Memory and Language, 46(2), 245-266 (2002)
20 | Toward a connectionist model of recursion in human linguistic performance
In: Cognitive Science, 23(2), 157-205 (1999)