1 | From Biological Synapses to “Intelligent” Robots
In: Electronics, MDPI, 2022, 11(5), 707. ISSN 2079-9292. ⟨10.3390/electronics11050707⟩. https://hal.archives-ouvertes.fr/hal-03590998

2 | Cross-Situational Learning Towards Robot Grounding
In: https://hal.archives-ouvertes.fr/hal-03628290 (2022)

Abstract:
How do children acquire language through unsupervised or noisy supervision? How do their brains process language? We bring this perspective to machine learning and robotics, where part of the problem is understanding how language models can perform grounded language acquisition through noisy supervision, and how they can account for brain learning dynamics. Most prior work has tracked the co-occurrence between single words and referents to model how infants learn word–referent mappings. This paper studies cross-situational learning (CSL) with full sentences: we want to understand the brain mechanisms that enable children to learn mappings between words and their meanings from full sentences in early language learning. We investigate the CSL task on a few training examples with two sequence-based models: (i) Echo State Networks (ESN) and (ii) Long Short-Term Memory networks (LSTM). Most importantly, we explore several word representations, including One-Hot, GloVe, pretrained BERT, and fine-tuned BERT representations (last-layer token representations), to perform the CSL task. We apply our approach to three diverse datasets (two grounded-language datasets and a robotic dataset) and observe that (1) One-Hot, GloVe, and pretrained BERT representations are less efficient than representations obtained from fine-tuned BERT; (2) ESN online with final learning (FL) yields superior performance over ESN online continual learning (CL), offline learning, and LSTMs, suggesting that ESNs are more biologically plausible and better reflect the cognitive process of sentence reading; (3) LSTMs with fewer hidden units achieve higher performance on small datasets, but LSTMs with more hidden units are needed to perform reasonably well on larger corpora; (4) ESNs demonstrate better generalization than LSTM models for increasingly large vocabularies. Overall, these models are able to learn from scratch to link complex relations between words and their corresponding meaning concepts, handling polysemous and synonymous words. Moreover, we argue that such models can extend to help current human–robot interaction studies on language grounding and better understand children's developmental language acquisition. We make the code publicly available*.

Keywords:
[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]; [INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]; [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG]; [INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE]; [INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO]; [SDV.NEU]Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC]; BERT; cross-situational learning; echo state networks; grounded language; LSTM

URL: https://hal.archives-ouvertes.fr/hal-03628290 ; https://hal.archives-ouvertes.fr/hal-03628290v2/file/Journal_of_Social_and_Robotics.pdf ; https://hal.archives-ouvertes.fr/hal-03628290v2/document
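
The record above describes mapping full sentences to meaning concepts with an echo state network whose linear readout is trained only at the end of each sentence ("final learning"). The sketch below is a minimal, self-contained numpy illustration of that setup; the dimensions, hyperparameters, and toy data are assumptions made for illustration, not the authors' implementation (their code is linked from the HAL record).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's settings):
# word-embedding size, reservoir units, meaning concepts.
n_in, n_res, n_out = 50, 300, 8

# Fixed random weights: the reservoir itself is never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

alpha = 0.3  # leak rate of the leaky-integrator units

def final_state(sentence):
    """Run one sentence (seq_len x n_in) through the reservoir; return the last state."""
    x = np.zeros(n_res)
    for u in sentence:
        x = (1 - alpha) * x + alpha * np.tanh(W @ x + W_in @ u)
    return x

# Toy data: random "sentences" of word embeddings with random concept labels.
sentences = [rng.normal(size=(rng.integers(4, 9), n_in)) for _ in range(20)]
labels = rng.integers(0, n_out, size=20)

# "Final learning": read the output once per sentence, at the last word,
# and fit only the linear readout with ridge regression.
X = np.stack([final_state(s) for s in sentences])   # (20, n_res)
Y = np.eye(n_out)[labels]                           # one-hot concept targets
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_res), X.T @ Y)

pred = (X @ W_out).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```

Because only W_out is fitted, training reduces to a single linear solve over the collected final states, which is what makes ESNs cheap to train compared with the LSTM baselines mentioned in the abstract.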

4 | Finding the best way to put media bias research into practice via an annotation app ...

5 | Are neural language models sensitive to false belief? A computational study. ...

6 | Structured, flexible, and robust: comparing linguistic plans and explanations generated by humans and large language models ...

9 | Can distributional semantics explain performance on the false belief task? ...

10 | Learning Bidirectional Translation between Descriptions and Actions with Small Paired Data ...

11 | SGL: Symbolic Goal Learning in a Hybrid, Modular Framework for Human Instruction Following ...

12 | Self-supervised 3D Semantic Representation Learning for Vision-and-Language Navigation ...

13 | Interactive Robotic Grasping with Attribute-Guided Disambiguation ...

14 | The Enforcers: Consistent Sparse-Discrete Methods for Constraining Informative Emergent Communication ...

16 | The Construction of the Robot in Language and Culture, “Intercultural Robotics” and the “Third Robot Culture” ...
Cheng, Lin. Technology and Language, 3(1), 1-8, 2022

17 | The influence of animacy on perspective-taking and word order during language production ...

18 | Unsupervised Multimodal Word Discovery based on Double Articulation Analysis with Co-occurrence cues ...

19 | Machine infelicity in a poignant visitor setting: Comparing human and AI’s ability to analyze discourse
In: Research outputs 2014 to 2021 (2022)

20 | Integrating Blockchains and Intelligent Agents in the Pursuit of Artificial General Intelligence
In: Senior Honors Theses (2022)