
Search in the Catalogues and Directories

Hits 1 – 20 of 129

1
Cross-Situational Learning Towards Robot Grounding
In: https://hal.archives-ouvertes.fr/hal-03628290 ; 2022 (2022)
BASE
2
Cross-Situational Learning Towards Robot Grounding
In: https://hal.archives-ouvertes.fr/hal-03628290 ; 2022 (2022)
Abstract: How do children acquire language through unsupervised or noisy supervision? How do their brains process language? We bring this perspective to machine learning and robotics, where part of the problem is understanding how language models can perform grounded language acquisition through noisy supervision and how they can account for brain learning dynamics. Most prior works have tracked the co-occurrence between single words and referents to model how infants learn word-referent mappings. This paper studies cross-situational learning (CSL) with full sentences: we want to understand the brain mechanisms that enable children to learn mappings between words and their meanings from full sentences in early language learning. We investigate the CSL task on a few training examples with two sequence-based models: (i) Echo State Networks (ESN) and (ii) Long Short-Term Memory networks (LSTM). Most importantly, we explore several word representations, including One-Hot, GloVe, pretrained BERT, and fine-tuned BERT representations (last-layer token representations), to perform the CSL task. We apply our approach to three diverse datasets (two grounded language datasets and a robotic dataset) and observe that (1) One-Hot, GloVe, and pretrained BERT representations are less efficient than representations obtained from fine-tuned BERT; (2) ESN online learning with final learning (FL) yields superior performance over ESN online continual learning (CL), offline learning, and LSTMs, indicating the greater biological plausibility of ESNs and their closer fit to the cognitive process of sentence reading; (3) LSTMs with fewer hidden units perform better on small datasets, but LSTMs with more hidden units are needed to perform reasonably well on larger corpora; (4) ESNs demonstrate better generalization than LSTM models for increasingly large vocabularies. Overall, these models are able to learn from scratch to link complex relations between words and their corresponding meaning concepts, handling polysemous and synonymous words. Moreover, we argue that such models can extend to help current human-robot interaction studies on language grounding and better understand children's developmental language acquisition. We make the code publicly available.
Keyword: [INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]; [INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]; [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG]; [INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE]; [INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO]; [SDV.NEU]Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC]; BERT; cross-situational learning; echo state networks; grounded language; LSTM
URL: https://hal.archives-ouvertes.fr/hal-03628290/document
https://hal.archives-ouvertes.fr/hal-03628290/file/Journal_of_Social_and_Robotics.pdf
https://hal.archives-ouvertes.fr/hal-03628290
BASE
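To make the approach described in hit 2 concrete, the sketch below illustrates one variant of the setup the abstract outlines: an Echo State Network driven by One-Hot word inputs, with an offline ridge-regression readout trained on the final reservoir state of each sentence to score the meaning concepts that sentence grounds. This is a minimal illustration only, not the authors' released code; the toy sentences, concept labels, and hyperparameters (reservoir size, spectral radius, leak rate, ridge penalty) are invented for this example.

```python
# Minimal cross-situational-learning sketch (assumption: not the paper's implementation).
# One-hot word inputs -> leaky echo state reservoir -> ridge readout over concepts.
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: each sentence is paired with the unordered set of concepts it describes.
sentences = [
    ("the red cup is on the left", {"red", "cup", "left"}),
    ("a blue ball lies on the right", {"blue", "ball", "right"}),
    ("the blue cup is on the right", {"blue", "cup", "right"}),
    ("a red ball lies on the left", {"red", "ball", "left"}),
]
vocab = sorted({w for s, _ in sentences for w in s.split()})
concepts = sorted({c for _, cs in sentences for c in cs})
w2i = {w: i for i, w in enumerate(vocab)}
c2i = {c: i for i, c in enumerate(concepts)}

# Illustrative hyperparameters, not tuned values from the paper.
n_in, n_res, n_out = len(vocab), 300, len(concepts)
spectral_radius, leak, ridge = 0.9, 0.3, 1e-6

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # rescale to the target spectral radius

def final_state(sentence):
    """Feed one-hot words through the leaky reservoir; return the state after the last word."""
    x = np.zeros(n_res)
    for w in sentence.split():
        u = np.zeros(n_in)
        u[w2i[w]] = 1.0
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
    return x

# Offline ridge readout fit on the final state of each sentence: the target is imposed
# only at sentence end, loosely corresponding to the "final learning" condition.
X = np.stack([final_state(s) for s, _ in sentences])          # (n_sentences, n_res)
Y = np.zeros((len(sentences), n_out))
for row, (_, cs) in enumerate(sentences):
    for c in cs:
        Y[row, c2i[c]] = 1.0
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y).T

# Score the concepts for a new sentence built from the same vocabulary.
test = "the red ball is on the right"
scores = W_out @ final_state(test)
print({c: round(float(s), 2) for c, s in zip(concepts, scores)})
```

In the other variants the abstract mentions, the One-Hot inputs would presumably be replaced by GloVe or (fine-tuned) BERT token vectors of matching dimensionality, and the offline readout by an online update rule applied either at every word (continual learning) or only at the end of each sentence (final learning).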
3
AI for mapping multi-lingual academic papers to the United Nations' Sustainable Development Goals (SDGs) ...
BASE
4
AI for mapping multi-lingual academic papers to the United Nations' Sustainable Development Goals (SDGs) ...
BASE
5
AI for mapping multi-lingual academic papers to the United Nations' Sustainable Development Goals (SDGs) ...
BASE
6
AI for mapping multi-lingual academic papers to the United Nations' Sustainable Development Goals (SDGs) ...
BASE
7
Reproducibility of the Experimental Result of BERT for Evidence Retrieval and Claim Verification ...
BASE
8
Reproducibility of the Experimental Result of BERT for Evidence Retrieval and Claim Verification ...
BASE
9
Lexicon-Based vs. Bert-Based Sentiment Analysis: A Comparative Study in Italian
In: Electronics; Volume 11; Issue 3; Pages: 374 (2022)
BASE
10
MIss RoBERTa WiLDe: Metaphor Identification Using Masked Language Model with Wiktionary Lexical Definitions
In: Applied Sciences; Volume 12; Issue 4; Pages: 2081 (2022)
BASE
11
Detection of Chinese Deceptive Reviews Based on Pre-Trained Language Model
In: Applied Sciences; Volume 12; Issue 7; Pages: 3338 (2022)
BASE
12
S-NER: A Concise and Efficient Span-Based Model for Named Entity Recognition
In: Sensors; Volume 22; Issue 8; Pages: 2852 (2022)
BASE
13
A Multitask Learning Framework for Abuse Detection and Emotion Classification
In: Algorithms; Volume 15; Issue 4; Pages: 116 (2022)
BASE
14
Visual and Phonological Feature Enhanced Siamese BERT for Chinese Spelling Error Correction
In: Applied Sciences; Volume 12; Issue 9; Pages: 4578 (2022)
BASE
15
An Empirical Comparison of Portuguese and Multilingual BERT Models for Auto-Classification of NCM Codes in International Trade
In: Big Data and Cognitive Computing; Volume 6; Issue 1; Pages: 8 (2022)
BASE
16
A Lite Romanian BERT: ALR-BERT
In: Computers; Volume 11; Issue 4; Pages: 57 (2022)
BASE
17
Performance Study on Extractive Text Summarization Using BERT Models
In: Information; Volume 13; Issue 2; Pages: 67 (2022)
BASE
18
Analyzing COVID-19 Medical Papers Using Artificial Intelligence: Insights for Researchers and Medical Professionals
In: Big Data and Cognitive Computing; Volume 6; Issue 1; Pages: 4 (2022)
BASE
19
Leveraging Part-of-Speech Tagging Features and a Novel Regularization Strategy for Chinese Medical Named Entity Recognition
In: Mathematics; Volume 10; Issue 9; Pages: 1386 (2022)
BASE
20
Realistic Image Generation from Text by Using BERT-Based Embedding
In: Electronics; Volume 11; Issue 5; Pages: 764 (2022)
BASE

