1 | DeepL et Google Translate face à l'ambiguïté phraséologique [DeepL and Google Translate confronted with phraseological ambiguity]
In: https://hal.archives-ouvertes.fr/hal-03583995 ; 2022 (2022)
BASE
3 | Disambiguation for the Classification of Lexical Items
In: https://hal.archives-ouvertes.fr/hal-03598242 ; France, Patent no. EP3937059A1. 2022 (2022)
4 | Automatic Speech Recognition and Query By Example for Creole Languages Documentation
In: Findings of the Association for Computational Linguistics: ACL 2022, May 2022, Dublin, Ireland ; https://hal.archives-ouvertes.fr/hal-03625303 (2022)
6 | Cross-Situational Learning Towards Robot Grounding
In: https://hal.archives-ouvertes.fr/hal-03628290 ; 2022 (2022)
Abstract:
How do children acquire language through unsupervised or noisy supervision? How do their brains process language? We take this perspective to machine learning and robotics, where part of the problem is understanding how language models can perform grounded language acquisition through noisy supervision, and discussing how they can account for brain learning dynamics. Most prior work has tracked the co-occurrence between single words and referents to model how infants learn word-referent mappings. This paper studies cross-situational learning (CSL) with full sentences: we want to understand the brain mechanisms that enable children to learn mappings between words and their meanings from full sentences in early language learning. We investigate the CSL task on a few training examples with two sequence-based models: (i) Echo State Networks (ESN) and (ii) Long Short-Term Memory networks (LSTM). Most importantly, we explore several word representations, including One-Hot, GloVe, pretrained BERT, and fine-tuned BERT representations (last-layer token representations), to perform the CSL task. We apply our approach to three diverse datasets (two grounded-language datasets and a robotic dataset) and observe that (1) One-Hot, GloVe, and pretrained BERT representations are less efficient than representations obtained from fine-tuned BERT; (2) ESN online learning with final learning (FL) yields superior performance over ESN online continual learning (CL), offline learning, and LSTMs, indicating the greater biological plausibility of ESNs with respect to the cognitive process of sentence reading; (3) LSTMs with fewer hidden units show higher performance on small datasets, but LSTMs with more hidden units are needed to perform reasonably well on larger corpora; (4) ESNs demonstrate better generalization than LSTM models for increasingly large vocabularies.
Overall, these models are able to learn from scratch to link complex relations between words and their corresponding meaning concepts, handling polysemous and synonymous words. Moreover, we argue that such models can extend to help current human-robot interaction studies on language grounding and better understand children's developmental language acquisition. We make the code publicly available.
Keywords:
[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]; [INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]; [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG]; [INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE]; [INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO]; [SDV.NEU]Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC]; BERT; cross-situational learning; echo state networks; grounded language; LSTM
URL: https://hal.archives-ouvertes.fr/hal-03628290/document https://hal.archives-ouvertes.fr/hal-03628290/file/Journal_of_Social_and_Robotics.pdf https://hal.archives-ouvertes.fr/hal-03628290
10 | Emotional Speech Recognition Method Based on Word Transcription
In: Sensors; Volume 22; Issue 5; Pages: 1937 (2022)
11 | A Combined System Metrics Approach to Cloud Service Reliability Using Artificial Intelligence
In: Big Data and Cognitive Computing; Volume 6; Issue 1; Pages: 26 (2022)
12 | Situational Awareness: Techniques, Challenges, and Prospects
In: AI; Volume 3; Issue 1; Pages: 55-77 (2022)
13 | End-to-end speaker segmentation for overlap-aware resegmentation
In: Interspeech 2021, Aug 2021, Brno, Czech Republic ; https://hal-univ-lemans.archives-ouvertes.fr/hal-03257524 ; https://www.interspeech2021.org/ (2021)
14 | Innovative Vineyards Environmental Monitoring System Using Deep Edge AI
In: Artificial Intelligence for Digitising Industry Applications, River Publishers, pp.261-278, 2021, 9788770226646 ; https://hal.univ-reims.fr/hal-03355270 ; https://www.riverpublishers.com/research_details.php?book_id=967 (2021)
15 | High-resolution speaker counting in reverberant rooms using CRNN with Ambisonics features
In: EUSIPCO 2020 - 28th European Signal Processing Conference (EUSIPCO), Jan 2021, Amsterdam, Netherlands. pp.71-75, ⟨10.23919/Eusipco47968.2020.9287637⟩ ; https://hal.archives-ouvertes.fr/hal-03537323 (2021)
16 | Tackling Morphological Analogies Using Deep Learning -- Extended Version
In: https://hal.inria.fr/hal-03425776 ; 2021 (2021)
17 | Sentiment Analysis of Arabic Documents
In: Fatih Pinarbasi; M. Nurdan Taskiran (eds.), Natural Language Processing for Global and Local Business, pp.307-331, 2021, 9781799842408. ⟨10.4018/978-1-7998-4240-8.ch013⟩ ; https://hal.archives-ouvertes.fr/hal-03124729 ; https://www.igi-global.com/ (2021)
18 | Recognizing lexical units in low-resource language contexts with supervised and unsupervised neural networks
In: [Research Report] LACITO (UMR 7107). 2021 ; https://hal.archives-ouvertes.fr/hal-03429051 (2021)
19 | What does the Canary Say? Low-Dimensional GAN Applied to Birdsong
In: https://hal.inria.fr/hal-03244723 ; 2021 (2021)