VISITRON: Visual Semantics-Aligned Interactively Trained Object-Navigator
Abstract:
Interactive robots navigating photo-realistic environments must be trained to effectively leverage and handle the dynamic nature of dialogue, in addition to the challenges underlying vision-and-language navigation (VLN). In this paper, we present VISITRON, a multi-modal Transformer-based navigator better suited to the interactive regime inherent to Cooperative Vision-and-Dialog Navigation (CVDN). VISITRON is trained to: i) identify and associate object-level concepts and semantics between the environment and the dialogue history, and ii) identify when to interact vs. navigate, via imitation learning of a binary classification head. We perform extensive pre-training and fine-tuning ablations with VISITRON to gain empirical insights and improve performance on CVDN. VISITRON's ability to identify when to interact leads to a natural generalization of the game-play mode introduced by Roman et al. (arXiv:2005.00728) for enabling the use of such models in different environments. VISITRON is competitive with models on the ...

Comment: Accepted at Findings of the Annual Meeting of the Association for Computational Linguistics (ACL) 2022; a previous version was accepted at the Visually Grounded Interaction and Language (ViGIL) Workshop at NAACL 2021.
Keywords:
Artificial Intelligence cs.AI; Computation and Language cs.CL; Computer Vision and Pattern Recognition cs.CV; FOS Computer and information sciences; I.2.9; Machine Learning cs.LG; Robotics cs.RO
URL: https://dx.doi.org/10.48550/arxiv.2105.11589 https://arxiv.org/abs/2105.11589
Source: BASE (Bielefeld Academic Search Engine)