
Search in the Catalogues and Directories

Page: 1 2 3
Hits 1 – 20 of 48

1. Trajectory Prediction with Linguistic Representations. Kuo, Yen-Ling; Huang, Xin; Barbu, Andrei. Center for Brains, Minds and Machines (CBMM), International Conference on Robotics and Automation (ICRA), 2022.
2. Universal Dependencies 2.9. Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021.
3. Universal Dependencies 2.8.1. Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021.
4. Universal Dependencies 2.8. Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021.
5. Compositional Networks Enable Systematic Generalization for Grounded Language Understanding. Kuo, Yen-Ling; Katz, Boris; Barbu, Andrei. Center for Brains, Minds and Machines (CBMM), Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
6. Trajectory Prediction with Linguistic Representations ...
7. Measuring Social Biases in Grounded Vision and Language Embeddings ... Barbu, Andrei; Katz, Boris. Underline Science Inc., NAACL 2021.
8. Compositional Networks Enable Systematic Generalization for Grounded Language Understanding ...
9. Assessing Language Proficiency from Eye Movements in Reading. In: Association for Computational Linguistics (2021).
10. Measuring Social Biases in Grounded Vision and Language Embeddings. Ross, Candace; Barbu, Andrei; Katz, Boris. Center for Brains, Minds and Machines (CBMM), Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL), 2021.
11. Universal Dependencies 2.7. Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2020.
12. Universal Dependencies 2.6. Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2020.
13. Learning a natural-language to LTL executable semantic parser for grounded robotics. Wang, Christopher; Ross, Candace; Kuo, Yen-Ling; Katz, Boris; Barbu, Andrei. Center for Brains, Minds and Machines (CBMM), Conference on Robot Learning (CoRL), 2020.
Abstract: Children acquire their native language with apparent ease by observing how language is used in context and attempting to use it themselves. They do so without laborious annotations, negative examples, or even direct corrections. We take a step toward robots that can do the same by training a grounded semantic parser, which discovers latent linguistic representations that can be used for the execution of natural-language commands. In particular, we focus on the difficult domain of commands with a temporal aspect, whose semantics we capture with Linear Temporal Logic, LTL. Our parser is trained with pairs of sentences and executions as well as an executor. At training time, the parser hypothesizes a meaning representation for the input as a formula in LTL. Three competing pressures allow the parser to discover meaning from language. First, any hypothesized meaning for a sentence must be permissive enough to reflect all the annotated execution trajectories. Second, the executor, a pretrained end-to-end LTL planner, must find that the observed trajectories are likely executions of the meaning. Finally, a generator, which reconstructs the original input, encourages the model to find representations that conserve knowledge about the command. Together these ensure that the meaning is neither too general nor too specific. Our model generalizes well, being able to parse and execute both machine-generated and human-generated commands, with near-equal accuracy, despite the fact that the human-generated sentences are much more varied and complex with an open lexicon. The approach presented here is not specific to LTL: it can be applied to any domain where sentence meanings can be hypothesized and an executor can verify these meanings, thus opening the door to many applications for robotic agents.
Funding: This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
Keyword: LTL; semantic parsing; weak supervision
URL: https://hdl.handle.net/1721.1/141340
14. Compositional Networks Enable Systematic Generalization for Grounded Language Understanding ...
15. Learning a natural-language to LTL executable semantic parser for grounded robotics ...
16. Measuring Social Biases in Grounded Vision and Language Embeddings ...
17. Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas. Kuo, Yen-Ling; Katz, Boris; Barbu, Andrei. Center for Brains, Minds and Machines (CBMM), The Ninth International Conference on Learning Representations (ICLR), 2020.
18. Universal Dependencies 2.5. Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2019.
19. Universal Dependencies 2.4. Nivre, Joakim; Abrams, Mitchell; Agić, Željko. Universal Dependencies Consortium, 2019.
20. Universal Dependencies 2.3. Nivre, Joakim; Abrams, Mitchell; Agić, Željko. Universal Dependencies Consortium, 2018.
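The abstract of hit 13 describes commands whose meanings are Linear Temporal Logic (LTL) formulas that an executor checks against observed trajectories. As a minimal sketch of what "checking an LTL meaning against a trajectory" involves, the following evaluator handles a few core temporal operators over a finite trace; the tuple-based formula syntax, proposition names, and trace encoding are illustrative assumptions, not the paper's actual implementation.

```python
def holds(formula, trace, i=0):
    """Evaluate an LTL formula over trace[i:], where each trace step is a
    set of propositions true at that step (finite-trace semantics)."""
    op = formula[0]
    if op == "prop":                       # atomic proposition
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "next":                       # X f: f holds at the next step
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "eventually":                 # F f: f holds at some later step
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "globally":                   # G f: f holds at every remaining step
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# Hypothetical meaning for "reach the goal and never touch a wall":
cmd = ("and", ("eventually", ("prop", "at_goal")),
              ("globally", ("not", ("prop", "at_wall"))))
trace = [{"start"}, set(), {"at_goal"}]
print(holds(cmd, trace))  # True
```

In the paper's setup the executor is a learned planner rather than a symbolic checker like this one, but the sketch shows why a hypothesized formula can be rejected as too specific (it fails on an annotated trajectory) or accepted as consistent with the observed executions.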


Hits by source type: Catalogues: 1; Bibliographies: 2; Linked Open Data catalogues: 0; Online resources: 0; Open access documents: 46.
© 2013 - 2024 Lin|gu|is|tik