
Search in the Catalogues and Directories

Hits 1 – 20 of 39

1. Learning to Generate Code Comments from Class Hierarchies ...
2. Zero-shot Task Adaptation using Natural Language ...
3. TellMeWhy: A Dataset for Answering Why-Questions in Narratives ...
4. Supervised attention from natural language feedback for reinforcement learning
5. Learning to Update Natural Language Comments Based on Code Changes ...
6. Continually improving grounded natural language understanding through human-robot dialog
Abstract: As robots become ubiquitous in homes and workplaces such as hospitals and factories, they must be able to communicate with humans. Several kinds of knowledge are required to understand and respond to a human's natural language commands and questions. If a person asks an assistant robot to "take me to Alice's office", the robot must know that Alice is a person who owns some unique office, and that "take me" means it should navigate there. Similarly, if a person requests "bring me the heavy, green mug", the robot must have accurate mental models of the physical concepts heavy, green, and mug. To avoid forcing humans to use key phrases or words robots already know, this thesis focuses on helping robots understand new language constructs through interactions with humans and with the world around them.
To understand a command in natural language, a robot must first convert that command to an internal representation that it can reason with. Semantic parsing performs this conversion, and the target representation is often a semantic form expressed as predicate logic with lambda calculus. Traditional semantic parsing relies on hand-crafted resources from a human expert: an ontology of concepts, a lexicon connecting language to those concepts, and training examples pairing language with abstract meanings. One thrust of this thesis is to perform semantic parsing with sparse initial data. We use conversations between a robot and human users to induce pairs of natural language utterances and the target semantic forms the robot discovers through its questions, reducing the annotation effort of creating training examples for parsing. We use this data to build more dialog-capable robots in new domains with much less expert human effort (Thomason et al., 2015; Padmakumar et al., 2017).
Meanings of many language concepts are bound to the physical world. Understanding object properties and categories such as heavy, green, and mug requires interacting with and perceiving the physical world. Embodied robots can use manipulation capabilities, such as pushing, picking up, and dropping objects, to gather sensory data about them. This data can be used to understand non-visual concepts like heavy and empty (e.g. "get the empty carton of milk from the fridge") and to assist with concepts that have both visual and non-visual expression (e.g. tall things look big and also exert force sooner than short things when pressed down on). A second thrust of this thesis focuses on strategies for learning these concepts from multi-modal sensory information. We use human-in-the-loop learning to get labels connecting concept words to actual objects in the environment (Thomason et al., 2016, 2017). We also explore ways to tease out polysemy and synonymy in concept words (Thomason and Mooney, 2017), such as light, which can refer to a weight or a color, the latter sense being synonymous with pale. Additionally, pushing, picking up, and dropping objects to gather sensory information is prohibitively time-consuming, so we investigate strategies for using linguistic information and human input to expedite exploration when learning a new concept (Thomason et al., 2018).
Finally, we build an integrated agent with both parsing and perception capabilities that learns from conversations with users to improve both components over time. We demonstrate that parser learning from conversations (Thomason et al., 2015) can be combined with multi-modal perception (Thomason et al., 2016), using predicate-object labels gathered through opportunistic active learning during those conversations (Thomason et al., 2017), to improve performance on understanding natural language commands from humans. Human users also qualitatively rate this integrated learning agent as more usable after it has improved through conversation-based learning.
Subject: Computer Sciences
Keyword: Human-robot dialog; Natural language processing
URL: https://doi.org/10.15781/T2902011J
http://hdl.handle.net/2152/68120
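
The abstract above describes semantic parsing of commands into lambda-calculus logical forms via a lexicon that maps phrases to concepts. As a rough illustration of what such a target representation looks like, here is a minimal, hypothetical sketch; the lexicon entries, logical-form syntax, and function names are invented for this example, not taken from the thesis:

# Minimal sketch (illustrative, not the thesis code): mapping a command to a
# lambda-calculus-style semantic form via a hand-written lexicon, the kind of
# resource the thesis aims to induce from dialog rather than hand-craft.
# LEXICON and parse_command are hypothetical names.

LEXICON = {
    "take me to": "lambda x. navigate(x)",                   # action predicate
    "alice's office": "the(y, office(y) & owns(alice, y))",  # definite reference
}

def parse_command(utterance: str) -> str:
    """Toy 'semantic parse': look up lexicon entries and combine them."""
    action, argument = None, None
    text = utterance.lower()
    for phrase, form in LEXICON.items():
        if text.startswith(phrase):
            action = form
            argument = LEXICON.get(text[len(phrase):].strip())
    if action is None or argument is None:
        raise ValueError("utterance not covered by the lexicon")
    # Crude stand-in for beta reduction: substitute the argument for the
    # lambda-bound variable by string replacement.
    return action.split(". ", 1)[1].replace("x", argument)

print(parse_command("Take me to Alice's office"))
# -> navigate(the(y, office(y) & owns(alice, y)))

A real parser generalizes beyond exact phrase matches, which is exactly why the thesis induces training pairs from dialog instead of enumerating a lexicon by hand.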
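The second thrust, grounding words like "heavy" and "green" in multi-modal perception, can likewise be illustrated with a toy example. The sketch below trains one nearest-centroid classifier per concept word from invented visual and haptic features with human yes/no labels; all data and names are hypothetical, and the thesis itself uses richer exploratory behaviors and classifiers:

# Minimal sketch (hypothetical data and names, not the thesis code): grounding
# concept words in multi-modal features. Each object is a feature vector
# [mean_hue, grip_force]; one nearest-centroid classifier is trained per
# concept word from human yes/no labels gathered in dialog.

import math

OBSERVATIONS = [  # (object features, human labels); values made up
    ([0.33, 0.2], {"green": True,  "heavy": False}),   # light green cup
    ([0.30, 0.9], {"green": True,  "heavy": True}),    # heavy green mug
    ([0.05, 0.8], {"green": False, "heavy": True}),    # heavy red kettle
    ([0.08, 0.1], {"green": False, "heavy": False}),   # light red ball
]

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(len(vectors[0]))]

def train_concept(word):
    """Return (positive centroid, negative centroid) for one concept word."""
    pos = [f for f, labels in OBSERVATIONS if labels[word]]
    neg = [f for f, labels in OBSERVATIONS if not labels[word]]
    return centroid(pos), centroid(neg)

def predicts(word, features, model):
    """True if the object is closer to the concept's positive examples."""
    pos_c, neg_c = model
    return math.dist(features, pos_c) < math.dist(features, neg_c)

models = {w: train_concept(w) for w in ("green", "heavy")}
new_object = [0.31, 0.85]  # hue suggests green, grip force suggests heavy
print({w: predicts(w, new_object, m) for w, m in models.items()})
# -> {'green': True, 'heavy': True}, so "the heavy, green mug" could match it

Per-word classifiers over shared multi-modal features also make the polysemy problem the abstract mentions concrete: "light" would need separate models for the weight sense and the color sense.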
7. Statistical Relational Learning and Script Induction for Textual Inference
Mooney, Raymond. - 2017
8. Identifying lexical relationships and entailments with distributional semantics
9. Advances in statistical script learning
Pichotta, Karl. - 2017
10. Natural-language video description with deep recurrent neural networks
11. Dialog for natural language to code
12. Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text ...
13. Using Sentence-Level LSTM Language Models for Script Inference ...
14. Natural language semantics using probabilistic logic
15. Representing Meaning with a Combination of Logical and Distributional Models ...
16. Inducing grammars from linguistic universals and realistic amounts of supervision
17. Training a Multilingual Sportscaster: Using Perceptual Context to Learn Language ...
18. Grounded language learning models for ambiguous supervision
19. Learning from natural instructions
Goldwasser, Dan. - 2012
20. Learning language from ambiguous perceptual context


Results by source type: Catalogues: 3; Bibliographies: 7; Linked Open Data catalogues: 0; Online resources: 0; Open access documents: 31