
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1
A novel image-based approach for interactive characterization of rock fracture spacing in a tunnel face
Chen, J; Chen, Y; Cohn, AG. - : Elsevier, 2022
2
Categorisation, Typicality & Object-Specific Features in Spatial Referring Expressions
Richard-Bollans, A; Gómez Álvarez, L; Cohn, AG. - : Association for Computational Linguistics, 2020
3
Modelling the Polysemy of Spatial Prepositions in Referring Expressions
Richard-Bollans, A; Gómez Álvarez, L; Cohn, AG. - : IJCAI Organization, 2020
4
Investigating the Dimensions of Spatial Language
5
The Role of Pragmatics in Solving the Winograd Schema Challenge
Richard-Bollans, AL; Gómez Álvarez, L; Cohn, AG. - : CEUR Workshop Proceedings, 2018
6
Learning of Object Properties, Spatial Relations, and Actions for Embodied Agents from Language and Vision
Alomari, M; Duckworth, P; Hogg, DC. - : AAAI Press, 2017
7
Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands
Alomari, M; Duckworth, P; Hawasly, M; Hogg, DC; Cohn, AG. - : The Association for Computational Linguistics, 2017
Abstract: We present a cognitively plausible system capable of acquiring knowledge in language and vision from pairs of short video clips and linguistic descriptions. The aim of this work is to teach a robot manipulator how to execute natural language commands by demonstration. This is achieved by first learning a set of visual 'concepts' that abstract the visual feature spaces into concepts that have human-level meaning. Second, learning the mapping/grounding between words and the extracted visual concepts. Third, inducing grammar rules via a semantic representation known as Robot Control Language (RCL). We evaluate our approach against state-of-the-art supervised and unsupervised grounding and grammar induction systems, and show that a robot can learn to execute never-before-seen commands from pairs of unlabelled linguistic and visual inputs.
URL: http://aclweb.org/anthology/W/W17/W17-28.pdf
http://eprints.whiterose.ac.uk/119757/
8
Natural Language Acquisition and Grounding for Embodied Robotic Systems
Duckworth, P; Al-Omari, M; Hogg, DC. - : Association for the Advancement of Artificial Intelligence, 2017
9
Grounding language in perception for scene conceptualization in autonomous robots
Dubba, KSR; De Oliveira, MR; Lim, GH. - : AI Access Foundation, 2014
10
Interactive semantic feedback for intuitive ontology authoring
Denaux, R; Thakker, DA; Dimitrova, V. - : IOS Press, 2012
11
From Video to RCC8: Exploiting a Distance Based Semantics to Stabilise the Interpretation of Mereotopological Relations
Sridhar, M; Cohn, AG; Hogg, DC. - : Springer, 2011
12
The automated evaluation of inferred word classifications
Hughes, J; Atwell, E. - : John Wiley & Sons, 1994
13
Online perceptual learning and natural language acquisition for autonomous robots
Alomari, M; Li, F; Hogg, DC. - : Elsevier, 2022

Sources: Open access documents: 13 hits; no hits in catalogues, bibliographies, Linked Open Data catalogues, or other online resources.
© 2013 - 2024 Lin|gu|is|tik