
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1
A novel image-based approach for interactive characterization of rock fracture spacing in a tunnel face
Chen, J; Chen, Y; Cohn, AG. - : Elsevier, 2022
2
Categorisation, Typicality & Object-Specific Features in Spatial Referring Expressions
Richard-Bollans, A; Gómez Álvarez, L; Cohn, AG. - : Association for Computational Linguistics, 2020
3
Modelling the Polysemy of Spatial Prepositions in Referring Expressions
Richard-Bollans, A; Gomez Alvarez, L; Cohn, AG. - : IJCAI Organization, 2020
4
Investigating the Dimensions of Spatial Language
5
The Role of Pragmatics in Solving the Winograd Schema Challenge
Richard-Bollans, AL; Gomez Alvarez, L; Cohn, AG. - : CEUR Workshop Proceedings, 2018
6
Learning of Object Properties, Spatial Relations, and Actions for Embodied Agents from Language and Vision
Alomari, M; Duckworth, P; Hogg, DC. - : AAAI Press, 2017
7
Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands
Alomari, M; Duckworth, P; Hawasly, M. - : The Association for Computational Linguistics, 2017
8
Natural Language Acquisition and Grounding for Embodied Robotic Systems
Duckworth, P; Al-Omari, M; Hogg, DC; Cohn, AG. - : Association for the Advancement of Artificial Intelligence, 2017
Abstract: We present a cognitively plausible novel framework capable of learning the grounding in visual semantics and the grammar of natural language commands given to a robot in a table top environment. The input to the system consists of video clips of a manually controlled robot arm, paired with natural language commands describing the action. No prior knowledge is assumed about the meaning of words, or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). The learning process automatically clusters the continuous perceptual spaces into concepts corresponding to linguistic input. A novel relational graph representation is used to build connections between language and vision. As well as the grounding of language to perception, the system also induces a set of probabilistic grammar rules. The knowledge learned is used to parse new commands involving previously unseen objects.
URL: https://eprints.whiterose.ac.uk/109107/
https://ojs.aaai.org/index.php/AAAI/article/view/11161
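The abstract of entry 8 describes clustering continuous perceptual spaces into concepts and linking each concept to the words it co-occurs with. A minimal illustrative sketch of that idea in Python follows; the toy data, the gap threshold, and the distinctiveness score are invented for illustration and are not the authors' implementation:

```python
from collections import Counter, defaultdict

# Toy perceptual observations: (hue value, words of the paired command).
# Values and commands are invented for illustration only.
observations = [
    (0.02, ["pick", "up", "the", "red", "block"]),
    (0.05, ["move", "the", "red", "cube"]),
    (0.60, ["push", "the", "blue", "block"]),
    (0.63, ["lift", "the", "blue", "cube"]),
]

def cluster_1d(values, gap=0.2):
    """Group 1-D values into clusters wherever consecutive sorted
    values are separated by more than `gap`."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    current = 0
    for prev, idx in zip(order, order[1:]):
        if values[idx] - values[prev] > gap:
            current += 1
        labels[idx] = current
    return labels

values = [v for v, _ in observations]
labels = cluster_1d(values)

# Count word co-occurrence per perceptual cluster.
cooc = defaultdict(Counter)
for label, (_, words) in zip(labels, observations):
    cooc[label].update(words)

# Ground each cluster in its most distinctive word:
# count inside the cluster minus count in all other clusters.
grounding = {}
for label, counts in cooc.items():
    others = Counter()
    for other_label, other_counts in cooc.items():
        if other_label != label:
            others.update(other_counts)
    grounding[label] = max(counts, key=lambda w: counts[w] - others[w])

print(grounding)  # {0: 'red', 1: 'blue'}
```

The distinctiveness score suppresses function words such as "the", which appear in every cluster, so each perceptual concept is paired with the word that co-occurs with it most exclusively.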
9
Grounding language in perception for scene conceptualization in autonomous robots
Dubba, KSR; De Oliveira, MR; Lim, GH. - : AI Access Foundation, 2014
10
Interactive semantic feedback for intuitive ontology authoring
Denaux, R; Thakker, DA; Dimitrova, V. - : IOS Press, 2012
11
From Video to RCC8: Exploiting a Distance Based Semantics to Stabilise the Interpretation of Mereotopological Relations
Sridhar, M; Cohn, AG; Hogg, DC. - : Springer, 2011
12
The automated evaluation of inferred word classifications
Hughes, J; Atwell, E. - : John Wiley & Sons, 1994
13
Online perceptual learning and natural language acquisition for autonomous robots
Alomari, M; Li, F; Hogg, DC. - : Elsevier, 2022

Facets: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 13 (all 13 hits are open-access documents indexed in BASE)
© 2013 - 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings