Show simple item record

dc.contributor.author  Lemaignan, S
dc.contributor.author  Warnier, M
dc.contributor.author  Sisbot, EA
dc.contributor.author  Clodic, A
dc.contributor.author  Alami, R
dc.date.accessioned  2017-02-14T17:02:12Z
dc.date.available  2017-02-14T17:02:12Z
dc.date.issued  2017-06
dc.identifier.issn  0004-3702
dc.identifier.issn  1872-7921
dc.identifier.uri  http://hdl.handle.net/10026.1/8468
dc.description.abstract

Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human.

We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we finally show how explicit knowledge management, both symbolic and geometric, proves to be instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.

dc.format.extent  45-69
dc.language  en
dc.language.iso  en
dc.publisher  Elsevier
dc.subject  Human-robot interaction
dc.subject  Cognitive robotics
dc.subject  Perspective taking
dc.subject  Cognitive architecture
dc.subject  Knowledge representation and reasoning
dc.title  Artificial Cognition for Social Human-Robot Interaction: An Implementation
dc.type  journal-article
dc.type  article
plymouth.volume  247
plymouth.publisher-url  http://dx.doi.org/10.1016/j.artint.2016.07.002
plymouth.publication-status  Published
plymouth.journal  Artificial Intelligence
dc.identifier.doi  10.1016/j.artint.2016.07.002
plymouth.organisational-group  /Plymouth
plymouth.organisational-group  /Plymouth/REF 2021 Researchers by UoA
plymouth.organisational-group  /Plymouth/REF 2021 Researchers by UoA/UoA11 Computer Science and Informatics
dcterms.dateAccepted  2016-07-09
dc.identifier.eissn  1872-7921
dc.rights.embargoperiod  12 months
rioxxterms.versionofrecord  10.1016/j.artint.2016.07.002
rioxxterms.licenseref.uri  http://www.rioxx.net/licenses/under-embargo-all-rights-reserved
rioxxterms.licenseref.startdate  2017-06
rioxxterms.type  Journal Article/Review
plymouth.oa-location  http://www.sciencedirect.com/science/article/pii/S0004370216300790



