
dc.contributor.supervisor: Cangelosi, Angelo
dc.contributor.author: Zanatto, Debora
dc.contributor.other: School of Engineering, Computing and Mathematics
dc.date.accessioned: 2019-07-24T07:40:23Z
dc.date.available: 2019-07-24T07:40:23Z
dc.date.issued: 2019
dc.identifier: 10513659
dc.identifier.uri: http://hdl.handle.net/10026.1/14677
dc.description.abstract:

Robots are entering the world in many diverse ways, from advanced surgical settings to assistive technologies for people with disabilities. They are increasingly designed and developed to assist humans with everyday tasks; however, they are still perceived as tools to be manipulated and controlled by humans rather than as complete, autonomous helpers. One of the main reasons can be attributed to the development of their capacity to appear credible and trustworthy. This dissertation explores the challenge of interacting with social robots, investigating which specific situations and environments lead to increased trust and cooperation between humans and robots. After discussing the multifaceted concept of anthropomorphism and its key role in cooperation through a review of the literature, three open issues are addressed: the lack of a clear definition of the anthropomorphic contribution to robot acceptance, the lack of defined anthropomorphic boundaries that should not be crossed if a satisfying interaction in HRI is to be maintained, and the absence of a genuinely cooperative interaction with a robotic peer. Chapter 2 addresses the first issue, demonstrating that a robot's credibility can be affected by experience and by anthropomorphic stereotype activation. Chapters 3, 4, 5 and 6 focus on resolving the remaining two issues in parallel. Using the Economic Investment Game in four different studies, the emergence of human cooperative attitudes towards robots is demonstrated. Finally, the limits of anthropomorphism are investigated by comparing social, human-like behaviours with a machine-like, static nature. The results show that the type of payoff can selectively affect trust and cooperation in HRI: with a low payoff, participants increase their tendency to look for the robot's anthropomorphic cues, whereas a high payoff is more suitable for machine-like agents.

dc.description.sponsorship: THRIVE, Air Force Office of Scientific Research, Award No. FA9550-15-1-0025
dc.language.iso: en
dc.publisher: University of Plymouth
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subject: Human-Robot Interaction
dc.subject: Trust
dc.subject: Cooperation
dc.subject: Anthropomorphism
dc.subject: Joint Attention
dc.subject: Decision Making
dc.subject: Imitation
dc.subject.classification: PhD
dc.title: WHEN DO WE COOPERATE WITH ROBOTS?
dc.type: Thesis
plymouth.version: publishable
dc.identifier.doi: http://dx.doi.org/10.24382/1015
dc.rights.embargoperiod: No embargo
dc.type.qualification: Doctorate
rioxxterms.version: NA
plymouth.orcid_id: https://orcid.org/0000-0002-7903-3491




Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 United States

All items in PEARL are protected by copyright law.
Author manuscripts deposited to comply with open access mandates are made available in accordance with publisher policies. Please cite only the published version using the details provided on the item record or document. In the absence of an open licence (e.g. Creative Commons), permissions for further reuse of content should be sought from the publisher or author.