WHEN DO WE COOPERATE WITH ROBOTS?
Robots are entering the world in many diverse ways, from advanced surgical settings to assistive technologies for people with disabilities. They are increasingly designed and developed to assist humans with everyday tasks. However, they are still perceived as tools to be manipulated and controlled by humans, rather than as complete, autonomous helpers. One of the main reasons can be attributed to the degree to which their capabilities appear credible and trustworthy. This dissertation explores the challenge of interacting with social robots, investigating which specific situations and environments lead to an increase in trust and cooperation between humans and robots. After discussing the multifaceted concept of anthropomorphism and its key role in cooperation through the literature, three open issues are addressed: the lack of a clear definition of the anthropomorphic contribution to robot acceptance, the lack of defined anthropomorphic boundaries that should not be crossed in order to maintain a satisfying human-robot interaction (HRI), and the absence of a truly cooperative interaction with a robotic peer. Chapter 2 addresses the first issue, demonstrating that a robot's credibility can be affected by experience and by anthropomorphic stereotype activation. Chapters 3, 4, 5 and 6 focus on resolving the remaining two issues in parallel. Using the Economic Investment Game in four different studies, the emergence of human cooperative attitudes towards robots is demonstrated. Finally, the limits of anthropomorphism are investigated by comparing social, human-like behaviours with a machine-like, static nature. Results show that the type of payoff can selectively affect trust and cooperation in HRI: with a low payoff, participants increase their tendency to look for the robot's anthropomorphic cues, whereas a high payoff is more suitable for machine-like agents.