Trust in artificial voices
Date: 2018-04-05
Abstract
Societies rely on trustworthy communication in order to function, and the need for trust clearly extends to human-machine communication. It is therefore essential to design machines that elicit trust, so as to make interactions with them acceptable and successful. However, while there is a substantial literature on first impressions of trustworthiness based on various characteristics, including voice, little is known about how trust develops over time. Are first impressions maintained, or are they revised in light of an agent's behaviour? We addressed these questions in three experiments using the "iterated investment game", a methodology derived from game theory that allows implicit measures of trust to be collected over repeated interactions. Participants played the game with agents having different voices: in the first experiment, participants played with a computer agent that had either a Standard Southern British English (SSBE) accent or a Liverpool accent; in the second experiment, they played with a computer agent that had either an SSBE or a Birmingham accent; in the third experiment, they played with a robot that had either a natural or a synthetic voice. All these agents behaved either trustworthily or untrustworthily. In all three experiments, participants trusted the agent with one voice more when it was trustworthy, and the agent with the other voice more when it was untrustworthy. This suggests that participants may adjust their trusting behaviour according to whether the agent's behaviour is congruent with their first impression of its voice. Implications for human-machine interaction design are discussed.
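The abstract does not spell out the mechanics of the investment game, but in the standard Berg-style paradigm an investor sends some portion of an endowment, the amount is multiplied in transit, and the trustee decides how much to return; the amount invested each round serves as an implicit behavioural measure of trust. The sketch below illustrates one round under those assumed rules — the endowment, multiplier, and return fractions are illustrative values, not figures from the study:

```python
def play_round(invested, endowment=10, multiplier=3, return_fraction=0.5):
    """One round of a Berg-style investment game (assumed rules).

    The investor sends `invested` units of their endowment; the amount
    is multiplied on the way to the trustee (the agent), who returns a
    fraction of what it received. The invested amount is the implicit
    trust measure tracked across rounds.
    """
    assert 0 <= invested <= endowment
    received = invested * multiplier           # amount the trustee holds
    returned = received * return_fraction      # trustee's repayment
    investor_payoff = endowment - invested + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# A trustworthy agent returns generously; an untrustworthy one keeps most.
trusty_payoff, _ = play_round(8, return_fraction=0.75)  # investor ends with 20.0
shady_payoff, _ = play_round(8, return_fraction=0.1)    # investor ends with 4.4
```

In the iterated version used in the experiments, rounds like this repeat, so changes in how much a participant invests over time index how their trust in a given voice and behaviour pairing evolves.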