Show simple item record

dc.contributor.supervisor: Wennekers, Thomas
dc.contributor.author: Marmpena, Asimina
dc.contributor.other: School of Engineering, Computing and Mathematics (en_US)
dc.date.accessioned: 2021-06-10T09:40:08Z
dc.date.available: 2021-06-10T09:40:08Z
dc.date.issued: 2021
dc.identifier: 10564801 (en_US)
dc.identifier.uri: http://hdl.handle.net/10026.1/17244
dc.description: Some of the chapters of this thesis are based on research published by the author. Chapter 4 is based on Marmpena, M., Lim, A., and Dahl, T. S. (2018). How does the robot feel? Perception of valence and arousal in emotional body language. Paladyn, Journal of Behavioral Robotics, 9(1), 168–182. DOI: https://doi.org/10.1515/pjbr-2018-0012. Chapter 6 is based on Marmpena, M., Lim, A., Dahl, T. S., and Hemion, N. (2019). Generating robotic emotional body language with Variational Autoencoders. In Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 545–551. DOI: 10.1109/ACII.2019.8925459. Chapter 7 extends Marmpena, M., Garcia, F., and Lim, A. (2020). Generating robotic emotional body language of targeted valence and arousal with Conditional Variational Autoencoders. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, HRI '20, pages 357–359. DOI: https://doi.org/10.1145/3371382.3378360. The designed and generated robotic emotional body language expressions presented in this thesis are publicly available at https://github.com/minamar/rebl-pepper-data (en_US)
dc.description.abstract:

In the next decade, societies will witness a rise in service robots deployed in social environments such as schools, homes, or shops, where they will operate as assistants, public relations agents, or companions. People are expected to willingly engage and collaborate with these robots to accomplish positive outcomes. To facilitate collaboration, robots need to comply with the behavioural and social norms that humans use in their daily interactions. One such behavioural norm is the expression of emotion through body language.

Previous work on emotional body language synthesis for humanoid robots has focused mainly on hand-coded design methods, often employing features extracted from human body language. However, hand-coded design is cumbersome and yields a limited number of expressions with low variability. This limitation can come at the expense of user engagement, since the robot's behaviours will appear repetitive and predictable, especially in long-term interaction. Furthermore, design approaches based strictly on human emotional body language might not transfer effectively to robots because of their simpler morphology. Finally, most previous work uses six or fewer basic emotion categories in both the design and the evaluation of emotional expressions, an approach that risks a lossy compression of the granularity of emotion expression.

This thesis presents a methodology for developing a complete framework of emotional body language generation for a humanoid robot, with the aim of addressing these three limitations. Our starting point is a small set of animations designed by professional animators with the robot's morphology in mind. We conducted an initial user study to acquire reliable dimensional labels of valence and arousal for each animation. Next, we used the motion sequences from these animations to train a Variational Autoencoder, a deep learning model, to generate numerous new animations in an unsupervised setting. Finally, we extended the model to condition the generative process on valence and arousal attributes, and we conducted a second user study to evaluate the interpretability of the generated animations in terms of valence, arousal, and dominance. The results indicate moderate to strong interpretability.

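The abstract describes a Variational Autoencoder pipeline that is later extended to condition generation on valence and arousal. As a rough illustration of that idea, below is a minimal sketch of a Conditional VAE in PyTorch. It is not the thesis implementation: the sequence length, joint count, layer widths, latent size, and the (valence, arousal) target values are all illustrative assumptions.

# Minimal Conditional VAE sketch (PyTorch) for flattened motion sequences.
# Illustrative only: dimensions and architecture are assumptions, not the
# thesis implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

SEQ_DIM = 60 * 17    # hypothetical: 60 frames x 17 joint angles, flattened
COND_DIM = 2         # (valence, arousal) conditioning attributes
LATENT_DIM = 32      # hypothetical latent size

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(SEQ_DIM + COND_DIM, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT_DIM)
        self.logvar = nn.Linear(256, LATENT_DIM)
        self.dec = nn.Sequential(
            nn.Linear(LATENT_DIM + COND_DIM, 256), nn.ReLU(),
            nn.Linear(256, SEQ_DIM),
        )

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

def loss_fn(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Generation: sample a latent vector and decode it with a target condition.
model = CVAE()
z = torch.randn(1, LATENT_DIM)
target = torch.tensor([[0.8, 0.6]])  # hypothetical valence/arousal target
animation = model.dec(torch.cat([z, target], dim=-1))  # (1, SEQ_DIM) motion

Holding z fixed while sweeping the (valence, arousal) condition would, under this setup, steer the decoded motion toward a targeted affective expression, which is the essence of the conditional extension described in the abstract.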
dc.language.iso: en
dc.publisher: University of Plymouth
dc.rights: Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.subject: Human-robot Interaction (en_US)
dc.subject: Social robotics (en_US)
dc.subject: Affective Computing (en_US)
dc.subject: Generative models (en_US)
dc.subject: Variational autoencoder (en_US)
dc.subject: Deep Learning (en_US)
dc.subject: Emotional body language generation (en_US)
dc.subject.classification: PhD (en_US)
dc.title: Emotional body language synthesis for humanoid robots (en_US)
dc.type: Thesis
plymouth.version: publishable (en_US)
dc.identifier.doi: http://dx.doi.org/10.24382/981
dc.rights.embargoperiod: No embargo (en_US)
dc.type.qualification: Doctorate (en_US)
rioxxterms.funder: Horizon 2020 (en_US)
rioxxterms.identifier.project: APRIL (en_US)
rioxxterms.version: NA
plymouth.orcid.id: https://orcid.org/0000-0002-8835-1176 (en_US)



