
dc.contributor.supervisor: Wennekers, Thomas
dc.contributor.author: Marmpena, Asimina
dc.contributor.other: Faculty of Science and Engineering (en_US)
dc.description: Some of the chapters of this thesis are based on research published by the author.
- Chapter 4 is based on Marmpena, M., Lim, A., and Dahl, T. S. (2018). How does the robot feel? Perception of valence and arousal in emotional body language. Paladyn, Journal of Behavioral Robotics, 9(1), 168–182. DOI:
- Chapter 6 is based on Marmpena, M., Lim, A., Dahl, T. S., and Hemion, N. (2019). Generating robotic emotional body language with Variational Autoencoders. In Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 545–551. DOI: 10.1109/ACII.2019.8925459.
- Chapter 7 extends Marmpena, M., Garcia, F., and Lim, A. (2020). Generating robotic emotional body language of targeted valence and arousal with Conditional Variational Autoencoders. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’20, pages 357–359. DOI:
The designed or generated robotic emotional body language expression data presented in this thesis are publicly available:

In the next decade, societies will witness a rise in service robots deployed in social environments, such as schools, homes, or shops, where they will operate as assistants, public relations agents, or companions. People are expected to willingly engage and collaborate with these robots to accomplish positive outcomes. To facilitate collaboration, robots need to comply with the behavioural and social norms that humans use in their daily interactions. One such behavioural norm is the expression of emotion through body language.

Previous work on emotional body language synthesis for humanoid robots has mainly focused on hand-coded design methods, often employing features extracted from human body language. However, hand-coded design is cumbersome and yields a limited number of expressions with low variability. This limitation can come at the expense of user engagement, since the robotic behaviours will appear repetitive and predictable, especially in long-term interaction. Furthermore, design approaches strictly based on human emotional body language might not transfer effectively to robots because of their simpler morphology. Finally, most previous work uses six or fewer basic emotion categories in the design and evaluation phases of emotional expressions. This approach might result in a lossy compression of the granularity of emotion expression.

This thesis presents a methodology for developing a complete framework of emotional body language generation for a humanoid robot, with the aim of addressing these three limitations. Our starting point is a small set of animations designed by professional animators with the robot's morphology in mind. We conducted an initial user study to acquire reliable dimensional labels of valence and arousal for each animation. Next, we used the motion sequences from these animations to train a Variational Autoencoder, a deep learning model, to generate numerous new animations in an unsupervised setting. Finally, we extended the model to condition the generative process on valence and arousal attributes, and we conducted a second user study to evaluate the interpretability of the generated animations in terms of valence, arousal, and dominance. The results indicate moderate to strong interpretability.
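The conditioning step described above, appending the valence and arousal values to both the encoder input and the latent code of a Variational Autoencoder, can be sketched as follows. This is a minimal, untrained forward-pass illustration in NumPy; all dimensions, layer sizes, and names are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 30-dim joint-angle frame, a 2-dim condition
# (valence, arousal), and an 8-dim latent space.
FRAME_DIM, COND_DIM, LATENT_DIM, HIDDEN = 30, 2, 8, 64

def layer(in_dim, out_dim):
    """Randomly initialised dense layer as a (weights, bias) pair."""
    return rng.normal(0.0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

def dense(x, wb):
    w, b = wb
    return x @ w + b

# Encoder q(z | x, c): the condition is concatenated to the input frame.
enc_h = layer(FRAME_DIM + COND_DIM, HIDDEN)
enc_mu = layer(HIDDEN, LATENT_DIM)
enc_logvar = layer(HIDDEN, LATENT_DIM)

# Decoder p(x | z, c): the condition is concatenated to the latent code.
dec_h = layer(LATENT_DIM + COND_DIM, HIDDEN)
dec_out = layer(HIDDEN, FRAME_DIM)

def encode(x, c):
    h = np.tanh(dense(np.concatenate([x, c]), enc_h))
    return dense(h, enc_mu), dense(h, enc_logvar)

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps the sampling step differentiable in training.
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z, c):
    h = np.tanh(dense(np.concatenate([z, c]), dec_h))
    return dense(h, dec_out)

# Generation for a target affective state: sample z from the prior N(0, I)
# and decode it together with the desired (valence, arousal) condition.
condition = np.array([0.8, 0.3])
z = rng.normal(size=LATENT_DIM)
frame = decode(z, condition)
```

At inference time only the decoder is needed: varying the condition vector while holding z fixed is what steers a generated motion frame toward a target valence and arousal.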

dc.publisher: University of Plymouth
dc.rights: Attribution 3.0 United States
dc.subject: human-robot interaction (en_US)
dc.subject: social robotics (en_US)
dc.subject: affective computing (en_US)
dc.subject: generative models (en_US)
dc.subject: variational autoencoder (en_US)
dc.subject: deep learning (en_US)
dc.subject: emotional body language generation (en_US)
dc.title: Emotional body language synthesis for humanoid robots (en_US)
dc.rights.embargoperiod: No embargo (en_US)
rioxxterms.funder: Horizon 2020 (en_US)

