Bilkent University
Department of Computer Engineering
PhD THESIS PRESENTATION

 

Target-Sensitive Interpersonal Animation Generation with Personality Expression

 

Sinan Sonlu
PhD Student
(Supervisor: Prof. Dr. Uğur Güdükbay)

Computer Engineering Department
Bilkent University

Abstract: Expressing personality is essential in human animation for realism and improved communication. We focus on interpersonal animations that portray target-sensitive actions, such as handshaking, to build a data-driven model for personality-based dyadic motion synthesis. However, since datasets with personality annotations are rare, we first add personality labels to a subset of animations from well-known datasets with high style variance. To this end, we conduct a user study in which participants rate the animation samples for perceived personality. We form a set of motion parameters based on various distances, angles, and volumes calculated over different combinations of body joints. Through an in-depth correlation analysis, we then identify the motion parameters that influence personality expression; these parameters guide personality expression in our generative model. We build a transformer-based generative autoencoder that takes as input the previous and current poses of a secondary agent, together with the previous pose and the desired personality of the main agent, and outputs the current pose of the main agent expressing the desired traits. We train this system on dyadic motion samples to learn the semantic actions, additionally using a randomly selected personality sample so that the model learns to replicate different personality styles through their motion parameters. We evaluate the generative model by synthesizing motions in which the main agent expresses different personality traits. The results suggest that our approach has a significant effect on perceived extraversion and agreeableness, while its influence on the remaining factors is limited, likely because the secondary agent's motion constrains the synthesis. Consequently, the resulting model can effectively generate personality-specific poses over time but offers less variety in motion speed.
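
For illustration only (this sketch is ours, not taken from the thesis): pose-based motion parameters of the kind described, built from distances, angles, and volumes over joint combinations, could be computed roughly as follows, assuming each pose is a J x 3 NumPy array of joint positions. The function names and joint groupings are hypothetical.

import numpy as np
from scipy.spatial import ConvexHull

def joint_distance(pose, i, j):
    """Euclidean distance between joints i and j (pose: J x 3 array of positions)."""
    return np.linalg.norm(pose[i] - pose[j])

def joint_angle(pose, i, j, k):
    """Angle at joint j formed by the segments toward joints i and k, in radians."""
    u, v = pose[i] - pose[j], pose[k] - pose[j]
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def joint_volume(pose, indices):
    """Convex-hull volume of a joint subset (needs at least 4 non-coplanar joints)."""
    return ConvexHull(pose[indices]).volume

Per-frame values such as these, tracked over an animation clip, would then form the motion-parameter set that is correlated against the perceived-personality ratings.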
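Likewise, a minimal PyTorch sketch of the generative model's described input/output interface; the class name, tokenization, and layer sizes are assumptions, not the thesis architecture.

import torch
import torch.nn as nn

class DyadicPoseGenerator(nn.Module):
    """Hypothetical interface: previous/current poses of the secondary agent plus
    the main agent's previous pose and desired traits -> main agent's current pose."""
    def __init__(self, pose_dim, traits_dim=5, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)     # pose token embedding
        self.traits = nn.Linear(traits_dim, d_model)  # personality token embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.decode = nn.Linear(d_model, pose_dim)

    def forward(self, main_prev, sec_prev, sec_curr, personality):
        # Each pose tensor: (batch, pose_dim); personality: (batch, traits_dim).
        tokens = torch.stack(
            [self.embed(main_prev), self.embed(sec_prev),
             self.embed(sec_curr), self.traits(personality)], dim=1)
        h = self.encoder(tokens)        # (batch, 4, d_model)
        return self.decode(h[:, 0])     # predicted current pose of the main agent

At inference time, the predicted pose would presumably be fed back as main_prev for the next frame, yielding the motion sequence frame by frame.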

 

DATE: Wednesday, September 3 @ 09:30
PLACE: EA 516