  • Personality Expression with Large Language Models and GPT-Generated Appearance and Voice

    We will build a holistic agent that generates its responses with an LLM and also has a GPT-generated appearance and voice. The goal is to compare the personality perception produced by different prompts for a character with a specific appearance/voice, using only off-the-shelf LLM and GPT-style systems. Personality can be measured automatically for text responses, but a user study is necessary when evaluating the full system with image/voice combinations. Ready-made systems can be used directly to test the effects of different prompts: first generate the words to be spoken for an introvert, an extravert, or a specific person with an LLM, then generate an appearance with a matching prompt for each of these three options. A user study could then examine how different combinations of these channels affect perceived personality, for example, how an introverted appearance is judged when paired with extraverted speech. ElevenLabs could provide the audio, and one of Google's or OpenAI's systems the image. Animating the mouth is not strictly necessary, but a talking video could be generated directly with Google Veo 3.
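
    A minimal sketch of the text and appearance steps follows, assuming the OpenAI Python SDK; the model names, persona prompts, and the deliberately mismatched pairing are placeholders, and the ElevenLabs/Veo steps are only indicated by comments.

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PERSONA_PROMPTS = {
        "introvert": "You are a reserved, quiet person. Answer briefly.",
        "extravert": "You are an outgoing, talkative person. Answer enthusiastically.",
    }

    def persona_response(persona: str, question: str) -> str:
        """Generate the words the agent will speak for a given persona."""
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder model choice
            messages=[
                {"role": "system", "content": PERSONA_PROMPTS[persona]},
                {"role": "user", "content": question},
            ],
        )
        return reply.choices[0].message.content

    def persona_portrait(persona: str) -> str:
        """Generate an appearance for a persona; returns an image URL."""
        image = client.images.generate(
            model="dall-e-3",  # placeholder; any text-to-image system would do
            prompt=f"Portrait photo of a person who looks like an {persona}",
        )
        return image.data[0].url

    text = persona_response("introvert", "How was your weekend?")
    portrait_url = persona_portrait("extravert")  # a mismatched pair for the study
    # The text would then be voiced (e.g., with ElevenLabs) and optionally
    # lip-synced onto the portrait (e.g., with Google Veo 3).
    ```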

  • Comparing Personality Expression with Large Language Models and GPT-Generated Appearance and Voice

    Different LLM personalities can also be compared using text-based criteria. For example, a group of researchers evaluated LLM responses generated under different personalities in terms of toxicity. Similarly, various emotions can be measured through word-use frequencies when the model acts as an extravert, an introvert, or a specific person. One could analyze, for instance, which personality type uses more positive sentences and which produces longer sentences. Francois Mairesse uses various text-based features for analyzing and generating personality from text, and the automatically measurable ones can serve for comparison: features such as punctuation, sentence length, pauses, and the positive/negative polarity of the chosen words can all be computed automatically. The aim is to explore whether LLMs reflect different personalities through the personality-word-usage relationships identified in previous studies, or whether they follow a different path. For example, an LLM might rely on sentence length for certain personalities and employ different cues for others.
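
    A minimal sketch of such automatically measurable, Mairesse-style surface features is below; the tiny positive-word list is illustrative only, and a real study would substitute an established lexicon or sentiment model.

    ```python
    import re

    POSITIVE_WORDS = {"great", "happy", "love", "wonderful", "good"}  # toy lexicon

    def text_features(text: str) -> dict:
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text.lower())
        punctuation = re.findall(r"[,;:!?.-]", text)
        return {
            "avg_sentence_len": len(words) / max(len(sentences), 1),
            "punctuation_per_word": len(punctuation) / max(len(words), 1),
            "positive_word_ratio": sum(w in POSITIVE_WORDS for w in words)
                                   / max(len(words), 1),
        }

    # Comparing the same question answered by two LLM personas:
    print(text_features("I love this! It is wonderful."))
    print(text_features("It was acceptable. Nothing more to say."))
    ```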

  • Comparing Personality Expression Generated with Large Language Models

    We will compare the personalities produced by similarly trained LLMs when they are conditioned on a person's name versus on personality traits we have defined beforehand. For example, when training the LLMs, one version might be given only text about that person, while the other might be given text from people with a specific personality. Personality comparisons can then be performed with the same automated method as above.
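
    A sketch of the comparison step, assuming the two trained models are exposed behind a common generate() interface and that an automated scorer such as the surface-feature extractor above is available; all names here are placeholders.

    ```python
    from statistics import mean

    def compare_models(generate_a, generate_b, score, questions):
        """Average an automated personality score over the same question set."""
        scores_a = [score(generate_a(q)) for q in questions]
        scores_b = [score(generate_b(q)) for q in questions]
        return mean(scores_a), mean(scores_b)
    ```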

  • Impact of Facial Expressions and Head Position on Personality Perception for Conversational Agents

    We will investigate the impact of facial expressions and head position on personality perception during conversation. To do this, we need to create videos featuring different facial expressions (e.g., sad, happy, angry) and different head angles (looking up, down, sideways, or forward) of the same person. We can achieve this with neural rendering.
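
    A small sketch of the resulting stimulus matrix for the user study: a full crossing of facial expressions and head angles, each to be rendered as a short video of the same person; the condition lists and clip names are placeholders.

    ```python
    from itertools import product

    EXPRESSIONS = ["sad", "happy", "angry", "neutral"]
    HEAD_ANGLES = ["up", "down", "sideways", "forward"]

    stimuli = [
        {"expression": e, "head_angle": h, "clip": f"{e}_{h}.mp4"}
        for e, h in product(EXPRESSIONS, HEAD_ANGLES)
    ]
    print(len(stimuli), "conditions")  # 16 videos to render and rate
    ```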

  • Automated Analysis of Structural Navigability in Public Spaces

    We will study how to automatically analyze the navigability (for emergencies or regular use) of building or outdoor-environment designs for pedestrians with different physical disabilities (not just wheelchair users) or particular social situations (e.g., parents with children). The work could even be developed into a plug-in for software such as Autodesk 3ds Max, or lead to a rating system or standard for accessibility.
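
    A toy sketch of one possible core check, clearance-aware navigability on an occupancy grid: a cell is traversable only if every cell within the agent's body radius is free, so the same floor plan can be tested for a walking pedestrian, a wheelchair, or a parent walking beside a child by varying the effective radius. The grid representation and radius model are assumptions for illustration.

    ```python
    from collections import deque

    def navigable(grid, start, goal, radius=0):
        """BFS reachability on a 0/1 occupancy grid for a square agent footprint."""
        rows, cols = len(grid), len(grid[0])

        def fits(r, c):  # the whole footprint of the given radius must be free
            return all(
                0 <= r + dr < rows and 0 <= c + dc < cols
                and grid[r + dr][c + dc] == 0
                for dr in range(-radius, radius + 1)
                for dc in range(-radius, radius + 1)
            )

        seen, queue = {start}, deque([start])
        while queue:
            r, c = queue.popleft()
            if (r, c) == goal:
                return True
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (nr, nc) not in seen and fits(nr, nc):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return False
    ```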

  • Apparent Personality Transfer

    We will transfer the apparent personality of one individual to an image of a different person. There will be two inputs: a video of the first person and an image of the second. The personality metrics of the first person will be known (from a user study), and we will modify the second person to exhibit similar features. The second person may change their pose, facial expression, body shape, or clothing, but not their identity (they should not become a different person). Some attributes, such as pose or facial expression, may be transferred directly; existing personality-altering techniques, such as Laban Movement Analysis-based parameters, may also be used. The output can be an image or a video. We will run a user study on the apparent personality of the outputs and assess performance by their similarity to the personality in the input video.
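
    A minimal sketch of the evaluation step, assuming Big Five (OCEAN) scores are collected for both the input video and the output in the user study; the similarity measure and the example score vectors are illustrative choices, not a prescribed metric.

    ```python
    import math

    def personality_similarity(ocean_in, ocean_out):
        """Inverse Euclidean distance between two five-trait score vectors."""
        dist = math.dist(ocean_in, ocean_out)
        return 1.0 / (1.0 + dist)

    input_scores = [3.2, 4.1, 2.5, 3.8, 2.9]   # from the source-video user study
    output_scores = [3.0, 4.3, 2.7, 3.5, 3.1]  # from the transferred-result study
    print(personality_similarity(input_scores, output_scores))
    ```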

  • Expressing Personality in Animal Animation

    This project focuses on expressing different personality traits in animal characters' animation through procedural modifications. Previous research achieves a similar task for human motion using animation adjustments based on Laban Movement Analysis. Because animals have different skeletal structures, we want to focus on a few animals people are familiar with, such as cats and dogs. The theory developed for human movement can be adapted to animals with various changes. For example, introversion is expressed in humans through a slumped posture and lack of activity; bending an animal's spine to achieve a similar effect would require rotating a different set of bones. Animals also use their ears and tails actively, and these body parts should be exploited to express the desired traits. The modifications will be applied to neutral motions. We will perform a user study to show the effectiveness of the implemented system. We plan to use Unity, but Unreal or ThreeJS is also acceptable.
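
    An engine-agnostic sketch of the procedural idea (in Unity this would run over the rig's Transform chain): an assumed introversion parameter in [0, 1] distributes a forward bend across the spine bones and damps tail sway; an extraverted setting would do the opposite. Bone representation and constants are placeholders.

    ```python
    def apply_introversion(spine_pitch, tail_sway_amp, introversion,
                           max_bend_deg=30.0):
        """Offset per-bone spine pitch (degrees) and scale tail sway amplitude."""
        n = len(spine_pitch)
        bend = introversion * max_bend_deg
        # Distribute the bend gradually from the base to the head of the spine.
        bent = [p + bend * (i + 1) / n for i, p in enumerate(spine_pitch)]
        sway = tail_sway_amp * (1.0 - 0.8 * introversion)  # quieter tail
        return bent, sway

    bent_spine, sway = apply_introversion([0.0, 0.0, 0.0, 0.0], 15.0, 0.7)
    ```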

  • Personality-Driven Co-speech Gesture Generation

    Gestures accompany speech in conversation. Existing works can synthesize co-speech gesture animations from input speech, but they do not consider the mood or personality of the generated animation. While personality can be added to the generated animations with procedural post-processing, incorporating mood or personality into the generative model itself would be more effective. The input can be the speech file, as in previous research, possibly together with the dialogue text and the target mood or personality. We expect to use a data-driven model; however, a procedural approach is also acceptable. For example, certain speech patterns can be mapped to specific gestures, with mood or personality determining the extent of the motion: an angry or extraverted agent may move its hands faster, covering more space. In a procedural gesture animation, these features can control the movement range, speed, and character of the hand motion.
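
    A sketch of the procedural variant under simple assumptions: a gesture is a list of per-frame 3D wrist positions, and an extraversion parameter scales the motion's range and speed; a data-driven model would instead take such parameters as conditioning inputs.

    ```python
    def stylize_gesture(frames, extraversion):
        """Exaggerate motion around the mean pose and speed up playback."""
        amp = 1.0 + extraversion          # extraverts cover more space
        n = len(frames)
        mean = [sum(f[i] for f in frames) / n for i in range(3)]
        scaled = [[mean[i] + amp * (f[i] - mean[i]) for i in range(3)]
                  for f in frames]
        step = 1 + round(extraversion)    # crude speed-up: drop frames
        return scaled[::step]
    ```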

  • Laban-based Inverse Kinematics System

    Inverse Kinematics aims to find plausible body poses given target end-effector locations. Many techniques exist for computing Inverse Kinematics, and data-driven methods are one option: a neural network can learn the relationship between end-effector positions and skeleton configurations (bone rotations). Laban Movement Analysis describes high-level aspects of human motion through low-level measurements; given a body pose, the corresponding Laban parameters can be computed and supplied to the network as additional input. This project aims to choose descriptive Laban features and build a data-driven Inverse Kinematics system that receives end-effector positions and desired Laban parameter values and outputs a plausible body pose. We will interpolate end-effector positions to prepare different animations and run a user study to evaluate realism and the accuracy of representing the desired Laban features.
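
    A minimal sketch of such a model, assuming PyTorch: the network maps concatenated end-effector positions and Laban parameter values to per-bone rotations. The dimensions, architecture, and quaternion output are placeholder choices.

    ```python
    import torch
    import torch.nn as nn

    N_EFFECTORS, N_LABAN, N_BONES = 4, 6, 20  # placeholder sizes

    class LabanIK(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_EFFECTORS * 3 + N_LABAN, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, N_BONES * 4),  # one quaternion per bone
            )

        def forward(self, effectors, laban):
            x = torch.cat([effectors.flatten(1), laban], dim=1)
            quats = self.net(x).view(-1, N_BONES, 4)
            return nn.functional.normalize(quats, dim=-1)  # unit quaternions

    model = LabanIK()
    pose = model(torch.randn(8, N_EFFECTORS, 3), torch.randn(8, N_LABAN))
    ```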

  • Expanding Personality-Animation Database

    We have a dataset of human animations matched with personality scores. We want to expand this dataset using animations in the wild: any public video (MediaPipe and similar systems can extract 3D animations from 2D videos) or animation can serve as new data, but such samples will lack style, personality, and semantic labels. The first task is therefore to gather a set of animations from various sources. We then want to label the new samples with the semantic action they contain and the personality traits they convey, building on the existing labeled animations and on action recognition networks. Off-the-shelf action recognition models can be used, but personality labeling will require either a clustering model or a different data-driven approach. We will validate the estimated labels with a user study and publish the dataset for future studies.
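
    A sketch of the data-gathering step, assuming the (legacy) MediaPipe Pose solution and OpenCV: it extracts per-frame 3D world landmarks from any public video, which would then be assembled into an unlabeled animation sample.

    ```python
    import cv2
    import mediapipe as mp

    def extract_3d_pose(video_path):
        """Return one 33-joint 3D skeleton per successfully processed frame."""
        frames = []
        cap = cv2.VideoCapture(video_path)
        with mp.solutions.pose.Pose(static_image_mode=False) as pose:
            while True:
                ok, bgr = cap.read()
                if not ok:
                    break
                result = pose.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
                if result.pose_world_landmarks:
                    frames.append([(l.x, l.y, l.z)
                                   for l in result.pose_world_landmarks.landmark])
        cap.release()
        return frames
    ```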