So I bet you're wondering what Nelskati has done since you last talked to Nelskati.
Nicole was able to piece together the conversation from Libo's input, which listed, for each keyframe, the time, emotion, agent, and gesture name. Here's a video of the script running:
And here is the resulting conversation:
At the moment this conversation has no emotions.
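The exact format of the input from Libo isn't reproduced here, but a minimal sketch of that assembly step might look like the following, assuming each record carries a keyframe time, emotion, agent, and gesture name. The field layout, the `GestureEvent` class, and the `parse_line`/`build_conversation` helpers are placeholders for illustration, not Nicole's actual script.

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    time: float      # keyframe time for the gesture
    emotion: str     # e.g. "neutral", "angry", "sad"
    agent: str       # which agent performs the gesture
    gesture: str     # gesture name, used to look up its .gkp file

def parse_line(line):
    """Parse one record of the assumed 'time, emotion, agent, gesture' format."""
    time, emotion, agent, gesture = [field.strip() for field in line.split(",")]
    return GestureEvent(float(time), emotion, agent, gesture)

def build_conversation(lines):
    """Sort events by keyframe time so the gestures can be keyed in order per agent."""
    events = [parse_line(line) for line in lines if line.strip()]
    return sorted(events, key=lambda e: e.time)

# Example input in the assumed format:
sample = [
    "0.0, neutral, AgentA, wave",
    "2.5, angry,   AgentB, point",
    "5.0, sad,     AgentA, shrug",
]
for event in build_conversation(sample):
    print(event)
```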
We have started finding the beta-spline parameters for each emotion. We also added textures to the models.
This video shows the result of the angry_fail conversation, with emotions. The video is lagging a bit, but if you go to 0:45, for instance, you'll see that some of the gestures have been tweaked/sped up.
Omar created a GUI that streamlines the interaction between our two parts and makes it easy to infuse gestures with emotion. Clicking a 'Load GKP' button opens the scene file containing the textured agent model. The user is then prompted to select the .gkp file corresponding to the gesture they want to modify, which creates the set of beta-splines for that gesture. The GUI has three panes (a rough sketch follows the list below):
1)"all-joint controls" that can control the bias, tension, speed, and number of divisions of all the curves
2)the individual joint controls (bias, tension, speed)
3)a pane containing 3 buttons--'Play New Gesture', 'Write to File', and 'Load New GKP'
(*Omar added curve divisions as an interactive option because he noticed that, when the number of divisions was too high, some gestures became too fluid and lost the key motions that made them recognizable, defeating the intention of the original gesture. So a slider for divisions has been really helpful.)
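Omar's actual script isn't reproduced here, but a rough sketch of a three-pane window like this, written with Maya's Python API (maya.cmds), might look like the following. The slider ranges, default values, and the callback names (load_gkp, play_new_gesture, write_to_file) are placeholders rather than the real implementation, and the callbacks only print instead of re-keying the gesture.

```python
import maya.cmds as cmds

def load_gkp(*_):
    """Prompt for the .gkp gesture file; building the beta-splines from it is omitted here."""
    picked = cmds.fileDialog2(fileFilter="GKP Files (*.gkp)", fileMode=1)
    if picked:
        print("Selected gesture file:", picked[0])

def play_new_gesture(*_):
    """Placeholder: re-key the gesture from the current slider values and play it back."""
    print("Bias:", cmds.floatSliderGrp(bias_slider, query=True, value=True))

def write_to_file(*_):
    """Placeholder: write the tweaked curves out as a new, emotion-specific .gkp file."""
    print("Writing new .gkp file ...")

window = cmds.window(title="Gesture Emotion Tool")
cmds.columnLayout(adjustableColumn=True)

# Pane 1: all-joint controls (bias, tension, speed, divisions applied to every curve).
cmds.frameLayout(label="All-Joint Controls")
cmds.columnLayout(adjustableColumn=True)
bias_slider = cmds.floatSliderGrp(label="Bias", field=True, minValue=0.0, maxValue=5.0, value=1.0)
cmds.floatSliderGrp(label="Tension", field=True, minValue=0.0, maxValue=20.0, value=0.0)
cmds.floatSliderGrp(label="Speed", field=True, minValue=0.25, maxValue=4.0, value=1.0)
cmds.intSliderGrp(label="Divisions", field=True, minValue=1, maxValue=30, value=10)
cmds.setParent("..")
cmds.setParent("..")

# Pane 2 would repeat the bias/tension/speed sliders once per joint (omitted for brevity).

# Pane 3: the three action buttons.
cmds.frameLayout(label="Actions")
cmds.columnLayout(adjustableColumn=True)
cmds.button(label="Play New Gesture", command=play_new_gesture)
cmds.button(label="Write to File", command=write_to_file)
cmds.button(label="Load New GKP", command=load_gkp)
cmds.setParent("..")
cmds.setParent("..")

cmds.showWindow(window)
```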
The user can tweak the beta-splines and then hit the 'Play New Gesture' button to immediately key the new gesture and play it, which makes it easy to see how the curve manipulations affect the animation. Once the user finds a combination of parameters that fits the desired emotion, they can hit 'Write to File', select which emotion the new data represents, and the program writes it out to a new .gkp file. Here is a short video that illustrates how it works:
Omar was also able to determine the beta-spline and speed parameters that give certain gestures the look of being infused with an emotion (happy, angry, or sad). However, it is clear that there is no single set of parameters that produces, say, "angry" for every gesture. In fact, we can barely use the bias parameter at all, because it introduces errors that are hard to handle in our current framework: with bias applied, an emotional gesture no longer starts and ends in the same position as its neutral counterpart, which creates problems when stringing gestures together into a conversation.
Omar has found that speed is a very large factor in conveying the emotion: faster looks angrier, slower looks sadder. Happy is a difficult emotion to achieve using just the beta-spline parameters; an increase in speed seems to suggest happiness, but without facial animation, happy and angry become difficult to tell apart. Also, some gestures connote a certain emotion to begin with, so infusing them with an emotion of the opposite connotation is very difficult. Still, for many of the gestures currently in the database, halving the speed is enough to make the agent look sad, and doubling the speed is enough to make the agent look angry. Repeating the gesture several times in a row also increases the appearance of anger.
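As a rough illustration of that rule of thumb (and not Omar's actual parameter tables), the speed adjustment can be expressed as a simple retiming of a gesture's keyframes. The multipliers and repeat counts below just restate the halve-for-sad / double-for-angry / repeat-for-angrier observations; the happy entry is a guess, since speed alone doesn't distinguish it well.

```python
# Rule-of-thumb retiming: sad plays at half speed, angry at double speed and repeated.
EMOTION_TIMING = {
    "sad":   {"speed": 0.5, "repeats": 1},
    "angry": {"speed": 2.0, "repeats": 3},
    "happy": {"speed": 1.5, "repeats": 1},  # ambiguous without facial animation
}

def retime_gesture(keyframes, emotion):
    """Rescale keyframe times by the emotion's speed factor and optionally repeat the gesture.

    `keyframes` is a list of (time, value) pairs for one animation curve, starting at time 0.
    Assumes the gesture starts and ends in the same pose, so repeats can be chained directly.
    """
    timing = EMOTION_TIMING[emotion]
    scale = 1.0 / timing["speed"]          # faster playback means a shorter duration
    duration = keyframes[-1][0] * scale
    result = []
    for rep in range(timing["repeats"]):
        offset = rep * duration
        start = 1 if rep > 0 else 0        # skip the seam key so repeats don't duplicate it
        result.extend((offset + t * scale, v) for t, v in keyframes[start:])
    return result

# A 2-second neutral nod becomes a 1-second nod repeated three times when made "angry".
neutral_nod = [(0.0, 0.0), (1.0, 30.0), (2.0, 0.0)]
print(retime_gesture(neutral_nod, "angry"))
```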
Well, we've submitted our final code, and now we're done! (although it doesn't feel like it) We can always keep working to improve what we've built, but unfortunately we are out of time. Still, we have accomplished a great deal. And Nelskati has had a lot of fun. But, of course, we couldn't have done this without the help of Joe, Norm, Pengfei, Libo, and Amy. They have all been incredibly helpful and supportive.
Also, congrats to all our fellow classmates on a job well done! You all did amazing work.
Till we meet again.
With our undying love,
Nelskati