Road to Echo: Rigging Echo

Work In Progress / 28 September 2020

With Echo's textures and model ready and prepped, it was time to start the rigging phase to be able to bring Echo to life in the film. I learned how to rig over the course of a spring break my junior year of college, but every time I start to work on a new character I still learn something new.

If you're not a rigger or aren't familiar with that part of the pipeline, a character rig comes in two parts: the joints that act as the skeleton of the character, and the controllers that let the animator pose the character via the joints underneath. The first step I take whenever I start a rig is planning what I want the character to do.


As Echo is the main character, she'll be the focus of attention and the primary character on screen for nearly the entire film. She must be expressive and movable with the ability to tweak minor details for up close shots. Because she floats, however, her legs do not need a lot of attention beyond a simple setup that can be adjusted.


With this in mind, I began laying out all of the actual bones and taking time to adjust their placement. Thinking of the joints as an actual underlying skeleton is really helpful, as it lets me visualize where the body parts are actually pivoting from. Because Echo doesn't really have knees, though, and won't be walking much, placing the knee joints was more of an educated guess.

Parenting a proxy mesh to the joints also lets me see how the rig underneath starts to function before hitting the bind phase.


Because of how expressive I wanted Echo to be, I decided to go with a hybrid approach between joints and blend shapes, where the face is primarily driven by joints while blend shapes act more as correctives. This approach worked better for me because I have more experience with joint-based rigs than with creating blend shapes.

I placed joints in the eyebrows, eyelids, cheeks, upper mouth, lower mouth, and tongue. Adding joints to the eyelids, along with a pupil joint, let me manipulate the eye blink and keep it adjustable in the future, while also letting me make the pupil larger or smaller depending on the emotion.


The hair was going to be a tricky challenge because, as a rigger, I didn't want to make the rig too heavy, but as an animator I also wanted as much control as possible depending on the scenario. So I settled somewhere in the middle: rigging each strand of the hair and the base ponytail, and having all of the joint chains driven through IK chains. A gif of that can be found on my twitter. This was done before I realized I could also simulate the hair via physics in Unreal, but by already having this system in place, I have the option to use either physics, animatable hair, or a combination of the two. No lost work!
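Maya's IK handles do this solve internally, but as an illustration of what an IK chain buys you per strand, here is a minimal analytic two-bone IK solver in plain Python (the function names and 2D setup are my own, not anything from the actual rig): given two bone lengths and a target, it returns the joint angles that reach the target.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Analytic two-bone IK in 2D: given bone lengths l1, l2 and a
    target (tx, ty), return (root, bend) angles in radians."""
    # Clamp the target distance to the reachable range so the chain
    # just stretches toward unreachable targets instead of failing.
    d = math.hypot(tx, ty)
    d = max(abs(l1 - l2), min(l1 + l2, d))
    # Law of cosines gives the bend at the middle joint...
    cos_bend = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    bend = math.acos(max(-1.0, min(1.0, cos_bend)))
    # ...and the root aims at the target, offset by the angle the
    # bent second bone introduces.
    cos_off = (d * d + l1 * l1 - l2 * l2) / (2 * l1 * d)
    root = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_off)))
    return root, bend

def forward(l1, l2, root, bend):
    """End-effector position for the given angles (sanity check)."""
    ex = l1 * math.cos(root) + l2 * math.cos(root + bend)
    ey = l1 * math.sin(root) + l2 * math.sin(root + bend)
    return ex, ey
```

The animator only moves the target; the solver works out every joint angle along the chain, which is exactly why IK keeps a many-strand hair rig manageable.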


In my opinion, the most time-consuming part of rigging is binding the skin and painting weights. It's also one of the most important parts of the rig altogether, so it makes sense that it takes the longest. I had the model broken up into different pieces, which I found helps me separate out the weight blending at first. This was especially helpful when it came to the hair, keeping all of the hair joints separate from the body so that one didn't accidentally influence the other. My workflow for skinning typically starts with the body and major areas (arms, torso, etc.) before locking all of those and focusing my attention on the face.
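Under the hood, those painted weights feed linear blend skinning: every vertex follows a weighted mix of its influencing joints, with the weights per vertex summing to one. A tiny sketch of that math (my own simplified version, not Maya's actual skinCluster):

```python
def skin_vertex(rest_pos, influences):
    """Linear blend skinning for one vertex: blend each joint's
    transformed position by its painted weight. `influences` is a
    list of (weight, transform) pairs, where transform maps a
    rest-space point to its posed position; weights should sum to 1."""
    x = y = z = 0.0
    for weight, transform in influences:
        px, py, pz = transform(rest_pos)
        x += weight * px
        y += weight * py
        z += weight * pz
    return (x, y, z)

# A vertex weighted 50/50 between a still joint and a joint that
# moved one unit in X travels half a unit -- which is why a stray
# hair-joint weight on the body drags body vertices around.
identity = lambda p: p
shift_x = lambda p: (p[0] + 1.0, p[1], p[2])
halfway = skin_vertex((0.0, 1.0, 0.0), [(0.5, identity), (0.5, shift_x)])
```

Painting weights is really just editing that per-vertex weight list, which is why an accidental influence from the wrong joint chain shows up as mesh pieces smearing toward it.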

All of this posing and testing happened before the corrective blend shapes took effect; the goal here was to get the joint deformations as even as possible. It wasn't so much about making every face "camera-ready" as about making every face smooth and consistent. This part was a lot of fun to test, though.

Once I got the initial joint influences where I wanted them, I added blend shapes to drive the faces further. For example, if I wanted the corner of the mouth to move up, the joint would move it up and the blend shape would create the crease of the cheek to drive home the motion.

Some of the blend shapes were more subtle than others, but they all improved how the model deformed. In total, there were over 30 corrective blend shape targets and combination targets (in Unreal, these are called Morph Targets). I hooked all of the shapes up to my main facial animation controllers so that they trigger as I move the corresponding controller.
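The corrective setup boils down to simple vertex math: each target stores per-vertex deltas off the base mesh, and the controller drives a 0-to-1 weight that scales those deltas on top of the skinned result. A rough sketch (my own illustration; Maya's blendShape node and Unreal's morph targets do the equivalent):

```python
def apply_correctives(base, deltas, weights):
    """Add weighted corrective deltas on top of the (already
    skinned) base mesh. `base` and each delta are lists of (x, y, z)
    tuples; `weights[name]` in [0, 1] comes from the driving
    controller."""
    out = [list(v) for v in base]
    for name, delta in deltas.items():
        w = weights.get(name, 0.0)  # untriggered shapes contribute nothing
        for vert, d in zip(out, delta):
            vert[0] += w * d[0]
            vert[1] += w * d[1]
            vert[2] += w * d[2]
    return [tuple(v) for v in out]
```

Wiring the weight to the mouth-corner controller means the crease fades in exactly as the joint moves, rather than needing its own slider.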

The final facial rig is made up of 28 different controls with the jaw and eyes being on the actual model as opposed to on the side. 


Now that my model was skinned, I was also able to hook up fleshy eyes on the face, which lets the eyelids move subtly with the eyes for more realistic eye movement. A gif of that can be found on my twitter. The way this is set up is due in part to how the joints are laid out in the face. I skinned individual eyelid joints that are connected to a main upper and lower eyelid joint. That not only let me create a set driven key on the top and bottom eyelids for blinking, but also let me include corrective controllers should the animator want a little more animation in the eyelids. I tried a similar approach on the mouth, but to really do it right I would have to redo some of my blend shapes, so I elected to skip the mouth for now.
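A set driven key is just a keyed mapping from one attribute to another, interpolated in between, and the fleshy-eye behavior layers a fraction of the eye rotation on top of it. Here's a toy stand-in in plain Python (linear interpolation where Maya would use key tangents; the angles and the 0.3 follow factor are made-up illustration values):

```python
from bisect import bisect_left

def set_driven_key(keys):
    """Tiny stand-in for a set driven key: `keys` maps driver values
    to driven values; between keys, interpolate linearly."""
    points = sorted(keys.items())
    xs = [p[0] for p in points]
    def evaluate(driver):
        if driver <= xs[0]:
            return points[0][1]
        if driver >= xs[-1]:
            return points[-1][1]
        i = bisect_left(xs, driver)
        (x0, y0), (x1, y1) = points[i - 1], points[i]
        t = (driver - x0) / (x1 - x0)
        return y0 + t * (y1 - y0)
    return evaluate

# A blink attribute from 0 (open) to 1 (closed) drives the upper-lid
# joint's rotation; the fleshy-eye follow then adds a fraction of the
# eye's own pitch so the lid tracks the eyeball.
upper_lid = set_driven_key({0.0: 0.0, 1.0: -30.0})

def lid_rotation(blink, eye_pitch, follow=0.3):
    return upper_lid(blink) + follow * eye_pitch
```

Because the blink and the follow are separate terms, the animator's corrective controllers can still offset the lid on top without fighting either system.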


Once I was comfortable with how the rig was looking and how all of the deformations played with each other, I decided to do an animated face gym test and render it out in Unreal. The final, larger version can be found on my twitter and also below:


I learned a ton while working on this rig, especially when it comes to facial rigging and animation. After actually animating with the rig, I realized I would like a separate slider to control the center of the mouth, and I'd like to mess more with the eyes themselves, since the geometry isn't as smooth in that area as I would have liked. Doing the test in Unreal also shed some light on using the Sequencer and playing with animation files, physics assets, and the skeletal mesh. I also needed to redo the arms' FK system: it was during the animation stage that I realized the arms hit gimbal lock whenever I tried to move the forearm up and down once it had been moved inward. Additionally, I added world-space parent constraint switches for the eyes and head, which let me change what those pieces of the model follow. They came in especially handy whenever I wanted to rotate the world controller but keep the eyes in the same place, for example.
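That gimbal lock falls straight out of the Euler rotation math: with an XYZ rotate order, once the middle (Y) axis hits 90 degrees, the X and Z axes line up and only their difference matters, so one degree of freedom vanishes. A quick pure-Python check of that (an illustration of the math, not the rig's actual setup):

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_xyz(rx, ry, rz):
    """XYZ rotate order (rotate about X first, then Y, then Z),
    angles in degrees."""
    r = math.radians
    return matmul(rot_z(r(rz)), matmul(rot_y(r(ry)), rot_x(r(rx))))

# With Y pinned at 90, different X/Z pairs with the same difference
# produce the exact same rotation -- X and Z have collapsed into one
# axis, which is the stuck forearm behavior.
locked_a = euler_xyz(10.0, 90.0, 20.0)
locked_b = euler_xyz(30.0, 90.0, 40.0)
```

The usual fixes are picking a rotate order that keeps the heavily used axis out of the middle slot, or rebuilding the FK controls so no single control has to sweep through that 90-degree pose.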