Continuing our excerpts from the Inspired 3D series, Tom Capizzi presents an in-depth character construction tutorial.
This excerpt is the next in a number of adaptations from the new Inspired 3D series published by Premier Press. Comprising four titles and edited by Kyle Clark and Michael Ford, these books are designed to provide animators and curious moviegoers with tips and tricks from Hollywood veterans. The following is excerpted from Modeling & Texture Mapping.
Facial Animation and Blend Shapes
In an animated production, a character's face can be animated in two basic ways. One way is to have animation setup control the face using various rigging techniques, which requires expertise on the part of the setup technical directors. In a large production facility, economies of scale can make difficult jobs like this commonplace: many characters have already been set up and can be taken apart and reused. In a small production, the process of creating facial controls can be time consuming, especially because there are no similar characters whose controls can be recycled for the new character.
On this production, despite my best efforts to avoid it, the decision was made early on that the facial animation would be controlled using blend shapes. Blend shapes are 3D morph targets that have exactly the same topology as the face they are controlling.
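Conceptually, a blend shape is a per-vertex linear interpolation between the base mesh and a target that shares its topology. The sketch below is purely illustrative of that idea; the function and variable names are my own, not from Maya or the production described here.

```python
# Illustrative sketch of how a blend shape deforms a mesh: each target with
# identical topology stores per-vertex deltas from the base, scaled by an
# animator-driven weight. Names here are hypothetical, not a real API.

def apply_blend_shapes(base, targets, weights):
    """base: list of (x, y, z) vertices; targets: vertex lists with the
    same topology as base; weights: one 0..1 value per target."""
    result = []
    for i, (bx, by, bz) in enumerate(base):
        x, y, z = bx, by, bz
        for target, w in zip(targets, weights):
            tx, ty, tz = target[i]
            # Add the weighted delta between the target and base vertex.
            x += w * (tx - bx)
            y += w * (ty - by)
            z += w * (tz - bz)
        result.append((x, y, z))
    return result

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]  # target: lips pulled up
print(apply_blend_shapes(base, [smile], [0.5]))
# halfway toward the target: [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]
```

Because the deformation is a sum of weighted deltas, several shapes can be dialed in at once, which is what makes local, region-by-region targets mixable.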
Full Face Shapes vs. Local Face Shapes
In the creation of face shapes for an animated character, there are two basic schools of thought regarding the way the face should be animated using blend shapes. One method is to use the entire face as a specific target. If the character is going to frown, then the entire face is sculpted into a frown shape: the eyebrows are sculpted into a furrowed appearance, and the entire mouth is sculpted into a real scowl. The other method is to build local shapes, each affecting only one region of the face, which the animator then mixes together to create the full expression.
Shapes and Phonemes
The decision of which shapes to build came from two primary sources. First, Rick Grandy, the technical editor for this book, came up with a preliminary list, and then the animator, Kyle Clark, added the shapes he needed to get the project done. Overall, 66 targets were built for this animation, and more will probably need to be built as more animation is done. Some models used for production have more than 200 blend shapes. The face shapes were broken down by region:
Eyebrows, left, center and right. The shapes created for this region allowed the eyebrows to animate up and down, slide inward toward the center or outward away from it, and bow upward and downward in the middle. Additional shapes break each brow into even smaller regions: left, right and center for the left brow, and the same for the right brow.
Eyelids, left and right. The shapes made for the eyelids allowed for the eye to close, by pulling the upper lid down, and allowed the eye to squint, by pulling the upper and lower lids to meet in the center.
Face (for broad shapes), left and right. The face groups had shapes that animated the cheeks up and down, moved the cheeks in and out, and puffed the cheeks out and sucked them in. There was also some cheek deformation on broad mouth shapes, such as the dread, sneer, smirk and grin.
Mouth (the largest group), left, center and right. These shapes created simple as well as complex mouth movement. The simple movements include moving each lip up and down, curling it in and out, moving each mouth corner up, down, inward and outward to the side, and shapes that smooth out the corners of the mouth.
The complex shapes required the modeling of the frown, smile, furrowing, puckering, pouting, yawning and kissing.
Overall face shapes, localized by region. These shapes simply used large areas of the face to accomplish a single task. This kind of approach is preferable when there is a specific target that the animator may want to hit with a single blend shape.
These shapes include mouth smirk, mouth sneer, mouth dread, mouth wince, eye furrow, eye squint and mouth smile.
The mouth regions were extended to include the cheeks, and the eye regions were extended to include the forehead and eyebrows.
Phonemes are face shapes directly related to speech. Different theories exist as to which phonemes are required for animation of speech. Thirteen accepted shapes are recognized as visemes, which are used in the creation of English speech. These shapes are as follows:
Closed mouth: P in pie, B in book, M in mother.
Pursed lips: W in wicked, OO in root.
Rounded lips, corners of the mouth slightly puckered: R at the beginning of a word, OO in book.
Lower lip drawn to upper teeth: V in victory, F in French.
Tongue between teeth with gaps on the side of the tongue: TH in think.
Tongue behind teeth with gaps on each side of tongue: L in look.
Relaxed mouth, mostly closed teeth, tongue visible behind the teeth: D in dog, T in tag, Z in zebra, S in sit, R in car, N in nothing.
Slightly open mouth, mostly closed teeth, corners of the lips slightly tightened: CH in chime, J in jive, SH in shy, SI in vision.
Slightly open mouth, mostly closed teeth: Y in yawn, G in get, K in kitchen.
Wide mouth, slightly open lips: EA in meat, I in rip.
Neutral mouth, teeth slightly parted, jaw dropped slightly: E in bet, U in but, AI in bait.
Round lips, jaw dropped slightly: OA in toad, O in rope.
Open mouth, jaw dropped: A in math, O in shop.
For this model, the list of phonemes was reduced to eight basic shapes:
Wide, slightly open lips: E in evening.
Round lips, jaw slightly open: O in oh, O in toast.
Round lips, corners of the mouth puckered: OO in book.
Closed mouth: P in pie, B in book.
Lower lip drawn to upper teeth: F in fine, V in vase.
Lips pursed: W in work.
Mouth open, tongue visible from inside mouth: T in tank, D in dog.
Relaxed mouth, mostly closed teeth, tongue visible behind the teeth: S in sit.
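A reduced phoneme set like the one above amounts to a many-to-one lookup from speech sounds to mouth shapes. The table below sketches that idea; the phoneme keys and shape names are my own illustrative labels, not the production's actual naming.

```python
# Hypothetical lookup from phoneme to the eight reduced mouth shapes.
# Keys and shape names are illustrative; a real pipeline would take its
# phoneme set from whatever speech-analysis tool it uses.
PHONEME_TO_SHAPE = {
    "EE": "wide",          # E in evening
    "OH": "round_open",    # O in oh, O in toast
    "OO": "round_pucker",  # OO in book
    "P": "closed", "B": "closed",              # closed mouth
    "F": "lip_to_teeth", "V": "lip_to_teeth",  # lower lip to upper teeth
    "W": "pursed",
    "T": "tongue_open", "D": "tongue_open",    # tongue visible, mouth open
    "S": "teeth_closed",
}

def shapes_for(phonemes):
    # Unknown phonemes fall back to a neutral mouth shape.
    return [PHONEME_TO_SHAPE.get(p, "neutral") for p in phonemes]

print(shapes_for(["P", "OO", "K"]))  # ['closed', 'round_pucker', 'neutral']
```

Collapsing thirteen visemes to eight works because several sounds share nearly identical lip positions; the lookup simply encodes that grouping.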
Wire Deformer Rig for Face Shape Creation
To speed up the creation of blend shapes, I built a wire deformer rig. The wire deformer makes the creation of expressions very quick. By manipulating the points on the curves, I was able to move the surface of the skin in a very elastic, natural way.
Another thing working in my favor, in a big way, was that the model being manipulated was a low-resolution cage. This version of the model was very fast to edit, and the smoothed results always looked better than if the model had been edited at high resolution.
During the process of modeling blend shapes, the animation rig that had the jaw rotation skeleton was used to ensure that the rotation used for the blend shape jaw matched the rotation used by the jaw on the actual animation rig.
Cleanup and Testing
The modeler needs to test and clean up blend shapes after making them. Testing blend shapes is a critical part of the modeling process. Many things can go wrong during their creation. Any time the model is exported from Maya in another format (like .obj), the export can scramble the order of the polygons in the model, and anything that affects polygon ordering will create many problems.
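One sanity check a modeler can script is to confirm that every blend target still shares the base mesh's vertex count and face connectivity before wiring it in; a round-trip export that reorders polygons will fail the connectivity comparison. This is a generic sketch under my own naming, not a Maya command.

```python
def targets_match_base(base_faces, target_faces, base_vcount, target_vcount):
    """Return True if a blend target still shares the base mesh's topology.
    Faces are lists of vertex-index tuples; a scrambled export changes the
    face list even when the vertex count is unchanged."""
    if base_vcount != target_vcount:
        return False
    return base_faces == target_faces

# A quad split into two triangles.
base = [(0, 1, 2), (0, 2, 3)]
good = [(0, 1, 2), (0, 2, 3)]        # same connectivity: safe to blend
scrambled = [(0, 2, 3), (0, 1, 2)]   # reordered by a round-trip export
print(targets_match_base(base, good, 4, 4))       # True
print(targets_match_base(base, scrambled, 4, 4))  # False
```

Catching a scrambled target this way is far cheaper than discovering it later as an exploding face during animation.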
When testing the model, the modeler should be looking for technical problems as well as aesthetic problems. The technical problems will become evident quickly and require no additional discussion.
The aesthetic problems include the following:
Does the shape look natural? Does it look like a shape that would normally occur on the face?
Does the shape cause undesirable stretching and twisting? Most expressions on a real face do not cause too much stretching of the skin, but on a cartoon character, this is not the case. In extreme poses, there will be some stretching that needs to be dealt with, so the modeler needs to determine whether the stretching is acceptable or not.
Are the polygons distributed as evenly as possible for the blend shape? Uneven distribution of the polygonal topology will cause the geometry and textures to deform unnaturally. The skin in a character is an elastic sheet that covers the bones and muscles, so the modeler has to determine if that sheet is getting stretched too much in one place.
Test the final blend shapes with the hair, eyes and teeth in place. Are there any intersections of the skin surface with the hair, eyes and teeth?
Applying UV Coordinates
In order to get the character rendered, the modeler needs to apply UV coordinates to the character. The process of editing UVs has a fairly straightforward goal: Will the texture artist be able to paint textures on this character that will not twist or deform unnaturally?
There are many methods for applying UVs. For this section, the basic application types will not be discussed. In order to texture this model, there were two primary methods employed in the application of UV coordinates. One method was used solely on the head, and the other method was used on the rest of the character.
When unfolding UVs on a model, several things need to be accomplished:
The spaces between the UV coordinates should have roughly the same proportions as the polygons the UVs are associated with. If the polygons in the eyelid area of a character are tightly packed compared to the polygons on the side of the head, then the UV spaces in the eyelids should be packed tightly as well, not spread out. Uniform application of texture coordinates depends on a uniform distribution of points relative to the original polygonal model.
UV coordinates have a tendency to get tangled up. The mesh of UV coordinates should be laid out on the final model so that it is clear no UVs are tangled.
UV coordinates should not overlap with other UV coordinates. When an orthographic texture-mapping scheme is used, the UVs on the front of a model will overlap with the UVs on the back of the model. If the texture that was being applied were a bullet hole that shot directly through the object, this would be fine. Otherwise, organic models should not have UV coordinates that overlap. Overlapping UVs duplicate texture in two or more areas of the model. A common place that this occurs is the ear. The front of the ear will get a map that shows the detail of the ear, but the detail of the ear will often appear behind the ear as well if the coordinates are not taken care of.
UV coordinates should fall within the UV space ranging from 0 to 1. Many texture-mapping programs allow texture space to fall well outside these coordinates, and if the texture mapper knows what he or she is doing, this rule can be broken to increase efficiency. However, because paint programs paint maps that fall within the actual map and not outside the map's own parameterization, keeping the parameterization within 0 to 1 will ensure that the map being painted fits correctly.
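The 0-to-1 rule can be enforced with a small normalization pass that fits a UV set back into the unit square. The sketch below scales both axes by the same factor, which is my own choice here: uniform scaling keeps the layout's proportions and so avoids introducing new stretching.

```python
def fit_uvs_to_unit_square(uvs):
    """Translate and uniformly scale (u, v) pairs so they fall in [0, 1]
    on both axes. Uniform scaling preserves the layout's proportions."""
    us = [u for u, _ in uvs]
    vs = [v for _, v in uvs]
    min_u, min_v = min(us), min(vs)
    # Scale by the larger extent so neither axis overshoots 1.
    extent = max(max(us) - min_u, max(vs) - min_v)
    if extent == 0:
        return [(0.0, 0.0) for _ in uvs]
    return [((u - min_u) / extent, (v - min_v) / extent) for u, v in uvs]

print(fit_uvs_to_unit_square([(-1.0, 0.0), (3.0, 2.0)]))
# [(0.0, 0.0), (1.0, 0.5)]
```

A shell that spans a 4-by-2 region, as above, ends up occupying a 1-by-0.5 region of the unit square rather than being distorted to fill it.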
[Figures 13.49 & 13.50] The colored grid map used for unwrapping the UVs on an organic shape (above). The same map applied to the model (right).
UV Editing Using Texture Maps
The best way to check whether the UVs are working correctly is to preview the UV work using a texture map on the model. Different maps accomplish different things while untangling the UVs on a production model. These maps can be used in progression to work out the UV mapping issues one step at a time. The most common test maps, shown in progression, are:

1. A color and number grid. These maps are useful in the initial unwrapping of UVs on a model. Several maps of this type are commonly available. The map shown in Figure 13.49 is one I made in Photoshop in about 15 minutes. Maps such as these are designed so that no numeral (or letter, depending on the map) falls in the same colored square twice throughout the map. This map has 10 numerals used 10 times each. The placement of the numerals in rows helps establish orientation while viewing the map on the model. The colors are more random but at the same time somewhat organized diagonally, also helping to establish orientation.
These maps are useful during the initial unwrapping stage because the unique pattern helps establish which areas are overlapping, are being repeated, or are twisted. Because each numeral or color combination only appears once, checking for repeating numeral or color blocks can help eliminate overlapping and tiling.
[Figures 13.51-13.53] Easy-to-read black-and-white maps used for unwrapping the UVs on an organic shape (above left, center). The grid map applied to the model (right).
2. Checkerboards or grids. These maps are useful in getting the excessive twisting and deformation out of the UV set in a model. The colored grid can be distracting to view and can hide many problems associated with UV work. By using a less distracting pattern that has evenly spaced partitions, the modeler can check for twisting and stretching much more easily.
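A checkerboard test map of this kind is easy to generate procedurally rather than painting it by hand. The sketch below writes a plain binary PPM so it needs no imaging library; the resolution and square count are arbitrary choices of mine.

```python
def checkerboard_ppm(path, size=256, squares=8):
    """Write a black-and-white checkerboard as a binary PPM test texture."""
    cell = size // squares
    rows = []
    for y in range(size):
        row = bytearray()
        for x in range(size):
            # Alternate black/white squares based on cell parity.
            white = ((x // cell) + (y // cell)) % 2 == 0
            row += b"\xff\xff\xff" if white else b"\x00\x00\x00"
        rows.append(bytes(row))
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (size, size))
        f.write(b"".join(rows))

checkerboard_ppm("checker.ppm")
```

Because the partitions are perfectly even, any bending or pinching of the squares on the model points directly at twisted or stretched UVs.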
[Figures 13.54 & 13.55] A typical noise map (above). The noise map applied to the model (right).
3. Organic noise patterns. The noise map is the acid test for fine-tuning the UVs so there is no stretching. By the time the noise map is applied, there should be no overlapping UVs or twisted UVs; the noise map will not be able to help identify those problems. The noise map does one thing, but it does it well. The noise map shows stretching.
In the example, the noise map shown in Figure 13.54 is applied to the model; the map was generated in Photoshop using the Texturizer filter in about a minute. In previous tests, the mapping seemed fine, and the stretching was minimal. Once the noise map is applied, the stretching in the ear and across the nose is easily apparent (Figure 13.55). The stretching inside the mouth is really out of control, but that area will not be seen, so it normally does not have to be as accurately mapped and modeled as the rest of the head.
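The stretching a noise map reveals visually can also be measured numerically: compare each triangle's area in UV space to its area on the mesh, and flag faces whose ratio deviates far from the average. A rough sketch, with names of my own choosing:

```python
import math

def tri_area_3d(a, b, c):
    # Half the magnitude of the cross product of two edge vectors.
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def tri_area_uv(a, b, c):
    # Half the absolute 2D cross product of two edge vectors.
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (c[0] - a[0]) * (b[1] - a[1]))

def stretch_ratio(tri3d, triuv):
    """UV area over surface area. Ratios far below the mesh average mean
    the texture is stretched on that face; far above means compressed."""
    return tri_area_uv(*triuv) / tri_area_3d(*tri3d)

# A unit right triangle mapped to a UV triangle squashed to half width.
tri3d = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
triuv = [(0, 0), (0.5, 0), (0, 1)]
print(stretch_ratio(tri3d, triuv))  # 0.5: too little UV area along U
```

A quick report of the worst-ratio faces gives the modeler a checklist of exactly where to relax or tighten the UVs before the final noise-map test.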
It is helpful to view these textures in real time, so that rendering is used only for a final test. Real-time texture viewing speeds up the UV editing process dramatically. Unfortunately, procedural textures do not update in real time; they can only be viewed using a software render. Most graphics cards are optimized to accept bitmapped, file-based textures, so a file containing a grid or checkerboard can be assigned to the model, and the file-based texture will update in real time if the modeler is on a machine with a decent graphics card.