Rhythm & Hues' Tom Capizzi presents an in-depth tutorial on organic texture mapping.
The first of two articles about Organic Texture Mapping, this excerpt is the next in a number of adaptations from the new Inspired 3D series published by Premier Press. Comprising four titles and edited by Kyle Clark and Michael Ford, these books are designed to provide animators and curious moviegoers with tips and tricks from Hollywood veterans. The following is excerpted from Modeling & Texture Mapping.
In this chapter, I explain the process of applying organic textures to models, using a model of a bird. The techniques described are those of Lopsie Schwartz, a texture painter who has worked on several films, including Dr. Dolittle 2 (2001) and The Lord of the Rings: The Two Towers (2002), and who created the textures on the owls for Harry Potter and the Sorcerer's Stone (2001).
The most common question asked by people trying to learn the process is, "How do you attack a project?" Although there is no one right way, this tutorial presents detailed steps on how textures are created and applied in a production environment, in which the model is usually already designed and modeled for you. In addition, the animation setup technical directors and the animators may be working concurrently on the same model as you paint the textures. As a result, you often do not have the luxury of being able to change the model to fit your needs; you must adapt to the model. This tutorial walks you through the same steps taken when texturing the digital owls for Harry Potter and the Sorcerer's Stone, using a model of an owl and aiming for a photo-real style of textures. Usually, the model is built before any texture mapping is done. The model shown in Figure 1 is a different model than the one used in the actual film, but it is similar enough to be used in this tutorial.
For the Harry Potter movie, the textures were created using Rhythm & Hues proprietary software and Alias Studio Paint, but for this tutorial, Maya, Photoshop and Deep Paint will be used.
The key to making something seem photo-real is to get photographic reference material. For this project, the first step is to determine what type of owl this is supposed to be. For the purposes of this tutorial, the model shown in Figure 1 has the textures of a barn owl.
First, check the UVs on the object. Sometimes the modeler will apply a checkerboard pattern to get approval of the UVs on a model. Using a simple checkerboard pattern may not provide enough information about the UVs.
The map illustrated in Figure 2 provides more information about the UVs than a simple checkerboard, and using the illustration as a guide, the map can be replicated. A map like this is useful in many different situations and should be kept handy for regular use; it helps you identify problems quickly during the approval process. Once the UVs have been checked, the modeler can be informed of the changes that need to be made to the UVs, or the texture-mapping artist can make the changes.
This modified checkerboard pattern gives much more information than simple black-and-white checkers. It provides a quick visual cue on the UVs and can easily solve 80 percent of your texturing problems by catching them before it is too late. Things to check when inspecting UVs with this technique include strange non-square patterns, overall map placement, the number of pixels allocated to high-detail areas, and how much of the map is actually being used.
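A modified checker map of this kind can also be generated procedurally. The sketch below is a minimal pure-Python example, not the map used in the book: the cell size and color ramps are arbitrary illustrative choices. Each cell gets a distinct color that ramps across U and V, so flipped, stretched or overlapping UVs are easier to spot than with plain black-and-white squares. The result is written as a binary PPM, which most paint packages can open.

```python
def make_uv_check_map(width=512, height=512, cell=64):
    """Build a checker map whose cells vary in color across U and V.

    Returns a flat, row-major list of (r, g, b) tuples. The red and
    green ramps make orientation and placement visible; the
    alternating blue values keep the checker pattern.
    """
    pixels = []
    for y in range(height):
        for x in range(width):
            cx, cy = x // cell, y // cell
            # Ramp red with U and green with V so direction is visible.
            r = int(255 * cx * cell / width)
            g = int(255 * cy * cell / height)
            # Alternate light/dark cells for the checker pattern.
            b = 255 if (cx + cy) % 2 == 0 else 40
            pixels.append((r, g, b))
    return pixels

def write_ppm(path, pixels, width, height):
    """Save pixels as a binary PPM, readable by most paint tools."""
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        f.write(bytes(v for p in pixels for v in p))

pixels = make_uv_check_map()
write_ppm("uv_check.ppm", pixels, 512, 512)
```

Applied to the model as a texture, a map like this makes non-square cells, flipped shells and wasted UV space immediately visible.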
By checking the model with this texture map, the seam where the two symmetrical halves of the model were joined becomes visible. This seam is nothing to worry about. Something that would not be apparent with the black-and-white checkerboard is also visible: there are different groupings of feathers, and they have different UV coordinates. This condition makes it impossible to create a single texture map that can be used on all the feathers.
After examining the model with the map applied to it, the UVs on the model can now be approved, and the actual texture mapping can begin.
The first things needed to begin texture mapping are a few high-resolution rendered images of the bird. The images need to be higher resolution than the maps that will be created later. For the purposes of this tutorial, the rendered images are 2K, or 2048 x 1536 pixels.
A side view, a top view and a bottom view need to be rendered, and these images should include the full view of the bird (Figures 6-8). The front view is required to texture map the face, so the face should be larger in the frame for this image (Figure 9). These images will be used as templates for placing photographic images.
Creating the Look with Scans
The next step in the process is to acquire photographic reference and begin scanning it to create texture maps. At this point many novices will ask, "What is the challenge in that? Anyone can scan and drop." Well, yes and no. Anyone can do it, but very few do it well. In film and television, you are commonly creating something that has to match a live or existing subject. At the very least, you are matching a background plate, and starting with photos or scans is more efficient. The more photographic reference you can get, the more realistic the finished product will be.
Anything scanned from print material, such as books and magazines, will need to be de-screened; de-screening is a setting available in most scanning software packages. Also, any copyrighted material must be used in accordance with the laws governing it.
In production, images such as photos taken on the set, conceptual paintings and inspirational photos will often be supplied for texture reference. It may be necessary to take additional photographs, so the texture mapper should be ready and able to take reference photographs when needed. It is part of the job, and original photography is cleaner in every way than print and Internet material.
Figures 10-13 are examples of photographs taken explicitly for production. Pauline Tso, the visual effects art director, took these photographs. These images show the way owls look in the real world; many subtleties about the way an object behaves can only be determined from photographs.
It is an added bonus when the photographer is also working on the project: there is information about the way something looks that can only come from having seen the owl in person.
Once the reference images have been gathered, they need to be color corrected to be in the same color space. Use specific color information sampled from the owl itself to match the color values in the other images. One way to accomplish this is to use the color eyedropper tool in Photoshop to sample color values of a specific part of the bird, and match those values for the same part of the bird in another image.
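The eyedropper matching described above amounts to simple per-channel arithmetic: sample the same feature in both images, then shift one image by the difference of the averages. The sketch below is a simplified pure-Python model of that step; the patch values are made up for illustration, and real color correction in Photoshop would usually involve more than a flat offset.

```python
def average_color(pixels):
    """Mean RGB of a list of (r, g, b) tuples - roughly what the
    eyedropper reports when sampling a small patch."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def match_to_reference(image, sample, reference_sample):
    """Shift every pixel of `image` so that the sampled patch's
    average color matches the average color of the same feature
    sampled in the reference photo."""
    src = average_color(sample)
    ref = average_color(reference_sample)
    offset = tuple(ref[c] - src[c] for c in range(3))
    return [
        tuple(min(255, max(0, round(p[c] + offset[c]))) for c in range(3))
        for p in image
    ]

# Hypothetical patches sampled from the same part of the bird in two
# photographs shot under different lighting.
patch_a = [(200, 180, 150), (210, 190, 160)]   # too warm and bright
patch_b = [(180, 170, 150), (190, 180, 160)]   # reference values
corrected = match_to_reference(patch_a, patch_a, patch_b)
```

After the shift, the sampled feature reads the same in both images, which is the goal of bringing all the reference into a common color space before cutting it up.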
When gathering photographic reference, it will be nearly impossible to get a photograph of the bird in the exact pose that matches the rendered images created in Figures 6-9. That is part of the challenge. Advanced image-manipulation skills need to be applied to turn the scans into more useful source images.
The next series of steps takes the images created in Figures 6-9 and adapts scanned images to fit these rendered images. The intent is to create ideal texture images that match the renders from scans that do not. The eventual goal is to place the images created in the steps that follow directly on the 3D model; without ideal texture images, the details of the image and the model will not line up.
Throughout the following steps, several images will be created for 3D projection in a 3D paint package. The face of the owl will be the first texture image that will be created using this process.
1. Open the paint program of your choice. This tutorial will use Photoshop. Bring in the render of the face (refer to Figure 9), and bring in a front face picture of a barn owl. As you can see in Figure 14, the proportions are completely different.
2. Next, get the outer face ring to fit the model. Transform or deform the image until it fits the outer size of the head (Figure 15).
3. Notice that the eyes do not fit at all. The first step in correcting this is copying the region of an eye and pasting it to a different layer (Figure 16).
4. By reducing the opacity of both the new layer and the head, the shape and size of the model's eye can be seen below. Next, the eye is resized to fit, using the same transformation method used for the head (Figure 17).
5. The head and eye layers are merged together after transforming the eye and increasing the opacity of both layers back to 100% (Figure 18).
6. The same copying and transforming steps are repeated for the other eye. After merging the second eye onto the face, any visible pasting artifacts should be blended away. The clone tool and smudge tool, used in combination, work well for blending these artifacts (Figure 19).
7. Save this work as a single flattened image. This process has generated an image that can be placed directly on the model in a 3D paint program.
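The reduced-opacity layering used in steps 4 and 5 is ordinary alpha compositing. The sketch below models the arithmetic a normal-mode layer performs at partial opacity; it is a simplified single-pixel illustration, not Photoshop's actual internals, and the pixel values are invented.

```python
def blend_normal(top, bottom, opacity):
    """Composite one RGB pixel over another at the given opacity
    (0.0 = fully transparent, 1.0 = fully opaque), the way a
    normal-mode layer behaves in a paint program."""
    return tuple(
        round(top[c] * opacity + bottom[c] * (1.0 - opacity))
        for c in range(3)
    )

# At 50% opacity, the model's eye shows through the pasted eye layer,
# which is what makes the resize-to-fit step possible.
pasted_eye = (220, 200, 60)
model_render = (40, 40, 40)
preview = blend_normal(pasted_eye, model_render, 0.5)   # (130, 120, 50)
```

Restoring the layers to 100% opacity before merging simply makes the top layer fully opaque again, so no trace of the underlying render remains in the flattened image.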
The Top View
The top of the bird will be textured now. Beginning with the top view of the owl produced earlier (Figure 20), scanned images of bird feathers will be positioned on the wings of the owl until the entire topside of the bird is mapped.
1. A barn owl has a brown and gray mottled look on its back and the topsides of its wings. Using the top view as a template, clone the feather scans at 100% opacity to shape a wing. Once this is done, duplicate the wing texture image and flip it horizontally (Figure 23).
2. Position the flipped wing on the other side to form the other wing. Alternatively, you could make one wing and apply the same texture to the other wing later (Figure 24).
If this owl will receive any close attention, however, the textures should not be too symmetrical. Part of getting a photo-realistic look is to not have anything too perfect. The model is already perfectly symmetrical, so it will be necessary to add some variation to the final image by adjusting the textures.
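One cheap way to break up a mirrored wing is to perturb the copy slightly so the two sides are no longer pixel-identical. The sketch below shows the idea on a single row of gray values; the jitter amount and the use of per-pixel noise are illustrative choices, not a production recipe, and in practice the variation would come from hand-painting and re-cloning as much as from noise.

```python
import random

def mirror_with_variation(row, jitter=10, seed=1):
    """Flip a row of 8-bit gray values horizontally to make the
    second wing, then add small random brightness variation so the
    mirrored copy is not pixel-identical to the original."""
    rng = random.Random(seed)          # fixed seed for repeatability
    flipped = list(reversed(row))
    return [
        min(255, max(0, v + rng.randint(-jitter, jitter)))
        for v in flipped
    ]

left_wing = [120, 130, 140, 150]
right_wing = mirror_with_variation(left_wing)
```

The result still reads as the same feather pattern, but a close-up no longer reveals a perfect mirror image.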
3. Fill in the body and top of the head, blending in the wings. Keeping different parts of the image on different layers allows for added flexibility and control. For example, despite the earlier color correction, some image layers still won't match up. In Figure 25, the pattern used on the head was too bright and of a slightly pinker hue than the rest of the bird.
4. By adjusting the hue, saturation, brightness and contrast, the head layer blends in with the rest of the owl texture. Keeping it on a separate layer simplifies the color correction. Once the wings, body and head all look good, merge the layers (Figure 26).
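The brightness/contrast part of that adjustment boils down to a linear remap around mid-gray. The sketch below uses one common formulation; actual paint packages differ in the exact curve, and the channel values are invented for illustration.

```python
def adjust_brightness_contrast(value, brightness=0, contrast=1.0):
    """Linear remap of an 8-bit channel value: contrast scales the
    distance from mid-gray (128), brightness shifts the result.
    One common formulation; paint packages vary in the details."""
    out = (value - 128) * contrast + 128 + brightness
    return min(255, max(0, round(out)))

# Darken and slightly flatten an overly bright head layer so it sits
# with the rest of the owl (values are illustrative).
too_bright = [200, 210, 220]
adjusted = [
    adjust_brightness_contrast(v, brightness=-20, contrast=0.9)
    for v in too_bright
]
```

Because the layer is kept separate, the same remap can be re-run with different numbers until the head visually matches, then the layers are merged.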
The edges of the texture image extend beyond the edges of the model image. Because this image is going to be used as a projection, it is better to give some leeway beyond the model. Any part of the image that extends beyond the model in the 3D paint package will simply fall outside the boundaries of the model and will not be used. Deep Paint also allows the parts of the image that should not be used in the projection to be erased before the image is projected onto the model. It is simpler to create extra data that will be thrown away than to try to add missing detail later.
The Side View
The side view (Figure 27) shows different challenges than the face and top views. In this section, the legs of the bird need to be addressed, as well as the side of the body.
1. The barn owl has a light, spotted underbelly, a very different color than its topside. An area of this lighter underside can be cloned from one of the photographic reference images and used to fill in the underside (Figure 28).
2. Next, texture the feet. The feet on this owl are a yellowish color that blends into the owl's white spotted feathers about halfway down. Using a close-up image of a hawk's feet, the feet of the owl are created, with the photographic reference of the owl as a guide for color and placement (Figure 29).
The hawk's feet are color corrected to the right color and then cloned to create the shape of the model. This is all that needs to be done from this side. Normally, the side of the face would be done at this time as well; however, because extensive blending will be done on the face using the image created for the front projection, no additional work is needed in the side view.
Creating these images using this technique may seem like a lot of trouble to go through if there is a 3D paint program available. However, if these images are accurately and cleanly created in 2D using Photoshop before projecting them as 3D textures, this process should save time in the long run. The blending and painting done in most 3D paint programs do not give the textures the kind of color and detail consistency that can be achieved using this technique.
The model will be imported into Deep Paint for 3D texture projection. There are several options for getting the model from Maya into Deep Paint. Deep Paint provides a plug-in that imports the model and textures from Maya and creates a special texture node that loads multiple image-based texture files into a single node, making it easy and seamless to bring many cryptically named texture images into Maya for rendering. However, when the individual texture images need to be accessed, these complex texture nodes are difficult to extract specific data from. Depending on the final requirement, this process can either save a lot of time or cause confusion. For the purposes of this tutorial, the Maya plug-in for Deep Paint will not be used.
Import the Model into Deep Paint
Deep Paint has many high-quality paint options and can create high-resolution images that conform cleanly to the UVs assigned to the model. It is capable of creating professional quality work for projects like this one.
1. First, project the face. The face is done first because there will be a lot of stretching down the body after projection, which can be fixed by layering the body textures over the stretched areas. The model is scaled and rotated face forward. The Deep Paint projection options should be set to first surface only so the face image is not projected all the way back to the tail feathers. The face photo created earlier is imported and aligned with the model (Figure 30).
2. When the image of the face is projected on the model, the white part of the image stretches across the body. The white area outside the face will be covered by other projections later on. The model should then be rotated into top view, and the top photo created in the earlier steps imported as a 2D image in Deep Paint.
3. Make sure the head part of the top-view image is erased in projection mode before the image is projected onto the model; if the previous projection is accidentally projected over, the face will have to be projected again. Once the top view is aligned and properly edited, project the image onto the model. As with the face projection, the projection mode should be set to first surface only (Figure 31).
4. Rotate the model to the side view. Make sure that the projection mode is set to project both sides, because the side view will be projected through the model to create symmetrical textures on both sides of the model. Import the image created earlier for the side texture.
5. Before projecting the side image onto the model, erase the face and wings from the image while it is in projection mode. If this step is skipped, the projections on the face and the top will have to be redone (Figure 32).
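The "first surface only" setting matters because a projection ray from the front passes through the whole owl. The sketch below is a toy model of that logic, not Deep Paint's implementation: for each ray, only the nearest surface hit receives the texel, so the face texture never lands on the tail feathers behind it. The surface names and depths are invented for illustration.

```python
def project_first_surface(surface_hits, texel):
    """Given the surfaces hit by one projection ray (each a dict with
    a name and a depth along the ray), paint the texel only onto the
    nearest hit - the 'first surface only' projection mode. Painting
    every hit would push the face texture through to the back of the
    model."""
    nearest = min(surface_hits, key=lambda s: s["depth"])
    return {nearest["name"]: texel}

# One ray through the owl from the front hits the face first, then
# the back of the body, then the tail feathers.
hits = [
    {"name": "face", "depth": 1.0},
    {"name": "body_back", "depth": 4.0},
    {"name": "tail", "depth": 6.5},
]
result = project_first_surface(hits, texel=(230, 220, 200))
```

The side view's "project both sides" mode is the opposite choice: there, painting every surface the ray passes through is exactly what produces matching textures on both halves of the model.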
Later this month VFXWorld will publish Part 2 of the tutorial on Organic Texture Mapping, which will include details on how to import textures into Photoshop. To learn more about constructing 3D characters and other topics of interest to animators, check out Inspired 3D Modeling and Texture Mapping by Tom Capizzi, series edited by Kyle Clark and Michael Ford; Premier Press, 2002; 272 pages with illustrations; ISBN 1-931841-50-0 ($59.99). Read more about all four titles in the Inspired series and check back to VFXWorld frequently to read new excerpts.
Author Tom Capizzi (left), series editor Mike Ford (center) and series editor Kyle Clark (right).
Tom Capizzi is a technical director at Rhythm & Hues Studios. He has teaching experience at such respected schools as the Center for Creative Studies in Detroit, the Academy of Art in San Francisco and the Art Center College of Design in Pasadena. He has worked in film production in LA as a modeling and lighting technical director on many feature productions, including Dr. Dolittle 2, The Flintstones: Viva Rock Vegas, Stuart Little, Mystery Men, Babe 2: Pig in the City and Mouse Hunt.
Series editor Kyle Clark is a lead animator at Microsoft's Digital Anvil Studios and co-founder of Animation Foundation. He majored in film, video and computer animation at USC and has since worked on a number of feature, commercial and game projects. He has also taught at various schools, including San Francisco Academy of Art College, San Francisco State University, UCLA School of Design and Texas A&M University.
Michael Ford, series editor, is a senior technical animator at Sony Pictures Imageworks and co-founder of Animation Foundation. A graduate of UCLA's School of Design, he has since worked on numerous feature and commercial projects at ILM, Centropolis FX and Digital Magic. He has lectured at the UCLA School of Design, USC, DeAnza College and San Francisco Academy of Art College.