Continuing our run of excerpts from the Inspired 3D series, David Parrish, in the second of a two-part article, addresses the dead giveaways that separate the real world from the CG world.
This is the latest in a number of adaptations from the new Inspired series published by Premier Press. Comprising four titles and edited by Kyle Clark and Michael Ford, these books are designed to provide animators and curious moviegoers with tips and tricks from Hollywood veterans. The following is excerpted from Lighting and Compositing. Read Part 1 of this article.
[Figures 1 & 2] Block with no motion blur (left) composited over a background plate. Block with motion blur and a shutter speed setting of 0.1 (right).
3D CG Motion Blur Comparisons
The amount of motion blur depends on the speed of the motion, and this factor can have a large impact on how computer graphics elements are viewed. In the simple composite shown in Figure 1, the block is rendered with no motion blur. The block is actually moving from the top right to the bottom left of the picture plane, but this image gives no indication of that animation. The clarity of the render, along with its size in the frame, gives away the lack of detail in the block's textures. The sharp edges of the block are a dead giveaway that it has not been thrown across the view of the camera, even though the 3D computer graphics scene indicates that it has.
The image in Figure 2 shows the block with the same animation as in Figure 1, but this time motion blur has been turned on in the renderer with a shutter speed setting of 0.1. The block now appears as if it were tossed gently in front of the camera. The shutter speed is set lower than a normal film camera would record: because the normal setting would be 0.5, this image produces 1/5 the amount of blur one would expect to see with a block moving at this speed and recorded with a 180-degree shutter. Although the motion blur is slight, it still makes quite a difference in the look of the image. The interior of the block is blurred, but note the edges. A small range of pixels around the outside of the image is now semi-transparent, allowing portions of the background scene to be viewed through the block's edges. 3D motion blur calculates the opacity of the object in each pixel by making it proportional to the amount of time the object resides in that pixel. With the shutter setting at 0.1, the moving block is exposed to the film for only 1/240 of a second, so the amount of motion blur and transparency is small.
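To make the arithmetic concrete, here is a minimal Python sketch (not from the book) that converts a renderer-style shutter setting into an exposure time and a screen-space blur length; the frame rate and object speed are assumed values for illustration:

```python
# Hypothetical illustration: shutter setting -> exposure time and blur length.
FRAME_RATE = 24.0  # frames per second, standard film rate

def exposure_time(shutter_setting):
    """A shutter setting of 0.5 (a 180-degree shutter) exposes each frame
    for half the frame interval; a setting of 0.1 exposes it for a tenth."""
    return shutter_setting / FRAME_RATE

def blur_length(shutter_setting, speed_px_per_frame):
    """Screen-space blur streak, in pixels, for an object moving at a
    constant speed across the picture plane."""
    return speed_px_per_frame * shutter_setting

print(exposure_time(0.1))     # 1/240 of a second, as in Figure 2
print(exposure_time(0.5))     # 1/48 of a second, a 180-degree shutter
print(blur_length(0.5, 100))  # a 50-pixel streak at 100 px/frame
```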
[Figures 3 & 4] Block with motion blur and a shutter speed setting of 0.25 (left). Block with motion blur and a shutter speed setting of 0.5 (right).
Increasing the shutter speed setting to 0.25 yields the result shown in Figure 3. This amount of blur creates the perception that the block is moving much faster. The transparent area around the edges is much larger, because the two frames combined to create this image are now farther apart. The blurring has also reduced the contrast and saturation of the block, because the values are mixed together.
If the block is moving very fast across the picture frame (which it is in this particular animation), an image like that in Figure 4 is appropriate. The shutter speed setting for this frame is 0.5, which matches a typical film camera with a 180-degree shutter. At this setting, the block has moved a great distance between frames 22 and 22.5, creating large areas of transparency. The blur is so great that the object is now barely recognizable as a block. If this block only appears in shots with a great deal of motion blur, making its color and texture almost indiscernible, the texture painting and shader departments should be alerted ahead of time. It does not make sense to spend time painting detailed textures and developing complex shaders for a computer graphics object only viewed as a blur. If the object is to be used in other scenes in which it is moving less or not at all, attention to detail in the textures and shaders is appropriate. If the block is viewed only as in Figure 4, the time is better spent elsewhere.
2D Motion Blur Techniques
Motion blur can also be accomplished using 2D techniques, which are computationally simpler than the 3D method but not as accurate. 2D motion blur evaluates the change in position of pixels on the picture plane and does not take into account the actual 3D distance covered by objects. This method works fairly well for motion across the picture plane, but is less successful when objects rotate, deform or fly directly toward or away from the camera. 2D blur evaluates the difference between two images and blurs differing pixels by an amount proportional to the distance (in 2D screen space) they have moved. Some 3D software packages incorporate the 2D motion blur technique, with the shutter setting defining the times for two renders, much as with the 3D method. If 2D motion blur is added, the difference between the two renders required for the blur can only be measured in the x and y directions; no movement in the z direction is taken into account. As with 3D motion blur, though, the edges will still be semi-transparent.
This semi-transparency is a result of combining images in which the block occupies different positions. If the two renders (one at frame 22, for instance, and one at frame 22.1) are overlaid, the transparency is defined by the areas where one image extends beyond the other (see Figure 5). In this figure, one block is semi-transparent and the other darkened to clearly illustrate the areas in which one extends beyond the other. In the interior areas of the block, in which each render has red, green, blue and alpha values, the two images are blurred together. In the areas in which one block extends beyond the other, either one block is blurred to fill the space of the second block's alpha, or each block is blurred into the other block's alpha and those images are averaged. The blurring of color and alpha channels creates semi-transparency. There are several variations on 2D motion blur implemented by different software packages or used for different situations, but they all come down to mixing, blurring and repositioning a combination of images.
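As a rough illustration of the idea, and not any particular package's algorithm, the following Python sketch approximates 2D motion blur for uniform screen-space motion by averaging several copies of an RGBA render shifted along the motion vector; averaging the alpha channel along with the color channels is what produces the semi-transparent edges:

```python
import numpy as np

def motion_blur_2d(rgba, offset, samples=8):
    """Average `samples` copies of an RGBA image translated along `offset`
    (dx, dy in pixels), approximating 2D motion blur for uniform
    screen-space motion. Requires samples >= 2."""
    h, w = rgba.shape[:2]
    accum = np.zeros_like(rgba, dtype=np.float64)
    for i in range(samples):
        t = i / (samples - 1)  # 0..1 across the shutter interval
        dx = int(round(offset[0] * t))
        dy = int(round(offset[1] * t))
        shifted = np.zeros_like(rgba, dtype=np.float64)
        # Copy the overlapping region of the translated image.
        src = rgba[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
        shifted[max(0, dy):max(0, dy) + src.shape[0],
                max(0, dx):max(0, dx) + src.shape[1]] = src
        accum += shifted
    return accum / samples
```

Shifting whole pixels is the crudest possible resampling; a production implementation would interpolate sub-pixel positions and weight the samples across the shutter interval.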
Reflections
Reflections help integrate computer graphics objects clearly into scenes. They are most noticeable on very reflective objects, but are also helpful in providing subtle details for many elements in a scene. A reflection is, in essence, the same phenomenon that provides a specular highlight on any surface: light reflecting directly from a surface into the camera lens. If the surface is very smooth, as with a perfect mirror, the reflected light provides a clear reproduction of the elements in the scene. As the roughness of the surface increases, the clarity of the reflection is reduced by the scattering of light. The brightest light sources and objects reflect off almost any surface, causing specular highlights.
Collecting Textures
A surface reflects the world around it, but unfortunately the camera only records a very narrow view of this world. To create an accurate reflection in computer graphics, additional footage is required of the surrounding scene. One way to collect this information is to take several shots of the surrounding area using a wide-angle lens. With the camera placed in the position of the subject, shots are taken of the surrounding area to the left and right, as well as above and below. The sky and the ground in most scenes are the simplest part of this process, because they can be generic textures. The viewer expects to see a blue sky reflected in the top of a shiny element, and grass, concrete or any generic ground texture as the bottom portion of the reflection (see Figures 6 and 7).
[Figure 6] Possible ground textures.
[Figure 7] Possible sky textures.
The other elements in the scene may require more attention, because they are typically at eye level and more clearly in the camera's view (depending on the angle of the camera), but they still do not require extremely detailed texture maps. Because reflections are distorted both by the shape of the object they reflect off and by the roughness of that surface, complete precision when creating reflection texture maps is not necessary. A 360-degree panorama, created by piecing together many shots from the position of the subject in a scene, can be used without exactly matching and blending every seam between images (see Figure 8). The images used for the environment map in Figure 8 have been reduced in scale in the x direction to make a more manageable image file (at actual size, the image is more than 10,000 pixels wide) and to display more easily on the page. If the portions of the reflection map showing up on the subject appear incorrect, the environment texture can be scaled during the texture-mapping phase in the shader. Unless viewed in a perfect, flat mirror, reflections are distorted and faint on most surfaces, and any inaccuracy is not noticeable.
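A minimal sketch of that stitching step, using the Pillow imaging library with hypothetical filenames and an assumed scale factor:

```python
from PIL import Image

# Hypothetical filenames: wide-angle shots taken from the subject's
# position, already ordered left to right around the full 360 degrees.
shots = [Image.open(f"pano_{i:02d}.jpg") for i in range(12)]

# Paste the shots side by side into one strip. Exact seam matching is
# unnecessary; the reflection will be distorted and faint anyway.
width = sum(s.width for s in shots)
height = max(s.height for s in shots)
panorama = Image.new("RGB", (width, height))
x = 0
for s in shots:
    panorama.paste(s, (x, 0))
    x += s.width

# Reduce the x scale to keep the texture file a manageable size.
panorama = panorama.resize((width // 4, height))
panorama.save("env_cylinder.jpg")
```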
CG Environment Types
To project the environment texture onto a subject, a shape defining the environment is chosen in the shader. This shape is most commonly a sphere, a cube or a cylinder. Each type creates a slightly different reflection and has advantages and disadvantages. The spherical projection method most closely recreates the real-world environment, but warps standard rectilinear images to fit its shape; to provide an accurate map, it requires a texture recorded with a fisheye lens, and these lenses are expensive and not always readily available. A cube environment shape lends itself to use as the walls, ceiling and floor of a room containing reflective objects. In the example presented here, a beach ball is placed into the scene in Figure 8. The reflection map in this case uses cylindrical mapping and is shown at full intensity in the image on the left of Figure 9. The reflection is completely independent of lights in the scene, so even with no lights, or with each light's intensity set to zero, the reflection is still evident. The amount of reflection is controlled in the surface shader, typically as a percentage or a normalized value between zero and one. When the amount of reflection is reduced to an appropriate level for the surface material, the effect offers another touch of realism for blending the computer graphics element into the shot (see Figure 9, right).
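As a sketch of how a cylindrical environment lookup might work inside a shader (assumed math and an assumed env_map.sample interface, not any specific renderer's API): the reflection direction is converted to an angle around the cylinder for u and a height for v, the map is sampled, and the result is mixed in according to the reflectivity. No light contributes to this calculation, which is why the reflection survives with every light at zero intensity.

```python
import math

def cylindrical_env_lookup(reflect_dir, env_map, height_scale=1.0):
    """Map a reflection direction (x, y, z) to (u, v) coordinates on a
    cylindrical environment texture and return the sampled color.
    env_map is assumed to expose sample(u, v) with u, v in [0, 1]."""
    x, y, z = reflect_dir
    u = math.atan2(x, z) / (2.0 * math.pi) + 0.5          # angle around cylinder
    v = min(max(y / height_scale * 0.5 + 0.5, 0.0), 1.0)  # height on cylinder
    return env_map.sample(u, v)

def shade(base_color, reflect_dir, env_map, reflectivity):
    """Mix the environment reflection into the surface color. At a
    reflectivity of 1.0 the map shows at full intensity (Figure 9, left);
    lowering it blends the element into the shot (Figure 9, right)."""
    refl = cylindrical_env_lookup(reflect_dir, env_map)
    return tuple(b + reflectivity * r for b, r in zip(base_color, refl))
```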
Specific Reflections
In some cases, the reflection is vital to the scene, and the director asks for a specific image to be clearly defined. In Terminator 2: Judgment Day, the liquid metal Terminator, taking control of a helicopter, looks at the pilot and tells him to get out. For this scene, it is important to the story that the pilot's frightened face be clearly shown in the reflections on the Terminator's head. In such a case, the most reliable way to attain the desired result is to record a clear shot of the image required for the reflections. This image can then be manipulated or placed over an appropriate background to fit with the scene, and then used as the reflection texture. The image can simply be another texture added to the appropriate area of the object. The other reflections are nondescript, simply matching the color and value of the surrounding scene. When using reflections, identify the important element, and make the remaining areas of the reflection map unobtrusive, even if not entirely accurate.
Additional data for creating environment maps can be found in reference shots of a chrome sphere. The chrome sphere offers precise data for creating reflections in a scene from the camera's point of view. If the camera is zoomed in on the chrome sphere to capture enough detail, the resulting image can be used as an environment texture. This method creates accurate and believable reflections, and it is a valuable tool in a digital production pipeline.
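The geometry behind the chrome-sphere trick can be sketched as follows, assuming an orthographic view of a perfectly mirrored ball (a real setup would also account for camera perspective): each pixel inside the ball's silhouette yields a surface normal, and reflecting the view direction about that normal gives the environment direction recorded at that pixel.

```python
import math

def sphere_pixel_to_env_dir(px, py):
    """For a mirrored ball photographed head-on, map a pixel position
    (px, py) in [-1, 1] inside the unit disk to the environment
    direction reflected into the camera at that pixel."""
    r2 = px * px + py * py
    if r2 > 1.0:
        return None                  # outside the ball's silhouette
    nz = math.sqrt(1.0 - r2)         # sphere normal is (px, py, nz)
    # View direction toward the ball: v = (0, 0, -1).
    # Reflection: r = v - 2 (v . n) n, with v . n = -nz.
    return (2.0 * nz * px, 2.0 * nz * py, 2.0 * nz * nz - 1.0)
```

Iterating over every pixel and writing the sampled colors into latitude-longitude coordinates converts the ball photograph into an environment texture.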
Eyes
Eyes represent the life force behind believable computer graphics creatures and characters. Every director and supervisor on the projects I have worked on has concentrated a great deal of energy and attention on the eyes of CG characters. There are several methods for ensuring that the audience notices the eyes and believes there is life and energy behind them. We look people in the eye every day during conversations and interactions, so we have a great deal of experience with how eyes should look. You may not be able to explain that a glisten is missing from the caruncula lacrymalis (the fleshy area on the inner angle of the eye), but the lifeless eye is noticed.
Textures
A believable eye begins with a high-quality texture (see Figure 10). The texture shown here has a reasonable amount of detail in the iris (the orange section), as well as some vein detail in the whites of the eyes. The distortion of this texture (the black at the top represents the pupil) is the result of a square image being used as a spherical map corresponding to the shape of an eyeball. A great deal of reference study is invested in the construction of eyes and in the painting of their textures. Depending on how close the character or creature comes to the camera, details such as individual red blood vessels in the whites of the eyes are added or ignored.
The lighting and reflections also play an important role in the look of the eyes on a CG character or creature. Because the eye is a smooth, wet, shiny surface, reflections in eyes are sometimes quite distinctive. If the eyes come close to the camera, an identifiable shape, such as a window, light fixture or cloud, can add a great deal of realism to the character (see Figure 11). A particular reflection can be added to the eye with a special environment texture map. The sky and clouds here were specifically chosen and oriented to be distinctive in the character's eyes.
[Figure 11] A sky with clouds reflected in the character's eyes.
Specular Highlights
Although reflections are important, every special effects supervisor I have ever worked with has asked specifically to see specular highlights on the eyes of characters and creatures. It has long been standard procedure to use special eye lights on close-ups of live actors to accentuate their performances, and blending CG characters into the film requires the same treatment. These lights can be placed fairly close to the eyes, and often a separate one is created for each eye. The light should be a specular-only light, and it can be roughly located with the specular information offered in the hardware render window before fine-tuning its location in the rendered image (see Figure 12). The specular highlights on the eyes are usually small and tight, once again because of the wet, shiny and smooth surface of the eyeball. Parameters for the size of the specular highlight can be adjusted in the light.
With the setup just described, animated characters are likely to move out of the lights providing the specular highlights. It is sometimes beneficial to parent (attach to maintain a fixed relative position and orientation) the eye lights to the character so that they move wherever he does. For this to appear believable, it is important to have the lights inherit the translations but not the rotations of the character. If they follow both the translations and rotations, the specular highlights will appear in exactly the same spot in each eye in every frame of the shot, no matter how much the character moves and rotates.
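A minimal sketch of that constraint, with hypothetical light and character objects (any scene graph exposing separate translation and rotation channels would look similar):

```python
def update_eye_light(light, character, offset):
    """Follow the character's translation but not its rotation, so the
    highlight can still travel naturally across the eye as the head turns.
    `offset` is the light's fixed position relative to the character's
    origin, chosen when the highlight is first placed."""
    tx, ty, tz = character.translation  # world-space translation only
    ox, oy, oz = offset
    light.position = (tx + ox, ty + oy, tz + oz)
    # Deliberately do NOT apply character.rotation here; inheriting it
    # would pin the highlight to the same spot in each eye in every frame.
```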
The best reference for creating believable CG eyes is found in studying the eyes of everyone around you. Look closely and see what gives a person's eyes depth and life. Also, look at the films with CG characters you think are particularly lifelike, and study their eyes. Chances are the lighting artists who created them paid a great deal of attention to the eyes.
The Dangers of Accuracy
Several of the headings in this chapter point out ways to add detail and realism to CG renders and make them more believable. Before attempting to include all these techniques in creating the perfect computer graphics scene, step back and evaluate the visuals they create. Numbers can be perfect and still look wrong. As with a lighting diagram, the reference and data are collected and interpreted, not blindly entered into the computer as the final lighting of a scene. The end result is the key, and the eyes are the only true judges. If the numbers all say it is a perfect match, but everyone you show it to says it looks out of place, then the numbers are wrong. Do not fall in love with numbers, symmetry and the enticement of an image in which every pixel fits neatly into a predetermined equation. The way light sculpts the forms of a scene is an organic process. The difference between a natural lighting setup and a crystal-clear, shiny computer graphics scene is obvious to the most casual observer.
With that said, attention to detail is still extremely important. The details in the eyes help the viewer identify with a character and add life to CG creatures. Reflections tie elements in directly with the elements surrounding them, and motion blur helps to mimic the recording process of a film camera. Cookies and shadows offer methods for occluding light to add both realism and compositional interest to a scene. Each of these techniques can be explored to give the lighting artist indispensable tools for creating quality computer graphics imagery.
To learn more about lighting and compositing and other topics of interest to animators, check out Inspired 3D Lighting and Compositing by David Parrish; series edited by Kyle Clark and Michael Ford: Premier Press, 2002. 266 pages with illustrations. ISBN 1-931841-49-7. ($59.99) Read more about all four titles in the Inspired series and check back to VFXWorld frequently to read new excerpts.
David Parrish (left), Kyle Clark (center) and Mike Ford (right).
David Parrish went straight to work for Industrial Light & Magic after earning his master's degree from Texas A&M University. During the five years that followed, he worked on several major films, including Dragonheart, Return of the Jedi: Special Edition, Jurassic Park: The Lost World, Star Wars: Episode I The Phantom Menace, Deep Blue Sea, Galaxy Quest and The Perfect Storm. After five years with ILM and a short stay with a startup company, he was hired by Sony Pictures Imageworks to work on Harry Potter and the Sorcerer's Stone.
Series editor Kyle Clark is a lead animator at Microsoft's Digital Anvil Studios and co-founder of Animation Foundation. He majored in film, video and computer animation at USC and has since worked on a number of feature, commercial and game projects. He has also taught at various schools, including San Francisco Academy of Art College, San Francisco State University, UCLA School of Design and Texas A&M University.
Michael Ford, series editor, is a senior technical animator at Sony Pictures Imageworks and co-founder of Animation Foundation. A graduate of UCLA's School of Design, he has since worked on numerous feature and commercial projects at ILM, Centropolis FX and Digital Magic. He has lectured at the UCLA School of Design, USC, DeAnza College and San Francisco Academy of Art College.