'Inspired 3D: Lighting and Compositing': Dead Give-Aways: Real World Vs. the CG World — Part 1

Continuing our run of excerpts from the Inspired 3D series, David Parrish, in part one of a two-part article, addresses the dead give-aways that distinguish the real world from the CG world.

All images from Inspired 3D Lighting and Compositing by David Parrish, series edited by Kyle Clark and Michael Ford. Reprinted with permission.

This is the latest in a number of adaptations from the new Inspired series published by Premier Press. Comprised of four titles and edited by Kyle Clark and Michael Ford, these books are designed to provide animators and curious moviegoers with tips and tricks from Hollywood veterans. The following is excerpted from Lighting and Compositing.

Lighting a computer graphics scene is a simple process. Press a button to create a light and then aim the light at the CG elements to bathe the scene in illumination. If it's too dark, increase the brightness or add a couple of extra lights from different directions. Heck, if that still doesn't make it bright enough, the 3D software people have come up with ambient lights, which light the whole scene without the trouble of aiming the light. It's simple, right? The difficulty lies in imparting the real-world subtleties of lighting upon the CG world. Following the steps outlined above produces a bright, shiny and unrealistic scene. All the work on the models, shaders and textures can quickly be nullified with a poorly conceived lighting approach. Mathematics and physics provide rules for how a light reflects off a surface, but no such rules exist for how to bring life to a computer graphics scene through lighting.

You can approach lighting a shot in many ways, and how to begin can seem an overwhelming decision. The basic three-light setup of the key, fill and rim lights offers a good starting point, but even given that paradigm, the number of options quickly multiplies. Each light has a number of controls affecting the look of the image as well as the time it takes to render. The lighting artist constantly walks the line between efficiency and image quality when adjusting the lighting and rendering options. The computer does a great deal of the work, but the price in terms of time and computer power is sometimes prohibitive. There are many tricks and shortcuts for simulating computationally expensive processes, and this chapter explores various techniques available to the TD.

Choosing a Technique

There are as many different ways to light a computer graphics scene as there are lights on a movie set. I have known TDs who light exclusively with spotlights, those who rarely use more than five lights in any scene, some who use 30 lights no matter how simple the background, and others who rely heavily on computationally expensive processes. In a professional production environment, many of these choices are made before the shot is ever scanned. Each studio has specific software solutions, some proprietary and others off the shelf, and a pipeline in place for lighting a scene. While it is possible to implement new approaches in a studio, it is difficult to change the general approach used for lighting a scene. With personal projects there is more freedom, but the price of software and hardware can be prohibitive. Each situation requires careful evaluation of the requested effects, the research requirements, the software and hardware required, and the necessary talent. As you might expect, the mighty dollar plays a large role, and every stage is evaluated in terms of the overall budget for the project.

Large Studios

The choice of technique for lighting a shot is in large part dependent upon whether the production is handled by a small group or a large studio. Small groups or individuals generally have more freedom to explore creative options, but less money to spend on them. Large studios have more money, but are often restricted by established pipelines and lengthy testing phases for the implementation of new techniques. New techniques for achieving more realistic or stylized images are frequently discussed, but there must be a good reason (along with sufficient budget) to use them on a production. If the techniques require the purchase of software, that costs money. It also takes time and money for a studio's programmers to write software and to integrate it into the lighting pipeline. If the technique is computationally expensive and requires a large amount of memory, disk space, and/or render time, this can affect every production in the facility. This either calls for the purchase of more machines and disk space or a delay in the output of images. These factors often outweigh the fact that a new technique will produce better imagery. If new techniques are promising, they are often tested on smaller projects, independent of the main pipeline for a studio. If the tests prove successful, the new technique can begin the process of integration with the studio's main production pipeline. The competition is also a consideration, because another studio using a new technique to create higher quality images in less time always gets the attention of those in charge of the budget. At some point, regardless of the difficulties in implementation, most developments in computer graphics become affordable and are added to the studio's arsenal.

Small Studios or Individuals

For an individual or a small group, the options for how to light and render elements must go through a similar process. The initial evaluation investigates whether the technique really provides the desired look for the entire project. Every computer graphics artist wants his work to look as good as possible, but it still comes down to ability, time, and money. The individual must have the knowledge and experience to use the technique properly or write software to create it in the first place. This is where a large studio, with a large talent base, may have an advantage. An individual or a smaller group, because of time and talent constraints, may be relegated to using software written by third-party vendors. Also, the specific needs of a production may require enhancements to existing software for achieving the desired results. If a small group does not have the technical ability to write the computer code for those enhancements, the cost of hiring a skilled and experienced programmer can take funding away from other areas of the project. Money plays a role in the talent as well as the computing power for generating the images. If more money and time are spent on a programmer developing specific techniques, less time remains in which to output the images. This requires more money still, for computers with sufficient memory, disk space, and processor speed to allow the production to keep on schedule. Each decision affects the other, and a small group on a tight budget must evaluate a number of techniques and the effect they will have on the production schedule.

Efficiency

The lighting workflow can be optimized within any system, regardless of the techniques chosen. Prioritization is the first step, because every shot tends to have more tasks and details than accounted for in the production schedule. Each shot is a set of visual layers, and each layer needs to be prioritized. In terms of lighting, this means breaking down the contributions and deciding how much time can be spent on each. One possibility is to render each light separately and control their relative levels in the compositing stage. Another approach is to separate the computer graphics elements into components, such as fill, bounce, and specular, which can also be combined in the composite stage. The choice is up to the artist's personal preferences, but each requires starting simple and building on a base.

Simple Tests

In the early lighting stages, many simple tests are more valuable than a single complex test. With the minimum number of lights to provide a general idea of the lighting scheme, many tests can be done on variations of the light positions, intensities, and colors. If the subject of the scene is an extremely complex CG model, the initial testing phase should be done on a simplified version (a proxy model) or a stand-in object such as a deformed sphere. The size of these renders also plays a large role in the speed, because a render of one-half the resolution in pixels takes about one-quarter the time to render (not including render overhead such as file access times). Other factors in the speed are the render quality and optimization controls, such as the number of samples or the shading rate. The samples setting refers to the number of anti-aliasing samples taken per pixel; more samples usually produce better quality, though other factors can affect this. The shading rate defines how fine an area is sampled to select the color for a pixel; a smaller sampling area produces better quality and a longer render time (see the Rendering section in Chapter 9: Computer Representations of Lights and Surfaces).
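
As a rough sanity check, the pixel-count rule of thumb can be expressed in a few lines of Python (an estimate only, since real renders add overhead such as file access and scene loading):

```python
# Rough render-time estimate: time scales roughly with pixel count,
# so half the resolution in each dimension is about a quarter the time.
def estimated_render_time(base_seconds, base_res, test_res):
    base_pixels = base_res[0] * base_res[1]
    test_pixels = test_res[0] * test_res[1]
    return base_seconds * (test_pixels / base_pixels)

# A 20-minute full frame suggests a half-resolution test of about 5 minutes.
print(estimated_render_time(20.0, (2048, 1556), (1024, 778)))  # 5.0
```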

Shifting to High-Resolution

After the basics are established, it is time to move on to the high-resolution model and incorporate the lighting details. To judge the CG elements properly, the shading quality options need to be set close to their final values. The resolution also needs to be set to a reasonable size, because many of the details to be fine-tuned are not evident at lower resolutions. Once all the settings are established for a high-resolution image, the render time per frame of the tests is much longer. As stated earlier, doubling the pixel resolution quadruples the render time, and increasing any quality option parameters lengthens the render further. This makes it difficult and impractical to render an entire image each time a light is tweaked.

A good practice is to render the entire image as a base starting point, and save a copy of that image to disk. Then focus on a particular, representative area of the CG element and do test renders of only a very small portion of the entire image. Different software packages have different names for this render option; in Maya it is called a render region. An area can be selected simply by dragging a rectangle with the mouse, and then only the area defined by that rectangle is rendered. This saves a tremendous amount of time, but be wary of spending too much time making changes with a small area of the image as your visual feedback.

Lighting adjustments can cause unforeseen or unwanted results in other areas of the frame, so be sure to occasionally test the entire frame to ensure the desired look for the image.
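
As an illustration of this workflow, the following sketch uses the Pillow imaging library to paste a small region test over the saved full-frame base so the change can be judged in context; the file names and region origin are hypothetical, not any renderer's convention:

```python
# Composite a region test render over the saved base frame for review.
from PIL import Image

base = Image.open("shot010_base.tif")        # full frame saved earlier
region = Image.open("shot010_region.tif")    # small test render of one area
preview = base.copy()
preview.paste(region, (820, 440))            # region's position in the frame
preview.save("shot010_region_preview.tif")   # judge the tweak in context
```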

Shadow Time-Savers

Shadows are discussed in more detail later in this chapter, but it is necessary to introduce a timesaving device with regard to shadow maps. The creation of depth map shadows can take a tremendous amount of rendering time. For each frame of a shot, the scene must be rendered once from the view of each light utilizing depth map shadows. Larger depth maps yield sharper resolution in the shadow but take more time to render. Start off with small maps, probably 512 pixels square, and see how the image looks. Most likely the shadow buffer (another term for shadow depth map) size will need to be increased, but the goal is to use the smallest size you can get away with. Depending on the renderer, low-resolution shadow maps may produce a desirable, soft shadow or an undesirable, jagged shadow. Experimentation is important to understanding and optimizing this process.
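
The trial-and-error sizing described above might be scripted along these lines; both render_with_shadow_map and looks_acceptable are hypothetical stand-ins for a renderer call and a quality check, not any package's API:

```python
# Sketch: double the shadow map size until the result looks acceptable,
# returning the smallest size that passes.
def smallest_shadow_map(render_with_shadow_map, looks_acceptable,
                        start=512, max_size=4096):
    size = start
    while size <= max_size:
        image = render_with_shadow_map(size)  # 512, 1024, 2048, ...
        if looks_acceptable(image):
            return size
        size *= 2
    return max_size  # cap reached; accept the largest map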

Another timesaving method for use with depth map shadows is rendering the shadow buffers once, and then reusing them over and over again. Most software packages offer the option to save the shadow buffers and use them over again the next time the image is rendered. With the shadow buffers created ahead of time, the render process is sped up considerably. Only use this technique if the lights are in their final position, because moving the lights means new shadow maps must be rendered. Also, if the shadow buffers are rendered and saved to disk before the final render is completed, image manipulation can be performed on those frames. An additional shadow can be added, for example. This brings up a good point on efficiency when dealing with rendered sequences of frames. With an adjustment such as the one just mentioned, in which a sequence of frames needs to be adjusted and overwritten, it is good practice to make a copy of the original sequence first.

Saving Versions

On the subject of saving frames, it is useful to save as many frames as possible during the progression of lighting a shot. Systems administrators and producers will hate me for saying this, but save as many images online as you can get away with. Images take up a great deal of disk space, but they are both the end result and the only true documentation of the incremental steps along the way. Supervisors and directors frequently ask to see previous versions, and sifting back through videotapes of the shot's history may not provide the necessary information. Images saved online also offer the opportunity to compare earlier and newer images side by side on the monitor. This can offer the lighting artist a great indication of progress, and can frequently help him spot areas that may have gotten worse instead of better. In addition to images, save as many versions of the lighting files as possible, with careful notes identifying the changes to each file. It is important to have a direct reference between images and lighting files, so also make sure the file names for the images clearly link them with the lighting file that produced them. Keep track of any shot shown in dailies or to a supervisor, because many times a director or supervisor asks to go back to something from several days or even several weeks ago. With careful notes and versioning, reproducing earlier lighting takes is a simple task.
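
One possible naming convention for keeping images traceable to the lighting files that produced them is sketched below; the scheme itself is an assumption for illustration, not a studio standard:

```python
# Hypothetical versioning scheme: image file names embed the version of
# the lighting file that produced them, so either can be traced to the
# other at a glance.
def version_names(shot, version):
    lighting_file = f"{shot}_lighting_v{version:03d}.ma"
    image_pattern = f"{shot}_lighting_v{version:03d}.%04d.tif"
    return lighting_file, image_pattern

print(version_names("shot010", 12))
# ('shot010_lighting_v012.ma', 'shot010_lighting_v012.%04d.tif')
```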

IBL Basics

Global illumination is a term describing light originating from every reflective and transmissive surface in a scene, as well as from light sources. As a technique, its most common implementations are radiosity and a variety of image-based lighting (IBL) techniques. Image-based lighting uses imagery in some way to illuminate the scene. In the past several years, much research has gone into implementing IBL techniques to produce spectacular images. Many of these also incorporate very complex computer vision or image-based rendering techniques. The term global illumination is thrown around by a number of artists, supervisors and directors, but it often actually describes a shortcut or a fake of true global illumination.

[Figure 1] IBL technique creating, coloring and positioning lights according to the background image.

[Figure 2] IBL technique using the background image as an environment map.

[Figure 3] IBL technique using a filtered background image as an environment map.

Multiple Lights

One basic IBL technique involves creating lights colored and positioned according to the scene in the image. Suppose we have an image of a day at the park into which a CG character is to be placed. The background image is sampled, and lights of the sampled color and intensity are placed in their apparent positions (Figure 1). It could be assumed that the park is similar to the right, to the left and behind the character, and the lights can be copied to each side. Additionally, only the brightest lights may be allowed to produce a specular reflection. Higher sampling creates more lights and more accurate results, but more lights are typically computationally expensive. This technique has been used to create automated lighting for entire sequences of shots. Once it is set up, a lighting TD need not touch another shot if all goes well. A 360-degree photo of the environment would generate a more accurate set of lights. This could be created from many stills tiled together to capture a panorama (a technique that appears later in this chapter) or just a couple of fisheye lens shots.
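
A minimal sketch of this sampling step follows, assuming Pillow for image access; the grid sampling, the brightness-as-intensity mapping, and the light records are illustrative choices, not a specific package's implementation:

```python
# Sample the background plate on a coarse grid and emit one light record
# per sample; pixel brightness stands in for light intensity.
from PIL import Image

def lights_from_image(path, samples_x=4, samples_y=3):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    lights = []
    for j in range(samples_y):
        for i in range(samples_x):
            x = int((i + 0.5) * w / samples_x)   # sample cell centers
            y = int((j + 0.5) * h / samples_y)
            r, g, b = img.getpixel((x, y))
            lights.append({
                "pixel": (x, y),
                "color": (r / 255.0, g / 255.0, b / 255.0),
                "intensity": (r + g + b) / (3 * 255.0),
            })
    # Sort brightest first; only the top few might get specular contributions.
    lights.sort(key=lambda light: light["intensity"], reverse=True)
    return lights
```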

Environment Map

Why do we even need lights at all? Another IBL technique simply uses the image as an environment map and multiplies the CG object's diffuse color by the corresponding color in the environment map. This technique uses no lights at all, so it can render very quickly. The shortcomings of this technique are that specular highlights are missing, and the object looks like the environment was shrink-wrapped onto it (Figure 2). The lighting all comes from the same distance away, and each piece of the environment map strikes an equal area on the object. To make this technique more useful (or useful at all), lights are added for bright locations of the image, and the image is filtered before it is used as an environment map (Figure 3). Suddenly this technique is more efficient than the first one mentioned, and it can provide almost the same quality and automation capability.
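
A sketch of this no-lights shading, assuming NumPy and a latitude-longitude environment image; the mapping convention is one common choice rather than any particular renderer's:

```python
# Look up the environment color from the surface normal (lat-long mapping)
# and multiply it into the diffuse color.
import math
import numpy as np

def env_lookup(env, normal):
    """env: HxWx3 float array; normal: unit-length (x, y, z)."""
    h, w, _ = env.shape
    nx, ny, nz = normal
    u = (math.atan2(nx, nz) / (2 * math.pi) + 0.5) * (w - 1)
    v = (math.acos(max(-1.0, min(1.0, ny))) / math.pi) * (h - 1)
    return env[int(v), int(u)]

def shade(diffuse_rgb, env, normal):
    # No lights at all: diffuse color times the environment color.
    return np.asarray(diffuse_rgb) * env_lookup(env, normal)
```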

Of course, there are still many more tricks to getting these techniques to produce beautiful images, but that discussion can and has filled volumes.

If you do not have the ability to implement an IBL or radiosity technique, there are a few ways to simulate their qualities. Appropriately colored and positioned bounce lights mimic reflective illumination. This is essentially an implementation of the above technique by hand, with only a select set of samples and lights. Using a texture map that already has the environment's lighting built in is also a very common way to simulate global illumination. Unless camera and character motion is limited, however, this cheat is easily noticed.

Shadows in Lights

In the real world, there is no such animal as a light that does not cast a shadow. In the CG world, however, shadows are optional. This level of control is beneficial for optimization and saving render time, but it is horrible for creating believable imagery. Computer graphics lights also enable the lighter to choose from a variety of shadow methods, including ray-traced shadows, depth map shadows, and shadows rendered as separate passes. Chapter 6 illustrates some of the problems with using non-shadowed lights (shown specifically in Figures 6.6 and 6.7), one being that light passes through objects if shadows are turned off. In many cases this is not an issue, but it can often cause problems by lighting up interior surfaces of a model that would otherwise be dark. The inside of a creature's or character's mouth is a good example (Figure 4).

[Figure 4] No shadows in lights, causing a brightly lit mouth interior.

[Figure 5] A self-shadow from the arm onto the body.

[Figure 6] Sharp self-shadows from many directions.

Some supervisors insist that every light in a computer graphics scene use shadows, but practicality usually calls for some lights without shadows. The render-time savings are one reason, but the effects of a shadowless light can also work to the advantage of the final image. Because it is unobstructed, it produces a fairly even level of illumination across a broader area than the same light with shadows. This can be useful for fill lights and bounce lights, or any light in a scene set to a fairly low intensity level. These shadowless lights become evident when they are bright or produce bright specular highlights, because the lack of a shadow opposite intense illumination is a glaring problem. A good way to test the effects of such a light is to isolate it, increase its intensity value to a high level, and render the scene. This points out areas to watch when the light is returned to its normal lighting values.

Types of Shadows

The difference between cast shadows and self-shadows is important to emphasize. Cast shadows are created when one object blocks the light from another object. A simple example is a character standing on the ground, blocking the light from hitting certain areas. The shadow on the ground appears as a cutout of the shape of the object occluding the light, and appears as if the object is casting its shape onto the ground. Cast shadows are only created in computer graphics scenes if the objects the shadows are to be cast upon are built in the scene. In the case of adding a character to a live-action background plate, the other objects are in the film but not necessarily in the CG scene, so creating cast shadows becomes a bit of a trick. Surfaces to receive shadows need to be created in the computer to mimic those in the image. The shadow pass can then be applied to the images in the composite. Purely digital scenes, in which every element exists in the computer graphics scene, offer an easier solution to cast shadows, because they are produced automatically if shadows are turned on.

Self-shadows are created when a part of a CG element occludes the light from another part of that same CG object. A good example is a character's arm moving between his body and the light source (Figure 5). This shadow is the same in principle as a cast shadow, because one object (the arm) is occluding the light from another object (the body). The difference in computer graphics is that the renderer needs to include each object in its own list of objects to consider when being shadowed. For ray-traced shadows, this means that rays bouncing between surfaces of the same object must be considered. A complex object shape may cause the ray tracer to hit its ray bounce limit sooner. If the level of illumination is to be maintained, more bounces must be allowed, which means more render time. This is why self-shadowing is often an option in a ray tracer. Self-shadows and cast shadows are both vital for making a computer graphics scene convincing. If a character is being placed into a live-action scene, and the shaders offer control over the softness of the shadows' edges, it is beneficial to turn shadows on in all or most of the lights in a scene. The control over the shadows' edges is important, because an element can begin to look a little crazy with sharp self-shadows from a variety of directions (Figure 6).

A scene typically has a single, dominant light source, called the key light. This light usually produces the most clearly defined shadow in a scene, while other less intense lights cast softer, fuzzier shadows. This is especially true for a light used to simulate ambient lighting in a scene, such as a bounce light (remember: ambient lighting is not the same as a CG ambient light). Every light source casts shadows, but the more diffused the source, the less perceptible the shadow becomes. With ambient lighting in a scene that has bounced off several surfaces before reaching the subject, the shadow can be faint and almost imperceptible. This makes it possible to place a great amount of blur on shadows from bounce lights, or even turn them off completely (Figure 7).

While fully digital scenes offer all the geometry necessary for cast shadows, they also present a problem in terms of using shadows in all lights. Due to the diffusion of many light sources in a scene, it is likely not desirable to have harsh shadows from every light criss-crossing through the image. This can completely change the composition of an image, not to mention confusing the original point of interest. For this reason, I find it more appropriate to use most lights without shadows in purely digital scenes as compared with fitting a CG element into a live-action background. The fully digital scene has no point of reference within the image to point out the inconsistencies of non-shadowed lights with reality. It is not advisable to turn off shadows in every light in a digital scene, or even in every light other than the key. Having several shadowless bounce and fill lights, however, can work as long as the possible pitfalls are taken into account.

Cookies

Cookies offer a nice break from the technical discussions of shading rates and anti-aliasing. Although they're not as tasty as their name might indicate, cookies are a valuable tool in both live-action film lighting and computer graphics lighting. A cookie (also called a gobo, cukaloris, cuke or slide map) is something placed in front of a light, usually with irregular openings, to occlude certain areas of light from the subject. The resulting patterns simulate something between the light source and the subject, such as tree leaves, clouds, and so on. In computer graphics, a painted texture is often used as the cookie. The texture can be purely black and white, with black completely occluding light and white being completely transparent. Gray values can also be used, with a 50% gray area filtering out one-half of the light's intensity. Colors used in the cookie texture simulate effects such as light shining through a stained glass window. These colors affect not only the intensity of the light (colors represent a value that decreases the amount of illumination, just as with the grayscale image), but also contribute to the color of the illumination reaching the subject.
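
The intensity math works out to a simple per-channel multiply, sketched here in Python: a 50% gray texel halves the light, while a colored texel both dims and tints it.

```python
# Per-channel multiply of the light color by the cookie texel it passes through.
def light_through_cookie(light_rgb, cookie_rgb):
    return tuple(l * c for l, c in zip(light_rgb, cookie_rgb))

print(light_through_cookie((1.0, 1.0, 1.0), (0.5, 0.5, 0.5)))  # half intensity
print(light_through_cookie((1.0, 1.0, 1.0), (0.9, 0.2, 0.2)))  # dimmed and tinted red
```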

A common usage for a cookie, seen in many films, is a dappled sunlight effect. A cookie is created in the form of a grayscale image to produce a pattern of tree leaves in the key light for a scene (Figure 8).

[Figure 8] Texture used as a cookie for creating a dappled sunlight effect.

This pattern is large in scale and simulates a high canopy of trees, with larger clumps as opposed to individual leaf shapes. The gray areas in the transition between black and white only partially occlude the light, and they help make the transition from light to shadow less harsh. The cookie shown here is sharp and high in contrast, but it can be processed, prior to its use in a light, to add blur or distortion. It can provide the subtle effect of a character or creature walking in and out of sunlight and tree shadows without producing harsh shapes and distracting the viewer from the intended point of emphasis in the scene. For example, the cookie used in front of the light for a scene from Star Wars: Episode I The Phantom Menace is blurred considerably, because hard-edged shadows on Jar Jar would distract from the intent of the scene. Each of Jar Jar's forefingers receives a bright hit of light where the key light makes its way cleanly through the cookie. The forest behind the three foreground characters offers additional reference for the type of lighting a cookie is often used to simulate. The effect is downplayed slightly in this close-up scene, but in previous scenes, as Jar Jar walks through the forest, the effect is more pronounced. In those scenes, the primary goal of the shots is to emphasize the motion of traveling through the forest. Well-defined shadows from the trees offer a clear indication of the distance over which the character travels. To that end, less blur on the cookie textures serves to enhance rather than distract from the shot's intent.

Instead of a single texture used as a cookie, a sequence of frames can be used to simulate motion of the occluding object. Using the leaves from the previous example, a simulation could produce the effect of rustling leaves. The textures are simulated, either with a particle system or a procedural texture map that changes over time, and are rendered out as a sequence of frames. Footage of the sky viewed through leaves blowing in the wind could also be used. The total number of textures is usually at least the length of the shot for which they are to be used. Many times, the textures are used on multiple shots, and the sequence may need to be looped (looping is continuously repeating a sequence of frames) to cover the entire sequence. As long as there are no distinctive movements and the shapes produced by the cookie textures are not too well defined, looping the sequence is usually not noticeable.
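
Looping a texture sequence over a longer shot reduces to a modulo on the frame number, as in this small sketch (frame-numbering conventions vary between pipelines):

```python
# Map a shot frame onto a shorter, looping cookie texture sequence.
def looped_texture_frame(shot_frame, loop_length, start_frame=1):
    return start_frame + (shot_frame - start_frame) % loop_length

# A 48-frame leaf sequence covering a 200-frame shot:
print([looped_texture_frame(f, 48) for f in (1, 48, 49, 120)])  # [1, 48, 1, 24]
```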

Motion Blur

Among the many computationally expensive computer graphics techniques, motion blur is one of the most important in creating believable scenes. Even with fairly limited knowledge of computer graphics elements, viewers will quickly spot that something is wrong if CG elements are rendered without motion blur. Motion blur is a result of motion relative to the camera, such as a camera moving across a still scene, subjects moving across the camera plane, or a combination of the two. Film and still cameras alike produce motion blur, and although it can be minimized with an increase in shutter speed, it is an expected phenomenon for any movement recorded on film.

Motion blur is broken down into two basic categories: transformation and deformation. Transformation motion blur happens when something is moved (or transformed) through space. An example would be a ball passing in front of a camera or a camera panning across a scene. With deformation motion blur, the movement is the result of something changing shape. An example of this type of blur would be the swinging of a tail that is part of a sitting dog. The dog is not being translated through space; the points defining the tail are being deformed to create a new tail shape. Deformation motion blur actually involves copying two sets of geometry into the renderer, whereas transformation blur only requires one set of geometry (for each item moving through the scene) and its transformation information. Each motion blur type is captured automatically by the film camera but takes quite a bit of computation to create in computer graphics.

Shutter Speed

Motion blur is a result of motion during the exposure time of a frame of film. With a computer graphics camera, however, the shutter exposes the image plane instantaneously, thereby eliminating motion blur. In order to create motion blur for a CG camera, a camera shutter is simulated. Most film cameras use a 180-degree shutter (although other shutter angles can be used) with a film speed of 24 frames per second. In this setup, the shutter is open for 1/48 of a second recording the image, and closed for 1/48 of a second while the film is advanced. The 1/48 of a second represents one-half of the time a single frame of film occupies the screen. For that reason, many 3D computer graphics rendering software packages use a 0.5 (half-frame) shutter speed to evaluate motion blur. What this means for the computer camera is the creation of an extra rendered frame to simulate the motion blur.
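
The shutter arithmetic can be written as a one-line formula: exposure time is the open fraction of the shutter disc, divided by the frame rate.

```python
# Exposure time = (shutter angle / 360) / frames per second.
def exposure_seconds(shutter_angle_degrees, fps):
    return (shutter_angle_degrees / 360.0) / fps

print(exposure_seconds(180, 24))  # 0.020833... = 1/48 of a second
```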

With two frames, the renderer can compare the difference between the two, evaluate which pieces of geometry have changed location, and blur them according to how far they have moved. With a motion blur shutter speed setting of 0.5, the CG camera will render two images, each one a half frame (1/48 of a second) apart. You can accomplish this in several different ways, but a common method is to render the actual frame and also an additional frame one-half frame later. For instance, if frame 22 is being rendered, then with a 0.5 shutter setting, frame 22 and frame 22.5 could be rendered for comparison. The software automatically does the comparison, applies a blur according to the amount of movement, and outputs a single render for frame 22. It becomes clear why this process is so computationally expensive, because it means rendering two frames for every one that is output and applying some sort of 3D filter to create the blur. Depending on the software package, a shutter speed of 0.5 may also evaluate a frame using frames 21.75 and 22.25 or frames 21.5 and 22 in order to output a motion blurred frame 22. Each method produces slightly different results, but for the purposes of this text, it is sufficient to understand the concept of two frames being calculated for the output of one.
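
The sampling schemes just described might be summarized as follows; the mode names are illustrative labels, not any package's actual options:

```python
# Possible placements of the two samples for a 0.5 shutter setting.
def motion_blur_samples(frame, shutter=0.5, mode="forward"):
    if mode == "forward":     # frame 22 -> 22 and 22.5
        return (frame, frame + shutter)
    if mode == "centered":    # frame 22 -> 21.75 and 22.25
        return (frame - shutter / 2, frame + shutter / 2)
    if mode == "backward":    # frame 22 -> 21.5 and 22
        return (frame - shutter, frame)
    raise ValueError(f"unknown mode: {mode}")

print(motion_blur_samples(22, mode="centered"))  # (21.75, 22.25)
```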

To learn more about lighting and compositing and other topics of interest to animators, check out Inspired 3D Lighting and Compositing by David Parrish; series edited by Kyle Clark and Michael Ford: Premier Press, 2002. 266 pages with illustrations. ISBN 1-931841-49-7. ($59.99) Read more about all four titles in the Inspired series and check back to VFXWorld frequently to read new excerpts.


David Parrish (left), Kyle Clark (center) and Mike Ford (right).

David Parrish went straight to work for Industrial Light & Magic after earning his master's degree from Texas A&M University. During the five years that followed, he worked on several major films, including Dragonheart, Return of the Jedi: Special Edition, Jurassic Park: The Lost World, Star Wars: Episode I The Phantom Menace, Deep Blue Sea, Galaxy Quest and The Perfect Storm. After five years with ILM and a short stay with a startup company, he was hired by Sony Pictures Imageworks to work on Harry Potter and the Sorcerer's Stone.

Series editor Kyle Clark is a lead animator at Microsoft's Digital Anvil Studios and co-founder of Animation Foundation. He majored in Film, Video and Computer Animation at USC and has since worked on a number of feature, commercial and game projects. He has also taught at various schools including San Francisco Academy of Art College, San Francisco State University, UCLA School of Design and Texas A&M University.

Michael Ford, series editor, is a senior technical animator at Sony Pictures Imageworks and co-founder of Animation Foundation. A graduate of UCLA's School of Design, he has since worked on numerous feature and commercial projects at ILM, Centropolis FX and Digital Magic. He has lectured at the UCLA School of Design, USC, DeAnza College and San Francisco Academy of Art College.