
'Inspired 3D': Compositing Techniques and Methods — Part 2

From the Inspired 3D series, David Parrish continues his look at compositing techniques and methods.

All images from Inspired 3D Lighting and Compositing by David Parrish, series edited by Kyle Clark and Michael Ford. Reprinted with permission.


The following is a continuation of the tutorial on Compositing Techniques and Methods from the new Inspired series published by Premier Press. Comprised of four titles and edited by Kyle Clark and Michael Ford, these books are designed to provide animators and curious moviegoers with tips and tricks from Hollywood veterans. The following is excerpted from Lighting and Compositing.

Edges and Blending

One of the primary concerns when layering images together is the edge quality of the elements. When placing an element over a background, problems are most likely to be introduced, and noticed, at the outer edge of the element being composited. As stated in the section describing the over function, a pre-multiplied image is necessary for many basic compositing functions. Multiplying the color channels by the alpha channel is necessary for an operation such as the over, but is not desirable for operations such as color corrections. Pre-multiplication changes the color channels in order to provide the proper blending levels for compositing functions. By changing those values, however, a different set of color channels is produced, affecting any future adjustments to color. If the over operator is the final step in the composite, the pre-multiplied image is what is needed. If color corrections are required on the original element, though, they should be performed before the pre-multiplication. If performed afterward, the edges of the image will be adjusted incorrectly. For instance, if a portion of an image's matte edge has an alpha value of 0.5, during the pre-multiply process each color channel in those edge areas will be multiplied by 0.5, becoming one-half of its original value. A color operation, such as a brightness adjustment, performed after that pre-multiplication operates on darker values in each color channel than in the original image. A brightness value of two will double the red, green and blue values for the majority of the image. In those edge areas where the pre-multiplication has reduced the values of the color channels, however, the resulting colors will be darker than if the original color values had been brightened. If that image is then used in an over function, a dark edge will appear around the element. 
If the alpha edge is fairly abrupt, meaning the transition from values of zero to one happens in a very small amount of space, the artifacts introduced by pre-multiplying before a color correction may very well be imperceptible. The problem is most pronounced with soft-edged matte channels, in which the transition from values of zero to one happens over a larger distance. In any case, it is good practice to consistently make color corrections on un-pre-multiplied images. Even the smallest of lines around the edge of an element can balloon into larger problems by the end of a complex comp script.
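The edge artifact can be sketched numerically. The snippet below is a minimal illustration, not any particular package's implementation; it assumes a gamma adjustment as the color correction (any non-linear correction misbehaves similarly) and a hypothetical single edge pixel:

```python
def over(fg_premult, fg_alpha, bg):
    """Standard over: the foreground color is pre-multiplied by its alpha."""
    return fg_premult + bg * (1.0 - fg_alpha)

rgb, alpha = 0.5, 0.5   # a soft matte edge pixel: half-covered
bg = 0.2                # background value behind that edge
gamma = 2.2             # an arbitrary color correction

# Correct order: color-correct the un-pre-multiplied value, then pre-multiply.
good_edge = over((rgb ** gamma) * alpha, alpha, bg)

# Wrong order: color-correct after pre-multiplying the edge.
bad_edge = over((rgb * alpha) ** gamma, alpha, bg)

print(round(good_edge, 4), round(bad_edge, 4))  # the wrong order comes out darker
```

The interior of the element (alpha of one) is unaffected by the ordering; only the soft edge diverges, which is exactly the dark fringe described above.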

The Earth mentioned in Chapter 6, If You Can See It, You Can Light It, presents a good example of layering several elements together. The elements combined in creating the Earth are the planet itself, the clouds and a smoke layer to be used on the outer edge. It is important to note that with any composite script, the compositor is not confined to only the images provided. A composite script may start with only three layers, but each of those three layers may be manipulated to create several additional layers. Each layer presents an opportunity to add realism to a scene. With a strong understanding of the desired look, gained from studying reference images and footage, additional layers can be created and introduced into the comp script in ways that enhance the final output.

Base Layers

The first layer to be utilized for compositing the Earth is a render of a sphere with the continents and ocean textures applied (see Figure 36). The two texture layers of land and water are combined in the rendering stage. The clouds are rendered separately as their own layer, and are simply a texture map applied to the same sphere as the Earth element (see Figure 37). Step one is placing a shadow of the soon-to-be-added cloud layer on the Earth. This is accomplished by taking the cloud layer and using it as a matte for the Earth. To do this, an alpha channel must be created that outlines the shapes of the clouds and provides transparency values for the clouds that are less dense. The alpha channel resulting from the render of the cloud layer is a solid outline of the entire Earth sphere.

[Figure 36] (figure is in two parts) Earth render on the top, and cloud shadow element added on the bottom.

Because this is not what is needed in this situation, another alpha channel must be created. One way to do this is with a luminance key operator, called a luma key for short. The luma key creates an alpha channel for an element based on the luminance value calculated from the combined color channels. The luma key creates more opaque alpha values for brighter portions of the image, with pure white areas being completely opaque, whereas pure black areas are completely transparent.
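A luma key can be sketched in a few lines. The Rec. 601 luma weights used here are an assumption; real packages expose the weighting along with gain and clip controls:

```python
import numpy as np

def luma_key(rgb):
    """Alpha from luminance: bright pixels become opaque, dark pixels transparent."""
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(luma, 0.0, 1.0)

# 1x2 toy image: a pure white cloud pixel next to a pure black gap.
clouds = np.array([[[1.0, 1.0, 1.0],
                    [0.0, 0.0, 0.0]]])
alpha = luma_key(clouds)  # white -> 1.0, black -> 0.0
```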

Cloud Layers

The alpha channel from the luma key operation on the clouds is then translated slightly down and to the right (because the sun is above and to the left). A new element is created by placing the Earth inside the translated cloud alpha and darkening it with a brightness function. This layer is now clouds of darkened Earth. By placing this cloud shadow layer back over the original Earth, cloud shadows are simulated (see Figure 36). Before placing the cloud shadow layer back over the Earth, it is necessary to perform an inside operation to limit the cloud shadows to the Earth. Otherwise, because the cloud shadow layer is translated down and to the right, the shadows would extend past the Earth's edge. This method of creating shadows is not technically accurate (the fact that the offset shadow would extend beyond the Earth's edge instead of wrapping around is a clear sign of this), but at this distance it is a very good trick. It is much faster and easier than rendering an actual shadow pass of clouds onto the Earth (and it also saves disk space). Because the most inaccurate portion of the shadows is to the lower right of the Earth, which is the darker side opposite the sun, it works well.
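The trick can be sketched on a toy scanline. The offset, darkening gain, and array sizes below are purely illustrative:

```python
import numpy as np

# Toy 1x4 scanline: earth color, earth alpha (last pixel is off the limb),
# and a cloud alpha to reuse as the shadow matte.
earth_rgb   = np.full((1, 4, 3), 0.6)
earth_alpha = np.array([[1.0, 1.0, 1.0, 0.0]])
cloud_alpha = np.array([[0.8, 0.0, 0.0, 0.0]])

# Translate the cloud matte toward the shadow direction (here one pixel right).
shadow_matte = np.roll(cloud_alpha, shift=1, axis=1)

# The 'inside' operation: limit the shadow to the planet itself.
shadow_matte = shadow_matte * earth_alpha

# Darkened-earth layer composited back over the original earth.
darkened = earth_rgb * 0.3
result = (darkened * shadow_matte[..., None]
          + earth_rgb * (1.0 - shadow_matte)[..., None])
```

Without the multiply by `earth_alpha`, the shifted matte would darken pixels past the limb, which is exactly the spill the inside operation prevents.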

[Figure 37] (figure is in two parts) Cloud render on the top, and slightly blurred cloud render on the bottom.

The next step is to add the clouds that are casting the shadows (see Figure 37). The cloud render is fairly hard edged, so a slight blur is performed on the element. Notice that the blur not only softens the edges, but also reduces the contrast. The amount of blur is kept slight to maintain the distinctive shape of the clouds, yet still reduce their harshness and contrast.

Haze Layer

In addition to the clouds, another pass of the Earth is rendered to help with the edges. To simulate the light refracting through the Earth's atmosphere at the edges, a separate haze pass is rendered (see Figure 38). Because this pass is intended to extend beyond the edges of the original Earth element, it is rendered on a sphere scaled up 5% larger than the original Earth sphere. The smoke texture used for this render is generic and is simply used to break up this particular element.

[Figure 38] (figure is in two parts) Haze render on the top, and blurred haze render outside the Earth alpha on the bottom.


Combining the Earth Layers

Once the elements have been created, the fun of layering them together into a final image begins. The base layer for this composite is the Earth render with the darkened clouds on top of it (see Figure 36). The next layer, the clouds (see Figure 37), is then placed onto the Earth. This step can be accomplished with either an add operation or an over operation (utilizing the alpha channel created with the luma key). A benefit of the add operation is the control over the precise amount of clouds added to the Earth. For this comp, the clouds were added in at a value of 75%. Another benefit to using the add operation is the mixture of cloud color with the existing color channels of the Earth. Adding a gray cloud RGB value to the green continent backdrop of the Earth results in a cloud with a green tint, as if the cloud were partially transparent. This can also be accomplished with the over function, but an extra step is involved to provide the clouds' alpha channel for controlling the transparency. After the clouds are added, the final layer in the composite is the haze for the outer edge. This layer offers the same options for layering as the cloud layer, and it too is composited into the image with an add operator. The outer edge is added in at 90% to strongly differentiate it from the background. Once all of the Earth's layers are combined, they can be placed over a star field to see how the element melds with the background (see Figure 39). Notice how the transparency of the outer edge layer affects the stars behind it, providing interaction between the Earth element and its surroundings. For the purposes of this book, each layer is a little brighter or more saturated than it might otherwise be, to clearly illustrate the points. In the production environment, the art direction might call for such chromatic strength or something more subtle. 
If the spacecraft is headed for the Earth, and the director wishes the emphasis to be on the destination, then the Earth may well show up in a scene much as depicted here.
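A weighted add of this kind is a one-liner. The 75% figure comes from the text above; the pixel values themselves are invented for illustration:

```python
import numpy as np

def add_mix(base, layer, amount):
    """Additive layering with a mix control, clipped to display range."""
    return np.clip(base + layer * amount, 0.0, 1.0)

earth = np.array([0.10, 0.40, 0.15])   # greenish continent pixel
cloud = np.array([0.50, 0.50, 0.50])   # gray cloud pixel

with_clouds = add_mix(earth, cloud, 0.75)
# The green channel stays dominant, so the cloud reads as semi-transparent.
```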

Although each of the layers in the Earth composite might seem like elements created independently from each other, it is important to realize the interconnectedness of these pieces. The first step of most composites is to place the basic foreground over the basic background. In this case, the base Earth render would be placed over a simple star field, or possibly even a crude, low-resolution image of stars. This first, quick or rough comp (sometimes referred to as a slap comp) is a vital first step in the compositing process. The broad strokes are put down first, and the details can then be added as the shot progresses. In compositing, the first 80 to 90% of the process is usually straightforward. The last 10 to 20% is where the serious amounts of time and energy are necessary for adding the details and fine tuning. Because the first part of the job is easier, it makes sense to look at several rough comps in a sequence before putting the time and energy into the details. The rough edit of these simple comps is valuable in determining whether a shot will work as intended. It's a much more efficient process if those decisions are made before the painstaking detail work is done. This is true for large film productions as well as small personal productions. The sooner the shots can be cut together, even in the simplest form, the easier it is to get a handle on the need for details, and where time can be saved on things that are unnecessary or will never be seen.


Edges in compositing go unnoticed if they're done well, but can destroy a shot if not handled properly. The two most glaring problems a viewer normally spots in a poorly composited shot are lighting and edge problems. When an element's lighting does not match the rest of the scene, the viewer's eye is drawn to it as odd rather than as a visual point of interest. Poor lighting can also accentuate problems with the edge of an element, making a tough problem even worse. If elements are created on the computer graphics side of things, there is no excuse for mismatched lighting. The CG lights can be set up to match any environment and made consistent from element to rendered element. The problem becomes more difficult when bluescreen or greenscreen elements do not match the lighting of the scenes with which they are to be combined. A bluescreen character with high fill levels and a key to the left can be extremely difficult to integrate into a live-action scene with the key on the right and low fill levels. For this reason, it is important in the planning stages of shots, particularly for bluescreen or greenscreen elements, to create a solid lighting plan and stick with it. Changes can and will be made down the line, but major shifts in lighting emphasis, such as switching the key light to the opposite side, can make for many long days in the compositor's world.

The edges in bluescreen and greenscreen extractions can take a considerable amount of work. Element edges in compositing definitely fall into the final 10 to 20% category that can be the most challenging part of a shot. Fortunately, there are several powerful software options that provide excellent tools for simplifying this task (Ultimatte from the Ultimatte Corp. and Primatte from Photron Inc. are two examples). The ins and outs of extractions can be learned from the software manuals, from a mentor, or from trial and error. There's no substitute for experience in this area, so practicing with bluescreen and greenscreen extractions is important. One good type of practice subject is a bluescreen or greenscreen element of a person with long hair, which can be extremely challenging due to the fine detail and transparency of the hair. Another practice scenario involves a bluescreen element shot with excessive blue light spilling across the subject. This blue spill needs to be removed or suppressed and replaced with colors matching both the subject and the scene. Before, during, and after each operation in the extraction process, it is necessary to pay careful attention to the edges of the element. Check for deterioration, discontinuities, loss of edge detail, excessive fuzziness or crispness, and ensure that the matte resulting from the extraction accurately represents the shape of the object being composited.

Grain and Finishing Touches

After combining the elements and sorting out the layering processes, the final touches can be applied to the composite. These final touches vary depending upon the types of input and output the shot requires. They include adding grain to match film footage, carefully checking each color channel for details and inconsistencies, evaluating the black levels to ensure consistency between CG elements and film or video elements, and adding in additional renders or elements for small details.

[Figure 40] Noise field to simulate film grain, shown in red, green, blue and all color channels.


For film work, in which computer graphics elements are added to film footage, it is necessary to match the grain of the film. Grain is the noise variation due to the uneven distribution of the light-sensitive crystals that capture the image. Each color has its own layer of crystals on the film, so each color has different grain properties. Additionally, each film type has its own distinctive type of grain due to its chemical composition, and the grain amounts can vary greatly from one roll of film to the next, based on the manufacturing run. The best way to view the grain is by zooming very close on the image and checking each individual color channel. In most cases, the blue channel has the densest grain, followed by green, with the least amount usually in the red channel. The grain appears as a subtle change to the image, but is more noticeable in moving footage. Grain is simulated in the compositing stage with a noise function. This noise function can be adjusted in terms of scale and intensity in each color channel to closely resemble the look of the background film image (see Figure 40). The scale and intensity of the grain in the example shown here are magnified to show the element more clearly. There is also grain in digital video footage that sometimes goes unnoticed. The light-sensitive chip in a video camera is also susceptible to manufacturing imperfections and electromagnetic interference, which can add noise to the image recorded digitally (see Figure 41). The digital footage of the beach and pier clearly shows the grain effect in the sky of the blue channel. If grain is not added, computer graphics elements will stand out as too crisp and clean. Grain is one of the many elements utilized by the compositor to combat the perfect CG look.
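Per-channel noise of this sort is straightforward to sketch. The channel strengths below are invented, chosen only to mimic the typical ordering described above (blue densest, red least):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64

# One independent noise field per channel, with per-channel intensity.
strengths = (0.01, 0.02, 0.04)  # red, green, blue
grain = np.stack([rng.normal(0.0, s, (h, w)) for s in strengths], axis=-1)

plate = np.full((h, w, 3), 0.5)            # stand-in for the background plate
grained = np.clip(plate + grain, 0.0, 1.0)

# The blue channel now carries visibly denser grain than the red.
```

In practice each channel's scale and intensity would be matched by eye against a zoomed-in region of the scanned plate rather than set to fixed numbers.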

[Figure 41] Digital video frame on the top and the blue channel isolated on the bottom to show the grain.

[Figure 42] Separate diffuse and specular renders of a beach ball element.

The following steps demonstrate techniques used in placing a simple computer graphics beach ball into the scene of Figure 41. The beach ball is rendered in two separate passes, one for the diffuse light contributions and a second for the specular (see Figure 42). By rendering these elements separately and then compositing them with the add operator, the specular highlight can be quickly and easily increased or decreased.
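Recombining the two passes is a simple add, with a gain on the specular pass acting as the highlight dial. The pixel values and the 60% gain here are hypothetical:

```python
import numpy as np

diffuse  = np.array([0.30, 0.10, 0.05])   # a red patch of the ball
specular = np.array([0.40, 0.40, 0.40])   # the white highlight pass

# Separate passes let the highlight be re-balanced without re-rendering.
ball = np.clip(diffuse + specular * 0.6, 0.0, 1.0)   # specular dialed to 60%
```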



Two additional elements for this scene are shadow passes. When combining computer graphics elements with live-action footage, it is often necessary to render separate shadow passes. The shadows to be rendered must be cast upon a surface in the CG scene. This surface is a ground plane placed to match the ground plane of the live-action scene. With rudimentary pushing and pulling of points on the ground plane model, a rough or undulating surface can be simulated. The background image can be used as a guide in the 3D software package. Keep in mind that the ground surface need not match precisely, because the shadows are not the focus of most shots. Providing enough variation in the ground surface to break up the general shape of the shadow will suffice. The rendered shadow pass can then be used as a mask channel for darkening the background image (see Figure 43). Notice also the darker area on the inside left of the shadow. This is the second shadow render pass, referred to as the contact shadow. This pass represents the darker area closest to where the element comes into contact with the ground. In most scenarios, the ambient light in the scene lightens the shadow as it gets farther from the object casting it. This depends on a variety of external conditions, such as the amount of bounce light, additional light sources, and the intensity of the key light, but as a general rule, the contact shadow will help integrate a CG element more cleanly. The contact shadow pass is often blurred more than the full shadow pass, and is used to make that area slightly darker than the original shadow.
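Both shadow passes act as masks that darken the plate, which might be sketched as below; the gains are hypothetical and would be tuned by eye:

```python
import numpy as np

def apply_shadows(bg, shadow_matte, contact_matte,
                  shadow_gain=0.5, contact_gain=0.3):
    """Darken the plate through the shadow matte, then deepen the contact area."""
    out = bg * (1.0 - shadow_matte[..., None] * shadow_gain)
    out = out * (1.0 - contact_matte[..., None] * contact_gain)
    return out

bg = np.full((1, 2, 3), 0.8)        # toy two-pixel background plate
shadow  = np.array([[1.0, 0.0]])    # full shadow on pixel 0 only
contact = np.array([[1.0, 0.0]])    # contact shadow there as well

out = apply_shadows(bg, shadow, contact)
```

Multiplying the plate down through a matte, rather than compositing a gray element over it, preserves the background's own texture inside the shadow.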

Grain and Edge Blending

Using the beach ball element, two more useful elements are created for integrating the ball into the background scene: a grain element and an edge-blending element. The grain element is created with a noise field matching the noise existing in the background element. In this instance, a small bit of the background sand is added in with the grain to provide additional break-up (see Figure 44). The elements in Figure 44 are enlarged to better show the detail. The new grain element is then placed inside the alpha of the beach ball and combined with an add function to the beach ball element. The second element, shown on the bottom of Figure 44, is an edge element. This element is created by applying an edge detection filter to the beach ball element. Most compositing software packages provide an edge detection filter. It is important to note that the edge filter's utility depends heavily upon the contrast level of the element involved. Edge detection algorithms are based on contrast, so it is frequently necessary to boost the contrast on an element before using such a filter. Once the edge matte is created, it can be used for a number of final compositing tweaks. In this case, it will be used for a slight edge blur on the final composite.
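A minimal gradient-magnitude edge matte, with a contrast boost parameter, can stand in for a package's edge detection node. The disc below is a hypothetical substitute for the beach ball's alpha:

```python
import numpy as np

def edge_matte(channel, boost=1.0):
    """Edge detection via gradient magnitude; boosting contrast first helps weak edges."""
    gy, gx = np.gradient(channel * boost)
    return np.clip(np.hypot(gx, gy), 0.0, 1.0)

# A hard-edged disc stands in for the beach ball's alpha channel.
yy, xx = np.mgrid[0:32, 0:32]
ball_alpha = (((xx - 16) ** 2 + (yy - 16) ** 2) < 100).astype(float)

edges = edge_matte(ball_alpha)   # nonzero only in a thin ring at the boundary
```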

[Figure 44] Beach ball grain element and edge element.


Color Adjustments

These elements can now all be layered together to help place the beach ball on the beach. The first step in the process is placing the beach ball over the background plate in a simple A over B composite (see Figure 45). The beach ball does not exactly blend in seamlessly with the background. The render itself is extremely bright and high in contrast, with anti-aliasing problems around the edges and no shadow to tie it to the ground. It needs help, and that's exactly what the shadow, grain, and edge elements will do. Adding those elements, along with color corrections to the beach ball, makes this scene much less offensive to the eye. This rough comp provides direction and clearly illustrates the problems with the element. Before the additional layers are added, the beach ball saturation is reduced, and the specular highlight is attenuated to 60% of its original value. These values are chosen by trial and error, adjusting each operator until the ball blends more closely with the background. This can be evaluated by taking pixel readings from the background to determine saturation and brightness values, and then adjusting the beach ball values to put them in the same range. Once the numbers are fairly close, look at the ball and see if it looks right. This is one point where it benefits the compositor to take a step back and look outside the numbers. If it still looks too saturated, even if the numbers match the background, reduce the saturation. The eye of the viewer is the final judge, and even with precise value matching, a shot can still look wrong.
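The saturation pull-down can be sketched as a mix toward each pixel's luma. The Rec. 601 weights, the pixel value, and the 40% amount are all illustrative assumptions:

```python
import numpy as np

def desaturate(rgb, amount):
    """Mix each pixel toward its luma: amount=0 leaves it alone, 1 is fully gray."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    return rgb * (1.0 - amount) + luma[..., None] * amount

ball = np.array([[0.9, 0.3, 0.2]])      # an over-saturated red pixel
softer = desaturate(ball, 0.4)          # pull 40% of the way toward gray
```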

[Figure 46] Final comp of the beach ball over beach background.


Combining the Beach Ball Layers

The grain can now be added to the beach ball element. If the grain is adjusted to match the background element closely, this should be a straightforward add process. Once again, if the grain looks incorrect, adjust the values to help the ball blend in with the background. With the grain applied to the beach ball, the result can be placed over the background. Because the background has already been darkened using the shadow mattes, the beach ball should fit nicely over its shadow. The last step in this example is the application of the edge blending. In the photographic reproduction of most scenes, there is a certain amount of interaction between the elements. It is noticeable particularly at the edges of objects, in which a small amount of the background appears to bleed into the edge of the foreground element. This can be simulated with the edge matte created earlier. In this example, the beach ball element is blurred through the edge matte. This creates a semi-transparent thin edge to the ball, allowing pixels from the background to bleed through slightly. This is a delicate operation, and keeping the edge matte thin, as well as using a reasonable blur amount, will prevent the edge of the ball from appearing transparent (see Figure 46). An alternate method for blending the edge with the background begins with putting the background inside the edge matte after the beach ball shadow has been applied to the background. This provides the beach ball's edge, but with the background inside of it. This new element can then be blurred and carefully added to the beach ball after it has been placed over the background. This essentially accomplishes the same thing as blurring through the matte but with different operators. There is rarely a single solution to a compositing challenge, and experimenting with different approaches often helps the compositor develop new tricks and a stronger understanding of the discipline.
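The blur-through-a-matte step might be sketched as follows; the 3x3 box blur is a crude stand-in for a proper blur node, and the step image and matte are toy data:

```python
import numpy as np

def box_blur(img):
    """Tiny 3x3 box blur built from circular shifts (edges wrap in this toy)."""
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

def blur_through_matte(img, matte):
    """Blur only where the (edge) matte is set, leaving the interior crisp."""
    return box_blur(img) * matte[..., None] + img * (1.0 - matte)[..., None]

# Vertical step image: black left half, white right half.
img = np.zeros((8, 8, 3))
img[:, 4:] = 1.0

matte = np.zeros((8, 8))
matte[:, 4] = 1.0              # thin matte along the step

out = blur_through_matte(img, matte)
```

Only the masked column softens toward the background value; everywhere else the image is untouched, which is why a thin matte keeps the ball's interior from looking transparent.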

Each of these finishing touches can add a great deal to a shot, but many may also go unnoticed. When adding these touches to a shot, carefully evaluate the time involved in their creation versus the ultimate visual payoff. For instance, you might spend an entire day adding just the right specular glint to a character's fingernail for two frames of a shot. If no one notices it, then it hasn't really improved the shot, and the time could have been better spent making progress on other shots. There is a fine line between appropriate attention to detail and excessive tweaking. Until you've gained the experience to confidently make this call on your own, it is vital to solicit the opinions of peers and supervisors to help develop this sensibility. There are many valuable concepts presented in this chapter, but it only begins to scratch the surface of compositing as an art and a profession. With the ease of shooting digital video and capturing it on a computer, opportunities to practice are readily available. Observation and reference remain the most valuable tools, because compositing is ultimately understanding how 2D layers come together to simulate a 3D world.


Author David Parrish (left), series editor Kyle Clark (center) and series editor Mike Ford (right).

David Parrish went straight to work for Industrial Light & Magic after earning his master's degree from Texas A&M University. During the five years that followed, he worked on several major films, including Dragonheart, Return of the Jedi: Special Edition, Jurassic Park: The Lost World, Star Wars: Episode I The Phantom Menace, Deep Blue Sea, Galaxy Quest and The Perfect Storm. After five years with ILM and a short stay with a startup company, he was hired by Sony Pictures Imageworks to work on Harry Potter and the Sorcerer's Stone.

Series editor Kyle Clark is a lead animator at Microsoft's Digital Anvil Studios and co-founder of Animation Foundation. He majored in film, video and computer animation at USC and has since worked on a number of feature, commercial and game projects. He has also taught at various schools, including San Francisco Academy of Art College, San Francisco State University, UCLA School of Design and Texas A&M University.

Michael Ford, series editor, is a senior technical animator at Sony Pictures Imageworks and co-founder of Animation Foundation. A graduate of UCLA's School of Design, he has since worked on numerous feature and commercial projects at ILM, Centropolis FX and Digital Magic. He has lectured at the UCLA School of Design, USC, DeAnza College and San Francisco Academy of Art College.