'Inspired 3D: Lighting and Compositing': Lighting a Production Shot

In another excerpt from the Inspired 3D series, we step through the collaborative efforts involved in lighting and compositing a shot.

All images from Inspired 3D Lighting and Compositing by David Parrish, series edited by Kyle Clark and Michael Ford. Reprinted with permission.

This is the latest in a number of adaptations from the new Inspired series published by Premier Press. Comprising four titles edited by Kyle Clark and Michael Ford, these books are designed to provide animators and curious moviegoers with tips and tricks from Hollywood veterans. The following is excerpted from Lighting and Compositing.

The previous chapters described techniques and shared the knowledge and experience of established industry professionals. This chapter begins the practical application section, in which some of those tips and concepts are applied to a real-world example. The next two chapters step through the process of lighting and compositing a shot, from the time the scene data is collected to the point when the shots are complete and labeled as final. This description is a continuation of the processes shared in each of the four books in the Premier Press Inspired series. The models for this scene are provided by Tom Capizzi, and the steps involved in creating and texturing the models used here are covered in his book, Inspired 3D Modeling and Texture Mapping. The process of creating the character rig used in the scenes, which includes the joints, facial controls and anything enabling the character's movements, is covered in the book by Michael Ford and Alan Lehman titled Inspired 3D Character Setup. Kyle Clark gives life to the character, and in his book, Inspired 3D Character Animation, covers the process of animating the character in the story. That brings us to the lighting and compositing.

This collaborative effort mimics the production pipelines found in digital production studios. The communication, hardware and software issues, and critiques of the work are all a part of the process. The project revolves around a story created specifically for this book series, with the intent of exploring the process of creating our own pipeline and creative content. Sharing ideas and data with a group of people scattered across the country has proven at times to be difficult, but the overall experience has been both enlightening and successful.

Without a production crew tracking every detail and every element, each author is responsible for his own organization and prioritization. The constraints include hardware considerations, the scope of the project, deadlines, and budget. With four books in production, in addition to the creation of the scenes used for the practical application sections, the process took on a slightly different path than that of a visual effects studio. The machines used for the renders are our own personal computers, not the hundreds of processors in huge render farms available at large studios. Iterations are limited, and planning takes on an even greater role in this scenario. The budget is not that of a summer blockbuster film, so our examples emphasize the process while still creating an interesting character that serves the story. The techniques covered earlier in this book are put to use here, with more emphasis now on the workflow and less on the technical aspects of how it all works. With that in mind, I'll start the process and you'll see what the computer graphics world has in store for the newborn character.

[Figure 1] Storyboards for CG animation shots SF-00 and SF-01.

[Figure 2] Storyboards for CG animation shots SF-02 and SF-03.

The Goal of the Shot

To begin any shot, the TD first needs a clear understanding of the goal of the shot and how it fits in with the surrounding shots. The storyboards offer a good starting point for obtaining a general idea of the story points and how they cut with each other. With the project for this book, called ISF (Inspired Short Film is the not-so-great working title), discussions were conducted early in the process regarding the story and the creation of the artwork. Once the story was ironed out and the script was created, storyboards were drawn to help with visualization (see Figures 1 and 2). Artwork used as a reference is usually discussed with the supervisor or the person in charge of the artistic direction during the shot production. The supervisor acts as a go-between for the original creators of the artwork and the people working on the shots. Because each of the participants in ISF was actively involved in the storyboarding process, there was no need for such a middleman. Storyboards offer a good starting point, and they are valuable to look back on at times as a reminder of the story's intent. They are simply guides, though, and things can change as the shots develop and detail is added.

In an effects facility, a storyboard for a shot is usually accompanied by a chart providing many other useful bits of information. Because the ISF production was small, each of the participants involved was responsible for recording and maintaining this information during the course of the project. An online database was created, and each of us made updates and changes as the project progressed. Through e-mail, each member of the group was notified of every change, and all messages were stored for future reference if necessary. The pipeline for ISF required the ability to send large images and scene files back and forth among the participants on the project. Using this workflow, the requirements for the lighting and compositing were established.

Shots in Context

The surrounding shots and the story are important aspects of the goal of the shot. Familiarization with the sequence and how the shots cut together is useful both for establishing continuity and for understanding the major points of interest. The scenes depicted in the storyboard drawings of Figure 1 cut directly together. On the left is an opening shot establishing the scene in a workplace of cubicles. As the first shot in the sequence (labeled SF-00), it sets up the mood for the shots to follow. Shots in a digital production require names, and a common format is a two-letter abbreviation representing the sequence name (in this case, Short Film) and a two-digit number representing the shot number in the sequence. Shot SF-01, depicted on the right of Figure 1, represents the interior of one of these cubicles and our hero stretching and yawning. Because these two shots cut together, it is necessary to match the general mood and lighting scheme. They are, however, different camera angles showing different areas of the workplace, so there is some artistic leeway with regard to perfect continuity. Cutting to the third shot in the sequence does not offer this luxury (see Figure 2, left). Shot SF-02, a close-up of the character yawning, requires accurately matching the previous shot's lighting. The fourth shot in the sequence (see Figure 2, right) shows the character heading back to his chair and settling down to work. Once again, shot SF-03 must maintain the lighting and look of the previous shot due to the similar camera angle on the character in the shot.
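The naming convention described above is simple enough to express as a small helper. This is only an illustration of the format; the function name is hypothetical, not part of any production pipeline:

```python
def shot_name(sequence_abbrev: str, shot_number: int) -> str:
    """Build a shot name from a two-letter sequence code and a shot
    number, zero-padded to two digits (e.g. Short Film shot 1 -> SF-01)."""
    return f"{sequence_abbrev.upper()}-{shot_number:02d}"

print(shot_name("sf", 0), shot_name("sf", 2))  # SF-00 SF-02
```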

The story represented by these boards is that of an energetic computer graphics artist dealing with many of the craft's frustrating events. The four shots pictured are the opening shots of the short animated piece. They offer insight into the flow of the story and the goals for making each shot fit with those surrounding it. Each of these goals is related to the ultimate vision for the look and feel of the shot. Many opinions are involved in creating and improving the work, but in the end, it is the director's vision that determines when a shot is ready for a spot in the final product.

The Director's Vision

Maybe it's Spielberg or maybe it's you, but someone has the vision for the shot, and it's up to you to understand it and produce it with computer graphics imagery. This is the goal, and without an understanding of that, the most beautifully lit scene may not achieve the result requested. Developing new techniques and coming up with amazing new looks is great, but creating an image showing off brightly lit, perfect reflection maps brings few plaudits if the director asks for a dull, unobtrusive CG element.

So how do you know what the director's vision is and how to obtain it? Many of the answers lie in the meetings that take place at the beginning of a project. With a large film project, these meetings occur long before the TD ever sees a shot, as the director, scriptwriters and studios work to get a project into production. With ISF, however, the process was quite different. Due to the collaborative nature of the project, we each had input at the beginning and all through the production on how the scenes would look and on the ultimate vision for the project. Each of us played, in part, the role of director and shaped the vision for the project. This process offers the opportunity for each person involved in the production pipeline to offer input on not only creative concepts, but also optimization. As the specific portions of the story were discussed, each member of the team was able to offer ideas and suggestions on what was possible, what might help make things move more smoothly and what would be unlikely given the time and the resources. In terms of lighting and compositing, it was important for me to point out when elements would be required and in what format they would need to be created. Directing by committee takes a tremendous amount of communication and willingness to compromise, but, in the end, the result has been a rewarding experience.

Setting Up the Lights

Armed with a clear visual understanding of the shot's intent, it's time to add the lights and make it look beautiful. The shot begins with analysis of both the background plate and the online reference to determine where the initial lights are placed. If the scene is completely digital, as the shots in this chapter are, then all elements necessary to render the scene may be in one 3D scene. Depending on the complexity of the elements, scenes are split up for easier management, and either combined during the rendering process or afterward in the compositing stage.

[Figure 3] Top view (left) and side view (right) of the scene for shot SF-02.

To understand the scene, it is helpful to view it from several different angles, in addition to the scene's camera. A top view and a side view offer a feel for the scale of the scene, how far away to place the lights, and how the character relates in 3D space to the set (see Figure 3). The shot here is SF-02, which is a close-up of the character yawning. The camera for this scene is the highlighted green camera. The other cameras represent views from other scenes. The view from the camera shows exactly how the character fits within the picture plane (see Figure 4). At this point, it is helpful to look at several different frames throughout the shot's length. Creating a quick, hardware-rendered flipbook (see Maya's manual entries for playblast) provides reference showing the character movement as the shot plays through its frame count. Take note of which character parts stay in frame, which parts may enter or leave frame, and where the eye is drawn by the motion throughout the scene. The character's animation and framing work together with the lighting to create the visual impact agreed upon when establishing the vision for the project.

The Key-Light

Now it's time to create and locate the lights. From the storyboards and the art direction of this shot, it is apparent that the character is the emphasis. The framing leaves little other choice, and the yawn is the obvious story point. With that in mind, start with the basic three-light setup. Although the shot does not at first glance lend itself to the strong key setup that exists in a scene with direct sunlight, it is still a good starting point. To identify the direction of the key, the scene is analyzed in terms of possible lighting contributions. The desk lamp in the corner has been chosen as the key-light for this scene, because it has an obvious presence in the environment. With a live-action background plate, the set lighting dictates the starting point for the CG lighting. Before adding the fill and rim, the key can be tested by itself and adjusted to create the most pleasing composition for the scene. Using only the key creates a high-contrast scene and provides a base to build from with the other lights. Once the key-light looks good, the others are integrated much more easily. Do not underestimate the power of a single light source. For an excellent example of dramatic but minimal lighting, watch the classic film Citizen Kane. By using harsh lighting techniques, Orson Welles creates a world of striking contrast and visual intrigue. The character in shot SF-02, rendered with only a key-light, might not quite be up to Citizen Kane standards, but it's a start (see Figure 5).

[Figure 5] Shot SF-02 rendered with the key-light only, from two different light positions.

It is common for a TD to work on multiple shots simultaneously, and many times similar shots are handled by the same TD to help maintain continuity. Shots SF-01, SF-02 and SF-03 are all similar shots and can likely utilize many of the same lights. A lighting rig, made up of the lights the scenes have in common, helps to maintain consistency. There are a number of ways to create the rig, the simplest being to create the lights, export them into a separate file and copy that file into each scene. If the setup is more complicated, involving constraints to geometry the scenes share, for instance, a simple scripted set of commands can automate import of the lighting rig. Lighting rigs are particularly useful on large productions with many shots of the same creature or character in similar lighting conditions.
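As a rough sketch of what such a scripted import might look like, the Python helper below simply pairs each shot scene with the MEL `file -import` command that would bring the shared rig into it. The helper and file names are hypothetical; the `-import` and `-namespace` flags are standard Maya `file` command flags:

```python
def build_rig_import_commands(rig_file, shot_scenes):
    """For each shot scene, pair it with the MEL command that imports the
    shared lighting rig. Running each command inside Maya with the scene
    open would bring the common key/fill/rim lights into that scene."""
    commands = []
    for scene in shot_scenes:
        commands.append((scene, f'file -import -namespace "lightRig" "{rig_file}";'))
    return commands

for scene, cmd in build_rig_import_commands("lightRig.ma",
                                            ["SF-01.ma", "SF-02.ma", "SF-03.ma"]):
    print(scene, "->", cmd)
```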

[Figure 7] Shot SF-01 (left), SF-02 (middle) and SF-03 (right) rendered with the key-light at a high angle.

Testing Key-Light Positions

Because the lighting for shot SF-02 needs to work well with shots SF-01 and SF-03, key-light tests on frames from those shots are helpful as well (see Figures 6 and 7). In Figure 6, the key-light is fairly low and in front of the character. This position most closely simulates the desk lamp, but using the key-light from a low angle such as this results in a frightening-looking character. Lighting from low angles is commonly used in horror films, but the goal is not to create a demonically possessed CG artist. Because most people are seen with lighting from above, whether it is from the sun or a light fixture, the unusual shadows cast by lighting from below cause the character to look otherworldly. The key-light positioning in Figure 7 offers a more flattering look for the character.

CG Light Choice

The key-light is a spotlight with no color adjustments at this point. The cone angle is wide enough to extend beyond the view of the camera, and the edges are softened using a value of 20 in the penumbra field. Each person develops his own techniques and preferences for choosing lights and placing them in a scene. There are no hard-and-fast rules in this process, because the final image is judged and not the CG lighting setup. The examples shown here use only spotlights because of their versatility and the level of control they offer. The key-light in each of these scenes uses depth-map shadows with a 2K resolution. This is a fairly high resolution for shadow maps, which helps reduce the artifacts apparent in self-shadowing situations. Figure 8 is a detail view from a frame of shot SF-01 showing the key-light shadow from the character's left arm casting onto his body. In the top image, the depth-map shadows are rendered at a resolution of 512 pixels square, and in the bottom image they are increased to 2,048 pixels square. The top image shows artifacting in the shadow's edges, while the increased shadow-map resolution in the bottom image has smoothed the shadow's edges considerably.

[Figure 8] Shot SF-01 shadow detail with 512 pixels square shadow-map resolution (top), and with 2,048 pixels square shadow-map resolution (bottom).
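The improvement from 512 to 2,048 comes down to how much world space each shadow-map texel has to cover. A quick back-of-the-envelope check makes the point; the 10-unit coverage figure below is assumed for illustration, not measured from the actual scene:

```python
def shadow_texel_size(coverage_width, map_resolution):
    """World-space width covered by one shadow-map texel.

    coverage_width: width (in scene units) of the region the light's
    depth map covers; map_resolution: square map size in pixels."""
    return coverage_width / map_resolution

# Assume the key-light's depth map covers roughly 10 units of the cubicle.
low = shadow_texel_size(10.0, 512)    # ~0.0195 units per texel
high = shadow_texel_size(10.0, 2048)  # ~0.0049 units per texel
print(low / high)  # 4.0 -- each texel edge shrinks by a factor of four at 2K
```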

Of note, relating to the key-light, is a special light added for shots SF-01 and SF-03. The desk lamp providing the primary source of illumination for the scene is just behind the computer, in the screen-right corner of the cubicle. The light itself is not visible from the camera, but its effect on the wall is. The desk lamp is aimed at the wall, and the resulting light bounces to illuminate the cubicle. Because the renderer is a scanline renderer and does not ray trace, this effect is simulated with two lights. The key-light shines out onto the scene, representing the light bounced from the wall. A special light for the desk lamp shines from the desk up onto the cubicle wall (see Figure 9). The only illumination provided by the desk lamp visible from the camera is on the far wall of the cubicle.

[Figure 10] Fill light placement in plan (left) and elevation views (right).

Fill Light

Now it's time to add the fill and rim lights to the lighting rig. Because the rig should be as generic as possible, SF-01 is a prime candidate for a template lighting shot. The camera for this shot provides the widest view of the scene, so if the lights illuminate the proper elements here, they also cover everything in the tighter shots. The close-up shots may require additional lights for details, but that comes later in the process. The first fill light typically illuminates the side of the character opposite the key-light. Here, the fill is placed off to the screen-left side and lower in elevation than the key to more evenly fill in the left side of the character (see Figure 10). Figure 10 shows the key-light as well as the camera. Note the lines extending from the camera, indicating its field of view. This is helpful in determining the placement of certain lights to cover the camera's entire viewing area. Figure 11 shows the rendered result of the fill light's contribution.

Rim Light

The rim light can be tricky and often requires a good deal of experimentation. To match the typical film lighting techniques, it is usually placed above and behind the character to be emphasized. In this scene, as with many movie scenes, there is no motivation for this light. There is not an actual light in the scene that would produce the rim lighting, but its purpose is to outline and accentuate the character. In my experience, the placement of CG rim lights tends to be higher and more toward camera than their real-world counterparts. Figure 12 shows the plan and side views of a rim light's placement, and Figure 13 shows two rendered variations of the rim light. Notice in the left image of Figure 13 that the character's screen-left arm has an intensely bright highlight as a result of the rim light's location almost directly above. The rim effect is usually a fairly narrow, bright edge, just big enough to define the character's outline. The image on the right of Figure 13 (which matches the light position shown in Figure 12) shows a narrower rim effect, although because of the character's position, the amount of rim varies greatly between the hair and arms. If a rim with consistent width is requested, it is sometimes necessary to add multiple rim lights that illuminate only certain portions of the character. These additional lights can either be created to shine only on specific geometry (called object-centric lighting in Maya) or can be zoomed in close with large penumbra settings so their edges are not noticeable.

[Figure 12] Rim light placement in plan (left) and elevation views (right).

[Figure 13] Two positions for the rim light in shot SF-01.
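Why a rim light from behind reads as a narrow, bright edge can be seen with a simple facing-ratio sketch: a surface point brightens only where its normal turns away from the camera, so the effect hugs the silhouette. This is a shader-style illustration of the principle, not the production setup:

```python
def dot(a, b):
    """Dot product of two 3D vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def rim_term(normal, view_dir, power=4.0):
    """Rim brightness from a facing ratio: zero where the surface points
    at the viewer, one at grazing silhouette angles. A higher power
    narrows the bright edge, much like a tighter rim light."""
    facing = max(0.0, dot(normal, view_dir))
    return (1.0 - facing) ** power

# A surface facing the camera gets no rim; a silhouette edge gets the full effect.
print(rim_term((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 0.0
print(rim_term((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # 1.0
```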

Key, Fill and Rim Combined

With the key, fill and rim lights placed, the scene can now be rendered and evaluated with the contribution of all three lights (see Figure 14). This is the beginning of the lighting for the scene and establishes a basic key-to-fill ratio along with a rim location. The task now is to evaluate where the lighting looks good and where it needs help. Certain areas need help, such as the face, specifically the eyes. More general areas of concern are the ambient contributions from the scene and the many surfaces in this space from which to bounce light. Because one shot is a close-up, however, and likely requires different bounce lighting than the other two shots, the bounce lights are left out of the generic rig. For the purposes of this discussion, the three spotlights (key, fill, and rim) are considered the lighting rig to be shared among shots SF-01, SF-02 and SF-03.
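The key-to-fill ratio mentioned above is just the key's contribution on the subject divided by the fill's. The intensities below are assumed for illustration only:

```python
def key_to_fill_ratio(key_intensity, fill_intensity):
    """Classic lighting contrast measure: key contribution divided by fill.
    Higher ratios read as more dramatic; lower ratios look flatter."""
    return key_intensity / fill_intensity

# Assumed intensities: a 4:1 ratio is fairly dramatic, 2:1 more conversational.
print(key_to_fill_ratio(1.0, 0.25))  # 4.0
print(key_to_fill_ratio(1.0, 0.5))   # 2.0
```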

The lighting rig location is not locked. The rig may need to be moved based on the location of the character. If the entire rig is grouped under a single node (a parent node), it can be moved to other parts of the scene as one piece. Because the character is the center point of the shots, the lights need to follow him. Translation of the lights maintains the basic look established in the template shot, but any rotation changes the look and feel of the scene and hinders the continuity between shots. In shot SF-03, the character is closer to the monitor, so the entire rig may need to be moved in that direction. If the lights are far enough away, the character remains in the beams of the lights.

You might ask, why not just move every light a great distance from the models to maintain the light contributions? There are two reasons why this is not a good idea. One is that the farther a light is from the objects in the scene, the less detail and resolution there is in the shadow maps. The shadow maps are rendered from the point of view of the light, so if the light is far away and the character is a speck in the middle of the shadow buffer, then no detail appears in the shadow maps. This limited information in the shadow maps causes artifacting in the render (worse than that depicted in Figure 8, top). The other reason to keep lights closer to the scene's subjects is attenuation. If the lights have falloff, as real-world lights do, their proximity to the models affects the illumination. If the lights are farther away, the intensity of the light can be increased until the light reaches the subjects. This solution has a major drawback, however, because a distant light offers a much shallower falloff gradient. The light decreases in intensity from its point of origin until it fades off to zero. Over a longer distance, this attenuation is gradual. Unless the light being simulated is sunlight, attenuation is more accurate and more noticeable if the lights are placed close to the subjects.
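How much shallower a distant light's falloff gradient is can be checked with inverse-square attenuation. The sketch below compares the intensity change across a subject one unit deep when the light sits 2 units away versus 20; the distances are illustrative assumptions:

```python
def inverse_square(intensity, distance):
    """Inverse-square falloff: brightness drops with the square of distance."""
    return intensity / (distance ** 2)

def relative_drop(light_distance, subject_depth=1.0):
    """Fractional intensity change from the front to the back of the subject."""
    front = inverse_square(1.0, light_distance)
    back = inverse_square(1.0, light_distance + subject_depth)
    return (front - back) / front

print(round(relative_drop(2.0), 3))   # 0.556 -- strong gradient up close
print(round(relative_drop(20.0), 3))  # 0.093 -- nearly flat from far away
```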

[Figure 15] Scene color palette.

[Figure 16] Test reference scene with desk lamp and computer.

Reference and Observation

During the process of lighting a shot, observation and reference remain worthy of consideration. For the shots in this sequence, a general look for the lighting and color scheme is chosen by the team involved in the production. A color palette is produced to gain a better understanding of the range of colors that are expected once the scene is lit and rendered (see Figure 15). These color palettes are common in fully digital features, because the entire film is often mapped out in terms of colors and the moods they create. The overriding color scheme has a tremendous effect on the scene's impact on the audience. Color establishes the mood, and this scene is dominated by the blue portion of the spectrum. This tranquil coloring helps with the sleepy, late-night time frame that the shots span. By adding small amounts of the colors found in the palette to the lights, the scene takes a step toward fitting more closely with the intended vision.

The shots in this sequence received general lighting direction from the team of artists participating in the production. This direction calls for the scene to be lit from the corner by a desk lamp. The glow of the computer screen is also described, but no additional specifics are mentioned. The character is obviously the focus, so the task is to create a lighting scenario within the basic range provided by the color palette and maintain emphasis on the character. A test scenario is created to investigate the type of illumination a desk lamp creates and how the glow of the computer monitor adds to the lighting of a fairly dark scene. By taking photographs of a scene with a desk lamp and a computer, additional reference is provided (see Figure 16). These photographs offer valuable reference, describing how a desk lamp can provide bounced illumination for the scene, as well as the effect a glowing computer screen can have on a character's face. This type of reference can be collected during the entire life of a shot and usually provides suggestions for improving the lighting. Each time a difference is noticed between a reference photograph and the computer graphics scene, ask the question, "What's missing?" By identifying the specific differences between the goal and the CG scene, creating the effect becomes a much simpler task.

Tests and Isolation

Once the initial lighting rig is created, additional lights are added as the shot enters the fine-tuning stage. With those lights in place, render times increase and efficiently testing the entire frame becomes impractical. Test renders at this stage need to be done on smaller portions of the frame. Detail items such as the eyes, skin, cloth and hair can be evaluated with very small renders showing only a portion of the scene. A typical method involves rendering the entire scene at its final output resolution for a single representative frame. That frame is saved, and multiple tests are then created of small sections within the frame. These tests are then compared with the original render, as well as other tests, to judge whether the adjustments are successful.
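One way to express those partial-frame tests is as a pixel crop region within the full-resolution frame. The helper below is hypothetical, and the frame size and normalized box are assumed values for illustration:

```python
def render_region(full_width, full_height, box):
    """Convert a normalized (left, bottom, right, top) box into pixel
    coordinates for a partial test render of the full-resolution frame."""
    left, bottom, right, top = box
    return (int(left * full_width), int(bottom * full_height),
            int(right * full_width), int(top * full_height))

# Assume a 1,024 x 778 frame with the eyes occupying a small central box.
print(render_region(1024, 778, (0.40, 0.55, 0.60, 0.70)))  # (409, 427, 614, 544)
```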

The eyes offer a good example of this technique. Because shot SF-02 is so close on the character's face, the eyes take on more importance. With a render of the entire scene completed, smaller renders of only the eyes can then be completed very quickly (see Figure 17). This allows many more iterations for adding things such as specular highlights to the eyes.

Another technique involves isolating a particular piece of geometry and rendering it alone. The eyes can be handled in this way as well by selecting only the eyeballs, hiding everything else in the scene, and rendering. If the shadow maps are already rendered and saved to disk, this isolation of the eyes shows the proper effects of shadow from the eyelids or other hidden elements in the scene. Placing specular highlights in the eyes can be a difficult process, but this technique makes it much easier to run a great number of tests in a small amount of time. The special eye-highlight lights are added and tested until the specular hits are in the exact location desired, and then the rest of the geometry is turned back on to test the frame once again. The eye lights are typically specular-only lights with no diffuse contributions. They can also be set up as object-centric lights that illuminate only the eye geometry.

Isolating elements for testing is a valuable way to save time, but it can also cause problems when all the geometry in the scene is rendered. Be cautious of testing lights that affect an entire scene on just a small portion of the character. Unexpected or undesired results may occur in the areas not represented in the small test areas. After all the adjustments are made in the small testing area, it is worthwhile to render the entire frame again to eliminate surprises. This is true on a single frame and also for other frames in the shot. A light producing the perfect specular highlight in one frame may produce entirely different (and unwanted) results at a different frame. It may not be possible to test every frame, but representative frames at the character's extreme positions and the boundaries of the camera move are often worth a test render.

Critique and Prioritization

When the work is ready to be reviewed, the shot is rendered for the entire frame range in preparation for a critique. Each studio has its own way of handling critiques, but most are conducted in a forum known as dailies. As the name implies, these occur each day, usually at the same time at the beginning of the workday. The work is presented to the supervisors and other crew members, suggestions and criticisms are made, and the rest of the day is spent incorporating as many improvements as possible. Dailies present a great deal of valuable information, and taking notes is helpful for remembering each point.

Start Simple

The first time a shot is sent to dailies, it is in a rough format with basic elements and rough lighting. The first takes start the progression involved in critique and improvement. It is the time to get as many elements as possible into the shot, rendered at full length. This requires render time, which at a studio comes from the render farm. Depending on the size of the studio and the budget of the production, a certain number of processors are available for rendering the shots each night. Priorities are set based on how processor-intensive a shot is, as well as on how close the shot is to being completed. As a shot becomes more refined, it gains priority in the hierarchy in an attempt to complete the shots and achieve the quotas set by the production schedule.

Because the early stages of a shot receive lower priority in terms of processor time, it is necessary to keep things simple. The key, fill and rim lights are enough for this take, particularly if one TD is running two or three shots at a time. The goal of this take is to show the entire shot with a basic lighting scheme and evaluate how the scene looks with the animation, camera move, and textures. The process for the shots represented in this book is similar, and each shot is run from start to finish with a basic lighting setup. This allows the modeler to evaluate the geometry, the character setup team to evaluate the character's skin and movement, the animator to evaluate the actions, the lighter to evaluate the lighting and the entire team to evaluate the overall look and feel of the shots. The render resources for this small project are minimal, because we do not have access to a farm of processors, but the concept is the same: Get the entire length of the shot out in some form as soon as possible and begin the critical analysis.

[Figure 18] Shot SF-02 with a warmer color added to the key-light.

Shot-Specific Details

Shots SF-01, SF-02 and SF-03 start with the lighting rig and are developed on a shot-by-shot basis from that point. After each is shown in rendered format, decisions are made on what needs to be added, and priorities are assigned. Each shot requires the addition of bounce lights to fill in the darker areas underneath the chin of the character. The suggestion is also made to change the key-light to a warmer color and see how that affects the character's appearance in the primarily blue environment. This suggestion definitely yields a result, giving the character a less pasty feel while still maintaining the overall color scheme of the shot (see Figure 18). Shot SF-02 also requires additional work on the eyes.

The computer monitor produces a glow most prominent in shot SF-03, in which the character is closest to the screen (see Figure 19). This light offers a great deal of flexibility, because it can be any color. Changing its color from blue to red through the course of the sequence can be useful for changing the mood as the story progresses.

As these additional lighting suggestions are added to the shots, the refinement process continues. Each suggestion must be prioritized in order of importance to the shot and the time it takes to accomplish. More difficult additions should be started early, in case additional support is needed or an alternate method must be devised. The additions for shots SF-01, SF-02 and SF-03 are all fairly simple to execute, but each requires many test renders for fine-tuning. The critiques continue even when a shot is labeled as complete, but the renders at this point are labeled as final (see Figures 20, 21, and 22).

[Figures 20 & 21] Shot SF-01 (left) final render. Shot SF-02 (right) final render.

The process followed to create the images in this chapter is one of many possible approaches to producing a computer graphics shot. Because a great number of the technical aspects are covered in other chapters, this chapter simplifies the steps. With the basic computer graphics knowledge from those chapters, along with a solid understanding of a 3D software package, a computer graphics scene can be produced. The quality of that result depends on the time spent, the hardware and software resources and the talent involved. The elements created in this chapter are broken down and reassembled in the following chapter on compositing.

To learn more about lighting and compositing and other topics of interest to animators, check out Inspired 3D Lighting and Compositing by David Parrish; series edited by Kyle Clark and Michael Ford: Premier Press, 2002. 266 pages with illustrations. ISBN 1-931841-49-7. ($59.99) Read more about all four titles in the Inspired series and check back to VFXWorld frequently to read new excerpts.


Author David Parrish (left), series editor, Kyle Clark (center), and series editor Mike Ford (right).

David Parrish went straight to work for Industrial Light & Magic after earning his master's degree from Texas A&M University. During the five years that followed, he worked on several major films, including Dragonheart, Return of the Jedi: Special Edition, Jurassic Park: The Lost World, Star Wars: Episode I The Phantom Menace, Deep Blue Sea, Galaxy Quest and The Perfect Storm. After five years with ILM and a short stay with a startup company, he was hired by Sony Pictures Imageworks to work on Harry Potter and the Sorcerer's Stone.

Series editor Kyle Clark is a lead animator at Microsoft's Digital Anvil Studios and co-founder of Animation Foundation. He majored in film, video and computer animation at USC and has since worked on a number of feature, commercial and game projects. He has also taught at various schools, including San Francisco Academy of Art College, San Francisco State University, UCLA School of Design and Texas A&M University.

Michael Ford, series editor, is a senior technical animator at Sony Pictures Imageworks and co-founder of Animation Foundation. A graduate of UCLA's School of Design, he has since worked on numerous feature and commercial projects at ILM, Centropolis FX and Digital Magic. He has lectured at the UCLA School of Design, USC, DeAnza College and San Francisco Academy of Art College.
