2006 Year in Review: Digital Acting and 3D Environments

VFXWorld picks its favorite moments of digital acting and 3D environments in this Year in Review overview from 2006. View the various clips in our VFXWorld Media Player.

This year certainly had its share of memorable achievements in digital acting and environments. Industrial Light & Magic led the way in both areas with Davy Jones from Pirates of the Caribbean: Dead Man's Chest and the full volumetric 3D water simulation from Poseidon. Other standouts include: The Man of Steel's flying exploits from Superman Returns; Saphira, the dragon from Eragon; the kids in peril from Monster House; the T-Rex skeleton from Night at the Museum; Route 66 and Radiator Springs from Cars; Lex Luthor's crystal island in Superman Returns; the photorealistic Antarctica of Happy Feet; and the photorealistic skies and landscapes during the exciting dogfights in Flyboys. Here's a recap of the top five in each category from our Year in Review:

Digital Acting

1. Davy Jones From Pirates of the Caribbean: Dead Man's Chest

Tasked by director Gore Verbinski to come up with more complex and authentic-looking CG characters in Dead Man's Chest, since Davy Jones and the crew of The Flying Dutchman would be interacting closely with the live actors, Industrial Light & Magic put its R&D team to work on a new incarnation of its proprietary motion capture system, dubbed Imocap. The resulting Jones is so impressive, in fact, that people have already begun talking about the sea-encrusted villain with his creepy tentacle beard as the next great CG performance breakthrough.

"We've done a lot of computer vision work here in R&D for the last several years and we were hoping to apply that to motion capture work outside of the MoCap studio some day," remarks Steve Sullivan, director of R&D at ILM. "Dead Man's Chest provided an opportunity for [remote MoCap] and a clear case of [requiring] that same quality on set where we needed those actors together in a scene for those hero performances. So we worked with the production team to nail down constraints of what we could get away with and what's off-limits."

Imocap became a new protocol for measuring the actors and obtaining data during the actual shoot for the creation of skeletal motion in the computer. The software contained added functionality and new ways of tracking data. Special sensor-studded suits were created for the actors playing CG characters; they were more comfortable than typical MoCap outfits, since the actors were required to wear them in conditions ranging from the simple to the treacherous. "... On set, I wore a gray suit, which had reference points comprised of white bubbles and strips of black-and-white material, so that when they come to interpret your physical performance, they're better placed to do so," adds Bill Nighy, who plays Jones.

According to Sullivan, "the suits needed to be 'dignified.' They had to be comfortable and not look 'stupid.' There were a few iterations of the material itself, which started out as a cotton blend but ended up being a stretchy, semi-formfitting material. And we arrived at a neutral gray to help with our lighting calculations... and we used some markers and bands to help with the capture process itself. Those needed to be comfortable as well. Cameras were based on location and shooting conditions."

"For shots where we used reference cameras, Kevin Wooley, our Imocap lead, housed some cameras in watertight enclosures and wired them to a computer for storing the images," explains animation supervisor Hal Hickel. "This was great for the onset stuff. For beaches and jungles, we used untethered cameras with lightweight tripods. They were a little more trouble on the backend because they weren't synchronized to each other, but both solutions worked well, and will continue to be used on the third Pirates movie [At World's End]."

Thus, by integrating the MoCap process with the actual shoot -- giving the animators hero plates with the actors in them, casting real shadows and making real eye contact with their fellow performers -- ILM was able to create a more expressive, nuanced performance for the maniacal Davy Jones, with the help, of course, of Nighy.

"We had new ways for the computer to analyze the images," Sullivan continues. "The software piggy backed on MARS, the matchmoving [and tracking] solver. It understood what the actors could and couldn't do. Our process is more holistic than traditional MoCap. We try to capture the whole body at once from different kinds of information, and that allows the flexibility to use many kinds of cameras and to work with partial information sometimes.

"The product of Imocap comes out as an animated skeleton, just like regular MoCap, and the animators do with that whatever they want, with artists in the middle running the post process. Sometimes they'll need to cheat the body to get a better composition of the image. But the advantage is that the animators are overriding things and animating for performance reasons rather than just getting the basic physics and timing down. That all comes from the actor."

Although ILM is currently developing its own facial performance capture system, Hickel determined this wasn't the time to introduce yet another R&D component. "We have a lot of confidence in our facial animation, so we decided to do it by hand. The creature pipeline was being moved over to Zeno and most of the faces were different enough from the actors anyway."

2. Superman Returns' Flying Man of Steel

When director Bryan Singer undertook Superman Returns, his first priority was to make sure the critical flying scenes were as believable as possible. To that end, visual effects supervisor Mark Stetson chose Sony Pictures Imageworks, a studio he was very familiar with, having worked there as supervisor on Peter Pan and Charlie's Angels: Full Throttle.

"Imageworks really stepped up with the animated Superman work," Stetson suggests. "The image-based render approach is very good, especially the close-up, high-res work; the cape sim is very good too: it's very fluid and very consistent. [Overall] the animation is head and shoulders above what Imageworks did on Spider-Man 2, which was remarkable. And the Shuttle Disaster [the main action set piece] will be a real landmark for them."

Imageworks artists, led by visual effects supervisor Richard Hoover (Armageddon, Unbreakable, Reign of Fire) and animation supervisor Andy Jones (I, Robot, Godzilla, Final Fantasy: The Spirits Within), were tasked with creating a digital double of Brandon Routh that allowed viewers to see Superman's death-defying acts of courage not only from a distance, but, more crucially, in close-ups as well.

Among the Imageworks team's challenges was building a digital version of Superman's famous red cape that could be directed as if it were a character in the film, including "reactions" to all that was going on around it, independent of natural forces.

Building a better digital human, in the form of Superman, was not an unusual task for Imageworks. Many of the artists on the project had been part of innovative digital human work on both the groundbreaking Final Fantasy and the Oscar-winning Spider-Man 2.

Creating a realistic digital double of an actor is one of the greatest challenges in visual effects: the digital double needs to look the same as the real actor from every viewpoint, in any lighting, with any facial expression. To help ensure that Routh's digital double looked as realistic as possible, he was scanned in the Light Stage 2 device at the University of Southern California's Institute for Creative Technologies (USC ICT) in Marina del Rey, California. Light Stage 2 consists of 30 Xenon strobe lights arranged on a rotating semicircular arm 10 feet in diameter.

In just eight seconds, the device illuminated Routh's face from 480 light positions, covering every direction light can come from. A specially designed rig built by Imageworks' Nic Nicholson allowed his hands to be photorealistically captured as well. Throughout, he was filmed by six synchronized Arriflex movie cameras.

The resulting frames were digitized at 4K resolution and texture-mapped onto a laser-scanned geometric model of Routh's face. Since the texture maps encoded all possible lighting directions, they could be digitally remixed to show the virtual actor photorealistically reflecting light in any environment the story called for. In this way, the complexities of how his skin reflects light -- its pigmentation, shine, skin luster, shadows and fine-scale surface structure -- were synthesized directly from light reflected by the actor himself. This data was given to the Imageworks artists who painstakingly created the fully realized walking, talking and flying superhero.
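
To illustrate the principle behind that remixing, here is a minimal, hypothetical sketch (not ILM's or USC ICT's actual code): because light adds linearly, a face photographed under each individual Light Stage direction can be weighted by how bright the new environment is in that direction, and the weighted photographs summed.

```python
# Minimal sketch of image-based relighting; all names are illustrative.
import numpy as np

def relight(basis_images, weights):
    # basis_images: (N, H, W, 3) -- one photograph per Light Stage lighting direction
    # weights: (N, 3) -- RGB intensity of the new environment in each of those directions
    # Light is additive, so the relit face is a weighted sum of the basis photographs.
    return np.einsum("nc,nhwc->hwc", weights, basis_images)

# Toy usage: 480 directions, a tiny 4x4 "face," a uniform white environment.
basis = np.random.rand(480, 4, 4, 3)
environment = np.full((480, 3), 1.0 / 480)
relit_face = relight(basis, environment)
print(relit_face.shape)  # (4, 4, 3)
```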

In Superman Returns, his cape is as much a character as The Man of Steel himself. To create a cape that could be directed in the same way an actor would be, Singer asked the Imageworks team to build a digital cape that would be worn by both the digital Superman and the real actor in every scene. However, the Imageworks crew soon discovered that not all cloth software programs are created to bring this amount of directing flexibility to fabric. The Syflex cloth simulator was used to tackle the cape challenge. The software was a perfect base program for the Imageworks team to use in building a cape that would display a distinct behavior and attitude.
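
As a rough illustration of what "directable" cloth means in practice, here is a toy mass-spring/Verlet sketch -- a generic technique, emphatically not Syflex's solver -- in which a hypothetical art_force term stands in for the directed forces that let the cape react to the story rather than to real wind alone.

```python
# Toy directable cloth patch: Verlet integration plus spring relaxation.
import numpy as np

def step_cape(pos, prev, rest, art_force, dt=1.0 / 24.0, iters=8):
    """pos, prev: (rows, cols, 3) current and previous particle positions."""
    gravity = np.array([0.0, -9.8, 0.0])
    # Verlet step: infer velocity from the last frame, then add gravity and the directed force.
    new = pos + 0.99 * (pos - prev) + (gravity + art_force(pos)) * dt * dt
    for _ in range(iters):  # relax structural springs toward their rest length
        for axis in (0, 1):
            a = new[:-1, :] if axis == 0 else new[:, :-1]
            b = new[1:, :] if axis == 0 else new[:, 1:]
            d = b - a
            length = np.linalg.norm(d, axis=-1, keepdims=True)
            corr = 0.5 * (length - rest) / np.maximum(length, 1e-9) * d
            a += corr   # views into `new`, so these corrections apply in place
            b -= corr
    new[0] = pos[0]  # pin the top row where the cape attaches at the shoulders
    return new

# Usage: a 20x10 cape with a constant, art-directed "hero billow" pushing it forward.
rows, cols, rest = 20, 10, 0.1
y, x = np.meshgrid(np.arange(rows, dtype=float), np.arange(cols, dtype=float), indexing="ij")
pos = np.stack([x * rest, -y * rest, np.zeros_like(x)], axis=-1)
prev = pos.copy()
billow = lambda p: np.array([0.0, 0.0, 4.0])
for _ in range(24):  # one second at 24 fps
    pos, prev = step_cape(pos, prev, rest, billow), pos
```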

3. Saphira From Eragon

Helming the epic production was first-time director Stefen Fangmeier, a former visual effects supervisor at ILM. "There were multiple challenges for me, as it was a dragon movie, and a period movie, and it featured large-scale battle scenes, and the lead actor had never done a movie before. My priority was to design Saphira, the dragon. I worked with the design team at ILM over a period of about six months, testing various approaches. We finally created a shape that we were all happy with, a strong dragon with a very powerful head and feminine blue eyes. At this point, the wings were bat-like, but when we showed the concept to the studio, they said that they really liked the wings of Angel in X-Men: The Last Stand, which they were producing at the time, and could our dragon have feathered wings? So, we revamped the concept to incorporate scaly wings, which took the whole design away from reptilians. In the end, it was a good thing as our dragon doesn't look like any other dragon in movie history."

Although ILM had just created a dragon for Harry Potter and the Goblet of Fire, the challenge of creating Saphira was of a completely different nature. "To start with, the dragon in Harry Potter was bipedal, while Saphira was a four-legged character," says visual effects supervisor Samir Hoon. "Plus, Saphira was a real character, not a mere creature. She had to communicate with Eragon telepathically, meaning that we had to convey all her emotions through facial animation only. She was basically delivering dialog without really talking. Also, we had to deal with a highly unusual skin color..." In the book, Saphira is described as a giant blue dragon, a nice concept on paper, but a tough one to translate on screen. "We worked on the skin color a lot, trying to keep it blue, but still tweaking it to make it believable," Fangmeier notes. "There is no large animal in real life that has a vibrant blue color. We had to find just the right skin color, a subtle hue that would allow Saphira to appear in scenes that were lit in warm tones and not stand out. It was a real challenge."

Saphira was modeled and animated in Maya. "We needed to have the detail of small overlapping scales over Saphira's surface that would have made the geometry too heavy if modeled," CG supervisor John Helms explains. "So, we used a combination of painted textures and shading to directionally displace scales from her surface, so that even the smallest scales overlapped. Her skin had subsurface scattering, and her eyes had the same type of work that would have been done on a digital human." The tricky part was finding a way for the wings to fold up in a pleasant manner. Sometimes, the geometry wouldn't fold up properly or the scales would end up creating messy intersections. "We did cheat a little bit, once in a while," Hoon smiles. "What you don't see doesn't hurt you, right? So, we just had a modeler go in and clean it up. When Saphira was flying, since she was not moving her wings a lot, we had simulations running on top of the wings and on the scaly feathers, just to keep them alive."
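
For a sense of how that scale detail can live in textures rather than in modeled geometry, here is a minimal, hypothetical sketch of texture-driven displacement -- the general idea only, not ILM's shaders: each surface point is pushed along a chosen direction (here, its normal) by an amount read from a painted map.

```python
# Minimal sketch of displacing surface points by a painted scale map.
import numpy as np

def displace(points, normals, scale_map, uvs, amount=0.02):
    """points, normals: (N, 3); uvs: (N, 2) in [0, 1); scale_map: (H, W) painted heights."""
    h, w = scale_map.shape
    ij = (uvs * [h - 1, w - 1]).astype(int)          # nearest-texel lookup into the painted map
    height = scale_map[ij[:, 0], ij[:, 1]][:, None]  # painted scale height per point
    return points + normals * height * amount        # push each point off the surface

# Toy usage: four points on a flat patch, displaced by a random "painted" map.
pts = np.random.rand(4, 3)
nrm = np.tile([0.0, 1.0, 0.0], (4, 1))
uv = np.random.rand(4, 2)
painted = np.random.rand(64, 64)
print(displace(pts, nrm, painted, uv))
```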

4. The Humans From Monster House

From the beginning, director Gil Kenan opted for a very stylized look. Although Sony Pictures Imageworks would employ the same ImageMotion technology as The Polar Express, Monster House didn't use it to try to recreate reality in the computer. Visual effects supervisor Jay Redd confirms, "Our characters in Monster House are indeed human, but we always approached them as stylized -- almost as if they were puppets. If you look at their proportions, you will notice that the heads, eyes, hands and feet are larger than they should be. Also, we didn't concern ourselves with moisture, eyelashes or even real hair. We started our character modeling by creating actual clay sculptures of each character. Once a sculpture was approved, it was laser-scanned in, and final clean-up, patching, costumes, etc. were created. The most interesting aspect here is that we avoided symmetry at all costs. So many people model one side of a character and then simply mirror and flip to get the other side, which is highly unnatural. Admittedly, modeling and rigging non-symmetrical characters is a lot more work for the crew, but the results are so much more interesting and subliminal."

In order to decide what the best approach to animation was, the crew did three versions of a test shot for each of the main human characters. The first version was MoCap facial data only. The second version added keyframed accents to the MoCap, and the third was keyframed only, with no MoCap, using the video reference of the performance as a guide. "We found that the best results were obtained with the second method," recounts co-animation supervisor T. Dan Hofstedt. "So, we usually employed the MoCap facial data as the basis for the animation, and then added selective accents and exaggerations that maintained the spirit and integrity of the actor's performance. Sometimes, the MoCap was left alone or only altered slightly, and sometimes for various reasons, the faces were totally keyframed. But by far, the majority of the shots blended both influences."
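
To make the idea of "keyframed accents added to the MoCap" concrete, here is a minimal, generic sketch of additive animation layering -- not Imageworks' actual toolset: a sparse, hand-keyed offset curve is interpolated across the shot and added on top of the captured curve, so the actor's timing survives while the animator exaggerates it.

```python
# Minimal sketch of layering keyframed accents over a captured animation curve.
import numpy as np

def layer(mocap_curve, accent_keys, num_frames):
    """mocap_curve: (num_frames,) captured values for one facial control.
    accent_keys: {frame: offset} keyframed exaggerations set by the animator."""
    frames = sorted(accent_keys)
    # Linearly interpolate the sparse accent keys across all frames, then add them on top.
    accents = np.interp(np.arange(num_frames), frames, [accent_keys[f] for f in frames])
    return mocap_curve + accents

# Usage: push a brow control a little higher around frame 12 without losing the capture.
captured = np.sin(np.linspace(0, np.pi, 24)) * 0.5
print(layer(captured, {0: 0.0, 12: 0.3, 23: 0.0}, 24))
```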

5. Night at the Museum's T-Rex Skeleton

Responsible for more than 300 shots, Rhythm & Hues was the lead vendor on the demanding project. According to visual effects supervisor Dan Deleeuw, "the T-Rex was undoubtedly the show stealer!"

Visual effects producer Julie D'Antoni, animation director Craig Talmy, and their team started by building a CG replica of the full-size T-Rex skeleton that had been used on set. Gentle Giant performed Lidar scans of the prop from multiple angles to provide a three-dimensional data set. Modeling and texturing the T-Rex turned out to be a huge endeavor as each piece had to be treated individually. "We took photographic references of every single bone and joint, and then applied these textures to each corresponding bone and joint geometry," Deleeuw comments. "It was very time-consuming. Rigging the model was also tricky, as we were obliged to make it work as a real skeleton. Interpenetration was a big problem. The bones on the spine basically moved against each other, and each vertebra wanted to go inside of each other! So, special care had to be taken in this area. The joints were based on how the actual prop was built, with metal pieces holding the bones together. We used those pieces to rotate the limbs."

Animation required a lot of tweaking, as this was not supposed to be a living, breathing dinosaur, but a mere skeleton brought to life by a magic spell. The team started by doing tests in which the T-Rex had the weight of its skeleton. Since the creature was much lighter than its fully fleshed and muscled counterpart, it should logically move much faster. However, this reality-based approach didn't provide a pleasing animation. Something was missing. So, using a great deal of artistic license -- this was a comedy, after all -- the T-Rex was eventually animated as if it had all its flesh and muscles on. "Right away, the animation was much more effective," Deleeuw says.

Modeled in Maya, the T-Rex was imported into Rhythm & Hues' proprietary package Voodoo, where it was rigged, animated and lit. It was then brought into the proprietary tool Lighthouse, where light and color values were assigned. That information was then combined with the Voodoo lighting and exported to Rhythm & Hues' in-house render engine, Wren. Compositing was carried out in the proprietary compositor Icy. "When we rendered the images, we broke out all the lights in different passes," Deleeuw explains. "It allowed us to mix the lights in the compositor directly, and get a better integration of the character." HDRI was gathered on set to record the lighting set-up and to provide texture maps. In order to give a sense of scale, the creature was always lit with multiple lights.
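
The advantage of breaking the lights into separate passes is easy to show in a few lines. This is a generic sketch of the additive idea, not Rhythm & Hues' Icy: because light contributions add, each light can be rendered on its own and re-balanced in the composite without going back to the renderer.

```python
# Minimal sketch of mixing per-light render passes in the composite.
import numpy as np

def mix_light_passes(passes, gains):
    """passes: dict name -> (H, W, 3) image of that light's contribution alone.
    gains: dict name -> scalar or (3,) RGB gain chosen in the compositor."""
    out = np.zeros_like(next(iter(passes.values())))
    for name, img in passes.items():
        out += img * gains.get(name, 1.0)  # re-weight each light and sum
    return out

# Usage: warm up the key light and pull down the fill after rendering.
h = w = 8
passes = {"key": np.random.rand(h, w, 3), "fill": np.random.rand(h, w, 3), "rim": np.random.rand(h, w, 3)}
comp = mix_light_passes(passes, {"key": np.array([1.2, 1.1, 1.0]), "fill": 0.7})
```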

Environments

1. Poseidon's Water Simulation

According to Poseidon's overall visual effects supervisor, Boyd Shermis, the creative and technical challenges they faced forced them to make advancements over then-existing fluid emulation technology. "The semantics of emulation vs. simulation notwithstanding, we needed to be able to do things with water or fire, oil, smoke and dust on a scale that hadn't been approached before. And there were no proven pipelines to make it run up and render in the way we were expecting it to work. For their part, ILM was being asked to simulate full, volumetric (3D) water on a scale that allowed a 1,200-foot ship to interact with it. [In other words], the ship (with a record number of polygons) had to be hit by, then roll over down into, through and up out of the water -- and then roll back down into it again. And that was just in one scene.

"I needed to have the bubbles and/or foam and/or spray to be truly borne from the volume, not as an added element on top of the volume. I needed to see the true interaction of the ship displacing and motivating motion within the volume of water -- and to scatter light within the volume and the bubbles/foam/spray. These are all things that might have been done before, but at nowhere near the scale or complexity that we required, both in terms of area coverage and in number of shots. So ILM was asked to rewrite their water code on our behalf to accommodate all these things. [Then] there was the issue of time. Running these kinds of simulations have been notoriously compute intensive and were literally taking weeks to run up. ILM, in association with Stanford's computer science department, managed to break the frames down into smaller bites, or tiles, in order to 'parallel-ize' the computing effort over several processors (between 8-16) to speed up the process of simulation run ups. This was a tremendous leap forward for them.

"In terms of the MPC and Scanline efforts, there were a couple of areas that forced them to advance the Flowline software. Flowline is a very unique and capable piece of liquid simulation software. Scanline has been using it well for several years, obviously. And while it has had the ability to do water, fire, smoke or other viscous fluids, it has never been called upon to do all of those at once, in a single simulation, all confined in a very complex, hard surface environment. And it has never been asked to scale up to 4K for run up and rendering. And it has never been asked to do it in a massive Linux render farm. When MPC researched and ultimately licensed Flowline, they realized that it was mental ray-dependent and MPC themselves had to switch their entire rendering pipeline to mental images' mental ray to accommodate the use of Flowline. Having now stretched the capabilities of Flowline as we have, I think that, within the industry in general, there will be a greater inclination toward using viable 3D solutions where once only practical, filmed elements would have been considered. 3D water and fire of this caliber will become the expected norm in the vfx toolset."

According to ILM visual effects supervisor Kim Libreri, "On Poseidon, our simulations revealed so much detail that breaking waves would emerge from the body of the water. This phenomenon alone saved us months of work that would have been needed to fake in breaking waves over the top of rendered simulations. When two waves collided, splashes would be automatically jettisoned from the collision area and vortices would form in the body of the water that in turn would generate hundreds of thousands of bubbles under the surface. Each of these would traditionally have to be hand-animated and painstakingly placed into each of the shots. Even though arriving at such a high-fidelity fluid solver took many years of work at both ILM and Stanford, the results more than justify the investment. We can now turn large-scale water shots around in a matter of weeks (and in some cases days) compared to the months that it would have taken with a more traditional approach. This frees our artists up to spend time making beautiful shots instead of fighting with technology."

2. The World of Cars

One of the biggest tasks at Pixar was being able to accomplish all of the expanded lighting techniques while cutting down on the render time, which, early on, approached 17 hours per frame. "We instrumented a lot of what we did with statistics so we could glance at the numbers and figure out if we were doing what we should be doing and to catch our mistakes," offers supervising TD Eben Ostby. "Say we had an object that was using ray tracing for shadows. If we knew that the whole world was looking to the car for shadows, it could be expensive. But if you could limit that to just one road, you could simplify things dramatically, particularly when you have the vegetation that we had, which contains a lot of detail. So we wanted to make sure that the part of the ground that was checking for information from plants was doing so only from plants in its neighborhood.

"We reengineered our lighting pipeline in a couple of ways. The primary thing was we wrote a new tool for ourselves for the setting up of lights, which made it possible to deal with the complexity of light sources and all the things doing ray tracing. We put those controls in the hands of the lighting team. We also reengineered the way we construct low level lighting so we could hook together different parts of a light and make it more powerful and modular, so a light could do both shadowing and occlusion. One of the highlights with a lot of eye candy value is the neon sequence. There we had to do a lot to make it look lovely, primarily to get the sense of light emanating from all along the neon tubes. For that we needed light sources that weren't just point sources but were areas all along the tube casting spill along the buildings and along the ground. Of course, the tubes themselves had to glow and there were dirt and dust on the tubes. Plus, for all the characters that were reflecting those neon lights, we wanted the neon to reflect off a particular body panel. So there was a lot of cheating of light source location so it would read just right and tell the story that we wanted to tell."

Radiator Springs, of course, is a composite of several small towns they encountered on Route 66, which were about a half-mile long, each containing one main street and a few supporting streets. "Often the towns were there for a reason," offers production designer Bob Pauley, "whether you just crossed the desert or you were about to go into the hills, you would end at a place where you could refill or tap off your radiator. They all had diners and gas stations and we could blend the two worlds by having aspects of one combined with the other. So I think every time that we went through these towns we saw similarities and consistent themes. The signs going into town, the layout, one stoplight, symbolically, which only flashes yellow now."

Even more than in previous Pixar movies, color and light are used to convey certain emotions. For example, Radiator Springs appears pale and dusty when Lightning McQueen first shows up. But as he becomes more intimately involved with the residents, the town becomes more vibrant.

3. The Crystal Island Showdown From Superman Returns

Under the supervision of Jon Thum, Framestore CFC delivered 313 shots, which spanned the entire final reel of the movie, covering the climactic showdown between Superman and Luthor. The work encompassed huge CG environments of oceans, crystal rocks, water interaction, a seaplane, a helicopter and Superman himself, all mixed with 2D elements of mist, waterfalls, layered skies and various greenscreen elements. Only one partial set was built for all of this action, so the digital contribution was substantial.

"The crystal island itself that rises from the sea is created from a mixture of procedural textured geometry, with additional matte painting in some areas to create more detail, " Thum explains. "The greatest challenge here was to interact our CG ocean with the island..."

However, the biggest hurdle Framestore encountered came when the end sequence of the island rising out of the ocean was recut and the island put back to concept. This happened some six weeks from the deadline and required some radical reorganization.

"It's hard to list all the changes," Thum continues. "It affected each shot in a different way. The main change was to make the island less spikey and thus reveal more column and rock detail that were previously hidden by the spikes. Another was to flatten the top of the island completely, where before we had more of a 'hump' in the middle. In addition, Bryan [Singer] created two completely new shots to help the recut, and then wanted a transition from one of our biggest shots through to a new plate shot six weeks from our deadline.

"It was more important that our look matched Rhythm & Hues' work because they were doing the same crystal rocks as us. We were both working in parallel on the look and more or less independently came up with the same idea, largely because it is the best solution to the problem. The problem was how to grow the spikes without stretching the texture. Because of the way the spikes are made up of several shards that form a larger piece, it was possible to grow them by translating shards up around the perimeter to increase its girth. The translation itself increased the length and because we never see the base, we don't have to worry where the translation comes from.

"The feel of the growth animation was something that we all converged on, and this had to be the right mix of order and randomness. Generally, the crystals would need to arrive at a predetermined look, so the animation was rewound and randomly translated back toward that position. In reality, each shot had different speeds of animation to suit the story, but there was a definite 'feel' that was developed and because we were all seeing what the other houses were doing, it was easy to maintain this."

4. Happy Feet's Antarctica

It was the job of Animal Logic's supervising art director/live action visual effects supervisor David Nelson to help director George Miller achieve an acceptable reality that took the film beyond animation. "We looked at the production design that we were doing for the performance areas of the stage -- it's very musical and theatrical in many ways. And you want to get a sense of the real Antarctica. Basically, the environment surfacing, modeling, lighting and the character surfacing and modeling, to a degree, all came under my supervision, along with the visual effects integration and briefing.

"These are big landscapes, mainly shot from the penguins' perspectives, so they are shot pretty low to the ground. We wanted to retain a sense of vastness and scale. It became a balance of having enough detail, but not to distract from the performances. These are black-and-white characters playing in a blue-and-white landscape. We solved it by using a number of photographic devices. When it came to lighting and depth of field to help focus our characters, we used the construction of the environment where we put the detail in the surfacing. We took a lot of care not to make it too complex where we had crowds of penguins. We put granulations, sparkles and footprints in the snow."

Lighting also played a crucial role. The team created a lighting arc for every scene, and for the entire movie, in a supporting storytelling role. The arcs ranged from dark skies to golden-lit Antarctic sun, and each lighting scheme supported the mood or dramatic play of its scene. Scenes often would start in broad daylight and then have cloud shadows come into play.

"We shot digital skies and developed a rig with three cameras where we shot skies in a time-lapse motion and then mapped those into an environment, so we had 360-degree coverage of sky," Nelson continues. "We were able to use movement in those skies to give a sense of perspective. This helped achieve vastness of landscape.

"We reconstructed environments out of thousands of photographic material shot in Antarctica wherever we could, and then matched very accurately into CG for full surfacing. Camera projection here was very flexible and integrated into the CG surfacing package. And we were able to add such layers as subsurface scattering and displacement into the camera projection."

5. The Unfriendly Skies of Flyboys

Parallel to the airplanes and their animation, the other key aspect to the project was Double Negative's creation of the 360° environments in which the action would take place. Co-CG supervisor Alex Wuttke oversaw the effort. Visual effects supervisor Peter Chiang and his team first filmed a series of tests at various altitudes to get a sense of what one would actually see from the air. "We realized that from below 3,000' (1,000 meters), we would need to see parallax in the trees," Chiang says. "We decided that below 1,000' (350 meters), we would shoot plates of the real environment. For between 2,000'-3,000', we developed a procedural way to populate the landscape with CG trees. The vegetation was created by a proprietary tool called Sprouts -- volumetric sprites that are able to render millions of trees -- written by R&D developers Oliver James and Ian Masters.

"For the terrain itself, we procured a dataset of high resolution aerial photographs covering a 10 square mile area of the U.K. We chose a very rural area, very pastoral with very little modern architecture and roads. Any anachronistic element was painted out. In addition to the maps, we were supplied with digital terrain models to match. It was important to purchase maps that were shot in flat light, so that we could light them as required. The goal was to build a global environment that extended to infinity and in which we had the flexibility to move the sun wherever we needed."

The R&D team was soon asked to develop a way to efficiently manage and render large amounts of landscape data. The result was Tecto, the brainchild of senior R&D developer Jonathan Stroud. "Tecto is the name of a suite of tools and plug-ins for Maya and PhotoRealistic RenderMan," Stroud says.

The Tecto application allowed artists to select rectangular regions of the database and export them to images for modification in an application such as Photoshop. Mesh regions could also be exported for editing in Maya, and re-sampled to whatever resolution the artist required, allowing them to see a low-res proxy of the real landscape. Low-res versions of the landscape textures were applied through normal Maya shading nodes.
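
As a rough illustration of that workflow -- and only an illustration, not the actual Tecto code -- selecting a region of a large terrain database and resampling it to an artist-chosen proxy resolution might look like this:

```python
# Minimal sketch of region extraction and resampling from a large heightfield.
import numpy as np

def extract_region(heightfield, row0, col0, rows, cols, out_res):
    region = heightfield[row0:row0 + rows, col0:col0 + cols]
    # Nearest-neighbor resample to the requested proxy resolution.
    r = np.linspace(0, rows - 1, out_res).astype(int)
    c = np.linspace(0, cols - 1, out_res).astype(int)
    return region[np.ix_(r, c)]

# Usage: pull a 1,000 x 1,000 patch out of a big terrain and view it as a 64 x 64 proxy.
terrain = np.random.rand(4096, 4096)
proxy = extract_region(terrain, 1200, 800, 1000, 1000, 64)
print(proxy.shape)  # (64, 64)
```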

Once the global environment was completed, Wuttke and his team built CG clouds to populate it. Clouds were deemed of paramount importance to help the audience maintain a sense of up and down, and a sense of speed too. They were thus treated as static anchors in the shots. "R&D developer Ian Masters created a tool, dnCloud, that was used to model 3D clouds in Maya's Viewport," Chiang says. "It employed implicit spheres and noise functions to generate exactly the types of clouds that were needed. R&D also created custom volumetric shaders to make the clouds react to lighting in a believable way, incorporating such effects as multiple forward scattering and self-shadowing. Our proprietary voxel renderer DNB (originally developed by Jeff Clifford for Batman Begins) was used to render them. HDRI maps were obtained from high points in England to light the shots. Once set up, the clouds could be pulled into position within the master battle arenas and rendered on a shot by shot basis."
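
To give a flavor of the "implicit spheres and noise functions" approach Chiang mentions, here is a toy density function in that spirit -- a generic stand-in, since dnCloud and DNB are proprietary: soft spheres give the cloud its gross shape and a noise term roughs up the edges; a volume renderer would then march rays through this density.

```python
# Toy cloud density: union of soft implicit spheres, broken up by cheap noise.
import numpy as np

def _hash_noise(p):
    # Classic shader-style hash, a cheap stand-in for real fractal noise.
    n = np.sin(np.dot(p, [12.9898, 78.233, 37.719])) * 43758.5453
    return n - np.floor(n)  # value in [0, 1)

def cloud_density(p, spheres, noise_amp=0.3):
    # Soft falloff toward each sphere's edge; the union shapes the cloud...
    base = max(1.0 - np.linalg.norm(p - np.asarray(c)) / r for c, r in spheres)
    # ...and noise breaks up the silhouette so it reads as vapor, not geometry.
    return max(0.0, base + noise_amp * (_hash_noise(p * 4.0) - 0.5))

# Usage: a two-lobed cumulus sampled at its core and well outside it.
puffs = [((0.0, 0.0, 0.0), 1.0), ((1.2, 0.3, 0.0), 0.8)]
print(cloud_density(np.array([0.1, 0.0, 0.0]), puffs))  # dense
print(cloud_density(np.array([2.5, 0.0, 0.0]), puffs))  # ~0.0
```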

Bill Desowitz is editor of VFXWorld.
