Double Negative takes the vfx lead for the first time on Harry Potter and the Order of the Phoenix, and Alain Bielik is back to reveal the secrets.
What do you do when you survive an encounter with evil sorcerer Lord Voldemort, only to discover that nobody within the wizarding community believes you? Things are not getting better for young Harry Potter in his fifth year at Hogwarts. As if being called a liar was not enough, he has to deal with a new Defense Against the Dark Arts teacher whose only goal is to replace Dumbledore as Headmaster. A perfect situation for Voldemort to make his big comeback indeed.
On the most recent Harry Potter movies, the visual effects workload had been spread somewhat evenly among several key vendors, with many additional vendors contributing to the worldwide effort. For Order of the Phoenix (which opened July 7 from Warner Bros. Pictures), overall visual effects supervisor Tim Burke opted for a new approach and assigned the larger part of the 1,400-plus vfx shot count to lead vendor Double Negative. The London-based company eventually produced over 950 shots, four times more than the next largest vendor.
Visual effects supervisor Paul Franklin and vfx producer Dominic Sidoli were approached as early as September 2005 to work on previsualization for the whole movie, a process that lasted until March 2006. Double Negative's shot list was divided into four distinct geographical areas: Hogwarts, the Forbidden Forest with teenage giant Grawp, the Hall of Prophecy and the Veil Room. Two teams started working in parallel, totaling 250 artists, technicians and developers. CG supervisor Richard Clarke and compositing supervisor Jelena Stojanovic took care of the key Grawp sequences, while CG supervisor Justin Martin and compositing supervisor Jolene McCaffrey focused on the Room of Requirements, the Hall of Prophecy and the Veil Room sequences.
Hogwarts Inside and Out
When their new professor refuses to teach them defensive spells, Harry and his friends set out to practice on their own. They do so in a secret magical room that opens up only in time of need, the Room of Requirements. There, the young sorcerers learn how to create a Patronus spell, an effect previously seen in Prisoner of Azkaban. Each child casts a unique version of the Patronus with its own individual animal at its core. Created in Maya, the 3D creatures were exported to Houdini, which was used to generate spiraling soft body trails. These were then re-imported into Maya where they were mapped with vfx shaders producing complex, coruscating patterns of light. The shots were composited in Shake.
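The spiraling trails at the heart of each Patronus can be pictured as a helix swept along the creature's motion path. The Python sketch below illustrates that idea in miniature; the actual work was done with Houdini's soft-body tools, and every name and parameter here is hypothetical.

```python
import math

def spiral_trail(path, turns=3.0, radius=0.5, samples_per_seg=8):
    """Sweep a helix around a polyline path of (x, y, z) points,
    returning trail points -- a toy stand-in for the soft-body
    trails generated around each Patronus creature."""
    points = []
    n = len(path) - 1
    for i in range(n):
        (x0, y0, z0), (x1, y1, z1) = path[i], path[i + 1]
        for s in range(samples_per_seg):
            t = s / samples_per_seg
            u = (i + t) / n                      # 0..1 along the whole path
            angle = u * turns * 2.0 * math.pi    # helix phase
            # linear interpolation along the segment
            cx = x0 + (x1 - x0) * t
            cy = y0 + (y1 - y0) * t
            cz = z0 + (z1 - z0) * t
            # offset in a plane perpendicular to the (assumed) travel axis
            points.append((cx + radius * math.cos(angle),
                           cy + radius * math.sin(angle),
                           cz))
    return points
```

In production, points like these would be meshed into ribbons and shaded with the coruscating light patterns described above.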
Double Negative also worked on several exterior shots of Hogwarts and Hogsmeade, all of them involving multiple camera passes on huge miniatures built by Cinesite Models, and digital environments.
Into the Forbidden Forest
Confronted by Harry, Ron and Hermione, Hagrid exposes the reason for his furtive behavior. He leads them into the depths of the Forbidden Forest where he reveals Grawp, his younger half brother -- a teenage giant. The Forbidden Forest was a practical set built on a sound stage at Leavesden Studios in England. "The set was big enough that it was quite possible to lose your direction within all the enormous trees!" Franklin recalls. "However, due to the restricted height of the ceilings in the studio, the trees topped out at about 25 feet, and their proximity to the lighting rigs meant that only the first 10 feet or so of the trunks were useable in the shots. Everything else had to be enhanced and adjusted. All up angles into the forest canopy required extensions."
The team started with a high-detail Lidar scan of the set. The complex geometry was then extensively optimized and rebuilt. Displacement maps were extracted from the fine-grained bark detail, and a library of trees was created to be fitted onto the stumps in the set. Lower-detail tree models were created for use in the mid distance, whilst the far background was finished with a high-resolution digital cyclorama created in a combination of Photoshop and Stig, part of a proprietary 2.5D environmental toolset. For shots looking up into the canopy, the team modeled complex patterns of branches and added thousands of leaves with a modified fur system.
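Fitting library trees onto the scanned stumps amounts to a dressing pass: pick a tree for each stump and scale it to match. The following sketch shows the idea under assumed data structures (the real pipeline worked on Lidar geometry in Maya, and all names here are illustrative).

```python
import random

def fit_trees(stumps, library, seed=7):
    """Assign a library tree to each scanned stump, scaling the
    model so its base girth matches the stump's measured girth.
    A toy stand-in for fitting hero trees onto the Lidar-scanned set."""
    rng = random.Random(seed)          # deterministic dressing pass
    placements = []
    for stump in stumps:               # stump: {"pos": ..., "girth": ...}
        tree = rng.choice(library)     # tree: {"name": ..., "base_girth": ...}
        scale = stump["girth"] / tree["base_girth"]
        placements.append({"tree": tree["name"],
                           "pos": stump["pos"],
                           "scale": round(scale, 3)})
    return placements
```

Seeding the random generator keeps the layout reproducible from run to run, which matters when a dressed set must match across shots.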
Meet Grawp, the Teenage Giant
Development on Grawp started as soon as Double Negative joined the project. The CG character was a major challenge on several different levels; he had to look real, but also needed to deliver an emotionally complex performance. Extensively previsualized, the Grawp shots were filmed using an innovative virtual set system that allowed the director and crew to "see" the giant on set composited into the live video feed from the main camera. It enabled them to produce tightly choreographed camera moves and performances that responded accurately to the giant's notional position on set.
"The system was created in collaboration with Lester Dunton of Joe Dunton Cameras (JDC) and third party developer Olaf Wendt of United Image Systems," Franklin explains. "Basically, we took realtime data from a motion-encoded Supertechno 30-foot camera crane, and fed it into Autodesk Motion Builder where it was used to drive a CG camera on a model of the actual Forbidden Forest set. Custom tools were developed to allow the Motion Builder set to be registered quickly in 3D space with the real set. Senior programmer Oliver James devised a workflow to import Maya generated previs animation into the system, where it could be manually cued to respond to the movements of the camera crane. The on-set system consisted of a PC laptop running Motion Builder. It fed the output of its video card to a small vision mixer where it was composited in realtime with the feed from the videotap on the main 35 mm camera. The composited output was then relayed to the camera operator who could see the 3D animation's position and action overlaid onto the image of the set."
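At the core of the virtual set system is simple forward kinematics: encoder readings from the crane are turned into a world-space camera position every frame. The sketch below shows the principle with assumed axis conventions; the production system did this inside Autodesk Motion Builder with full orientation data.

```python
import math

def crane_camera(base_yaw_deg, arm_pitch_deg, arm_length_m, base_height_m=1.5):
    """Forward kinematics for a motion-encoded camera crane:
    turn encoder readings (base yaw, arm pitch, arm length) into
    a world-space camera position driving a CG camera in the
    virtual set. Axis conventions and parameters are assumptions."""
    yaw = math.radians(base_yaw_deg)
    pitch = math.radians(arm_pitch_deg)
    horiz = arm_length_m * math.cos(pitch)      # arm projected onto the floor
    x = horiz * math.cos(yaw)
    y = horiz * math.sin(yaw)
    z = base_height_m + arm_length_m * math.sin(pitch)
    return (round(x, 3), round(y, 3), round(z, 3))
```

Because the CG set is registered to the real set in 3D space, the same transform places the virtual camera exactly where the physical lens sits.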
For the character itself, the team developed a very detailed muscle and skin simulation. They added many secondary dynamics to give jiggles and wobbles of the correct scale as Grawp lumbered around the Forest clearing. "We used an advanced facial rig that was specifically designed to give our animators very precise control over the facial shapes," Franklin notes. "One of the rig's best features is that it can take input from almost any source, as we have worked out how to feed in data directly from a number of motion capture systems. It allows us to use that kind of data as a basis for a performance if needed. At the outset of animation development, we used a new motion capture system developed by Image Metrics to capture the facial performance of actor Tony Maudsley who was cast as the reference for Grawp. In the end though, due to changes in the sequence during postproduction, we eventually keyframed all of the animation."
Shading supervisor Graham Jack developed a sophisticated approach to simulating Grawp's skin. "Initially, the skin work started with fast shadow map based scattering, but as look development progressed, it became obvious that this method was limited by the fact that it only supported spotlights, and that it only simulated single scattering," Franklin continues. "To overcome these problems, we added Pixar's point-based subsurface diffusion approximation. This uses a multi pass approach which bakes surface illumination in the first pass to a point cloud, computes the subsurface illumination on the point cloud, then reads it back in for the beauty pass. Various tools were written to help fit the technique into DNeg's existing shader setup. In theory, using the point cloud based scattering would have circumvented the need for the shadow map approach, but in practice, the multipass approach didn't give enough detail, so the final result used a combination of the two approaches. In addition to the scattering, the shader also made use of the new point cloud-based color bleeding in PrMan 13. This allowed the very fast calculation of indirect illumination, and really helped in the areas around the ears and eyes."
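The gather step of a point-based scattering pass can be reduced to a weighted average over the baked point cloud, with contributions falling off with distance. This is a deliberately crude sketch of that second pass, not Pixar's actual diffusion kernel, and the falloff model is an assumption.

```python
import math

def scatter_pointcloud(points, mean_free_path=1.0):
    """Second pass of a point-based subsurface approximation:
    given baked surface points [(position, irradiance)], diffuse
    each point's irradiance with contributions from its neighbours,
    falling off exponentially with distance."""
    result = []
    for (px, py, pz), _ in points:
        total = weight_sum = 0.0
        for (qx, qy, qz), e in points:
            d = math.dist((px, py, pz), (qx, qy, qz))
            w = math.exp(-d / mean_free_path)   # crude scattering falloff
            total += w * e
            weight_sum += w
        result.append(total / weight_sum)        # normalized subsurface term
    return result
```

A production implementation would use a spatial hierarchy rather than this O(n²) loop, and the diffused point cloud would then be read back in during the beauty pass.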
Skin specularity was handled by three separate specular and reflection lobes, each of which could be texture mapped. This allowed very careful balancing of an overall specular sheen, glossiness from surface sweat, and separate glossiness from liquid on the skin surface of the lips, eyes etc. Numerous displacement maps, representing different facial expressions, were blended together in the shader to achieve the final displacement. The blending of these maps was controlled by the character's animation rig. They served to augment the facial animation system with details such as wrinkles.
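Both ideas reduce to weighted sums. The sketch below shows a Blinn-style stack of three texture-mapped lobes and a rig-driven displacement blend; lobe exponents, weights and the scalar stand-ins for texture samples are all illustrative, not DNeg's production shader.

```python
def skin_specular(n_dot_h, lobes):
    """Sum three specular lobes (overall sheen, surface sweat,
    wet lip/eye gloss). Each lobe is (gain, exponent), where gain
    stands in for the per-pixel texture map value."""
    return sum(gain * (n_dot_h ** exponent) for gain, exponent in lobes)

def blend_displacement(maps, weights):
    """Blend expression displacement maps (here, single scalar
    samples) by weights driven from the animation rig, layering
    wrinkle detail on top of the base facial shapes."""
    assert len(maps) == len(weights)
    return sum(m * w for m, w in zip(maps, weights))
```

Keeping each lobe's gain in a texture map is what lets an artist paint, say, sweat only on the brow while leaving the cheeks matte.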
Wrecking the Hall of Prophecy
Trying to rescue his godfather Sirius from Voldemort, Harry leads his friends into the Hall of Prophecy, a vast storeroom of magical predictions. The cavernous space was designed as a regular series of regimented shelves stretching in all directions. Each shelf bore a multitude of prophecy spheres, all holding a swirling mass of cloudy fluids. The set was created as a completely digital environment.
"We modeled a library of around 60 prophecy stands, all shaded to look like aged wood or corroded metal," Franklin explains. "We then developed a rule-based system for placing the spheres automatically on the shelves. All of the shelf layouts were managed with our proprietary asset management system which allows us to switch out geometry with placeholder nodes whilst working directly on the scene in Maya -- the hero geometry is then loaded at render time. The glass spheres were shaded with a standard in-house glass shader that supported our in-house reflection placement tools. Extra layers were added to the shader to give the look of dust and dirt, which was created with a combination of hand painted and procedural textures.
"The prophecy fluid was created with a pseudo-volumetric shader that mapped a complex, animating fractal noise texture through a series of closely nested concentric spheres like the layers in an onion's skin. The layers were offset-animated against each other, producing a three-dimensional swirling effect. Additional parameters created subtle glows and color gradients within the fluid. The render pipeline allowed all of the different qualities and layers of the prophecies to be broken out and carefully balanced in compositing. In the end, the shaders turned out to be so efficient that, with the exception of the most distant shelves, all of the prophecies were rendered with the hero look at all times."
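Two of the techniques Franklin describes lend themselves to a compact sketch: rule-based shelf dressing with a render-time hero swap, and the onion-skin layering of animated noise. Everything below is a toy under assumed data structures; the sine-based "noise" merely stands in for the fractal noise textures used in production.

```python
import math
import random

def place_spheres(shelf_slots, fill_ratio=0.8, seed=42):
    """Rule-based dressing pass: fill a fraction of the slots on
    each shelf, recording lightweight placeholder nodes. The
    single fill-ratio rule is illustrative."""
    rng = random.Random(seed)
    placements = []
    for shelf_id, slots in shelf_slots.items():
        for slot in slots:
            if rng.random() < fill_ratio:
                placements.append({"shelf": shelf_id,
                                   "slot": slot,
                                   "node": "placeholder_sphere"})
    return placements

def resolve_hero(placements):
    """Render-time swap: replace each placeholder with the hero asset."""
    return [dict(p, node="hero_sphere") for p in placements]

def prophecy_fluid(radius_frac, angle, time, layers=5):
    """Pseudo-volumetric look: sample animating 'noise' on nested
    concentric shells, each shell phase-offset against its
    neighbours so the layers slide past one another."""
    value = 0.0
    for i in range(layers):
        shell = (i + 1) / layers                 # shell radius, 0..1
        if shell < radius_frac:
            continue                             # shell inside sample point
        offset = i * 0.7                         # per-layer animation offset
        value += 0.5 + 0.5 * math.sin(4.0 * angle + time + offset)
    return value / layers                        # normalized density
```

Working with placeholders keeps the Maya scene light while dressing tens of thousands of spheres; the heavy geometry only ever exists at render time.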
For the confrontation between the children and the Death Eaters, a new rigid body dynamics system was used to animate hundreds of toppling shelves and smashing spheres. Created by software developer Peter Kyme, dnDynamite replaced Maya's native RBD solver. The tool was integrated with the dnAsset toolset, so that all of the objects within the Hall of Prophecy could be animated with a dynamics simulation. As animations were approved, they were cached out into a library that could be used to build up larger scenes.
Initial animation by dynamics tds Trina Roy and Nici Hoyle was based on simple previsualization. Once the large-scale collapse of the shelves had been approved, the team switched on the spheres and prophecies, and made adjustments accordingly. Shattering spheres were created as separate simulations that were stored in a cache library, and then swapped out with the spheres at their impact points. Extra layers of pre-calculated falling debris, spheres and shards could then be dressed into the scenes to complete the animations.
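The cache-swap step can be sketched as a lookup: when a sphere registers an impact, a pre-simulated shatter is scheduled to start on that frame while the live sphere is hidden. The structure below is illustrative, not the dnDynamite interface.

```python
def apply_shatter_cache(impacts, cache_library):
    """Swap a live sphere for a pre-simulated shatter cache at its
    impact frame: the cached sim plays back offset so its first
    frame lands on the impact."""
    events = []
    for sphere_id, impact_frame in impacts:
        cache = cache_library[sphere_id % len(cache_library)]  # reuse caches
        events.append({"sphere": sphere_id,
                       "cache": cache,
                       "start_frame": impact_frame,            # cache frame 0
                       "hide_live_sphere_at": impact_frame})
    return events
```

Cycling through a small library of pre-broken variants is far cheaper than simulating every shattering sphere uniquely, and the reuse is invisible amid the chaos of the collapse.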
To address the long render times, a custom lighting pipeline was built by lead td Philippe LePrince. "We avoided ray-tracing whenever possible," Franklin notes. "The desired glassy look was created through the use of in-house reflection card tools that placed reflection maps on 3D coordinate systems. Rather than reflecting each other, the shelves and spheres reflected pre-rendered HDRI maps of the surrounding geometry. Rather than use geometric level of detail, we structured the scenes to use regular geometry close to the camera and shelves baked into PrMan's brickmaps in the background. Tds would choose how to partition their scene with user-friendly selection-based tools. Whenever we could, we used point-based occlusion and indirect illumination. The shelf assets contained baked occlusion and indirect lighting. The last destruction shots used dynamic point-based indirect illumination to allow falling prophecies to light their surroundings as they fell. New advances in PrMan allowed us to simulate area lighting in large dynamic scenes at a reasonable cost."
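The near/far split Franklin mentions can be pictured as a simple partition of the scene. In production, TDs chose the partition by hand with selection tools; the distance rule below is only a stand-in for that judgment call.

```python
import math

def partition_scene(objects, camera, brickmap_distance=50.0):
    """Split shelf assets into full geometry near the camera and
    baked brickmap stand-ins beyond a threshold distance."""
    near, far = [], []
    for name, pos in objects:
        if math.dist(pos, camera) <= brickmap_distance:
            near.append(name)        # render as regular geometry
        else:
            far.append(name)         # render from baked brickmap
    return near, far
```

Brickmaps trade memory and shading cost for a fixed, pre-filtered representation, which is exactly what the distant, barely-moving shelves could afford.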
Climax in the Veil Room
Having managed to vanquish the Death Eaters, the children discover a strange amphitheatre-like room containing a huge arch holding a magical veil. The plates were shot on a partial set built on a stage. Due to height limitations, the walls and arch topped off at approximately 20 feet, which necessitated digital set extensions on most of the shots. The Veil itself was created using a combination of the Syflex cloth simulation plug-in for Maya, and a set of custom shaders that produced a subtle combination of highlights and refractions.
Soon though, the Death Eaters are back to set upon the helpless children. The final battle is about to begin. Once keyframed on digital stunt doubles, the Death Eaters' animation was baked out from the rig onto the surface geometry, which could then be exported to other systems for procedural animation. The trailing ribbons of cloth and tendril-like vapor plumes were animated in Houdini, whilst the billowing clouds of smoke were created in Maya fluids. All elements were then rendered with a combination of PrMan and DNB, Double Negative's proprietary volumetric renderer.
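Baking a rig onto surface geometry simply means recording the deformed vertex positions per frame so downstream systems never need the rig at all. A minimal sketch, with a hypothetical `rig_eval` interface standing in for the Maya evaluation:

```python
def bake_rig_to_cache(frames, rig_eval):
    """Bake a rigged character's deformed surface to per-frame
    vertex positions so procedural systems (ribbons, vapor) can
    read geometry without evaluating the rig. rig_eval(frame)
    returns the deformed vertex list for that frame."""
    return {frame: rig_eval(frame) for frame in frames}
```

Once cached this way, the same geometry stream can feed Houdini, a fluid solver, or a renderer without any of them knowing how the character was animated.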
"Custom tools developed by R&D lead Trina Roy and lead td Eugenie von Tunzelmann enabled the geometry to pick up the underlying shape of the voxelized fluid simulation," Franklin explains. "Basic A-over-B comps would show us how the animations in the different layers were working together. Once we were happy with the elements, they would be sent for integration by senior compositor Bridget Taylor. The final look of any of the Death Eater shots was a combination of 2D and 3D work."
It's a particularly intense battle for Sirius: a collaboration between 2D, 3D and practical effects. On set, Gary Oldman was carried up on a flying rig through the Arch. "We matchmoved the shot in 3D, rotoscoping Gary's action onto a 3D stunt double," Franklin continues. "This was then used by 3D sequence supervisor Phil Johnson and lead td Alison Wortman as a guide to animate layers of sweeping cloth simulations that wrapped themselves in a series of waves around Gary. 3D passes for lighting, displacement and normals were rendered out. 2D was then used to distort the image of Gary whilst progressively dissolving selected areas of his body away. In addition, we sucked out the color from Gary's face and added a subtle cataract dullness to his eyes."
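The 2D side of the Sirius treatment combines desaturation with a masked dissolve, which per pixel is just a couple of linear blends. The sketch below illustrates the math in isolation; the actual comp was layered over the 3D cloth passes described above, and all parameters are illustrative.

```python
def dissolve_pixel(rgb, alpha_mask, dissolve, desaturate):
    """Per-pixel sketch of the effect: pull colour toward grey,
    then fade the pixel out where the dissolve mask is active.
    All inputs in 0..1 ranges; a comp-math toy."""
    r, g, b = rgb
    grey = (r + g + b) / 3.0
    r = r + (grey - r) * desaturate
    g = g + (grey - g) * desaturate
    b = b + (grey - b) * desaturate
    fade = 1.0 - dissolve * alpha_mask
    return (r * fade, g * fade, b * fade)
```

Animating `dissolve` over the shot while the mask sweeps across the body produces the progressive wave-by-wave disappearance.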
First Class Operation
After 19 months of intense work, the team of 250 artists, developers and technicians delivered Double Negative's final shot. All agreed that the visual effects work hugely benefited from the uncommonly long schedule. "By joining the show at such an early stage, we were able to develop unique and complex R&D-led solutions to a wide range of challenges," Franklin concludes. "In the current climate of increasingly pressured post schedules, that was a luxury and a privilege. The time that we were given undoubtedly made a major contribution to a first class result from all the companies involved. The Harry Potter machine -- certainly from a vfx point of view -- is a first class, `Rolls Royce' operation to work for!"
Alain Bielik is the founder and editor of renowned effects magazine S.F.X, published in France since 1991. He also contributes to various French publications, both print and online, and occasionally to Cinefex. In 2004, he organized a major special effects exhibition at the Musée International de la Miniature in Lyon, France.