'The Lion, the Witch and the Wardrobe' Diaries: Part 1 — R&D and Principal Photography

In the first installment of VFXWorld's exclusive production diaries, Rhythm & Hues' Bill Westenhofer talks about R&D and principal photography on The Chronicles of Narnia: The Lion, the Witch and the Wardrobe.

This is the first of four installments in VFXWorld's The Chronicles of Narnia: The Lion, the Witch and the Wardrobe production diaries.

Rhythm & Hues was under pressure to create a convincing Aslan. Fans had made the iconic character the litmus test of the film's chances at success. All images © Disney Enterprises Inc. and Walden Media LLC. All rights reserved.

I still remember the moment I first heard that Rhythm & Hues was being considered as one of the production houses to work on the visual effects for The Chronicles of Narnia: The Lion, the Witch and the Wardrobe. It's not often that a chance to work on one of your most cherished stories from childhood comes along, and I was pretty excited, to say the least. After a lengthy process of bidding, doing a few creature tests for director Andrew Adamson, and a seemingly endless series of meetings, that chance became a reality, but eventually the excitement morphed into a gnawing terror, as I wondered: How on earth are we going to do this?

VFXWorld asked me to create my own chronicles of sorts by requesting that I keep a log of the goings-on within our part of the production from that point onward. As with any project of this overwhelming size, my favorite tactic is to divide and conquer. Basically, this means that I need to break the job into manageable chunks and worry about each one in a timely fashion. In that spirit, I am breaking this diary into two parts. This installment covers the visual effects version of pre-production, which includes creature construction, rigging and prelighting, the development of software and pipelines, and the actual principal photography of the film. The next installment will detail our actual shot production in animation, lighting and compositing.

For this movie, Rhythm & Hues was tasked with creating Aslan, the lion, and handling the majority of the battle sequence at the end of the film, which required the creation of a large number of hero CG characters and a simulation to deal with their combat. We also took on the sequence in which Aslan meets with the White Witch and is sacrificed in front of a throng of those same creatures that would appear later in the battle. For the sacrifice and battle sequences, our biggest issue was the challenge of building, rigging, lighting and controlling that many different characters across a large number of shots. For Aslan, it was the equally scary prospect of doing justice to the iconic character that everyone expected to see executed perfectly. Many an early sojourn onto the various fan websites cemented the reality that, for many, pulling off a convincing lion would make or break the film for them.

Bill Westenhofer's favorite tactic when taking on a project as immense as Narnia is to divide and conquer: break the job into manageable chunks and worry about each one in a timely fashion.

Fortunately, the work that Rhythm & Hues had accomplished in the past, and the work that I had directly supervised during that time (especially Cats & Dogs and Elf), gave me confidence that we could achieve these goals. I knew what our animation and fur were already capable of, and I knew that our gurus in the software department could push them to a new level that would surpass what we had done before. To mobilize this effort, one of the very first things we did was to sit down and draft a technology document that itemized the new capabilities we would need. Big items on that initial list included improvements to our muscle/skin dynamics, fur collision detection and overall pipeline improvements to deal with tracking the multitude of characters and models in a shot. The biggest item, however, would be a crowd simulation system.

After a fairly lengthy period of investigating the commercial packages and even considering the cost and time of developing our own, we selected Massive, which had just proven itself spectacularly on The Lord of the Rings trilogy, and our early investigations and discussions with its developer, Stephen Regelous, cemented our decision. Rhythm & Hues had done some crowd work before, but we knew it would be prudent to try to take advantage of the experience many had had with the software on Lord of the Rings. One of our earliest tasks, therefore, was to set up a meeting with Stephen.

Feb. 12, 2004

We met with Stephen today, and it was extremely useful in getting a handle on the specific tasks we would need to perform in readying ourselves to do the work on the film. New terms such as "motion-tree," "motion-edit" and "Massive agent" have now entered our lexicon. He also helped us refine our expectations with regard to what kinds of shots would be more and less difficult given the capabilities of the software. We had previs of the opening of the battle to show him, which was also helpful in letting him see exactly what we needed Massive to do. This meeting also got the ball rolling with our software developer, Hans Rijpkema, in starting to develop interfaces both to and from Massive so we could import our models and rigs, and get renderable items out of the simulation to pass to our proprietary renderer, wren.

Within our scope, everyone agrees that the centaur (a classical creature combining the torso of a human with the body of a horse) will be one of our toughest technical challenges on many fronts, and this also seems to be true in Massive. The software simulates performing agents by executing brains, which are bits of code that use fuzzy logic to select from a series of pre-generated actions. A motion-tree is a set of these actions for a given character. You can imagine a motion-tree as a set of actions that start from a rest pose. Each branch is the set of actions that can proceed from that rest pose, like a swing or a block. Each branch from there continues as a set of actions that can then be accomplished from the new position, and so on. For the rich, complex actions of a battle, this tree can easily comprise several hundred actions. For this reason, motion capture remains the desired means to create the actions for a tree; having our animation staff create every one of these would be prohibitive. So the challenge for the centaurs in particular was how to motion capture their actions.
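
To make the motion-tree idea concrete, here is a minimal sketch in Python. It is not Massive's actual API; the class names, the toy fuzzy scoring and the action names are invented purely for illustration. Each node holds one pre-generated action clip and the clips that may legally follow it, and a simple score picks the next branch based on how far away a threat is.

```python
import random

class MotionNode:
    """One pre-generated action clip and the clips that may legally follow it."""
    def __init__(self, name):
        self.name = name
        self.children = []   # branches reachable after this action finishes

    def add(self, child):
        self.children.append(child)
        return child

def desirability(node, threat_distance):
    """Toy 'fuzzy' score: blocks look better when a threat is close,
    swings look better at medium range. A real brain combines many
    such fuzzy rules; this is only a placeholder."""
    if node.name.startswith("block"):
        return max(0.0, 1.0 - threat_distance / 2.0)
    if node.name.startswith("swing"):
        return max(0.0, 1.0 - abs(threat_distance - 1.5))
    return 0.1   # idle/recovery actions get a small baseline score

def step(current, threat_distance):
    """Pick the next action from the current node's branches."""
    if not current.children:
        return current
    weights = [desirability(c, threat_distance) + 1e-3 for c in current.children]
    return random.choices(current.children, weights=weights)[0]

# Build a tiny tree: rest -> {swing, block}, each branch continuing onward.
rest = MotionNode("rest")
swing = rest.add(MotionNode("swing_overhead"))
block = rest.add(MotionNode("block_high"))
swing.add(MotionNode("recover_to_rest"))
block.add(MotionNode("counter_swing"))

node = rest
for frame_threat in [3.0, 1.4, 0.6]:   # threat closing in over three decisions
    node = step(node, frame_threat)
    print(node.name)
```

A production tree would hold hundreds of such clips per creature, which is exactly why the clips themselves need to come from motion capture rather than hand animation.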

Creating the centaur was one of R&H's toughest technical challenges on many fronts. Westenhofer realized two things: centaurs look silly when their arms have nothing to do, and the upper half must drive the actions of the lower. Photo credit: Phil Bray.

Several weeks prior to this meeting, I conducted a survey of centaurs that had appeared in past films. Examples in The 7th Voyage of Sinbad, Fantasia and several others were reviewed with Andrew to see what we liked and disliked from past attempts. Two things became obvious: 1) centaurs look silly when their arms have nothing to do; and 2) it is imperative that the upper half drives the actions of the lower. The first is easily solved by always giving them a shield or a sword to hold. The second will be the key issue to solve from a motion capture standpoint, since our expected plan is to capture the upper and lower halves separately. We can capture a horse, and we can capture a human, but the trick will be to make these separate captures feel like they come from one creature.

Mar. 6, 2004

I arrived in Auckland, New Zealand, three days ago. Today we are on our first tech scout of the locations where we will be shooting the major sequences of the movie, including the battle and Aslan's camp. These are scattered all over the South Island in some pretty spectacular locales. It is worth noting that these locations are quite remote. I say this because I've been dragging a 40-lb. camera case with our dual-camera High-Dynamic Range Imagery (HDRI) rig up and down those remote locations! This behemoth consists of two 35mm film camera bodies with 180° fisheye lenses attached. They are mounted back to back so they can see a complete spherical image of the world around them. Each camera is programmed to march through a set of 10 exposures, which, when combined, record the complete intensity range of the surrounding environment. Fortunately, this trip is the swan song for this rig, as we are building a much more compact (and light!) digital system for the main shoot.
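
For a rough sense of what the exposure bracketing buys you, here is a minimal sketch of merging a bracketed series into a single high-dynamic-range image using OpenCV's Debevec method. The file names, number of brackets and exposure times are placeholders, not the values from the actual rig, and this is just the standard library recipe rather than the production pipeline.

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures from one fisheye camera; the real rig
# stepped through 10 exposures per 180-degree hemisphere.
files = ["bracket_00.jpg", "bracket_01.jpg", "bracket_02.jpg", "bracket_03.jpg"]
exposure_times = np.array([1/1000, 1/250, 1/60, 1/15], dtype=np.float32)

images = [cv2.imread(f) for f in files]

# Recover the camera response curve from the brackets, then merge the
# exposures into a single floating-point radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)

merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)

# Store the full-range result; the two fisheye hemispheres would then be
# stitched into a spherical environment map for image-based lighting.
cv2.imwrite("hemisphere.hdr", hdr)
```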

R&H handled the majority of the battle sequence at the end of the film, which required the creation of a large number of hero CG characters and a simulation to deal with their combat.

The reason I am bringing this system now is for our pre-lighting phase of character development. We are going to be building, texturing and lighting a large number of creatures, so it is critical that they are all set up in a consistent environment. It is even better if they can be set up in a lighting environment that matches what we will see in the actual photography. Thus, I'm out to capture HDRI images of key locations such as the battlefield.

Having a good representative environment early on is important for any character show. Even on jobs with just a single character, the work is done by a team of texture painters and lighters, and each might have their own ideas about whether to bake color settings into the texture maps or into the shaders themselves. On this job, we're going to have many characters (40 at this point), so consistency is essential. Without it, we can definitely expect that characters will have color and brightness levels all over the map, and it will require hand adjustments per character that we will not have time for.

The other key thing happening on this tech scout is that we are having Paul Maurice and his company, LIDAR Services, LIDAR the entire battlefield, Aslan's camp and the Practice Grounds. LIDAR is a process that uses laser imaging to measure distances to all objects in its field of view. The result is a high-res point cloud of amazing accuracy. Paul's crew combs over this massive expanse (the battlefield alone is over a thousand yards across) and covers it from many vantage points. The end result will be a highly accurate terrain model that we will use for tracking and as a terrain mesh on which to run our Massive simulations. The previs team will also be using the same data to rework the sequences they've done with real camera angles. This will be crucial reference during the shoot, which will often consist of blank plates, or plates with only a handful of extras, to inform the camera crews what the final shot is intended to be.
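
To give a flavor of how a scan like that becomes a simulation-ready surface, here is a minimal, hypothetical sketch (pure NumPy, fake data) that bins a point cloud into a regular heightfield — the kind of terrain grid an agent simulation can query to keep feet on the ground. The real pipeline would produce a far denser mesh; the function names and cell size here are invented for illustration.

```python
import numpy as np

def points_to_heightfield(points, cell_size=1.0):
    """Bin an (N, 3) point cloud into a regular grid, keeping the highest
    sample per cell. Returns the height grid plus its world-space origin."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / cell_size).astype(int)
    nx, ny = idx.max(axis=0) + 1

    heights = np.full((nx, ny), np.nan)
    for (ix, iy), z in zip(idx, points[:, 2]):
        if np.isnan(heights[ix, iy]) or z > heights[ix, iy]:
            heights[ix, iy] = z
    return heights, origin

def sample_height(heights, origin, cell_size, x, y):
    """Look up the terrain height under a world-space (x, y) position."""
    ix = int((x - origin[0]) / cell_size)
    iy = int((y - origin[1]) / cell_size)
    return heights[ix, iy]

# Fake scan: a gentle slope with noise, standing in for the real point cloud.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(50000, 3))
pts[:, 2] = pts[:, 0] * 0.05 + rng.normal(0, 0.1, size=50000)

grid, origin = points_to_heightfield(pts, cell_size=2.0)
print(sample_height(grid, origin, 2.0, 50.0, 50.0))
```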

Mar. 9, 2004

Today we visited Weta Workshop in Wellington, New Zealand, to review progress on the maquettes they were creating and the armor and weapons under development. Weta Workshop is in charge of the design of all of the mythical creatures in the film, like the centaurs and the fauns, and they are also making a maquette of Aslan. All of these are being cast in resin and shipped to Los Angeles to be cyberscanned by Gentle Giant Studios. All of the maquettes are approximately three feet high, which was deemed to be the best compromise between the scale of detail and the practicality of casting and shipping them.

R&H worked with a prosthetics/animatronics firm on models and designs to ensure consistency between the prosthetic and CGI creations. Rules were set up for the human actors playing the fauns (above) and centaurs. Photo credit: Phil Bray.

It's interesting to view the maquettes in person, as I had been seeing them for several weeks during videoconferences while they were in production down here. The sculptors and I have been collaborating during these discussions to make sure that their designs also work for us technically. We plan to scan these maquettes for our models, so details like the specific posing of limbs have to be right to facilitate the rigging process.

Of particular interest is the centaur design. There are two issues Andrew was most sensitive to. The first is that it not look like a pantomime horse, reminiscent of two-man horse costumes. This issue is avoided by cantilevering the human torso out in front of the horse's front legs. This feature also makes for more elegant lines in the form, as the human torso seems to blend out of what would have been the horse's neck. The second issue concerns the size of the human vs. that of the horse. Andrew really wants to avoid a giant human body. The upper half has to have a one-to-one correspondence with a real human. The horse as well needs to feel impressive and not be pony-like. Weta has modeled the maquette to work with a real human and a 14-hand horse (each hand is four inches), and has positioned the human body on the horse in a way that achieves a satisfying ratio.

Mar. 26, 2004

Today we had a once-in-a-lifetime opportunity to be in a cage with a real lion, leopard, cheetah, bear, hawk and several other animals, all courtesy of the production's animal training facility, Gentle Jungle, in Frazier Park, California. Our ability to build lifelike representations of these exotic animals requires detailed reference of their fur, eyes, mouths, etc. Up until this point, we had mined everything we could from books and nature videos, but none of them were as close or featured the specific areas that we needed. Today we were able to get in there with a hi-def video camera and lots of still cameras to get all the detail we needed. It was pretty thrilling to be in there with the trainers to see the animals up close. In addition, we were able to see the animals perform specific actions that we would be able to match during our rigging process to make sure we had captured all the subtleties of their musculature and skin movement.

R&H scanned head sculptures for the minotaurs (above) and other mythical creatures to ensure that its models would perfectly match the on-screen performers. Photo credit: Phil Bray.

Apr. 13, 2004

Today we shot video for a centaur and faun test at KNB, the facility in charge of the prosthetics and animatronics for the movie. My team has been working together with Howard Berger and his crew ever since we both started in December of last year, sharing whatever models and designs were appropriate to ensure consistency between the prosthetic and CGI creations. A good example was that we supplied our digital model of Aslan both in a standing pose and lying on his side, from which KNB was able to directly create a life-size form to use for their animatronic lion. So often the CGI and the animatronic creations follow parallel paths based on concept art, which yields things that are similar but not identical. KNB also supplied us with head sculptures for the minotaurs, minoboars, cyclopes, goblins and so on that we were able to scan, which ensures that our models will perfectly match the on-screen performers.

Our test today is to nail down the process and rules we will need to follow to position and direct the human actors who will be playing fauns and centaurs. We know there are going to be many scenes where we will have to shoot real actors on set, so it is essential that we are able to get upper-body motion that makes sense when we replace the lower half with a horse body or goat legs. This test will experiment with several ideas, ranging from low-tech walking on tiptoes to special shoes and, in the case of the centaur, walking on a platform and walking with platform shoes. To prepare ourselves for this day, we've already had our animators do a lot of motion studies of full CGI fauns and centaurs to try to get a clear picture in our heads of what we want the upper half to do. Today is really a test of which reverse-engineering technique works the best and is still sound from a practicality and safety standpoint for live-action shooting.

We won't know until we are able to actually replace the legs in the computer, but the tiptoe approach seemed the most natural. For the centaur, a platform shoe gave an interesting result, but I'm worried about safety when the actors are walking on steep mountain slopes. More than likely we will need some sort of raised platform for them to perform on.

Apr. 19, 2004

Simultaneously with the video test, we've been doing motion capture experiments with Giant Studios in Culver City, California. Our motion capture director, Michelle Ladd, had a performer go through some typical faun actions using the same set of techniques we employed on the KNB video test. These are being retargeted in realtime to our faun rig so we can see how each performs on the real creature. Retargeting is a process that takes the motion-tracked joints of the actor's real legs and remaps them to similar points on the target model. In this case, it is a reverse goat leg. In truth, it's not really reversed; it's just that the heel of a goat is so long and held off the ground at such a steep angle that the leg appears to have three segments instead of a human's two. The retargeting software takes this into account and uses some custom procedures to give a natural pose for the goat leg in response to the captured data.
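
To give a flavor of what the retargeting step has to do, here is a highly simplified sketch. The joint names, the redistribution weights and the 60-degree resting heel angle are all invented for illustration; the production solver is far more sophisticated. The idea shown is only the redistribution part: the captured human knee bend is spread across the goat leg's two lower joints so the extra raised-heel segment shares the motion.

```python
# Captured human leg angles per frame (degrees): hip flexion and knee bend.
# These would come from the tracked performer; the values here are made up.
human_frames = [
    {"hip": 20.0, "knee": 10.0},
    {"hip": 35.0, "knee": 45.0},
    {"hip": 15.0, "knee": 70.0},
]

def retarget_to_goat_leg(hip_deg, knee_deg):
    """Redistribute the human knee bend across the goat leg's knee and its
    long, raised heel segment. Weights and the resting heel angle are
    illustrative, not production values."""
    goat_hip = hip_deg                      # hip transfers roughly one-to-one
    goat_knee = knee_deg * 0.6              # part of the bend lands at the knee...
    goat_heel = 60.0 + knee_deg * 0.4       # ...the rest at the raised heel
    return {"hip": goat_hip, "knee": goat_knee, "heel": goat_heel}

for frame in human_frames:
    print(retarget_to_goat_leg(frame["hip"], frame["knee"]))
```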

CGI and animatronic creations often follow parallel paths based on concept art. R&H gave its digital model of Aslan to KNB, which used it to create a life-size form for its animatronic lion.

Today was really a eureka moment when I finally saw the resulting CG faun walking on the screen. The retargeting is working very well, and the character seems amazingly balanced and natural over its reverse legs. Interestingly enough, the tiptoe approach works the best overall. The heel-support shoes end up coming through in the action, and it feels like the faun is wearing high-heeled shoes in the way the foot lands and the weight shifts over the leg. I realize at this moment that sometime in my future I will be telling a cast of stuntmen that they are going to be walking around on their tiptoes in greenscreen tights. Ah, the magic of visual effects!

Apr. 22, 2004

Richie Baneham, our animation director, and Michelle have just completed a series of videos to be used as motion reference for the performers of various creatures. One of the issues we realized early on is that many creatures will be achieved in a variety of ways. In some shots, a stunt performer will be acting on set in a prosthetic costume, while in others a full CGI version of the character will appear. Most of our motion capture will be done concurrently with the live-action shoot, which means we all have to come to an agreement now as to how each creature should move.

We started this process by holding discussions with Andrew regarding his thoughts as to the nature of each creature and got a feel for what he had in mind for their movement/fighting styles. Michelle and Richie then worked with performers in Los Angeles to create a proposal video with these concepts in action. These videos will be brought to New Zealand, where we will go over them with the stunt coordinator and incorporate his notes into a final product that will be used as reference both on set and by the motion capture performers back in Los Angeles.

The centaur continues to prove challenging in every task, including this one. Michelle's clever idea was to select a series of video clips of horse actions, including some featuring the famous Lipizzaner Stallions, and then superimpose them over footage of a performer matching the corresponding moves. Watching the video takes a little bit of imagination, but it is surprisingly successful. The Lipizzaner Stallions are horses trained for battle, so their amazing high kicks, matched with a sword-wielding human, provoke a lot of ideas for our battle shots.

A lot of the pre-production period is spent gathering prop reference, like finding the white horse to play the unicorn. Photo Credit: Phil Bray.

May 20, 2004

Our creature development is well underway. Our creature supervisors, Will Telford, Mike Sandrik and Nori Kaneko, have been hard at work on most of the exotic animals, including Aslan, polar bears, leopards, cheetahs and the rhino. Prelighting has been working in the HDRI environments from the images taken on the tech scout. Today was the official roll-out of another development crucial for the job: what we call our Creature-Kit (CK). The CKs formalize the rigging of bipeds and quadrupeds so that every rig will have the same naming conventions, axes format and animation controls. This will be critical, since we know our animators will need to be able to jump from character to character with no time for retraining between each one. The other advantage of the CKs is that they can be quickly adjusted to fit any model. Our rigging time has been significantly reduced, at least in the area of marrying a skeleton to a model. This allows us more time to finesse the areas that need more character-specific attention, such as the muscles and other soft-body deformations.
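
A toy illustration of what a standardized rig convention buys you (the joint names and the fitting routine below are invented, not the actual Creature-Kit spec): if every quadruped exposes the same control names, one fitting routine or animator's script works on all of them without per-character changes.

```python
# Hypothetical standard joint set shared by every quadruped kit rig.
QUADRUPED_JOINTS = [
    "root", "spine_01", "spine_02", "neck", "head",
    "front_l_shoulder", "front_l_elbow", "front_l_ankle",
    "front_r_shoulder", "front_r_elbow", "front_r_ankle",
    "rear_l_hip", "rear_l_knee", "rear_l_ankle",
    "rear_r_hip", "rear_r_knee", "rear_r_ankle",
]

def fit_kit_to_model(joint_positions):
    """Check that a new model's landmark positions cover the full standard
    joint set before the shared skeleton is scaled onto it.
    'joint_positions' maps joint name -> (x, y, z) landmark on the model."""
    missing = [j for j in QUADRUPED_JOINTS if j not in joint_positions]
    if missing:
        raise ValueError(f"model is missing landmarks for: {missing}")
    # A real kit would now scale and snap the shared skeleton to these
    # landmarks; here we simply return the validated set.
    return {j: joint_positions[j] for j in QUADRUPED_JOINTS}

# Because the names are identical on every creature, a script that keys
# "rear_l_knee" works on the leopard, the rhino and the wolf alike.
landmarks = {name: (0.0, 0.0, 0.0) for name in QUADRUPED_JOINTS}
print(len(fit_kit_to_model(landmarks)), "joints fitted")
```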

In the area of soft-body work, we have also produced a very successful test of a running rhino that uses harmonics as a means of simulating true skin dynamics. Harmonics use an oscillation function on top of predetermined soft-body deformers to move the skin in response to global body motion. It's not true dynamics in that there is no attempt at volume preservation, but several other rigging tricks are employed to offset this limitation. Its big advantage is that it's a lot faster to compute, so it can be used more liberally throughout our rigs. From the reference we have been studying, significant skin wiggle and muscle sway is one of the common traits of large-animal movement. Harmonics will allow us to match this efficiently.
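
A minimal sketch of the underlying idea, assuming a single damped oscillator per skin region with invented constants: the skin offset lags and wobbles in response to the body's acceleration, layered on top of the existing deformation, with no volume preservation attempted.

```python
import numpy as np

def harmonic_jiggle(body_positions, dt=1/24.0, stiffness=60.0, damping=6.0):
    """Drive a per-frame skin offset with a damped harmonic oscillator.
    'body_positions' is the animated path of a point on the body; the
    returned offsets would be added on top of the muscle/soft-body
    deformation. Constants are illustrative, not production values."""
    offset = np.zeros(3)
    velocity = np.zeros(3)
    prev_pos = body_positions[0]
    prev_vel = np.zeros(3)
    offsets = []
    for pos in body_positions:
        body_vel = (pos - prev_pos) / dt
        body_acc = (body_vel - prev_vel) / dt
        # Spring pulls the offset back to zero; body acceleration excites it.
        accel = -stiffness * offset - damping * velocity - body_acc
        velocity = velocity + accel * dt
        offset = offset + velocity * dt
        offsets.append(offset.copy())
        prev_pos, prev_vel = pos, body_vel
    return np.array(offsets)

# A point on the rhino's flank bouncing as the run cycle moves it up and down.
t = np.linspace(0, 2, 49)
flank = np.stack([np.zeros_like(t), np.zeros_like(t),
                  0.2 * np.abs(np.sin(6 * t))], axis=1)
print(harmonic_jiggle(flank)[:5])
```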

Jun. 17, 2004

Design work continues at a furious pace at this point. With the start of principal photography just around the corner, we have been focusing on things that will have an impact on how specific things are shot. Today we went over a motion study of Aslan running with the girls on his back. This all-CG version will be the basis for a motion rig that KNB is building for a greenscreen shoot at the end of the year. We've also posed a lot of our characters to supply 3D data for the art department, which has to make a large number of practical stone statues for the White Witch's courtyard. They use this data as a starting point so their sculptors don't have to work from scratch. A lot of this time period is also spent gathering prop reference, reference for specific creatures like the white horse that will play the unicorn, and hero characters like Otmin, the minotaur general, whose armor and prosthetics are nearly finished and signed off.

Aug. 10, 2004

Today we showed Andrew a prelit test of Aslan walking with two girls, which we shot on video outside of Rhythm & Hues. This was extremely promising in many respects. We used our new digital HDRI system to capture the scene lighting, and the shot came together with Aslan integrated into the scene right off the bat. One of our prelighters, Gregg Steele, has also been working on Aslan's mane with more clumping and fur detail that's starting to really show promise. It was also the first time we were able to test some of our fur dynamics and collision detection in a shot. The lion still has a long way to go, but even now it's a very encouraging benchmark.

Sep. 14, 2004

A six-month motion capture process began today. Each character capture session involves a week of rehearsals and tests, followed by one to two weeks of actual capture. At the end of the first week in each case, I will receive a retargeted sample performance of the actor onto our CG model. This will be shown to Andrew as a final check to make sure everything is going as we had discussed. By now the motion trees for each character are complete and are being used by Michelle to direct the list of actions that have to be captured for each Massive agent to utilize.

First up is the werewolf. The werewolves in The Lion, the Witch and the Wardrobe will move about in a chimpanzee-like fashion, preferring a loping quadrupedal run. One of our animation sequence supervisors, Matt Logue, worked out a run cycle that we're using as a basis for the capture. The actual capture strategy can vary on a creature-by-creature basis. Some, like the faun, use motion capture for everything. Others, like the gryphon, can't really be captured, so we'll be hand-animating all of the actions in their motion-tree. We use the term supplemental animation (supanim) for these cases where animators create the individual clips. With the werewolf, we're a little uncertain whether the performer will be able to pull off the loping run, so there is a chance it will be achieved with a mixture of motion capture and supanim.

During a tech scout in March, Westenhofer traveled with Paul Maurice and his company, LIDAR Services. Aslan's camp was scanned with LIDAR, a process that uses laser imaging to measure distances to all objects in its field of view.

Sep. 20, 2004

The moment has finally arrived: we've started shooting principal photography for one of our effects sequences. The first sequence up is the scene where Aslan is sacrificed on the Stone Table amongst a throng of the White Witch's evil minions. This will be a technically challenging sequence, employing hero CGI characters and an army generated in Massive.

The first thing becoming readily apparent is that there are very few creatures in this film that are fully practical. With the exception of the ogres, cyclopes, goblins and boggles, everything else needs some sort of enhancement, especially in the area of leg replacement. My first day on set, watching the prosthetic-laden extras arrive, felt just like wading into a sea of green legs. This will certainly be a challenge in post-production. KNB has built a sort of furry pants with an actual minotaur hoof-shoe that can be used in wider shots. These won't work closer in, however, because you can see that the legs aren't bending right. We're learning very quickly that the challenge will be in rapidly determining what is in the view of the camera and strategizing where best to use the prosthetic extras and who should be wearing green legs vs. furry pants.

Fortunately for this sequence, the need for creature continuity is helped a bit by the randomness of the throng around the witch and table. So long as key characters are maintained in their positions, we can be a little freer in distributing what we have around the set. Our rationale is to always fill whatever is closest to camera with live action and let CGI fill in the rest. We were also careful to leave some gaps in the crowd to insert fully computer-generated creatures like werewolves and harpies.

From a compositing standpoint, one thing that is also going to be difficult is torch smoke. The director of photography, Don McAlpine, needs to light the scene using torchlight to achieve the desired look. The original idea was to shoot the stagework on the Stone Table set with a greenscreen backing. On the very first take, the torches emitted so much smoke that the greenscreen became more of a hindrance. All subsequent shots will be done using a black backing instead. Even so, it is going to be a big challenge to integrate the CGI characters back into the shot behind all of the smoke.

Nov. 1, 2004

We have moved down to the South Island of New Zealand and are starting to shoot exteriors. Right now we are in Oamaru shooting the scenes in Aslan's camp. I am starting to believe the green legs are reproducing, because there seems to be a never-ending flow of them. But on a more interesting note, today I downloaded video clips for review with Andrew, and one included a motion test done by one of our lead animators, Michael Hozl. Currently the battle, which we are set to begin shooting in three weeks, opens with a shot of a hawk flying in and landing on Peter's arm to give word of the White Witch's approach. The intention is to use a CGI hawk for the fly-in and a real one wherever possible on Peter's outstretched arm. This was originally meant to be an eagle, but quarantine rules in New Zealand prohibit the importation of birds, and no trained eagles were available within the country. Michael had been working on the rig for our gryphon (a mythical half-eagle/half-lion hybrid), doing motion studies to test its performance. One of these studies involved the gryphon flying in and landing, complete with folding wings. I showed this test to Andrew, and we realized right away that the gryphon is a much more impressive creature for this task, so the shots are going to be reframed, and more space is being cleared on the mound to allow it to be used in the final shot.

In the battle sequence, the basic philosophy was to use extras in prosthetics in the foreground, hero CGI characters in the middle ground and Massive armies in the back.

Dec. 18, 2004

After a long and arduous shoot in the area of Arthur's Pass, we have finally finished our shoot of the battle sequence, and with that, principal photography in New Zealand has wrapped. The battle shoot was logistically challenging from a location standpoint alone, but it was also tough on the visual effects side. The basic philosophy of using extras in prosthetics in the foreground, hero CGI characters in the middle ground and Massive armies in the back is still there, but there were many more cases where the production timelines left us with only enough time to shoot empty or nearly empty plates instead. The weather was also a constant challenge, so there will be a lot of work with sky replacements and other compositing tricks to make it all feel sunlit.

Our predictions about the necessity of previs were accurate. From shot to shot, creatures might go from being prosthetic in one and fully CGI in the next. Without the previs to refer to, continuity would have been nearly impossible to keep track of. The cast and crew relied on it for every shot, and we even maintained an on-set working cut that replaced the animatics with live plates as they became available. Andrew spent a lot of time in preproduction working specifically on the battle previs. His efforts really show in that the latest rough cuts in editorial are very faithful to the shots that were laid out then.

Everyone is now returning to Los Angeles to prepare for the post-production work that lies ahead. There is an additional three-week shoot in the Czech Republic, but that part is covered by the work that Sony Imageworks is doing, so now it is time for me to return and work with my team back at the studio to finish the last of our character development and to proceed with actual shot production, some of which is already underway. This will be covered in my next installment.

Bill Westenhofer is the visual effects supervisor for the Rhythm & Hues team on The Chronicles of Narnia: The Lion, the Witch and the Wardrobe. His numerous credits as visual effects supervisor include Elf, The Rundown, Men in Black 2, Cats & Dogs, Stuart Little I and II, Frequency and Babe: Pig in the City, the latter a nominee for Best Visual Effects from the British Academy of Film and Television Arts. Starting as a technical director with Rhythm & Hues in 1994, Westenhofer's lighting and effects animation were featured in Batman Forever, when he first worked with Narnia director Andrew Adamson, and in numerous television commercials. Westenhofer quickly rose to CG supervisor for Speed 2: Cruise Control, and continued in that role for Spawn, Mouse Hunt, Kazaam and Waterworld.
