Framestore Brings 'Wrath of the Titans' an Eye for Detail

Led by Visual Effects Supervisor Jonathan Fawkner, Framestore created some 300 shots for the Warner Bros. feature “Wrath of the Titans.”

Press release from Framestore:

Framestore created some 300 VFX shots across two elaborate sequences for Wrath of the Titans, the Warner Bros. sequel to 2010’s Clash of the Titans. The film was produced by Basil Iwanyk and Polly Johnson, and directed by Jonathan Liebesman, with Nick Davis returning as production Visual Effects Supervisor.

Leading the Framestore team was Visual Effects Supervisor Jonathan Fawkner. “We were chiefly tasked with two sequences,” he says. “The first involved an encounter between Perseus’s team and a trio of Cyclops; the second (which follows the first closely within the film’s chronology) concerned Perseus’s group’s assault on The Labyrinth, a towering maze which provides access to Tartarus, wherein Zeus is imprisoned. So we had two very contrasting types of visual effects to deliver: photo-realistic near-humans, albeit giant ones, interacting with environments and human actors; and an impossibly vast, constantly moving architectural environment.”

An Eye For Detail

“For the three Cyclops, we decided from the start that performance capture was the only route, but that we’d play it a little differently from usual,” explains Fawkner. “An initial session was recorded before the shoot to explore the behaviour of the Cyclops and inform the cast and crew. Then the plates for the sequence were shot on forest land in Dorking, England in April 2011, without mo-cap referenced directly in camera, relying instead on the tried and tested ‘tennis ball on a stick’ technique, which gave the director and cast the most flexibility. By the time we came to the actual performance capture sessions at Shepperton Studios, we had a sequence in the can, cut and camera tracked. We had an accurate scan of the set, and by carefully laying out proxy trees and other obstacles, and by matching the topology with a movable sloping deck, we were able to composite the Cyclops into the plate live, providing an incredibly intuitive and comprehensible tool.”

Former international rugby player Martin Bayfield was cast for the mo-cap shoot, not least because, at 6ft 10in tall, he had something of an edge when it came to playing giants. Involved from an early stage was Animation Supervisor Paul Chung, who says, “Since Martin Bayfield’s performances would be used for all three, very different, Cyclops characters, I did a lot of research into how each of them might move and perform. I wrote some biographical notes on each family member to give Martin material to inform his performances: the hot-headed, muscular younger brother; his fatter older sibling, who always has to rescue him from the scrapes he gets into; and their dad, who both of them are a bit scared of. That sort of thing. Bayfield took all this on board and gave excellent physical performances.”

Bayfield would study the plate, the timing and the rhythm and, with Fawkner directing, attempt to hit his marks in each shot.
The team would try as many variations as time allowed, and the various takes were delivered to the client to pore over and make selects from within hours of the capture, ready for editing in the normal fashion. Says Fawkner: “The massive benefit of doing the capture this way was that the animation was effectively blocked and locked very early on, leaving more time to finesse the details.”

Motion capture has a reputation, not unearned, for creating a slightly unnatural ‘mo-cap look’. Many different elements contribute to that, from where the markers are placed on the actor, to how the data is solved, to how it is transferred onto the character. The data passes through perhaps a dozen stages, and detail can be lost at any one of them. So Framestore developed a new pipeline, using several witness cameras to complement the mo-cap input and increase its accuracy. With Wrath, they also became the first company in the world to forge a partnership with IKinema, which makes software for transferring mo-cap onto a creature of a different scale. Framestore also built tools based on IKinema software to help quality-control the solving process, comparing the solve directly against the witness-camera footage. Not only did the solver prove incredibly accurate, it also gave a newfound flexibility that streamlined the entire mo-cap solving and retargeting pipeline.

Nicolas Scapel, Head of Rigging, takes up the story. “So we got the action perfectly from Martin Bayfield the actor to Martin Bayfield the digital model. The key to great mo-cap is how you give it to animation. Many studios have a mo-cap department with a large motion-editing team (they cannot do technical animation), and they fiddle with the mo-cap to make it work in the shots before it goes to Animation, who are often less than thrilled with what they get.
We want to give the animators something as close as possible to the raw performance and let them work it up from there.”

Scapel’s rigging team (Leads Laurie Brugger and Matthew Goutte, and Rigger Mauro Giacomazzo) had also worked together on the acclaimed Dobby and Kreacher appearances in the final Potter films, but this was a whole new level of intensity. “We soon realised that these creatures were much more dynamic than Dobby and Kreacher, and that anatomically there was lots more to do,” recalls Scapel. “We ended up with about 10 times the amount of data that we generated for Potter. Our hero, or base, mesh was about half a million polygons, and the maps we used were each the equivalent of a 900-megapixel image, times maybe 50 different maps. We also had four months less than we did for Potter, so it was quite a challenge.”

Creating the Cyclops’ anatomy and musculature involved a massive amount of detail, given that the creatures are frequently shown in extreme close-up. Creature Modeler Scott Eaton was with the team for much of the project, and his anatomical expertise was invaluable. He produced detailed sculpts of all three characters, along with inverse meshes representing their internal anatomies. The team used a generic human for the base, cutting out a spot for the eye patch and then bumping up the resolution. The rigging team gave the animators controls over muscle tension, since the same action can look completely different depending on how tense the muscles are.

The result was a rig with as much detail as possible but no dynamics, so the creature FX team would run a simulation of the fat and muscle jiggle and the skin sliding. To achieve this they extended the approach they had used on Dobby and Kreacher: essentially a volume mesh providing a certain thickness under the skin, simulating a volume rather than running a typical surface simulation.
This gives the effect of flesh having mass. Says CG Supervisor Mark Wilson, “Nicolas Scapel and I worked hard together to ensure that on the shading and rendering side we anchored the displacement of the mesh to the movement of the skin, so that the two things worked together and it doesn’t look like a series of surface events: the Cyclops has internal organs, bones and muscles.”

Notes Fawkner, “From the outset I wanted to push what has been termed ‘physically plausible lighting’, if not perfectly ‘physically realistic lighting’. The lights and materials have physically plausible characteristics in a way that Renderman hasn’t previously been able to deliver. Renderman 16 allowed us to ray trace the whole character, the weapons and the interactive effects. Paying close attention to the environment and to the position and intensity of the light sources, all of which were surveyed and photographed per slate, produced a deeply rewarding result that a more traditional Renderman pipeline simply doesn’t compare with. Fabulously detailed textures, led by Michael Borhi, meant we could get close enough to see the Cyclops’ fingerprints.”

Building on the experience and heritage Framestore has rightly established, the team retooled their skin and hair pipelines, optimising them for the demands of ray-traced global illumination. Says Wilson: “Renderman 16 didn’t actually come out until after we’d nearly completed the project, so we were using pre-release versions throughout, in some cases developing tools and routines in-house that would later be supported natively by the software. That was a little nerve-wracking at times, but Pixar were enormously helpful.”

The creatures’ faces posed many of the biggest challenges. The one element that was pure animation, informed by live reference material but not by mo-cap, the faces were designed to be appealing without being cartoony.
So much of how we read facial expressions derives from the brow line and the eyelids and the way they combine to wrinkle and create emotional information, so a single eye presents a big problem. The team played around with different eye elements and ended up with a large central eye with two modified tear ducts. Anger, fear, sadness and pain all had to be expressed. They created a highly flexible brow which could be treated either as a left- or right-eye feature, or as a hybrid of both. They also developed a system that reads the amount of stretch and compression in the skin and automatically generates appropriate wrinkles; this system was also used successfully on the hands.

Notes Compositing Supervisor Chris Zeh, “The Cyclops was, surprisingly perhaps, more straightforward than the Labyrinth work. This was largely because the renders were very good. They’d put a lot of time, thought and R&D into making the creatures look as though they were there in the plate, which was very helpful to my team. It was somewhat complicated by the decision to switch from the darker, foggier look they’d shot to brighter sunlight, but we effectively re-lit it very successfully.”

Amazing Stories

The Labyrinth is first seen by Perseus and his men from a distance. It is an impossibly tall tower, with circular layers in constant revolution, rotating in different directions and grinding against one another. They climb to the top, where there is a battle with the evil Ares (Édgar Ramírez). After gaining entry, our heroes are separated inside the cavernous interior until Perseus fights and defeats the Minotaur, whereupon the entire interior realigns itself and a gateway to Tartarus appears.

“The long shots of the tower were a straightforward, albeit enormous, modeling job,” explains Fawkner, “and the fight at the top involved a lot of shockwaves, god weapons flashing and so forth.” This part of the sequence was further complicated by a decision, taken quite late in production, to ratchet up the tension during the fight by creating a magical door that Hephaestus (Bill Nighy) has to open. This was shot against green screen, with the Framestore team adding the stone door puzzle elements.

“On the Labyrinth as a whole we usually body-tracked the actors and placed CG dummies in situ,” continues Fawkner, “so that when the dummies were lit correctly and looked like the guy in the plate, you knew you were in a good place. I wanted to bring volumetric lighting into the picture, too, as a complement to the physical lighting I’d already got in place. We were using Arnold for this sequence, which chews through geometry like nobody’s business. We actually had the film’s gaffer come in and talk to us about how he had lit the (relatively contained) set, and how he would have liked to light the Labyrinth had it been a physical set. This brought a welcome element of realism and practicality, but was also creatively really rewarding.”

The final shot of the sequence involves the set breaking apart to reveal the gargantuan chamber that contains the Labyrinth before reforming into the gateway to Tartarus. With nothing more than a single figure on 1,000 frames of green screen, Framestore needed to bring all its disciplines together. The team recreated the entire Minotaur set, lighting and set dressing; then the effects team, led by Rob Richardson, proceeded to animate and decimate it. Houdini and Maya simulations of falling columns and crashing stone, combined with hundreds of 2D elements and an ever-moving, ever-changing lighting scene, provided a memorable and fitting finale to Framestore’s work.


Formerly Editor-in-Chief of Animation World Network, Jennifer Wolfe has worked in the Media & Entertainment industry as a writer and PR professional since 2003.
