Diverse animation and a new synthesis of techniques, a bevy of naturalistic environments, photoreal superheroes and a graphical mash-up are among the highlights of this year's VFX Oscar contenders.
Now that the VFX Oscar category has been expanded from three to five nominees, what's to become of the bakeoff? That was a hot topic of conversation last night during the reception at Kate's. Apparently one plan being considered for next year is to expand the bakeoff shortlist from 7 to 10 while trimming the 15-minute demo reels to 10.
"To me, the bakeoff is like an essential part of modern VFX culture," suggested Paul Franklin, the visual effects supervisor for Inception. "It's something that everyone aspires to: everyone wants to win an Oscar. But the actual bakeoff itself -- when I got to the bakeoff with Batman Begins as part of that team five years ago -- a huge part of it was that you were there, in front of your peers in the industry. Whereas if it's just some anonymous committee that decides on it, you've got no idea, they could've just spun the dice. But you feel if you got up there and gave a good account of yourself and you got people to see it and you can actually gauge people's reactions, it's a special night. And it would be a real shame if they did away with it."
Eric Barba of Digital Domain started things off by reminding his VFX colleagues that the groundbreaking Tron was not even allowed to compete for an Oscar because computers were considered an unfair advantage. He then admitted it was intimidating trying to live up to a legend with Tron: Legacy. Digital Domain not only raised the Tron bar with a host of new vehicles and environments, but also stressed how important it was to shoot in 3-D for an immersive experience. However, the biggest challenge was improving its performance capture capability (Face Plant) for turning the 60-year-old Jeff Bridges into the 35-year-old Clu avatar. Everyone knows what Bridges looked like in Against All Odds, and that's what they were aiming for, using the actor to help drive the performance as his younger self. This involved a smaller footprint, writing better tracking data and improved data wrangling (with the help of EA in Vancouver), but also putting the volume process into the hands of the animators with a new interface for faster results.
Meanwhile, in his Inception presentation, Franklin described how each sequence had its own unique technical challenge: the zero-gravity work required monumental rig and wire removal, plus rebuilt environments and floating CG objects at the end, and necessitated roto because there was no greenscreen work. The strange Limbo City posed all sorts of conceptual challenges, and they arrived at a procedural method for combining the structure of a glacier with 20th century architecture. For the Bond-inspired ski chase, they needed very convincing environment work and practical miniatures from New Deal -- and then blew it all up. And for the folding city -- which has become the film's iconic image -- they had a large logistical challenge in recording and reproducing the architecture of Paris so that it held up to the scrutiny it demanded.
For Clint Eastwood's unconventional Hereafter, Michael Owens revealed the importance of Scanline's improvements in fluid sim with Flowline for the creation of a naturalistic tsunami in keeping with the tone of the film about coping with near-death experiences. Aside from water interaction, digital doubles figured extensively in this sequence. They held a motion capture shoot to build a library of moves for digi-doubles (including Massive doubles). Not surprisingly, motions included running and stumbling actions, along with various reactions to the wave, accomplished with more conventional falls into crash pads. To simulate characters in water, they also used a traveling wire rig -- capturing everything from getting pummeled by strong currents to treading water, trying to surface and stay afloat, and floating dead underwater. Ultimately, much of this motion capture was combined with traditional keyframe animation, as animators worked to incorporate characters into digital water flows, both above and below the surface.
Ken Ralston joked about how much fun it was working with Tim Burton for the first time on Alice in Wonderland and figuring out how to communicate with him. He then described the synthesis of techniques at Sony Pictures Imageworks that went into the making of the fantastical film, showcasing an abundance of CG characters and virtual environments. They decided early on to acquire the live-action performance in a greenscreen environment, and many of the characters were a hybrid of live action and animation. Numerous motion capture tools were tested, mainly as reference for what was ultimately handled as animation. The challenge was to find the balance where CG and hybrid animated characters blended together with the live actors to look like they were part of the same world. Alice falling down the rabbit hole, for instance, is a combination of a live-action Alice on wire rigs on a greenscreen shoot, with the whole environment rendered in CG. The Red Queen, shot digitally, was accomplished by enlarging her head and giving her an hourglass waist so she would blend more naturally. The Tweedles were a hybrid of Matt Lucas' eyes, nose and mouth with keyframe animation, again using MoCap as reference.
For Scott Pilgrim vs. the World, Frazer Churchill explained how they were tasked with translating Bryan O'Malley's manga artwork and director Edgar Wright's pop cultural vision into a graphical-looking film. This was apparent in the very stylized fight sequences, all of which were based in a hyper-real, alternate world. Double Negative and Mr. X went through every storyboard to establish how to realize each frame: they'd identify which shots they thought would be slo-mo, Phantom digital, film, VistaVision or regular spherical; how much set to build and how much set extension to use; and which characters would be shot on bluescreen or digitally. Each frame in Scott Pilgrim became a marriage of physical and digital techniques, and they locked down their approaches early on, thanks to the extensive storyboarding, test shooting and previs that had already been done.
For Iron Man 2, Janek Sirrs, the overall supervisor, explained how ILM (under the supervision of Ben Snow) raised its heavy metal game: it not only created more CG suits for Iron Man, War Machine and the drones, but also got closer and lingered longer on the shots, thanks to the improved look and better animation. He also relayed the explosive firepower that went into Double Negative's Monte Carlo fight between Whiplash and Iron Man, along with the suit-up of his armor. And he detailed how ILM raised the stakes for the climactic battle in the Japanese garden, which required full use of Imocap, background plates, virtual cameras and other tools in the extensive arsenal to pull off a photoreal-looking battle.
With the penultimate Harry Potter and the Deathly Hallows: Part 1, Tim Burke explored how they've continued down the path of gritty realism for this road movie, set outside of Hogwarts for the first time. He said that MPC's opening set piece featured six of Harry's friends shape-shifted to look like the famous wizard to fool the Death Eaters. There were plenty of CG environments and CG digi-doubles, along with a mixture of stunt work and face replacements. For the sake of believability, they used the real performances of each actor to drive the CG Harry and then made the transformations a hybrid of Harry and the real characters. This entailed Daniel Radcliffe playing Harry seven times and using motion control to shoot multiple passes for every single shot with Radcliffe. Burke also described the improvements in both look and performance for Nagini the snake (MPC) and Dobby and Kreacher (Framestore).
Bill Desowitz is senior editor of AWN & VFXWorld.