
'Pacific Rim' and VFX Robot Porn

ILM and Guillermo del Toro discuss the latest robot/monster mash.

All photos courtesy of Warner Bros. Pictures.

The other day, director Guillermo del Toro geeked out over Pacific Rim at ILM, calling it "robot porn" and touting his successful partnership with the VFX giant. But the film was about more than great fight choreography between the robotic Jaegers and reptilian Kaiju monsters. It was also about creating a gritty aesthetic.

"I wanted to take a different approach," the director explains. "I wanted a realistic use of camera and lighting. I wanted camera work that was geographically understandable and believably located, including the impact of surrounding water. The heads are often chopped off and movement is out of frame. We have built in errors with camera positions that are real. We talked to the simulation guys early on in the process and I asked them to make the ocean and the water and the rain another character – to give it a real kind of dynamic."

For ILM's John Knoll, who's been promoted to chief creative officer, it was a constant negotiation between speed and scale. With his engineering background, Knoll wanted the physics to be as realistic as possible. But for del Toro, theatrical spectacle was more important, so they often cheated the physics to make the Jaegers faster and keep them from looking like slow, lumbering machines.

“Gipsy has the longest screen time and is the most complicated of the models and is equivalent to Optimus Prime in terms of geometric weight," Knoll concedes. "The rest are simpler and we built what was necessary. The Kaiju were simpler in terms of being single surface creatures. They are complicated sculptures but geometrically lighter than Jaegers. They're meant to be large so there was a large amount of displacement detail in them done in ZBrush at the highest resolution available. The Jaegers were done in Maya, but [VFX art director] Alex Jaeger liked modeling in 3D and used modo."

Hal Hickel, the animation supervisor, says the Jaegers were all so different that it was fun figuring out their capabilities, quirks and fighting styles. It was a constant give and take between live action and animation, since the Jaegers are controlled by two pilots inside them who share a neurological handshake called "the drift."

"I'd say the Kaiju were the most interesting in some ways because we do more creature work than robot work, Transformers movies aside," Hickel adds. "The Jaegers were a bigger animation challenge because aside from the scale issue, which we had with both [creatures], we had to figure out a way to make them operate smoothly and look powerful but at the same time seem mechanical. We didn't opt for mocap even though they're mostly bipeds and there was certainly a case to be made for mocapping the actions and then slowing them down. But we didn't want it to be that naturalistic and human. We wanted it to definitely feel like it was a machine with certain mechanical limitations. And I think finding that language took the longest time. At some point, you have to believe they could fight these monsters and win.

"The main challenge with the Kaiju was figuring out the personalities. There were weird scale things and a variety of creatures and the water interaction and destruction. Guillermo wanted everything to have an operatic feel, particularly the scenes out in the ocean where you've got water blowing off from the wind and the Kaiju are lit either from the helicopter beams or from the Jaegers shining on them. Or in Hong Kong, the way the neon would light up the rain and bloom all these bright colors."

But faced with a budget gap, ILM had to massively rework its pipeline to take on Pacific Rim. It had to not only cut out inefficiency and waste but also empower its artists with more sophisticated lighting and ray tracing rendering tools (Katana and Arnold).

"On Pacific Rim, there was a specific number that Legendary and Warner Bros. needed us to hit and we were coming in ahead of that even with the efficiencies factored in from Rango," Knoll suggests. "And I wanted to do this experiment to close the gap by swapping out these tools and think of a different way of working with the new tools. Guillermo really wanted us to do the show and was willing to partner with us to help this work. We tried to separate front end and back end processes and iterate on the layout and animation phases and tried not to iterate on the simulation and rendering and compositing. But all those things can be undone if your client doesn't buy into that idea and you suddenly get a whole bunch of notes and changes on work that's nearly finished. It worked out really well.”

Knoll had already experimented with Katana and Arnold for the parking garage sequence in Mission: Impossible – Ghost Protocol. That test was so successful that he wanted to try a whole movie using this approach and switch to an all-Alembic, cache-based workflow [an open computer graphics interchange framework that distills complex scenes into baked geometry].
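To make the "baked geometry" idea concrete, here is a toy sketch of what a cache-based workflow buys you (my illustration, not the actual Alembic API or ILM's pipeline): instead of every downstream department re-evaluating an expensive procedural rig, the animation is sampled once per frame and stored as plain geometry that lighting and rendering simply replay.

```python
import math

def rig_evaluate(frame):
    """Stand-in for an expensive procedural rig: returns vertex positions
    for one frame of animation."""
    angle = frame * 0.1
    return [(math.cos(angle), math.sin(angle), v * 0.5) for v in range(4)]

def bake_cache(start, end):
    """'Bake' the rig into a frame-indexed cache of raw positions, the way
    an Alembic export distills a live scene into sampled geometry."""
    return {f: rig_evaluate(f) for f in range(start, end + 1)}

cache = bake_cache(1, 24)
# Downstream steps read the baked samples instead of re-running the rig:
frame_12 = cache[12]
```

The design point is the same one Knoll describes: the front end (rigging, animation) is evaluated once, and the back end (simulation, lighting, rendering) consumes frozen data, which is why late notes on nearly finished work are so costly.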

"One of the objectives of trying a show with Arnold was that different shows are architected with different tradeoffs in mind," Knoll continues. "And RenderMan, for example, which we have decades of experience with, is architected around the idea of very high flexibility and computational efficiency potentially at the expense of cognitive workload of the artist, meaning that you can get decent render times out of it if you spend enough time setting it up properly and every dependent pass is something that the artists have to manage and think about. And I think we have plenty of experience knowing what the tradeoffs are: how long it takes an artist to set up something and to run a pass. I've been experimenting with commercial ray tracers that make a very different tradeoff. How do you do global illumination? You shoot more rays. How do you do subsurface scattering? You shoot more rays. It's technologically a relatively simple approach so it makes it much easier to implement things like interactive re-rendering, which has been a vexing problem for RenderMan.

"So the tradeoff for the ray tracers was that they're much easier to set up. It's all one pass (there are no dependent passes). You can easily support these interactive previewing tools and so I wanted to try a big show where we were trying to make it easier for the artists to set up given better tools to see what they were doing, knowing that on the other side of that the expense was going to be in potentially longer render times. If the artists have better tools to see what they're doing and it's easier to set up, then we're reducing the man hours that go into setting up and lighting a shot. And if you can really see what you're doing, you don't have to iterate as frequently. The long render times were definitely a painful experience and so we're now hoping to get the best of both worlds, given that RenderMan 18 gives you the easier setup with the computational efficiency."
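Knoll's "you shoot more rays" point can be shown with a toy Monte Carlo estimate (my illustration, under assumed numbers, not ILM's or Arnold's code): a brute-force ray tracer approximates a lighting integral by averaging random samples, and the noise falls off roughly as one over the square root of the ray count, which is exactly why the simple approach trades render time for setup simplicity.

```python
import random

def shade_with_rays(n_rays, seed=0):
    """Toy Monte Carlo 'hemisphere gather': estimate the average incoming
    light by averaging n_rays random samples of a known lighting function."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        x = rng.random()       # a random ray-direction parameter in [0, 1)
        total += 3.0 * x * x   # 'light' along that ray; the true mean is 1.0
    return total / n_rays

# Noise shrinks roughly as 1/sqrt(n_rays): each 4x increase in rays
# halves the expected error, at 4x the cost. No dependent passes needed.
coarse = shade_with_rays(16)
fine = shade_with_rays(16384)
```

Global illumination, subsurface scattering and soft shadows all fit this one mold, which is what makes interactive re-rendering easy: just keep firing rays and refining the same estimate.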

The Pacific Rim experiment worked so well that it's now become the template for how ILM will work in the future. Cutting out waste and inefficiency is an ongoing challenge, yet it remains the best way to combat the economic crisis in VFX, from which ILM isn't immune despite its new Disney ownership.

"I don't think it ever really ends," Knoll believes. His task now is to see the forest for the trees. "To keep this place healthy, you have to be constantly questioning your assumptions. Is there a better way of doing this?"

--

Bill Desowitz is former senior editor of AWN and VFXWorld and the owner of Immersed in Movies (www.billdesowitz.com). He's also a columnist for Thompson on Hollywood at Indiewire and contributing editor of Animation Scoop at Indiewire. Desowitz is additionally the author of James Bond Unmasked (www.jamesbondunmasked.com), which chronicles the 50-year evolution of 007 on screen, featuring interviews with all six actors.
