'Avatar': The Game Changer

Find out from Joe Letteri and others how Avatar has created a VFX revolution.

Avatar breaks the barrier between live action and digital moviemaking, changing the way VFX movies are made and experienced. All images courtesy of Twentieth Century Fox.

With its revolutionary virtual production techniques, Avatar has broken the wall between director and viewer, allowing us to experience a whole new visceral and immersive kind of stereoscopic cinema. According to James Cameron and his colleagues, Avatar is thus a game changer for the way VFX movies are made and watched, discussed and written about. No wonder Steven Spielberg proclaimed it "emotional spectacle."

And with an opening weekend of $77 million domestically and $241.5 million globally, Avatar is wowing viewers, too, getting the largest 3-D boost ever, including an IMAX record of $9.5 million, or about 13% of the total domestic gross: "It looks like we made a good bet," boasts Greg Foster, chairman and president of IMAX Filmed Ent., who has ridden this IMAX 3-D wave since The Polar Express. "We worked really closely with [Cameron] on this one. He's basically been over here or someone from Lightstorm every day for the last six months. The aspect ratio, the color grading, the audio, obviously the DMR, they've all had the IMAX DNA in it. And he's made three different versions of the film: the 2-D version, the Digital 3-D version and the IMAX 3-D version. The content is the same but each has its own nuance. He's such a perfectionist and what he's done is to customize everything to take advantage of the specific venues, so for us, what he's really been making sure is that every seat in the auditorium is a sweet spot."

Thanks to the virtual cinematography workflow created by Rob Legato, Cameron could watch on an LCD monitor, in realtime, how the actors' CG characters (or avatars) interacted with the CG Pandora, and direct scenes as though he were shooting live action. Digital and live-action moviemaking have, in effect, become one. In other words, everything you've heard or read about the new digital paradigm, or 5D, has now become a reality, which also means that the division between pre and post is dissolving, compositing will have to be redefined and so might previs.

Avatar also crossed the Uncanny Valley while advancing "pure cinema."

"This was a total revolution in that these environments of Pandora could speak to him in the moment and changed how he actually shot a scene, " boasts Rob Powers, who first served as animation TD before becoming virtual art department supervisor. "So final scenes in the movie were affected and changed because he was able to live and explore things as if he were really in the jungle. He could place characters where it was the best place to put them because it existed. It wasn't something where he would just shoot them bare and then, later on, Weta would create something from that. He was in the environment and those key creative decisions that previously would've been done by animators and visual effects houses at a later point --and who knows how many hands would've touched it at a later point -- were done by Jim Cameron himself. He didn't have to rely on other processes to complete the vision.

"The fascinating thing is, if that realtime environment had not been there for him to explore and shoot in, the film would have been immensely different because it was a process that is timeless -- it is the filmmaking process in its essence that he was able to tap into. This has not been the case ever before with these heavy visual effects films. And there's been no film like Avatar to this degree. It's definitely changed the art of digital filmmaking, and especially visual effects, which are increasingly part of our movies. It's never going to be the same because once people grasp what Avatar represents-- and the majority of the industry is still struggling with what this new paradigm shift is -- they'll understand how Jim's vision propelled the process and the hard work of everyone involved [executed it]."

Powers was part of the core group that started in 2005. He worked out concepts and problem-solving for the creatures: The first time the Leonopteryx flew, it was through his animation; the first time a Direhorse galloped, he animated it; and the first time the Na'vi walked through a Pandoran jungle, he created the CG jungle and animated the Na'vi. But Powers' biggest contribution was to the environments of Avatar and the virtual moviemaking workflow used for the production.

Cameron's ability to make key creative decisions in the environment alters the role of directors, animators and visual effects houses.

"I was a strong believer that MotionBuilder could handle shadows and lighting cues and atmospherics that Jim required. I did a test of the log scene and populated it with ecosystems and tried to create a sense of what the artwork conveyed. The two-tier contribution that I was directly responsible for was bringing this level of art direction to what MotionBuilder could display for him in realtime and what the virtual production could see, introducing that in realtime so Jim could see that world and shoot in that world on Pandora. Also, coming up with techniques so he could do those in the moment changes like foliage layouts; and coming up with techniques for beautiful daytime plants to become bioluminescent at night with the flip of a switch.

"Some of the most important techniques were in the organization of realtime kits and the biospheres that he developed. The CG could go on forever because these are entire planets. To maintain the integrity of the realtime system, I had to come up with ways of continuing the look of a world that went on forever but not bog down the realtime render engine. One of the techniques that I came up with was biospheres and domes that you could place a camera in a scene that went forever. And we came up with proprietary tools that you could render a 360 sphere view at a certain radius that we would set, depending on how far we needed to interact, and then beyond that point, the geometry was literally collapsed into a dome but still looked like actual geometry. We would also combine that at later points with matte painting work by the art department itself. The organization of the kits was completely configurable. Jim would scout a virtual set with production designer Rick Carter as though it was a real one, but it was an [interactive] process that gave him total control."

For Carter, who helped design the life forms of Pandora with Rob Stromberg (the co-production designer), Avatar represents the hybrid in form and content as a new meta-experience -- redefining everything from mise-en-scène to visual effects.

ILM worked closely with Cameron and Weta in finishing vehicles.

"In that first rendered shot that came back [of Neytiri aiming her bow at Jake], you could see the introduction of all the levels that had to play out in this movie," Carter suggests. "From the introduction of Neytiri as the love interest, her interest in him, her change of heart about him from being an intruder to something she needs to accept because she's getting a sign from somewhere, which we really don't understand at that point. There are lots of things not only going on in the shot but also on a deeper level.

"That's why I always saw the movie as The Wizard of Oz meets Apocalypse Now. It's like this EKG kind of brain wave going from Kansas into Oz and into this mystical, bioluminescent dream state, the phantasmagoric, which is what he called it in the script. When I started tracking that almost like an EKG through three acts, I could see that as the film progressed you spent less time in Kansas, the real world, and more time on Pandora, the dream state. The scientific and spiritual binary components of the film dealing with the life force that binds all living things was already in the script as an intangible, but he elevated it into a whole movie going experience."

Carter even gets existential about VFX: "What do they mean? What's the point? And it's so obvious in this movie because none of it can exist in front of our eyes, so you have to create something that doesn't exist. Once you get to an entirely new planet with a new ecosystem connected spiritually with flora and fauna and characters. And with Jim's eye for detail, because he's been to places -- the bottom of the ocean, among others -- it gets right to the core of what is a visual effect, which is not just a series of pixels or colors or forms that combine to form a fantasy. You're actually trying to create a reality that can only come across with this new form that is introduced to us by the computer because of the amount of detail that it can create.

The Wizard of Oz meets Apocalypse Now. For Rick Carter, it's an EKG brain wave into a mystical, bioluminescent dream state.

"And somebody as mind-expanding and pragmatic as Joe [Letteri] has to be able to see it and put it into a system. And I remember when he said early on that we're going to have to grow these forests. And it wasn't a matter of creating layers of things that looked like forests. But to actually grow an environment so that it could be evocative of life. It's the thing that I found that would enhance your movie."

Indeed, for Letteri and the entire Weta Digital team, Avatar exceeds Lord of the Rings and King Kong in both complexity and achievement. In fact, Weta worked on 1,800 out of approximately 2,200 shots. But the experience transcends mere shot count.

"We made low-resolution models of everything for the stage," Letteri says. "In cases where we were far enough along to have high-resolution models, we down res'd those and prepped those for the stage, textured them up and made sure there was a back and forth so that whatever we were sending them would work on the stage: made sure the rigging worked for the MoCap system and for our animation system; if we were updating a model, we'd send it to them; if they needed to make any changes on the stage for rigging purposes, they'd send them to us. It was a pretty good workflow.

"Jim, of course, would capture everything and then go back and do his cameras on it and put together what he called templates, which are MotionBuilder renders run back out in realtime, but played back through the cameras that he had captured, and that's what he used for editing."

Rigging six-legged creatures allowed for bonding with the Banshee.

Letteri echoes that this whole new virtual system was a real director-centric breakthrough. "It's no longer the director saying, 'Give me something that looks like this.' It's Jim picking up the camera and saying, 'Here's my shot; now you guys take it and make it look real.'

"For Avatar, we had seven main and 14 secondary speaking parts. And we had to turn that into 200 or so for the Na'vi clan, who don't speak but who are still very expressive. It required a whole new level of building characters and environments. The trick for the land creatures was working out a believable six-legged walk and run cycle and the flying creatures had four wings, so we had to figure out how to make them fly without the wings getting in the way."

According to Andrew Jones, the animation director, there was a lot of R&D "figuring out the best model type and the most resolution to pack in there and really get the rig to behave well and still not be too slow for animators. The facial rig was within Maya but with plug-ins. We had hoped for a full muscle-based system but wound up going with a blend-shape system, using muscles as the basis for the controls. At any one time, we could swap in a muscle system and see what it looked like, but the blend-shapes went much faster."
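The trade-off Jones describes is easy to see in miniature: a blend-shape face is just a weighted sum of sculpted offsets, cheap enough for animators to scrub interactively, with the weights named after the muscles that inspired them. Here is a minimal sketch; the data and control names are hypothetical, not the production rig.

```python
# A blend-shape face evaluated as a weighted sum of sculpted
# deltas, with weights driven by muscle-style controls.
# Toy data, illustrative only.

import numpy as np

def evaluate_face(neutral, shape_deltas, muscle_activations):
    """neutral: (V, 3) vertex positions; shape_deltas: dict of
    control name -> (V, 3) offset; muscle_activations: dict of
    control name -> weight in [0, 1]."""
    face = neutral.copy()
    for control, weight in muscle_activations.items():
        # One multiply-add per shape: this is why blend shapes
        # run much faster than a full muscle simulation.
        face += weight * shape_deltas[control]
    return face

V = 4  # a toy 4-vertex "face"
neutral = np.zeros((V, 3))
deltas = {
    "zygomaticus_major": np.full((V, 3), 0.1),    # smile-like pull
    "orbicularis_oris":  np.full((V, 3), -0.05),  # lip purse
}
posed = evaluate_face(neutral, deltas,
                      {"zygomaticus_major": 0.8,
                       "orbicularis_oris": 0.2})
print(posed)
```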

Beautiful daytime plants could become bioluminescent at night with the flip of a switch.

For the animation, Weta put a lot of effort into the facial solves and tracking "because one of the problems with the way that we were doing it was you've only got a single point of view using one camera," Letteri continues. "Ideally, from a technical point of view, it would've been good to go with two or more cameras. But from a performance point of view, that was going to add weight, it was going to slow down the process with changing out drives and it was going to be cumbersome for the actors."

Weta also created a new optical solver to track the eyes, and paid a lot of attention in animation to eye movement to compensate for what the solver couldn't achieve.

FACS (the Facial Action Coding System developed by Paul Ekman) was utilized once again by Weta. But one of the problems with FACS is that it doesn't cover dialogue, so that came as a secondary layer, where the motion editors and animators looked at the incoming data and had to figure out what the track was doing and how to solve it. It's really hard to track the shape of a lip because it changes constantly. So the system was built as a big solver that accepted training data: the facial editors would interpret what was going on and keep adding examples until the system converged on the right answer, and the animators would then go through it again, take another pass with the rig and make sure that everything behaved properly and worked in the right combination.
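The shape of that workflow, editors feeding hand-corrected examples into a solver until it converges, can be sketched with a simple least-squares mapping standing in for the production solver. Everything below is an illustrative assumption: the class, the feature vectors and the weight counts are invented for the example.

```python
# A data-driven facial solver in miniature: editors keep adding
# training pairs (tracked features -> known rig weights) and
# refitting until new frames solve cleanly. A linear least-squares
# map stands in for the real thing.

import numpy as np

class FacialSolver:
    def __init__(self):
        self.features, self.weights = [], []  # training pairs
        self.mapping = None

    def add_training_example(self, tracked_features, rig_weights):
        """An editor supplies a frame whose rig weights are known."""
        self.features.append(tracked_features)
        self.weights.append(rig_weights)

    def fit(self):
        """Solve a linear map from tracked features to rig weights."""
        X = np.asarray(self.features)  # (N, F) feature vectors
        Y = np.asarray(self.weights)   # (N, W) rig weights
        self.mapping, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def solve(self, tracked_features):
        """Estimate rig weights for a new frame of tracked data."""
        return np.asarray(tracked_features) @ self.mapping

solver = FacialSolver()
solver.add_training_example([0.0, 0.1, 0.9], [0.2, 0.0])
solver.add_training_example([0.5, 0.4, 0.1], [0.0, 0.7])
solver.add_training_example([0.9, 0.2, 0.3], [0.6, 0.1])
solver.fit()
print(solver.solve([0.4, 0.3, 0.2]))  # weights for an unseen frame
```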

ILM had to raise its game for stereo in matching focal length precisely and doing very accurate tracking.

In building this whole world, Weta had hoped to at least create the plants procedurally, but ended up hand-painting everything to make sure that it was of the highest quality and uniform in 3-D space.

Weta also adopted a global illumination system for lighting. "We came up with a system based primarily on image-based lights but then converted the whole system to spherical harmonics," Letteri explains. "What that meant was we could pre-compute all the lighting contributions in a scene, then put the characters and everything in with the lighting, and the TDs could move the lights around. It would tell you what influence all the objects had on each other. And you could solve that in a global sense."
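The core of the spherical-harmonics idea is that incoming light gets projected into a handful of coefficients once, after which diffuse lighting at any surface normal is just a dot product. Here is a minimal band-1 sketch of that projection and lookup; real precomputed lighting goes much further, and nothing here is Weta's actual system.

```python
# Band-1 spherical harmonics: project light into 4 coefficients,
# then evaluate approximate diffuse lighting per normal cheaply.

import numpy as np

def sh_basis(direction):
    """First four real SH basis functions (bands 0 and 1) for a
    unit direction vector."""
    x, y, z = direction
    return np.array([0.282095,        # Y_0^0  (constant term)
                     0.488603 * y,    # Y_1^-1
                     0.488603 * z,    # Y_1^0
                     0.488603 * x])   # Y_1^1

def project_light(samples):
    """Monte Carlo projection of an environment light into SH.
    samples: list of (direction, radiance) pairs on the sphere."""
    coeffs = np.zeros(4)
    for direction, radiance in samples:
        coeffs += radiance * sh_basis(direction)
    return coeffs * (4.0 * np.pi / len(samples))

def shade(coeffs, normal):
    """Approximate diffuse lighting at a surface with this normal:
    a single dot product against the precomputed coefficients."""
    return float(coeffs @ sh_basis(normal))

# A bright light from above plus a dimmer side light, sampled crudely.
sky = [((0.0, 0.0, 1.0), 1.0), ((0.0, 1.0, 0.0), 0.2)]
L = project_light(sky)
print(shade(L, (0.0, 0.0, 1.0)))  # upward-facing surface is brightest
```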

A new full-on compositing system was devised as well for 3-D. "We started outputting all the depth information for everything we were rendering, so you don't need to rely on mattes anymore," Letteri adds. "You know where everything is in space and can figure out the relationships. That's really important for things like the jungle, where you've got lots of plants and you could layer them in the right order based on depth because you're dealing in pixel to pixel to pixel. That will become standard for compositing from here on out because of the flexibility, even if you're doing a non-stereo movie. It's just easier to composite in 3-D than in 2-D."
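What "compositing in 3-D" means in practice is that every rendered element carries a per-pixel depth channel, and layering order is decided pixel by pixel rather than by hand-drawn mattes. Below is a minimal two-element z-merge sketch of that idea; production deep compositing stores many samples per pixel, and these names and values are illustrative only.

```python
# Depth-based compositing: at each pixel, the nearer sample
# (smaller z) goes over the farther one via the standard "over".

import numpy as np

def z_merge(rgba_a, z_a, rgba_b, z_b):
    """Composite two premultiplied RGBA images using per-pixel
    depth to decide layering order."""
    a_in_front = (z_a < z_b)[..., None]        # (H, W, 1) mask
    near = np.where(a_in_front, rgba_a, rgba_b)
    far = np.where(a_in_front, rgba_b, rgba_a)
    alpha = near[..., 3:4]
    return near + (1.0 - alpha) * far          # standard "over"

H, W = 2, 2
fern   = np.ones((H, W, 4)) * 0.6   # premultiplied RGBA element
tree   = np.ones((H, W, 4)) * 0.9
fern_z = np.full((H, W), 3.0)       # fern is nearer on every pixel
tree_z = np.full((H, W), 10.0)
print(z_merge(fern, fern_z, tree, tree_z))
```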

ILM created a tool that lets you custom tailor a high quality explosion that has controllable behavior and can tightly interact with CG objects.

Yet Weta still required VFX assistance and enjoyed strong collaboration with ILM, Framestore, Hydraulx, Pixel Liberation Front, Blur Studio, BUF Compagnie, Hybride, Prime Focus, Halon and The Third Floor, among others.

ILM, in fact, focused on vehicle-oriented shots, which numbered around 250, according to Letteri. These included the shuttle, the Samson, the Scorpion and the Dragon helicopters, and the AMP suit. Scenes included the opening flyover of the Pandoran jungle, the shuttle re-entering and landing at Hell's Gate, the first glimpse of the floating mountains, the vehicular assault on the Hometree and parts of the explosive climax.

"For the most part, all of the vehicles were designed and textured by Weta, so we built them up to parity of look," explains John Knoll. "The only exception was the Dragon, Quaritch's big helicopter. They built the model but hadn't textured it, so we did the texturing here.

"One thing that did complicate every aspect was stereo. Everything had to work properly in depth. Our matchmoves had to be very precise because just looking good in screen space wasn't sufficient. So you have to make sure that you match your focal length precisely and you're doing very accurate tracks on a lot of features. So you have to pass very high quality data in.

The idea of pre and post goes out the window in this new digital paradigm, but the code hasn't been cracked yet.

"One of the big breakthroughs for us was the explosions. In the past, we've done explosions by doing elements, but given Cameron's request to exactly match the templates and the need for the explosions to interact with other CG objects, the best solution was CG. How far can we push CG explosions to look good enough in close-up? TD Chris Horvath, who was instrumental on the fire on Harry Potter and the Half-Blood Prince, was responsible for the shading side of the solution."

The explosions are fluid simulations using the same engine ILM used for The Maelstrom and Poseidon. However, there were modifications to the engine "so that it behaves appropriately as an expanding gas volume and carries around temperature attributes. And the shader takes the whole volume density grid and makes it look like fire. Chris learned a lot from Half-Blood Prince. There's that whole black-body radiation curve that you want to use so that your fire has all the right colors and color gradients in it. I think having a tool that lets you custom tailor a high-quality explosion that has controllable behavior and can tightly interact with CG objects is going to be an important thing for us on future shows."
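That black-body radiation curve is real physics: Planck's law gives the brightness of a hot emitter at each wavelength, so sampling it at red, green and blue wavelengths maps the simulation's temperature field to plausible fire colors. Here is a minimal sketch of that mapping; it is an illustration of the principle, not ILM's shader, which would also handle exposure and spectral response.

```python
# Map temperature to color via Planck's law sampled at R/G/B
# wavelengths: cool flame edges come out deep red, hot cores
# trend toward white.

import math

H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck(wavelength, temperature):
    """Spectral radiance of a black body at this wavelength (m)."""
    a = 2.0 * H * C**2 / wavelength**5
    b = math.exp(H * C / (wavelength * K * temperature)) - 1.0
    return a / b

def blackbody_rgb(temperature):
    """Relative RGB color of a black body, normalized so the
    brightest channel is 1.0."""
    rgb = [planck(w, temperature) for w in (700e-9, 546e-9, 436e-9)]
    peak = max(rgb)
    return tuple(channel / peak for channel in rgb)

for t in (1000.0, 1800.0, 3000.0, 6500.0):
    r, g, b = blackbody_rgb(t)
    print(f"{t:6.0f} K -> R={r:.2f} G={g:.2f} B={b:.2f}")
```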

Letteri agrees there will be much to be learned from Avatar: "I think what everyone discovered as you went along is that if you're going to put a virtual stage together like a live-action shoot, then this becomes the front end to a visual effects piece. Because you not only start thinking in terms of takes and selects, but also in terms of shot design. You have to be able to switch from one to the other. And it requires a level of infrastructure for the whole thing that I think is going to benefit everyone if we can come up with some system across the board to make that easier."

Bill Desowitz is senior editor of AWN & VFXWorld.

