
Cameron Geeks Out on 'Avatar'

The King of the World tells us how he literally moved mountains with his revolutionary Avatar.


James Cameron transports us all to Pandora, thanks to his revolutionary virtual production. All images courtesy of Twentieth Century Fox Film Corp.

The long-awaited Avatar finally opens today from Twentieth Century Fox and, as you've read by now, actually lives up to all the hype about transporting us in a much more visceral and immersive way. I spoke to James Cameron by phone last week from the London junket, and he really enjoyed geeking out for us.

Bill Desowitz: What I found significant about Avatar is that you've broken down all barriers between you and the viewer in transporting us to Pandora. That was the whole idea, right?

Cameron wields his virtual camera, which enabled him to view and interact with CG characters and the CG Pandora while directing actors on a stage.

James Cameron: Yeah, well, the ideal scenario is that you've fallen in love with Pandora and you want to go back there, which hopefully translates to ticket sales for repeated viewing.

BD: And the stereoscopic aspect?

JC: The interesting thing is that so little of our focus was on the stereo. I mean, honestly, only about 2% of our focus was on the 3-D because we very early on figured out that we weren't going to have time at the end to tweak the stereo, when 1,000 shots would be coming in the door the last couple of months, which is pretty much the case. So we put the stereo part of the workflow early on when we generated our template, which went to Weta, which was essentially a CG camera with unfinished assets within it, and then they would put in the high resolution assets and the fully rigged models and then render it. But the camera didn't change. Once we had those cameras, we took that opportunity then to work on the stereo space, and we did all our interocular dynamics and our convergence dynamics.
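The interocular and convergence dynamics Cameron mentions can be illustrated with a back-of-the-envelope sketch. This is not the production pipeline: the function, parameter values, and the shifted-sensor parallax approximation below are all illustrative assumptions.

```python
# Toy model of stereo tuning on a locked CG camera: given an interaxial
# (interocular) separation and a convergence distance, compute the on-screen
# parallax of a point at a given depth. Points at the convergence distance
# sit on the screen plane (zero parallax); farther points go behind it.

def screen_parallax(depth, interaxial, convergence, focal_length,
                    sensor_width, screen_width):
    """Horizontal parallax on screen, in screen units.

    Positive = behind the screen plane, negative = in front of it.
    Uses the standard shifted-sensor approximation:
        p_sensor = f * t * (1/c - 1/z)
    """
    p_sensor = focal_length * interaxial * (1.0 / convergence - 1.0 / depth)
    # Scale sensor-space parallax up to the projection screen.
    return p_sensor * (screen_width / sensor_width)

# Example (illustrative numbers): 35 mm lens, 65 mm interaxial,
# converging at 4 m, projected on a 10 m wide screen.
p_far = screen_parallax(depth=10.0, interaxial=0.065, convergence=4.0,
                        focal_length=0.035, sensor_width=0.036,
                        screen_width=10.0)
```

Animating `interaxial` and `convergence` per shot, once the cameras are locked, is the kind of "dynamics" pass the quote describes.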

BD: Talk about your design for 3-D.

JC: We tried to create a visual stylistic unity between the live action and the CG. I physically operated the camera myself in the live action and I physically operated the camera myself in the virtual production so the same kind of aesthetic -- the same moment by moment decision making -- would be taking place in both environments.

BD: What was it like for you?

JC: The virtual camera stuff was pretty much new to me and pretty much new to this film. Rob Legato came up with the idea to structure it as what he called a "director-centric" workflow. And so we went through about four or five iterations of the virtual camera within a few months, as I kept asking them to make it lighter and change the configuration, and we really evolved the process from scratch using that camera. That wasn't a change for me because I had never done it before: that was a case of inventing the wheel. In the live-action production, the stereoscopic camera is something I had been working with on the documentary films, but I hadn't done it in a truly theatrical film, cinematic style. So adapting to a dolly and techno crane didn't really take long. I'd even been working with those tools on the documentaries. But we had to be more rigorous about our process and really check the quality of the stereo space as we went along on a shot by shot basis, which we did by having live viewing with a 2K projector within 50 feet of where we were working on the set.

BD: You get a great sense of POV with Jake. It was liberating for him being on Pandora. Was it liberating for you shooting virtually?

JC: Yeah, I enjoyed working in 3-D but I tried not to let it dominate. What I found with the virtual production was that it was very liberating in the sense that if I wanted to change the environment around, I could pretty much do it right then, even subsequent to having done the capture with the actors. I could change the background, I could move the sun, I could move the mountains. It sounds all very giddy and God-like, but what you find is that for every moment where the CG allows you to do these big gestures that you could never do in live-action filmmaking, there's something that would be so ridiculously easy in live action that you wouldn't think about it -- it would take five seconds -- that in CG took a bunch of time. So it sort of balanced out. It was ultimately not better, just different.

Cameron cracked the code of performance capture and attained an emotionally stirring performance from Zoe Saldana as Neytiri.

BD: A perfect blend of style and content.

JC: Yes, that feeling of transport -- there was a nice consonance between the content, the style and the form: the content being a story that took place in an alien rainforest ecosystem; the style being a very subjective camera that's moving with characters, as I normally try to do; and the form being stereoscopic, widescreen cinema where you feel like you can reach out and touch the planet.

BD: A much more visceral experience.

JC: And that put huge pressure on the Weta guys to create extremely high resolution assets with a lot of detail. In fact, they found pretty quickly that to get to the photoreality that we needed, they couldn't really model every plant and every leaf on a tree and every vine and so on. It would've just been hideously prohibitive in terms of man hours, so they came up with procedural processes for essentially growing the forest.
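A toy sketch of what procedurally "growing" a forest means in miniature: rather than hand-placing every plant, a few archetypes are instanced over the terrain by rule. The archetype names, density rule, and randomization below are purely illustrative assumptions, not Weta's actual system.

```python
# Scatter plant instances over a rectangular plot. Determinism via a seed
# matters in production-style tools: the same seed regrows the same forest.
import random

def grow_forest(width, height, density, archetypes, seed=0):
    """Return a list of plant instances covering a width x height plot."""
    rng = random.Random(seed)
    count = int(width * height * density)   # plants per unit area
    instances = []
    for _ in range(count):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        instances.append({
            "plant": rng.choice(archetypes),
            "pos": (x, y),
            "scale": rng.uniform(0.7, 1.3),   # natural size variation
            "rot": rng.uniform(0, 360),       # random heading breaks repetition
        })
    return instances

# 100 x 100 plot at 0.05 plants per unit area -> 500 instances
forest = grow_forest(100, 100, density=0.05,
                     archetypes=["fern", "vine", "tree"])
```

A real system layers many such rules (slope, moisture, light, clustering), but the core trade is the same: man-hours of modeling swapped for procedural placement of a few detailed assets.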

BD: Talk about the performance capture or E-Motion capture, as it's now called.

JC: Yeah, I didn't coin that but I prefer to call it performance capture because that's what it is: it's capturing the actor's performance -- that critical moment of truth that the actor creates and capturing it fully and completely.

BD: How challenging was it?

JC: We found that lighting the blue skin was very difficult, especially when you start to add sweat and oil sheens to the surface of the skin, which all human skin has. There was a point at which the blue skin could very quickly turn plasticky if you weren't careful. And we studied the normal lighting cues that you'd have with human skin. We actually went to a rainforest in Hawaii and studied how the light reflected off the plants and how the face takes light from the sky (whiter parts of the sky; bluer parts of the sky) and the interaction of colors off the face, except we did it all with human faces. And then we tried to apply those lessons to blue faces and they didn't work because the color relationship between the key and fill sides of the face in daylight is usually a shift from white light to blue -- and you just couldn't see it. So we wound up experimenting with other ideas. And if you study the film, you'll see that the bounce light is actually a green light, so we took this conceit that sunlight was bouncing around in the forest and it was reflecting back green, or transmitting green in the leaves or whatever. And there was this green ambient light at all times. And then there was a mixture of blue skylight and white sunlight in play with that. So there are actually three colors of light used to light the Na'vi faces but only two of those colors -- the green and the white -- are really creating enough of a color difference to work. And that was something that Joe Letteri [the senior visual effects supervisor from Weta] and I worked out very early on; actually, in the very first fully rendered scene that they did. But it wasn't intuitively obvious that that's how it would work.

The blue Na'vi proved problematic, so they used green bounce light in conjunction with white to properly convey the faces.
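Cameron's three-light observation can be sketched numerically: if reflected color is roughly the per-channel product of light color and skin albedo, blue fill light on already-blue skin barely changes the result, while green bounce produces a visible shift. The albedo and light values below are illustrative guesses, not production data.

```python
# Why green bounce "reads" on blue skin while blue fill does not:
# reflected RGB ~= light RGB * albedo RGB, channel by channel.

def reflected(light_rgb, albedo_rgb):
    """Per-channel product of light color and surface albedo."""
    return tuple(l * a for l, a in zip(light_rgb, albedo_rgb))

navi_skin = (0.15, 0.35, 0.85)   # bluish albedo (illustrative values)

white_sun    = reflected((1.0, 1.0, 1.0), navi_skin)  # key light
blue_sky     = reflected((0.3, 0.5, 1.0), navi_skin)  # fill: hue barely shifts
green_bounce = reflected((0.3, 1.0, 0.3), navi_skin)  # bounce: clear green shift

# Compare the green-to-blue balance of each reflected color: the blue-sky
# fill stays as blue-dominant as the white key, so the key/fill difference
# is nearly invisible; the green bounce shifts the balance dramatically.
def green_blue_ratio(rgb):
    return rgb[1] / rgb[2]
```

This matches the quote: of the three light colors, only the green and the white create enough color difference against blue skin to work.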

BD: How did you achieve that "critical moment of truth"?

JC: I had this philosophy of life: I actually borrowed it from Arnold Schwarzenegger because he used to say you program yourself for success, not for failure. And at first I didn't understand what he meant. But the more I thought about it, I realized what he was saying was that you make every decision as if you were going to be successful because, in his world view, you're never going to fail. Take that idea and apply it to performance capture. So I go in and say, all right, I'm going to go down this road and spend all of this money and we're going to do this. The success would be defined as: 100% of what the actor does arrives, finally, in this CG character's face at the end of the day. Let's take that as a given. Then here are the ground rules: you don't overact; you do the most subtle thing that you feel is correct for the moment. You don't try to modify the acting process for this imagined result down the line.

That's one rule. The other rule is: as a director, I don't leave the set and walk away from the actors until we have the exact nuanced performance we think we need. We're not going to try and fix it later; we're not going to try and embellish it with animation. We're going to assume that we are neither going to lose any information nor add anything. And if you go by those assumptions, which were my assumptions going into this, even though it's fairly cheeky because this had never been done, it exactly defined how we worked. Which meant that it was highly focused on getting the right performance, which is why Sam [Worthington] and Zoe [Saldana] are so damn good in the movie.

BD: Technology caught up with need.

JC: It was more like flogged forward with bull whips.

BD: Let's discuss the virtual environment breakthrough.

JC: I think if you sum it up in general terms, what we found as a pattern is that the more [Weta] made everything real, or behave as if it looked real, the realer it looked. Meaning, if they just had a single sun source, and bashed it into the forest and let it reflect around, no matter how computationally intensive and nightmarish that might be in terms of the code necessary to do it and the amount of time on the render wall that was needed, the more real it looked -- and the more they were observant of nature and seeing how much light would be bounced off the surface of a leaf, how much light would be transmitted through the leaf and come through as a green color on the backside. The same thing for the Na'vi and Avatar characters: the subsurface scattering, the transmission through the ears, the transmission through the nasal cartilage. We went in with a lot of ideas but it took months and months of testing to see what would or wouldn't work.

CG environments are now indistinguishable from live action, thanks to Weta's authentic use of global illumination.

BD: What was the moment when you realized that it worked?

JC: Do you remember the scene when Neytiri is observing Jake walking through the forest and she aims her bow at him and she's about to shoot him?

BD: Yes.

JC: There's a shot where the Woodsprite lands on the thing and she relaxes her bow and there's a tight close-up where she's thinking. She gasps and she's obviously affected by it. You don't even see the Woodsprite: it's just a very tight shot of her face. And that was one of the first shots that was completed that I signed off on as Neytiri now fully alive. There was a moment when I was sitting in the editing room, by myself, just staring at the screen, and realizing that, though it had taken two-and-a-half years or more to get there, she was real -- every aspect, every pore in her skin, the reflection in her eyes, the structure in her iris, the expression on her face, the lighting, the hair, everything was real. Now, of course, that was a very simple shot, and there was a lot more work to be done, but if you couldn't get to that threshold, it couldn't be done.

BD: And the most terrifying moment?

JC: Probably the first time I saw Neytiri's face come back from Weta, which would've been months earlier. She didn't look like Zoe, she didn't act properly… We went through the same cycle with Sam and then to a lesser extent with Sigourney. Each character had to go through a process of rigging the facial musculature to get the muscles to fire in the right way, to get the lips to curl and evert and deform in the right way, and every character was different. And it was an iterative process of me sitting with them -- and they were very good: they did 80% of it. But when they would present something to me for discussion, it still was not right, and we had to sit in a dark room for many, many hours and discuss it and then go back to the drawing board. I called it cracking the code, and they cracked the code on Neytiri first. And the beauty of it is once they cracked her, every Neytiri close-up after that -- and there were hundreds of them -- looked spectacular. And that absorbed very little of my time. Usually, I was only dealing with lighting at that point. And then the same thing with Jake.

BD: What plans do you have for an extended director's cut for the Blu-ray/DVD?

JC: It's in discussion. I haven't figured out exactly how I'm gonna approach it yet. We might do something where we create an interactive DVD that's got a pathway so you could watch the movie the way it was released or watch it with other scenes in it or maybe do your own version with more of this or less of that.

The fulfillment of Pandora was more than a pipe dream: it's a game changer for the way VFX movies will be made in the future.

BD: And will it be in 3-D?

JC: Eventually, because the new Blu-ray players that are 3-D enabled, connected to a 3-D TV screen, actually produce a pretty spectacular image. They're not widely available yet, but I can imagine that we'll do a 3-D release, if not immediately, certainly within two years.

BD: What do you want to direct next?

JC: I haven't decided. I've got a number of possibilities -- all cool stuff that I've developed -- and I just really want the dust to settle from this one to see what my appetite is.

BD: What's on your wish list for technical improvements?

JC: Lots. Here's a big one -- and not enough people are talking about this: 3-D makes us see certain defects in the basic system of cinema better -- the 24 frames-per-second display rate, which has already been abandoned by sports broadcasters as insufficient. They've already got 60 frames. So I would like to shoot a movie at 48 or 60 frames-per-second, and have it displayed digitally at that rate. There's no reason why the digital projectors can't do it: the little MEMS device that is the DLP chip can oscillate at, I think, up to 160 Hz. So, that right there allows us to have a new horizon in cinema, whether it's 2-D or 3-D. Now I think it gets complicated with respect to visual effects because you don't want to be rendering 60 frames when you used to be rendering 24. So what do you do? Do you render at 30 frames and do a 2-D interpolation with optical flow to generate the intermediate frames? That needs to be looked at. But that's the kind of thing I think about as the next horizon in terms of presentation and really blowing people away in the theater.
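The render-then-interpolate idea can be sketched in a few lines. A real system would warp pixels along optical-flow motion vectors; the linear cross-fade below is only a stand-in for that warp, and all names here are hypothetical.

```python
# Toy sketch of generating intermediate frames: render at a lower rate,
# then synthesize one in-between frame per rendered pair to double the
# display rate (e.g. 30 fps rendered -> 60 fps displayed).
import numpy as np

def interpolate_frames(frame_a, frame_b, t=0.5):
    """Naive in-between frame: linear cross-fade.

    A production interpolator would instead warp frame_a and frame_b
    along optical-flow vectors before blending.
    """
    return (1.0 - t) * frame_a + t * frame_b

def double_frame_rate(frames):
    """Insert one interpolated frame between each consecutive pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(interpolate_frames(a, b))
    out.append(frames[-1])
    return out

rendered = [np.full((4, 4), float(i)) for i in range(3)]  # 3 rendered frames
smooth = double_frame_rate(rendered)                      # 5 displayed frames
```

The trade-off Cameron raises is exactly this: interpolation halves the render cost per displayed frame, at the price of possible artifacts where the flow estimate fails (occlusions, fast motion).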

Cameron now has his sights set on improving the virtual production process to make it faster, more efficient and more streamlined.

BD: But you've definitely broken down any distinction between pre and post.

JC: Absolutely. And, by the way, we didn't 100% crack the code on this movie for how to proceed. For me, at this point, filmmaking is as much about process as result, so we're actually meeting with all our department heads from virtual production and post-production. Joe's coming in and we're going to do a kind of big postgame -- a retreat for three or four days where we actually try to codify and document everything that we did and everything we need to do better next time -- because another one of my process goals next time is to make the whole thing faster, more efficient, more streamlined. I don't want to take four-and-a-half years next time I make one of these things. And I don't think the fans want to wait that long either, now that they're going to get a taste of Avatar. If it's successful, they're going to want something a little quicker.

BD: So the idea of a sequel intrigues you?

JC: Oh, yeah, but I think that whatever I do, whether it's a sequel or going on to something else, the same rules are going to apply. And, again, it's streamlining the pipeline, making it all clearer, more direct and more efficient because we stumbled around a lot. And, of course, we knew that was going to happen because so much of it was R&D, so much of it was experimental.

BD: But you must take pride in watching some of your advancements being utilized on Tintin.

JC: Yeah, it is cool to see the whole head-rig system -- the image-based facial performance stuff -- being utilized, because when I initially presented Avatar to Digital Domain in '95, I did a napkin sketch of a helmet with a camera on it, shooting the face from a few inches away, and that's exactly what we wound up doing. They all thought I was out of my mind at that point.

BD: Did you enjoy it as much as everyone else the first time you watched it in a theater?

JC: Absolutely. That's the beauty of a film like this: because it goes through the whole process of Weta bringing the shots up from our template level, which is fairly crude, to the finished level, I actually get to see the film almost as an outsider, even though I've been intimately involved with every shot. But the level of execution of these shots, the photoreality is so great, the detail is so great, that I can sit and watch it in 3-D and see stuff I'd never seen before.

Bill Desowitz is senior editor of AWN & VFXWorld.
