Bill Desowitz speaks with Oscar-winning visual effects supervisor Joe Letteri about staying on the cutting edge of digital acting and 3D environments at Weta Digital.
After winning an Oscar for vfx on King Kong, Joe Letteri has remained at Weta Digital to supervise work on The Water Horse (Sony/Revolution, Dec. 7, 2007), including the CG sea creature, and Avatar, James Cameron's long-awaited first feature since Titanic. Letteri also discusses early work on the CG Silver Surfer from Fantastic Four: Rise of the Silver Surfer (Fox, June 16, 2007). Under the vfx supervision of Kevin Rafferty, Weta has reportedly enhanced its CG animation process that employs performance capture techniques to add further dimensionality to the liquid-metal hero performed by Doug Jones.
Bill Desowitz: Please fill us in on the state of the industry with regard to digital actors and 3D environments.
Joe Letteri: I think as far as digital actors go, in terms of characters rather than digital doubles, we've come pretty far. Especially with motion capture and facial motion capture, you can really work with an actor to develop the character. It's an extension of what we did with Andy Serkis with Gollum and Kong. There is still a huge call for an animation team to work with the actors and help develop the digital characters the rest of the way, because, typically, there are characters that are non-human and invariably there are things that a human actor cannot do. It's too dangerous or physically not possible because of the configuration of the character. So you need this integration with the actors and the animators to pull everything together. I don't see that going away anytime soon. I'm not sure you want it to go away because it's really a great combination to have.
BD: In terms of techniques, how do you see it evolving between performance capture and keyframe?
JL: What we've always done here is to try to make the path as two-way as possible so we can start off with performance capture but then layer on the same set of tools that the animators are used to working with. So it's always the call of the animators to make requested changes. Sometimes it sails straight through based on what the motion editors are doing. Other times, if you get a request to change the performance, you just have to decide at some point the data is too heavy and it's going to be easier to reanimate it. The performance is a guide. But again, by having the data there, you've got a good starting point for the character and what you need to do next.
BD: What do you think of the various techniques that are available, including what Sony is doing with Imagemotion, what ILM did with Imocap on Dead Man's Chest, Face Robot from Softimage and the new Contour from Mova that was introduced at SIGGRAPH?
JL: I think those are all great ways to go. They all bring a little bit of something to the toolkit that you have and each has its own strengths and weaknesses, so you can tune which technique you use to the situation. In the course of a large feature, you're probably going to use several of those techniques. It's great to see all of these things being developed in different ways to look at an important part of the problem for a particular task and try to solve that.
BD: I had a chance to ask Andy Serkis about performance capture recently and he thinks there will be a time in the future when a director can look through a viewfinder of a handheld camera and see in realtime physical and facial capture. Is this something you're keen on?
JL: Yes, I think that would be a great thing to have to be able to work with the actor to get as much of his performance as possible. For example, on Kong and even with Gollum, we had a lot of the body stuff working in realtime. But looking at the facial, there was a problem of translating and learning the character, particularly with Kong. Yes, I would like to see that happen. One thing that it means, though, is that all that character development has to be done upfront if you're going to do it all on stage with a director. We spent weeks and weeks with Andy on the motion capture stage for Kong and we got to digest all of that and turn that into his character. So it means shifting the way you're doing things. It's another one of those paradigms where what we used to call post-production pushes more and more into preproduction.
BD: How close are we to conquering "The Uncanny Valley"?
JL: You mean the real/not real, the human/not human? I think it depends on the application. Obviously we have seen digital doubles done really well using a lot of image-based techniques, for example, where you couldn't tell the difference between the original performance of the actor and the digital performance. So from that point of view, it's almost already been hit. But to take a human performance and have it actually be a human character? I don't know. I've never been in that situation before. We usually just hire a real actor. We're always looking at creating the other characters.
BD: And that includes Avatar? (The sci-fi film is about Jake, a paraplegic war veteran who is brought to another planet inhabited by a humanoid race at war with humans.)
JL: Well, we're early days, so there could certainly be that coming up. To me it's not the problem that's most interesting [about the project]. So you're only going to use that in those situations where it's too dangerous for an actor. So if it does come up, I'm sure we'll dig into it and tackle it. But the idea is to get a full-on performance that you can carry a movie with.
BD: What can you tell us about Avatar at this point?
JL: I think it's going to be a combination of a lot of the things that we've been doing that you and I have been discussing. There are performance capture techniques out there that really allow us to work in realtime with actors and develop the characters. And then just allowing the time to work with it after the fact, because the characters -- no matter what you do on a performance capture stage -- really come alive when you get them into the scenes and get them lit and rendered and see what they really look like and how they behave. And that always influences your perception of them. You start making adjustments to bits of the performance that you might need to do just to bring them alive. In a CG performance sometimes you need to add a little life, so you might add a little movement to the eyes. It's stuff you start to define once you're in there working with it that alters it in a subtle way. The final result is what you're going to see on screen.
BD: What kind of toolset fine-tuning are you planning?
JL: We'll be building on what we have here because we have a pretty robust toolset for character animation. Again, we'll look at that a lot more once we see what the performances are and see what we need to do to bring it all alive.
BD: And does the added immersion with 3-D excite you?
JL: Yeah, the 3-D stuff looks really cool. We've been doing some tests for ourselves just internally. When you do everything in a 3-D world digitally, then you can play with different things and figure out what works, and start to answer the questions about how to watch a 3-D movie without getting a headache. We're learning about how to do it properly and it's been really good to chew through those problems.
BD: Moving on to the 3D environments, what are you looking to improve?
JL: Probably in general to use the same technique we used to create New York City in Kong [Maya-based software from Chris White dubbed CityBot rebuilt the city, floor-by-floor, section-by-section, block-by-block, adding intricate and period-accurate detail to the low-res dataset] and extend it to build any type of environment, not just cityscapes. As you get more and more down the line, different locations that are called for are hard to reproduce. And it's not just fantasy. It's getting more and more difficult to do big scenes like New York in a real location. The amount of cleanup and replacement and set extension that you have to do dovetails into, "Gee, we just replaced that digitally -- we could've done the whole thing that way." So we're looking at that for various types of environments to give us the freedom to answer that question.
BD: With Avatar, there was work done by Rob Legato in creating a virtual production studio in L.A. How does that fit in with your current plans?
JL: That's still the basis of everything he's doing and that dovetails nicely with the system that we've developed down here. We wound up using a lot of the same technologies and things. There's not that much of a difference. Probably the real difference is one of approach. Rob has created an immersive system, which is necessary for Avatar. Whereas our system was designed mostly to work with things that had been shot on set and to add motion capture into those. We're finding it very compatible, which allows us to keep a consistent workflow back and forth. We both use the Giant system for motion capture. The rest involves developing an infrastructure around that to support the film.
BD: What can you tell us about The Water Horse and the mysterious Scottish sea creature?
JL: We're right in the middle of that, going through our first pass on the animation blocking and getting everything up to speed. Where it plays into what we've been discussing is in the area of character design and character performance. The main character is really fun to work with and it's been great to continue what we did [with Kong].
BD: And how is Fantastic Four 2 going?
JL: We're still early days on that, but the obvious question is you want something with a cool silver look to it. We've been coming up with some things on that. So that's more like taking what they're doing on set and using that to drive the Silver Surfer.
Bill Desowitz is editor of VFXWorld.