The DNA for Rise of the Planet of the Apes was set with Avatar, but Weta Digital takes performance capture animation to the next evolutionary level with the new origin story about Andy Serkis' CG Caesar, the uber chimpanzee.
To begin with, Weta revamped the performance capture process by placing the performance capture actors in the live-action set or out on location with the other actors, and by replacing reflective optical markers with an active infrared LED system that works in a variety of environments and lighting conditions.
Weta additionally upgraded its systems for hair, muscle, tissue and eyes. "There's new technology for all of those pieces, but I think making the performance look as realistic as possible is still the main thing," suggests Oscar-winning Joe Letteri, Weta's senior visual effects supervisor.
The most prominent improvement is a new facial muscle system that adds the dynamics, ballistics, and secondary motion needed to preserve the volume of the face. "There are a lot of artistic decisions that by the end of the movie come through more easily the first time around," adds Letteri.
It's still a work in progress: how facial muscles behave is not easily understood, because they don't act like the other muscles in the body and are not so bound by the skeleton. But enough of the system works for the animators to drive the performance, whether it comes from the capture, the animation, or a combination of the two.
Despite all the primate research and realistic attention to detail, there were human touches built into the design of Caesar to instill intelligence, familiarity and empathy. "We treated them as individuals, playing on the idea of the caged gorilla, Buck, as the muscle, Caesar and the other chimps do the scheming and Maurice, the orangutan, was given the intelligence," Letteri says. "He knew sign language. It was to set them in the vein of what we see later on but not necessarily the exact same roles."
Meanwhile, Weta ditched its old procedural hair system for one that allows direct manipulation of the hairs, so modelers and creature TDs can comb hairs individually rather than interpolating them between procedural controls. This time they opted for that level of detail, sacrificing the speed of the procedural approach.
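The limitation of the procedural approach Weta moved away from can be illustrated with a toy sketch: in-between strands are generated by blending neighboring guide hairs, so an artist can only shape the guides, never the generated strands themselves. This is a simplified illustration of guide-hair interpolation in general, not Weta's actual pipeline; the function and array shapes are hypothetical.

```python
import numpy as np

def interpolate_hairs(guides, n_interp):
    """Generate in-between hairs by linearly blending neighboring guides.

    guides: array of shape (n_guides, n_points, 3), one polyline per guide
    hair. Returns the interpolated strands. Note the artist controls only
    the guides; the generated strands cannot be combed individually, which
    is the limitation the article describes.
    """
    strands = []
    for a, b in zip(guides[:-1], guides[1:]):
        # sample blend factors strictly between the two guides
        for t in np.linspace(0.0, 1.0, n_interp + 2)[1:-1]:
            strands.append((1 - t) * a + t * b)
    return np.array(strands)

# two straight guide hairs, the second offset along x
guides = np.zeros((2, 4, 3))
guides[1, :, 0] = 1.0
hairs = interpolate_hairs(guides, n_interp=3)
print(hairs.shape)  # (3, 4, 3): three blended strands of four points each
```

A groom system with direct manipulation stores (and lets the artist edit) every strand explicitly instead, which is why it trades away the speed of regenerating hair procedurally.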
One of the misunderstandings that people have with performance capture, according to Letteri, is that they tend to view it as a mechanical process. "I think that comes down to concentrating on the body, where it's very joint-driven," he suggests. "They look at those markers and think you're just moving a skeleton around. But on the face you don't have that. The way these things interact with each other and float against each other, there's no mechanical reference. When you're capturing the shapes of the face, nothing on the face is ever fixed; there's nothing locked down to refer to it, so the first thing you have to do is figure out your baseline. And so there's a big interpretive effort that goes into that. But then it comes back full-circle: You go through this whole process of tracking and analyzing the data, interpreting it through these FACS poses and then putting it back on the face through all the combination muscle shapes. And then you just look at it side by side with the performance from the actor and say, 'Does that look like the right performance or not?' If not, why not?"
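The last step Letteri describes, putting solved FACS pose weights back on the face through combination shapes, can be sketched in its simplest form as a linear blendshape combination: each pose is stored as a delta from the neutral face, and the solved weights mix those deltas. This is a minimal sketch under that linear assumption; the shape names and meshes here are hypothetical, and Weta's actual muscle-based solver is far more sophisticated.

```python
import numpy as np

def apply_blendshapes(neutral, shapes, weights):
    """Combine FACS-style poses as weighted deltas from the neutral face.

    neutral: (n_verts, 3) base mesh.
    shapes:  (n_shapes, n_verts, 3) sculpted target poses.
    weights: (n_shapes,) activations solved from the captured performance.
    """
    deltas = shapes - neutral                       # offset of each pose
    return neutral + np.tensordot(weights, deltas, axes=1)

# tiny 3-vertex "face" with two hypothetical poses
neutral = np.zeros((3, 3))
smile = neutral.copy(); smile[0] = [0.0, 1.0, 0.0]  # hypothetical smile pose
brow = neutral.copy();  brow[2] = [0.0, 0.0, 1.0]   # hypothetical brow raise
shapes = np.stack([smile, brow])

face = apply_blendshapes(neutral, shapes, np.array([0.5, 1.0]))
print(face[0])  # vertex moved by half the smile delta: [0.  0.5 0. ]
```

The side-by-side check Letteri mentions happens after this step: the combined result is compared against the actor's footage, and the weights or shapes are revised if the performance doesn't read.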
Bill Desowitz is former senior editor of AWN and editor of VFXWorld. He has a new blog, Immersed in Movies (www.billdesowitz.com), and is currently writing a book about the evolution of James Bond from Connery to Craig, scheduled for publication next year, which is the 50th anniversary of the franchise.