
The Future Of Motion Capture Animation: Building The Perfect Digital Human

Digital humans are right around the corner according to Giant Studios, and Webbie Tookay couldn't be more delighted. Laura Schiff explains. Contains QuickTime movie clips!

If you have the QuickTime plug-in, you can view Webbie talk the talk. © Giant Studios.

"I'm an avid surfer and I love to dance," the Elite model told WABC-TV Eyewitness News last December, adding that she also enjoys sports and working out at the gym. Who is this athletic beauty? None other than Webbie Tookay, the world's first virtual model, making her digitally animated debut via satellite feed. Webbie is the brainchild of Illusion2K, the "virtual management" division of Elite Models Inc., and Giant Studios, a motion-capture and performance animation studio in Atlanta. Webbie's television "interview" served as a marketing tool to demonstrate the capabilities of Biomechanics Inc.'s cutting-edge 3D motion-capture system, called Motion Reality. Though the low-resolution version of Webbie that we saw was crude and rudimentary, Giant insists that photo-realistic "digital humans" are right around the corner. They should know: they're the exclusive worldwide reps for the Motion Reality system.

The Next Step

Motion Reality is the leading motion-capture animation software to go beyond the limitations of visual markers, evaluating and utilizing as much information as possible about the motion source it's capturing, be it person, animal or animatronic rig. "The system is optically based, and we do use reflective markers [to track motion]," says Matt Madden, Giant Studios' Director of R&D. "That's certainly a big part of the capture process and important to calculating motion, but there's only so much information you can get from a set of markers. What do you do when those markers aren't visible? For example, when you make a fist, you can't see the bottom part of your fingers, but you certainly have a real good idea of where those fingers are. That kind of intuitive information is what we put into the software. It becomes smarter, essentially."

To accomplish this, the Motion Reality software uses a detailed algorithmic model of the kinetic properties of the motion source. Explains Madden, "We get a clear understanding of what the person or source is comprised of, right down to a person's exact bone length, for example. We have a specific scaling process so that the software can figure out the specific dimensions of the subject that it's capturing. It has to know all those lengths and ranges of motion and even other things like connective tissue and ways to stabilize this skeleton. It evaluates all of [a source's] movement properties, as well as the different forces involved in generating those movements... and then our software tracks and creates skeletal transformations. This gives us more information for enhancing or modifying the motion" through 3D animation.
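Madden's fist example boils down to a simple constraint: once the software knows a performer's exact bone lengths, an occluded joint can only sit at that fixed distance from its visible parent. Below is a minimal Python sketch of that single idea, with made-up numbers and function names of our own; it illustrates the constraint and is not Giant's Motion Reality code.

```python
import numpy as np

def calibrate_bone_length(parent_frames: np.ndarray, child_frames: np.ndarray) -> float:
    """Estimate a constant bone length from frames where both markers were visible."""
    return float(np.mean(np.linalg.norm(child_frames - parent_frames, axis=1)))

def constrain_to_bone(parent_pos: np.ndarray, predicted_child: np.ndarray,
                      bone_length: float) -> np.ndarray:
    """Snap a predicted (possibly occluded) child joint back onto the sphere of
    radius bone_length around its parent, enforcing the rigid-skeleton constraint."""
    direction = predicted_child - parent_pos
    norm = np.linalg.norm(direction)
    if norm < 1e-9:                      # degenerate prediction: fall back to an arbitrary axis
        direction, norm = np.array([0.0, 0.0, 1.0]), 1.0
    return parent_pos + direction / norm * bone_length

# Calibrate from two frames where elbow and wrist markers were both visible,
# then correct a drifted wrist estimate from a frame where the wrist was hidden.
elbow = np.array([[0.0, 0.0, 1.0], [0.0, 0.1, 1.0]])
wrist = np.array([[0.0, 0.0, 0.7], [0.0, 0.1, 0.7]])
forearm_length = calibrate_bone_length(elbow, wrist)          # -> 0.3
drifted_guess = np.array([0.02, 0.12, 0.62])
print(constrain_to_bone(np.array([0.0, 0.1, 1.0]), drifted_guess, forearm_length))
```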

Watch this QuickTime movie of the preparation for a motion-capture session. © Giant Studios.

Another major advantage of the Motion Reality system is that it operates in real-time. "There are other optical systems that operate in real-time," says Madden, "but they are very limited to the type of motion and number of characters that can be captured. I've only seen them capture one character. With Motion Reality, we are doing three characters in real-time and are working on capturing four for a project due this spring. We can also do on-set captures, which means motion-capture on a live studio set, so you can capture and direct both the live actors and CG characters interacting in real-time. The animator or director is getting real-time feedback of how the end product looks, and they can evaluate that on the fly and say, 'Okay, that's not quite what I'm looking for,' then modify it for the operator or engineer. So you're creating and defining and developing a new style for that particular character in real-time. I am sure that no one else has this capability right now."
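The workflow Madden describes is essentially a per-frame loop: capture markers for every performer, solve each skeleton, apply the character's motion style, and show the director a preview fast enough to react to. The sketch below mocks that loop in Python with stand-in data and a trivial "solver"; every name and number in it is an assumption for illustration, not part of Motion Reality.

```python
import random
import time

def fake_marker_frame(num_characters=3, markers_per_character=40):
    """Stand-in for one frame of optical data covering several performers at once."""
    return [[(random.random(), random.random(), random.random())
             for _ in range(markers_per_character)]
            for _ in range(num_characters)]

def solve_root(markers):
    """Trivial stand-in 'solver': reduce one performer's markers to a root position."""
    count = len(markers)
    return tuple(sum(axis) / count for axis in zip(*markers))

def preview_take(num_frames=2, fps=60, style_scale=1.0):
    """Capture, solve and 'render' every character each frame so a director can
    judge the live result; style_scale stands in for a parameter an operator
    could tweak on the fly when the director asks for a different feel."""
    frame_budget = 1.0 / fps
    for frame in range(num_frames):
        start = time.perf_counter()
        for index, markers in enumerate(fake_marker_frame()):
            root = solve_root(markers)
            styled = tuple(style_scale * coord for coord in root)   # crude "style" tweak
            print(f"frame {frame} character {index} root {styled}")
        # stay inside the frame budget so the preview really is real-time
        time.sleep(max(0.0, frame_budget - (time.perf_counter() - start)))

preview_take()
```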

Finishing Humanity

Once the skeletal motion and the movement style are created, the shader developer creates the visual surface elements, such as shading, lighting, rendering and texturing. "Shader development is creating the 3D programming tools to be able to mimic real-life surfaces," says Giant's effects supervisor Rudy Poot. Formerly the Lead Color and Lighting Supervisor for Warner Bros.' The Matrix, Poot is now Giant's resident expert on shading development. "No one's ever really been able to do human skin before," he says. "Human skin is really something that everyone is trying to achieve because it's so complex. There are so many layers of light being absorbed by our skin and bounced around, and it's very hard to mimic that in a program. With virtual humans like Webbie, we have to take many high resolution photographs of real skin, and then we have ways to stretch that skin onto the 3D model. And then a special code is written so that skin will react naturally to light."
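The layered-light behavior Poot describes is often approximated in simple shaders by letting some illumination "wrap" past the shadow edge and tinting it red, as if it had scattered beneath the surface. Here is a minimal Python sketch of that common approximation, offered purely as an illustration; the colors and wrap factor are assumptions, and it is not Giant's shader code.

```python
import math

def skin_shade(normal, light_dir, base_color=(0.9, 0.75, 0.65),
               subsurface_tint=(0.9, 0.3, 0.25), wrap=0.4):
    """Shade one surface point: ordinary diffuse lighting plus a reddish term
    that 'wraps' past the shadow edge, mimicking light scattered under the skin."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    plain_diffuse = max(n_dot_l, 0.0)
    wrapped = max((n_dot_l + wrap) / (1.0 + wrap), 0.0)
    scatter = wrapped - plain_diffuse        # extra light appearing near and past the terminator
    return tuple(min(1.0, plain_diffuse * base + scatter * tint)
                 for base, tint in zip(base_color, subsurface_tint))

# A point angled 100 degrees away from the light is geometrically in shadow,
# yet still picks up a faint reddish glow, which is part of what makes skin read as skin.
angle = math.radians(100)
print(skin_shade((0.0, 0.0, 1.0), (math.sin(angle), 0.0, math.cos(angle))))
```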

While the creation of photo-realistic skin is supposedly just a few months away, creating realistic hair is proving to be a bit more difficult. "Hair is still evolving," says Madden. "The 3D people in the industry have done a really good job improving the look of fur, but hair isn't quite there yet." Once this technology is perfected, the ultimate result will be digital humans that look so photo-real, we won't be able to tell the difference between a computer-generated person that's being digitally manipulated by an animator, and a videotaped image of a flesh-and-blood human. "It's kind of scary, but it was bound to happen," says Poot.

Early wire frame models of Webbie Tookay. © Giant Studios.


Cut!

The big question is, why make digital humans at all? Sure, they look cool and they've got a certain kitsch value, but what purpose do they really serve in society? "Considering the growth of the Internet, and the many different digital features such as on-line movies and virtual fashion shows, we believe that virtual celebrities are a perfect fit," says Ricardo Bellino, co-founder and executive director of Elite Models, Inc. "Besides," he says, "you can already see the trend of the 'virtualization of Hollywood' in big productions such as Titanic and Toy Story." Digital humans, it seems, will primarily be used to pitch consumer products and facilitate the production of thrilling, never-before-seen special effects in movies. "For things like The Matrix, for example," says Madden, "if you want to do things that are a little bit superhuman, you can have [digital characters] climb up the wall, do their back flips off the wall, and still make it look like maybe that's possible, so the audience doesn't think that's obviously an effect. The character can have this capability that's never been seen before. It's not just a stunt man falling off a building." And unlike stunt men, digital humans can be built to look exactly like your lead actor, even close up. This has the Hollywood stunt community understandably, though perhaps unnecessarily, nervous. "They were concerned about this making their job obsolete," says Madden, "but that's certainly not the case. In fact, you rely on stunt men more than ever to do a lot of these captures that are extremely difficult. You still need people to be the source for this motion."

The Time Is Soon

Ready or not, expect to see the first photo-real digital human in approximately 18 months, when Giant Studios unveils a top secret film project that is currently in the works. "I'd love to be able to tell you about it, but, unfortunately, I can't," Madden teases. In the meantime, he reveals that Giant is working with New Zealand visual effects company Weta, Ltd. on New Line Cinema's much-anticipated The Lord of the Rings trilogy. Directed by Peter Jackson, the trilogy requires 1,300 computer-generated effects shots. "We're creating virtual characters through our software," says Madden. Though not human, they are main characters and, Madden assures, "We're definitely not doing Jar-Jar Binks." Further details on the production are under wraps. Other upcoming projects include a possible Webbie Tookay ad campaign for Nike of Europe, an on-line Webbie-hosted talk show that mixes live-action with digital imaging, and the digital scanning of real-life Elite models in order to represent them virtually on the Web.

Here's another QuickTime movie of Webbie in action. © Giant Studios.

"Some of the models were a little bit threatened by that," says Madden, "but obviously, you wouldn't capture or portray a supermodel without their permission. You certainly wouldn't do it without some kind of contract, and you would always use their voice and probably their motion as well. It's just a more efficient way to produce an appealing marketing piece for less cost. The intention is to build digital versions of them in a computer, and those things can then be reused and restructured in all these different ways, with motions combined and recreated for a completely different product or shoot. If a model is walking and talking, doing a commercial, for example, you can recapture the facial motion and audio track and play it back in Spanish and re-purpose it worldwide. The model would own a part of that, even if she herself didn't actually participate in that commercial." Says John Casablancas, Elite Models' Chairman of the Board, "I'm thinking that you need cyber everything nowadays. You need people who are available in two places at the same time, people who are flexible to change. A virtual model is absolutely the ideal person for that." Webbie would no doubt agree. Prior to becoming a freelance journalist and screenwriter, Laura Schiff sold animation art for Hanna-Barbera Cartoons. Her work has been published in Animefantastique, Creative Screenwriting, People, Mademoiselle and Seventeen.
