Jacquie Kubin looks into how motion capture technology is helping drive the technical improvements in the next-generation systems.
The Xbox 360, PlayStation 3 and Nintendo Revolution are taking videogaming and gamers on a new roller coaster ride. All three consoles, with small differences making each one unique, promise to deliver more of everything.
"With the upgrade in hardware technology, we have been able to push the limits of motion capture," suggests Scott Gagain, executive producer at House of Moves. "We are now able to capture an actor's full performance all at once. Motion capture lets you capture the essence of distinctive, signature movements that animators have a hard time reproducing with the same sort of accuracy."
MoCap does hold a very sexy allure for animators because it opens up the possibility of creating a lot of lifelike animation quickly, and while it is not a new technology, the more powerful next-generation systems are expanding its uses.
The next-generation consoles have taken an exponential leap forward, delivering greater power, sequencing capabilities and true hi-def images, and leaving the door open for even more motion capture techniques and, with them, even more lifelike animated characters.
Those hi-def images mean that we can see not only the bend in Tiger Woods' foot as he follows through on his signature swing in the EA Sports game, but also the bend in the grass in front of his shoes.
Thus, images are going to become ever more detailed and important to a demanding audience.
"At the core of game development has always been visual definition and how far the animator could go," says Matthew Bauer, general manager, Motion Analysis Studios in Los Angeles. "With the next-generation console, the power will be there to put finger bones into a character's hands so they can realistically grip objects and bend their feet. There has always been the ability to give this definition to the game, but now, with the next-gen consoles, the animator can actually use it."
Creating all that data by hand would be costly, time-consuming and tiring, not to mention the fact that without motion capture it is almost impossible to hand-animate truly believable human movement.
Motion capture as a technology has grown with its gaming end-users. Today, how an animator receives motion capture data can vary. The most widely accepted avenues are optical systems, in which cameras capture data from markers, and full-body systems, in which the actor wears a suit with attached sensors: either a frame that has motion capture devices at various joints, or tiny inertial gyroscopes attached to the body that capture movement data.
Pioneering the field of wireless, full body systems is U.K.-based Animazoo. Capturing realtime motion data, the Gypsy 4 and Gypsy Gyro-18 are both lightweight, fully portable motion capture systems.
The data is captured and processed on the suit and then transferred to a receiver that can be positioned up to around 300 meters away. This gives Gypsy systems a very large capture area; Suzuki for example has captured a motorcyclist going around a 500-meter track wearing the suit.
The difference between the body and optical capture systems is that the optical system uses external sensors (the cameras) to collect data from reflective markers placed on the body. With the Gypsy, the sensors (potentiometers or gyroscopes) are worn on the body with no need for external devices such as cameras, meaning full-body systems are not restricted to the studio.
The sensors register the rotation of the bones around the joints, capturing accurate rotation data and joint angles direct from the actor's body.
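As a rough illustration of what an animator's toolkit does with that rotation data, here is a minimal forward-kinematics sketch, simplified to a 2D chain of bones. The function name and the numbers are hypothetical, not from any vendor's software: given each joint's rotation and fixed bone lengths, the skeleton's joint positions can be reconstructed.

```python
import math

def forward_kinematics(bone_lengths, joint_angles_deg):
    """Walk a 2D chain of bones, accumulating each joint's rotation."""
    x, y, heading = 0.0, 0.0, 0.0  # root joint at the origin
    positions = [(x, y)]
    for length, angle in zip(bone_lengths, joint_angles_deg):
        heading += math.radians(angle)   # rotations accumulate down the chain
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# A two-bone "arm": upper bone rotated 90 degrees, lower bone bent back -90.
# The end of the chain lands near (1.0, 1.0).
print(forward_kinematics([1.0, 1.0], [90.0, -90.0]))
```

The same accumulation of per-joint rotations, done in 3D with quaternions or matrices, is how a suit's joint-angle stream becomes on-screen limb positions.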
What makes these unique is that they have no bounds as long as the subject stays within the broadcasting/receiving range of the system. With this full body system, it is now possible to capture raw data that would be otherwise impossible.
"All manufacturers are working towards the Holy Grail of motion capture, which is high-quality data captured over a wide area with no data cleaning," suggests Sam Berey, director of sales and marketing, Animazoo, U.K. "Optical systems and body systems are just two different methods of doing the same thing: capturing high-quality data that is lifelike. We are looking to get that data in a way that is different, and to work within an industry where optical is seen as the norm."
The camera rig motion capture system has grown from one actor in black latex adorned with markers to camera rig systems that can capture a stage of almost infinite size.
Vicon dominates the optical capture field. Jon Damush, vp and general manager of Vicon Motion Systems, U.S., states that optical motion capture is limited only by the space and the amount of money a studio or gaming company has to offer.
"Motion capture, five years ago, was pretty limited," Damush says. "In games, the character moves had to be tiny little chunks of action that could be believably sequenced by the game engine as the player made decisions whether to walk left or right or turn around."
Early MoCap systems were built around that, and around the assumption that motions would be 15 seconds, movement space would be 30 and direction would be just this way.
A large chunk of the limitations of early motion capture lay in making it look right. A lack of processing power meant that the number of sequences an animator had was limited, which sometimes meant that a cycle of movement was not as smooth, and lifelike, as it could be.
Now, with greater power, come more microscopic squares of movement and color from which animators can create extraordinary worlds of magic, myth and mayhem.
Demonstrating just how far optical motion capture has come, Vicon staged an interactive event during SIGGRAPH 2004: an unsuspecting audience took part in a game of Squid Ball, only the game was played with 12 weather-balloon-sized balls covered in reflective material.
"It was an exercise in motion capture and human behavior," Damush continues. "When the balls were first launched, the audience bounced them around, and then slowly they began to realize that there was a correlation between the balls they were bouncing and the game on the screen. For that capture, the stage area was about 330' x 300' x 40' in height. The limitations are definitely becoming broader."
And those challenges of bigger captures with more data have grown the industry from a few cameras shooting one person to higher-resolution cameras filming at higher speeds. Camera rigs often hold up to 60 camera units at a time in order to get full stage coverage, overlapping data and facial expression.
Whether it is a gridiron battle, a sci-fi fantasy, a suspense-filled mystery or a conflict set in a foreign desert, this is the new challenge for game development studios and animators alike. Yes, they have more processing power to use; however, using up all those extra pixels and sequencing abilities will take time, and money.
And those demands necessitate that animators be able to create ever more spectacular visual experiences more quickly, and less expensively, than ever before.
Though traditional frame animation is still a necessary tool for the animator, motion capture has quickly become the norm for most game developers. Originally developed as a medical research tool, motion capture does exactly what its name implies: it captures motion.
However, it does not end with the raw data. For Torsten Reil, ceo and co-founder, NaturalMotion, located in the U.K., the answer lies in the software, preferably their software, endorphin.
Meeting the interpretive need of turning raw data into walking, running or falling figures is NaturalMotion's endorphin software. Using artificial intelligence programming, endorphin can take a cycle of motion capture and create something entirely different.
Endorphin applies a nervous system to the skeleton, so animators can eliminate the keyframing steps. With previously captured data, the animator can change the action, and the character goes from walking to slipping and bringing the other characters down.
The character can be directed to make natural movements without the necessity of always having to go back to the studio to capture more data.
Motion capture technology allows the animator to capture the movement of a character, actor or animal, enabling them to spend their time and talent not on a simple walk but on making the environment, the textures, the body and the clothing true to the human eye.
And if it doesn't look right, the gamers are going to know it. Sequences must be believable. Environments must be complete, down to the dew on the grass or the shadow of a tree. Hand, foot and facial motions must be true to actual physical movements.
When you put someone instantly recognizable into a game, such as Tiger Woods, he had better not only look like him, but also walk, talk, bend, twist, reach, sit, dance, run, fall, jump, wave and swing a golf club like him as well!
"We are going to start to fool the eye in a way that will be obviously different because of the way we will begin to process raw data," Reil suggests. "With the next generation, rendering quality and resolution are going up. We will be seeing more clearly what is on the screen, and what it will eventually boil down to is the amount of data you need to create fluid motion, how you are going to get that data and what you are going to do with it. The time and talent cost of having to keyframe all that data can be enormous."
Software can make a world of difference to the animator, and to the game's bottom line: cost. With the next generation comes that new software, and now the sky is becoming a much closer limit. However, one must remember the game has two directors: the creative director at the studio and the guy sitting with a controller in his hand.
And the next-generation consoles are going to provide game developers with what they need to push the envelope in gaming. But with the new hi-def function of the new platforms, things are going to have to look very, very good. Better than good. Real.
"You would be surprised how close we can come to fooling the human eye," asserts Stefan Van Niekerk, associate CG supervisor, Electronic Arts, Canada. "I have personally caught myself walking into a meeting and they will be testing [a game] on the big screen. At first you think that you are looking at a soccer game, and then you realize it is a game that they are playing."
People are going to be surprised.
Electronic Arts (EA) has bet a lot on motion capture, building what is most likely the largest motion capture facility today. The facility includes two stages, allowing the group to film two separate games simultaneously. Using Vicon camera rigs, the capture range is 70' x 70' and 35' into the air. EA has had 20 marker-suited actors on the stage at one time, and often captures five-on-five basketball games.
Once captured, EA uses Diva software to track the points and then attaches them to the skeleton with MotionBuilder. The dots of the motion capture then become the driving points of a marionette-type character, a digital puppet.
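A toy sketch of that marker-to-skeleton step might look like the following. This is purely illustrative: the joint names and coordinates are invented, and real tools such as Diva and MotionBuilder do far more sophisticated solving. Here each joint of the digital puppet is simply driven by the centroid of the tracked markers assigned to it.

```python
# Hypothetical marker-to-skeleton step: each named joint is driven by
# the centroid of the optical markers assigned to it for this frame.

def markers_to_joint(marker_positions):
    """Estimate a joint's 3D position as the centroid of its markers."""
    n = len(marker_positions)
    return tuple(sum(p[i] for p in marker_positions) / n for i in range(3))

def solve_pose(marker_sets):
    """marker_sets maps joint name -> list of (x, y, z) marker positions."""
    return {joint: markers_to_joint(pts) for joint, pts in marker_sets.items()}

# One frame of invented marker data for two joints of the puppet.
frame = {
    "left_elbow": [(0.9, 1.4, 0.1), (1.1, 1.4, 0.1), (1.0, 1.6, 0.1)],
    "left_wrist": [(1.3, 1.1, 0.2), (1.5, 1.1, 0.2)],
}
print(solve_pose(frame))
```

Running this per frame yields a stream of joint positions; it is that stream, not the raw marker dots, that ends up pulling the digital puppet's limbs around.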
And capturing all that movement and transferring it to a team's worth of skeletons takes their animators a day.
"From a motion, or animation, perspective, the bottom line with the new consoles is that we are going to be able to put more animations into the game," Van Niekerk continues. "Where there used to be a jerky motion when a character moved from a run to a walk, we can now smooth that out with more animations in between."
"There will also be more signature moves and [visual] effects, and there will just be more, more and more, pushing the on-screen experience to that level of reality that is standard for humans, because we are so visual."
Is there anywhere to go from here?
Definitely. According to Van Niekerk, the consoles will just get bigger and better, and though this next generation of consoles offers so much more, there are still limitations to be overcome.
And then there is the whole online world.
"I think that one of the amazing aspects of gaming is games plugged directly into the World Wide Web," Van Niekerk concludes. "With EA's Battlefield, you can go online with 64 people from around the world and create the most incredible adventures. You become part of a squad with a commander and team. You have satellite images, demolition teams, special ops and snipers. You can walk into a situation, see a helicopter and fly away in it. The player completely directs what he or she is going to do, and that is amazing. The player truly becomes the director."
Jacquie Kubin, a Washington, D.C.-based freelance journalist, enjoys writing about animation, pop culture, electronic and edutainment media as well as music, travel and culinary features. She is a frequent contributor to the Washington Times and winner of the 1998 Certificate of Award granted by the Metropolitan Area Mass Media Committee of the American Assn. of University Women and 2002 HSMAI Golden Bell Award.