The Digital Eye: Just in Time Features

In the latest Digital Eye, Craig Zerouni of Side Effects Software explores how quickly experimental CG techniques become mainstream features.

Image courtesy of Deron Yamada. © 2004 DYA367.
The history of CG is really a history of features. Some features made things possible that were unattainable before (from texture mapping to ray tracing to radiosity), and the rest made something that had been possible, much easier or quicker to do (such as interactively animating objects, rather than typing in keyframe numbers and XYZ coordinates). The interesting thing about this process is how features, once introduced, quickly become indispensable.

For example, if you had asked me five years ago to name a promising piece of software to develop for the CG industry, I would probably not have named crowd creation as the place to be. Yet almost from the moment that Massive was introduced to do just that, it became an essential tool. So much so that I think Massive has become a kind of tax on doing CG: you can't be taken seriously unless you own a copy.

But more than that, it has created its own demand. Now that crowds can be done efficiently and well, they are everywhere. There are crowds in all the epics: The Lord of the Rings trilogy, of course, as well as Kingdom of Heaven and King Kong. Already this year we've had huge crowds in V for Vendetta, and this summer they will be featured for the first time in a CG-animated feature.

And now we've even got commercials with billions of extras running around in them, and those commercials are being made not just in the U.S. but worldwide. Crowds of extras, including horses and mythical creatures of all types, are now a completely mainstream capability, and something that clients around the globe regard as a generic production tool.

Another example of this is global illumination. It was not that long ago that GI (in all of its manifestations, such as ambient occlusion) was being described in SIGGRAPH papers in nerdy "gee whiz, looky here" terms. But because it adds realism, and because computers continue relentlessly to get faster, this technique has now become a common tool, even to the point of being used in animated films such as The Incredibles and The Ant Bully. I don't know of any offhand, but I'd bet money that there are also commercials using this technique.

Side Effects Software's Dynamic Simulation technology came just in time for features such as the upcoming Over the Hedge. This technology has become the latest indispensable tool to have. Courtesy of DreamWorks Animation SKG.
The most recent example of something new becoming indispensable is Side Effects Software's Dynamic Simulation technology. SESI announced Dynamics (also called DOPs) last summer, and delivered it to customers in September. And now, after eight months, there is already a list of features that use this technology, including Poseidon and BloodRayne, as well as the upcoming X-Men: The Last Stand, Superman Returns, Over the Hedge and The Ant Bully. And in this case, usage in commercials actually came before feature films, when Digital Domain used DOPs to do a Gatorade commercial last year. What happened at any number of facilities is that Dynamics were pressed into immediate service to solve a particular problem on one film, and that experience drove their use on a subsequent one.

So this is something that didn't exist a year ago, and now it has become an embedded production tool. Until this point, people were making do with simulations that they couldn't really control, or else they were just faking them. Or the simulations took too long to compute. Or they couldn't be integrated with the other software and processes already in place. Or all of these. DOPs filled that need, solved those problems and allowed people to continue to use all of their other tools, including Maya and RenderMan, without change. So suddenly, it's an integral part of the pipeline.

At one level, it's amazing that software manufacturers seem able to come up with new features on a just-in-time basis, where just as they are releasing something, demand for that very thing peaks. Of course, that's not exactly how it works. Manufacturers are constantly being bombarded with requests for new features, or better versions of existing ones, so, of course, they have a pretty good idea of what is important. Therefore, some of these features might be said to be demand-led.

But there is also the supply-led part of the equation, the "if you build it, they will come" nature of these features. To take an example from a different market, I don't recall that people were camping out in front of electronics stores, demanding that someone invent portable music players. But as soon as Sony showed them the Walkman, everybody had to have one.

The same thing goes on in effects with software features. Consider the case of DOPs again: what happened more than once is that this technology was used on films by facilities that originally bid on them when they didn't have the technology. Not only that, in many cases they didn't even know they were going to have it. Yet when they saw it, they thought, "Aha!" and, suddenly, it was impossible to imagine not having it.

That is something I have always found intriguing. The fact is, all of these films would have been made whether these particular technologies existed or not. All kinds of classic CG-based features have been created in what would now be considered prehistoric, unusable versions of software. Terminator 2 was released in 1991. At that time, the current version of Maya, for example, was 0.0, since Maya was not released until 1998. In fact, in 1991, Alias, Wavefront and TDI were still separate companies.

How, one might reasonably ask, is this possible? How is it that these features (and, indeed, whole products) are simultaneously completely vital, and yet totally optional?

The answer lies, I think, in the skill and dedication of the artists and technical people who use these tools. When people bid out shots, they have to decide how they are going to do them, and frequently those conversations revolve around using existing technologies in new ways, or in simply assuming that a lot of hard work will go into making something possible.

If, for example, you had to do a crowd scene without access to Massive, you would have to do a lot more work to get a less pleasing result. You'd have to worry about animating many more individuals yourself, and/or you would rely on writing some software that would try to bridge the gap between what you could do directly and what you really want to achieve. Effectively, you would end up developing a partial version of Massive, and using that. The gap between what you've got and what Massive delivers is bridged by a lot of late nights and botched render tests.

This is what new features really give artists: leverage. They allow people to do the same thing in less time, which in turn allows them to get a much better result in the same time. You don't quit early; you just get to add more subtlety, or respond to more changes of direction, in the same time span. It has always been this way.

New features allow artists to get more complexity in the same amount of time. Compare the battle sequence in the original Star Wars (left) with the opening sequence of Episode III. © & ™ Lucasfilm Ltd. All rights reserved.
When film opticals took a week to generate and then process at the lab, people built their expectations around having that number of bites at the cherry. Now that opticals can be done in hours, or even in realtime, they expect more complexity in the shots (and, yes, the production schedules also get compressed). Compare the battle sequence in the original Star Wars, for example, to the opening sequence of Episode III.

Of course, you can't get better features without better hardware. The relentless application of Moore's Law means that yesterday's experimental techniques, with their unwieldy render times, become tomorrow's mainstream technology.

Craig Zerouni.
It reminds me of something that happened long, long ago, when my London-based CG facility had the first realtime graphics displays in Europe. In those days, just seeing a wireframe image move across the screen in response to a joystick was so exhilarating, you would break out in an involuntary grin. I remember sitting with a client once, previewing a simple animation (they were all simple then).

"Wow," said the client. "This is amazing. I bet this means you can get done a lot quicker."

"Not really," I replied. "It just compresses the time between mistakes."

Craig Zerouni is currently a production consultant with Side Effects Software, developer of the award-winning Houdini family of 3D software. Zerouni collaborates directly with production studios and technical directors to help build superior CG pipelines and create cutting-edge digital animation for feature films. The 20-year veteran most recently served as technical director with Rhythm & Hues. Through his work with CFX Associates and Silicon Grail, Zerouni consulted on First Knight, Godzilla, The Prince of Egypt and Hercules, among others. He has also supervised the vfx for hundreds of TV commercials.