
From the Latent Image to the Digital Image

Digital production technology is moving so quickly that even 15-year computer animation veteran John C. Donkin finds it hard to predict. But here he hazards a guess at the areas soon to be revolutionized.

John C. Donkin.

Ten animators huddle in an open area, and stare at green and white terminals, without a mouse in sight. The cables from their terminals lead off into an oversized computer room and connect to a single VAX mainframe computer. Sucking down power and having to be cooled by a giant air conditioner, the VAX is the central hub for all of their rendering capability. Images are rendered to a single "frame buffer" that is shared by all the artisans in the other room. The behemoth computer costs hundreds of thousands of dollars, and requires custom-built software and hardware to produce the state-of-the-art images of the day.

Who would have predicted then that, a mere 12 years later, a computer sitting on an average desk would have 100 times the speed, 100 times the RAM and 1/100 of the cost?

It's Impossible to Say

This is the dilemma we face as we try to predict what will come in the future of computer animation. Where will the new advances come from? The images that we produce today are 100 times more complex than they were in the mainframe era. Yet we still face challenges in trying to produce images with the richness of what is found in the real world. Behind all of the advances and staggering images that dazzle us is an equally amazing technological display. Somehow, the technical artisans have reduced what we see with our own eyes into something that can be described in mathematical terms. Visionary film directors are seeing the potential in our medium and keep pushing us to produce new techniques to realize their vision. We see it on the screen every day, from commercials to blockbusters.

Fabricating Reality?

Current computer-animated films often feature creatures and characters that are more easily defined on a computer. Things like toys and bugs are less complicated than furry critters like monkeys and rabbits, but furry critters are clearly where we are headed. Soon there will be no limit to the types of characters that can be realized on a computer, and what we will be able to create will not stop at reality. One wonders about our industry's obsession with producing photorealistic imagery. But once we reach the point where we can convincingly represent reality, we will also be able to convincingly represent fantasy as if it were reality.

The rendering of convincing human characters in anything other than wide shots still eludes us. Hair and skin remain difficult challenges, let alone the nuances of human movement. As humans, we have developed an amazingly sophisticated critical eye when we look at ourselves or other people. Without thinking, we detect even the tiniest flaws that clue us in that what we are seeing isn't real. It comes down to point of reference. From our very first glimpses as babies we look to the faces of our parents, studying them. It is precisely this familiarity that makes representing humans and human motion so difficult. It's much easier to represent something convincingly if we have nothing to compare it against.

Some see the development of motion capture as a solution to this problem, and it may well be for certain types of applications. However, the literal translation of life-like human motion onto a life-like rendering of a human doesn't guarantee success. The animated human must be able to convey convincing emotions. Matching proportions, body weight and height is a crucial part of this solution.

Another solution is to speed up the animation tools and the rate at which tests can be viewed, allowing animators to better develop their craft. They will then be able to spend less time working the controls of the animation and more time analyzing their motion, bringing them closer to that "point of reference."

Seeing the True Light

The way in which we represent light in a computer is also changing. It is becoming more and more sophisticated, and this trend will continue as faster machines and better software enable us to develop more accurate techniques.

The conventional lighting model in computer graphics does a fairly decent job of lighting surfaces directly from sources, but there are some obvious shortcomings. Soft shadows that behave the way we see them in reality are not generally available. Moreover, light that is reflected off of surfaces and bounces through an environment is not taken into consideration by the renderer. Both soft shadows and reflected light are "faked," sometimes quite convincingly, by the artist wielding the tools.
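As a concrete illustration, here is a minimal sketch in Python of that conventional direct-lighting idea. The function names, the single point light and the vector helpers are illustrative assumptions rather than any particular renderer's API; the point is simply that each surface point sees only the light sources themselves, so shadows and bounced light have to be added by hand.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_shade(point, normal, light_pos, light_color, surface_color):
    # Diffuse (Lambertian) shading from one point light. Hard-edged shadows
    # and light bounced from other surfaces simply are not modeled here.
    to_light = normalize(tuple(l - p for l, p in zip(light_pos, point)))
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return tuple(s * c * n_dot_l for s, c in zip(surface_color, light_color))

# A white floor point lit from directly above: the result ignores any colored
# wall nearby, which is exactly the "bounce" an artist has to fake.
print(lambert_shade(point=(0.0, 0.0, 0.0), normal=(0.0, 1.0, 0.0),
                    light_pos=(0.0, 5.0, 0.0), light_color=(1.0, 1.0, 1.0),
                    surface_color=(0.9, 0.9, 0.9)))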

Rendering techniques to solve these problems, which had been too expensive to use, are now finding their way into production. Computing real soft shadows from light sources that have a defined shape results in shadows with areas of umbra and penumbra, just as we find in reality.
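One way to picture this is sketched below in Python, under some loudly stated assumptions: the square light, the random sampler and the toy is_blocked() test are hypothetical stand-ins, not a production shadow algorithm. The idea is simply that visibility is averaged over the light's surface, so points that see all, some or none of the light fall into the lit, penumbra and umbra regions.

import random

def is_blocked(point, light_sample):
    # Toy occlusion test: pretend an edge-on card blocks every shadow ray
    # aimed at the left half of the light. Purely for illustration.
    return light_sample[0] < 0.0

def light_visibility(point, light_center, light_size, samples=256):
    # Fraction of a square area light visible from `point`:
    # 0.0 = full umbra, 1.0 = fully lit, anything in between = penumbra.
    visible = 0
    for _ in range(samples):
        sample = (light_center[0] + (random.random() - 0.5) * light_size,
                  light_center[1] + (random.random() - 0.5) * light_size,
                  light_center[2])
        if not is_blocked(point, sample):
            visible += 1
    return visible / samples

# This point "sees" roughly half of the light, so it sits in the penumbra.
print(light_visibility(point=(0.0, 0.0, 0.0),
                       light_center=(0.0, 0.0, 10.0), light_size=4.0))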

Radiosity rendering technology simulates the amount of light energy that bounces around off the surfaces of objects. A white ball next to a blue wall will have a bluish tinge as a result of the light bouncing off of the wall back onto the ball. The use of radiosity rendering in production has, until now, been very limited because it is so computationally expensive. It is now becoming a viable production tool, as evidenced by Blue Sky Studios' Bunny, a short animated film that uses sophisticated radiosity and ray-tracing rendering techniques.
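To make the white-ball-and-blue-wall example concrete, here is a deliberately tiny, single-bounce sketch in Python. A real radiosity solver iterates light exchange over every patch in the scene; the one_bounce() function, its inputs and its form_factor value are illustrative assumptions only.

def one_bounce(direct_light, receiver_albedo, emitter_radiance, form_factor):
    # Light at the receiver = direct light + (receiver's reflectivity) x
    # (light leaving the emitting patch) x (how much of the receiver's
    # view that patch occupies, the "form factor").
    return tuple(d + a * e * form_factor
                 for d, a, e in zip(direct_light, receiver_albedo, emitter_radiance))

white_ball_albedo = (0.9, 0.9, 0.9)    # a white, highly reflective surface
blue_wall_radiance = (0.1, 0.1, 0.8)   # the wall mostly re-emits blue light
direct_on_ball = (0.5, 0.5, 0.5)       # neutral direct illumination

# The bounce term pushes the ball's color toward blue: the tinge described above.
print(one_bounce(direct_on_ball, white_ball_albedo, blue_wall_radiance,
                 form_factor=0.3))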

Other New Frontiers

Because the limits of what we can render realistically continue to be pushed, we can expect that our work in the film production community will continue to grow. We can expect fully CG characters to be portrayed in otherwise totally live-action films. We can expect full-blown computer generated sets to replace traditional movie sets and lots.

Our work is not limited to what we see on screen, either. I expect that as computer graphics becomes cheaper, faster and more available, the very creation of films will benefit. Directors may "pre-visualize" their cuts, action, pacing and timing on a computer before any frames are shot.

On-set editing facilities are already in use in production. Sequences are literally edited together between takes, so directors can see and experience what the final cut will look like while the crew is busy preparing the next shot. This gives them more flexibility than was possible before, and the trend is likely to continue.

We are moving more and more toward a method of filmmaking that allows us to make changes and create new imagery that could not have been realized before. We are no longer in the era of the latent image. We are in the era of the digital image, and it is revolutionizing the way we put moving images together.

John C. Donkin, a 15-year computer animation veteran, is currently a Senior Technical Director/Managing Technical Director at Blue Sky | VIFX. He has had many projects exhibited at the SIGGRAPH Electronic Theater, including Dinosaur Stuff (1988) and a Monopoly commercial (1995). Throughout his career he has developed animation, facial animation and particle systems, as well as numerous production support programs.