A major milestone being marked at SIGGRAPH '08 is the 20th anniversary of Pixar's RenderMan software system, which has been used to render the digital graphics in 44 of the last 47 films nominated for an Academy Award for Visual Effects. The anniversary arrives in a year when Pixar is riding high with its latest hit movie, WALL•E, when studio co-founder and President Ed Catmull delivered a candid assessment of Pixar's history to a standing-room-only SIGGRAPH crowd, and when the company is unveiling the latest version of its software, RenderMan 14.0.
Catmull expresses some amazement as he ponders the evolution of a tool whose development was launched when he led the computer graphics department at Lucasfilm back in the 1980s. "When I realized it's been 20 years, I thought 'Has it really been twenty?' There's a positive thing, and also something that's kind of embarrassing. The positive thing is that a long time ago we set some impossible goals for ourselves. We solved some problems and set down a path that enabled RenderMan. The embarrassing thing -- and it's kind of hard to describe -- is that a lot of credit was given to us for the time that we spent getting it started, which of course had its value. But the place that we've gone to from that initial start is miles ahead of even what we conceived at the time."
Catmull is quick to lavish credit for this on other studios that have used RenderMan over the years. "The impossible goals that we put in front of ourselves were far exceeded, but they were exceeded by the other people who kept on adding to it and changing it. We ended up with the situation where companies like ILM and Sony come in and say 'We need this... and we need that... ' They push us forward."
Catmull notes that the RenderMan Group, which is based in Seattle, Washington, and not at Pixar Animation Studios' headquarters in Emeryville, California, pursues an independent approach to dealing with outside customers. "The RenderMan Group, which has been led for many years by Dana Batali, has had a philosophy that they would treat outside people -- like ILM and Weta and Sony -- the same as Pixar's internal group." He adds wryly, "Being in Seattle probably helped protect the RenderMan people from being grabbed for 'local emergencies' at Pixar.
"Pixar didn't get things before anybody else, which a lot of people found puzzling because they thought we'd take advantage of things before we made them available to other people. There was one time (a software feature called Deep Shadows) when we did hold something back and we soon realized that we'd made a huge mistake. It was embarrassing, but it helped prove the fact that there's a community out there." For the RenderMan Group to build and maintain healthy relationships within that film community, Catmull believes, "You have to let go of immediate gratification. You basically have to say it's more important to build connections than try to hold on to a secret.
"In trying to serve the film community, we've always had the philosophy that we're not making a consumer product," stresses Catmull. "We're making something that's completely tuned for the film industry. It's been a very clear focus." (This single-mindedness is one of the reasons that the Motion Picture Academy awarded an Oscar statuette for RenderMan, a rare achievement in the history of AMPAS' Scientific and Technical Awards.)
Dana Batali, director of RenderMan Product Development, observes, "Ed Catmull has led RenderMan development to focus on one problem -- film-quality photorealistic rendering. We don't get sucked into real-time game visualization or architectural visualization or many other interesting problems. In the long run, RenderMan probably isn't the largest money-making renderer. You could say that the Quake [game engine] might be. And the NVIDIA graphics cards are another take on this. At any one moment you can choose from probably 10 renderers -- there are many out there for people to choose from. It's just that the larger facilities have come to realize that our commitment to the film-quality market, the investment that Pixar makes in rendering technology, and the connections to ground-breaking facilities mean that if you want to pick a renderer not just for today, but for five years from now, it would be silly not to consider RenderMan.
"I think there's a trend right now where people are realizing that they can get certain portions of the rendering job done in different ways than with a single RenderMan pipeline. They can get cheaper renderers -- or free renderers -- to do depth passes for example. And if they're RenderMan-compliant it makes it much easier to have a heterogeneous collection of rendering capabilities in their production pipeline."
Batali makes a point of mentioning that some confusion exists with respect to the term RenderMan itself. "It's a mixture of a brand and an interface. In 1988, when the term RenderMan was coined, it was also our renderer at the time. Rob Cook and Loren Carpenter had produced the REYES (Renders Everything You Ever Saw) scanline rendering algorithm. There were no other renderers at the time that understood the interface. But it's only one small part of the larger puzzle. The larger puzzle includes the REYES algorithm and motion blur. Motion blur was an important feature. You wouldn't care about that particular effect in architectural visualization, for example. It's only the folks who look at pristine-quality film who try to make sure that you don't get a headache when you watch movie images. But you can have a RenderMan interface renderer that doesn't have motion blur. There's nothing illegal about doing that!"
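The distinction Batali draws -- an interface that any compliant renderer can implement, with capabilities like motion blur left optional -- shows up directly in the RIB (RenderMan Interface Bytestream) stream itself. Below is a minimal, illustrative sketch in Python that emits a tiny RIB scene; the RIB calls follow the published RenderMan Interface, but the helper function and scene values are hypothetical examples, not anything from Pixar's own pipelines.

```python
# Illustrative sketch: emit a minimal RIB (RenderMan Interface Bytestream) scene.
# Any RenderMan-compliant renderer should accept this interface; a renderer
# without motion blur may simply ignore the Shutter/Motion blocks.
def minimal_rib(output="sphere.tiff"):
    lines = [
        f'Display "{output}" "file" "rgba"',
        'Format 640 480 1',
        'Projection "perspective" "fov" [40]',
        'Shutter 0 1',           # shutter open/close times for motion blur
        'WorldBegin',
        '  LightSource "distantlight" 1',
        '  Translate 0 0 5',
        '  MotionBegin [0 1]',   # two transform samples define the blur
        '    Translate -0.5 0 0',
        '    Translate 0.5 0 0',
        '  MotionEnd',
        '  Surface "plastic"',
        '  Sphere 1 -1 1 360',   # radius, zmin, zmax, sweep angle
        'WorldEnd',
    ]
    return "\n".join(lines)

print(minimal_rib())
```

The point of the interface is exactly this separation: the stream describes the scene, and each renderer decides which optional capabilities (motion blur, depth of field and so on) it actually implements.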
Batali notes, however, that using a heterogeneous collection of renderers has its downside. "The more pieces you have, the more pieces can break. You have more complexity in your pipeline. Truly, there are some instances where it seems unwise to pay for a full-on RenderMan license. A graphics card might be able to do it, and do it faster. These are the kinds of things that people are dabbling with. But I don't think anyone can point to a world-shifting success story in that way."
One of the ironies of RenderMan's evolution is that the multi-year production schedules of Pixar's animated features sometimes have prevented the studio from taking advantage of the RenderMan Group's latest code because productions can't change software in midstream. Pixar's next movie, Up -- from Monsters, Inc. director Pete Docter -- is a case in point. As Batali observes, "It's frustrating for those production people, and for us, because when we have some cool new thing, we can't get them to test it. So it might go out with more bugs in it."
"But the good news," he notes, "is that with lots of different customers there's always someone starting a new production." So the visual effects studios, with their ever-shrinking schedules, are ideal test beds for new RenderMan features. Batali has watched this happen for two decades, having been involved with RenderMan since its inception. Lately he's witnessed some exceptional fluidity in the use of RenderMan for visual effects. "Some facilities, like Weta, don't lock down on a rendering version. They have a fairly sophisticated way -- on a per-shot basis -- to assign a toolset to any given shot or artist. They've made the strategic choice that if a new version of a tool, late in the game, offers some incredible possibility, they'll take advantage of it. As long as they can limit the risk to the place where that will potentially bring value, they have an important tool in their arsenal that other facilities may not have."
"In any given year, one or the other of the big effects studios would be doing something that nobody has done before," says Batali. "And it's not always the same people." He cites the examples of ILM's work on Pearl Harbor and Sony's on The Polar Express. "We hear customers say things like 'We're randomly accessing 3 terabytes of data in order to get subsurface scattering information from real light sources in the universe.' And we say 'What? You can actually do that?' The fact that our software operates in that environment is a wonderful surprise."
By comparison, recalls Batali, "Way, way back, the idea of a complex frame was measured in small numbers of megabytes. Now it's measured in gigabytes. So complexity is growing at a Moore's Law pace, and it's all driven by further appreciation of the kind of visual complexity that we can deliver on computers. And it's not just delivering more complex lighting. More and more geometry -- for grass, fur and trees -- kicked in less than 15 years ago. The other thing that was dramatic was that, 20 years ago, the idea of a shader was a 10-line program. And 10 years ago it had gotten to the point of tens of thousands of lines on every single object in the scene. So there's complexity in every direction."
With respect to realistic lighting, Batali remarks, "The idea of trying to mimic reality has been fundamental to rendering from the very beginning. The film community has always wanted effects like caustics, but had to fake them. When ray tracing came within arm's reach -- in a computational sense -- it made things simpler. ILM's ambient occlusion work on Pearl Harbor was a landmark use of ray tracing."
The RenderMan Group may originally have been urged to add on-demand ray tracing to meet the needs of its visual effects customers, but Pixar's own embrace of ray tracing -- evident in director John Lasseter's 2006 film Cars -- marked a notable evolution. The film's vast landscapes and scenes of highly reflective vehicles prompted Pixar to adopt a hybrid rendering approach, combining REYES with ray tracing.
Batali thinks that people will continue to weigh the advantages of one approach vs. the other. "A RenderMan TD who's been in the trenches for 20 years knows how to get an effect without having to pay for it in a physical simulation sense. But the downside is the cost of a more brittle pipeline that may involve multiple extra passes. Then you have folks who are bound entirely to physical simulation. And if, all of a sudden, a director says, 'I don't really like that,' they have no way to make a change. If you're going for pure physical simulation, you can't change physics, even though the director would like you to do so!
"In some sense, tweakability means that you can program the local laws of physics to your own requirements. That's always been the basis of a RenderMan-centric pipeline. We have to be able to program everything. You don't want to program everything, but you have to have that option, and that's really why I think we're on pretty firm ground. We have to provide 'back doors.' The art of illusion is the art of programmability."
With each passing year, demands to extend RenderMan continue. With Pixar's most recent film WALL•E, director Andrew Stanton wanted to simulate an anamorphic look to evoke a widescreen, sci-fi film feeling. Batali explains, "Pixar doesn't use real lenses -- we have fake lenses. It's not a hard problem to stretch images, but the kind of artifacts that a real, physical anamorphic lens produces -- which we see when we watch a CinemaScope movie -- are the artifacts the director wanted. So we introduced a new feature -- anamorphic depth of field -- which means that the depth of field simulation inside the renderer can simulate the effects of an anamorphic lens. Unless you're extremely nerdy about looking at pictures, you'd never notice it. WALL•E also has gobs and gobs of complexity, and figuring out how to bake out various portions of the computations and reuse them is another area where we've made very significant strides over the last five years. In order to make pictures of great complexity on computers, you still have to play games.
"One of the things in the next several years will be trying to figure out ways to make things simpler than they are. To still get the visual complexity and the control that you have to have, but bring it down to the level where it doesn't require a Ph.D. in programming to turn the knobs. That's a battle that we continue to fight."
When he surveys the current landscape, Batali observes, "Studios have pipelines that are either works of art or almost Rube Goldberg machines in order to have all these multi-pass renders with sub-surface scattering." He notes that, for some, the compositing phase has almost begun to resemble a look development editing session. "They have 20 or 50 channels as inputs into their compositing scripts! I've detected a clash of cultures between those facilities that are clearly comp-driven and those that are trying to resist that particular slope. Pixar has tended to be anti-comp because our decision-makers haven't been the comping people. And there are so many decisions that you can't defer to comp. Maybe in the feature animation world that kind of comp-centric workflow is less natural, whereas, when you're doing effects, you're sitting with the director and a bunch of assets that you've produced ahead of time. Your director's time is very valuable and sparse, so you want to be able to make decisions at comp time."
Looking forward, Batali muses, "We're always looking for revolutions, but success is an evolution. We certainly have little side projects where we're exploring fairly radical alternate ideas, but, ultimately, all we can do for practitioners of movie production is make something faster or easier. Certainly, 'faster, easier and more complex' has always been our driver and I don't see that changing. There's no shortage of avenues where we can deliver the means by which richer pictures can be delivered to the screen cheaper, easier and with more control."
As Pixar marks RenderMan's 20th anniversary, the company's development endeavors are divided up into two broad categories, explains Batali. "There's the Studio Tools Group, which develops our in-house animation system, and then the RenderMan Group, which basically delivers two forks of products. One is the Pro Server, the core rendering technology, and the other is what we call bridge technology, the 'studio in a box'-type technology. That's called 'RenderMan Studio' and it's broken up into a plug-in for Maya -- to output RIB and convert a Maya scene into RenderMan form -- plus Slim, which is a shader-development environment, plus the queuing system Alfred, and a tool for viewing images. These are all basic tools, none of which alone would be 'best in class,' but their power is in their integration."
Ed Catmull may not have the time these days to grapple with the specifics of RenderMan development, but he remains intrigued by the future possibilities. "The next wave is that multiprocessors are coming out," he observes. "There'll be more processors and people will have to figure out how to use them efficiently."
And Catmull has always remained keenly aware of RenderMan's competition. "When people started to give out free renderers bundled with software, we wondered, 'If they're giving away renderers for free, how can we possibly sustain this as a business?' But an interesting thing happened. When people give away something for free, they can't afford to put resources into it to keep it at the level that it needs to be. What we've got is this small, dedicated group whose only job is to make the studios happy. Studios aren't trying to get things for free. What they want is to get the quality on the screen and they want reliability. In the end, we give people the sense of trust that they'll be able to deliver their films. And that's a source of great pride."
Ellen Wolff is a Southern California-based writer whose articles have appeared in Daily Variety, Millimeter, Animation Magazine, Video Systems and the website CreativePlanet.com. Her areas of special interest are computer animation and digital visual effects.