NVIDIA Ventures Deeper into Middleware Enabling

Bill Desowitz chats with Jeff Brown of NVIDIA about its recent acquisitions of mental images and AGEIA.

mental images, a significant NVIDIA acquisition, is best known for mental ray, which is part of the rendering pipeline on the upcoming Speed Racer. Courtesy of Warner Bros.

The industry has been all abuzz about NVIDIA expanding its video graphics arsenal through the acquisitions of mental images (best known for its mental ray renderer, which is part of the rendering pipeline on the upcoming Speed Racer) and AGEIA (best known for its gaming physics technology via the PhysX processor). This will not only impact game development but also the larger issue of interactivity, which Lucasfilm CTO Richard Kerris discussed in a recent interview with VFXWorld. Jeff Brown, NVIDIA's GM, Professional Solutions Group, elaborates on the significance of the acquisitions for the companies involved as well as for the industry.

Bill Desowitz: Let's talk about the strategy behind NVIDIA's acquisitions of mental images and AGEIA, and what it means in terms of advancing interactive technology.

Jeff Brown: Actually, we made another small, stealthier acquisition a couple weeks ago. We didn't make it public at all because it was purely an asset acquisition, just the IP and the people... a little company called modviz. They have technology that does distributed rendering [a technique for distributing a single render job within a single frame across several computers in a network]. It's part of this larger strategy of increasing value in the middleware layers. Those are the layers that the applications will talk to. We have this scene graph called NVSG that people like Autodesk are using. The whole idea is to abstract the overly complicated programming of the GPU, the different middleware layers. And it first started with CG and HLSL [High Level Shader Language], ways to program pixel overtech shaders. And now it's abstracted even higher into these middleware layers, which the ISVs [Independent Software Vendors] really appreciate because they can focus on building their apps and not have to worry about programming the GPUs as much. That's the spirit of this middleware layer that we're providing now. And mental and AGEIA and modviz all fit into that middleware layer and complement each other through common APIs.

BD: Let's start with mental images...

JB: They're known for [renderer] mental ray, which is pretty ubiquitous in any of the design applications. It's the renderer within Max and Maya; it's the renderer on the CAD side as well. So you get a lot of pretty pictures rendered with mental ray, which today is no longer 100% CPU-based. They recently started taking advantage of the GPU that is resident in the system, both to boost performance and to enable some effects that aren't really possible using an offline renderer. So the [mental images] team is putting that technology back into the mental ray source, and the result is going to be faster, more interactive, higher-quality rendering that may or may not use the GPU. It actually turns out that... it's just a matter of speed.

Jeff Brown talks about NVIDIA's recent acquisitions of mental images, AGEIA and modviz.

BD: This is what Richard Kerris, the new CTO of Lucasfilm, recently alluded to.

JB: I think making more portions of their pipeline interactive at Lucas is one of the things that Richard is interested in, and I think this is reflective of some of the other studios as well: for example, shading and lighting, or previsualization. It turns out that a lot of the functions of the film pipeline lend themselves better to the parallel processes [that we specialize in]. Our highest-end GPUs have hundreds of cores, relative to the two or four cores in a current CPU, so any application or process that can be parallelized well will run orders of magnitude faster on the GPU. So you can think of mental as about 90 engineers. Their business model is entirely what they call OEM, which means working with the ISVs and working with the studios on their own in-house tools. That strategy won't change. In fact, mental's charter won't change. They will maintain their brand externally, and one of the really exciting technologies that they have outside of mental ray is this tool called mental mill. It's a shader authoring environment that's targeted at artists rather than programmers. It's based on the concept of what mental images founder Rolf Herken calls "phenomena," which are these atomic building blocks, and through the shader graph you can put together pretty sophisticated looks. And the goal at Lucas, which Richard talks about, is to share all of the assets from the game to the film to the animation and even to the web, so mental mill helps them do that with MetaSL, which really is a meta shading language that allows you to describe any shader... and the beauty is that it actually has various fallbacks, like level one, level two or level three capabilities. If you have only a CPU, you'll render an image; if you have a CPU plus a hardware GPU, you'll render exactly the same pixels, maybe faster; and if it's pure GPU, it's even faster. It's pretty exciting. We can't yet talk about which ISVs will be adopting it, but it's the holy grail for the ISVs and studios and game developers to be able to share assets across your CPU renderer or your GPU renderer and be guaranteed that you're going to render the same pixels regardless of the backend.
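That "same pixels regardless of the backend" guarantee can be illustrated in miniature: define one shading function, run it through a CPU fallback path and a GPU path, and compare the results. The CUDA sketch below is a hypothetical stand-in for the idea, not MetaSL itself; bit-exact CPU/GPU agreement generally takes care with floating-point arithmetic, so the example deliberately uses operations that round identically on both paths.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // One shading function compiled for both backends (an illustrative
    // stand-in for MetaSL's fallback levels, not MetaSL itself). A
    // power-of-two scale plus an add rounds identically on CPU and GPU.
    __host__ __device__ float shade(float u)
    {
        return 0.5f * u + 0.25f;
    }

    __global__ void shadeKernel(const float* in, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = shade(in[i]);
    }

    int main()
    {
        const int n = 1024;
        float *h_in  = (float*)malloc(n * sizeof(float));
        float *h_cpu = (float*)malloc(n * sizeof(float));
        float *h_gpu = (float*)malloc(n * sizeof(float));
        for (int i = 0; i < n; ++i) h_in[i] = i / (float)n;

        // "Level one": CPU-only fallback, one pixel at a time.
        for (int i = 0; i < n; ++i) h_cpu[i] = shade(h_in[i]);

        // "Level three": the same function on the GPU, many pixels at once.
        float *d_in, *d_out;
        cudaMalloc(&d_in,  n * sizeof(float));
        cudaMalloc(&d_out, n * sizeof(float));
        cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);
        shadeKernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
        cudaMemcpy(h_gpu, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

        int mismatches = 0;
        for (int i = 0; i < n; ++i) if (h_cpu[i] != h_gpu[i]) ++mismatches;
        printf("mismatching pixels: %d\n", mismatches);   // expect 0

        cudaFree(d_in); cudaFree(d_out);
        free(h_in); free(h_cpu); free(h_gpu);
        return 0;
    }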

BD: And how does AGEIA fit into this middleware expansion?

JB: Again, it's a little premature to talk about ISVs that will adopt it, but it's interesting in that they have a similar philosophy, which is to do [integrated] physics [scaling], exposing a higher level. It's kind of like mental mill, but they call it APEX [Adaptive Physics Extensions], which expresses physics in artists' terms rather than programmers' terms. So, for example, a programmer might say cloth but an artist would say clothing. And so you could call a clothing [routine] and it would actually simulate the dynamics of the cloth of a dress or a shirt. And this will all start to fit into these pipelines together, and there's this very strong relationship between the appearance of something and its physical properties, which is physics. And the last piece, which the Max guys are starting to put together, is the AI aspect. So if you've got destructible physics, you want your AI to recognize that there's something different in the scene and react accordingly. We're not doing anything in AI, but you can start to see how this all fits together in the dynamics of an application or a pipeline.
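The cloth-versus-clothing distinction maps onto code roughly like this: the artist asks for "clothing," and underneath, a programmer-level solver steps masses and springs in parallel, one GPU thread per particle. Below is a minimal, hypothetical CUDA sketch of that underlying layer -- an explicit-Euler spring chain with a pinned end -- not AGEIA's actual APEX or PhysX API.

    #include <cstdio>
    #include <cuda_runtime.h>

    // One explicit-Euler step for a chain of cloth particles coupled by
    // springs, in displacement-from-rest coordinates. Positions are
    // double-buffered so every thread reads a consistent snapshot of its
    // neighbors. Illustrative only; not AGEIA's actual API.
    __global__ void clothStep(const float* x_in, float* x_out, float* v,
                              int n, float k, float gravity, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (i == 0) { x_out[0] = x_in[0]; return; }        // particle 0 is pinned

        float f = gravity;
        f += k * (x_in[i - 1] - x_in[i]);                  // spring to neighbor above
        if (i + 1 < n) f += k * (x_in[i + 1] - x_in[i]);   // spring to neighbor below

        v[i] = (v[i] + f * dt) * 0.999f;                   // integrate + mild damping
        x_out[i] = x_in[i] + v[i] * dt;
    }

    int main()
    {
        const int n = 256;
        float *x0, *x1, *v;
        cudaMalloc(&x0, n * sizeof(float));
        cudaMalloc(&x1, n * sizeof(float));
        cudaMalloc(&v,  n * sizeof(float));
        cudaMemset(x0, 0, n * sizeof(float));
        cudaMemset(v,  0, n * sizeof(float));

        for (int step = 0; step < 1000; ++step) {          // one thread per particle
            clothStep<<<(n + 127) / 128, 128>>>(x0, x1, v, n, 40.0f, -9.8f, 0.001f);
            float* tmp = x0; x0 = x1; x1 = tmp;            // swap position buffers
        }

        float tip;
        cudaMemcpy(&tip, &x0[n - 1], sizeof(float), cudaMemcpyDeviceToHost);
        printf("free-end displacement after 1s: %f\n", tip);

        cudaFree(x0); cudaFree(x1); cudaFree(v);
        return 0;
    }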

In 2007, AGEIA offered a PhysX Mod-Kit for Unreal Tournament 3 that allowed developers and modders to create new experiences in the game and customize existing levels to enhance gameplay. Courtesy of Midway Games.

BD: So where is this headed?

JB: It's probably best to think about it as a hypothetical application -- take 3ds Max, for example. If you could imagine the next version of Max -- again, I'm speaking hypothetically -- you would have a common shading environment based on MetaSL that would allow you to basically share your shaders from the Max viewport all the way through the offline renderer and into the game. That is a big issue right now: being able to simulate a shader within the Max viewport and then make sure that you get the same appearance on your target platform, whether it's a PC or console game. That's actually a pain point today. So MetaSL could potentially solve that problem. You'd want to render using mental ray, and you could use MetaSL shaders within mental ray. And because mental ray will start to leverage the GPU, you can actually do more interactive rendering, whether it's shading and lighting or final frame. And, finally, you can imagine having more physics in the viewport as well, where you kind of get "what you see is what you get" physics. And, again, rather than having to compile out to the target, you could actually get a sense for how the physics are going to work within the viewport. I think, ultimately, what happens is you end up with capabilities in the current tool chain that speed up this production process. So I think that's where this is going. And how this helps NVIDIA is it makes the GPUs more valuable, because it allows the applications to tap into the natural parallel horsepower of the GPU that's resident there. It's actually a shame that the amount of GPU power the average app consumes is very small compared to its capabilities. There are essentially free computing cycles available, and I think these middleware layers are going to help expose those things.

BD: And are we starting to see some of the results of this?

JB: Yes, we have announced that the AGEIA PhysX engine will be ported to the GPU via CUDA, and that is happening as we speak. And the mental and Gelato teams are working together on hardware acceleration of mental ray and ray tracing. The mental mill spec is locked down, so people are starting to adopt it. The wheels are in motion. The next step will be that you will start to see adoption of the technologies within applications or within studios like Lucas.

Bill Desowitz is editor of VFXWorld.
