Bill Desowitz chats with Rolf Herken, CEO, CTO and president of mental images, about the company's roadmap and the merger with NVIDIA.
NVIDIA's acquisition of Berlin-based mental images late last year certainly caught the attention of the industry, offering new possibilities for the convergence of GPU and CPU rendering and overall visualization. This was bolstered by NVIDIA's additional acquisition of AGEIA (best known for its gaming physics technology via the PhysX processor).
NVIDIA's venture into middleware obviously caught our attention at VFXWorld, too, and I had a chance to speak earlier this year with Jeff Brown, the company's GM, Professional Solutions Group. Brown divulged that it's part of a larger strategy of increasing value in the middleware layers. "[mental images] recently started taking advantage of the GPU, which is resident in the system to boost performance and also some effects that aren't really possible using an off-line renderer. So the team is putting technology back into this major source and the result is going to be faster, more interactive, higher-quality renderers that may or may not use the GPU. It actually turns out that... it's just a matter of speed."
Fast-forward to last week's SIGGRAPH 2008 in L.A., where mental images had a very large presence and introduced mental ray 3.7, the flagship rendering software that has been integrated into Maya 2009, with significant enhancements to multi-camera rendering, motion blur and advanced render passes. In addition, the 3.6 version is part of Softimage's new XSI 7 with ICE (Interactive Creative Environment) technology for complex deformations and character effects in this node-based workflow.
mental images also unveiled RealityServer 2.2, which will be demoed at NVISION 08 (NVIDIA's maiden conference on computer graphics) this week in San Jose. RealityServer is the company's server-based, highly scalable 3D web application and services platform for developers and system integrators, and it was integral to the Rome Reborn exhibit at SIGGRAPH. The new version is geared toward providers of 3D web application services, including Software-as-a-Service solutions.
Earlier, mental images released mental mill Artist Edition bundled with NVIDIA FX Composer 2.5, the shader creation tool that now works in conjunction with NVIDIA's shader development environment and new shader debugger to streamline content creation.
With this flurry of mental images activity, I caught up with Rolf Herken at SIGGRAPH, and he briefly discussed the company's roadmap and the merger with NVIDIA. The CEO, CTO and president of mental images also addressed "The Future of Rendering" at NVISION 08.
Bill Desowitz: In delving into the roadmap, let's begin with the synergy that's being established with NVIDIA.
Rolf Herken: Both hardware and software are inevitably glued at the hip. For NVIDIA to have a pure software company that truly is established in the business and advances the state of the art in software is certainly attractive because it allows this synergy of the development of hardware and software, which goes in lock-step in a way. We can give them input on how an [excellent hardware solution] should look in order to accelerate our algorithms, and they give us input on how the hardware looks, and we can take advantage of [that]. It's this interplay, without the barriers of confidentiality and all that, which is very attractive.
On the other hand, what we do is really what our customers need and want, and that requires that we listen to them and cannot just have an agenda for promoting a particular hardware for the sake of doing that if nobody can use it. So we really have to work in this triangle of the customer and NVIDIA, the hardware company, and us as the software company, so everyone is happy. And that's our goal.
BD: What about broadening your customer base?
RH: Well, we certainly do that in terms of developing new products like RealityServer, which is totally a foreign animal from the point of view of visual effects production or games producers or animation production. On the other hand, it's also a convergence in the end of interactive game content being created that runs on RealityServer that comes literally out of the production pipeline of an animation studio using the same ISO shaders, using the same assets created with 3D tools in an interactive context of an online game. We really never go very far from the core of our interest. But RealityServer definitely broadens our markets and customer business opportunities significantly...
The online entertainment 3D content of the future, with very large virtual worlds, has very complex interaction among users, characters, avatars and immersive worlds that will run on the server, with video streamed down to the client, displayed in a browser with the kind of latency that we see now reduced to the range of 13 milliseconds across the whole U.S. And it's made possible with the GPUs becoming part of server architectures and with broadband deployment and the affordability for the general public. I see a great future for very complex 3D virtual worlds -- online entertainment forms -- that are the expansion of what entertainment today is about.
And building the technology for this is our goal. Working with NVIDIA, our interests are pretty much aligned, so I think this alliance makes perfect sense. As I said, as long as our customers don't see there's anything wrong with the combination, it's only benefiting all of us. There's great talent at NVIDIA and great talent at mental images, and together we can produce some stunning results. And I would expect to see many highlights in our strategic technology roadmap (with an eye toward the importance of the GPU for anything we do) for the next five years as a result of this synergy.
Bill Desowitz is the editor of VFXWorld.