Bill Desowitz gets the scoop on the latest innovations at Autodesk this year from Marc Petit, SVP of the Media & Entertainment division.
Bill Desowitz: Let's begin by discussing what's new in the hot areas of realtime and previs.
Marc Petit: Because we now have all of this powerful realtime graphics hardware, we can start doing a lot more in realtime and leveraging all of this interactive technology for several purposes. One is sheer productivity gains; the other is to offer more creative iteration cycles. It's important to allow people to make better decisions during the process through the use of realtime, and to make them earlier in the process. So that is one of the things driving our roadmap for this year. In terms of previsualization and virtual cinematography, at our SIGGRAPH User Group last year we showed a research project called Sextant [a previs and concept design tool for film and games]. What's interesting about Sextant is that it's a realtime environment that bridges 2D and 3D. It has an editorial timeline and realtime graphics, it speaks 3D, video and audio fluently, and it allows people to mix and match and do very elaborate story development, pretty much like interactive 3D storyboards.
We have MotionBuilder, which is still heavily used for on-set capture. It's even being used by Lightstorm for Avatar's realtime cinematography and lighting elements. And Sony Pictures Imageworks also used MotionBuilder on Beowulf for realtime cinematography.
And then we have Maya as a generalist visualization tool for complex scenes. So the good news is that you can have bidirectional workflows, because all of those applications communicate through FBX, our data backbone, and we see a lot of presence in the pipeline.
BD: Would you characterize realtime as this year's roadmap theme?
MP: Yes, there is more realtime everywhere. There is full leverage of the hardware, with GPU acceleration going on in a lot of places. And, again, it's realtime for the sake of productivity and more creative cycles. It's the overarching theme. One other thing we are doing in this spirit is thinking about a game-like user experience. My sister divisions are doing a good job of converting 2D CAD people to 3D CAD, so people are starting to manipulate 3D models. When it comes to visual communication, you can still use the good old techniques of software rendering, which can be lengthy, but if you think about the gaming model and the realtime interactive model applied to CAD, you can start bringing visualization capabilities to many more people. And, you know, an architect is not very well versed in using 3ds Max or Maya.

We have another project, Project Newport, which is about giving interactive storytelling tools to professionals for visual communication. It's about aggregating content from many platforms. Take terrain, which is generated in a GIS CAD package like Civil 3D; take buildings, generated in a CAD package like Autodesk Revit; take machinery, generated in AliasStudio or Autodesk Inventor. Now put all of that together and tell your story: show your factories, show your buildings, and do it interactively, like a 21st century, game-like heads-up display application. We showed that [recently] at Autodesk University, and it's about more people, non-media professionals, creating 3D visualizations.
BD: Talk about how this applies to gaming.
MP: A lot of these technologies are relevant for the gaming industry. For example, Sextant is designed for somebody who has the skillset of a storyboard artist or who works in story development. They're not the [high priced] character animators or Maya junkies. Because we give them all of our simple tools for character animation in games, like HumanIK, they can actually manipulate characters. What we want to do is put the ease of use of gaming, the performance of gaming and the interaction models of gaming at the front end of the production pipeline.
BD: So there's definitely a bigger push in gaming this year.
MP: Yes, definitely. Although that is the space where we can disclose the least at this moment, we have been very successful with our character animation middleware, and I think we can invest a lot more in providing the game industry with better character tools. The game industry right now is very technology oriented. People will talk about physics, they'll talk about pathfinding, they'll talk about artificial intelligence, they'll talk about inverse kinematics. But that's not what the people who design games want to know about. They want to know about characters, they want to know about vehicles, they want to know about vegetation, they want to know about buildings they can destroy. And that's something we are working on in our tools. So when we consider the game market, we are trying to wrest the power away from the engine people and the programmers, who use technology to make it go fast, and give it to the designers so they can make games that play well. We want to get to a stage where we deliver components to game designers and they deploy a brick [construction] model.
So we want to make sure our products, 3ds Max and Maya, manipulate not only polygons and vertices and keyframes, but also objects that the game engine will understand. And get to the "what you see is what you play" paradigm.
BD: And what about Autodesk's involvement in stereoscopic moviemaking?
MP: When you think about a pure post-production world, stereoscopy is a big thing. So we have more going on in 3ds Max and Maya for stereo production.
BD: And what are the production needs?
MP: There are two things. One is creating stereoscopic images in 3D, and we've done a lot of work for some of our customers with new features. That aspect is about easing the process of stereoscopic creation. Now the manipulation of stereo as a concept -- how you design depth or add depth to your design, or how you actually edit and make all your corrections in the context of a 3D movie -- that's further out. Everybody's just starting to understand the problem, let alone think about the solution. But we've developed this concept of stereo grading because, we believe, when you start manipulating depth early in the pipeline and using depth as an element of the story, just like color, then you have to carry those decisions all the way through and be able to grade and fine-tune. And we're looking at this process holistically. We're starting to understand what we need to deliver as a set of tools for people to do stereo management.
So stereoscopy hits all of our products: Lustre, Smoke and Flame. You know, some of the tricks, like camera mapping, that we used to rely on heavily don't work anymore. So there are a myriad of projects and features going on right now to address all of those problems.
BD: What about making animation better and easier and improving workflow?
MP: This is another overarching theme. Whether it's in the film pipeline or putting animation in the hands of people who are not animators, like architects or story development people, we have many reasons to make animation easier. And easier doesn't mean simplistic. These users want few controls and pre-defined behavior, but still a pretty fine level of realism. The high-end people want more control; the other people want less control. So that's how we approach it, in terms of levels of control. One way to make 3D easy is to make it understandable, and one way to make 3D understandable is to make it behave more like the real world. This is where our simulation technology comes in, with our [Maya] Nucleus unified simulation framework and the introduction of its first module, nCloth. So we are driving a lot more simulation, and Nucleus is one example when it comes to cloth or hair or fluids. But we also do a lot of light simulation for people doing lighting previs or architects doing shadow-casting studies. You basically tell the software where you are in the world and what time of year it is, and we take care of realistic lighting.
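The sun-study idea Petit describes -- give the software a location and a date, and it lights the scene realistically -- boils down to a solar-position calculation under the hood. A minimal sketch of that calculation in Python, using the standard simplified declination and hour-angle formulas (the function name is illustrative, not Autodesk's API; accuracy is only about a degree, which is fine for a shadow study):

```python
import math

def sun_elevation(latitude_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle (degrees) for a place and time.

    Uses the simplified declination formula, good to ~1 degree --
    plenty for a shadow-casting study.
    """
    # Solar declination: the sun's tilt relative to the equatorial plane,
    # swinging between +/-23.44 degrees over the year.
    decl = -23.44 * math.cos(2 * math.pi / 365 * (day_of_year + 10))
    # Hour angle: 0 at solar noon, 15 degrees per hour away from it.
    hour_angle = 15.0 * (solar_hour - 12.0)

    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

# Noon on the winter solstice in London: a low sun and long shadows.
print(round(sun_elevation(51.5, 355, 12.0), 1))  # roughly 15.1 degrees
```

A renderer would extend this with azimuth to get the full sun direction vector, but the elevation alone already drives shadow length and color temperature in a lighting previs.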
And we have a lot going on in the backend of the pipeline in terms of sharing the same color management system, sharing 3D geometries through FBX, sharing media, of course, and we're driving a lot of integration between Maya and Toxik because those two products go hand-in-hand in CG pipelines.
BD: And what about Mudbox? There was a lot of concern when the Skymatter acquisition was announced last year about it remaining a standalone product.
MP: Although we can't get into any details yet, we can say that, yes, it will remain standalone, and that we are also looking at better integration with Max and Maya. What the Mudbox customer is looking for is texturing added to the package: not only working with the shape but also the color. We have a good history with these things with our customers, and there is a lot of hardware acceleration going on as well. Andrew Camenisch, Dave Cardwell and Tibor Madjar are in Toronto now, working on the next version of Mudbox, and we're starting to see with them how we can help them impact more products.
Bill Desowitz is editor of VFXWorld.