In this month's issue of "The Digital Eye," Rob Pieké explores the inner workings of R&D at London-based MPC, where ALICE, Furtility, PAPI and Tickle reign.
The R&D team at MPC, which has achieved great success in recent years with its technologies for crowds (ALICE), fur (Furtility), dynamics (PAPI) and lighting (Tickle), is now confronted with a whole new set of industry challenges, ranging from shrinking schedules and budgets to globalization.
First formed in 2001, when the company had roughly 50 employees in the film department, the R&D team reached a peak size of 40 in 2007, servicing MPC's 680 staff on everything from project setup to pipeline to artist tools and support. The company has now roughly leveled off at a total headcount of 450, and is in the process of restructuring its previously monolithic development department into more manageable and versatile pipeline and R&D teams.
The R&D team develops and supports software at three levels. At the lowest level is a core set of C++ libraries designed to consolidate as much reusable technology as possible. Three years ago a significant amount of the source code in our Maya plug-ins dealt with generic geometry processing routines. Centralizing this technology has resulted in a single maintainable geometry processing library that all tools can use, and has greatly reduced the amount of duplicated code throughout our plug-ins.
We use a number of third-party and open-source technologies, where appropriate to our development, but try to integrate them at a very low level in our core. This enables us to provide an interface such as mpc::Thread to a developer working on an artist tool without them having to worry about whether they're ultimately using Intel's Threading Building Blocks, NVIDIA's CUDA or an in-house technology. Furthermore, it allows the developers working on the core to swap out one low-level technology for another without the entire R&D team having to perform a major overhaul on every higher-level tool. We have successfully exploited this capability both ways in the recent past: swapping out in-house technology for an open-source project which had recently matured significantly, and swapping out a third-party project for in-house technology when we realized we were pushing the technology in a direction it wasn't designed to go.
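To illustrate the layering, here is a minimal sketch of what a facade like mpc::Thread could look like. This is hypothetical code, not MPC's actual implementation; it happens to wrap std::thread, but the callers never see the backend:

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <utility>

namespace mpc {

// Facade exposed to tool developers. It currently wraps std::thread, but
// the core team could swap in Intel's TBB, a CUDA dispatcher or an
// in-house scheduler without touching any of the higher-level tools.
class Thread {
public:
    explicit Thread(std::function<void()> task) : impl_(std::move(task)) {}
    void join() { impl_.join(); }

private:
    std::thread impl_;  // the only member that knows about the backend
};

}  // namespace mpc
```

A tool developer simply constructs an mpc::Thread and joins it, and stays insulated from whichever low-level technology the core team is using that year.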
Compiled Artist Tools
Our second level of technology is the suite of compiled artist tools. These make use of the core libraries and account for the majority of our technology, particularly where speed is a chief concern or where there is no other interface to one of the dependencies. Technology at this level is often both directly accessible and wrapped so that it can be used by even higher-level tools.
While the majority of our technology is able to operate in isolation, we continue to use Maya as a framework which we can hook our tools onto. This provides the artists with a familiar interface and allows us to use Maya's technology to supplement our own when needed. One of our core libraries allows for the seamless translation of data in Maya's format to and from our own native format. We can then focus our efforts on achieving the required functionality of our artist tools without having to worry as much about whether they will end up interacting with Maya or not.
An example is our crowd technology, ALICE, which was originally developed for Troy. In recent years, ALICE has moved away from its dependence on Maya, but still provides a MEL binding to all the functionality and a MEL event framework to specify procedures which will execute at key points in a simulation. Using PyQt, we have built an interface that runs both inside and outside of Maya and allows TDs to build complex behavioral state machines by connecting "agent operators" in a node graph. These operators take ALICE beyond simply blending motion clips, allowing it to synthesize completely new motions to handle footfalls on uneven terrain, aiming heads, transitioning to and from physics simulations, etc. We still prefer to rely on motion clips where possible, but the result of an operator graph may be the blending of many clips, not just two. Our "additive" operator, for example, can isolate animation such as walking or clapping from a set of motion clips and combine these bits to generate the output. At the time ALICE was first developed, there was no viable off-the-shelf alternative (to serve our particular needs), and given how far ALICE has grown, we feel it is still a better solution for MPC than anything we could buy today. It has worked remarkably well for simulating crowds of elves, medieval armies, birds, bats, drowning people and mammoths, to name a few, and plugs seamlessly into our pipeline.
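To make the operator idea concrete, here is a toy sketch of blending an arbitrary number of motion clips, and of an additive-style operator that layers an isolated motion onto a base pose. The names and data layout are invented for illustration and are far simpler than ALICE itself:

```cpp
#include <cstddef>
#include <vector>

using Pose = std::vector<double>;  // one value per joint channel

// Blend any number of clip poses by normalized weights -- the result of
// an operator graph may mix many clips, not just two.
Pose blendClips(const std::vector<Pose>& poses,
                const std::vector<double>& weights) {
    double total = 0.0;
    for (double w : weights) total += w;
    Pose out(poses[0].size(), 0.0);
    for (std::size_t c = 0; c < poses.size(); ++c)
        for (std::size_t j = 0; j < out.size(); ++j)
            out[j] += poses[c][j] * (weights[c] / total);
    return out;
}

// Additive-style operator: isolate a motion as the delta between a clip
// pose and a reference pose, then layer that delta onto a base pose.
Pose addMotion(const Pose& base, const Pose& clip, const Pose& reference) {
    Pose out(base.size());
    for (std::size_t j = 0; j < base.size(); ++j)
        out[j] = base[j] + (clip[j] - reference[j]);
    return out;
}
```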
Scripted Artist Tools
The third level of technology is our suite of scripted artist tools. We have made it very easy to wrap our compiled tools using Giggle, our language-independent script binding technology, so that they are accessible from Lua and Python. This allows for a very rapid development cycle for new tools at the possible expense of reduced execution speed. We routinely develop new technology using script first and re-implement in C++ only if necessary. With full access to all our lower-level tools, the scripting interfaces are appealing and accessible enough that a number of TDs are developing their own tools this way.
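The idea behind a language-independent binding layer can be pictured as a registry of uniformly-typed callables: compiled tools register functions once, and each scripting front-end only has to speak to this one interface. The sketch below is hypothetical and much simpler than Giggle's actual design:

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Value = double;  // a production system would use a variant type
using BoundFn = std::function<Value(const std::vector<Value>&)>;

// Compiled tools register functions once; Lua and Python front-ends both
// resolve and invoke them through this single, language-neutral interface.
class Registry {
public:
    void bind(const std::string& name, BoundFn fn) {
        fns_[name] = std::move(fn);
    }
    Value call(const std::string& name,
               const std::vector<Value>& args) const {
        return fns_.at(name)(args);  // throws if the function is unknown
    }

private:
    std::map<std::string, BoundFn> fns_;
};
```

Adding a new scripting language then only means writing one bridge to the registry, not re-wrapping every compiled tool.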
MPC gains a significant competitive edge with this framework. Rather than being locked into a single development methodology, we can look at the demands of performance, robustness and availability for any tool and pick the best way to deliver it. Additionally, by providing all these entry points to our technology, we're able to achieve incredible interoperability. We can use the same custom forces to affect particles in Maya and fur or cloth in our in-house simulators, and are able to write scripts which gather information from our rigid body system, PAPI, and drive custom geometry deformers or even our crowd system, ALICE, with it. We have also written generic Maya nodes (locators, deformers, particle fields, etc.) which use scripts to dynamically define their inputs, outputs, execution and drawing.
Our rendering pipeline, which dates back to MPC's work on Troy, was the initial driver of this multi-tiered tool-set and continues to make extensive use of the concept today. Virtually everything we render in RenderMan is spawned from procedurals which run the same scripts to generate RenderMan primitives as Maya uses to generate OpenGL previews in the viewport. We also provide this procedural-based support for our mental ray output. This renderer-agnostic approach means we save a lot of disk space by only storing general-purpose data caches which can be easily interpreted and/or manipulated "on-the-fly" depending on the required output. When possible, our characters are skinned at render-time which means we only need a single mesh and a light-weight skeleton cache on disk.
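The render-time skinning idea can be illustrated with a toy sketch. This is hypothetical code: real skinning uses full joint transforms and per-vertex weights, reduced here to a single bone translation. One rest mesh plus a lightweight skeleton cache are combined on demand, and thin emitters feed the same result to each output:

```cpp
#include <string>
#include <vector>

struct Point { double x, y, z; };

// Only this data lives on disk: a single rest-pose mesh plus a
// lightweight skeleton cache (reduced here to one bone translation).
std::vector<Point> skinAtRenderTime(const std::vector<Point>& restMesh,
                                    const Point& boneOffset) {
    std::vector<Point> posed;
    posed.reserve(restMesh.size());
    for (const Point& p : restMesh)
        posed.push_back({p.x + boneOffset.x,
                         p.y + boneOffset.y,
                         p.z + boneOffset.z});
    return posed;
}

// A thin per-renderer emitter; an OpenGL preview or mental ray emitter
// would consume exactly the same skinned points.
std::string emitRenderManPoints(const std::vector<Point>& pts) {
    std::string rib = "Points \"P\" [";
    for (const Point& p : pts)
        rib += std::to_string(p.x) + " " + std::to_string(p.y) + " " +
               std::to_string(p.z) + " ";
    rib += "]";
    return rib;
}
```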
Philosophy in Action
Perhaps the best recent success story of our software philosophy is our fur technology, Furtility. The first version of Furtility was written as a combination of C++ libraries, Maya plug-ins, MEL and Lua scripts to address the needs of 10,000 BC, where MPC was responsible for the digital mammoths. As that project ramped down, MPC had already started on The Chronicles of Narnia: Prince Caspian, which required the company to deliver dozens of furry hero animal characters, from mice to minotaurs. To keep Furtility as lean as possible, we audited it and identified generic technologies such as point distribution algorithms which were moved into our core libraries where they could be shared with our fx and crowd teams. We also looked at the script-based tools and re-implemented certain ones in C++ where we determined they had matured to a level where we would benefit more from sheer execution speed than the rapid development cycle. This summer, MPC started work on Wolfman, which has allowed us to go through another iteration of centralizing generic technology, optimizing stable technology, and focusing on building new features into a lean and clean software base. The emphasis for this third generation has been on performance, and we have provided an incredible experience for the artists who now get near-real-time detailed feedback as they define and design the characteristics and style of the fur grooms.
Another technology which has seen a lot of growth recently is Tickle, our lighting environment and scene translation tool-set. During Prince Caspian we added the concept of a "shader proxy," which is a self-contained asset defining both shaders and the passes they should be invoked in. While lighters previously had to manually set up passes and specify shaders per-object per-pass, a single shader proxy for a furry creature might now define all the render-passes with custom settings and shaders used to bake out point-clouds for occlusion and subsurface information, as well as the render-pass for the final output where a completely different set of shaders and settings are used. These proxies can be easily versioned and stored in or retrieved from our asset management system.
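A shader proxy can be pictured as a small self-contained data structure. The layout, pass names and shader names below are invented for illustration, not Tickle's actual schema:

```cpp
#include <map>
#include <string>

// One pass as a shader proxy describes it: which shader to invoke,
// with which settings.
struct PassSetup {
    std::string shader;
    std::map<std::string, std::string> settings;
};

// The proxy bundles every render pass for an asset, so a lighter never
// assigns shaders per-object per-pass by hand; the whole bundle can be
// versioned in the asset management system.
struct ShaderProxy {
    std::string assetName;
    int version = 1;
    std::map<std::string, PassSetup> passes;  // keyed by pass name
};

// Example proxy for a furry creature: a bake pass for occlusion data plus
// a beauty pass using a completely different shader.
ShaderProxy furryCreatureProxy() {
    ShaderProxy p;
    p.assetName = "creature_fur";
    p.passes["occlusionBake"] = {"bakeOcclusion", {{"output", "pointcloud"}}};
    p.passes["beauty"] = {"furBeauty", {{"samples", "64"}}};
    return p;
}
```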
During Harry Potter and the Half-Blood Prince, we added a shader-builder to Tickle so that artists can craft their own final look from pre-approved building blocks. This is particularly useful for fx elements, where there are far too many possible variations in the desired look for a generic multi-purpose shader to be realistic. To handle the complexity of Prince of Persia: The Sands of Time, we've revamped our texture baking system so it can easily store any shader output in a point-cloud or texture map for future retouching and reuse. All these additions allow our lighters to focus on lighting rather than wrangling shader setups and update issues.
Every project has its own look development challenges, which also generates work for R&D. A significant amount of academic research has been done into the way hair reacts to light, but it has often yielded models that require an array of inputs to produce a quality result. For Prince Caspian, R&D implemented clever methods to approximate the local self-occlusion of fur and simplified a popular lighting model so that it executed more quickly and could be intuitively controlled. Similarly for Watchmen, we've had to find ways to deliver fantastical approximations of reality. This principle of approximating and optimizing for performance or artistic control is routinely used by us and has delivered other technologies such as our subsurface shaders and our octree-based particle lights.
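As an illustration of the kind of lighting model in question (this is the classic published Kajiya-Kay diffuse term, not MPC's proprietary simplification), hair is shaded from its tangent rather than a surface normal:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Kajiya-Kay diffuse term: shading falls off as the hair tangent T
// becomes parallel to the light direction L (both assumed normalized),
// peaking when the light is perpendicular to the strand.
double kajiyaKayDiffuse(const Vec3& T, const Vec3& L) {
    const double tl = dot(T, L);
    return std::sqrt(std::max(0.0, 1.0 - tl * tl));
}
```

Terms like this are cheap, stable, and expose only a few intuitive controls, which is exactly what makes such models attractive targets for production simplification.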
Despite the variety of work, MPC has invested heavily in ensuring we have a single unified pipeline to handle it all. Given the number of artists involved in Prince Caspian, it was completely unrealistic to rely on emailing paths to Maya scenes as shots progressed from department to department. Instead, roughly a third of the software developers were tasked with turning the asset management system into a complete suite of pipeline technologies. The concept of hierarchical packages to wrap our assets in had already been started during Poseidon but was extended with automation utilities and a flexible approval-driven scheme, which allowed artists to control when packages should and should not be automatically updated. This streamlined our working process to the point where an artist could release a new set of animation curves and the system would take over, automatically running our muscle system with the new data, generating new geometry caches and updating all the relevant packages up to and including the single shot package that a lighter would gather to render the entire frame.
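The approval-driven cascade can be sketched as a walk over a package dependency graph. The structure and names below are hypothetical, a minimal model of the behavior described above:

```cpp
#include <map>
#include <string>
#include <vector>

// A package in the chain from animation curves up to the final shot
// package. Artists can pin a package by disabling autoUpdate.
struct Package {
    int version = 1;
    bool autoUpdate = true;
    std::vector<std::string> downstream;  // packages built on this one
};

// Releasing a package bumps its version and re-publishes every downstream
// package that has opted into automatic updates; a pinned package stops
// the cascade at that point.
void release(std::map<std::string, Package>& pkgs, const std::string& name) {
    Package& pkg = pkgs.at(name);
    ++pkg.version;
    for (const std::string& child : pkg.downstream)
        if (pkgs.at(child).autoUpdate)
            release(pkgs, child);
}
```

In this model, an animator releasing new curves would automatically re-publish the geometry cache and, unless a lighter has pinned it, the shot package as well.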
Packaging was invaluable for the volume of work we had on Prince Caspian, and has subsequently proved to be robust enough to handle the multiple film projects we currently have running in parallel. What this now means for MPC is that when a new project comes in, there's no confusion as to how it should be organized or how assets should be named. Furthermore, once the project is running, an artist from another project can easily jump in and be immediately productive. Our pipeline team still accounts for roughly a third of the R&D staff, but this has allowed us to continue extending our packaging system to handle everything from humans to multi-resolution submarines as hero "characters".
Perhaps the most distinctive challenge we've faced recently is globalization. In the last year, MPC has opened offices in both Vancouver and Santa Monica, which has forced us to reexamine some of our policies and approaches to technology development and support. Even in London, where our R&D team has hundreds of artists as clients, it's much easier to engage with the end-users and provide timely feedback to queries or bug-reports about our tools than when we're faced with an eight-hour time difference between London and the west coast of North America. While using VNC allows developers in London to see what's on an artist's screen in Vancouver, we still only have a small overlap in our working hours in which such interactive sessions can take place. As a result, we've had to emphasize good communication, in which a significant amount of information is delivered clearly and concisely in an announcement rather than emerging only from a back-and-forth discussion. We still have a series of weekly conference calls between the studios, but they've matured from just keeping up with the day-to-day operations to now discussing our longer-term goals and focuses.
As has been noted by others in the industry, schedules and budgets are shrinking, and R&D is often one of the first departments to feel the squeeze. While not immune, MPC has handled this situation in an interesting way by recognizing the long-term impact that R&D has and reducing its dependency on project budgets for financial resources. This allows us to be far more proactive: if we spot a trend in the projects we're bidding on, we can start working on the required technology immediately rather than wait for a project to be awarded, have its budget set up, and so on. We are also now better able to focus on cross-project technologies without having to be as concerned about being included in the accounting for multiple projects.
A prime example of this is some of the digital destruction tools that the fx R&D team has been working on. MPC has had a set of tools to handle destruction for a few years, but none of them were particularly robust. At the same time, we noticed that the company was bidding on a few projects which would require dynamic destruction on a much larger scale than anything we'd done before. R&D started working on the problem immediately and, by the time the first such film had been awarded, we were already producing proof-of-concept results.
MPC prides itself in always trying to do something it's never done before, whether it means working on 1,000 shots for a single project or taking on eight films at once, each with its own focus and challenges. To help the company sustain this success and maintain its competitive edge, the R&D team is constantly busy maintaining our core technologies while investigating and implementing the next generation of our tools and pipeline.
Rob Pieké is an R&D lead and software manager at MPC in London. Pieké has been active in the visual effects industry for more than five years and, prior to joining MPC, was the R&D supervisor for C.O.R.E. Digital Pictures in Toronto. He has most recently been developing technology for Harry Potter and the Half-Blood Prince, Quantum of Solace, Watchmen and GI Joe: Rise of Cobra.