
Liquid Magic: Exocortex Technology, Part 1


Longtime fans of the Harry Potter film franchise are familiar with the series’ ample visual effects (VFX), and this summer’s box office hit Harry Potter and the Deathly Hallows: Part 2 -- the final adventure in the series -- ranks as one of the biggest VFX-driven Hollywood productions of 2011. Among the many emotive sequences in the film is the memory pool sequence created by Gradient Effects. Here is a behind-the-scenes look at how Gradient Effects and Exocortex Technologies worked together to pull off this visually stunning effect.

A Large-scale, High-fidelity Sequence

Harry Potter’s memory pool sequences required an ink-in-water look in which the ink would behave realistically in a large volume, then quickly assume dynamic shapes that could transition seamlessly to live-action footage. This complexity presented Gradient Effects with a significant challenge. Not only did it require fluid simulations of unprecedented detail and scope, but it also needed a specific inky look that most traditional simulators have difficulty producing, even in restricted domains. And because the sequence’s timing and transitions had already been decided, there was no artistic leeway if the technology ran into limitations.

On the technical side, artists had to be able to see, in real time, the results of increasing the simulation resolution to more than half a billion points for final output. It was clear to Olcun Tan, co-founder and head of research and development at Gradient Effects, that traditional fluid simulators were not up to the task.

While searching for new simulation solutions, Tan discovered Exocortex’s Slipstream technology. “Before committing to any technology, we did extensive research by testing all well-known technologies for the type of work required on Harry Potter,” says Tan. “After seeing the first tests from Exocortex, I was instantly convinced that we had found our solution in Exocortex’s technology.”

Exocortex’s Slipstream Technology

Ben Houston, Exocortex’s founder, has long known about the limitations of traditional simulation methods, having led the Flood fluid simulator project at Frantic Films. Exocortex was founded on the belief that these traditional limitations could be overcome, giving artists more creative freedom while also controlling costs.

In the pursuit of these goals, Exocortex’s team had been researching and developing its proprietary simulation technology since 2008. The result is Slipstream, which allows rapid accurate previews -- and realistic and unbounded liquid simulations -- while cutting costs for VFX studios.

A key element of Slipstream technology is the elimination of the simulation bounding box. The technology simulates freely in any environment, delivering results at whatever level of detail is needed.
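One way to picture a simulator without a bounding box is sparse storage: grid cells are allocated only where fluid actually is, so the domain extends wherever particles travel instead of clipping at a preallocated box. The sketch below illustrates that general idea only -- Slipstream’s internals are proprietary, and every name here is hypothetical:

```python
from collections import defaultdict

# Illustrative sketch of unbounded simulation domains via sparse
# storage -- NOT Slipstream's actual implementation. Only grid cells
# that particles occupy ever consume memory, so there is no fixed
# bounding box to overflow.
CELL = 1.0  # hypothetical cell size in world units

def cell_of(x, y, z):
    """Map a world-space position to its integer grid-cell key."""
    return (int(x // CELL), int(y // CELL), int(z // CELL))

density = defaultdict(float)  # only occupied cells are stored
for p in [(0.2, 0.1, 0.0), (250.7, -3.4, 99.9), (-1e6, 0.0, 0.0)]:
    density[cell_of(*p)] += 1.0  # far-flung particles cost only their own cells

print(len(density))  # 3 occupied cells, no global grid allocation
```

Even the particle a million units away costs a single hash-map entry, which is the essential property a bounding-box-free simulator needs.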

Slipstream is stable and predictable, so an artist’s real-time preview of a fluid simulation element is the same as the final beauty pass. And because Slipstream is efficient in both memory and computation, studios can handle tremendous amounts of simulation without excessive investment in hardware and personnel.

Designing a Modern Agile Pipeline

The project’s compressed timeline -- together with the scope of research, development and production involved -- required an agile pipeline capable of tying together software, creative talent and hardware. Tan started by combining Exocortex’s simulator technology with Gradient Effects’ proprietary technology running in Autodesk Maya.

Under Tan’s direction, Exocortex modified its simulator technology to be fully deterministic: feeding the same inputs to the simulator would produce exactly the same results every time. This addressed two very important elements of the production pipeline. It allowed artists to design simulations in real time while ensuring all their details would be realized as previewed when the simulators were rerun for the final beauty pass. It also allowed a single simulation to be split across multiple machines, with the results automatically synchronized. To minimize the amount of data, camera-based culling and level-of-detail techniques were employed.

Many-core Distributed Simulations

Using one machine per simulation proved inadequate because of the unprecedented detail and scope required for each shot, combined with the need to provide a workable turnaround time and the desire to minimize compositing tricks.
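Why determinism enables both properties can be shown with a toy particle stepper (hypothetical code, not Gradient’s or Exocortex’s simulator; it assumes each particle draws from its own seeded random stream). Because a particle’s trajectory depends only on the global seed and its index, a run split across machines reassembles into exactly the single-machine result:

```python
import random

def simulate_slice(seed, start, count, steps=10):
    """Deterministically advance one slice of a particle system.

    Each particle gets its own seeded stream, so any machine that
    computes the same slice produces byte-identical results -- the
    property that lets previews match the final beauty pass and lets
    one simulation be split across nodes. Illustrative names only.
    """
    positions = []
    for i in range(start, start + count):
        rng = random.Random(seed * 1_000_003 + i)  # per-particle stream
        x = 0.0
        for _ in range(steps):
            x += rng.uniform(-1.0, 1.0)  # stand-in for one solver step
        positions.append(x)
    return positions

# One machine simulating all 100 particles...
full = simulate_slice(seed=42, start=0, count=100)
# ...matches two machines each simulating half, results concatenated.
split = simulate_slice(42, 0, 50) + simulate_slice(42, 50, 50)
assert full == split
```

The same mechanism guarantees the preview property: rerunning `simulate_slice` at any later time, on any node, reproduces the earlier run exactly.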

Because the simulations could be distributed across multiple machines, between 10 and 20 machines could be assigned to each simulation. This enabled an astounding 480 cores and 960 GB of memory to be simultaneously dedicated to a single fluid simulation.
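Those totals follow directly from the farm’s per-node specs (24 cores and 48 GB of RAM per machine, described below) at the top of the 10-to-20-machine range:

```python
# Worked numbers from the article: 20 nodes at 24 cores / 48 GB each.
machines = 20            # top of the 10-20 machines assigned per simulation
cores_per_machine = 24   # farm node spec
ram_gb_per_machine = 48  # farm node spec

print(machines * cores_per_machine)   # 480 cores per simulation
print(machines * ram_gb_per_machine)  # 960 GB of RAM per simulation
```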

The approach proved to be an effective time-saver with respect to simulation times, but the raw intensity of the computational power presented its own challenges.

A Storage System for High Compute Intensity

Gradient Effects’ simulation and rendering farm consisted of machines with 24 cores and 48 GB of RAM each. They churned through the simulation much faster than the simulation data could be transferred across the network to the storage servers.

At the same time, initially writing 1 GB per frame per computation node also created a bottleneck. To solve the problem, an innovative distributed storage system was created using the simulation nodes themselves.

The speed advantages of local storage greatly outweighed the minimal CPU cost of managing a distributed store across the simulation nodes themselves. Many-core processors already offered significant CPU power, a small portion of which could be assigned to managing the distributed store. Even dedicating a full core to this task would reduce a machine’s simulation performance by less than 5 percent -- significantly less than the cost of waiting for large data transfers to a storage server to complete.

To make the distributed store more robust, the team deployed a background data-mirroring process. Slicing simulations across nodes in this way allowed 300 million to 500 million elements per frame and a turnaround time of just a few hours.
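The shape of such a node-local store can be sketched as follows (an illustration of the pattern, not Gradient’s actual system -- all names are hypothetical): simulation cores hand finished frames to a queue, and one dedicated thread -- standing in for the single core set aside per node, roughly 1/24, or about 4 percent, of a 24-core machine -- writes them to fast local disk and mirrors them in the background:

```python
import os
import queue
import shutil
import tempfile
import threading

def start_local_writer(out_dir, mirror_dir):
    """Spawn a background thread that drains frames to local disk,
    then copies each file to a mirror location for robustness.
    Illustrative sketch only -- not Gradient's pipeline code."""
    q = queue.Queue()

    def drain():
        while True:
            item = q.get()
            if item is None:           # sentinel: flush complete, stop
                break
            frame_id, data = item
            path = os.path.join(out_dir, f"frame_{frame_id:04d}.bin")
            with open(path, "wb") as f:
                f.write(data)          # fast local write, no network wait
            shutil.copy(path, mirror_dir)  # background data mirroring
            q.task_done()

    t = threading.Thread(target=drain, daemon=True)
    t.start()
    return q, t

out_dir, mirror_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
q, t = start_local_writer(out_dir, mirror_dir)
for frame in range(3):
    q.put((frame, b"\x00" * 1024))  # stand-in for ~1 GB per frame per node
q.put(None)
t.join()
```

The simulation cores never block on the network: they enqueue a frame and return to computing, which is exactly the trade described above.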

“Distributing the simulation, combined with the speed of the simulation engine, allowed us to simulate hundreds of millions of particles, whereas in that same time other tools could simulate only thousands,” says Dave Rindner, senior effects artist. “The biggest scenes required, at most, a few minutes per frame for simulation with particle counts of half a billion. Smaller scenes were only seconds per frame.”

Check out Part 2 of our Exocortex coverage later this month.


Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.