'Poseidon': Making a Big CG Splash

Alain Bielik dives into the latest state-of-the-art fluid simulation advances and other CG modeling breakthroughs for Poseidon.

In this remake, Poseidon boasts a digital model ship and two state-of-the-art fluid simulation technologies. All Poseidon images courtesy of Warner Bros. Pictures.

When The Poseidon Adventure was released in 1972, it was a smash hit, and its worldwide success launched the era of disaster movies. The movie went on to win the Academy Award for best visual effects. The key scene of the luxury liner Poseidon being overturned by a giant wave was realized with a 21-foot-long (six-meter) model shot in a tank. Thirty-four years later, director Wolfgang Petersen had a much more sophisticated bag of tricks at his disposal to shoot the remake, Poseidon (released by Warner Bros. on May 12). The handcrafted miniature of 1972 is now a digital model, while the good old water tank has been replaced by not one, but two state-of-the-art fluid simulation technologies. Overall visual effects supervisor Boyd Shermis spread the effects workload among eight different vendors, including (in credits order) Industrial Light & Magic (ILM), The Moving Picture Co. (MPC), CIS Hollywood (which leveraged the AI-driven crowd software Massive to populate the ship with digital passengers and crew), Giant Killer Robots, Pixel Playground, Hydraulx and Lola Visual Effects.

Kim Libreri and his team at ILM, in charge of all the exterior shots of the fictitious 1,200-foot-long liner, created 140 shots for the film.

In charge of all the exterior shots of the fictitious 1,200-foot-long liner, including the capsize scene, ILM set up a team led by visual effects supervisor Kim Libreri and visual effects producer Jeff Olson. "We did about 140 shots," Libreri says. "It may not seem like a lot, but that includes the opening shot: a three-minute-long, almost entirely computer-generated shot in which the camera flies around the ship in broad daylight. The only live-action element is actor Josh Lucas when his character is seen jogging around the ship. This shot alone was one of the main reasons why we opted to create the ship in CG. In the scene, we go all the way from a wide angle to the point where a passenger fills 50% of the frame. With a miniature, we would have had to shoot many different scale pieces, and then go through all the effort of trying to stitch them together to make one continuous shot. Also, we wanted to create the impact of billions of gallons of water hitting the full length of the ship. Since we now have a brand new fluid dynamics technology, we just thought: let's have a go!"

A CG Puzzle of 6,500 Pieces

The ship model was built in Maya, and then set up and lit in Zeno, ILM's proprietary 3D package. With such a complex object, the CG unit faced the challenge of creating a highly detailed model that would still be perfectly manageable. To this end, lead digital artist Vincent Toscano developed an asset management system that allowed the team to manage each individual component of the ship in an efficient way. "Altogether, we built more than 6,500 individual pieces, but they were replicated many times over the ship with slight changes to the paint or to the material to create a sense of variation," Libreri explains. For example, only three different CG chairs were modeled, but the system instanced them and populated the decks with hundreds of them. "It allowed us to have an amazing amount of detail, without having to deal with crazy amounts of components."
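
To make the instancing idea concrete, here is a minimal Python sketch of that kind of system; the class and field names are invented for illustration and are not part of Toscano's actual tool:

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the instancing approach described above: thousands
# of placed objects are stored as lightweight references to a handful of
# source models, each with its own transform and a small material variation,
# instead of thousands of unique meshes.

@dataclass
class Instance:
    source: str              # one of the few real models
    position: tuple          # where this copy sits on the deck
    paint_variation: float   # slight per-copy tweak for visual variety

CHAIR_MODELS = ["chair_a", "chair_b", "chair_c"]   # only three built

deck_chairs = [
    Instance(source=random.choice(CHAIR_MODELS),
             position=(x * 1.5, 0.0, row * 2.0),
             paint_variation=random.uniform(0.9, 1.1))
    for row in range(10) for x in range(40)        # 400 chairs, 3 meshes
]
print(len(deck_chairs), "instances from", len(CHAIR_MODELS), "models")
```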

The Poseidon model was lit with two basic lighting set-ups, both using the same technique. The daytime shots were mainly lit by ILM's high dynamic range lighting system, while the more subtle lighting effects were created with Cinema Radiosity, a proprietary global illumination approach developed by CG supervisor Philippe Rebours in mental ray. For the nighttime shots, the global illumination integrated more than 1,000 individual lights illuminating the walls, the decks, the chairs, etc. Digital effects supervisor Patrick Conran oversaw the rendering pipeline.

The Poseidon model was lit with two basic lighting set-ups. The daytime shots were mainly lit by ILM's high dynamic range lighting set-up, and global illumination integrated more than 1,000 individual lights for the nighttime shots.

Once the ship was lit and textured, it was populated by about 100 digital passengers. Individual animations were derived from a motion capture session done at House of Moves. In the challenging opening shot, the trickiest aspect was the integration of Lucas in the scene, both as a digital double in wide angles, and as a live-action element in close-ups. The shot was first previsualized in 3D, and the camera move exported into a massive 400-foot square Cablecam system that was used to shoot Lucas.

ILM used a new fluid dynamics engine developed in cooperation with Stanford University to create the ocean, including waves, turbulence and bubbles.

Multi-processor Fluid Simulations

As for the ocean, ILM used a new fluid dynamics engine that had been developed in cooperation with Stanford University. The Physbam simulation system had first been used for the chrome T-X in Terminator 3, and then for the ship sequence in Harry Potter and the Goblet of Fire. On Poseidon, ILM was able to employ a new generation of Physbam. Cooperating on the project were Stanford's associate professor Ron Fedkiw and ILM's senior R&D engineer Nick Rasmussen. The ocean was entirely created with this fluid dynamics engine, including waves, turbulence and bubbles. "We used a hybrid approach to simulate fluids," says associate visual effects supervisor Mohen Leo, who oversaw the water simulation and rendering on the movie. "The volume of the fluid is simulated on a regular three-dimensional grid. In addition, particles surrounding the surface of the water are advected with the fluid flow and help improve mass conservation. In highly dynamic areas where the resolution of the grid can't resolve the surface anymore, these particles are removed from the fluid simulation. When a simulation is run at a sufficient resolution, the removed particles are ejected in areas that visually match those where, in reality, the water's surface tension breaks, so they can be used to represent spray and bubbles."
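
As an illustration of the hybrid scheme Leo describes, here is a heavily simplified 2D Python sketch; it is not Physbam itself, and the grid spacing, band width and velocity field are all invented for the example. Marker particles ride the flow near the surface, and any particle that drifts beyond the band the grid can resolve is pulled out of the liquid to serve as spray or bubbles:

```python
import numpy as np

# phi: signed-distance field to the water surface, sampled on a grid.
# particles: marker points seeded near the interface, advected with the flow.

DX = 1.0 / 64           # grid spacing (assumed)
BAND = 3.0 * DX         # particles are expected to stay within this band

def advect(positions, velocity_fn, dt):
    """Move particles through the velocity field (simple forward Euler)."""
    return positions + dt * velocity_fn(positions)

def step(positions, velocity_fn, phi_fn, dt):
    positions = advect(positions, velocity_fn, dt)
    dist = phi_fn(positions)            # distance to the liquid surface
    # Particles that drift farther from the surface than the grid can
    # resolve are removed from the liquid and reused as spray/bubbles.
    escaped = np.abs(dist) > BAND
    return positions[~escaped], positions[escaped]

# Toy usage: uniform flow over a flat surface at y = 0.
velocity = lambda p: np.tile(np.array([1.0, 0.2]), (len(p), 1))
phi = lambda p: p[:, 1]                 # height above the flat surface
pts = np.random.rand(1000, 2) * np.array([1.0, BAND])
liquid, spray = step(pts, velocity, phi, dt=0.05)
```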

The main difference from the Goblet of Fire version of the software was that it was now possible to run multi-processor simulations on eight or 16 processors. This dramatically improved turn-around and allowed the team to work at much higher resolutions. "At low resolutions, the fluid motion appeared very viscous," Leo notes. "Only at fairly high resolutions did the fluid begin to resemble water. Unfortunately, high resolutions meant longer simulation times and higher memory demands. Due to the change in viscosity based on resolution, low-res simulations were only of limited use for faster tests. Until Stanford implemented multi-processor simulations, we struggled with turn-around (simulations could take four or five days to finish) as well as memory limitations. With the new distributed calculation, each processor only deals with a section of the full grid, which means that memory requirements decrease as well."
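
The memory argument is easy to see in a toy example. The sketch below is hypothetical, not ILM's code: it splits a grid into slabs along one axis, each padded with a one-cell "ghost" border for exchanging data with its neighbors, and shows how the per-worker footprint shrinks as workers are added:

```python
import numpy as np

def split_into_slabs(grid, n_workers):
    """Split a 3D grid along z into slabs with one ghost cell per side."""
    slabs = np.array_split(np.arange(grid.shape[2]), n_workers)
    out = []
    for idx in slabs:
        lo = max(idx[0] - 1, 0)               # ghost layer below
        hi = min(idx[-1] + 2, grid.shape[2])  # ghost layer above
        out.append(grid[:, :, lo:hi].copy())
    return out

grid = np.zeros((128, 128, 128), dtype=np.float32)  # ~8 MB full grid
for n in (1, 8, 16):
    per_worker = split_into_slabs(grid, n)[0].nbytes / 2**20
    print(f"{n:2d} workers -> ~{per_worker:.1f} MB per worker")
```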

Full integration with Zeno's particle solver made it possible to set up a semi-automated system, designed largely by CG supervisor Willi Geiger, in which the artist could create the water surface, spray, foam, bubbles and floating debris, all based on one core fluid simulation.
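
One way to picture that semi-automated breakdown: each particle coming out of the core simulation is routed to a render element based on simple criteria. The categories and thresholds below are invented for illustration, not Geiger's actual rules:

```python
from dataclasses import dataclass

# Hypothetical sketch: deriving spray, foam and bubble elements from the
# particles of a single core fluid simulation.

@dataclass
class Particle:
    depth: float   # signed depth below the water surface (m), > 0 = submerged
    speed: float   # particle speed (m/s)

def classify(p: Particle) -> str:
    if p.depth > 0.0:                       # under the surface
        return "bubble" if p.speed > 2.0 else "submerged"
    return "spray" if p.speed > 4.0 else "foam"  # airborne vs. riding surface

print(classify(Particle(depth=-0.1, speed=6.0)))  # -> spray
```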

Digital artists had to work out how to convey the difference between a 50-foot and a 200-foot wave. For shots where the whole wave is visible, ILM took a traditional approach, sculpting geometry to represent the main body of the wave.

Taming a 200-foot CG Wave

One of the key aspects of the capsize sequence was how to convey the difference between a 50-foot wave and a 200-foot wave on screen. Initially, ILM's team tried to run full simulations for the 200-foot wave approaching the ship, but this soon proved to be impractical. Shermis prevised the sequence of shots leading up to the impact of the wave with a strong focus on dramatic effect, not realism. Thus, many of the shots required the wave to have a shape and motion that defied physics. Since the fluid solver was designed to create physically accurate behavior, forcing it to match a somewhat unrealistic previs turned out to be too difficult. "For these shots where the whole wave is visible, we employed a more traditional approach by sculpting geometry to represent the main body of the wave," Leo remarks. "Fluid simulations were only run for the areas where this main body had to interact with the ship. On the other hand, for the final sinking of the ship, the water was not required to exactly match predesigned motion, but only to react to the ship's motion in a realistic manner. So, many of these shots were done using full simulations for all the water surrounding the ship."
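
One simple way to picture the mix of sculpted and simulated water Leo describes is a spatial crossfade: the art-directed surface everywhere, with simulated detail blended in only near the ship. The sketch below is hypothetical; the falloff and wave shapes are invented:

```python
import numpy as np

def blended_height(x, sculpted, simulated, center, radius):
    """Crossfade from the sculpted to the simulated surface near the ship."""
    w = np.clip(1.0 - np.abs(x - center) / radius, 0.0, 1.0)  # 1 near ship
    return (1.0 - w) * sculpted(x) + w * simulated(x)

x = np.linspace(0.0, 200.0, 512)                # meters along the wave
sculpted = lambda x: 60.0 * np.exp(-((x - 100.0) / 40.0) ** 2)  # hero shape
simulated = lambda x: sculpted(x) + np.sin(x * 2.0)             # sim detail
h = blended_height(x, sculpted, simulated, center=100.0, radius=25.0)
```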

The capsize shots required dozens of render passes, all quite complex: full ray-tracing and global illumination on the ship, multiple scattering on tens of millions of particles, etc. In order to maximize flexibility in the composite, the renders were broken up into groups of lights (moonlight, different groups of ship lights, ambient light, etc.), as well as different aspects of lighting (diffuse, specular, reflection, global illumination, etc.). This increased the number of render passes, but ultimately allowed compositors to do the final balancing of the lighting.
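
The recombination itself is simple arithmetic, which is what makes it so flexible in the composite. A toy sketch, with invented pass names rather than ILM's actual layer breakdown, might look like this:

```python
import numpy as np

# Per-light-group and per-component renders are summed with adjustable
# gains, so the final lighting balance can be changed in the composite
# without re-rendering anything.

h, w = 4, 4  # tiny stand-in images
passes = {
    ("moonlight", "diffuse"):   np.full((h, w, 3), 0.10),
    ("moonlight", "specular"):  np.full((h, w, 3), 0.02),
    ("ship_lights", "diffuse"): np.full((h, w, 3), 0.30),
    ("ambient", "gi"):          np.full((h, w, 3), 0.05),
}
gains = {"moonlight": 1.2, "ship_lights": 0.8, "ambient": 1.0}

final = sum(gains[light] * img for (light, _), img in passes.items())
```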

CG supervisors Henry Preston, Lindy De Quattro, Joakim Arnesson and Kevin Sprout oversaw other aspects of the sequence. Finally, the shots were put together in Shake with compositing supervisors Patrick Brennan and Mark Hopkins overseeing the effort.

Chas Jarrett and his team at MPC created all the mayhem inside the ship. Their work concentrated on extending partial sets and creating full CG environments that included digital props, water, fire and characters.

Extending the Sets

Unlike the original movie, the overturning of the ship is spread out over a four-minute sequence encompassing dozens of exterior and interior shots. All the mayhem occurring inside the ship was created by MPC. The London-based team included visual effects supervisor Chas Jarrett, compositing supervisor Adrian De Wet, CG supervisor Steve Moncur and lead effects technical director Ciaran Devine. "We worked on over 200 shots, but about 80 of these were deleted from the final cut for editorial reasons," Jarrett recounts. "Our work covered two types of shots: extending three partial sets of the inside of the ship, and creating full CG environments that included digital props, water, fire and characters."

MPC first contributed set extensions to several scenes preceding the capsize sequence. The most complex set was the eight-story lobby. It was built on a soundstage as a two-story, pie-shaped set covering 120° of the complete circle. MPC completed the set by adding matching 3D geometries (derived from LIDAR data) and texturing them with photographs of the set elements. Built in Maya and lit with global illumination in mental ray, the CG stories were populated with motion-captured CG characters. For the later shots of the grand staircase collapsing, MPC wrote a shattering algorithm to pre-break the mesh. Then, PAPI, the studio's proprietary rigid body dynamics engine, was used to constrain everything back together for the simulation. All the shots were composited in Shake.
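
The pre-break-and-glue approach generalizes well beyond PAPI, which is proprietary. Here is a generic sketch of the idea, with an invented API: the mesh is shattered into pieces up front, the pieces are bound together with breakable constraints, and each constraint snaps when the simulated force on it exceeds its strength:

```python
class Constraint:
    """Glue between two pre-broken pieces that snaps under enough force."""
    def __init__(self, piece_a, piece_b, strength):
        self.pieces = (piece_a, piece_b)
        self.strength = strength
        self.broken = False

    def apply(self, force):
        if not self.broken and force > self.strength:
            self.broken = True   # pieces now simulate as free rigid bodies

# Pieces sharing a face in the pre-broken staircase are glued together.
constraints = [Constraint("step_01", "step_02", strength=500.0),
               Constraint("step_02", "rail_07", strength=150.0)]
for c in constraints:
    c.apply(force=300.0)        # impact: the weaker glue breaks first
print([c.broken for c in constraints])  # -> [False, True]
```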

For the capsize sequence, a gimbal rig was employed to rotate a small section of the ballroom set in front of a blue screen, which provided a live-action foreground. "We first extended the set with our CG ballroom environment and then set out to populate it with literally thousands of CG elements," Jarrett says. "We had 200 CG people, plus all the furniture that you can imagine: dishes, food, poker chips, playing cards, etc. It was incredibly complicated."

MPC completed the eight-story lobby set by adding matching 3D geometries and texturing them with photographs of the set elements. The CG stories were built in Maya and lit with global illumination in mental ray.

To animate the CG passengers, MPC motion-captured three categories of movements: people partying, people rolling and falling and, finally, injured people getting back on their feet or helping each other. Each category ended up with a library of about 300 different movements. "Some motions were used as approved pieces of animation, but most of the time, they were blended with other motions in ALICE, our crowd animation system," Jarrett explains. "For example, we would capture somebody walking, and then, separately, somebody getting knocked over, and finally, somebody standing up again. The system allowed us to throw an object at the walking character, at which point it would switch to the falling character, and then to the character standing up, and finally go back to the walk cycle, all in one continuous move. Whenever the characters had to take a hard fall, on a chair for instance, or to roll around with other people, we used PAPI to create the dynamics of the animation. It was then blended with the motion-captured movements."
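
The clip-switching Jarrett describes is essentially a state machine over the mocap library. Below is a toy Python sketch; ALICE is proprietary, and the states and events here are invented:

```python
# A character cycles through motion-capture clips, switching on events
# and returning to the walk cycle in one continuous chain.

TRANSITIONS = {
    ("walk", "hit_by_object"): "fall",
    ("fall", "clip_finished"): "stand_up",
    ("stand_up", "clip_finished"): "walk",
}

def next_clip(current, event):
    """Pick the next mocap clip; stay on the current one by default."""
    return TRANSITIONS.get((current, event), current)

clip = "walk"
for event in ["hit_by_object", "clip_finished", "clip_finished"]:
    clip = next_clip(clip, event)
    print(clip)   # fall -> stand_up -> walk
```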

Switching to a mental ray Pipeline

Given the huge number of moving objects in any given frame, the hero shots were designed in layers. First, the digital set was added to the live-action plate or created as a full 3D environment. Second, the biggest objects, like chairs and tables, were simulated, all rigged to break apart under a randomized level of force. Third, the people were added and the whole simulation was re-run, so that the characters and the furniture would interact with each other. At that point, a multilayered cloth simulation was run in Syflex for each character. Fourth, the smallest objects, like forks, glasses or playing cards, were integrated into the scene and the simulation was re-run once more. Finally, one last simulation was run with confetti and balloons added in.
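
The key property of this layered scheme is that each pass treats everything simulated before it as fixed collision geometry. A schematic Python sketch of that ordering, where `simulate` is a stand-in for the actual solvers described in this section:

```python
# Each pass re-runs the dynamics with the previously baked layers acting
# as colliders, so later (smaller, lighter) elements react to everything
# simulated before them.

def simulate(layer, colliders):
    print(f"simulating {layer!r} against {colliders}")
    return layer   # the baked result becomes a collider for later passes

layers = ["set geometry", "furniture", "people + cloth",
          "small props", "confetti + balloons"]
baked = []
for layer in layers:
    baked.append(simulate(layer, colliders=list(baked)))
```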

To create the collapse of the grand staircase, MPC wrote a shattering algorithm to pre-break the mesh and then used a proprietary rigid body dynamics engine to constrain everything back together for the simulation. All shots were composited in Shake.

The simulations ran in ALICE, with PAPI creating the rigid body dynamics animations, all within Maya. The images were then rendered, in many passes, in mental ray, which marked MPC's first use of the software in a feature film. Adapting to this major switch in the pipeline added a lot of pressure to the team, but the effort was deemed necessary to exploit the full power of a new fluid simulation engine. "After looking into RealFlow and studying the option of writing our own application, we decided to license Flowline, a tool developed by Stephan Trojansky, head of R&D at Munich-based Scanline VFX," Jarrett recalls. "What they showed us was far beyond anything that we had ever seen on the market. The basic concept of how Flowline does its simulation and the interface are unique. I was especially impressed by its ability to generate CG fire that looked absolutely real. Most of all, Flowline was able to create water, oil, fire and smoke, and have them all interact with each other in one digital simulation without the need for any practical element. This was the ideal tool for Poseidon, as we had a scene in which oil drops on water and catches fire, which produces smoke: all of this was entirely created in Flowline. It can even generate dust, a function that we used for the collapsing staircase simulation." The relationship with Scanline went beyond mere software licensing, as Trojansky became digital water and fire supervisor for MPC.
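
To illustrate the appeal of a single coupled simulation, here is a toy sketch, nothing like Flowline's actual solver, of the oil-on-water-catches-fire chain Jarrett mentions. One update step lets the phases feed each other, which is the point of keeping them in the same simulation:

```python
# Each cell tracks several fluid phases; burning consumes oil and emits
# smoke into the same simulation, while oil separates out above water.

def step(cell, dt):
    # Buoyancy: oil (lighter) sits above the water surface.
    cell["oil_height"] = cell["water_height"] + cell["oil_volume"]
    if cell["burning"] and cell["oil_volume"] > 0.0:
        burned = min(cell["oil_volume"], 0.1 * dt)   # fuel consumed
        cell["oil_volume"] -= burned
        cell["smoke"] += 5.0 * burned                # combustion product
    return cell

cell = {"water_height": 1.0, "oil_volume": 0.5, "oil_height": 0.0,
        "burning": True, "smoke": 0.0}
for _ in range(10):
    cell = step(cell, dt=0.1)
print(cell["oil_volume"], cell["smoke"])   # fuel down, smoke up
```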

One particularity of Flowline is its need for a powerful ray tracer, such as mental ray's, to function with all its features. "The calculations in the rendering are mostly based on ray tracing, and many of the surfaces are actually implicit surfaces, not real surfaces. It required a real commitment from everyone in the CG department, as we had a lot of staff who were new to mental ray," Jarrett concludes. "But I believe the results speak for themselves."
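
"Implicit surfaces" here means the surface is defined as the zero set of a field function rather than as polygons, so a renderer finds it by marching rays through the field. A classic, minimal example of that idea is sphere tracing a signed distance function; this is a generic illustration, not Flowline or mental ray internals:

```python
import numpy as np

def sdf(p):
    return np.linalg.norm(p) - 1.0            # distance to a unit sphere

def sphere_trace(origin, direction, max_steps=64, eps=1e-4):
    """March a ray until it reaches the implicit surface (sdf == 0)."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t                          # hit the implicit surface
        t += d                                # safe step: nothing is closer
    return None                               # ray missed

hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(hit)  # ~2.0: distance from the ray origin to the sphere's front face
```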

One movie, two groundbreaking fluid simulation engines, both intercut with each other in the final cut. One thing is sure: on release date, the people at ILM and MPC will be the first in line to watch and analyze their peers' competing water effects!

Alain Bielik is the founder and editor of the renowned effects magazine S.F.X, published in France since 1991. He also contributes to various French publications and occasionally to Cinéfex. Last year, he organized a major special effects exhibition at the Musée International de la Miniature in Lyon, France.