Alain Bielik talks with visual effects supervisor Raymond Gieringer on how vfx were needed to keep political thriller The Sentinel in its White House setting. Featuring QuickTime vfx progression clips.
Shooting a movie that uses the White House as a set piece was never an easy task, but it has become almost impossible since 9/11. Security measures have obliged filmmakers to rely ever more heavily on visual effects to establish the action of feature films and television series around the landmark location. Such was the case for The Sentinel (released April 21 by Twentieth Century Fox), in which a disgraced veteran of the Secret Service (Michael Douglas) tries to foil a conspiracy to assassinate the President.
To create the White House, and many other visual effects, director Clark Johnson turned to Toronto-based Intelligent Creatures. The company had just produced a series of invisible effects shots for Mr. & Mrs. Smith and for The Matador, precisely the type of work that was required on The Sentinel. President and visual effects supervisor Raymond Gieringer talks to VFXWorld about his company's work on the movie.
Alain Bielik: When I started preparing this article, the studio's first answer to my inquiry was that they didn't think the movie had any visual effects at all, which is obviously a nice compliment… How many effects shots are there in the final cut?
Raymond Gieringer: We started at about 60 shots, and by the end of post-production, with additional paint fixes, monitor replacements, etc., it grew to a total of 116 shots. The team that created the shots included Krista Allain as project manager, Mohammad Ghorbankarimi as associate vfx supervisor, Mai-Ling Lee as lead 3D artist, Jason Maher as 3D artist and Clancy Silver and Virginie Lamotte as 2D artists.
AB: What were the major challenges of the project?
RG: The first act of the film establishes that we are in Washington and sets up the White House as the backdrop for the rest of the movie. Our creative challenge was to build the CG White House/Oval Office content for various scenes as the movie opens, in order to help establish the genre of the film. The scenes that involved a CG White House were fundamentally important in terms of scope of work.
AB: How was the digital White House created?
RG: We surveyed the area around the White House in detail and shot hundreds of gigabytes of high-resolution digital stills from every vantage point available. We used a Canon EOS-1Ds Mark II, as we found that its 5K x 3K raw images are excellent for textures and digital matte painting, etc. We also managed to find a great deal of historical reference on the White House itself. The 3D model was built, textured, and finished in Maya. Most of the CG components of the shots were rendered using mental ray. RenderMan came into play for a great deal of the foliage that was created for a shot in which the camera flies up to the roof of the building. It was of great value for rendering speed with motion blur for the tens of thousands of leaves on each tree.
AB: Did the lighting and rendering of the CG White House present any particular challenge?
RG: Indeed. During the process, we learned that it is quite challenging to light an all-white building. The surface of the White House may appear white, but in fact there are many subtle lighting variations across it, and it also has a tendency to take on subtle colors from its surroundings. To match this, we created additional bounce lighting and color passes that the compositors could adjust accordingly within Digital Fusion.
AB: The other major vfx sequence of the movie is the destruction of the Presidential helicopter. How was it realized?
RG: We opted for a full 3D approach. A complete digital photo survey of the practically dressed helicopter served as the foundation for our CG helicopter. We then shot both ground and air-to-air background plates that we would animate our CG helicopter into. We initially did various rigid-body simulations within Maya to break apart the helicopter. They worked well, and in the end were a good point of departure for the sequence. However, we eventually baked the animation into the shards to give the animators the flexibility to adjust the motion of the pieces as required. The director had a precise vision for the sequence, and we needed total control of the placement of the fragments in any given shot. We also tested a plug-in called Blast Code that has the ability to shatter objects into thousands of pieces. The idea was to use it to destroy sections of the helicopter fuselage. In the end though, we just relied on Maya's dynamics module.
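The idea of "baking" a simulation into the shards can be sketched in a few lines: sample each simulated transform at every frame and store explicit per-frame keys, so animators can then edit individual pieces without re-running the solver. The `simulate` stand-in below is hypothetical; in production this was Maya's rigid-body dynamics, not Python.

```python
# Sketch of baking a rigid-body simulation into per-shard keyframes.
# simulate() is a hypothetical stand-in for querying the solver.

def simulate(shard_id, frame):
    """Stand-in solver query: each shard falls at constant velocity."""
    return {"tx": shard_id * 1.0, "ty": -0.5 * frame, "tz": 0.0}

def bake(shard_ids, start, end):
    """Copy the simulated transform of every shard at every frame into keys."""
    keys = {}
    for sid in shard_ids:
        keys[sid] = {f: simulate(sid, f) for f in range(start, end + 1)}
    return keys

baked = bake([0, 1, 2], start=1, end=48)
# Once baked, a single key can be adjusted by hand without touching the solver:
baked[2][10]["ty"] += 3.0
```

The payoff is exactly what Gieringer describes: total control over any fragment in any shot, at the cost of carrying explicit keys for every piece.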
A CG missile, a smoke trail, and pyro and debris elements were also created for the sequence. The rocket trail was created using Maya's fluid dynamics system. The debris elements were modeled and textured in Maya, and then run through a rigid-body simulation with dynamic forces applied to simulate the effects of the explosion. We also filmed practical pyro explosions at night to place into our shots. All the elements were then rendered in mental ray and composited in Digital Fusion. The final scene has a fully realized 3D helicopter disintegrating before our eyes.
AB: You also created several crowd duplication shots. What technology did you use for this sequence?
RG: The sequence takes place at night at the Toronto City Hall during the G8 Summit, surrounded by angry protestors. There were a couple of hundred extras on hand, but this was not sufficient to produce the effect that they wanted. So we were charged with enhancing the crowd to give the impression of thousands of people. The initial plan was to shoot the extras all over the place and tile them together in post, but on the day, we simply did not have the time to get the necessary crowd passes. So, using Maya, we ended up creating 3D extras, along with various signage, that could be cloned and placed wherever necessary to fill the scene. We employed Digital Fusion for rotoscoping, and boujou or 3DEqualizer for match-moving. We had about a dozen different CG models in a grouping with signs and a looped animation. Four master groupings were created, from which the animation cycles and colors were randomized to produce additional CG elements. The CG crowd elements were then composited into the scene using Digital Fusion. The net result is a series of shots where the digital extras blend with the real ones in a seamless manner.
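The grouping-and-randomize approach can be sketched as follows: a handful of hand-built master groupings are cloned, and each clone gets a randomized animation-cycle offset and color so the repetition is not obvious. All names, fields, and values below are hypothetical; the production work was done in Maya, not Python.

```python
# Sketch of crowd duplication from a few master groupings.
# Grouping names, cycle length, and tint range are illustrative only.
import random

MASTER_GROUPINGS = ["groupA", "groupB", "groupC", "groupD"]  # four masters
CYCLE_LENGTH = 120  # frames in the looped animation (assumed)

def make_clone(rng):
    """Instance one master grouping with a random cycle offset and tint."""
    return {
        "source": rng.choice(MASTER_GROUPINGS),
        "cycle_offset": rng.randrange(CYCLE_LENGTH),  # desynchronize the loops
        "tint": tuple(round(rng.uniform(0.6, 1.0), 2) for _ in range(3)),
    }

rng = random.Random(42)  # seeded for repeatable layout
crowd = [make_clone(rng) for _ in range(1000)]
```

Because every clone differs in phase and color, a few source groupings can plausibly read as thousands of distinct extras.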
AB: In this whole project, did you tackle a shot that proved particularly challenging?
RG: Absolutely. We had to create an impossible camera move from the Northwest Gate to the top of the White House roof for a 360-degree view of Washington. We started by previsualizing the shot to check lenses and choreograph how the move could work, using the blueprints of the practical set pieces to ensure accuracy. In terms of the live-action component, we shot three different plates:
A crane shot of Michael Douglas walking from the Northwest Gate up the White House driveway; this was a set piece built by the art department.
A crane to Steadicam shot of a 270-degree rotation around the White House roof against greenscreen; another set piece created by the art department.
A Steadicam shot of the final 90-degree rotation around the White House roof against greenscreen, captured on the same set piece.
Our job was to then create a CG White House environment to stitch the first and second plates together, and then to digitally blend between the final two plates to create the full 360-degree move.
AB: How did you ensure on set that the three camera moves would eventually match?
RG: The previsualization was used as our frame of reference to make sure that we were shooting what we would later need to make the shot work. We also spent a great deal of time laying tracking markers out for the various plate shots. Ultimately, to make the transitions work, we relied heavily on our ability to track the live-action plates and to create virtual camera moves that lined up our 3D intermediate environments. First, using boujou, our artists tracked the live-action plates, and then imported this data into Maya to fine-tune our virtual camera move. The White House as a whole was textured and lit to ensure continuity. However, when it came to the compositing, we needed to have control over the various sections of the building, so we could minutely tweak the lighting, bounce lighting, color bleeding, reflections, dirt passes, etc. In the end, the White House comprised between 50 and 60 different sections, all of which had multiple layers rendered: spec, bump, diffuse, etc.
Apart from the challenge of the White House itself, there was also a great deal of foliage (grass, trees, plants, shrubs) that had to be digitally built for the scene. For some of it, matte paintings were created and projected onto 3D geometry. The trees and some of the grass sections were made from L-systems, a formal grammar used to model the growth of plants. We had the ability to run dynamic forces such as wind through our trees and plants, but we soon realized that, based on the speed with which the camera was moving through the shot, we were not able to perceive these subtle movements.
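An L-system is just a set of string-rewrite rules applied repeatedly to a starting axiom; the resulting string is then interpreted as drawing instructions (F = draw forward, + and - = turn, [ and ] = branch). A minimal sketch, using a classic branching rule rather than anything from the production setup:

```python
# Minimal L-system: rewrite rules applied iteratively to grow a
# plant-like string. The rule below is a textbook example, not the
# production grammar used on the film.

def lsystem(axiom, rules, iterations):
    """Apply the rewrite rules to the axiom the given number of times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic branching rule often used for simple shrubs.
rules = {"F": "F[+F]F[-F]F"}
print(lsystem("F", rules, 2))
```

Each iteration multiplies the detail, which is why a few rules can generate the tens of thousands of leaves per tree mentioned earlier.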
AB: Since you never had the opportunity to shoot plates or stills from the actual White House rooftop, how did you manage to recreate the panoramic view from that precise location?
RG: We were cleared to access hotel roofs and patios that were located to the North, East and West of the White House. From these vantage points, and with the high-resolution digital stills we took with the Mark 2, we were able to get all of the pieces we needed for our panorama. It allowed us to create, using Photoshop, a multi-layered digital matte painting of the Washington DC skyline. Our master file was a 10k, 16-bit image of approximately 600MB. We eventually split it up into multiple layers to be placed at different distances from the camera. This permitted us to achieve the proper parallax as we moved across the image. In the end, we had well over two hundred CG elements to incorporate into the scene. But all the viewer will see is one seamless camera move from the front gate, over the grounds and onto the roof of the White house for a 360-degree panoramic view of Washington, DC.
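The reason splitting the painting into layers at different distances produces parallax is simple pinhole-camera geometry: a lateral camera move of t units shifts a layer at depth z by roughly f·t/z pixels on screen, so nearer layers slide more than distant ones. A sketch with purely illustrative numbers (focal length and depths are assumptions, not production values):

```python
# Why layered matte paintings parallax correctly: on-screen shift is
# inversely proportional to layer depth. Focal length and depths here
# are illustrative, not values from the film.

def screen_shift(focal_px, camera_move, depth):
    """Horizontal on-screen shift (pixels) of a layer at the given depth."""
    return focal_px * camera_move / depth

focal = 2000.0  # focal length in pixels (assumed)
move = 5.0      # lateral camera translation in scene units (assumed)
for depth in (100.0, 400.0, 1600.0):  # near, mid, far layers
    print(depth, screen_shift(focal, move, depth))
```

A single flat card at one depth would give every element the same shift and immediately read as a painting; the layered split restores the depth cue.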
Alain Bielik is the founder and editor of renowned effects magazine S.F.X, published in France since 1991. He also contributes to various French publications and occasionally to Cinéfex. Last year, he organized a major special effects exhibition at the Musée International de la Miniature in Lyon, France.