The Visual Effects of Wes Anderson’s ‘Isle of Dogs’

Senior VFX supervisor Tim Ledbury details previs, production and post for Anderson’s second stop-motion animated feature.

‘Isle of Dogs’ directed by Wes Anderson. Images courtesy of Fox Searchlight.

Wes Anderson set his stop-motion film Isle of Dogs in Japan, where, supposedly for health reasons, Mayor Kobayashi (voiced by Kunichi Nomura) has exiled all dogs to an island full of garbage, including his young ward Atari’s (Koyu Rankin) guard dog, Spots (Liev Schreiber). Produced by American Empirical Pictures and distributed by Fox Searchlight Pictures, the film, which went into wide release on April 13, has achieved a 91 percent aggregate approval rating on the website Rotten Tomatoes, where critics praise the film’s creativity, imagination, and wit. Isle of Dogs has already received two awards: a Silver Berlin Bear at the Berlin International Film Festival, and the Audience Award at SXSW.

With few exceptions, all the characters and sets are practical puppets and hand-made elements shot in camera, often against greenscreen, and composited together. Thus, every shot in the film had visual effects.

“It was an extension of what we had done for Fantastic Mr. Fox,” says senior visual effects supervisor Tim Ledbury, “but we pushed it further in terms of the number of elements and shot composition, and in layout. We shot miniatures of backgrounds and background puppets. We would create backgrounds from multiple elements or composite in miniature backgrounds. All the skies were comp’d in -- we didn’t have any backdrop skies on sets.”

AWN: How did you begin?

Tim Ledbury: We had around 35 artists at 3 Mills Studios in East London, essentially the same group of people who had worked on Corpse Bride, Frankenweenie and Fantastic Mr. Fox.

Wes [Anderson] had been working for a year on a storyboard/animatic, so we largely had that when I started. When I first saw it, I did a lot of head scratching. Then, we broke down all the shots and worked out what we would shoot in-camera.

In a lot of ways the animatic didn’t change in terms of structure, layout, and shots. It was the guide for production design, cinematography, and effects -- we tried to shoot what we saw in the animatic. But, to get to the point where Wes could launch a shot, I had to show him what it would be like.

AWN: You did previs?

TL: Before you start animating, you have to decide the shot and the camera moves. If you can’t shoot it all in-camera, the next best thing is to shoot as many real elements as possible. A lot of my time was spent doing mockups and tweaking backgrounds so Wes could make framing choices before he said “go” to the animator.

So, I did previs, mockups, and painted elements that would be built later -- thousands and thousands of mockups to send Wes options for how he might want a scene or shot to look. Especially if we would be using greenscreen; he’d want to look at the background. I’d talk with the director of photography [Tristan Oliver] and decide with him what lighting passes to shoot, then mockup what a final shot would look like.

AWN: How did you create the previs and mockups?

TL: We designed and conceived the previs in CG, working in [Autodesk] Maya, and I painted in [Adobe] Photoshop. I’d paint elements or use test elements.

For example, for Trash Island, the model makers had done test elements in pre-production. I had a turntable unit set up so I could shoot each element -- a hill or a prop -- under multiple lighting conditions and angles. So, we had a huge library of background elements. As the shoot progressed, we could offer those elements as a temporary solution for Wes’s previs, or use them as final elements in the film. One of our biggest tasks was tracking and managing all those elements.

Once they had shot a scene, the designers and model makers would make any final elements based on the mockups, and in post-production, we’d base our first comps on the mockups.

AWN: How many sets were there?

TL: On our previous films, we had around 80 sets. On this one, we had three times that or more. I’m not entirely sure what the final number was, but every time we had a new set, we had to set dress it and set it up for the animation. It was a huge task. We didn’t have any more money or time than we had for Mr. Fox. So, we had to use all our tricks to achieve the density Wes wanted.

AWN: What sort of tricks did you use?

TL: We didn’t invent any new tricks, but we came up with a lot of efficiencies to shoot that amount of footage. For instance, we used greenscreen checker-boarding; that is, sliding greenscreens into the set. We used forced perspective when we could to minimize the need to composite shots together. We hid rigs. Anything we could to speed up the shoot and save on the heavy visual effects work later. That’s the critical point with these films. It’s a big job getting to the point where the animator is alone shooting the shot.

AWN: Did you create any CG puppets?

TL: Wes is quite anti-CG. We had quite a bit of CG in Fox, and I don’t think he was fully comfortable with it. Not because he hates CG, but he feels it dictates a look. So, he would rather have something built. The puppets you see are all real puppets. Even most of the audience in the municipal dome. We shot multiple characters and composited them together. In the very, very far distance there is some CG, but only because we needed that extra little bit. I think if you’re going to make a puppet movie, it’s important to shoot as many puppets as you can. Otherwise, what’s the point?

AWN: So, you didn’t use 3D printing for the puppets?

TL: The puppets were hand sculpted and painted. That was a decision of the puppet department and Wes. Obviously, you can get tighter tolerances with 3D printing, but the idea was to get a handmade look. It was an ethos, not a practicality.

One difference from previous films was the system we used to animate the puppet faces for the humans. On previous films we used mechanical heads. But these puppets were smaller; too small to have a seam line across the eyes. So, we had mouth plugs to do different mouth shapes, and had to do a lot of work blending the seam lines around the mouth to make it look convincing. It can look strange to have an animated mouth and nothing else happening on the face. We had to move the faces slightly so there would be a little flicker when they talked. With Tracy [Greta Gerwig], we had to warp her face to move the freckles slightly.

We did do a lot of 3D printing for the film though, mostly for props. Bits of aircraft. Dog cages. Some of the statues. Complicated mechanical stuff. A lot of elements were 3D printed and cleaned in the art department. Also, if they sculpted a large puppet head, we would scan it, scale it down in the computer, adapt it if needed, then print it out and use that as a basis for a mold to make a new puppet face.

AWN: Did you use CG for backgrounds or elements?

TL: We did use CG to help us with the plastic seas when the dogs are on the raft traveling along, and also to help with the oceans in the background. We couldn’t get the motion control camera close enough, or low enough, to the plastic seas we had built to travel along the ocean.

When the dogs are flushed away in the tunnels, to have enough tunnel for them to rush along, some of the tunnels are CG based on real sets. We used photogrammetry to create them. And, when the dogs are on the bridge and the robot dogs attack, we extended that set with CG, but again, based on the real set. For that scene, we had a previs model that Wes saw with a background. We shot elements later on to replace the previs model.

We also did a lot of matte painting to manipulate and combine sets together, but it was all based on physical models. We tried to shoot everything as elements rather than resorting to CG. CG filled in gaps and augmented what existed.

For CG, we used a standard set of tools -- Maya, Arnold for rendering, Mari for texturing, ZBrush for some modeling tasks, and Nuke for compositing.

AWN: What about that amazing laboratory? Was that a practical set?

TL: We combined long sets together and augmented the sets to make the lights blink. We did 17 lighting passes for every frame when we shot those scenes to give Wes the ability to change the lighting as he wanted. He had 17 different light states all triggered by the motion control software for each frame -- that was a common approach. Wes could decide later whether he wanted the shot to be overcast or have stronger light. We could change lighting direction and mood. It was a Nuke thing.
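
The interview doesn’t detail the comp setup, but the idea behind shooting separate light states is that light adds linearly, so each pass can be rebalanced freely after the fact. A minimal sketch of that principle in Python with NumPy, assuming per-light passes of the same frame in linear light; the function name and data below are illustrative, not from the production pipeline:

    import numpy as np

    # Illustrative only: rebalancing separately shot light states in comp.
    def mix_light_passes(passes, gains):
        """Blend per-light passes of the same frame into one image.

        passes: list of HxWx3 float arrays in linear light, one per light state
        gains:  per-pass multipliers chosen by the compositor
        """
        out = np.zeros_like(passes[0])
        for img, gain in zip(passes, gains):
            out += gain * img  # light is additive, so each state can be dialed up or down
        return out

    # Example: favor one "overcast" state and dim the other 16 (stand-in data)
    light_states = [np.random.rand(4, 6, 3) for _ in range(17)]
    frame = mix_light_passes(light_states, [1.0] + [0.1] * 16)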

AWN: It sounds like this film, perhaps more than others, really came together in compositing.

TL: We had lighting passes, we combined a lot of sets, we had elements. There’s a shot where part of the dog pack is in a trash compactor and there’s all this machinery. We shot over 80 elements and combined them together. When we have puppets in the foreground, we have to blend the ground together with the background miniatures. But, we tried not to polish everything too much. There’s a natural tendency to clean up and polish, so a lot of our work was in making it not look too good. We decided to leave some of the set pops to get a natural feel.

AWN: What was the most difficult thing about this movie for you?

TL: It was the volume of elements and the management of them. In terms of footage, we shot about five times more elements than we had on any other film to achieve what we did.

AWN: And the most fun?

TL: Just being around all those sets and puppets. It’s the third stop-motion movie I’ve done. When you see the film, you don’t appreciate how rich and detailed this stuff is in real life. It’s a pleasure to be around.