VFX supervisor Axel Bonami talks making, mixing, matching, and maneuvering the many visual styles in Akiva Schaffer’s hybrid live-action/CG animated action-comedy, now streaming on Disney+.
Like its famed 1988 predecessor Who Framed Roger Rabbit, which it inevitably evokes, Akiva Schaffer’s hybrid live-action/CG animated action-comedy, Chip ’n Dale: Rescue Rangers, has great fun combining animation with live-action. And it goes the 1988 landmark film one better by also mixing and matching a half-century’s worth of animation styles, from its 2D/3D lead duo, to its Muppet-like cheesemonger, to the many stop-motion, claymation, and other characters that appear in its sizable cast. Picking up 30 years after the TV series of the same name, the film, now streaming on Disney+, revisits Chip (John Mulaney) and Dale (Andy Samberg) in modern-day L.A., where they rejoin forces to save the life of a friend who’s been kidnapped.
As the VFX supervisor for MPC, one of the world’s largest and most successful visual effects houses, which did the lion’s share of the visual effects work on the film, Axel Bonami was in on the action for most of the production. A veteran digital artist, compositor, and VFX maven – whose credits include Artemis Fowl, Ghost in the Shell, X-Men: Apocalypse, and several Harry Potter films – Bonami worked closely with production VFX supervisor Steve Preeg and director Schaffer to bring the ambitious project to fruition.
In a recent interview, Bonami talked about his creative journey and the joys and challenges of working with 12-inch protagonists.
AWN: What was the scope of MPC's work on the film?
Axel Bonami: MPC was involved at a very early stage, before the movie was greenlit. Disney wanted to run a test for budgetary reasons, because whereas Roger Rabbit cost $117 million, the budget for this one was in the $60-$70 million range. It's a big difference.
The world being created would use a lot of full-CG environments, because of the small scale, and a lot of retrofitted features. So we took a hybrid approach to the main characters. We pitched the idea of one being photorealistic and the other one 2D, existing in a photorealistic world alongside other types of animation. And that's what got the show greenlit.
When we started pre-production, we were the sole vendor on the project, covering a little south of 1,500 shots in the movie. Then a sequence was done by Passion Pictures. It was a full hand-drawn sequence, one of the introduction clips of the Rescue Rangers. And I think there were a couple of shots done in-house as well, that Steve Preeg was supervising. But other than that, we were doing pretty much everything.
We had over 160 characters designed. We generated over 30 full CG sets as well. The only plates shot were with physical humans. When there's not a human in the shot, whether a background crowd or one of the actors, everything is computer-generated.
AWN: I wouldn't have figured that because it looks like there's a tremendous amount of live-action photography throughout the film. How did you prepare for this kind of hybrid medium, where CG characters are the lead actors? What were some of the challenges you addressed?
AB: They’re pretty extensive on a movie like this. I worked on it for almost two years, including pre-production R&D. We spent quite a few months prevising the entire movie, so that we understood the framing and the camera, because there's a lot of small-scale work involved. They had to do some special Steadicam rigging to be able to put the camera close to the ground. We had to understand how fast the characters move – how fast they walk, how fast they run, what type of framing we can do. They actually built a little radio-controlled car that had a small chip on it, so they could get the speed.
Then there was pre-production R&D regarding lenses. We were facing the problem that when you shoot something that’s very small, you end up with that macro-photography feel. If you're trying to film an actor that’s 12 inches tall, you have to put your camera very close, and the whole environment becomes very defocused. We were going to be spending so much time with actors in small environments. So, when we did tests, we realized it was going to be too tiring. It's like shooting into a scale model, or a dollhouse. When you put a camera in a dollhouse, it looks miniature, and that’s tiring for the eye. So, we did tests to determine, how do we bend those rules? How do we make people feel that, even though they're looking at a 12-inch character, it looks like a real actor? We had to understand those rules.
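The "dollhouse" problem Bonami describes falls out of basic lens optics: depth of field shrinks roughly with the square of subject distance. A back-of-envelope sketch using the standard approximation DOF ≈ 2u²Nc/f² (subject distance u, f-number N, circle of confusion c, focal length f) – the numbers here are illustrative assumptions, not the production's actual lens math:

```python
def depth_of_field_m(u, f=0.035, N=2.8, c=0.00003):
    """Approximate depth of field in meters, valid when u >> f.
    u: subject distance (m), f: focal length (m),
    N: f-number, c: circle of confusion (m)."""
    return 2 * u**2 * N * c / f**2

# Framing a human actor from ~3 m vs. a 12-inch character from ~0.3 m
# with the same (hypothetical) 35 mm lens at f/2.8:
human_dof = depth_of_field_m(3.0)  # over a meter of the set stays sharp
chip_dof = depth_of_field_m(0.3)   # barely a centimeter: the macro look

print(f"human-scale DOF: {human_dof:.3f} m")
print(f"small-scale DOF: {chip_dof:.4f} m")
```

Moving the camera ten times closer cuts the in-focus zone a hundredfold, which is why the team had to "bend the rules" rather than shoot the small scale honestly.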
We had to understand the constraints of hand-drawn animation. There was a test regarding whether we would be animating on ones or twos. Usually, in traditional animation, you don’t draw every frame. Roger Rabbit was an exception – they did it on ones, because they wanted to make sure that all the frames actually lined up with the film, because they were doing everything by hand. But Chip was a traditional TV character, and we wanted to give him the same feel that he had in the TV show, so we used twos – one drawing held for every two frames. We also had some stop-motion characters that had to be done on twos.
So, we had to work out what happens when a character that updates every other frame moves through a world that updates every frame. How does it work when they're carrying objects that are real? Sometimes Chip is in a costume, but the costume is supposed to be real, so the costume should be rigged, but then the toon should be on ones. So, we were bending the rules in that sometimes they were actually on ones, because they were interacting with a physical object, or with another human.
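The ones/twos mismatch Bonami is describing can be shown with a toy sketch: a toon drawn on twos holds each pose for two frames, while a rigged prop it carries updates every frame. The pose and prop lists here are hypothetical stand-ins, not production data:

```python
def expand(drawings, hold):
    """Hold each drawing for `hold` frames (hold=1: ones, hold=2: twos)."""
    return [d for d in drawings for _ in range(hold)]

# Four hand-drawn poses of a toon, animated on twos -> 8 frames of footage.
toon_on_twos = expand(["poseA", "poseB", "poseC", "poseD"], hold=2)

# A physical prop the toon carries is simulated on ones: it moves every frame,
# so halfway through each held drawing the prop has drifted while the toon hasn't.
prop_on_ones = [f"prop{i}" for i in range(len(toon_on_twos))]

for frame, (toon, prop) in enumerate(zip(toon_on_twos, prop_on_ones)):
    print(frame, toon, prop)
```

On frames 0 and 1 the toon shows the same drawing while the prop advances, which is exactly the seam that pushed the team to switch the toon onto ones whenever it touched something physical.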
AWN: Did the inclusion of the Bjornson character, who was meant to emulate a Muppet-like puppet, present unique problems?
AB: For Bjornson, we consulted with some professional puppeteers and we looked at Stan Winston videos in order to understand the mechanism. It had to be convincing that it was an actual hand-animated puppet. Those mechanisms were actually reproduced in CG. We had a physical hand inside the head, because when you move the top of the head, the whole head is going to flip back as you talk. In a human, it's the bottom jaw that goes down. In a puppet, both the top and bottom move. So that had to be captured as well. And how the cloth is going to react. It was pretty much us needing to fully understand everything we were going to use in the movie, so it was a lot of research. We needed to be experts in puppetry, stop-motion, claymation, and different styles of traditional animation, because we were going from the 1950s up to the 2020s.
AWN: Speaking of the evolution of animation, those were really funny segments in the Uncanny Valley.
AB: The Uncanny Valley was very fun, and Steve Preeg was the right man to work with. He won the Oscar for The Curious Case of Benjamin Button and he worked on Final Fantasy and some other films with digital characters. But it was also technically difficult in a funny way, because we had trouble making Bob the Viking look bad enough. Akiva told us, “You have to forget everything you've learned to make characters look good.”
AWN: Can you walk us through a sequence that might have been especially challenging and talk about how you handled it?
AB: A good example is when Chip comes home and we meet Millie for the first time. In the office sequence, Chip says that he has to go home to Millie, and we don't know who the character is at this point. And when he gets home, we discover that Millie is a real-sized, photorealistic dog – a computer-generated dog – that lives with him in a small-scale house. Steve Saklad, who was the production designer, had some mockups in the previs to provide an idea of the type of space they wanted. So we had to think about how to make a scale model of that apartment with real-life features. For example, that means we're going to design some small furniture, and we're going to make sure that the wood fibers are real size – so it's like the furniture has actually been made for small people.
As you build up, you must constantly think about how to make this seem like people can actually touch it. People can see it as a cute little dollhouse, but they also have to feel that the world exists, and they don't question it. It’s a really fine balance. The challenge in every sequence is you want to make it feel real, but you need to bend some rules so that people feel that they are the 12-inch person, so they can connect to that character. It was very important to us to put the viewer into the action.
AWN: Was there anything that you thought wouldn’t be too much of a problem that turned out to be unexpectedly difficult?
AB: The big one was regarding Chip, because we were taking a 3D approach, and pretending that it’s hand-drawn. That required a lot of actual manual work. Even though he's CG, he was blocked in 2D. So we had a kind of reference for different positions, targeted emotions, expressions – all that was hand-drawn. Then, after rendering, all the line work was reworked in post-production by hand to get the type of thickness that we wanted. We had over 60 characters on the movie that were fully hand-drawn and hand-animated.
We integrated Toon Boom, the 2D software, into our pipeline. The questions were: How do we bring a plate into Toon Boom? How do we export out of Toon Boom? How do we present the 2D characters with the other characters? We had to create a workflow that was new for MPC. And then we had reviews with our 2D team. How far do we push the line? Does the emotion work? Does the feeling translate into the action?
Chip was in 900 shots, so it's a lot of work, a lot of hand animation involved, a lot of steps. So we had to find a process that allowed us to tackle that amount of work, in that amount of time. That's the mountain that I had to climb at the beginning. But because of all the great collaboration that we had, it was a lot of fun to accomplish.
Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.