Wētā FX Helps the Merc with a Mouth Get Expressive in ‘Deadpool & Wolverine’

VFX Supervisor Dan Macarin dissects his team’s intricate work enhancing the motion of Deadpool’s mask to match Ryan Reynolds' emotions, comedic timing, and dialogue, along with digitally enhancing Wolverine’s corpse, on Marvel Studios’ blockbuster superhero adventure, now playing in theaters.

Probably the last thing most viewers are thinking about when they’re watching Deadpool & Wolverine, the latest action- and humor-filled entry in the ever-expanding Deadpool franchise, is the expressiveness of the sarcastic superhero’s mask. Yet, for the VFX team at leading New Zealand-based studio Wētā FX, enhancing the motion of Deadpool’s mask to ensure that it matched actor Ryan Reynolds’ emotions, dialogue, comedic timing, and body motion – with the goal of staying true to the original comics without becoming too cartoony – was a central objective of their remit. Along the way, they also had the pleasure of digitally enhancing Wolverine’s corpse, as well as creating and fine-tuning a number of critical environments.

We spoke with Visual Effects Supervisor Dan Macarin about his team’s work on the film, including the new tools that were used to facilitate the process, the creative temptations of the Void, and the delight of working with Reynolds.

Dan Sarto: To start, why don't you walk me through the work Wētā did for which you were responsible?

Dan Macarin: Originally, we were called on to pick up from our work on Deadpool and Deadpool 2, which is mainly around Deadpool's facial expressions on his mask. And over the course of the show, we started getting involved in trailers and those trailers became shots in the film. And then we started picking up more and more film shots. The work started including full environments, completely digital-to-plate extensions, and an augmented skeleton for a zombie Hugh Jackman during the opening credit sequence. We were involved in a lot of small shot additions, like adding claws to Wolverine, or little fixes that needed to be done here and there.

DS: In evaluating the work in the beginning, what did you foresee would be the major challenges and how did you prepare for those?

DM: Well, the challenge is always time – trying to give the artists as much time and as many iterations as possible. Ryan performs so well, and his comedic timing is so specific, that it's not something you hit on the first try. You really need to keep pushing and pushing the amount of subtlety that is in there. It's strange when you're telling people, "When that brow lands, just shift it one frame," and they're like, "Really? One frame?" It makes a tremendous amount of difference. With those things in mind, generally what we tried to concentrate on with the tools was freeing up the artists to work on the performance more than, say, cleaning up the mask.

What I mean by that is we don't go digital on the masks; we don't change the props or the costumes from what they are. The on-set team, the costume team, the art team have done such a tremendous job with the detail and the level of accuracy that we want to keep as much as we possibly can. So, we're working straight onto the plates and we're augmenting those plates. Now, the problem with augmenting the plates is, if you go very large on the eyes, then suddenly the shadow that was under the eye starts stretching. The leather texture will start stretching. And you have to rebuild all of those things to make sure that it still feels real, and the distances are the same and the texture scale is the same. And if we can focus our tools on doing a lot of that for the artist and keep them strictly on performance, then we get a much better result.
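To make the problem concrete, here is a minimal Python sketch, using OpenCV, of the kind of 2D plate warp Macarin describes: locally enlarging an eye region with a smooth falloff. This is an illustration of the general technique, not Wētā's tooling, and the eye position, radius, and scale are hypothetical. Notice that the same remap that magnifies the eye compresses and stretches the texture in the falloff band around it – exactly the detail a production tool then has to rebuild.

```python
# A minimal sketch (not Wētā's tooling) of the 2D plate-warp problem described
# above: locally enlarging an "eye" region with cv2.remap.
import cv2
import numpy as np

def enlarge_region(plate, center, radius, scale):
    """Radially magnify the area around `center` with a smooth falloff.

    Content inside the core is magnified by `scale`; content in the falloff
    band gets compressed/stretched -- the exact texture damage (shadows,
    leather grain) a production tool would then have to rebuild.
    """
    h, w = plate.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - center[0], ys - center[1]
    dist = np.sqrt(dx * dx + dy * dy)

    # Smooth falloff: 1.0 at the centre, 0.0 at `radius` and beyond.
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0) ** 2

    # Sample from a point pulled toward the centre, so the region appears larger.
    shrink = 1.0 - falloff * (1.0 - 1.0 / scale)
    map_x = (center[0] + dx * shrink).astype(np.float32)
    map_y = (center[1] + dy * shrink).astype(np.float32)
    return cv2.remap(plate, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REFLECT)

# Checkerboard stand-in for a filmed plate, so the sketch runs as-is; watch the
# squares stretch in the band around the magnified core.
ys, xs = np.mgrid[0:720, 0:1280]
plate = ((ys // 40 + xs // 40) % 2 * 255).astype(np.uint8)
warped = enlarge_region(plate, center=(640, 360), radius=160, scale=1.25)
cv2.imwrite("warped.png", warped)
```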

DS: So, you're not using full replacement CG heads or faces?

DM: Right. The thing that we found very quickly is that Ryan's body movements connect with his head in a very specific way. And we didn't want to take away from that. We didn't want to lose anything. We didn't want people to look at Deadpool and ever say, "Yeah, it looks like an animated character. It looks cartoony." If it's not integrated absolutely perfectly, the audience will see it. It was of paramount concern to us to make sure that we never took the audience out of those performances.

DS: Are there specific parts of the face that are harder to animate than others to bring out that nuanced performance?

DM: They’re all difficult in different ways. But the more expressive he is, the more time-consuming it can be. This might seem strange, but what’s most difficult are those moments when he winks or closes his eyes. That might seem like a very straightforward thing, but Deadpool doesn't have eyelids. And so, you're not talking about augmenting at that point, you're painting and recreating. It has to go over the shape of the eye when it closes, it has to look realistic, it has to fold back in a realistic way. So, it’s a lot more of a manual, artistic process to make him do something like that than the general movements that we've built into the rig of his performance.

DS: So, there's no such thing as a quick wink for you guys?

DM: There is no quick wink.

DS: Tell me about the 2D facial system that you built.

DM: We started off on Deadpool 1 with a traditional system, going from matchmove to 3D facial animation, lighting, and rendering into a comp package, which in our case was Nuke. There's nothing wrong with that system. It's tried and true and it continues to work. But, if we're going to use the plate, it seemed like a better solution was to keep as much of it in the same package, with the same artist, as we possibly could. And that meant going into an entirely Nuke pipeline. Normally, that's not such a big thing; Nuke handles 3D just fine. But moving our 2D facial rig, which has been decades in the making at Wētā, took a very long time.

We've been refining it and we're now at a point where it's quite sophisticated, although on the surface it looks very simple. An artist will bring up the facial animation package, which is really just a node that’s been built for them, and they can just dive right into the performance. That's what makes it incredibly efficient – they don't have this long, complicated script with things that are very confusing or difficult to understand. The package also includes tools for relighting, which becomes important for registering things like wrinkles.

There's also a consideration particular to the Deadpool character: he has black leather around his eyes, and if he's at the wrong angle, he might get light on only one side of his face, which can give him an asymmetrical look that doesn't fit the performance. So, there are times, especially in low-light situations, when we'll have to get a bit more light on the other side – just enough so that you can read the performance and what his intent was, rather than specifically what was in the plate.
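As a rough sketch of what packaging comp tools behind a single artist-facing node can look like, here is an example using Nuke's Python API: a Group exposing one knob that drives a masked Grade to lift the shadowed side of the face. This is an invented illustration of the general approach, not Wētā's facial rig; the node, knob names, and screen region are hypothetical.

```python
# A minimal sketch, in Nuke's Python API, of bundling a masked "fill light"
# grade into a single Group node with one artist-facing control. Not Wētā's
# proprietary rig; names and values are invented for the example.
import nuke

group = nuke.nodes.Group(name="FacialFill_sketch")

# The one control the artist sees: how much fill to add to the dark side.
fill = nuke.Double_Knob("fill_amount", "Fill Amount")
fill.setValue(0.2)
group.addKnob(fill)

group.begin()
inp = nuke.nodes.Input()

# A soft radial matte standing in for a roto of the shadowed side of the mask.
matte = nuke.nodes.Radial()
matte["area"].setValue([600, 300, 900, 600])   # hypothetical screen region
matte["softness"].setValue(1.0)

# Masked Grade: lift the gain only where the matte is white, driven by the knob.
grade = nuke.nodes.Grade()
grade.setInput(0, inp)
grade.setInput(1, matte)      # input 1 is the Grade node's mask input
grade["white"].setExpression("1 + parent.fill_amount")

nuke.nodes.Output(inputs=[grade])
group.end()
```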

DS: I was told there’s an AI tool that analyzes the audio and then calculates what the motion would be in the mask. That sounds like a pretty sophisticated tool.

DM: It's not as sophisticated as it is clever. Our comp supervisor on Deadpool 2 was writing neural networks at the time, and he said, "I think if I matched an audio level to a keyframe of an expression on shots that we've done over the past few years, maybe it could give me a facial animation for nothing." And of course everybody's interest was piqued, and it was like, "Yeah. Okay. Let's try it."

We found a couple of problems right away, and it took a couple of weeks to work out the initial bugs. The first one was training data. We had a couple hundred shots to work with. That might seem like a lot, but in terms of machine learning and AI, it's a drop in the bucket. So, the accuracy was off from the start. And I think, within the first day or two, we realized that we wouldn't be able to just hit a button and have it do the face.

But when we started going through it, we found something very interesting about machine learning that we were missing before. And that was environmental stimuli. Anytime someone shuts a car door, or there's an explosion in the background, or someone else is talking, it affects the data. The machine learning is doing thousands of keys, and oftentimes it doesn't know that someone other than Ryan is speaking – it has no way to tell what audio is coming from where. So, it just registers, okay, I have audio and I'm going to throw an expression. And you're like, "Okay, I don't really want you to do that."

But it added something. It was this ambient motion that was reacting to the environment, that was reacting to other characters in the scene. And it was thousands of keyframes, which an artist would never take the time to do, that gave extra life to the face. So, we dialed that way back to about 5% and, where before his face was going all over the place, now it looked really natural. The more shots we completed, the better the accuracy got. And we were able to mix in more and more of that before an artist jumped on the shot. We could see what worked and what didn't, and people were able to use that as an extra tool in their performance toolset.
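Here is a toy sketch of that workflow: learn a per-frame mapping from audio level to an expression value using finished shots, then mix the prediction in at about 5% as ambient motion under the artist's keys. To stay self-contained, it substitutes simple ridge regression on windowed RMS features for the neural network described above, and the training data is a synthetic stand-in.

```python
# A toy sketch of the audio-to-keyframe idea: map a per-frame audio level to an
# expression value, then dial the prediction back to ~5% as ambient motion.
# Ridge regression stands in for the neural network; all data is synthetic.
import numpy as np

FPS = 24
SAMPLE_RATE = 48000
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS

def audio_features(audio, window=5):
    """Per-frame RMS level, stacked with a few neighbouring frames of context."""
    n_frames = len(audio) // SAMPLES_PER_FRAME
    rms = np.array([
        np.sqrt(np.mean(audio[i*SAMPLES_PER_FRAME:(i+1)*SAMPLES_PER_FRAME] ** 2))
        for i in range(n_frames)
    ])
    pad = np.pad(rms, window // 2, mode="edge")
    return np.stack([pad[i:i+n_frames] for i in range(window)], axis=1)

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression: audio features -> expression key value."""
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

# --- training: (audio, keyframe) pairs from finished shots --------------------
rng = np.random.default_rng(0)
train_audio = rng.standard_normal(SAMPLE_RATE * 10)   # stand-in for real audio
train_keys = audio_features(train_audio)[:, 2] * 0.8  # stand-in for artist keys
w = fit_ridge(audio_features(train_audio), train_keys)

# --- inference: predict keys for a new shot, dial back to ~5% ambient motion --
shot_audio = rng.standard_normal(SAMPLE_RATE * 4)
predicted = audio_features(shot_audio) @ w
AMBIENT_BLEND = 0.05
artist_keys = np.zeros_like(predicted)                # the artist's own curve
final_keys = artist_keys + AMBIENT_BLEND * predicted
```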

DS: You mentioned in the beginning that you started picking up more and more shots as they were offered to you – full CG environments as well as set extensions. Was there anything in that work that was particularly interesting or challenging?

DM: The Void is always a fun kind of challenge. Wētā has worked on so many Marvel movies, we've been a part of so many things, it's like it's become kind of a creative sandbox for us. What do we want to throw in there? What Easter eggs do we want to have fun with? And so, for example, as they're taking their road trip from the cornfield going into the forest, and we're transitioning through different areas of the Void, we wanted it to feel continuous but, at the same time, we wanted the audience to have a little fun. But the challenge is not to push it too far. Because if the viewers aren’t looking where they're supposed to be looking, and they're not paying attention to the performance, you failed the story.

We added things in, referencing Loki and all these different old comic books, showing how things got into the Void. And we made these really intricate effects – portals and things that we were dumping items into. But if you show someone the shot and they go, "That portal thing was awesome," you're like, "Right, but you're supposed to be watching the car driving through the desert, and if you missed that, I failed." So now I have to take the portal out. And those things are sometimes heartbreaking, when you've worked really hard on something that looks incredible and you're thankful to the artist, but you still have to put the story first.

DS: Anything else you just want to share before we wrap up?

DM: The team loved this work. We loved getting to desecrate Hugh Jackman's corpse, zombify his head, and watch Ryan swinging it everywhere. That was a lot of fun for us. Being involved in almost the entire film, and our relationship with Ryan and the team, is always something that we love. And our team knows we've been successful when our work doesn't get noticed. When people don't know what we did, it's actually quite rewarding for us.

Jon Hofferman is a freelance writer and editor based in Los Angeles. He is also the creator of the Classical Composers Poster, an educational and decorative music timeline chart that makes a wonderful gift.