Digital Domain’s Kelly Port and Darren Hendler discuss the blood, sweat and motion capture work that went into the visual and digital effects of Maleficent.
If live action filmmaking is all about bringing make-believe to life, live action fantasy filmmaking is about bringing the impossible to life. Just ask Kelly Port and Darren Hendler, who, as Digital Domain’s visual effects and digital effects supervisors, respectively, had the challenging task of creating CG doubles for Maleficent’s three pixies as well as for Angelina Jolie’s title character. Whether trying to find convincing ways of shrinking actresses down to a fraction of their size or recreating Angelina’s flowing costumes, the duo and their team brought their “A” game to the party, delivering exceptional visuals to the exceptionally beautiful hit film. They might even be interested in a second go at the lively beings inhabiting Robert Stromberg’s fantastical world, from the sounds of things…
Dan Sarto: Tell me about your duties on the film…
Kelly Port: I was Digital Domain’s visual effects supervisor on the film. The primary visual effects vendor on the film was MPC, so we each had our own supervisors. DD was responsible for Maleficent and her various digi-doubles and costumes. The other big body of work was the three heroic flower pixies. I was on set when they were shooting from July through October 2012 at Pinewood in London.
Darren Hendler: I was digital effects supervisor. In some ways you’re kind of the glue of the show: you’re responsible for overseeing the various departments and the pipeline. On this show, I was very much involved in pixie development, from the early stages and performance capture through to the very end. The responsibilities are so varied. A lot of it is about being where you’re needed at the time.
DS: Kelly, what was your main on-set role during the shoot?
KP: The VFX supervisor’s responsibility is to make sure that all of the information and data detailing how an effects scene was shot is translated to our team. We take reference photos; we take high dynamic range photos to record how the lighting was set up; we record camera and lens information to make sure we know how the camera is moving; we make sure eyelines are correct when there are supposed to be pixies flying around; we make sure the lighting is even where a blue or green screen is involved…things like that. You’re there in an advisory capacity.
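The high dynamic range photos Port mentions are typically bracketed exposures merged into a single radiance map that can later be used to light CG elements. DD’s actual on-set pipeline isn’t described in the interview; the following is a minimal Python sketch of such a merge, assuming a linear camera response, with all names and values purely illustrative:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed LDR exposures (values in [0, 1]) into one HDR
    radiance map, assuming a linear camera response.

    Each pixel's radiance estimate is a weighted average of
    (pixel_value / exposure_time) across brackets; the "hat" weight
    trusts mid-range values and discounts clipped ones.
    """
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * im - 1.0)  # 0 at black/white, 1 at mid-gray
        num += w * (im / t)
        den += w
    return num / np.maximum(den, 1e-8)

# A scene point of radiance 0.9 units, shot at 1/4s and 1s brackets
ldr = [np.array([[0.225]]), np.array([[0.9]])]
hdr = merge_exposures(ldr, [0.25, 1.0])
```

The weighting means the short and long brackets each contribute where they are reliable, so both deep shadows and bright practicals survive into the merged map.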
DS: How do you manage the on-set dynamic so that you collect what you need while having a minimal impact on the director and rest of the crew?
KP: There’s not really a whole lot you can do. You just try to keep disruption to the flow at a minimum. A little extra rotoscoping or some paint work is nothing compared to the film’s financial burn rate. If it’s a really big deal, something that keeps us from doing what we need to do, you have to run some numbers in your head based on experience and make a split-second call.
DS: Was any SimulCam work or other virtual production integration happening on set?
KP: No, we didn’t do any SimulCam stuff on this project but it was quite an involved motion capture setup. All three of our pixie actors were being handled by set coordinators, flying them through space on these rigs. So, all three pixies had head cams on them for facial capture and for each of the pixie actors we had two camera operators off to the side, one of them taking a close-up of the face and one of them taking a full shot for reference.
DH: Gary Roberts is our virtual production supervisor and he oversaw all the performance capture shoots. He was responsible for the body and facial capture and before going into those shoots, we had to create real-time versions of all the characters that our virtual production group could use. What they basically do is create a map from the actor to the character, mapping out proportion changes and things like that. They create a setup that could actually work out in real-time. The actors are moving around and you can see these digital characters moving around too. It really helps to give a sense of what the final version is going to look like.
KP: Because the pixie characters actually had to transform into their full live action selves for the second act of the film, they needed to resemble their human selves. So Knotgrass had to resemble Imelda Staunton [the actress playing Knotgrass]. It wasn’t a one-to-one. We didn’t want to just make a small Imelda Staunton. We wanted it to be a sort of pixified version of Staunton. So the eyes were a little bit bigger, the nose was a tiny bit smaller, the head was generally a little bit larger. This was the rough template we followed for each of the pixies. But it was really critical that they still had the essence of the live action actor within the pixie, so that was a design challenge.
One of the things we decided early on was to actually create a full photoreal digital actor. This turned out to be incredibly efficient in the long run because we still needed facial animation and dialogue but didn’t get a full sign-off on the pixies until relatively late into the process. We needed to keep moving forward without knowing if the designs were finalized. [By making a digital actor] we were able to retarget any animation on the actor and transfer that over in terms of muscle, bone structure and proportions to whatever the current pixie design was at the time.
DS: So, by creating a photorealistic digital double of the actor, you could map it back to the actual actor and integrate it with the final design once decided. You always had something to go back to for reference…
KP: Exactly. We ended up with over three thousand face shapes, and redoing them for each redesign would have been incredibly time-consuming.
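The retargeting Port describes, carrying thousands of face shapes from the photoreal double onto each new pixie design, can be illustrated by the simplest version of the idea: transferring per-vertex deltas onto a new neutral mesh. DD’s system undoubtedly compensated for muscle, bone and proportion changes far more carefully; this is only a toy Python sketch assuming both meshes share vertex correspondence:

```python
import numpy as np

def retarget_shapes(actor_neutral, actor_shapes, pixie_neutral):
    """Transfer a library of face shapes onto a redesigned neutral mesh
    by carrying over each shape's per-vertex deltas.

    Assumes actor and pixie meshes share vertex correspondence; a
    production system would also account for proportion changes,
    which this sketch ignores.
    """
    retargeted = {}
    for name, shape in actor_shapes.items():
        delta = shape - actor_neutral             # how the actor's face deforms
        retargeted[name] = pixie_neutral + delta  # same deformation, new face
    return retargeted

# Toy example: a two-vertex "mesh" with one smile shape
actor_neutral = np.zeros((2, 3))
actor_shapes = {"smile": np.array([[0.0, 0.1, 0.0], [0.0, 0.0, 0.0]])}
pixie_neutral = np.array([[0.0, 0.5, 0.0], [0.0, 0.4, 0.0]])  # redesigned proportions
pixie_shapes = retarget_shapes(actor_neutral, actor_shapes, pixie_neutral)
```

Because the deltas live on the actor’s rig, a redesigned neutral mesh can be swapped in at any point and the whole shape library follows, which is what let animation continue while the designs kept changing.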
DS: Sounds like a very fortuitous decision…
KP: It paid dividends in the long run.
DS: What sort of technical innovations went into creating the pixies?
DH: We’ve done a lot of work on our virtual human developments here at DD, and on this show we were definitely trying to push that further. We spent a lot of time with our actors setting up all these different data acquisitions. We tried to see how their blood flow changed as they made various facial expressions, how blood flowed into and out of different areas. Then we mapped that so we could plug the info back into our characters. So when a character scrunched up their face, you could see the blood draining out of their face. That’s something we’d never tried or been able to really do before. When you don’t have that, often the facial performance can feel pretty lifeless, like it’s missing something and you can’t quite tell what it is. These kinds of elements just helped to make the performances that much more realistic.
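Hendler doesn’t say how the captured blood-flow data was applied in shading, but one plausible minimal version is blending the skin’s color map toward a “drained” variant as an expression compresses the face. A hypothetical Python sketch, with made-up values:

```python
import numpy as np

def blood_flow_albedo(neutral_albedo, drained_albedo, scrunch_weight):
    """Blend skin color toward a blood-drained map as the face compresses.

    scrunch_weight in [0, 1] might be driven by how much the local skin
    area shrinks relative to the neutral pose (a hypothetical control).
    """
    w = np.clip(scrunch_weight, 0.0, 1.0)
    return (1.0 - w) * neutral_albedo + w * drained_albedo

# One RGB skin sample, halfway through a scrunch
neutral = np.array([0.80, 0.45, 0.40])  # flushed, reddish skin tone
drained = np.array([0.85, 0.60, 0.55])  # paler, less blood beneath
mid = blood_flow_albedo(neutral, drained, 0.5)
```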
DS: What about the intricacy of the performance capture process?
DH: We really wanted to give the actors a great working environment so they could be together and interact with one another. So we did a lot of work on our facial capture system and upgraded the head-camera system we’d used on previous shows like Tron, with much higher-resolution cameras and a much faster frame rate. Our actors wore those cameras the entire time they performed. We had markers painted onto their faces that gave us around two hundred points. Once they’d done their performances and editorial had gone through and made their selections, we went through those performances, looked at the head-camera footage, tracked all of those markers on the actors’ faces and built a moving 3D point cloud of dots that represented exactly what that actor’s face was doing at the time.
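Turning tracked 2D marker positions from two or more calibrated cameras into a 3D point cloud is classically done with linear triangulation. DD’s solver is proprietary and not described here; this is a generic two-view DLT sketch in Python:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D marker position from its 2D image locations in two
    calibrated cameras (3x4 projection matrices) via linear DLT:
    stack the constraints x*P[2] - P[0] = 0 and y*P[2] - P[1] = 0
    for each view, then take the SVD null vector.
    """
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)

# Two toy cameras: identical intrinsics, the second shifted 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
marker = triangulate(P1, P2, (0.1, -0.05), (-0.4, -0.05))
```

Run per marker per frame, this yields exactly the kind of moving point cloud Hendler describes, which a facial solver can then map onto the animation rig.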
From there we used our proprietary facial software, something we’ve been working on for the last five or six years, which takes that point cloud and transposes it onto our animators’ facial rigs. What we got was the full facial rig of the actual character moving and performing as close to the actor as we could get it. It wasn’t the final animation and we still needed to work from there, but it was a really good first step for our animators to work on.
It’s the same thing with body capture. If you try to hand-animate a body performance from scratch, it never looks quite realistic; you miss those high-frequency motions. That’s why motion capture looks a lot more realistic. The same is true of facial capture: you can animate a face by hand, but there are all these micro changes going on that hand animation misses.
KP: I would say for sure the pixies were some of the most complex assets we’ve ever worked on. We improved our skin shaders and the look of the skin and we made huge strides in our proprietary hair grooming software, Sampson. All of the characters had relatively complex hair and their costumes were all very intricate. We were dealing with petals and leaves and the fine little hairs like peach fuzz on their face and arms. We spent a ton of time focusing on the construction of the eyes and getting the muscles and connective tissue working well, all the way down to the way they blinked.
We actually created CG effects shaders that were able to tie in with an almost real-time facial animation rig, where the animators could see full displacement, with fine wrinkles around the eyes, forehead and mouth: essentially what you’d see in the final render. These are all things that hadn’t really been seen before from an animator’s perspective. When you have actors with expressive faces, a lot of the wrinkles in the skin are incredibly informative. The animators were able to see this in real time as they dialed in the face shapes for the dialogue, which is really great. Given how complex the rig and all the face shapes were, and what was under the hood, the fact that it could run so fast, actually in real time, was fantastic.
And of course, our actors were also in these flying rigs and a lot of times they were picked up at the waist, so their center of gravity wasn’t exactly right. They’re supposed to have wings. They’re supposed to lift up from in-between their shoulder blades with a slightly different center of gravity and be only a third of the size of a human. So all of these were complicating factors for us to address in animation.
DS: What about the challenges you faced in getting the desired look for Maleficent?
DH: We knew there was going to be a huge amount of scrutiny on her and her digi-double, and making sure that it looked exactly like her. We knew that [Angelina Jolie] was going to be looking at a lot of the material herself and she was going to be the best judge of how she looked, so we knew we had to get her digi-double right. It had to look like her and move like her and feel natural with those giant wings.
DS: Were there any innovations that came into play for Maleficent’s digi-double specifically?
DH: I don’t know if there were any specifically new innovations. We just really pushed further on a lot of the development we’d done in the past. We did a lot of work on Sampson, and that was really necessary for the pixies as well as Maleficent, who had a really long free-flowing hairstyle.
I think one of the more interesting aspects involved the early discussions about her wardrobe, because we knew she’d be flying. We thought, “Wouldn’t it be great to get her into a skin-tight wardrobe, very form fitting, so it would be easy to do all the rig and wire removal?” Then on the first day of shooting, Angie arrived with five-foot sleeves. Of course, her wardrobe was wrapping around the rigs. So we knew from that day on we’d have to create a CG wardrobe that could hold up. For many of the shots where we did digi-doubles we had to remove her entire body and create a CG wardrobe to paint out the visible rig.
DS: That sounds like fun…
DH: We sat down and looked at how much work it was going to take to paint out the rigs and recreate her wardrobe and we figured, “OK, we’ll just keep her facial performance and head and neck as much as possible, and the rest of her body we’ll just create in CG.” So her wardrobe really had to hold up on the big screen. We had to create a free-flowing, dynamic, chiffon-like wardrobe and in the end no one could really tell when it was the CG or when it was real.
DS: What was the most rewarding part of the project for you?
DH: One thing that really worked out much better than I ever anticipated was how we made CG versions of the actors and then transferred all of that to their pixie forms. With the sheer number of revisions and pixie concepts we were going through…at certain stages we were building two or three different types of pixies a week. Yet our animators were able to carry on working with very little impact. We were rebuilding new characters with different concepts and different facial designs and the animators just carried on working. That whole process worked out really, really well.
KP: The work we did on the pixies was very rewarding in the sense that it built upon stuff that we had already done before. But also, they were just really wonderful characters. They were so detailed and full of life. The actors themselves were just great people and really fun to work with and their personalities really came across in the pixie characters. All of us here at DD are trying to convince Disney to make a whole movie just about pixies!
Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.