Real Steel, the new boxing robot movie, takes the Simulcam system developed for Avatar and puts it into a real-world setting for the next advancement in virtual production. Giant Studios, under the leadership of Matt Madden, the virtual production technical supervisor, came up with a new system for a new paradigm.
"It really worked beautifully for us with production and with Digital Domain," Madden suggests. "We spent the time upfront figuring out how the pieces would fit together and how we would communicate. It's a model we're going to be referring to time and time again moving forward."
Unlike previs, Giant knew it was ultimately going to be delivering a form of the movie back to DD at game-level rendering quality. But the action itself was going to be fairly close to final, except for the additional animation layer and effects that DD would put on top: the electronics, liquids and ripping metal.
One of the main principles of this virtual production pipeline was that Giant stayed in sync with the visual effects and art departments on the look and structure of the assets being created. There was an approval process, so Giant always knew whether an asset came from the art department or from VFX.
"Our responsibility is to get them ready for this interactive world of virtual production so we can play them live, we can record changes, we can add new versions of prop elements, if we need to change a lighting setup, we can do that, and all those things can be recorded and referenced in a database so the visual effects department can access that information intuitively," Madden continues.
What was helpful with Real Steel, however, was that director Shawn Levy completely bought into this process. "The whole MO is to make it more like traditional filmmaking and make it interactive like live action," Madden emphasizes. "Only we come in with a real-time display of the CG elements.
"He was able to direct his fighters, which were ultimately the robots, prior to going on location in Detroit. And then, once he reviewed the cut from our renders at the virtual production level, he could request changes to the speed or the blocking or the timing of a punch, and we made those changes and submitted updated renders back to him to lay into the cut. He and the editor [Dean Zimmerman] and the producers were happy with the general action and timing of the fights prior to going to location. And, consequently, we were able to get through those fight scenes in record time because we were armed with that prior to photography."
Everyone involved in the physical setup got to review the process as well, not just as boards, but as an actual cut. So they understood the upcoming beats, they understood the coverage and they understood where the camera was and what was in the background. And what wasn't in the background. Madden says it helped across the board.
But Giant took the Simulcam process of simultaneous CG display significantly further. It wasn't just cranes and dollies; there was quite extensive use of Steadicam, which required Giant to have a system robust enough to record that fast-moving, dynamic camera action.
According to Digital Domain's Erik Nash, the production VFX supervisor, previs was achieved completely through real-time interactive means: Levy was in the ring with the boxing performers, directing them as he would human boxers, and was then able to come up with his camera moves in a very hands-on way.
"So heading to Detroit we brought the motion capture technology with us, but, unlike Avatar, we were putting our synthetic characters into the real world," Nash explains. "We were able to make the boxing robots visible to the camera operator and to Shawn on his monitor. We now have plates that are photographed as if the robots are there.
"So the efficiency is huge, but, to me, the reason for taking this technology and pushing it to the next level was to attain a grittier and more visceral experience."
But the motion capture was only a foundation for the performance. Because of the two-foot scale difference between the real actors and the CG robots, all of the data, prior to the virtual camera and Simulcam work in Detroit, was slowed down 10%. "We did that to help sell the weight, size and mass of the robots," Nash offers. Once that data was turned over to the animators at DD, the process had several phases. To attain the robotic nature of the characters, they countered the fidelity with which motion capture records all of the subtle nuances of human motion by developing tools to filter the mocap data. Then there was a lot of keyframing to heighten the action and make some of the movement less fluid. There's always inaccuracy when two CG characters make contact with each other, and the mocap actors didn't actually hit each other as hard as the CG robots needed to, so the animators sped up punches, hardened the punch impacts and exaggerated the reactions.
"One of the biggest challenges came from the fact that three of the hero robots had practical on-set animatronic versions built by Legacy," Nash explains. "That was great, to have something physical for our robots to be intercut with."
Digital Domain used V-Ray as the renderer in conjunction with its lighting pipeline, creating more than a half-dozen principal robots. The toughest was the villain, Zeus, who fights the hero, Atom: he was all-black and didn't have an animatronic counterpart.
"But our job wasn't done until you couldn't tell them apart," Nash concludes.
Bill Desowitz is former senior editor of AWN and editor of VFXWorld. He has a new blog, Immersed in Movies (www.billdesowitz.com), and is currently writing a book about the evolution of James Bond from Connery to Craig, scheduled for publication next year, the 50th anniversary of the franchise.