Casey Schatz Talks Techvis on ‘The Martian’

The Third Floor’s virtual production supervisor discusses the high-precision coordination required to shoot actors, cameras, stunts, rigs and backdrops on Ridley Scott’s award-winning sci-fi adventure.

With a string of award nominations and wins in hand, Ridley Scott’s The Martian is both a critics’ darling and a box office hit. And while the extensive use of visual effects is quite obvious, Scott’s desire was to film in-camera as much as possible, ideally avoiding the use of digi-doubles. Consequently, the live-action shoots required a tremendous amount of coordination and planning, uniting the efforts of the VFX, SFX and camera crews. Enter The Third Floor, one of the world’s leading visualization studios, to support the film’s live-action efforts with extensive techvis and virtual production support.

I recently had a chance to talk with The Third Floor’s virtual production supervisor Casey Schatz about his work on the film. He shared his insights about a host of production topics, including the demanding precision required to seamlessly integrate the efforts of the VFX, stunt and camera teams, along with the film’s key actors, to capture the look and feel of zero-gravity space.

Dan Sarto: What was the scope of your role and The Third Floor’s role on this film?

Casey Schatz: The Martian was a really great project. Both fun and challenging. I worked with Richard Stammers [VFX supervisor] and Matt Sloan [VFX shoot supervisor]. We had all worked together on X-Men: Days of Future Past and received a lot of recognition for the kitchen scene. It was great to team up with them again.

Our general contribution on this film was techvis and technical planning, which I did while on set to help plan shooting for the zero-gravity sequences. The Third Floor then had a bigger company team that came on to handle postvis after principal photography wrapped. We had nine or ten artists and handled around 170 shots.

Filmmaking is such a collaborative process. Ultimately, my job was to absorb all the information from the departments, help assess it and work out a technical shooting plan so that when we got on set, there were no surprises. Because we put so much energy in up front, the shoots worked out very efficiently.

For my work, I first joined the film in Budapest. They already had previs, which had been done elsewhere, that I started working with, absorbing all the scenes and organizing them so we could start planning how to actually shoot them.

Phase I is having previs QuickTimes telling the story. Phase II is determining how to produce that action using real-world filmmaking tools. So my work was to break down the Maya scenes into the components that would eventually be shot. Part of the job I love the most is collaborating with the different departments on a film, saying, “Here’s what we’re thinking...how does that impact your department?” We find out things like, “Oh, the rig can’t go that high because of safety concerns?” OK, well, I’ll make a note, then model it in Maya, then run it back to that department for review. So, I was running from department to department with my laptop, working in Maya, visualizing everyone’s needs for the shoot.

For example, take the art department and stunt department. The stunt folks don’t want the actors to be up too high because of safety issues. But the art department had already begun building the set pieces and we had just learned what the maximum safety heights might be. So I was orbiting around all the departments, assessing needs and bringing those together into a comprehensive shooting plan that would tell us what piece of the ship needed to be built, where it needed to go and what its height needed to be. Essentially, this process turned the shooting stage into a Cartesian coordinate system with a common origin point so every department knew where to put their set pieces.
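
To make that concrete, here is a minimal sketch in Python of the kind of bookkeeping that implies; the set-piece names, coordinates and height limit are hypothetical stand-ins, not production values.

```python
# Illustrative sketch of a shared stage coordinate system: one common
# origin so every department places set pieces against the same
# reference. All names and numbers are hypothetical, not production data.
MAX_SAFE_HEIGHT = 6.0  # example stunt-safety ceiling for performers, meters

# Each set piece as an offset from the common origin:
# (x, y, z_base, z_top), all in meters.
set_pieces = {
    "hermes_corridor": (-4.0, 2.5, 0.0, 3.2),
    "airlock_section": (6.0, -1.0, 1.5, 5.0),
}

for name, (x, y, z_base, z_top) in set_pieces.items():
    status = "OK" if z_top <= MAX_SAFE_HEIGHT else "EXCEEDS SAFE HEIGHT"
    print(f"{name}: at ({x}, {y}), base {z_base} m, top {z_top} m -- {status}")
```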

From there, I could give information and animation from Maya to the stunt department to help drive stunt systems and plot paths for the astronauts moving around the ship that convincingly suggested zero gravity. The actors were on four-point harnesses or being moved linearly along the truss. So spatially, we knew where everything was going to go, and why. We rehearsed these moves with the stunt team, and that helped relay to the talent where to land on the ship and where to grab hold.

When Sebastian Stan’s character Beck makes his journey after placing the bomb on the nose of the orbiter, his trip to and from the nose was a big part of the stunt work. I took the trust Richard Stammers put in me very seriously. I worked as an extension of the VFX department.

Another fun part was working with Steve Warner, the SFX supervisor. We made a rig that could tumble Matt Damon plus or minus 30 degrees, as well as yaw, which handled much of what you see when he tumbles through space.

We also used the technodolly, which is probably the most beautiful piece of equipment ever made. I just love it. We used it extensively on the kitchen sequence in X-Men. It’s basically a SuperTechno 15 but it’s repeatable. It has absolute encoders so you can do a move manually and it will play it back. Or more importantly, I can feed it information.

DS: It sounds like your work involved a large amount of iterative design work, pulling in data, refining designs, reviewing work, and bouncing back and forth between departments.

CS: Absolutely. Team communication is so vital. This type of collaboration can’t happen over email threads. Even with the most intelligent and talented people, you’d be surprised how quickly conversations can go off the rails using email alone. When you have something visual to look at and the ability to make changes in real-time to say, “OK, the results of that change will be this, this and this,” then everyone can contribute to a unified shooting plan. It’s important to me to incorporate feedback from each department and synthesize all of their input. From there, collectively, we practice the film shoot before we shoot.

DS: Working from the previs, was your techvis used to determine if it was even possible to shoot certain scenes, or was it assumed those scenes had to be shot and you just had to help figure out a way to make that shooting possible?

CS: Well, it was mostly the latter. We knew we had to figure out a way to do it. The bigger distinction was whether we would need a digi-double, which we wanted to avoid. We always want to capture as much in-camera as possible. Using motion-control and these awesome rigs that SFX built allowed us to do that.

The first thing Richard wanted me to do was figure out how to work with a relatively static camera and a tumbling subject for shots with the MAV capsule. The inverse is a camera doing a DNA helix around a static subject. So one of the first things I did in Maya was make the MAV capsule static and use Maya to determine how the camera could do all the work. We determined that in order to shoot that, we’d need the camera to go through both the floor and the ceiling. Which of course is impossible.
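
A quick back-of-the-envelope check shows why. For the camera to complete a full vertical orbit around the subject, the whole orbit has to fit between the floor and the ceiling; here is a hypothetical Python sketch with all dimensions purely illustrative.

```python
import math

# Illustrative check: can a camera complete one full vertical turn of
# a "DNA helix" around a subject without passing through the stage
# floor or ceiling? All dimensions are hypothetical.
subject_height = 2.0   # subject's height above the floor, meters
orbit_radius = 4.0     # camera's distance from the subject, meters
ceiling_height = 5.0   # stage ceiling, meters

for deg in range(0, 360, 45):
    cam_z = subject_height + orbit_radius * math.sin(math.radians(deg))
    if cam_z < 0.0 or cam_z > ceiling_height:
        where = "floor" if cam_z < 0.0 else "ceiling"
        print(f"at {deg} deg the camera sits at {cam_z:+.1f} m -- through the {where}")
```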

Since that couldn’t happen, I ended up dialing in these ratios of subject movement to camera movement to best preserve the integrity of what Ridley [Scott, the film’s director] liked while figuring out the parameters of what was actually shootable.

In previs, you want everything to look cool and be shootable. If you’re too cautious, you’re not pushing the visual envelope. But in this scene, the previs had the camera continuously spiraling beyond what could actually be shot.

Another important distinction was motion-control versus handheld. Imagine if you have a chair or an apple box and you want to make it look like it’s tumbling. The ellipse the camera describes has to be absolutely flawless; otherwise it won’t look like the subject is in space, moving according to the laws of physics.
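
As a rough illustration of what "flawless" means here, a repeatable motion-control move can be driven from an exact parametric ellipse sampled once per frame. A minimal Python sketch, with the semi-axes and move duration as assumed values:

```python
import math

# Illustrative sketch: sample a mathematically exact ellipse at 24 fps
# so a repeatable motion-control head can replay the identical path on
# every take. Semi-axes and move duration are assumed values.
a, b = 3.0, 1.8            # ellipse semi-axes, meters
duration_s, fps = 8.0, 24
frames = int(duration_s * fps)

path = []
for f in range(frames):
    t = 2.0 * math.pi * f / frames   # one full revolution over the move
    path.append((a * math.cos(t), b * math.sin(t)))

print(f"{len(path)} frames, start {path[0]}, midpoint {path[frames // 2]}")
```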

So, we programmed the technodolly and SFX rig to work in concert. The layout of the track, the angle of the track, everything was meticulously calculated. This was a perfect marriage of a motion-control camera and a motion-controlled rig that Matt was sitting in. The SuperTechno 15 gets its name because it goes 15 feet high. So, in this case, the camera couldn’t get high enough and far enough away to look like it was over Matt’s head, which is what’s needed to convey the tumbling. But what if I could maximize the height and then use a chair that can pitch towards the lens so that the net result through the camera is that you can’t tell if it’s the camera doing the hard work or the subject?

So I asked the stunt department how far forward I could safely pitch Matt. It was decided that 45 degrees was perfectly reasonable. So, looking at the base of the ship and knowing when he’s closest to the camera, when we could and couldn’t cheat, I programmed both the camera and the rig to each do 50% of the work. The camera moved high when Matt was pitched forward and moved down low as he was pitched back. The final effect showed Matt doing a complete backflip when in reality he was only pitching back and forth 45 degrees in each direction.
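
In other words, the rotation the audience reads in frame is the sum of what the rig and the camera each contribute, so a full 360-degree backflip can play on screen while the performer never exceeds the safety limit. A hypothetical Python sketch of that split:

```python
# Illustrative split of apparent on-screen rotation between subject and
# camera: the viewer reads subject pitch plus camera travel, so the
# performer can stay within a 45-degree safety limit while the shot
# reads as a full backflip. All numbers are hypothetical.
PITCH_LIMIT = 45.0  # max safe forward/back pitch for the performer, degrees

def split_rotation(apparent_deg):
    """Divide the desired apparent rotation between rig and camera."""
    subject = max(-PITCH_LIMIT, min(PITCH_LIMIT, apparent_deg / 2.0))
    camera = apparent_deg - subject  # the camera move makes up the rest
    return subject, camera

for target in (45.0, 90.0, 180.0, 270.0, 360.0):
    s, c = split_rotation(target)
    print(f"apparent {target:5.1f} deg -> subject {s:+5.1f} deg, camera {c:+6.1f} deg")
```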

DS: So you’re on set while all this is being built and filmed?

CS: Absolutely. I prefer to work on set. I would work in the VFX office when I needed quiet time to meditate on these issues. But I like to be out where the drills are going and things are being built so I can see not only whether something is deviating from the plan, but whether the plan needs to be modified.

DS: How do you handle the on-set dynamic so you get the information you need without causing too much interruption in the flow of the shoot?

CS: It’s a balance. You are always conscious of the filming that is taking place and the need to respect that perhaps your issue might not be the most important thing at the moment. It helps when you have a really good family vibe among the crew where you are helping each other out. It also helps to focus concerns or requests for information around things that will genuinely impact the shoot in a positive way and make sure you aren’t interrupting the flow of the day. You always have to keep in mind it’s a much bigger train than just your car.

DS: Tell me about virtual production on The Martian and your prior experiences with these techniques.

CS: The second part of what I was doing came once the techvis was all mapped out, we’d had our rehearsal days and things were working properly. That’s when I’d put on my virtual production hat.

I was the Simulcam supervisor on Avatar, and it was my first foray into MotionBuilder, virtual production and motion-capture. It was definitely a trial-by-fire way to learn this stuff. I worked at Giant Studios for a couple of years on films like Real Steel, which was a fun hybrid of previs and virtual production and how that marries with live-action cinematography. I think that’s what I was put on this earth to do: work where traditional live-action photography shakes hands with the fancy computer stuff. If I were only doing one or the other, I would go bananas.

Technically my major at CalArts was photography, though I did most of the cinematography curriculum and the computer graphics curriculum. But eventually I came to a fork in the road: which career choice do I make? Do I become Mr. Cinematographer and never touch a computer, or become a computer guy who is never on set? I think I would have been miserable either way. When I got hired to do this thing called previs, I had no idea what it was. But I soon found out I didn’t have to make that choice, as I could do those two things fused together.

So the way we did the Simulcam composite for The Martian’s zero-gravity shoot was via the technodolly. The technodolly has absolute encoders and motors and will send its camera position into MotionBuilder. I was receiving the camera positions so I could triangulate where the crane was relative to the rest of the set. Then I could produce a real-time composite that showed, “Here’s the piece of the spaceship that’s been built and here’s the entire spaceship in relation.” Ridley saw that and was very pleased.
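
The transform involved looks roughly like the following Python sketch; note that the encoder channels and crane geometry here are assumptions for illustration, not the actual technodolly data format.

```python
import math

# Illustrative sketch: convert crane encoder readings (pan, tilt, arm
# extension) into a world-space camera position, given the crane base's
# offset from the common stage origin. Channel names and geometry are
# assumptions, not the actual technodolly data format.
def camera_world_position(base_xyz, pan_deg, tilt_deg, arm_length_m):
    bx, by, bz = base_xyz
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    reach = arm_length_m * math.cos(tilt)          # horizontal reach of the arm
    return (bx + reach * math.cos(pan),
            by + reach * math.sin(pan),
            bz + arm_length_m * math.sin(tilt))    # vertical lift of the arm

# Example: crane base 3 m from the origin, panned 30 deg, tilted up 20 deg.
print(camera_world_position((3.0, 0.0, 1.2), 30.0, 20.0, 4.5))
```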

DS: What were some of the main tools you used on this production?

CS: Maya was my bread-and-butter application on this project. I did a lot of Python scripting to write some of the tools we used on the winch control system that moved the astronauts. There were a couple of shots where Jessica Chastain’s character moves down a corridor right past the lens. For that, I wrote code that could output the numbers from Maya in a meaningful way, which were then fed into the winch control system. Using Maya and Python, I took the animation for how the astronauts were supposed to move and gave that to the winch system. I did the same thing for the technodolly, which essentially acted as a motion-control camera in this case. And all this was happening on a Dell laptop, which was neat.
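
As a rough idea of what that kind of export looks like, here is a hedged sketch using Maya’s Python API; the node name, frame range and CSV layout are hypothetical stand-ins, since the actual winch data format isn’t public.

```python
# Hedged sketch of exporting per-frame world positions from Maya for an
# external rig controller. Runs inside Maya; the node name, frame range
# and CSV layout are hypothetical stand-ins for the real pipeline.
import csv
import maya.cmds as cmds

def export_positions(node="astronaut_root", start=1, end=240,
                     path="winch_feed.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "x", "y", "z"])
        for frame in range(start, end + 1):
            cmds.currentTime(frame, edit=True)   # step the Maya timeline
            x, y, z = cmds.xform(node, query=True, worldSpace=True,
                                 translation=True)
            writer.writerow([frame, x, y, z])

export_positions()
```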

I used MotionBuilder 2014 with the TDCam technodolly plugin to feed the camera position data in real-time. I’d then send that comp to the video assist on the QTake. They’d take the A over B and we’d have a 50-50 simulcam comp. Because we had so much scaffolding in the ceiling, it was smarter to do a 50-50 dissolve, though we had greenscreen in some places. This gave Ridley and the crew an idea of how massive the Hermes really was at 240-plus meters long.
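
The 50-50 dissolve itself is just an equal-weight blend of the live plate and the CG feed; here is a minimal NumPy sketch, assuming both frames arrive as same-sized image arrays.

```python
import numpy as np

# Minimal sketch of a 50-50 dissolve: equal-weight blend of the live
# camera frame (A) and the rendered CG frame (B). The frames here are
# placeholders; a real simulcam feed blends every video frame.
def dissolve_50_50(frame_a, frame_b):
    mix = 0.5 * frame_a.astype(np.float32) + 0.5 * frame_b.astype(np.float32)
    return mix.astype(np.uint8)

live = np.zeros((1080, 1920, 3), dtype=np.uint8)     # placeholder plate
cg = np.full((1080, 1920, 3), 200, dtype=np.uint8)   # placeholder render
comp = dissolve_50_50(live, cg)
```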

Initially, the assets came from the art department. Because the designs were still being tweaked for some time, we were always getting new versions, which were remarkably detailed engineering-based files that were tough to work with in real-time in Maya. But they were extremely accurate.

About halfway through the film, I started working with Framestore’s CG supervisor, Chris Lawrence, who was totally awesome and who won an Oscar for Gravity. We collaborated quite a bit on the techvis work. Framestore at that point had made a cleaned-up version of the Hermes model, which they gave to me to use. I converted those Maya scenes to MotionBuilder.

DS: Any big innovations or uses of the production technology that had never been done before?

CS: Well, the stunt department told me they’d never done anything like this before in terms of the amount of detail. I’d be lying if I said, “Oh my god, we changed the virtual production world!” I don’t know if we did things that have never been done before. But we produced a lot of really cool stuff on a huge volume of shots on a very compressed schedule. I don’t know that we invented any one particular thing that we’ll get a Sci-Tech Award for. But we did use some sophisticated tools in unique ways to get around 100 shots done in a very short period of time. It really was about the efficiency we achieved.

DS: As a one-man band on this techvis, working with a tight schedule on a big production, how would you summarize the key challenges you faced?

CS: Challenge can have a negative connotation. To me, the challenges are the fun part. The key challenge was to interact with every department on the film and be their representative regarding feedback on how shots needed to be handled. Making sure the voices of all the departments were heard. Being able to visualize everyone’s ideas and put together cohesive shooting plans.

Of course, sitting in the same room as Ridley Scott, I’d think, “OK, is this actually happening?” A few times, I got to take the art department’s models and do some live camera position visualization there in the video village tent, which was, “Pinch me, is this real?” He is truly remarkable. Really clear vision, really great working energy.

Sometimes we’d be in meetings and I’d have my laptop hooked to a big screen with an HDMI cable, showing the various scenes to the key people on the film. Someone would say, “What if we moved that over 10 feet?” and I could say, “Wait, hold on one second” and in real-time, make the change. Then we could discuss the pros and cons of that suggested change. That live collaboration was the most fun part.

--

Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.
