Sony Imageworks VFX supervisor Dan Kramer helps bring iconic low-res 1980s arcade game characters into the CG world of 2015 in the new Adam Sandler film.
Based on Patrick Jean’s 2011 Annecy Animation Festival grand prix-winning short film of the same name, Columbia Pictures’ new Adam Sandler comedy, Pixels, pits vintage 1980s arcade game characters, unleashed by aliens bent on destroying Earth, against a group of old-school gamers trying to save the world.
Leading the effort on Pixels for Sony Imageworks was VFX supervisor Dan Kramer. Kramer’s team, in addition to their work on the big third act D.C. Chaos sequence, was tasked with recreating a number of 1980s low-res arcade characters, spending months trying to ensure the game characters’ original “cuteness” and visual simplicity remained intact while their design was updated to entertain 2015 audiences.
I recently had a chance to speak with Kramer about his work on the film and the challenges faced working with digital assets and character designs that proved more difficult to build properly than originally imagined.
Dan Sarto: Tell us a bit about the scope of visual effects work your team handled on the film.
Dan Kramer: The biggest part of our work was on what we were calling D.C. Chaos, which was the end battle when all the different arcade characters come to life, start spewing out of the mothership and attacking Washington D.C. We created 27 unique characters plus the mothership. We had to develop all sorts of destruction techniques that looked pixelated and interesting. Those were large shots – there was a lot of scope to those shots.
The other big part [of our work] was Q*bert, which was really the only character in the film that needed to emote, that we needed to do extensive character animation on. Most of the other characters were really just in attack mode. They didn't necessarily have to show a range of emotions like Q*bert did. Q*bert, rather than being an adversary to the heroes, was a trophy for winning the Pac-Man sequence. He becomes the sidekick to the arcaders, Adam Sandler and his team. He ended up hanging out with them and we had a lot of fun with that.
Beyond that we did some scene work, set extensions, more traditional stuff. You'll see the White House at the end of the film. We did the White House lawn and digital White House extensions. We also did a lot of research. There was a ton of research and development done on how to build the characters, how they were made and so on.
There were also things like hard-surface destruction of buildings. But in every case, we had to develop something new to show light energy passing through the buildings as they were destroyed. Rather than it being traditional rubble and debris, we mixed in large voxelated chunks to make it look like pixels.
DS: You worked quite closely with Digital Domain on this film. Was there much sharing of assets or integration of each other’s work? Describe the dynamic of how the two teams worked together.
DK: There was a lot of overlap with Digital Domain [DD], but most of their sequences were pretty self-contained. They had Pac-Man, Centipede and Donkey Kong. They had these really iconic sequences that happened at night. But there was overlap when we needed their characters. For example, we needed Centipede in our end battle. Q*bert is inside Donkey Kong. So we animated Q*bert throughout, rendered him and delivered elements of him to DD for final composite. In some cases we did final composites for Donkey Kong.
Matthew [overall VFX supervisor Matthew Butler] was really good about keeping things collaborative and treating us all as one big team. He set that tone early. So, between Matthew, myself and Marten Larsson, who's the VFX supervisor on the DD side, we had really open dialogue. They were really open to sharing that information. For example, the way the light cannons fired. DD started that with Centipede, doing nighttime tests. Then a film trailer came along [to be produced] and we needed to do a light cannon in the daytime. So we took what they had built and added some voxelated muzzle flashes and other extra elements. That ended up iterating and going back to DD for their sequences.
Then in a similar fashion, we had light energy cubes and there were a few that had to be very hi-res. DD did some tests. Matthew wanted the cubes to not look like they just had an animated texture on the exterior. If you see them, they're sort of like light energy in this pattern. But for the hero cubes we wanted to make sure they seemed almost holographic, where you could see they had parallax as you rotated around them. You could tell there was a complex interior. DD had done some development on that.
We had some shots that had very hi-res cubes. Right at the beginning of the movie in Guam, an airman sees some cubes on the floor right in front of him and that's the first time they're revealed. We also did the shot where they captured a cube in the lab and were analyzing it, figuring out how to destroy it. We ended up taking what DD did and then riffing on that by adding some of our own ideas and comp treatments. There was a nice feedback loop between the two of us where as soon as somebody would come up with something successful it would go to the other team.
DS: Historically speaking, sharing assets between studios like that is not easy or ultimately helpful. Different pipelines and software tools, the need to re-rig – you can’t just import these assets with a button click. Has that gotten easier or is it still problematic?
DK: It’s not plug and play. DD was using V-Ray and we were using Arnold. They have different systems for how they convert their assets to the renderer and so on and so forth. For the hi-res cubes they sent us renders of what they had in some layers. We talked it through with them but basically rebuilt things ourselves. When they sent us Centipede, similarly, they sent us the model. They actually sent us the Houdini file, which we were able to mostly convert over. But then we had to spend quite a bit of time, for example, attaching attributes to the character that made sense for how we set up our shaders, and for how Arnold was expecting the Centipede to be, because we had written our shaders for how our characters worked.
They don’t just come across and render. Hopefully they come across with enough building blocks to retrofit. For Centipede, for example, it took several weeks to get it done, even though we were given a model, an animation rig and a Houdini file. It took a few weeks to put all that together and get it looking the same as what DD had developed for their Centipede sequence.
DS: What were the biggest challenges that you faced on this film?
DK: One of them was just the creative challenges of figuring out what Q*bert looked like. Chris [the film’s director Chris Columbus] really wanted Q*bert to be a friendly, fun sidekick. When we did our first version of Q*bert, Chris said it looks like it hurts to be Q*bert because he's got all these angular edges. But it's the nature of the film - it's Pixels and he's made of voxels. Chris wanted him to be cute and cuddly, but the building blocks for Q*bert are these hard surfaced angular edges. We spent months developing different looks. We probably did hundreds of different Q*bert iterations.
We rendered out all sorts of different passes. We combined them in different ways in the shader. We had a paint artist work in Photoshop trying to come up with a look that Chris would like. It turns out that the way we solved it was using different scales. We found low-res looked cuter but the higher res gave you more fidelity in the animation. We ended up combining the two and doing a hi-res inner core, while the larger cubes on the outside are semi-transparent and you can see through to the inner cubes. They each have their own light energy that's firing. There's a lot of depth to the character. We played with the eyes quite a bit - getting shading from the outer skin around his eyes onto the eyes to make it feel set in there and to make him feel like he's got a soul, that he's a real character.
Something we struggled with on all the characters was that you really want to beauty-light your character. You put him out into a scene. You may want to put a fill and a rim and a key on him, but everything is so angular and square. Imagine something that's just a pure square - it's pretty hard to catch a rim. We have a bit of a bevel on all of our cubes, but it was hard to shape them. A character you think of as round doesn't really light as round.
What we ended up doing for all of our characters was build a simple, smooth-skinned model, like you would see on a traditional character, and then do all the voxelization as a post process in Houdini, procedurally attaching all the voxels. We still had that smooth shaded version animated for every shot. When we voxelated, we would compare the voxel normals to those of the inner surface of the smooth Q*bert. Because we lit against that smooth surface, he would get the proper, pleasing lighting that you would expect.
Then we would basically compare all the angular faces on Q*bert to the underlying smooth model. If the normals were close enough, within about ten degrees, we would steal the inner smooth normal and attach it to the outer angular faces. If it diverged too much, then we used the geometric normal of the voxel. That allowed us to shape him quite a bit. We could actually rim him a little bit and we could shape light in. If there was reflection on his side, there was a little shape to that. That softened him out quite a bit and allowed us to add a little rim light under his snork or whatever we wanted to do.
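The normal-borrowing step Kramer describes can be sketched roughly like this - a minimal Python illustration, not Imageworks' actual Houdini setup, and all function and field names here are assumptions:

```python
import math

def borrow_smooth_normals(voxel_faces, smooth_normal_at, threshold_deg=10.0):
    """For each angular voxel face, steal the underlying smooth
    surface's normal when the two are within the threshold angle;
    otherwise keep the voxel's own geometric normal.
    Illustrative sketch - names and data layout are assumptions."""
    cos_thresh = math.cos(math.radians(threshold_deg))
    result = []
    for face in voxel_faces:
        n_voxel = face["normal"]                     # unit geometric normal
        n_smooth = smooth_normal_at(face["center"])  # unit smooth-surface normal
        dot = sum(a * b for a, b in zip(n_voxel, n_smooth))
        # Close enough to the smooth surface: light with the smooth normal
        result.append(n_smooth if dot >= cos_thresh else n_voxel)
    return result
```

Lighting against the borrowed smooth normals is what lets a rim or key wrap around the character as if he were round, while faces that diverge too far keep their faceted look.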
The characters seem pretty simple but there was a lot of development to figure out how to make this work. It was quite surprising actually. I don't think I realized how technical it would be. Like figuring out how to attach the voxels. Every single character needed to run through Houdini. We were augmenting normals to get them to light right. There were quite a few little challenges that were surprising.
DS: As far as the big third act D.C. sequence, how difficult was it to shoot the live-action plates you needed? What are the current realities of filming in D.C.?
DK: That's a very good point. Most of the shooting was done in Toronto but we did have to get a lot of the street action. D.C. actually has a height limit on the buildings. Toronto has all these giant skyscrapers everywhere. So for those scenes, whenever we were shooting Toronto for D.C., we would have to chop off the tops of buildings and replace them with sky, then cap the buildings to give them a little D.C. look. Or we might put the Capitol building down at the end of a street to give you the feeling that you’re still in D.C.
It's very difficult to shoot in D.C. In the trailer, there's an attack on the Washington Monument. There's one continuous shot where the camera rotates 360 degrees around the monument while alien characters are coming out and attacking it. That's an impossible shot to get a backplate for. We got permission to fly the perimeter around the National Mall, but pretty far away from the monument. We planned it out, identifying three or four strategic locations where we thought we could hover and acquire tiles to stitch together into a backplate.
I went on that shoot. You have to go up with a Homeland Security officer when you're in D.C. He's there, he's got guns and he's making sure that you're not violating the space and trying to fly over the White House. He controls how close you can get. So I had this plan where I thought we could go and hover, but he wasn't going to let us get anywhere near those spots once we actually got up in the air. He was very conservative. So we got plates, and they were useful, but they weren't super useful.
We did end up flying around the perimeter and collecting as much data as we could. We used photogrammetry to reconstruct the area. Fortunately, you can go inside the Washington Monument and take pictures from the windows at the very top. The windows are thick, scratched-up plastic, but you can shoot through them. The pictures aren't the highest quality as a result, but they kind of saved us in a lot of ways.
Then we walked up and down the mall and took a lot of panoramas of buildings. It took a lot of elbow grease to stitch it all back together. We got a model of the area, which was only so accurate. It was pretty low-res. We found the key projections that we got from the helicopter footage and from the monument itself, which we used to project and rebuild the geometry. We spent months on that to get one seamless panorama that worked from the point of view of the monument.
Of course, while we were there we took tons of reference of the monument, which we built in CG and then destroyed. It’s a pretty cool shot. But it's completely CG.
DS: The film is based on a fantastic short film [of the same name] filled with old 1980s low-res video arcade game characters. CG today is light years ahead of where it was when these characters were originated. Besides some of the technical challenges, what were some of the design challenges you faced bringing these beloved vintage characters, whom many in the audience have never seen, into the world of 2015?
DK: I thought the short was really charming and well done. One lesson we learned quickly, that I kind of latched onto, was that the dichotomy between the detail level and the simplicity of those characters versus the environment around them was so jarring. The characters were so cute and the colors were so simplistic that I think on a basic level, that simplicity worked really well.
We found as we were building the characters, as far as the resolution, the lower the res, the lower we could go, the better. We basically had enough voxels to capture the animation but no more. We tried not to overdo it. We found that the chunkier they were, the cuter they were, the more interesting they were.
But we didn’t want our characters to look too simplistic. Everyone's used to these amazing effects being done today. If our characters look too simple, is that going to look odd? Chris wanted to differentiate the look of them from Legos. He was worried that they were just going to look like plastic Lego blocks and that they wouldn't be very different. By the time I came onto the project, Matthew was already talking about doing light energy, which is what we ended up doing. I think it was Chris, Matthew and probably Peter Wenham, the production designer, who came up with the idea that just like pixels on a CRT that are self-illuminated, our characters could have that light energy inside. That gave them an extra level of detail that we could bring in and out, but when that light energy was off, would look simplistic and cute. That was a really good way of bringing in some of the modern elements of something that's a little more detailed but harkens back to something very simple.
Initially we were trying to be literal. If you think about 2D sprites, how are we going to animate them? Do we build all the actual motion into static models and flip between them like a sprite sheet? That was something that we talked about. We actually did that with the Space Invaders. We just built two different poses and flipped between them because it's so iconic and simple.
We did different techniques for different characters. The first thing we did was a pure, world space voxelization. If you imagine a voxel field that's just static in space and as our smooth shaded character or smoothly built character moves through that field, he just turns on and off the voxels that he's intersecting or moving past. We're revealing him within this space kind of like a sprite that moves through pixels which are fixed on a CRT screen that is just firing them on and off. That was the first thing. Chris and Matthew really liked that notion because it makes them look a little more digital, when they're constantly reconfiguring themselves and voxels are popping on and off.
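That world-space approach can be sketched as follows - a hypothetical Python toy, assuming nothing more than a point-membership test standing in for the animated smooth-shaded character:

```python
def centers(lo, hi, step):
    # Voxel-center sample positions along one axis of the fixed grid
    n = int(round((hi - lo) / step))
    return [lo + (i + 0.5) * step for i in range(n)]

def voxelize_world_space(inside, bounds, voxel_size):
    """The voxel field is static in world space; a voxel fires 'on'
    whenever its center currently falls inside the moving character.
    `inside(x, y, z)` is any membership test - an assumption here,
    standing in for the animated smooth model."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    return {
        (i, j, k)
        for i, x in enumerate(centers(x0, x1, voxel_size))
        for j, y in enumerate(centers(y0, y1, voxel_size))
        for k, z in enumerate(centers(z0, z1, voxel_size))
        if inside(x, y, z)
    }
```

Re-evaluating this every frame is what produces the constant popping Kramer mentions: the grid never moves, only which cells are lit.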
At times though it can look a little too frenetic and it wasn't as pleasing. It was also difficult to keep the characters on model. You might build a character and get it bought off where the voxels are aligned a certain way on the face and it looks great. But then he turns his head 45-degrees and now all the voxel edges are going through the face and it's a different look. So we ended up doing a world space technique in part for some of the characters. Then for some, we did a hierarchy of different spaces. We had a voxel space that was attached to the head, or one to the upper arm, the lower arm, the legs, and the torso and then in that way, those 3D spaces would translate along with the head for example. But then, if the head did any squash and stretch, that would cause a revoxelization within that domain. In that way we got a hierarchy of spaces where it calmed down the amounts of popping and revoxelization. It also helped keep us on model a little bit more.
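A toy version of that per-part idea might look like this - again a hypothetical sketch, with all names made up; the real hierarchy of spaces in Houdini also handled rotation and squash-and-stretch, which this ignores beyond rigid translation:

```python
def axis_centers(lo, hi, step):
    # Voxel-center sample positions along one local axis
    n = int(round((hi - lo) / step))
    return [lo + (i + 0.5) * step for i in range(n)]

def voxelize_part(origin, local_inside, extent, voxel_size):
    """Voxelize one body part in its OWN space. The grid is anchored
    to the part's animated origin, so rigid motion carries the voxels
    along unchanged; only deformation (which changes `local_inside`)
    causes a revoxelization within that domain. All names are
    illustrative assumptions."""
    (lx0, ly0, lz0), (lx1, ly1, lz1) = extent
    ox, oy, oz = origin
    voxels = []
    for x in axis_centers(lx0, lx1, voxel_size):
        for y in axis_centers(ly0, ly1, voxel_size):
            for z in axis_centers(lz0, lz1, voxel_size):
                if local_inside(x, y, z):
                    voxels.append((ox + x, oy + y, oz + z))
    return voxels
```

Running this once per part - head, upper arm, torso, and so on - gives the hierarchy of spaces described above: translating the head just offsets its voxels, so the popping calms down and the character stays on model.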
It also made things a little bit easier to light. In the pure world-space scenario, the voxels never actually turn or move - they're just turned on and off based on where the character is. So if you light him from a sun position, for example, the lighting never really changes on the character. It's not as interesting and it's not as easy to integrate them into the environment. So having those spaces turn a bit was a lesson we learned, and that made things more interesting.
The other thing was just figuring out the light energy. At first Chris and Matthew wanted most of the characters to be self-illuminated. That worked really well for DD's sequences, which were mostly at night. You can really imagine what light energy looks like at night. You can understand that that's going to look really cool and the light is going to cast on the environment. But most of our shots were in broad daylight with a hard key. We found that if we lit up our characters internally, it was hard to tell that they were glowing, because they would have to be so bright as to overpower the sun in order to make you feel like they're actually illuminated. Once you do that, they no longer have any self-shadowing because that's all filled in, and they look flat and uninteresting.
At the same time, we learned that if you could see a voxel catching real sun in key, and then a moment later it fires up, gets bright and then dims down again, it's really clear that that's light energy. There are always elements that don't have light energy, which you really feel fit into the environment, and that helps quite a bit. As you can imagine, a light bulb turned on in broad daylight isn't very interesting, but that same bulb looks cool at night. It's like an LED brake light in broad daylight: you can tell it's really bright, but it's next to the rest of the car, which is not self-illuminated, and that tells your brain what's what. It gives you a visual reference.
Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.