Ellen Wolff discusses vertiginous flights of fancy and other superhero feats in Spider-Man 2 with visual effects supervisor John Dykstra.
Sony's hugely successful Spider-Man franchise is the latest hit on the résumé of John Dykstra, but it isn't the first time this visual effects supervisor has been in a position to push filmmaking technology into new terrain. Dykstra's career was launched back in 1977 with Star Wars, which won him an Oscar and arguably marked the start of the modern era of visual effects.
The motion control photography technology he co-developed for Star Wars also earned Dykstra a Scientific and Engineering Award from the Academy of Motion Picture Arts and Sciences, and his photographic ingenuity became a key characteristic of the company he subsequently founded, Apogee Film Effects. Among his achievements during that period was an Emmy Award for the visual effects in Battlestar Galactica.
Dykstra focused on commercial work from the late 1980s to the mid-1990s, and then joined Sony Pictures Imageworks to create visual effects for 1999's Stuart Little. That work earned Dykstra an Oscar nomination, as did the first Spider-Man in 2002.
Spider-Man 2 presented several new effects challenges, and Dykstra discusses his approach and the strides made since the original film. He also reflects on how advances in digital technology have changed the role of the visual effects supervisor itself.
Ellen Wolff: Given the success of the first Spider-Man, what did you want to expand upon in the sequel?
JD: What Sam [Raimi, the film's director] wanted was to give people watching the movie a sense of the reality of the situation. That's tied to our empathy with the character of Spider-Man, a guy who's representative of our better selves.
The trick that the visual effects had to do was to take you from the guy-next-door world into his superhero world. Because in order for you to understand the conflict for the character, you have to be able to partake of what it's like to be a superhero. You may not have a punch that could knock out a gorilla, and you may not be able to hold a bus over your head, but certainly one of the things that you can participate in is flying through the city. So one thing that was critical was to create a realistic sense of flying with our hero, to give you a sense of the joy of being this character.
The experience of flight was fairly evocative in the first movie, but in this film the character spends more time in the upper climes of the city. Sam wanted a more vertiginous experience for the audience. I think that the more sophisticated version of the flight through the city that we did for this movie is part of what makes you realize that there's something beyond the responsibility of being a superhero: it's the experience of being able to fly through the city with ease.
EW: Was your approach different from the original film?
JD: We used a cable-mounted camera called a Spider-Cam to photograph the real world. We used this same cable-mounted VistaVision camera on the first picture, but we used it a little more on this one. We got all those wonderful buildings on Wall Street for the price of the photographic setup. The camera went about 4,000 feet from 22 stories high down to a foot off the ground and back up again. That was a pretty amazing feat. But that was the way we captured some of the images that represented New York without having to make them all digitally.
One of the interesting things about film is it's an incredibly dense medium. It records all of this great visual information, as opposed to the digital world, where you have to make up all those individual pieces of information. I think, as much as anything, the key to this film is verisimilitude. There's something that comes from photographing real things.
EW: On the first Spider-Man, you used photogrammetry to generate some of the New York cityscape. Did you do any of that on this film?
JD: We upgraded our pipeline for the creation of the buildings. And though we used a little photogrammetry, for the most part this was strictly a "survey the buildings and put texture maps on them" approach, rather than photogrammetry, where you derive the geometry from the original photography. So this was a little bit different from what we did on the first one.
We had a team that went to New York and surveyed the actual buildings that we wanted to use. We had another team of three digital photographers who photographed those buildings during the right time of day to get skylight on them as opposed to hard sunlight. We were going to have to make adjustments to the buildings in the movie and change their lighting, so we wanted fairly flat lighting to start with. The photographers spent a long time in New York because they had to capture all sides of each building at the right time of day.
EW: Those building landlords must have enjoyed welcoming your crew onto their rooftops for a nominal fee.
JD: (Said with mock innocence) Oh, we're not from Spider-Man. We're from UCLA, doing research! Good old research!
We had people from New York who dealt with this in the most fitting way, and they worked it out amongst themselves!
EW: What changes were made to the digital versions of Spider-Man himself between the first and second films?
JD: He had a new costume, and cloth is always a headache. I have yet to work on a movie where cloth wasn't a mix of two or three pieces of software, because nothing does all kinds of cloth. You can't define the weight of the cloth, the woof and the warp, the material from which it's made and how it's lined, and then have it simulated.
Most software is still based on spring-tension simulation. You can make it look like silk pretty easily, but it's real hard to make it look like oilcloth. It took extensive work by the artists who did those simulations. In many cases, we did several simulations and blended them together. We'd simulate something that looked great for the beginning of the shot but didn't work so well at the end. So we'd do another simulation that worked well at the end but not so well at the beginning, and then do blend shapes between the two. There's a huge amount of artistry involved in making cloth work.
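The blending Dykstra describes can be pictured as a simple cross-fade between two vertex caches: one simulation's positions dominate at the head of the shot, the other's at the tail. A minimal sketch, with all names and data purely illustrative (no production pipeline is implied):

```python
# Sketch: cross-fading two cloth simulations over a shot.
# Each sim is a list of frames; each frame is a list of (x, y, z)
# vertex positions. All names here are hypothetical.

def blend_simulations(sim_a, sim_b):
    """Linearly cross-fade from sim_a (good at the start of the
    shot) to sim_b (good at the end) over the shot's length."""
    assert len(sim_a) == len(sim_b)
    frames = len(sim_a)
    blended = []
    for f in range(frames):
        w = f / (frames - 1) if frames > 1 else 0.0  # ramp 0 -> 1
        frame = [
            tuple(a * (1.0 - w) + b * w for a, b in zip(pa, pb))
            for pa, pb in zip(sim_a[f], sim_b[f])
        ]
        blended.append(frame)
    return blended

# Two toy two-frame "simulations" of a single vertex:
sim_a = [[(0.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)]]
sim_b = [[(0.0, 2.0, 0.0)], [(1.0, 2.0, 0.0)]]
out = blend_simulations(sim_a, sim_b)
# First frame matches sim_a exactly; last frame matches sim_b.
```

In practice the blend weight would be an artist-drawn curve rather than a straight ramp, which is where much of the artistry he mentions comes in.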
The other thing that we changed was the structure of the digital character. We put better musculature in him and gave him a better skeleton. In all ways he became a more malleable character for the animators to use. We upgraded our version of a digital actor to, I believe, a point where there will be moments when he's indistinguishable from a real actor.
We see him with his mask off, in extreme close-up. A truly new thing for us was the creation of digital skin at a believable level. Although it has been done a lot, it has yet to be done in a way that's convincing. It's significantly difficult. It's not so much about creating a gee whiz shot as it is making a believable performance for the character in a situation where you have to see his face.
EW: Were you able to utilize any of the motion-capture data from the first film?
JD: You never do. Everything changes and you can't use the old database. You hear, "That's Irix, we gotta go with Linux," and the conversions don't work. We did harvest some work that we did on the first movie with regard to our Spider-Man character in terms of his structure.
But with regard to motion capture, for the most part, what we found on the first movie was that when you deal with a superhero, you're dealing with a character that does things that we know are not doable. For example, if you watch somebody jump off the top of a four-story building, when they hit the sidewalk, you kinda know what's going to happen.
So the trick is, in order to perform that, they have to hold their body in a certain position and absorb the energy in a way that is different from a real human being's. When we did motion capture, we'd have a stunt person jump off of something that was not so high that he couldn't survive. We'd motion capture him as he'd land. But if we just transferred that data to the CG character, extending the time that he fell and keeping the landing the same, you'd watch it and say, "He just jumped off the top of a stepladder." We'd have to do so much modification of the motion capture data that we'd essentially have to animate the character from scratch anyway. So we used very little motion capture for our Spider-Man character, and no motion capture for the more global issues involved in Doc Ock.
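The physics behind why this retargeting fails is simple: free-fall time grows with the square root of height, so stretching a short captured fall to cover a tall one slows every velocity in the clip, and the landing no longer matches the drop. A back-of-the-envelope check (the heights are rough, illustrative numbers, not figures from the production):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m):
    """Free-fall time from rest: t = sqrt(2h / g)."""
    return math.sqrt(2.0 * height_m / G)

# A stepladder-height stunt jump vs. a rough four-story drop:
t_stunt = fall_time(2.0)    # about 0.64 s
t_story4 = fall_time(12.0)  # about 1.56 s

# Time-stretching the captured fall by this factor slows the
# whole performance uniformly, so the impact reads as a small
# jump played in slow motion rather than a four-story fall.
stretch = t_story4 / t_stunt  # sqrt(12 / 2) = sqrt(6), about 2.45x
```

That uniform slowdown is exactly the "stepladder" tell Dykstra describes, which is why the team ended up keyframing the character instead.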
EW: Doc Ock's huge tentacles make him the most complicated villain yet in the Spider-Man universe. How did you approach the design of this character?
JD: I think he's the unique character in this picture. We had New York City and Spider-Man before, and certainly we had the whole business of moving POVs before, but this time we've got Doc Ock. The whole thing started with the design of his tentacles.
In the first Spider-Man movie, the Green Goblin flew his flying wing the way kids skateboard or snowboard. That was the conceit that contemporized that character. With Doc Ock, we couldn't go to any known commodity. There aren't any quadrupedal devices out there. Everything that we did with him had to create its own reality, so that as you watched him, you'd believe that the mass and movements of his tentacles were correct.
We also had to figure out a way to come up with a tentacle that would at once be a mechanical device and a personality. Sam wanted these tentacles to have some sentient quality to them, so that they could act on their own. Over the course of the story, Doc Ock uses them to pursue his goal, and then at some point he becomes at odds with them and they have a will of their own. They have to take on this persona, and actually be a character throughout the course of this movie. So it was critical that the design of the tentacles incorporate enough anthropomorphic qualities that when you looked at them, you could see a character in them.
The production designer, the costume designer and I worked to make tentacles that could be created in reality, like puppets. Edge FX worked out how the puppets would function. Of course, [Sony Imageworks] animation director Anthony LaMolinara and the animators worked with us side by side as the puppets were being created, both the physical tentacles and the designs for the CG versions, to make sure that we incorporated elements into them that would work for the animation. There was a really tough set of criteria placed in front of the animators with respect to creating the personality of these tentacles.
Of course, during the design phase, Alfred Molina, who had to wear these things, also had input. It was quite a collaboration with Sam, of course, as the arbiter of all the things that would be included in these tentacles. They took a long while. In some form or another, they were in development even past the beginning of principal photography.
EW: What were your criteria for when the tentacles would be puppets vs. CG?
JD: There were mechanical tentacles wherever the character had to come into intimate contact with them. If a tentacle had to grab him around the throat, or light his cigar and hold it to his mouth, or take off his sunglasses, it was done with a puppet.
The tentacles were, variously, four feet long and 18 feet long, depending on what we needed them to do. We discovered that if we used the full-length tentacle we had to have three pieces. So if we had all four tentacles on the character, it would be him plus 12!
Generally speaking, the puppets were used in the close-ups and in the medium shots where the tentacles didn't have to be any more than three or four feet long. The CGI tentacles took over from that point on, for the most part.
EW: What were the most challenging aspects of doing the hand-offs between the puppets and the CG tentacles?
JD: There were lots of places where there would be a live-action puppet tentacle in the foreground that would go off-screen, and then come back on-screen as a CG tentacle. We had puppets and CGI mixed in the same shot. The trick was making them all look the same. And that was difficult.
Audiences are familiar with things like metal and rust. So making a CG version of something that was real, photographed on camera, and then combining them in the same shot was tricky. All of that CG stuff had to respond to light in the same way that the real objects responded to light.
Also, Doc Ock's tentacles did things that they couldn't do in reality. They had mechanical attributes that we couldn't have had in any real device. So there was a lot of stuff done with the animation that stepped beyond the bounds of the physics of the real world. That's the tough part of doing this stuff.
We had to make sure that the center of mass of the tentacle made sense. There was a huge amount of work that went into creating the characters of these tentacles, to make them be at once mechanical and organic. The tentacles had to look as though Doc Ock was controlling them with his brain. They were supposed to be tied directly to his hypothalamus: to be lower brain functions. The tentacles themselves were supposed to move as if they were extensions of his own physiognomy. Alfred loved working with the puppets. We often did a shot with the puppets and then did one without, in case we had to add the CGI. It was such a joy to be able to make these things do this crazy stuff that they couldn't ever possibly do. That's the best part of making movies!
EW: Were there any aspects of making this movie that took you by surprise?
JD: At the end of the movie, a climactic fight between Spider-Man and Doc Ock takes place on the pier, which for the most part was a miniature. We had miniature water when we photographed the miniature pier, and it was decided that we needed more activity. So we had to do CG water at the last minute. That was something that we had to develop and put into the shot over a very short period of time. Water is difficult because it doesn't scale worth a damn.
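The reason water "doesn't scale" is that gravity-driven motion obeys Froude scaling: wave timing varies with the square root of length, so a miniature's water sloshes too fast for its apparent size. The classic miniature-photography workaround, not specific to this production, is to overcrank the camera by the square root of the scale factor:

```python
import math

def overcrank_rate(base_fps, miniature_scale):
    """Rule-of-thumb camera speed for shooting a 1/N-scale
    miniature so gravity-driven motion (water, falling debris)
    reads as full size when played back at base_fps. Froude
    scaling: timing goes with the square root of length, so
    shoot at roughly base_fps * sqrt(N)."""
    return base_fps * math.sqrt(miniature_scale)

# Example: a 1/16-scale miniature, 24 fps playback:
fps = overcrank_rate(24.0, 16.0)  # 24 * 4 = 96.0 fps
```

Even with overcranking, droplet size and surface tension refuse to scale at all, which is why small-scale water so often gives a miniature away and why CG water was worth the last-minute effort.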
EW: How do you think the challenges facing visual effects supervisors have changed over the course of your career?
JD: In the early days, because there was no way to create images from whole cloth, you had to figure out how to move a camera to capture images, either via miniatures or at a bigger scale than real. You were essentially an engineer who brought mechanics to bear on the problems that the filmmaker had. You didn't spend much time thinking about the impact of the image that you were creating; it was just whether or not you could create that image. So it was more about process and less about content.
I got out of film visual effects for a long while and directed commercials because I could do things in the video medium on multiple channel compositors. The whole idea for me was the electronic manipulation of images without any generation loss. I didn't come back into film until I could get an image into the computer, do something to it and then put it back out and make it indistinguishable from film. Then I knew I no longer had to invent a new camera system for every movie!
Now it's become much more about content; about the composition of the image rather than whether or not we can get a camera to go fast enough and stop quickly enough, or whether we can repeat 753 moves with exact precision. This is a huge change.
We get to apply all the electronic technology from the video environment with regard to the technical issues of matting, of decreasing or increasing image size, of the addition of grain, softness and sharpness, color correction: all of that stuff became available to us in the movie industry. It really changed the role of the visual effects supervisor from being an engineer to being more of a designer. It's been a watershed time for visual effects. We can be more creative than ever before.
Ellen Wolff is a Southern California-based writer whose articles have appeared in publications such as Daily Variety, Millimeter, Animation Magazine, Video Systems and the website CreativePlanet.com. Her areas of special interest are computer animation and digital visual effects.