In this breakout year for 3-D animation, AWN takes a glimpse at what's aesthetically and technically at stake.
More than 50 years after their debut, 3-D movies are an idea whose time has finally come -- and nowhere is that more apparent than in animation. It's animation -- CG, in particular -- that to date has made the best use of 3-D, and it is in many ways best suited to the technique, since it is built in 3-D virtual space to begin with.
Interestingly, CG-animated features themselves have been around for barely more than a decade. And as the digital technology behind the animation has rapidly matured, 3-D adds yet another wrinkle to the challenge. With every major animation studio now wagering on 3-D as a key to theatrical success, each is adapting it into its production pipeline in a different way and working out its aesthetic impact on the final product. There is, however, a definite View-Master approach being adopted throughout the industry, rather than a reliance on gimmicky, in-your-face effects.
And that includes Henry Selick's Coraline from Laika, which has the distinction of being the first stop-motion feature shot in 3-D. "When you're doing stop-motion and weighing it against the other formats, you're trying to determine its strengths and weaknesses," Selick offers. "For me, the strengths are that it's all real stuff: it's all real props, miniaturized, but the stuff really exists, and 3-D captures that. This feels like we finally captured that experience of what you get when you visit the film in production... And we actually designed the film for 3-D, changing the shapes of sets and so forth... it was more about bringing people into the space as Coraline is seduced by this Other World that's full of magic, in a place that she's starting to feel real good about."
But without a doubt, DreamWorks Animation has been at the forefront of pushing the 3-D envelope at the behest of Jeffrey Katzenberg, who believes 3-D is the future of animation. After being wowed by The Polar Express in IMAX 3-D, the DreamWorks Animation chief has been preaching the gospel of stereoscopic immersion, suggesting that it's more about extending the proscenium than breaking it. Thus, beginning with the March 27 release of Monsters vs. Aliens, DreamWorks has integrated 3-D completely into its production process.
Phil McNally, stereoscopic supervisor at DreamWorks, had worked on 3-D projects where the process was added in post, and was drawn back by the chance to integrate 3-D into a film at an early stage.
"What got me excited about coming back to DreamWorks was Jeffrey Katzenberg's explicit promotion of the idea of authoring in 3-D, which is something that is a creative decision and has all the creative opportunities separate from doing 3-D in post," he says.
McNally says achieving this meant getting 3-D tools into the hands of every artist so they could see their work in stereo at every stage. Artists begin using 3-D at the layout stage, with the earliest blocking of animation and camera work being done in Maya or DreamWorks' in-house software.
"We see the scene in 3-D, in stereo, on the desktop so we can see what we're doing," McNally says.
That's required the studio to come up with some of its own tools, even as applications like the compositing package Nuke have begun to add stereo tools. Stereo capability also had to be added to the editorial stage, so that footage could be viewed in 3-D through Avid. McNally says the stereo work is essentially camera work, and 3-D becomes more of an extra component at every step rather than a separate process.
"I kind of made the joke one time that it's a bit like having some animators that just animate in X and Y and then pass it on to another animator who moves it in Z. And of course it doesn't work like that at all," he says. "Animation, at least in the CG world, is a 3-D process already. So in theory, there isn't an extra Z-component in the pipeline; it should just be that each individual artist is working in X, Y and Z all at the same time."
The difficulty with that is that few animators have much expertise in working with 3-D, and there is therefore a tendency to max out the 3-D effect. 3-D also exposes cheats that work in 2-D.
"A very typical thing would be eye line or the placement of characters in relation to each other or objects that look completely fine in 2-D, but when you put 3-D space into that scene you then unravel the fact that there was a cheat going on there," he says. "Or maybe that someone was closer to the camera and someone is further away and they're supposed to be looking at each other."
McNally says he will do a final pass and make sure the work is up to snuff and dialed in to where it needs to be. That's the last of three 3-D passes, starting with a "70%" pass.
"We call it 70% because the craft of the stereo settings only needs to be 70% good enough to be passed to animation," he says. "It's really only once animation is done that we can really dial in the final settings for the stereo."
The second pass is fairly complicated, as convergence points are adjusted and multirig is used to mix different stereo volumes for different elements within a single shot. "You might want to use a longer lens, which has the tendency to make the characters look like cardboard cutouts. So we will actually use a different stereo setting for the character to enable the character volume to feel nice and round without having the backgrounds go so deep with that lens that it won't hurt your eyes," he says.
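The multirig idea McNally describes can be sketched numerically: give each element its own stereo rig so a long lens doesn't flatten the character or stretch the background painfully deep. This is a minimal illustration; the disparity formula is the standard converged-camera model, and the function names, units and values are illustrative assumptions, not DreamWorks' actual settings.

```python
def screen_disparity(depth, interaxial, convergence):
    """Horizontal parallax (in scene units) for a point at `depth`,
    shot with cameras `interaxial` apart and converged on the plane at
    distance `convergence`. Zero at the convergence plane, negative in
    front of it (toward the audience), positive behind it."""
    return interaxial * (1.0 - convergence / depth)

# One rig keeps the character's volume feeling round; a gentler rig
# keeps the distant background from receding uncomfortably far.
character_rig = {"interaxial": 6.5, "convergence": 300.0}
background_rig = {"interaxial": 1.5, "convergence": 300.0}

char_parallax = screen_disparity(280.0, **character_rig)   # slightly in front
bg_parallax = screen_disparity(5000.0, **background_rig)   # gently behind
```

Mixing the two renders in one shot is what makes it a "rig" per element rather than a single stereo camera for the whole frame.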
The last step is McNally's, and it's mostly a blending pass in which he focuses on how shots jump across the edit, mitigating transitions to avoid creating eyestrain for the audience. McNally says this kind of 3-D process will have a profound impact on filmmaking, though it will take experimentation and time for it to evolve beyond the traditions of 2-D.
"If I kind of project out 10 or 15 years and assume 3-D has become successful and people have been able to develop the idea, I think we're going to see a much more passive camera style and composition," he says. "Meaning the frame and camera style will be more passive and the dynamics of a shot will come from the 3-D staging within the space."
At Sony Pictures Imageworks, Senior VFX Supervisor Buzz Hays says 3-D has been a learning process of constantly refining tools and techniques.
Each of the 3-D projects the studio has worked on has been different, and those differences have had a huge impact on how the work is done. For example, given the length of the development process, many films were not conceived as 3-D films, and many were converted to 3-D either very late in the 2-D production process or after the 2-D version was completed.
Hays says that while it's possible to start integrating 3-D at any point, the best results and the most efficient processes come from being able to integrate the 2-D and 3-D versions as early as possible.
"The longer we wait to integrate 3-D into the process, the more it tends to be a separate process, or at least a separate department," he says. "Whereas if it's integrated into the show from the beginning, then the entire show is both 2D and 3D and there is no differentiation between the two."
The 3-D was a separate department on such studio projects as Beowulf, Monster House and Open Season. That's changed: the 3-D process is fully integrated into the current production, Cloudy with a Chance of Meatballs (opening Sept. 18), all the way back to the camera layout phase, Hays offers.
"It saves a lot of resources internally to do it that way," Hays says. "At least as far as we can predict, it's more cost effective to do it that way because the same artists are dealing with both the 2-D and 3-D."
The decision to do a film in 3-D can still come late in the process, especially when a studio is uncertain whether the cost is worth it. Hays says the standard number thrown around for the cost of 3-D is between 10% and 15% of the below-the-line cost, but that shrinks -- to the range of 6% to 10 or 12% -- the earlier 3-D is integrated into the process, he says. "But it's hard to say because every situation presents itself differently."
Additional pipeline issues come from there being few commercial tools for working in stereo. Animation studios and visual effects houses have had to come up with their own solutions and tools for automating the generation of a second image without having to render it from scratch.
One such solution, Hays says, is photogrammetry, or reprojection, where the depth and color info from one eye can be used to generate at least part of the other without having to go back and re-composite it.
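The reprojection Hays describes can be sketched as a forward warp: use the left eye's per-pixel depth to shift its pixels into the right eye's view, leaving disocclusions as holes for compositing to fill. A minimal sketch assuming a simple converged pinhole disparity model; the function name and parameters are hypothetical, not Imageworks' tools.

```python
import numpy as np

def reproject_right_eye(left_rgb, depth, interaxial, focal, convergence):
    """Forward-warp a left-eye image into a right-eye view using
    per-pixel depth. Regions the right eye can see but the left cannot
    are left as NaN holes, to be filled later in compositing."""
    h, w, _ = left_rgb.shape
    right = np.full_like(left_rgb, np.nan)
    # Horizontal disparity in pixels: zero at the convergence plane,
    # growing in magnitude for nearer pixels.
    disparity = np.round(
        interaxial * focal * (1.0 / depth - 1.0 / convergence)
    ).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                right[y, xr] = left_rgb[y, x]
    return right
```

A production version would z-buffer overlapping pixels and fill holes, but the core saving is the same: the second eye reuses the first eye's shaded colors instead of being rendered from scratch.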
Animation offers a certain type of control that live-action does not. Hays says CG animation allows each element to have its own virtual camera, allowing the depth of each object to be fine tuned. Adding a 3-D camera alters more than just the pipeline -- it creates new aesthetic challenges.
"There's a language to telling stories in 2-D that is distinctly different from 3-D, which hasn't really been fully explored yet -- in fact, it's been barely explored," he adds.
Hays says camera placement becomes the most important change 3-D brings to moviemaking. In 2-D, filmmakers have long relied on focus, camera position and framing to direct the audience's eye. 3-D is different in that it requires changes in framing, pacing and editing. The depth-of-field cues that give 2-D images the illusion of depth are unnecessary in 3-D, and the comfort of the audience becomes a factor, as cuts that force eyes and brains to adjust too radically can cause headaches.
But that doesn't mean 3-D has to be slowed down, Hays says. "There are certain things to avoid so you don't make an uncomfortable experience, but that doesn't mean you can't cut quickly or use fast motion or anything like that," he says. "But I do think people need to experiment a bit."
Taking a slightly different approach is Blue Sky Studios, making its 3-D bow with Ice Age: Dawn of the Dinosaurs (opening July 1).
"What we've done is we've made a parallel pipeline for stereo versus the mono pipeline, which is our traditional pipeline," says Jayme Wilkinson, technical supervisor for stereoscopic development at Blue Sky. "At certain locations in the mono pipeline, we take data out to adjust additional camera data for the stereoscopic view of the second camera."
Though the process separates 3-D from a lot of the 2-D work in the Blue Sky system, the 3-D planning takes place early on. "After the previs layout, we'll take the 3-D data scene and bring that into our world of stereo and do some depth composition, using the tools and tech we've developed internally," Wilkinson says. "We'll set up how much depth do we want to get out of this, how much volume."
That 3-D info is put back into the standard pipeline as 2-D camera data and rides along with each shot through the process. After the shots are animated, the settings will be pulled up and checked to ensure they still work for each shot.
"They (the animators) may tweak the camera to best play to the animation of a character or a shot," Wilkinson says. Animators have the ability to check or view a shot in stereo at any point, but generally concentrate on creating their performances in 2-D.
Once approved, the shot is software rendered, then heads back to the mono pipeline for master lighting, rendering, paint and compositing. The rendered elements are then returned, and the additional pieces required to render the second eye are created so it can be assembled into a stereo film.
"We're not relighting it for stereo, we're not making new materials for the stereo, so we're pretty intelligent and efficient about how we go back and forth between the two pipelines," he says.
There will, however, be some adjustments between the 2-D and 3-D, such as depth of field or blurs that need some tweaking before the movie is finished, Wilkinson says. The main motivation in fashioning this pipeline was to keep the processes that had worked for previous Blue Sky projects while avoiding redundancies. That has posed technical challenges, such as synching up the work schedule for both versions in the pipeline, as well as finding animators with the right skill sets.
"I need people who are good camera people and good compositors -- so there are two hats on their head," Wilkinson says. Animators also, again, can't use the 2-D cheats, and while it's possible to fix many such shots, a few do have to go back to animation to be fixed up. Education and experience will minimize such problems in the future, he says.
Depth blur and optical effects like lens flares and halos have proven a particular challenge in stereo. Blurs tend to flatten out and optical effects, which typically happen in the eye, are difficult to place in 3-D space to look believable or natural.
Among the ways Wilkinson thinks 3-D can add to storytelling are shots where a character can lean into and enter the audience's space, as well as the overall feeling of depth in a scene.
To date, most animated 3-D films have been converted from 2-D versions. Over at Disney, producer Don Hahn, who has worked on converting both older films (The Nightmare Before Christmas) and recent ones (Meet the Robinsons and Chicken Little), suggests that the process is more complicated than it sounds.
"On CG films, for example, there are many patches and paint fixes on the original 2-D image that reveal themselves in the conversion process. A lot of the work on those first films was deciding if it could be done, and, if so, how much patching and painting work had to be done to fill in spaces behind characters," he says.
Early films also posed major aesthetic issues, as it had to be decided how far to take the 3-D effect. Hahn says they tended toward a conservative approach in which the 3-D effect was more subtle and only popped out occasionally for a big effect.
Hahn is currently four months into converting Disney's Beauty and the Beast, this time with some improvements to the process. The production began by going back into the original CAPS files for the film, which were converted into a more contemporary software package with all the original elements preserved in layers.
The 3-D models for the characters are created with a number of new techniques, including a proprietary system that automates the process.
On the aesthetic side, Hahn says he's working to create an immersive experience for the film, so that the audience feels like it's in the ballroom and the hallways of the castle appear miles long.
Meanwhile, Walt Disney Animation Studios took a similar approach to DreamWorks with Bolt, which is up for an Oscar for best animated feature. Robert Neuman, stereoscopic supervisor on the film, says this was the first feature at the studio where 3-D was part of the making of the movie instead of being a mostly post-production process.
While 3-D was integrated into the film's design early on, Neuman says it was still the 2-D engine pulling the train. "The challenge was to deliver an uncompromising 3-D experience, but still be able to make the best 2-D film that the filmmakers wanted to make at the same time."
The early creative development was done in 2-D: storyboarding, animatics and editorial, Neuman says. Once sequences with shot breaks were determined, shots went to layout, where the final 2-D and 3-D cameras were built. While many creative decisions had been made, Neuman says this was where the ways in which 3-D can add to the narrative were considered.
"We had the sense that we were going to be using 3-D as a storytelling device, something more than just throwing on an added dimension to it," he says. "We wanted to use that as something that would actually support the narrative of the film instead of something that's just tacked on."
Supporting that was a depth script, in which Neuman went through the entire film and used the major beats of the story to guide the depth of each shot. "We'd wind up at the end of the day with a number that would reflect the emotional intensity or the conflict level of the shot, and that would be what we'd use to guide the application of depth."
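The depth-script idea can be sketched as a simple mapping from a per-shot emotional-intensity score to a stereo depth budget. This is only an illustration of the concept; the function, shot names and numbers are invented, not Disney's actual values.

```python
def depth_budget(intensity, min_vol=0.2, max_vol=1.0):
    """Map a shot's story-beat intensity (0.0 = quiet, 1.0 = climax)
    to the fraction of the comfortable stereo volume it may use."""
    return min_vol + (max_vol - min_vol) * intensity

# A hypothetical depth script: one intensity number per shot,
# derived from the major beats of the story.
depth_script = {"quiet_dialogue": 0.1, "chase": 0.7, "climax": 1.0}
budgets = {shot: depth_budget(i) for shot, i in depth_script.items()}
```

Scoring every shot this way gives the stereographer a single number per shot to "guide the application of depth," as Neuman puts it, rather than eyeballing each one in isolation.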
Depth was especially useful in amplifying the emotional impact of each shot by controlling the relationship of the subject to the screen. "If the audience is supposed to feel closer to a character, then we'd be literally taking the character and putting them on the audience's side of the screen," he says. "And if we were trying to create emotional distance, we'd do the opposite and push them behind the screen."
The amount of depth used was governed by a system of supply and demand, with the demand limited by what was comfortable for the audience.
One tool used on the film to ensure a progression in the 3-D effect is what Neuman calls a "floating window." The technique is essentially a mask introduced to each eye with a stereoscopic offset that allows the filmmakers to control the perceived location and orientation of the screen.
"Say (that) for narrative purposes we wanted to keep a character behind the screen. I was able to buy extra stereo depth bandwidth by moving everything out into the theater," he says. "That was a huge win for us, being able to use it in a narrative sense and still maintain a comfortable viewing experience while giving plenty of volume and internal depth to the screen."
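The floating window Neuman describes can be sketched with image masks: blacking out opposite edges of the two eye images by different amounts gives the frame's borders their own parallax, shifting where the "screen" appears in depth. A minimal sketch assuming NumPy image arrays; the function name and masking convention are illustrative.

```python
import numpy as np

def float_window_forward(left, right, offset):
    """Mask `offset` columns off the left edge of the left-eye image
    and off the right edge of the right-eye image. The frame's borders
    then carry negative parallax, so the window appears to float in
    front of the physical screen, buying extra depth budget behind it."""
    left, right = left.copy(), right.copy()
    if offset > 0:
        left[:, :offset] = 0.0
        right[:, -offset:] = 0.0
    return left, right
```

Animating `offset` over a shot (or angling the mask) is what lets the window's perceived position and orientation be controlled shot by shot.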
Bolt also used the multirigging technique to dial in appropriate depths for individual elements in a single shot. Neuman says this reduced the effect of individual elements looking like cardboard cutouts, a particular problem when using a long lens. Similarly, the film also dealt with depth of field as an effect, added in compositing so it could be adjusted for the 2-D and 3-D versions of the movie. The production went for efficiency in using the left eye image as the 2-D version for most of the movie.
"For shots where we had to do something like remove or dial down depth of field effects, then we had to create a unique left eye for that shot," Neuman says. "There was more of a hit in terms of storage than rendering because, as I said, we didn't render in the depth of field, it was a compositing effect."
The extra overhead of doing Bolt in 3-D was minimal compared to Neuman's previous 3-D projects. For the future, he'd like to see more of the same, moving up the 3-D process further in the pipeline, and making the process more user-friendly and efficient.
"I'm super happy with how Bolt came out," he says. "If you see it in 3-D you're getting a little something you don't see in the 2-D version."
Indeed, Ed Catmull, president of Disney and Pixar Animation Studios, believes Bolt "is the best [3-D] that's ever been done anywhere... a good addition that doesn't get in the way of the story, but, on the other hand, gives it real depth." In fact, both Disney and Pixar Animation Studios have now fully embraced stereo on all of their upcoming CG-animated features. Up (May 29) marks Pixar's first 3-D venture, with 3-D conversions of Toy Story (Oct. 2) and Toy Story 2 (Feb. 12, 2010) in the works as a setup for Toy Story 3 (June 18, 2010).
"... For this type of film, we're trying very hard to make it as subtle [as possible]," offered director Pete Docter in describing Pixar's View-Master approach to 3-D after a sneak peek of Up earlier this week. "It adds to the richness, to the depth of the environments. You walk through the jungle, and you can see all of these layers going back. And the space when you set foot on the edge of that cliff along with Carl [the elderly protagonist], and he sees Paradise Falls, it adds a real richness there."
Thomas J. McLean is a freelance journalist whose articles have appeared in Variety, Below the Line, Animation Magazine and Publishers Weekly. He writes a comic book blog for Variety.com called Bags and Boards, and is the author of Mutant Cinema: The X-Men Trilogy from Comics to Screen, forthcoming from Sequart.com Books.