All The World’s a Virtual Stage in Disney’s New Camera Capture System

Disney redefines animation layout, staging and visual development with their paradigm-changing virtual camera system.

Layout artist Terry Moews on the Camera Capture stage. All images ©2012 Disney. All Rights Reserved. Images courtesy of Disney Feature Animation.

When you consider Disney’s animated feature film legacy, it’s easy to overlook the studio’s history of camera technology innovation.  Going back to the 1930s, Disney introduced the multiplane camera, bringing an unprecedented visual depth to films like Snow White and Bambi.  In the 1990s, they introduced CAPS (Computer Animation Production System), a revolutionary digital version of the analog multiplane system that enabled integration of CG elements and camera movements into complex shots never before possible. With the release of Tarzan, Disney introduced Deep Canvas, which enabled the camera to move through a digital painting, allowing the audience, for example, to surf through trees alongside the King of the Jungle. 

With the release of Wreck-It Ralph, Disney’s camera technology focus has gone “virtual.”  The development and implementation of their new Camera Capture system represents a fundamental change in how animated films can be produced.  The proprietary tool suite provides artists a quick and easy way to visualize and stage environments, layouts, camera movements and the placement and interaction of visual elements within a scene.

The goal of the Camera Capture development project has been to replicate the feel of a live-action camera, providing higher-quality, more lifelike camera motion. Using rough layouts with simple geometry or more finished environments, Camera Capture lets artists see what it would be like to move through each world and create exciting camera moves to meet the demands of the story.

Software developer Alex Torija-Paris at one of the capture stage's workstation setups.

Audiences have become more sophisticated in the visual language of filmmaking, and they can tell when things look overly synthetic.  Even highly sophisticated CG camera rigs, with many different camera properties thrown together, don’t provide the same organic look as a real camera. A significant amount of keyframing is still needed to make the animation look real.

While Disney is certainly not the first studio to embrace virtual production systems and methodologies, their pipeline integration sets them apart.  Assets don’t need to be moved into a separate capture system within a separate pipeline that then requires re-integration of data back into the main production system.  The studio’s production pipeline has been built to fully integrate the Camera Capture system.  They’ve created the capture system within their existing pipeline, built around their common Maya backbone.  As a result, they can access all the assets used in their normal scenes as well as all the tools used in scene assembly for their normal shots.

More Cy-Bugs – I Gotta Have More Cy-Bugs

The capture system was used extensively within Wreck-It Ralph in both subtle and more overt ways.  Hero’s Duty stands out as a more overt use of the system, employing a more hand-held feel and a rougher, first-person point of view. Using the capture system to add shake and motion to the camera brings energy and excitement to different scenes, really making them come alive.

There are two facets to any new production technology.  First, there is the hardware and the software.  Then, there is the studio, the pipeline and the culture. Camera Capture represents a true blend of all these facets, bringing together new virtual production tools that enable a new way to visualize and create movement.  The cultural and creative implications are enormous.

The collaboration needed to design and build the system brought together people from every area of the studio – software and hardware systems designers, technical directors, interaction designers, layout artists and art directors to name a few.  Like most production innovations, the system is a work-in-progress, constantly in a state of updating and refinement.  The Camera Capture studio itself houses development systems right alongside the performance area, enabling ongoing iterative code improvements in real-time.

Three main areas of animation production utilize Camera Capture technology.  The first area is digital scouting, or “DigScout.”  This entails taking the director, the art director and the lead modeler, for example, and putting them right in the middle of the set, getting their input on what, and whom, should be where and when. As manager of the Camera Capture technology team Evan Goldberg explained, “When you go out on a live action set, you scout the location to find all the right vantage points. Now, we do it in the digital world. Now, we can have people who were never proficient in a 3D package such as Maya stop hovering over someone’s shoulder saying, ‘Oh, go over to the left.  Now look over there.’  Now, they can pick-up the virtual camera device themselves.  They can use this virtual viewport into the world and really explore the location in a way they were never able to before.”

The second area is rough layout and animatic, for traditional layout and a first pass at the blocking, additional staging of the characters and initial motion of the cameras.  This is where the layout artists, many who have backgrounds in live action and operated cameras, can make the first attempt at getting a nice organic camera motion. 

The third and last area is camera polish.  Once a scene has gone through animation, artists can make sure everything is exactly where it should be, following the characters through the staging of their final performance.

Lights, Virtual Camera, Collect Data!

Here’s how the Camera Capture system works. The action all takes place within a stage similar to what you’ve seen in countless motion-capture studios.  With Disney’s new system, through the use of a handheld or mounted “virtual camera” device, the operator controls a virtual object inside Maya. Movement is tracked using sensors within the performance area and fed into Maya.  The camera movement is captured in real-time within the virtual set.

According to Evan, you can use different input devices, including handheld and tripod-based, depending upon what type of motion you want – something fast, something smooth, something really precise. Once you’ve chosen your input device, you need to define your stage. Literally, you define where within the “action” you wish to be, from what size perspective and from what vantage point within the scene.  You set the scale of the environment.  You could be in the middle of a single room or an entire city. Sometimes a step on the stage translates to a step within the scene.  Sometimes a step translates into 300 yards of movement.  Whether you’re a plane flying over New York, or a mouse facing off against a cat, the environment is all based upon the scale of the defined stage.
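As a rough illustration of that stage-to-scene mapping, the scale is essentially a multiplier applied to stage-space motion. The step length and target distances below are made-up numbers for illustration, not values from the article.

```python
# Illustrative only: turn "what should one step on the stage cover in the scene?"
# into the multiplier applied to tracked motion. All numbers are assumptions.
STAGE_STEP_M = 0.75  # assume an operator step covers roughly 0.75 m on the physical stage

def stage_scale(scene_metres_per_step):
    """Multiplier applied to stage-space translation before it reaches the scene."""
    return scene_metres_per_step / STAGE_STEP_M

room_scale = stage_scale(0.75)      # 1.0  -> a step on stage is a step in the room
flyover_scale = stage_scale(274.0)  # ~365 -> a step on stage is about 300 yards over the city
```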

The system allows you to populate your virtual stage with actual production assets, including characters.  It also provides you with a set of “take management” tools, like those used by an editor, to comb through potentially hundreds of different takes and pull only the ones you want to use.  As Evan explained, with regards to assets used within the virtual set environment, there are many parallels to previs, where low-res assets are often used for proof of concept and then thrown away.  Camera Capture uses all real assets.  There isn’t a separate team creating assets specifically for virtual layout.  Optimization tools have been created to allow for faster playback, given Maya’s limitation in how quickly it can display such a large amount of high-res production data. 
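The article doesn’t describe the internals of the take-management tools, but the bookkeeping they imply, tagging hundreds of captured takes and pulling out only the keepers, might look something like this sketch. The schema, field names and rating scheme are invented for illustration.

```python
# Hypothetical sketch of take-management bookkeeping; the schema is an assumption.
from dataclasses import dataclass

@dataclass
class Take:
    take_id: int
    shot: str
    camera_node: str       # Maya transform holding the keyed camera move
    frame_range: tuple     # (start_frame, end_frame)
    rating: int = 0        # quick editorial mark made while reviewing takes
    notes: str = ""

def select_takes(takes, shot, min_rating=3):
    """Pull only the takes worth handing off for a given shot."""
    return [t for t in takes if t.shot == shot and t.rating >= min_rating]
```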

It’s a Great Big Virtually Beautiful Tomorrow

Camera Capture represents a significant paradigm shift in the way directors can lay out an animated film.  According to Terry Moews, a layout artist intimately involved in the system’s development, “Now, we can put a director virtually into a set and he can look 360 degrees in every direction and get a sense of the scale and placement of everything within the scene.  He can plan shots. We can hand him this device and we can watch him visualize the scene.  It used to be the other way around. They’ve always had to watch us. We were always their eyes and ears into the set.”

Terry went on to describe how changing assets within a virtual scene is a simple process.  A director can look at a scene and say, “Let’s take that guy out of this scene.  Now get a close-up.”  You can instantly capture camera positions and angles, record the camera arcs the director likes, and later integrate them with the actual scene.  The director can “bless” the layout, giving artists a firm position from which to create a scene.  This preempts, for example, a review session where the director might say, “Wow, this feels really cramped.” That’s all eliminated before it ever has a chance to happen.

As Terry explained, traditionally, the director gives layout some input and ideas, and then he won’t see any work for several weeks. The Camera Capture system lets the director and layout artists collaborate in a real-time virtual environment to make layout decisions together.  “Working with a director, we were walking through hundreds of shots in an hour, passing cameras, creating ideas, thinking about stuff.  Iteration, from a layout perspective, is our biggest achievement.  The faster we can iterate on story points, find the shots, find the ideas that we like, the better the movie.  When you have the director involved in that process, it’s just gold.” 

Though on Wreck-It Ralph the Camera Capture system was used primarily on Hero’s Duty, directors working on new projects at the studio are actively looking to use the system across their entire films.  As Terry says, “It’s coming.”

--

Dan Sarto is editor-in-chief and publisher of AWN.com.
