Stop-motion animation began as a special effect that was essentially done in the camera. Before long, other processes were invented for creating further visual effects in post-production. These effects eventually evolved to the point that a distinction was needed between special effects and visual effects. Today, the term “special effects” refers to practical effects that are done on set or in front of the camera during production. “Visual effects,” on the other hand, are created in post-production, although in some cases during the film era the elements were prepared or created within the camera itself. The basic principles of many of these visual effects have remained the same, but the tools used to create them have certainly changed. What used to be done with a great sense of tedium on a single strip of film can now be done with a great sense of tedium in the computer. Today’s digital tools still present challenges and require just as much patience and skill, and they range from very basic to much more complex.
As filmmakers become more savvy and higher-end tools become more widely available to the average person making homemade stop-motion films, the creative possibilities open up a whole new world of potential. Some in the stop-motion community even believe that today’s digital compositing tools can be used to bring back the classic effects used in Ray Harryhausen’s films, and they are working to bring that genre into the modern era. Considering the level that stop-motion and visual effects have each reached on their own, I think the potential behind this pursuit is pretty exciting. Harryhausen’s films have inspired an entire industry, so it makes sense to continue that sense of inspiration for future generations. The goal of any film should be to inspire, whether through a moral message, a good story, or simply the urge to create more films. Ultimately, visual effects should not be used simply for their own sake, but to allow for more creative control over the performance or look of a scene. The effects should become transparent to the audience and should not draw attention to themselves; they should always serve the story. This chapter will show some techniques that can be used to combine stop-motion animation with other elements, whether live action or digital.
To better understand how visual effects are done today, it helps to understand a little bit about how they were done in the “old school,” before modern digital tools were available. For stop-motion, a good number of these effects came to fruition in the original King Kong, which brought together several different processes for marrying animation with live-action footage. One of the most basic compositing effects that can be done on film is a split-screen matte shot. It is so basic that I used it myself many years ago on my student film, Snot Living, at the University of Michigan, which I shot in 16mm. For a shot where my live actor, Brandon Moses, stared at the animated clay puppet in the same shot, I simply framed up my shot and attached a glass plate to the camera lens with poster putty. In the area where I wanted the puppet, I masked out that part of the frame with black paper on the glass, creating a matte (Figure 9.1). The black matted area would not be exposed on the film, but the rest of the frame would. In the area surrounding the matte, I shot Brandon in live action. Next, I had to rewind the film to the same frame where I started, cover up the rest of the frame with black paper, and remove the previous matte, essentially reversing it (Figure 9.2). Then, in this area, I shot my animation through another pass on the same frames in the camera. After sending the film to the lab and getting it back, both exposed elements were blended together into the same frame (Figure 9.3). The risk in using this technique was that if something went wrong with either side of the matte, the whole shot would need to be re-done.
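The split-screen idea translates directly into digital compositing: a hard matte decides which region of the frame comes from each pass, just as the black paper on the glass decided which region of the film was exposed. The following is a minimal sketch of that logic using NumPy arrays as stand-in frames; the frame size, colors, and split position are arbitrary illustrations, not values from this chapter.

```python
import numpy as np

def split_screen(live_action, animation, split_col):
    """Combine two frames along a vertical matte line.

    Pixels left of split_col come from the live-action pass;
    pixels from split_col onward come from the animation pass --
    the digital equivalent of matting with black paper on glass
    and then reversing the matte for the second exposure.
    """
    assert live_action.shape == animation.shape
    matte = np.zeros(live_action.shape[:2], dtype=bool)
    matte[:, split_col:] = True            # True = take the animation pixel
    # Broadcast the 2D matte across the color channels.
    return np.where(matte[..., None], animation, live_action)

# Two solid-color "frames" stand in for the two exposures.
live = np.full((4, 8, 3), 200, dtype=np.uint8)   # bright live-action side
anim = np.full((4, 8, 3), 30, dtype=np.uint8)    # dark animation side
frame = split_screen(live, anim, split_col=5)
```

Unlike the in-camera version, nothing is destroyed if one side goes wrong: each pass lives on its own layer and the matte can be redrawn at any time.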
I used the same technique for another shot in the film, where Brandon gets hit in the head and three tiny versions of the clay puppet spin around his head (like in those old cartoons where little birds or stars would spin around a character’s head after a serious injury). In this case, the matted area where the animation happened was a tiny rectangular space near the top of the frame (Figures 9.4 to 9.6).
The extra trick with this shot was that because most of the frame was matted out, I was able to rig a large set in the same place that Brandon had been lying earlier. Because I wanted the puppets to look like they were spinning in mid-air, I placed them on a horizontal sheet of Plexiglas so that the background would show through. The Plexiglas was held up (precariously) by two footstools and a stack of phonebooks on each side to bring it up to the level of the matte window. This whole set-up actually came crashing down in the middle of my animation, but luckily I was able to set it back up and place the puppets back where I thought they had been. Surprisingly, it worked, and I didn’t have to re-shoot anything, which was a complete fluke and stroke of dumb luck. (Snot Living can now be found on YouTube by typing the title in the Search window.)
The limitation of the split-screen matte is that the live-action and animated elements in each half of the shot cannot cross the matte line. If they do, they will be cut off. For this reason, the effect can only be used for certain shots where the two elements don’t need to cross over each other. For shots where a stop-motion puppet needs to move across a live-action frame or interact with it, Hollywood movies have used rear projection, a technique in which a puppet is animated in front of a movie screen onto which previously shot live-action footage is projected one frame at a time. The puppet was moved to match the background, the camera took a frame, the rear projector advanced to the next frame, and the process repeated. For any moments of interaction, the puppet was simply positioned to match up to the live-action footage behind it. This was the basic premise of Ray Harryhausen’s Dynamation process, which would also be combined with matting out any foreground elements in front of the puppet and matting them back in through another camera pass. When you understand how they did this, it makes watching those old Harryhausen films that much more awesome.
In other situations, a traveling matte can be created. This technique requires more steps and passes of the film through the camera, often done through bi-packing two strips of film together to create the various composites. One of these methods, the Williams process (named after its inventor, Frank Williams), was used for some shots on King Kong and other films. To illustrate this process for black-and-white film, I put together a digital re-creation of a film matte using a miniature dinosaur and a photo of Vancouver. First, the stop-motion puppet could be shot against a neutral or black background (Figure 9.7). The actual film in the camera, however, would capture a negative image of what was on set, creating a negative puppet against a white background (Figure 9.8). This strip of negative film was printed against another strip of high-contrast film stock, which created a silhouette of the puppet and left the background transparent (Figure 9.9). A live-action background would be shot separately and developed into a positive print. This positive print was re-printed behind the high-contrast silhouetted image of the puppet (Figure 9.10), so that it showed through the transparent negative space around it. The result was a new negative of the live-action background with a transparent shape of the puppet cut out of it (Figure 9.11). This negative was then combined with the original negative of the puppet, which would fit exactly into the transparent shape (Figure 9.12), and then developed into a new positive image of both elements (Figure 9.13).
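The optical steps above can be mimicked digitally in a few lines: threshold the puppet against its dark background to get the high-contrast silhouette, punch a puppet-shaped hole in the background plate, then fit the original puppet element into the hole. This is a rough grayscale sketch of that chain, not a recreation of any real film pipeline; the threshold value and the tiny test images are arbitrary.

```python
import numpy as np

def williams_composite(puppet_plate, background, threshold=40):
    """Digital sketch of the Williams traveling-matte idea (grayscale).

    puppet_plate: puppet shot against a near-black background.
    background:   separately shot live-action plate, same size.
    """
    # High-contrast "silhouette" matte: True where the puppet is.
    silhouette = puppet_plate > threshold
    # Print the background through the transparent surround,
    # leaving a puppet-shaped hole...
    holed_bg = np.where(silhouette, 0, background)
    # ...then fit the original puppet element exactly into the hole.
    return np.where(silhouette, puppet_plate, holed_bg)

puppet = np.zeros((4, 4), dtype=np.uint8)
puppet[1:3, 1:3] = 180                      # bright puppet on black
bg = np.full((4, 4), 90, dtype=np.uint8)    # flat background plate
comp = williams_composite(puppet, bg)
```

Note how the matte and its inverse play the roles of the alternating negative and positive strips: each `np.where` is one printing pass, and misalignment between the two "passes" is impossible here because both use the same silhouette.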
With color film, this process gets much more complicated because it essentially deals with using a blue screen as the neutral background, filtering the lens with the same color blue, and running more strips of film with alternating negative and positive images repeatedly through an optical printer. An optical printer is a combined movie projector and camera that can create composites in a similar manner to the in-camera processes, which is how special effects were done up to the digital revolution of the past 15 to 20 years. The tricky thing about these methods, other than having to think about alternating positive and negative images that are backwards, is the reliance on exact alignment of every element. If one thing goes wrong, an entire composite needs to be scrapped and repeated. Although these exact methods mostly have been phased out in today’s digital filmmaking era, it is fascinating to look back at how these cinema wizards brought classic images to the screen with what they had. These guys were true technical magicians, and their innovations can help us better understand and appreciate the tools available to us now. Everything we can do now comes from the logic behind these techniques.
Today, we have a lot more freedom afforded by digital tools that can create seamless composites and work around many of the errors and setbacks that would occur from using film. They are essentially a combination of the foundations laid by the old-school film techniques and other developments in video technology that bridged the gap to computers. One common tool used in digital imaging today is the alpha channel, which essentially makes any part of an image transparent and allows another image layered behind it to show through. This is very much a digital extension of the transparent negative image from a strip of film, and it can be created for the entire background around a subject or as any shape within an image where a transparent area is wanted. Many compositing software programs used today also have the capability of creating masks that will cut or matte out any part of an image to combine it with another. Also popular is the option of chroma keying out a blue- or green-screen background and replacing it with a live-action or digital background. This has been used for matte work on films and is also used in video production for weather reports, talk shows, and special effects.
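A chroma key is really just an alpha channel computed from color: pixels close to the key color become transparent, and everything else stays opaque. The sketch below shows the bare idea with a simple color-distance test; real keyers in compositing software are far more sophisticated (spill suppression, soft thresholds), and the key color and tolerance here are illustrative assumptions.

```python
import numpy as np

def chroma_key(frame, key_color, tolerance=60):
    """Return an alpha channel: 0 where the pixel matches the key color,
    255 everywhere else."""
    dist = np.linalg.norm(frame.astype(float) - np.array(key_color, float),
                          axis=-1)
    return (dist > tolerance).astype(np.uint8) * 255

def composite(fg, alpha, bg):
    """Blend foreground over background using the alpha channel."""
    a = (alpha / 255.0)[..., None]
    return (fg * a + bg * (1 - a)).astype(np.uint8)

green = (0, 255, 0)
fg = np.zeros((2, 2, 3), dtype=np.uint8)
fg[:] = green                  # green-screen background
fg[0, 0] = (200, 50, 50)       # one "subject" pixel
alpha = chroma_key(fg, green)
bg = np.full((2, 2, 3), 10, dtype=np.uint8)
out = composite(fg, alpha, bg)
```

The subject pixel survives the key; everywhere the green screen was, the background plate shows through, exactly as with the transparent negative space on a strip of film.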
Split-Screen and Masks
The split-screen and traveling matte processes have transitioned into the digital era using the same principles from film, but obviously with more flexibility and creative options for the filmmaker. To demonstrate some very simple techniques that can be done for compositing stop-motion with live action, I’m glad to present some contributions by Vancouver-based independent filmmaker Rich Johnson. I discovered Rich’s films online and became a big fan of his hilarious web series My Friend Barry (http://www.myfriendbarry.com), which is about a character named Frank (played by Rich) and his little blue stop-motion friend Barry. Part of the charm of the series is its simplicity, including the subtle compositing effects that bring Barry into the live-action world. Frank’s dialogue is scripted but sometimes improvised, which allows for many possibilities for having the silent animated character Barry react to the action.
Many shots are done in a simple split-screen technique, where live action and stop-motion are shot as separate scenes and brought together into one shot. This can be done very easily in any non-linear editing program by applying a mask with an alpha channel to one of the scenes, and then layering them together in Premiere or After Effects. In this situation, the split-screen matte line still acts as a division where the two elements should not cross over each other (Figures 9.14 and 9.15).
Other shots require a little more work and planning in the compositing and layering to bring Barry into interaction with the live-action world. Here, Rich himself describes the steps he takes to accomplish this:
I start by locking the camera down, and shoot the live-action video with markers so the actors know where Barry is going to be when looking at him or following him as he moves. I also make a rough note about how long things take and what new improv comes out of the shoot so that I know where I need Barry to move, react, and look. After the live-action video is done, I use a remote to capture frames of Barry moving around with a clean background behind him. I also take one or two frames of the clean backplate with no Barry or actors, in case I need it for any holes and to mask bad reflections or unwanted shadows.
For a shot in Episode 1 where Barry comes out from under the bed and rolls in front of Frank, three layers are needed to make this comp work:
1. Stop-motion layer with Barry animated and saved out as a high-res MOV file at the same frame rate as my live-action plate. In this case, it was NTSC 29.97 (Figure 9.16).
2. Live-action video layer with Frank, shot using NTSC 29.97 frames per second, in standard definition (Figure 9.17).
3. Clean background plate in case it’s needed (Figure 9.18).
I import and/or capture the stop-motion and video layers into my editing program in this same order, with stop-motion on top. I use a temporary “garbage matte” (drawing a rough matte around the general area where Barry is) on my stop-motion layer so that I can see the video layer underneath. If you can’t make a temp matte, another method is to reduce the transparency. The key is to be able to see both layers so that you can match them up for your final edit before compositing them together. This is the most important step, and you need to lock down the edit in this stage because the last thing you want to do is go back and make changes. It's too much work to do that. Each layer is edited and timed out, the temp matte is removed, and stop-motion and video layers are exported as uncompressed files to my compositing software.
Then, I import the uncompressed files into compositing software the same way, with layers arranged top to bottom. I add a 2-pop* 1 second before and after each clip to help ensure that they are lined up.
[*Author’s note: A 2-pop is a sound tone one frame in duration that is typically placed 2 seconds before the exact start of a program for cueing purposes.]
Next, I mask out the stop-motion layer frame by frame as needed to reveal Frank in the video layer (Figures 9.19 and 9.20). For the mask, I only concentrate on the areas where Barry's layer passes in front of Frank's. The rest of the picture on both layers does not change from one to the other, so I don't worry about perfecting the mask in those areas. I finesse the mask by feathering the edges by two pixels or so. That softens up the edges of the mask and blends nicely with the video layer, making for a seamless composite. Now Barry passes in front of Frank (Figure 9.21). It's like magic!
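The two-pixel feather Rich describes is, under the hood, just a blur applied to the binary mask before blending, so the transition between layers ramps over a few pixels instead of cutting on a hard edge. Here is a rough sketch of that idea using a separable box blur in place of a dedicated feather tool; the mask shape, frame size, and feather radius are illustrative, and a real compositor would use a Gaussian and handle the frame edges more carefully.

```python
import numpy as np

def feather(mask, radius=2):
    """Soften a binary mask with a separable box blur of the given radius.
    (Zero padding dims the frame edges slightly -- fine for a sketch.)"""
    soft = mask.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    # Blur rows, then columns.
    soft = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='same'), 1, soft)
    soft = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='same'), 0, soft)
    return soft

def blend(top, bottom, soft_mask):
    """Per-pixel mix: soft_mask = 1 shows the top (stop-motion) layer."""
    return top * soft_mask + bottom * (1 - soft_mask)

mask = np.zeros((6, 6))
mask[:, :3] = 1.0                      # hard vertical matte line
soft = feather(mask, radius=2)
top = np.full((6, 6), 255.0)           # stand-in stop-motion layer
bottom = np.zeros((6, 6))              # stand-in live-action layer
frame = blend(top, bottom, soft)
```

With the hard mask, the composite would jump from one layer to the other between adjacent pixels; after feathering, the mask values fall off gradually across the matte line, which is what makes the edge blend invisibly into the video layer.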
For another scene in Episode 2, where a live-action hand comes in to wipe Barry’s face, the same technique is used. The only difference is that it's not Barry who is masked—it’s just his eyes. One mask for each eye means all reflection and shadows are real on the rest of his body, even in his eye sockets. I put on the actor’s coat and sweater and wiped Barry's face with my own hand. I made sure that Barry’s eyes were closed when I did this, so in the video layer, Barry's eyes are closed (Figure 9.22) and the only animation going on is his eyeballs (Figure 9.23). Sometimes, Barry's eyeballs were crooked, so I would grab the left eye, make a mask, duplicate it, flip it 180 degrees, and add it over the right eye. Now, I had two fixed eyes animating in sync. When done correctly, masks are very powerful for this type of work, and I use them for everything.
Sometimes, in production, I would choose to pose Barry in many different ways, animating his basic moves: looks right, looks left, looks up, looks down, blinks while turning, and blinks at camera. From that, I could make him do anything I wanted in the editing, and it also meant that I could improvise with Frank, which gave me tons of freedom in crafting the jokes and pacing of the show. In this case, I would only plan for his entrance or exit for the shot. Barry is made of Play Doh, which makes him tougher to animate and makes him look kind of lumpy and cracked, which is part of his charm. Sometimes, obstacles are good to have, and little mistakes can help shape the work into something new and original. As long as I stay true to that and don’t get too hung up on the details, the show’s overall character stays pretty consistent. Barry is an easy shape and has no mouth, so basic stop-motion with him worked perfectly for what I was trying to achieve and convey in my storytelling.
Ken A. Priebe has a BFA from the University of Michigan and a classical animation certificate from Vancouver Institute of Media Arts (VanArts). He teaches stop-motion animation courses at VanArts and the Academy of Art University Cybercampus and has worked as a 2D animator on several games and short films for Thunderbean Animation, Bigfott Studios, and his own independent projects. Ken has participated as a speaker and volunteer for the Vancouver ACM SIGGRAPH Chapter and is founder of the Breath of Life Animation Festival, an annual outreach event of animation workshops for children and their families. He is also a filmmaker, writer, puppeteer, animation historian, and author of the book The Art of Stop-Motion Animation. Ken lives near Vancouver, BC, with his graphic-artist wife Janet and their two children, Ariel and Xander.