Computer software and tools allow for all kinds of live-action or CG effects to be composited into stop-motion to embellish shots or add any elements needed to tell the story. Effects such as smoke, water, fire, explosions, or gun-muzzle flashes can be downloaded or purchased as QuickTime files through various websites or service companies. These effects will typically be shot against a black background that is pre-keyed with an alpha channel. This way, if you simply drag them into a timeline in Premiere or After Effects, they can easily be laid on top of any other movie file with the background being automatically transparent. In many cases, they will then need to be re-positioned and modified to line up and match with your scene.
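Under the hood, laying a pre-keyed effect over another clip is standard alpha-over compositing, which the host application performs per pixel. A minimal sketch in Python with NumPy (the arrays here are hypothetical stand-ins for an effect frame and a background frame, not tied to any particular effects package):

```python
import numpy as np

def alpha_over(fg_rgb, fg_alpha, bg_rgb):
    """Composite a pre-keyed effect (straight alpha) over a background frame.

    fg_rgb, bg_rgb: float arrays in [0, 1], shape (H, W, 3)
    fg_alpha:       float array in [0, 1], shape (H, W, 1)
    """
    return fg_rgb * fg_alpha + bg_rgb * (1.0 - fg_alpha)

# A bright effect pixel at 60% opacity over a dark set pixel:
fg = np.full((1, 1, 3), 1.0)   # effect layer (e.g., a fireball pixel)
bg = np.full((1, 1, 3), 0.2)   # background plate pixel
a = np.full((1, 1, 1), 0.6)    # alpha channel from the pre-keyed file
out = alpha_over(fg, a, bg)    # 0.6 * 1.0 + 0.4 * 0.2 = 0.68
```

Wherever the effect's alpha is zero, the background shows through untouched, which is exactly why a pre-keyed file can simply be dropped on a higher track in the timeline.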
This effect was used for my two-character dialogue scene that is featured in Chapter 7: Character Animation and viewable on the accompanying CD. Searching through various movie files of fireballs, I found one that was suitable for the effect of the monster shooting fire out of her mouth. The movie file itself had the fireball shooting upward in the middle of the frame, so it obviously needed to be rotated and re-positioned to shoot diagonally off the right side of the screen. This was all done in After Effects and lined up to match the monster’s mouth at the proper frame in the animation. Initially, the edge of the fireball was a flat line based on the bottom edge of the movie frame (Figure 9.51), so the shape was modified using a mask (Figure 9.52). The mask could change shape and essentially be animated in every frame to get the proper shape for the overall effect (Figure 9.53). Two copies of the same fireball movie were ultimately mapped over each other, rotated, and blended to give all edges of the fireball some variety and texture. (Compositing and screen grabs for the fireball effect in Figures 9.51 to 9.53 courtesy of Gautam Modkar.)
Online resources where you can find effects to composite into your own stop-motion films include:
www.stopmotionmagazine.com (under Free Stuff)
In addition to compositing in live-action or CG elements that are pre-photographed, it is possible to simply draw stylized effects right over your animation frames, such as lightning bolts, laser blasts, or anything else that fits your scene. This can be done easily in newer versions of stop-motion software programs or externally in Photoshop. It can also be done in TVPaint, a program used primarily for drawing 2D digital animation. TVPaint can also be used for shooting stop-motion very effectively, and all of its drawing tools can be used right on top of the stop-motion images. You can easily add hand-drawn effects, smear your stop-motion images, paint over them, blend the edges of seams on your puppet, and do a variety of other creative tricks.
Rig and Shadow Removal
Making a puppet fly is a trick that has been accomplished with several different methods over the years. Often, the puppets would be flown on invisible strings, stuck to a plate of glass, or suspended by a rod holding them up from behind, where the camera would not see it. These methods can still be used today, but in most cases, a rig is simply placed visibly into the frame to hold up the puppet and is digitally erased from each frame of the animation afterward. This makes the animation process go much more quickly because you don’t have to worry about concealing the tools that are suspending the puppet. The post-production work can become tedious and time-consuming, but this depends on the length of the shot and how many frames need the rig removed.
One of the most straightforward ways to remove a rig from your stop-motion frames is simply to have a clean background plate prepared in addition to your animation frames. If you are shooting on any kind of set, shoot some frames of an empty set without the puppets in it and set those frames aside to use as a clean background plate in post. In the animation I did for the Thunderbean Stop-Motion Marvels! DVD, there were several frames of an empty stage at the beginning, and the entire scene was shot with a white limbo background. This made it pretty easy to select a background plate, and I would open this in Photoshop along with each of my animation frames (Figure 9.54). The next step is to paste the animation frame as a separate layer over the clean background plate (Figure 9.55). Then, with the animation layer selected, the eraser tool is used to simply erase the rig out of the frame, and the clean background plate will show through (Figure 9.56). It is best to make the brush size smaller and to use a hard edge for delicately erasing the rig at the edge of the puppet itself. Then, you can make the brush a bit larger for quicker removal of the rest of the rig. Once it is all erased, each frame is complete, with the puppet suspended in air (Figure 9.57), and they can be flattened to go back into the animation sequence.
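The clean-plate workflow above boils down to a per-pixel replacement: wherever the rig is selected, take the pixel from the empty-set plate instead of the animation frame. A sketch with NumPy (hypothetical arrays; in Photoshop the layer stack and eraser tool do this for you):

```python
import numpy as np

def remove_rig(anim_frame, clean_plate, rig_mask):
    """Replace rig pixels in an animation frame with the clean background plate.

    anim_frame, clean_plate: (H, W, 3) float arrays of the same scene
    rig_mask: (H, W) boolean array, True where the rig was erased/selected
    """
    out = anim_frame.copy()
    out[rig_mask] = clean_plate[rig_mask]  # plate shows through erased pixels
    return out

# Tiny illustration: one "rig" pixel replaced by the plate.
anim = np.zeros((2, 2, 3))
anim[0, 0] = 1.0                          # rig pixel in the animation frame
plate = np.full((2, 2, 3), 0.5)           # clean background plate
mask = np.zeros((2, 2), dtype=bool)
mask[0, 0] = True                         # select only the rig pixel
fixed = remove_rig(anim, plate, mask)
```

This is why the clean plate must be shot with the same framing and lighting as the animation: the replaced pixels only blend in if both images line up.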
Some of the trickier frames to work with on the Thunderbean project were those where the shadow of the rig needed to be removed but the shadow of the bean remained in the frame. Because the shadow was a little fuzzy, it was difficult to tell exactly where the edge of the bean’s shadow was. In the last few slow-in frames, it was also difficult to keep the shadow from jittering. To help soften the effect of the shadow’s edge, I used a feathered eraser tool instead of a hard-edged one and played around with the edge in the various frames until I got it to look right (Figure 9.58).
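A feathered eraser is the soft-edged version of the same replacement: instead of a hard boolean selection, a mask with values between 0 and 1 cross-fades each pixel between the animation frame and the clean plate. A sketch of that idea (hypothetical arrays; the 0.5 value below stands for a pixel halfway into the feathered edge):

```python
import numpy as np

def feathered_erase(anim_frame, clean_plate, soft_mask):
    """Erase with a soft-edged (feathered) mask.

    soft_mask: (H, W) float array in [0, 1];
    1.0 = fully replaced by the clean plate, 0.0 = untouched.
    """
    m = soft_mask[..., None]  # broadcast over the RGB channels
    return anim_frame * (1.0 - m) + clean_plate * m

# A shadow pixel half-covered by the feathered edge:
anim = np.full((1, 1, 3), 1.0)   # animation frame pixel
plate = np.zeros((1, 1, 3))      # clean plate pixel
soft = np.full((1, 1), 0.5)      # halfway into the feather
blended = feathered_erase(anim, plate, soft)
```

Because fuzzy shadow edges have no crisp boundary, this gradual cross-fade is what keeps the repaired area from popping against the untouched frames.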
Removing a rig from a stop-motion scene is now a built-in tool in some newer versions of certain frame-grabbing software programs, which helps avoid doing it in another program like Photoshop. Alternatively, a rig can also be removed using masks and alpha channels in After Effects. Another alternative to using one still frame as a background plate for rig removal is to shoot a series of frames of the clean background plate and place them into another layer under the animation with the rig in it. The rig can be masked out, and underneath will be an actual movie sequence of the clean background plate. The advantage of this approach is that any lighting changes or pixel fluctuations in the scene will also be present in the background sequence, so there won’t be any noticeable difference between it and the animation frames. A single still background plate, by contrast, risks standing out as a frozen image because it lacks the subtle noise that is present in the sequential animation frames.
Being able to erase or mask out parts of an image in stop-motion also comes in handy for fixing mistakes that occur on set while shooting. One mistake that can occur in the middle of a stop-motion shoot is the animator’s shadow flashing into the frame. Ideally, you should be standing in exactly the same spot each time you capture a frame, with your shadow completely free of the camera frame. In the heat of the moment while animating, though, it is common to forget this and have certain frames where your shadow creeps into the shot. Unfortunately, this happened to me a few times while shooting my two-character dialogue scene. However, using After Effects, these problem frames were identified and noted as to how much of the frame had a shadow flash into it. A mask was created from some held frames that didn’t have a shadow flashing into them (Figure 9.59), and this mask could be composited over the problem frames (Figure 9.60). The edges of the mask were feathered slightly to help blend them into the scene, and then all of the shadow flashes were gone. (Compositing and screen grabs for the masks in Figures 9.59 and 9.60 courtesy of Gautam Modkar.)
Motion blur is a favored technique of stop-motion animators for replicating the smooth movement of live action in their work. Part of the reason that older stop-motion films always had a jerky quality to the movement was that every frame was always in focus. An even bigger part of the jerkiness, however, was the distance between frames and poor registration of the positions in relation to the speed of the movement. If the distance between two positions on a fast movement (a sword swooping through the air, for example) was too far apart, a strobing effect would occur because the eye was not able to fill in the gap between those two very clear images. If that same fast motion occurred over just a few frames captured in live action, it is likely that some of the frames would be blurred if studied frame by frame. In Chapter 4: Digital Cinematography, I went over a few techniques for achieving motion blur on the actual stop-motion set. In this chapter, I will present a few examples of ways to get motion blur into your animation in post-production.
One really interesting method for creating an illusion of motion blur was relayed to me by Ron Cole. I noticed his work on In the Fall of Gravity had a very smooth, ethereal quality to it, so I asked him if he used any particular motion blur technique. He told me about a relatively simple method he used that was actually borrowed from an old film technique. The effect is one of blending the frames to suggest a look that isn’t really there as a blur but makes the animation feel much smoother. Ron created at least three copies of each animation scene; he then removed the first frame from the first copy, the first two frames from the second copy, and left the third as-is. These copies were layered together in QuickTime Pro, and the opacity was altered in each of the layers. That way, each frame showed three images overlapped, with the one in the center the most visible and the before-and-after frames very transparent (Figure 9.61). This gives the illusion that one frame at a time is fading in and out, and the various degrees of opacity can be adjusted depending on the speed and quality of the animation. The multiple exposures typically show up more on a fast movement, but for slow movements, the technique can create a much more subtle motion blur effect. This same effect can be done easily in After Effects, TVPaint, or any other package that allows you to layer copies of the same sequence over each other and adjust the opacity.
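Layering time-offset copies at different opacities amounts to taking a weighted average of each frame with its neighbors. A NumPy sketch of that equivalence (the 0.25/0.5/0.25 weights below are illustrative values, not Ron Cole's actual opacity settings):

```python
import numpy as np

def blend_frames(frames, weights=(0.25, 0.5, 0.25)):
    """Blend each frame with its previous and next neighbors.

    frames: list of (H, W) or (H, W, 3) float arrays
    weights: (previous, current, next) opacities; should sum to 1
    """
    n = len(frames)
    out = []
    for i in range(n):
        prev_f = frames[max(i - 1, 0)]   # clamp at the sequence edges
        next_f = frames[min(i + 1, n - 1)]
        out.append(weights[0] * prev_f + weights[1] * frames[i] + weights[2] * next_f)
    return out

# Three 1x1 "frames" with pixel values 0, 4, 8:
frames = [np.full((1, 1), float(v)) for v in (0.0, 4.0, 8.0)]
blurred = blend_frames(frames)  # middle frame: 0.25*0 + 0.5*4 + 0.25*8 = 4.0
```

Raising the neighbor weights strengthens the ghosting on fast moves; lowering them gives the subtler smoothing described for slow movements.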
For a more realistic motion blur applied to certain frames or every frame of an animation sequence, there are tools and plug-ins like ReelSmart Motion Blur for After Effects, which will do the job nicely if you have the budget for it. Other simple techniques can involve using Photoshop to add an overall blur to an entire still image, using a blur effect like Gaussian blur and adjusting how extreme you want it. Another Photoshop tool that I have used for creating blurred frames is the smudge tool, which can be dragged by hand over any part of the puppet where you want the motion to blur, as in a fast, snappy action, for example (Figure 9.62). Whatever technique you use, the important thing to realize is that an effective blur should follow the object’s path of action. If you are blurring an arm moving upward in a sharp movement, try to smear that arm so that the blur is trailing downward in the opposite direction, with a smaller amount of smearing in the direction the arm is going.
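The smudge tool is a manual gesture, but the trailing-smear principle can be illustrated programmatically: stack copies of the frame offset along the trail direction (opposite to the motion) with decaying weights. A simplified NumPy sketch of this directional smear (hypothetical parameters, not any plug-in's actual algorithm):

```python
import numpy as np

def trailing_blur(frame, step=(0, 1), taps=4, decay=0.5):
    """Smear a frame along a trail direction with decaying weights.

    step: (dy, dx) offset per tap; point it OPPOSITE to the object's
          motion so the blur trails behind the movement.
    taps: number of offset copies; decay: weight falloff per tap.
    """
    weights = decay ** np.arange(taps)
    weights /= weights.sum()              # normalize so brightness is preserved
    out = np.zeros_like(frame, dtype=float)
    for k, w in enumerate(weights):
        out += w * np.roll(frame, (k * step[0], k * step[1]), axis=(0, 1))
    return out

# A single bright pixel smeared rightward (as if the object moved left):
frame = np.zeros((5, 5))
frame[2, 2] = 1.0
out = trailing_blur(frame, step=(0, 1), taps=3, decay=0.5)
```

Setting `step` along the path of action, with the strongest weight on the current position, mirrors the advice above: most of the smear trails behind, with only a little leading the movement.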
Many of these effects for green screen, rig removal, masking, and motion blur, as well as other innovative techniques for stop-motion, are demonstrated beautifully together by Patrick Boivin in some of his YouTube videos that break down the process of his entertaining short films. Visit his YouTube page (http://www.youtube.com/user/PatrickBoivin) and, within the Stop-Motion Animation playlist, check out the “Making of” videos for Bboy Joker, Jazz with a General Problem, and Black Ox Skateboard. The process is described in a very entertaining way, and the shorts themselves are fantastic to watch.
Eye Compositing Effects for Madame Tutli-Putli
Madame Tutli-Putli (Figure 9.63) is an Academy Award–nominated short film from 2007, directed by Chris Lavis and Maciek Szczerbowski for the National Film Board of Canada. The film used atmospheric lighting and intensely detailed puppet animation to tell the story of a young woman who takes a suspenseful journey aboard a train at night. The film amazed audiences worldwide, not only because of its cinematic resonance and story, but also because of a particular effect in the eyes of the puppets. The eyes were actually made up of video footage of real human eyes, which were painstakingly composited onto the faces of the puppets. The effect and technique for compositing the real eyes into the stop-motion frames was conceived and executed by artist Jason Walker over a period of 4 years from concept to final result. The innovation behind this technique has certainly advanced the art of stop-motion animation to a whole new level in terms of performance and technical mastery.
I asked Jason Walker himself to share the process of his technique for Madame Tutli-Putli and how the project got started:
Around the year 2000, I started playing around with computer animation and got to know the film’s directors, Chris and Maciek, who were primarily doing illustration and animation at that time. I became their post-production artist on various projects, including a commercial we did for the Drive-Inn Channel in Toronto, where they had animated a stop-motion mouse. I ended up tracking and positioning a singing mouth onto the puppet, which did have eyes, but only a tracking dot where his mouth was. I tried a technique of having the puppet move in only two major positions and tracking a 2D shape onto a 3D shape so that it looked like it was turning along with it. It was all set to the beat of music, and it worked really well.
Later, we found ourselves having a meeting to discuss a project for the National Film Board and what we could do. This was around the time that Peter Jackson’s Lord of the Rings had come out, and everyone was amazed by the effects for Gollum, so we joked that we needed to create a “poor man’s Gollum” for our film. I had an idea that had been in my head for a long time, since I was about 14 years old. When I was a school kid, we had a project where we had to make a papier-mâché head around a balloon, and then pop the balloon to create a mask. I had the idea that instead of painting flesh tones and eyes on it, I would paste a collage of magazine clippings on it for skin and the facial features. When it came to the eyes, I found a Vogue magazine cover and glued the eyes onto the mask. Then, I had a thought that if I were to create an animation of this mask with these photographs stuck onto it, every time I moved the head, I would need to find a different set of eyes that were set at a different angle. This idea from my childhood came back to me at that meeting with Chris and Maciek—to shoot live-action eyes and composite them onto a puppet.
I asked Chris and Maciek to shoot some simple moves of a test puppet with blank eyes and did a test with filmed footage of actress Laurie Walker. Three basic steps were required to try to make it work: film the actress, track the puppet, and stabilize the footage of the eyes to stick onto the puppet face. Luckily, after about a month of working on it in my spare time, the test worked (Figure 9.64).
The directors were blown away by it, but I told them if they take this farther, they needed to make sure the puppet didn’t move around very much. I went away, and they got approval from the National Film Board to make the film. Coming back a month later, I saw their footage of all this puppet movement, with flashing lights and shadows from the moving train scenes. I started thinking, “I hope my method will work for this!”
Based on the timing and movement of the puppet animation, on the live-action set, we worked together to simulate the light flashes and direct Laurie to mimic the head movements. She had make-up and tracking markers applied to her face for matching to the puppet, and she was told exactly how to move her head and directed on the acting and emotion of her eyes (Figure 9.65). I tried to keep her on track with the correct movements and orchestrated the lighting, and there was no room for error as scenes became more complicated.
When I had the final eye take, I would bring it into the computer and try anything to make it work. I had created a timeline chart in After Effects and nicknamed it the “Wunderbar” (Figure 9.66). Once I had been given the puppet footage, I would analyze what was a head move and what was a camera move, and indicate each on this timeline as a different color. This way, I could see a separation between what was a move and what wasn’t. Also, for moments when she would encounter a light or a shadow, the Wunderbar would record what kind of light it was and how long it lasted.
On the stop-motion set, there were tracking dots for the eyes built onto the face of the puppet. You would think that would be helpful for all the stabilization, but the dots were only used to track the puppet so that I could adhere a mask layer to the face. I had the eye layer separate from a layer of masks that cut the eye out (Figure 9.67).
However, when it came time to place the eye footage onto the puppet, there was no way to do it except by hand. That was the most intricate part; the computer helps you organize your layers, cut masks, and feather the edges, but the computer has no idea what an eye is, and it has no idea of the subtlety of a human eye in the area of a human face. When it comes to visual effects, one level is to make it flawless, but the other level is to convince your brain in a way that you don’t have to think about it. A bad composite is when a scene seems to look right, but the brain tells you it doesn’t. When it comes to human eyes, there is absolutely no room for error. When placing the eye onto the puppet in After Effects, if the eye was off by even a fraction of a pixel, it wouldn’t work.
So, I developed a system where for every frame, I would need to zoom all the way in, use the arrow keys to move the eye up and down, and then zoom out and see if there was any independent movement. This also had to be done for scale and rotation of every frame. Often, I would zoom in, move it over one pixel, and then zoom out, and it would be too far over. Then I wondered, “How can that be, if it’s only a pixel? How can I move it less than a pixel?” It started to become insanity at this point, but my solution was this: Let’s say my eye was in the right place on frame 10, and I move it over one pixel for frame 11, but it was too far. What I would do is put a point there, but then drag that point over to frame 12 so that frame 11 was right in the middle. To explain this further, let’s say you were on one side of a fence and you could only jump to the other side of the fence, but you wanted to be on the fence. You would build a wall so that when you jump over the fence, you would hit the wall before you could land on the other side, but at least you would land on the fence. That was the only way to make the eyes convincing, and the toughest part was this method of sub-pixel positioning so that it always looked like it was on the puppet.
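The “fence” trick Jason describes works because a compositor linearly interpolates a property between keyframes: keying a one-pixel move two frames later lands the intermediate frame on a half-pixel position. A sketch of the arithmetic (the frame numbers and x-positions below are hypothetical illustrations, not values from the film):

```python
import numpy as np

def keyed_position(key_frames, key_values, frame):
    """Linearly interpolate a position between keyframes,
    the way a compositing package fills in frames between two keys."""
    return float(np.interp(frame, key_frames, key_values))

# Key the eye at x = 100 on frame 10 and x = 101 on frame 12.
# Frame 11 falls halfway between the keys: a half-pixel move,
# even though each key itself sits on a whole pixel.
x_mid = keyed_position([10, 12], [100.0, 101.0], 11)
```

Dragging the second keyframe further away divides the per-frame step even more finely, which is how a position “on the fence” between two whole pixels becomes reachable.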
Another point to make is that this technique has sometimes been described as simply adding live-action eyes to puppets, so people think it’s not animation. This is partly correct, in that they are real eyes, but it’s not merely live action. When I paint portraits of children, they don’t sit still very long, so I shoot video of them, and afterward I can search until I get that one instance of the child’s face that is right for the painting. The same technique applied here: I would film the eyes and the actress going slower than the puppet, matching the accuracy of the movements, but not the timing. Often, I would film the actress moving at least six times, take these separate takes, and join the eyes together. I would bring in these sequences, which would add up to many hundreds of frames, but inside the scene of the puppet there might be only 100 frames.
Then, it was a matter of selectively going through each frame of the eyes, using the time-remapping feature of After Effects, and sliding through the frames one at a time until you find the one frame that works for that frame of puppet. Essentially, it’s a re-animation of video stills—people may think it’s not animation, but it is! I have to animate the character that’s coming out of the eyes, and part of that is measuring how much she reacts to things by how many extra frames you have her looking there. You have the body language of the puppet and great performance of the actress, but there is also a third level where you can change the acting. Going back to Lord of the Rings again, it was noted that if the actor playing Gandalf didn’t look concerned enough, for example, they would use a subtle computer mesh to change his expression that much more.
I have ideas for some similar animation techniques I want to try with the entire face, but I will only do that with the right team of people. I often get approached to help other people with these kinds of techniques, and many people ask me if there is any new technology developed in the last 3 years that will help, like 3D scanning. None of that really helps because you still have to manually position a human eye one frame at a time onto the head, and only your brain will know if it looks right. Your computer is not going to understand human emotion—only your brain can do that. Seeing human eyes on a stop-motion puppet in Madame Tutli-Putli is something we’ve never seen before, and the effect of the film comes down to the fact that it looks like the eyes are there. Anything beyond that is a failure because it’s the eyes, and everyone in the world is an expert on this. If there is something wrong with the eyes, you know it right away. There is a quote where someone said, “If you don’t believe eyes hold the human soul, then take a picture of someone you love and stab it in the eyes with a pair of scissors. I bet you can’t do it.” That’s the power of eyes.
For more information on the film Madame Tutli-Putli itself, visit http://films.nfb.ca/madame-tutli-putli/.
Ken A. Priebe has a BFA from University of Michigan and a classical animation certificate from Vancouver Institute of Media Arts (VanArts). He teaches stop-motion animation courses at VanArts and the Academy of Art University Cybercampus and has worked as a 2D animator on several games and short films for Thunderbean Animation, Bigfott Studios, and his own independent projects. Ken has participated as a speaker and volunteer for the Vancouver ACM SIGGRAPH Chapter and is founder of the Breath of Life Animation Festival, an annual outreach event of animation workshops for children and their families. He is also a filmmaker, writer, puppeteer, animation historian, and author of the book The Art of Stop-Motion Animation. Ken lives near Vancouver, BC, with his graphic-artist wife Janet and their two children, Ariel and Xander.