In the latest excerpt from The Advanced Art of Stop-Motion Animation, Ken A. Priebe finishes his discussion of digital cinematography.
Once you have set up your sets, puppets, lights, and camera, the shooting process is relatively straightforward. You take a frame, move the puppet or object, take another frame, and move the puppet or object again. That is essentially what it’s all about, but at the same time there are plenty of options for embellishing your shots for a richer cinematic experience. Certain effects can be achieved by your camera right on your stop-motion set, with little to no additional work done in post-production. Post-production is the process of adding certain elements to your scenes after they have been shot, and is typically referred to in the business as simply post. When a filmmaker talks about doing something “in post,” including “fixing it in post,” it means that effect or fix will be done later. Fixes in post can include attempts to line up frames where the camera was accidentally bumped, for instance, or where there were fluctuations in the lighting.

Today’s digital tools give us more options for fixing and adding effects in post, but all the same it’s a good idea to avoid leaning on this as a crutch. Ideally, you want to shoot your stop-motion properly enough that very little post-production work is needed to fix mistakes. Effects are another story; there is a great deal of artistic and technical freedom allowed today that not only enhances the film itself, but also creates ease in production. (More detail on post-production effects is provided in Chapter 9: Visual Effects.) The decision of whether to create an effect in post or in camera during production will often depend on several factors—anything from artistic reasons to technical or budgetary restrictions. Most importantly, how you shoot your film and what kinds of effects you create are all determined by your story. Changes in focus, lighting, composition, or movement by the camera should happen because the story dictates it, not because of the technical “wow” factor behind it.
The filmmaking, in essence, should become transparent so that your audience becomes involved in the story and the characters.
A particular composition often seen in live-action films or still photography places a foreground subject close to the camera and another subject in the background or middle ground of the shot. This composition is often used for over-the-shoulder shots between two characters having a conversation, for example. If there is a considerable distance between the foreground and background subjects, and the camera is focused on the subject in the background, the subject in the foreground will be out of focus. Alternatively, if the foreground subject is close enough to the camera lens to be focused on, the background will be out of focus. If the focus shifts visibly between these two subjects in the middle of a shot, this is referred to as a rack focus. Aesthetically, a rack focus is used purposely to draw the audience’s attention from one subject to another within the same shot. In a live-action film, the rack focus is typically done by a camera assistant called a focus puller, who physically moves the focus dial on the lens while the cinematographer looks through the viewfinder. Since the focus puller cannot see what the final shot looks like, the start and end points of the rack focus must be determined in advance so that he can simply move the dial based on its numbered markings.
In stop-motion, a rack focus must obviously be done frame by frame, so often it will be the animators themselves who do this, along with animating their subjects. The same principle applies of knowing where the start and end points are on the focus dial, and moving between them incrementally from frame to frame. For stop-motion, of course, it is important to avoid touching the camera body itself, because this can create unwanted bumps and jitters in the shot when it is played back at speed. On big-budget stop-motion productions, shifts in focus are often handled by a motion-control system, where a computer programs the start and end points of the focus and moves between them infinitesimally on each frame, along with all other camera moves. Before motion control was an option, and even today for those who cannot afford a motion-control system, a rack focus could only be achieved by adjusting the lens by hand. This is still possible without disturbing the camera, but it must be done very carefully. According to stop-motion director of photography Pete Kozachik (full interview in Chapter 5: Interview with Pete Kozachik, ASC):
[A] solution that many people employ is to attach a stick (such as a chopstick) to the lens with hot glue. This provides a more accurate lever arm with more control, and doubles as a pointer to line up with calibrated marks on cardboard or tape around the lens. Another thing that helps is to include a slight pre-load from a rubber band so the lens can’t flop around, and it helps to move in one direction only.
This functionality of hand-animating a rack focus has been taken a step further by Brett Foxwell, a mechanical engineer, machinist, and stop-motion animator originally from Chicago. Brett devised a special focus-pulling mechanism (Figures 4.13 and 4.14) that he describes here:
For the rack focus, I attach a stiff metal strip sticking radially out from the camera's focus ring. Then a lead screw or a micrometer is mounted in a position such that its travel pushes the metal strip and turns the focus ring. The lead screw has a ball attached to the tip, and a spring pulls the strip onto the ball. The lead screw typically has an inch of travel for the typical rack focus shot. The rig is attached to the camera base, so a firm mounting and a very light touch on the lead screw helps. The focus ring is still a problematic aspect and is likely to be bumped during the animation, so I have had to re-start some shots or just live with a slight shift midway through the shot. Higher-quality lenses have sturdier focus rings, and the tension spring helps with this problem.
Figure 4.15 shows a series of frames that display the stages of a rack focus shot from Brett’s independent stop-motion short film Fabricated, a creation myth of the life-forms present on Earth after the age of man. His film is still in progress and has been in production for about 6 years.
If a subject you are shooting with a stationary camera moves while the shutter opens to take the picture, the result will be a blurred image of that subject. You may have seen this happen when taking pictures at a party or in some other situation: your friend was talking when you took the picture, or spun around quickly without realizing you were snapping the frame, and the result is that his head is all blurry. This is one example of unwanted motion blur in a picture, but in other cases there are aesthetic reasons for deliberately using the technique. You may have seen images of cars zooming down a highway at night and becoming long streaks of light caused by the headlights moving across the frame. This happens when the light moves in a consistent direction while the shutter stays open for a long exposure, typically a full second or more. A long shutter speed is the key to making this happen, and it’s important to remember to combine it with a high f-stop number (a small aperture), since light is entering the lens for a longer period of time and the image will otherwise overexpose. The effect is to artistically achieve a feeling of motion within one static image.
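The trade-off between shutter time and aperture can be sketched as a quick calculation; a hypothetical example (the metering values here are made up for illustration), assuming exposure is proportional to shutter time divided by the f-number squared:

```python
import math

def equivalent_f_number(base_shutter_s, base_f_number, new_shutter_s):
    """Find the f-number that keeps exposure constant when the shutter
    time changes (exposure is proportional to t / N^2)."""
    return base_f_number * math.sqrt(new_shutter_s / base_shutter_s)

# A scene metered at 1/60s and f/4 would need roughly f/31 to hold
# the same exposure for a 1-second light-streak shot
n = equivalent_f_number(1/60, 4.0, 1.0)
```

In practice you would also lower the ISO or add neutral-density filtering if the lens cannot stop down that far.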
For Fabricated, Brett Foxwell employed an artistic application of motion blur for a certain scene. For the effect of a surreal flame that is encountered by his puppet character, a pleated copper sheet was mounted on a motor shaft spinning along its vertical axis. The motor would be turned on to spin in front of the camera shooting a 1-second exposure, which made the sheet appear blurry while the puppet was static and stayed in focus (Figure 4.16). Between frames, Brett progressively cut the sheet apart and tacked on curved pieces of brass foil, continually bending and twisting them to create additional movement (Figure 4.17).
Motion blur can also be added to a puppet character or object that is meant to be moving very quickly across the screen, or as a method of making the animation appear smoother. If the individual frames of the animation are blurred, the shot can more closely resemble live-action photography, which will typically have blurred action whenever the motion being filmed is faster than the camera’s shutter. One simple method for achieving a blur effect on set is to place a sheet of glass directly in front of the camera lens and then use Vaseline to smudge the spot where the puppet is. K-Y can also be used, since it is water based and easier to wipe off the glass. The smudging needs to be removed and re-applied for each frame so that it follows the motion of the object. With this method your puppet or object remains static, but the illusion of an unfocused blur is created by the smudged glass. The alternative is to find a way to actually move the puppet while each frame is being taken. People have achieved this in various ways: attaching invisible strings to the puppet and yanking on them during capture is one method; finding a way to vibrate the set is another. Whichever method is used, the trick to effective motion blur is finding a way for the blurring action itself to follow the path of action along which the puppet or object is moving. If an object is moving diagonally from left to right, for instance, the blur should appear to trail behind it in that direction. Motion blur can also be applied to a puppet through various post-production methods, which are covered in Chapter 9: Visual Effects.
Camera Moves

Depending on the film project you are creating and the kinds of shots required, you may want to create shots with camera moves like trucking/tracking shots, pans, tilts, and any other variety of motion. For subtle camera moves across one shot, digital pans or zooms can also be created in post-production through After Effects or other programs. Pans done in post will change the framing of your shot, but the perspective of the shot will not change; you are simply shooting a wider composition and moving the dimensions of your screen around within that composition. With a zoom, you are doing the same thing, but be aware that there may be a loss of resolution quality when you zoom into your frame. If you are planning ahead for camera moves in post, this might give you reason to shoot your images in RAW format so that you reduce the amount of image-quality loss.
In a shot where the actual camera is moved around the set, the perspective will change throughout the shot, which gives a different cinematic effect. To accomplish this effect, your camera needs to be mounted on some kind of rig that can also be moved frame by frame, along with whatever you are animating. Usually the camera itself will be mounted on a base that can be moved forward and backward, or left to right. If you want the option of tilting the camera up or down, the base itself can be a geared tripod head with incremental-motion dials on it. As usual, you generally don’t want to touch the camera itself, but rather only move the track it’s attached to. It also helps to have a ruler or some kind of marking system for registering each tiny move you make to the track. Even a long strip of tape with marks drawn on it will work just fine; there should be a point on the base where the camera is attached to line up with each mark. Your camera move can be planned beforehand, especially if you are using an exposure sheet and know exactly how many frames long your shot is. You can plan where the camera move starts and when it ends, and also plan out a slow-out and slow-in. Some software programs now have a special calculator that will help you plan out how much to move your camera over any number of frames.
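The kind of move calculator described above can be sketched in a few lines; a hypothetical example, assuming a simple cosine curve for the slow-out and slow-in:

```python
import math

def camera_move_marks(total_distance_mm, num_frames):
    """Return the camera position along the track for each frame,
    easing out of the start point and into the end point."""
    marks = []
    for frame in range(num_frames + 1):
        t = frame / num_frames                    # progress from 0.0 to 1.0
        eased = (1 - math.cos(math.pi * t)) / 2   # cosine ease-in/ease-out
        marks.append(total_distance_mm * eased)
    return marks

# A 120mm dolly move spread over 48 frames (2 seconds shot on ones at 24fps)
positions = camera_move_marks(120.0, 48)
```

Each value is where the mark on the track should line up for that frame; the increments start small, grow through the middle of the move, and shrink again near the end.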
Camera rigs can be very simple or more complex, depending on your skill level, building tools available, and what kind of shots you ultimately want to create. You want to think about the camera moves that are needed to tell your story or achieve a certain effect, not just move the camera for the sake of moving it. One instance where you might want the option for a camera move would be a scene where a puppet character is walking through a tunnel or hallway, and you want the camera to follow him. This action was called for in the script for Brett Foxwell’s Fabricated, so he set up his camera rig to do this, with the camera suspended from above (Figure 4.18). Having the camera rigged from above allowed for it to move through instances where the floor is visible through most of the scene. The rig can also be interchangeably assembled to have the camera supported from below.
Brett describes the construction of his rig here:
The camera support bracket is constructed out of 1-inch by 0.5-inch aluminum bar stock. I machined several different lengths and drilled many through holes and threaded holes at 0.5-inch intervals. The camera sits on a 0.25-inch aluminum plate Swiss-cheesed with holes. The result is a somewhat modular system that can accommodate many different arrangements and set-ups. The camera rig is in turn attached to the overall set structure, which is a commercially available aluminum extrusion system called 80/20. This extrusion system is the backbone of the whole set-up. The visible set and the camera set-up are both attached to the extrusion framework, so everything is integral and quite resistant to jostling. Another important factor with all of the mechanical movements (the focus puller setup, the geared heads, and the dolly track) is to have all of the components biased in the direction they will be moving before you start animating them. When you are returning to the start point, go well past the start point, and then go forward in the intended direction, stopping at the start point.
On a larger set, your camera track can be built to move through the set itself to create a trucking shot that is level with the ground. (Although in terms of scale this would be equivalent to a camera on a tripod trucking through a real set.) This was the approach taken by a former student of mine named Lucas Wareing on his student film AVA, made at Emily Carr University of Art + Design in Vancouver. For one particular long trucking shot in the film, the script called for an establishing shot moving through a large set, with a puppet sleeping at the far end and the sun moving across the sky (Figure 4.19).
The camera track itself was relatively simple, consisting of long, flat pieces of wood (Figure 4.20). Two long, rectangular pieces are glued to a flat base, and the tripod head is affixed to a smaller piece of wood that fits snugly inside and can be slid back and forth. The track was designed with additional pieces that slot together like a puzzle, so that as the camera moves forward and reaches the end of the track, more pieces can be added and the move can continue (Figure 4.21). This also helps conceal the track at the beginning of the shot, since it is extended in steps outside the camera frame. This solution was also used because the shot included the camera tilting upward, which may have been easier to achieve with the camera mounted from below rather than above. The camera was moved forward by hand on the sliding track, and about midway through the shot the camera also began to tilt downward while still moving forward.
To figure out exactly when the camera tilt should start and get the desired effect, a virtual camera move was programmed beforehand using Final Cut Pro, as explained by director of photography Chayse Irvin:
Final Cut Pro 7 has the ability to create logarithmic curves over a timeline. Basically what I did was create a sequence that was the same duration as the shot. Since we were working on ones and not twos, a 23.98 timeline is perfect. I took a video-generated filter called "color," and I could manipulate the motion settings of that, using the scale function to animate the dolly and the rotation function to animate the tilt. I found the beginning and end marks by physically moving and setting the camera where I wanted it to begin and end. That gave me a measurement in distance, as well as a degree of tilt. I made those values my beginning and end in FCP over the 23.98 timeline and the duration of the whole shot, then guessed where I wanted it to accelerate or decelerate and applied those "curves" in the FCP motion tab. During animation I would just press the forward key, moving to the next frame. Then I would click on each text slug, and it would give me FCP's calculation of what the next frame's movement was. Then I applied that to what we were working with physically. The movement in the shot was 15 seconds, which equaled 360 frames of animation.
For another animated element in the shot, in the background there was a large plank of wood with a light attached to the end, which was meant to represent the sun moving across the sky (Figure 4.22). A large chart was drawn on a sheet of wood to which the plank was attached and could be moved frame by frame according to the timing marks drawn on the chart. The light itself, although visible to the camera, was not intended to be the actual sun in the film. An illustrated sun would be composited over it in post, so the animation of the sun light was only there as a guide for tracking it. On the set, between the background and the camera, a circular disk on a wire was positioned to line up with the sun in each frame, thereby covering it. The end goal of animating the sun light throughout the shot was simply to give the proper light and shadows moving across the set.
Animator Anthony Scott, animation supervisor for Corpse Bride and Coraline, designed and built a camera rig of his own (Figure 4.23) for a recent stop-motion music video he worked on, a collaborative project with artist K Ishibashi of the band Jupiter One. Anthony named his rig “the LumberFlex” (which started as a joke, but the name stuck) and designed it for shots that need to get close to the set and move through it. The wooden camera base moves along a track made of two pipes on a long wooden platform, which is hinged in the middle and essentially works like a teeter-totter. It’s weighted on the opposite end with about 20 pounds of weights to counter-balance the heavy geared head for the camera as it slides forward, and a bungee cord keeps everything from flying up if the camera is removed. Another device, a Model Mover (Figure 4.24), which incrementally pushes the platform upward by turning a wheel at the bottom, was added to the front for boom shots. For some tilt shots, Anthony would move the geared head and also attach a stick and a sheet of foam core to the back of the camera, marking the increments for the tilt motion on the foam core (Figure 4.25). All of the mobility for the LumberFlex, like any camera rig for stop-motion, is designed to be moved in increments frame by frame. The animation of both the camera moves and the objects on set was primarily shot on twos (2 frames per movement) for this project, which is surprising because the general rule for camera moves has traditionally been to shoot them on ones (1 frame per movement) to avoid a strobing effect. However, on Anthony’s previous animation work for the titles to United States of Tara (which won the stop-motion team an Emmy), the crew found that camera moves on twos actually looked better. More information on Anthony and K’s new animated music video project is included in Chapter 15: The Stop-Motion Community.
Finally, going from these rather advanced methods and large-scale rigs back to the very simple, filmmaker Patrick Boivin creates many of his tracking camera moves simply by attaching his camera to a miniature train track and pushing it along (Figure 4.26). His short stop-motion films, which have become a big sensation on YouTube (http://www.youtube.com/user/PatrickBoivin), such as Bboy Joker and Iron Man vs. Bruce Lee, also use a lot of dynamic camera moves that mimic a handheld quality. Patrick explains how he does this in stop-motion:
I do a lot on set with a classic photo camera tripod. There is a crank on the side that allows me to raise the head gradually. I also work with a digital camera with a much higher resolution than what I need at the final, so I can easily crop and move inside the image in post-production.
As these various examples show, no matter what tools or resources you have, a little creativity can go a long way when it comes to achieving the effects you want.
One of the biggest emerging technologies in filmmaking today is stereoscopic photography and projection. The term “stereoscopic” is the more technical term for what most people simply know as “3D.” The idea of projecting movies in 3D is nothing new; it had been experimented with since the beginning of film, but it first became popular in the 1950s. The way it worked was by rigging up two projectors, each running a duplicate print of the same film, synchronized exactly to the same frame. The projectors’ images were shown through polarized filters and lined up in such a way that both images were spaced slightly apart on the screen, creating a small overlap between them. When viewers put on the special 3D glasses (Figure 4.27), the glasses re-polarize the overlapping images to create the illusion of depth and of actions popping out from the flat screen. The overlapping images were an attempt to mimic the fact that our right and left eyes see any image from slightly different perspectives. (If you stop reading for a moment and look at any object close to you, first close your left eye, and then close your right eye. You will notice that the object shifts a little bit. These are the two views seen by each of your eyes, and your brain puts these images together to recognize the depth of what you see.)
3D movies in the 1950s came about mostly as a gimmick to increase declining movie attendance. Once television came along, people moved towards getting their entertainment and news from the comfort of home, rather than going to the movies. Seeing movies in 3D made the theater experience more of an event, something you couldn’t get from television. Most of the 3D movies were horror films like House of Wax and Creature from the Black Lagoon, with cheap thrills and chills to heighten the horrific effect. Because of limited projection technology and audiences complaining of headaches, the trend didn’t last very long, so movies went back to being projected normally. 3D movies emerged again for a while in the 1980s, when the availability of cable television and video rentals kept audiences at home rather than in the theaters. Once again, the trend died off until large-screen home theaters (and the practice of downloading movies onto computers and phones) became another threat to the movie-going experience. Today, many films (animated films in particular) are marketed and projected in 3D and 3D IMAX to bring people back to cinemas for a unique experience. Whether this is just another passing gimmick has yet to be seen, but the phenomenal success of James Cameron’s Avatar and the development of 3D televisions could mean that 3D is here to stay.
For the latest crop of animated films, their theatrical projection may be presented in 3D, but the films themselves are not typically made that way originally. Like the films before them, they are made with one camera and simply re-formatted for 3D projection. In 2006, Walt Disney Pictures re-issued The Nightmare Before Christmas to theaters in 3D by creating a digital copy of each frame for the overlapping image. The 3D formatting was done by a team of artists and technicians at Industrial Light & Magic, some of whom had worked on the original film. The original puppets were scanned into the computer and the sets re-created in a virtual 3D environment of featureless geometries for each scene of the film. Then each frame of the original film was digitally projected onto the geometry, the camera moved over slightly, and the frame re-photographed. This image would be shown as the right-eye image, while the left-eye image was the original version of the film. Viewing Nightmare in 3D in a properly equipped theater allowed for the detail of the hand-crafted sets and puppets to come forward in a way that brought more attention to their actual third dimensions.
Around this same time, production was going forward on Coraline, which would take the third dimension to another level of artistry and technology. The main difference was that Coraline was actually shot in stereoscopic 3D, in addition to being projected in 3D for certain screenings. Shooting in stereoscopic 3D means that instead of having one camera view that takes a flat picture of a three-dimensional scene, there are two images taken from different views of the same scene to mimic the different perspectives of our left and right eyes (Figure 4.28).
The distance between our eyes is referred to as the IO (interocular distance). This same sense of distance is mimicked by stereo photography—the greater the distance, the greater the sense of three-dimensionality and depth that will be created. However, setting the IO too far apart will also create a ghosting effect, where a double image shows up even with the glasses on and distracts the eye towards the edges rather than sustaining the illusion of one solid object on screen. Also, if the IO is too great, the effect will be much too intense when projected in 3D, which can cause major headaches for the audience. However, once the comfortable parameters for the IO have been set, animating that distance (having the IO incrementally change throughout the shot) will cause your scene to visibly stretch away from the audience, since the IO is what creates the depth of your shot.
Putting this principle into the context of stop-motion, in terms of how the viewer receives the projected image, the distance between our eyes is modest, roughly two and a half inches (about 63mm). The eyes of a typical miniature stop-motion puppet are much closer together, only a few centimeters apart. Therefore, when shooting at miniature scale for stop-motion, the two separate images must be taken at a correspondingly small distance apart to achieve a stereo effect that our brains can actually handle. On a miniature scale like a stop-motion set, two camera lenses cannot easily get as close together as mere millimeters or centimeters, so the solution is to use one camera on a slider that moves the camera back and forth (Figure 4.29). The animator positions the puppet, the camera takes a left-eye view of it, and the camera slides over slightly to take a right-eye view of the same puppet. The camera then slides back to the left-eye view in preparation for the next image, the animator moves the puppet, and the process repeats, with the camera taking two separate images for each frame of the animation. Stereoscopic films shot on a larger scale may combine two lenses in one camera, but with stop-motion being on a miniature scale, if you want to shoot in stereo, the slider is the better option.
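As a rough illustration of why the slider travel is so small (the numbers here are assumptions for illustration, not figures from any production), the offset can be thought of as a human interocular distance scaled down to the scale of the set:

```python
def slider_offset_mm(set_scale, human_io_mm=63.0):
    """Estimate the left/right slider travel for a miniature set by
    scaling a nominal human interocular distance (about 63mm)."""
    return human_io_mm * set_scale

# A 1:6 scale set would suggest roughly 10.5mm of slide between eye views
offset = slider_offset_mm(1 / 6)
```

In practice the IO is tuned shot by shot for comfort, as described above, rather than taken straight from a formula.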
Once you set your IO, the other step in figuring out how deep your scene will go in or pop out of the screen has to do with alignment of your shot. If you treat the screen as the middle ground of your shot, the trick is to have your left- and right-eye images line up with each other wherever you want the middle ground (also called a zero plane or zero parallax). In many cases, you may want to focus this middle ground on your puppet character or another object on screen. This way the background will appear to be deep behind the character, and if they stretch out their arm or throw something forward, for instance, this will seem to pop out at the audience.
One way to focus on the photo subject is to shoot your scene with convergence on the point where you want the middle ground. This simply means that in addition to the camera sliding back and forth, the camera is angled slightly inward on each side, like two eyes crossing towards each other slightly to focus on one point. The mechanics involved in shooting with convergence are more costly, because the slider not only needs to move the camera back and forth, but also turn the camera inward towards the subject. In addition to the technical end of figuring this out, the end results are pretty much locked in on set, with little room for adjusting in post. The other option, favored by most for stop-motion production (including on Coraline), is to shoot parallel, meaning the camera simply points straight ahead at the set while capturing the left- and right-eye images. Shooting parallel allows you to play around with the alignment in post, creating more freedom of choice in how much stereo you want to create. As far as your camera settings go, you can do things as you would on any other set, although keeping a deep depth of field, with everything in focus, will tend to enhance the stereo effect.
When you have left- and right-eye images shot and want to view them in 3D on your computer to test the effect, the simplest method is to create an anaglyph image that can be viewed with a pair of red-blue 3D glasses. The two images can be layered over each other in Photoshop, each on its own separate layer. Hide the left-eye layer; then for the right-eye image layer, double-click on the layer in the “Layers” window to bring up the “Layer Style” window. Under “Advanced Blending” are three checkboxes for the red, green, and blue channels of your image. Uncheck the “Red” box, and you will notice a color shift in the image to a bluish tone (Figures 4.30 and 4.31). Click “OK,” and with both layers visible you will see a double image; you can then move the right-eye layer around to find the alignment you want. Viewing this anaglyph image with your red-blue 3D glasses will show it in eye-popping 3D (Figure 4.32)! The same principle of creating an anaglyph image by turning off the red channel can be applied to an entire image sequence in After Effects, along with other options for adjusting the channels for the desired effect.
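The same channel trick can also be scripted for batches of frames; a minimal sketch using the Pillow imaging library (an assumption about the toolchain, not something used in the text), taking the red channel from the left eye and the green and blue channels from the right:

```python
from PIL import Image

def make_anaglyph(left, right, shift_px=0):
    """Combine left- and right-eye images into a red-blue anaglyph.
    shift_px nudges the right eye horizontally to place the zero plane."""
    left = left.convert("RGB")
    right = right.convert("RGB")
    if shift_px:
        shifted = Image.new("RGB", right.size)
        shifted.paste(right, (shift_px, 0))
        right = shifted
    r, _, _ = left.split()      # red channel from the left eye
    _, g, b = right.split()     # green and blue from the right eye
    return Image.merge("RGB", (r, g, b))

# Hypothetical filenames for illustration:
# anaglyph = make_anaglyph(Image.open("left_eye.png"), Image.open("right_eye.png"))
# anaglyph.save("anaglyph.png")
```

Adjusting `shift_px` plays the same role as dragging the right-eye layer around in Photoshop.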
When shooting in stereo, this obviously complicates the workflow of your digital images since you will have two different versions of each frame of animation you capture. For this reason, it’s a good idea to have some kind of a system for storing these images in two separate folders, ideally while you are shooting. It’s possible to shoot all your images in a row and sort through them later, selecting every other frame and copying them en masse to separate folders. However, if your software allows for automatically separating the left- and right-eye images, that might make things easier.
Independent stop-motion filmmakers Justin and Shel Rasch (Figures 4.33 and 4.34) have recently been working on a stereoscopic short film called Line in a studio they have set up in their garage, with the help of some consultants in the stereoscopic field. Here they reiterate the principles of stereoscopic production by describing their shooting process in detail:
Basically we have a tripod with a little motion-control device, with incremental numbers we can type in for how far to move the camera left or right. Then there is a little button where we press positive or press negative, for the right eye or the left eye. In the Dragon software we’re using, we take a shot and then the software has an “Exposure 2” layer for a second set of exposures into a different directory. We hit the “R” button for the right eye, and the camera moves over, takes a shot, and moves back to the left. We have to remember we’ve done this for every frame.
It’s all a distance thing, based on how far your character is from the camera, and how 3D you want it to look off the screen is the distance of how far you put your camera movement, either left or right. We’ve been experimenting with it in After Effects to see what it looks like. You can also choose where you want the middle ground to be, so basically you decide which part of your scene you want to be screen depth, and everything in front of that will come off the screen. Also, in terms of what you want to be behind the background and what you want in the middle, you can choose that for each shot.
When you have the two images and you put them on top of each other in After Effects, you can find the point in your animation (where, for example, the character is coming forward) where everything lines up perfectly with your character so there’s no double image. That’s called the zero plane, and you’ll see a double image in the background and foreground, which are the parts popping off the screen. You can choose how close to the camera you want that zero plane to be, and all the 3D is based off that.
For more about Justin and Shel’s lives and work, see the full interview with them in Chapter 14: An Interview with Justin and Shel Rasch. Also check out the files Justin Rasch_3D.mov and Justin Rasch_3D_2.mov on the accompanying CD, with a pair of red–blue 3D glasses on!
Whether shooting in stereo or not, I hope this chapter has helped you understand some basic things about how to set up for shooting stop-motion effectively. All things considered, once you know the basic fundamentals for your camera functions, you are free to be creative and play. When applying this creativity to a short film, though, make sure the effects and settings you experiment with serve your story first and foremost. Knowing how to use the technology to enhance the art and become part of the storytelling process should be your ultimate goal so that you can bring your audience through the story along with you.
Ken A. Priebe has a BFA from University of Michigan and a classical animation certificate from Vancouver Institute of Media Arts (VanArts). He teaches stop-motion animation courses at VanArts and the Academy of Art University Cybercampus and has worked as a 2D animator on several games and short films for Thunderbean Animation, Bigfott Studios, and his own independent projects. Ken has participated as a speaker and volunteer for the Vancouver ACM SIGGRAPH Chapter and is founder of the Breath of Life Animation Festival, an annual outreach event of animation workshops for children and their families. He is also a filmmaker, writer, puppeteer, animation historian, and author of the book The Art of Stop-Motion Animation. Ken lives near Vancouver, BC, with his graphic-artist wife Janet and their two children, Ariel and Xander.