Cinesite and Rhythm & Hues combine to bring Marmaduke to life on the big screen.
Fox's big-screen adaptation of Marmaduke (based on the long-running comic strip) goes further in its hybrid animation approach than Beverly Hills Chihuahua or other canine movies.
Cinesite in London was called on to place CG faces over live-action animals, and Rhythm & Hues was tasked with creating fully CG dogs and cats for some of the more outrageous moments (such as dancing or surfing). Craig Lyn served as overall visual effects supervisor.
"Essentially, we've taken the pipeline that was developed on a similar movie, Beverly Hills Chihuahua," recalls Matt Johnson, Cinesite's visual effects supervisor. "But on Beverly Hills Chihuahua, it was a fairly subtle style of animation with some lip sync and eyebrow raises. In Marmaduke, since they got Owen Wilson, they wanted to see him as the Great Dane. So there was a lot more facial animation work to capture the essence of Owen in the performance. I'd actually supervised Shanghai Knights, so I spent months with Owen and knew what he was like, and what we did was get the team of animators in London, led by Alex Williams, to basically watch every Owen Wilson movie that they could find."
According to Johnson, Wilson has a slight asymmetric jaw roll and brow lift that Cinesite brought to the facial animation of the dog. This meant pushing the canine pipeline a lot further.
Cinesite used a hybrid technique combining fully textured and lit CG passes with parts of the original photography re-projected over the animated geometry. To create the CG faces of the different canine characters, Johnson's team built base head models in Autodesk Maya using photographic references of the dog actors. Blend shapes based on individual muscle shapes were then integrated into the rig using in-house tools and refined further during animation. A customized rig mimicking the muscle structure of a dog formed the primary layer, while a secondary layer mimicked the muscles of a human face. Cinesite additionally used RenderMan and Shake.
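The layered blend-shape approach described above follows a standard formula: each target shape stores per-vertex offsets from the base head, and the rig sums the weighted offsets from every active shape. The sketch below illustrates that general technique in plain Python; the function, shape names and vertex data are hypothetical and not Cinesite's in-house tools.

```python
# Minimal blend-shape evaluation: final = base + sum(w_i * (target_i - base)).
# Illustrative sketch only -- names and data are hypothetical.

def apply_blend_shapes(base, targets, weights):
    """base: list of (x, y, z) vertices; targets: {name: vertex list};
    weights: {name: float in 0..1}. Returns the deformed vertex list."""
    result = [list(v) for v in base]
    for name, weight in weights.items():
        if weight == 0.0:
            continue  # inactive shape contributes nothing
        for i, (bv, tv) in enumerate(zip(base, targets[name])):
            for axis in range(3):
                result[i][axis] += weight * (tv[axis] - bv[axis])
    return [tuple(v) for v in result]

# Two "layers" mixed on the same base head: a canine jaw shape
# and a human-style brow lift.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {
    "jaw_open":  [(0.0, -1.0, 0.0), (1.0, -1.0, 0.0)],
    "brow_lift": [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0)],
}
deformed = apply_blend_shapes(base, targets, {"jaw_open": 0.5, "brow_lift": 1.0})
print(deformed)  # [(0.0, -0.5, 0.0), (1.0, 0.0, 0.0)]
```

Because the offsets simply add, a dog-muscle layer and a human-expression layer can be animated independently and combined on the same mesh.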
In many shots, Cinesite also added 3D eyes and whiskers. "Using Cinesite's hybrid texture projection and CG fur techniques allowed our talking animals to sit seamlessly alongside the production's live-action animal performances," continues Johnson.
The final result: 10 live-action dogs across 650 shots, with CG work seamlessly blended into their faces and around their necks to sit pixel by pixel alongside the real fur.
"We had to make sure we could cut very precisely back and forth between visual effects and live-action characters," Johnson adds. "So we found that the projected texture technique essentially gave you 75% of the real dog and, when it broke down, we had to put in very accurate muzzles, teeth, eyebrow area, depending on what the animation requirements of the shot were to bring the two things together. No two shots are the same, so you have a toolkit that allows an artist the freedom to cherry-pick which bits of the shot are going to be texture or which bits are going to be CG fur and make the shot as convincing as you can based on what the real dog is doing."
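The cherry-picking Johnson describes amounts to a per-region composite: an artist-painted mask decides where the re-projected plate survives and where the CG fur pass takes over. A minimal sketch of that general compositing idea, assuming grayscale pixel lists (the names and values here are hypothetical, not Cinesite's pipeline):

```python
# Per-pixel mix of a re-projected live-action texture with a rendered CG
# fur pass, driven by a mask (1.0 = use CG, 0.0 = use projected plate).
# Hypothetical sketch of the general idea only.

def blend_regions(projected, cg_render, mask):
    """All arguments are equal-length lists of grayscale pixel values."""
    return [m * c + (1.0 - m) * p
            for p, c, m in zip(projected, cg_render, mask)]

plate = [0.8, 0.8, 0.8, 0.8]   # re-projected photography of the real dog
fur   = [0.2, 0.2, 0.2, 0.2]   # fully lit CG fur pass (e.g. the muzzle)
mask  = [0.0, 0.0, 1.0, 1.0]   # CG only where the projection breaks down
print(blend_regions(plate, fur, mask))  # [0.8, 0.8, 0.2, 0.2]
```

In production this mask would be painted per shot, which is why no two shots end up with the same texture/CG split.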
Meanwhile, Rhythm & Hues created 100+ shots, 80 of which were CG dogs. But this hybrid approach with so many characters was a far cry from Garfield, Scooby-Doo or the upcoming Yogi Bear. "We had to change our strategy to make it more photorealistic," concedes Mike O'Neal, Rhythm & Hues' visual effects supervisor. "They weren't going for a chipmunk movie with a handful of characters based on a popular cartoon. They wanted it to feel like real animals. Even our all-CG shots had to be photoreal. They wanted people to think that that's a real dog on a surfboard.
"That changed how we rigged the face, a lot of the modeling," O'Neal adds. "We had to do extra controls to the head to shape it more like the actual dogs in the scene. Little things like getting the collars to stay outside of the fur ended up being a huge task because there are so many dogs and so much memory and so much information being passed from stage to stage. A lot of controls had to be added after the fact in our technical animation stage just to be able to handle collars flipping; every collar has a simulation on it so the tag is moving and pushing down the fur; all the fur is moving during the dance sequence, which has 40 dogs."
Still, there were some inherent challenges in putting photoreal-looking animals in outrageous situations. "There are moments when you jump over the line in trying to make it funny where you can still make it look like it's photoreal," O'Neal suggests. "In the dancing videogame sequence, for example, the idea was to sneak up on the audience with it. So you start with an actual dog standing on a machine, and then you go to our photoreal CG dog doing very realistic dog-like dance steps. And slowly with each cut we gradually go more over-the-top. So by the end of the sequence you see him doing things that no dog could ever do like dancing on his head."
The surfing sequence was additionally challenging because of the CG water. "We had done fluids and oceans before in your more action comic book kind of movies, but we never had to do it in a photoreal setting with a photoreal dog and to have breaking waves and white water and that level of complexity," O'Neal says. "We used Houdini for the water and Mantra to render it. The surface of the water was a straightforward grid with a lot of animation controls on it. But then every little part of the water where it needed to break and shoot off and turn into white water and foam required extra pieces. Any time the surfboard touched the water and created a wake, it had to be simulated and matched back on and rendered into the system.
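O'Neal's "straightforward grid with a lot of animation controls" is, at its simplest, a height field: a flat grid displaced by summed waves whose per-wave parameters act as the controls. The toy model below illustrates that idea only; R&H built theirs in Houdini and rendered with Mantra, and none of these names or values come from their setup.

```python
# A height-field water surface: a flat grid displaced by summed sine waves.
# Each (amplitude, frequency, speed) tuple is one "animation control".
# Toy illustration only -- not the Houdini/Mantra setup used on the film.
import math

def wave_height(x, t, waves):
    """Height of the surface at position x and time t.
    waves: list of (amplitude, frequency, speed) tuples."""
    return sum(a * math.sin(f * x - s * t) for a, f, s in waves)

def displace_grid(xs, t, waves):
    """Return (x, height) pairs for a row of grid points."""
    return [(x, wave_height(x, t, waves)) for x in xs]

waves = [(0.5, 1.0, 2.0), (0.1, 4.0, 3.0)]  # one broad swell plus fine chop
surface = displace_grid([0.0, 0.5, 1.0], t=0.0, waves=waves)
```

Breaking waves, white water and wakes cannot come from a height field like this, which is why, as O'Neal notes, every such element had to be simulated separately and matched back onto the surface.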
"The movie is all about perception," O'Neal maintains, "so even if the dog matches the dog in the previous scene, if there's a camera angle that looks like it doesn't match, then you have to fix it: making eyes bigger or smaller, shortening or lengthening the nose. We had to do a lot of things that normally you wouldn't worry about with a regular animation setup."
Bill Desowitz is senior editor of AWN & VFXWorld.