'Revenge of the Sith': Part 1 — The Circle is Now Complete

In honor of the final Star Wars movie, Episode III: Revenge of the Sith, VFXWorld begins its own trilogy with an overview of ILM's technical achievements in George Lucas' second triptych. This will be followed by Sith articles exploring digital environments and the making of the newest CG villain, General Grievous.

Concept art from Episode III hints at the wonder within the trilogy. All images © & ™ Lucasfilm Ltd. All rights reserved. Digital work by ILM. Artwork by Ryan Church.

George Lucas recently reiterated his belief that he never would have attempted the Star Wars prequels without the ascendance of digital technology. In fact, the determining factor was the extraordinary CG creatures in Jurassic Park (1993) and Dragonheart (1996). Credit Lucas and his dedicated staff at Industrial Light & Magic, therefore, with reinvigorating the industry with their digital revolution. With today's eventful release of Star Wars: Episode III Revenge of the Sith (through Twentieth Century Fox), that revolution has set a new benchmark: witness the stunning CGI on display, comprising 2,151 vfx shots (of which 1,269 are animated). That compares with 2,000 vfx shots in Attack of the Clones and 1,980 in The Phantom Menace. In terms of overall animation, Revenge of the Sith contains 90 minutes vs. 70 minutes in Attack of the Clones and 60 minutes in The Phantom Menace. With nearly 800 CGI characters and 50 3D environments, that's more than most animated features!

The results in Revenge of the Sith are magnificent eye candy that will be studied for years to come: the massive opening battle high above Coruscant, the long-awaited duel between Obi-Wan and Anakin on the volcano planet Mustafar, the Vietnam-inspired battle on the lush Wookiee planet Kashyyyk, the exciting chase and hand-to-hand fight between Obi-Wan and the droid leader General Grievous on the sinkhole planet Utapau, the improved CG work on Yoda and the stunning environmental shots of all the other exotic planets, including Felucia and the more familiar Alderaan, Naboo and Tatooine.

In looking back at all of the technical achievements of Episodes I-III, visual effects supervisor John Knoll (who has written a book on the digital environments, to be published by Abrams in the fall) and animation director Rob Coleman suggest that technology has finally caught up with Lucas' vast imagination on Revenge of the Sith.

"One of the things I've liked is that George writes what he wants and it's pretty much up to us to figure out how to do it," Knoll says. "What's typical around here is that technology is developed for one show and you need more of it for the next one. But the general tendency is to build minimal changes for the next project. Once in a while, something comes through that's so much bigger that you can't suffer through a scale-up of the previous technology. An example of this from Episode I is where there were scenes with hundreds of thousands of characters. Prior to that, the largest crowd scene we had handled was in Mars Attacks!, where we had 18 aliens, and that was the bare limit of what you could load up in Softimage and still manage. Adding more characters just wasn't going to work.

"We rethought the problem [utilizing motion capture and more advanced simulation] and came up with a new way, which is what the Star Wars pictures do: they break the system and force a new way of thinking, and we end up being better for it in the end. Look at Jar Jar from Episode I. He's computer-generated and interacts with real people. And now it's kind of taken for granted. Certainly Gollum is a good example of the next generation of that. Other filmmakers realized they could do it too."

Episode I was a breakthrough in CG characters and other digital creations, which the follow-ups only improved upon.

Matchmoving in three-dimensional space was another breakthrough in Episodes I and II. "On Episode I, we had a pan/tilt matchmove system that was semi-automated. That got us through most of the shots. Episode II was kind of the opposite: George had a super technocrane on set, so almost all the cameras were moving, translating through space, so we needed a way of solving these six-degree-of-freedom problems, and sometimes seven-degree-of-freedom problems, where we'd be moving on all axes and the camera would be zooming all the time. We implemented a full 3D motion tracking system, which was the first of its kind. It enabled a lot of things, and now all matchmoves are done that way. It gets better and faster. You could take really high-quality images without worrying about it."
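The solve Knoll describes, recovering a freely moving camera from 2D tracking marks, can be sketched with the textbook direct linear transform (DLT), which fits a full 3x4 projection matrix to 3D-to-2D marker correspondences. This is an illustrative toy on synthetic data, not ILM's tracker:

```python
import numpy as np

def solve_camera_dlt(points_3d, points_2d):
    """Direct Linear Transform: recover a 3x4 projection matrix from
    six or more correspondences between known 3D markers and their 2D
    image positions (a toy stand-in for a production matchmove solve)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The projection matrix is the null vector of A: the right singular
    # vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Pinhole projection of a 3D point to 2D image coordinates."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]

# Synthetic check: a hypothetical camera, some set markers, and the 2D
# tracks they would produce; DLT should recover a matrix that
# reprojects every marker onto its track.
rng = np.random.default_rng(0)
P_true = np.hstack([np.eye(3), [[0.1], [-0.2], [2.0]]])
markers = rng.uniform(-1, 1, size=(8, 3))
tracks = np.array([project(P_true, X) for X in markers])
P = solve_camera_dlt(markers, tracks)
errors = [np.linalg.norm(project(P, X) - t) for X, t in zip(markers, tracks)]
print(max(errors))
```

On noise-free data the reprojection error is essentially machine precision; a real solver would add lens distortion, outlier rejection and per-frame bundle adjustment on top of this core.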

For Episode III, which Knoll characterizes as evolutionary rather than revolutionary, the ILM team pushed CG mostly for greater efficiency. For instance, they incorporated radiosity into the pipeline for the first time, thanks to faster computers, to achieve greater photorealism for both characters and environments. They also took advantage of a new iteration of the company's fluid sim engine to achieve authentic-looking glowing lava (using the thickening agent methylcellulose) on Mustafar. "Volume is a big issue, so you have to carry a big volume grid in memory. We implemented an octree memory representation, allowing high-res fluid sims with nice detail where you want it."
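The octree representation Knoll mentions trades one uniform grid for cells that subdivide only where the simulation needs detail. A minimal sketch of the idea, with a toy refinement test standing in for a real fluid solver's criteria (not ILM's code):

```python
class OctreeNode:
    """Minimal sparse octree: cells subdivide only where a refinement
    test says detail is needed, so memory tracks the interesting region
    instead of the full dense grid."""
    def __init__(self, center, half, depth):
        self.center = center    # (x, y, z) of the cell center
        self.half = half        # half-width of the cell
        self.depth = depth
        self.children = None    # None means this cell is a leaf

    def refine(self, needs_detail, max_depth):
        """Recursively subdivide wherever needs_detail(center, half)
        is True, up to max_depth levels."""
        if self.depth >= max_depth or not needs_detail(self.center, self.half):
            return
        h = self.half / 2.0
        cx, cy, cz = self.center
        self.children = [
            OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h, self.depth + 1)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]
        for child in self.children:
            child.refine(needs_detail, max_depth)

    def leaf_count(self):
        if self.children is None:
            return 1
        return sum(c.leaf_count() for c in self.children)

# Toy criterion: high resolution only near a "lava surface" at z = 0,
# i.e. wherever a cell straddles that plane.
near_surface = lambda c, h: abs(c[2]) <= h
root = OctreeNode((0.0, 0.0, 0.0), 4.0, 0)
root.refine(near_surface, max_depth=4)
dense_cells = 8 ** 4   # a fully dense grid at the same depth: 4096 cells
print(root.leaf_count(), dense_cells)
```

The savings grow with depth: the dense grid multiplies by eight per level everywhere, while the octree only refines along the surface.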

According to Coleman, on Episode I animators ran their own simulations with help from the creature development team. On Episodes II and III, however, all simulations were passed off to a specialized sim team or creature dev team that handled all cloth, hair and rigid-body work (such as a droid getting slashed in half or a ship that crashes and breaks apart). This became a very efficient form of specialization. "Five or 10 people worked on this and became very, very good at it. We didn't understand in the '70s how vast George's imagination was. This world is fully realized in his head and you can ask him anything."

CG Yoda and MoCap advances were some of the highlights of Episode II.

Meanwhile, the cloth sim program has come full circle in Episode III. "Yoda's robe is more worn, so we have this fuzzy level on top of the clothing that we weren't able to do before," Coleman explains. "That has been rolled into this movie. The cloth simulation program was written specifically for Episode II, and we were happy with where we brought the clothes, especially in Yoda's fight. We worked on the new Terminator and Pirates of the Caribbean, and when we started work on Episode III, we realized that they had made great strides on Pirates: greater control over the simulations, faster turnaround time for the shots, and some new tools to achieve clothing at a much more realistic level. That was all rolled into Yoda for this movie. To me, it's a cornerstone of whether we're going to be successful or not, because here we have a little green man who wears exactly the same clothes as Ewan McGregor, and they're in some of the same shots together, and the digital wind has to work with both of them. And when they jump and leap, it has to look the same."
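Cloth solvers of this era were typically particle systems: positions integrated forward in time, then iteratively relaxed so neighboring particles keep their rest spacing. A heavily simplified 2D strand shows the scheme (a hypothetical illustration only; ILM's simulator was far more sophisticated):

```python
GRAVITY_Y = -9.8    # downward acceleration
REST = 0.1          # rest length between neighboring particles
DT = 1.0 / 60.0     # time step
DAMPING = 0.95      # velocity damping, keeps the toy sim stable

def step(pos, prev, pinned, iters=20):
    """One Verlet integration step plus constraint relaxation."""
    # Verlet: velocity is implied by (pos - prev), then gravity is added.
    new = [[x + DAMPING * (x - px),
            y + DAMPING * (y - py) + GRAVITY_Y * DT * DT]
           for (x, y), (px, py) in zip(pos, prev)]
    for _ in range(iters):
        # Pull each neighboring pair back toward its rest length.
        for i in range(len(new) - 1):
            (x1, y1), (x2, y2) = new[i], new[i + 1]
            dx, dy = x2 - x1, y2 - y1
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9
            corr = 0.5 * (d - REST) / d
            new[i][0] += dx * corr
            new[i][1] += dy * corr
            new[i + 1][0] -= dx * corr
            new[i + 1][1] -= dy * corr
        for i in pinned:            # pinned particles never move
            new[i] = list(pos[i])
    return new, pos

# A strand of 10 particles, pinned at the origin, released horizontally;
# under gravity it swings down and settles hanging below the pin.
pos = [[i * REST, 0.0] for i in range(10)]
prev = [p[:] for p in pos]
for _ in range(300):
    pos, prev = step(pos, prev, pinned=[0])
```

A production cloth sim extends the same loop to a 2D mesh of constraints, adds collisions with the character's body, and exposes the stiffness and wind controls the artists describe.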

More significant, however, was a major layout change on Episode III: "We did an informal layout process on the first two prequels," Knoll continues. "Peter Dalton did a layout phase on the pod race for Episode I to get continuity correct. He took it to final edit, plotting out who goes where and when. He did the same thing on Episode II with the long action sequence toward the end. So with Episode III we instituted a layout department [supervised by Brian Cantwell] that figures out continuity issues and works out cameras together as a sequence."

"If we needed a modification to a camera, or something needed to be laid out in a sensible way, he took care of it before it got to the animators and TDs. There were sometimes problems in the past when Rob and I had to look at matchmoves or approve them to go to the animators or TDs. We'd look at a particular shot in a sequence and it would look fine until it went to the animator, and we'd kick it back. The [new] layout department would preflight shots correctly before they got to the next step in the pipe, which meant a lot less kickback and a lot more efficiency."

Coleman concurs: "John and I worked on all three films, and the gotcha part was the beginning of the pipeline. We needed an overall supervisor to figure out the continuity of where a shot fit within the overall sequence. And in computer graphics, when you're interested in three-space, does it fit logically with the computer lighting and computer characters?"

This was never more evident than in the bravura opening battle, which is like an amusement park ride and which Coleman terms "the biggest space battle of all time. There's 10 minutes of small and large fighters zipping and zapping. On previous films, I focused my team on creature and droid work, and on this film George wanted to build on the experience ILM had on Pearl Harbor and raise the bar with this ultimate battle in space. I got Scott Benza and Paul Kavanagh, who worked on Pearl Harbor, specifically on fight dynamics. We travel with Obi-Wan and Anakin, zipping right over camera and the equivalent of battleship row, except floating in three-space at different angles, various fighters bombarding each other while little fighters fly in between them."

Digital character General Grievous shows off the leap forward the artists made from Episode I to Episode III.

Coleman also takes pride in skin and lighting advancements. "There's great use of subsurface scattering that you can see in the tight close-ups of Grievous and Yoda. With Yoda, you have top-of-head-to-chin shots where his sustained close-ups are very realistic. The skin has improved by leaps and bounds here. We've also put in global illumination and created new shaders to handle skin, because we specifically knew George would want to come in closer. Yoda has developed into a very strong supporting actor in this movie. He has some very important scenes where he delivers important story points, and his lines are delivered more convincingly with the live actors. Both Yoda and Grievous are fully animated with no MoCap. We wanted something special about Grievous' performance and wanted him to have an animated look. We wanted the control of his physicality."
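Subsurface scattering is what keeps skin from going hard black at the shadow line; the simplest textbook stand-in for the effect is "wrap" lighting, which lets diffuse illumination bleed past the terminator. A toy comparison (a generic approximation, not ILM's production shader):

```python
import math

def lambert(n_dot_l):
    """Standard Lambertian diffuse: hard terminator, no light bleed."""
    return max(0.0, n_dot_l)

def wrap_diffuse(n_dot_l, wrap=0.5):
    """Wrap lighting: light 'wraps' past the terminator, softening it
    the way scattering through skin does. wrap=0 reduces to Lambert."""
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# Just past the terminator (light 100 degrees from the surface normal),
# Lambert is black but the wrapped surface still picks up light.
n_dot_l = math.cos(math.radians(100))
print(lambert(n_dot_l), wrap_diffuse(n_dot_l))
```

Production subsurface scattering simulates the light transport properly (diffusion through the skin layers), but the visible payoff is the same: close-ups hold up because the shadow edge glows instead of clipping.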

Coleman admits that the biggest challenge of all was introducing a CG Yoda in Episode II and making him totally CG in Episode III: "Watto was like a setup. George smiled when he saw the first animation performance of Watto. He'd always known, back on Episode I, that he was going to ask for a CG Yoda, but he didn't tell us until we were preparing for Episode II so we wouldn't get freaked out. The challenge for me was that all of the acting had been a puppet and all of the fighting was going to be CG. I decided that we needed to introduce some more flexibility in the acting so it wouldn't be too much of a jump for the audience during the fighting. We updated him just a bit. And now we are rewarded with a fully CG Yoda at the center of the action.

"There's no question about doing it anymore: George knows we can. He would call Episode II 'Anakin in Toontown' to me. He loves animation. Every Tuesday and Thursday, we show him animation and he's literally bouncing in his seat. He loves animated movies and animated characters, particularly in his movies, because they make them exotic. Having a fat, floating character or a little green man tells you that you are in a different world. He always wanted to get past people in rubber heads, so computer animation has really freed him up stylistically, visually and creatively, and you can actually see it in his expression. He's serene. He's in his element in post-production."

Bill Desowitz is editor of VFXWorld.