Dan Gregoire, JAK Films visualization effects supervisor on Episodes II and III, writes about the state of previsualization on the Star Wars prequels.
Previsualization, also known as previs, serves three main purposes: to sell a concept, to save time and to save money. For the Star Wars prequels, previs has been an essential tool that George Lucas uses to paint an accurate picture of what his final films will look like long before, and even during, production. For a director such as Lucas, previs helps answer questions, explore options and aid in getting his point across to the numerous people in the production pipeline, offering more creative and financial control.
Throughout the Star Wars movies, previs has been used to help convey the complex worlds we see onscreen. On Episode IV: A New Hope, WWII fighter footage was used extensively to help realize the final Death Star trench battle. For Episode V: The Empire Strikes Back, hand-drawn animated panels that resembled black-and-white Saturday-morning cartoons were used to explain the Empire's Walker advance toward the Rebel base on Hoth. Taking it even further, small video cameras were used with miniatures for the speeder bike chase in Episode VI: Return of the Jedi. With each movie, Lucas continued to use the latest readily available technology to describe complex scenes before a lot of money was spent on the final effects shots.
With a track record of always using leading-edge technology, it is no surprise that when digital technology became more readily available, Lucas was at the forefront of using it for previs. For this reason, David Dozoretz, who headed up some of the first digital previs used in film production, was hand-picked by the producer, Rick McCallum, for the job of previs supervisor on Episode I.
Episode I: The Phantom Menace started as a combination of the video technology employed on Return of the Jedi and the new digital technology available at that time. Dozoretz had a small team that worked on many of the larger, more complicated scenes, including the pod race and the end space battle. Much of the early work involved using video cameras to go out and shoot footage to be manipulated later using digital technology. The actors were either artists or relatives who wanted to have some fun. Costumes were provided by the Lucas archives and modified by Gillian Libbert, the costume appearance manager. All that was really needed was enough to approximate the intended sequence of shots. After filming, these shots were brought into the computer as plates, the base layer of each shot. Depending on the sequence being worked on, specific digital elements were added to fill it out into what Lucas envisioned: flying space-age vehicles, lasers and lightsabers, to name a few.
While the software used then, ElectricImage and Adobe's After Effects, pales in comparison to what we use today, it was still more or less available to average people. It was not only cost effective, but also very powerful in the right hands. ElectricImage, known for its speed and quality, had the advantage of running on the Macintosh alongside After Effects. No longer did you need expensive hardware to pull off rough special effects. With these tools together on an inexpensive platform, animatics artists were able to add digital elements to video plates with reasonable effort in a short amount of time, short enough to be essential not only before production began but during it as well.
As the process matured over the course of the movie, digital elements started to overtake the practical elements they were married to. At times digital elements were all that were needed. As the technology improved and the confidence of the artists grew, it became faster and faster to simply do everything on the computer, including the actors.
Episode II: Attack of the Clones started much the same way. Ben Burtt, the editor on Episode II, would film his co-workers performing in front of greenscreen and then cut them in front of old war footage or generic movies created by the previs team. In combination with what was shot by Burtt, the previs team, now using Alias Maya on Windows PCs, was able to advance technologically and start to do more complex animation. With Maya's advanced feature set, which included strong character animation, dynamic simulations and many other complex digital tools, we removed the need to shoot willing extras on camera. Of course, Adobe's After Effects on the Macintosh was still the foundation for compositing the final previs shots, which were often combinations of digital elements, photo images, character stand-ins and reference footage.
For Episode II, we wanted to have as many sequences available on-set as possible. As mentioned before, previs is a tremendous communication tool. On-set previsualization is potentially even more important because there are so many people standing around trying to figure out what is going on, and that costs a lot of money. In a world like Star Wars it can be difficult to convey to a group of people on-set what the world looks like, what the actors should be reacting to and what the end goal is, especially when you're standing in front of a greenscreen. Using a previs movie on-set during shooting has the advantage of placing everyone on the same page because they have a distinct visual reference to follow.
Since we started on Episode II before principal photography, there were no actors to base our shots on. Also, the first scene we worked on was the Coruscant speeder chase, a sequence that would have been very difficult to shoot with a video camera and composite together with digital elements. To achieve the look Lucas envisioned, we decided it would be easier to build our own digital versions of Obi-Wan and Anakin and fly them digitally through the city. This worked out so well that it marked the last time a video camera would be used in the pre-production of the film. For the next scenes we worked on, the Obi-Wan vs. Jango Fett rain battle and the asteroid chase, we went 100% digital.
The Obi-Wan/Jango rain battle posed a particular challenge in that it was an all-out brawl and fistfight, which would take a lot of complex character animation to pull off. We simply didn't have the time to animate characters thoroughly enough to convey the fight. At this point, Dozoretz decided to use ILM and its motion capture facilities to form the base of the fight. In one day we captured what we thought would be a really good base for the sequence. Using that data in Maya, we were able to pull off a first version of the sequence that was very believable and was closely followed on-set during shooting. However, this marked the last time we would use motion capture on a sequence: it produced good results overall, but at the time the technological challenge took more time than it was ultimately worth.
Over the course of production on Episode II, the previs department became a pre-post-production facility. After shooting had taken place, there were still big questions to be answered. First of all, most of what had been shot was on blue- or greenscreen, making it very difficult to cut together and especially difficult to view. Watching a 20-minute sequence on blue or green is enough to send even experienced visual effects artists to the loony bin, or at least into a deep sleep. At this point, it became our responsibility to fill these plates out with digital sets and characters. So not only were we putting together all-digital sequences, but we were now doing all the steps ILM does to complete the final shots, only very rough and very fast. From tracking the plate, to keying out the blue, to rotoscoping, to full compositing, each artist was a full production pipeline unto himself. By completing this work in an early rough form, Burtt and Lucas were able to make much better decisions about how to cut sequences of the film, and of course our work made these sequences viewable. By the time ILM started taking on the final shots, the previs department had almost the entire movie filled out with digital sets, digital characters and digital scenes that were believable and largely story-complete.
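The keying step in that pipeline, pulling the blue or green out of a plate so a digital set can show through, can be sketched in miniature. Below is a rough, hypothetical illustration using NumPy (the function name and threshold are my own, not any tool the article's team used; real previs keyers add soft mattes, spill suppression and motion-blur handling):

```python
import numpy as np

def chroma_key_composite(fg, bg, threshold=1.3):
    """Rough green-screen composite: wherever the foreground pixel's
    green channel clearly dominates red and blue, show the background
    plate instead. Purely illustrative, not production-grade."""
    fg_f = fg.astype(np.float64)
    r, g, b = fg_f[..., 0], fg_f[..., 1], fg_f[..., 2]
    # Hard matte: True where the pixel reads as "green screen".
    is_green = g > threshold * np.maximum(r, b)
    mask = is_green[..., np.newaxis]          # broadcast over RGB channels
    return np.where(mask, bg, fg).astype(np.uint8)

# Tiny synthetic plate: left half pure green screen, right half a gray "actor".
fg = np.zeros((2, 4, 3), dtype=np.uint8)
fg[:, :2] = [0, 255, 0]       # green screen
fg[:, 2:] = [128, 128, 128]   # stand-in actor
bg = np.full((2, 4, 3), [10, 20, 200], dtype=np.uint8)  # blue-ish digital set

comp = chroma_key_composite(fg, bg)
```

After the call, the green half of the plate is replaced by the background while the actor pixels survive untouched, the same decision a rough previs key makes at every pixel.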
By the end of Episode II, the previs department had grown from four artists at the start of the show to about a dozen. Powered by Advanced Micro Devices (AMD) PC hardware and Alias's Maya, with After Effects on the Macintosh, we were able to take on far more tasks than we did at the start. The quality of our work had increased to the point where we were answering all sorts of questions that were usually only addressed in the final stages of visual effects work: lighting, texture, mood, feel, character animation, particle simulations and dynamic simulations were all entering the realm of previs. No longer were we only doing simple animation and 2D cheats to get half the point of a shot across. We were not only checking whether a shot was working, but also how we could make it look better visually. We were taking art and design direction from art directors Erik Tiemens and Ryan Church on aesthetics, mood and feel, and trying to answer more than we had in the past. This in turn gave George a much more powerful tool in the decision-making process and helped him maintain greater control over his film.
Episode III is no exception to the processes we developed on Episode II. We simply started earlier and with more people on staff, with the express intent of providing far more material to Lucas while on set in Sydney. By the time you read this article, my group of 11 artists and I will have completed more than 1,700 original shots since March 2003. Amazingly, this number is fast approaching the total created for all of Episode I, and we're just getting started.
For Episode III, the concentration has been more on shot quality and substance. We're spending more time on character animation, shot blocking and proper cinematic technique than ever before. It is important that we be more realistic so that when it comes time to recreate shots on-set or at ILM, George has confidence that they will work. This by no means hinders us creatively; in fact, it has made our work more easily edited and more believable, resulting in more exciting sequences.
One example of pushing the limits is our move to a 64-bit architecture, done in partnership with AMD. The effects industry has been bumping against the ceiling of what today's 32-bit hardware can process, most notably the 4GB limit on addressable memory per process, for some time now.
Not only is this 64-bit technology able to run legacy 32-bit applications, but it can also leverage 64-bit applications as they become available. A markedly improved internal architecture, meanwhile, complements the chip. Using early systems, we have seen a dramatic increase in productivity with AMD Opteron-based hardware, even when running 32-bit applications and operating systems. Some of the tests we've run show improvements to the tune of double the performance, especially with heavy file-texture-based rendering. This is an incredible boon to our process and gives us a great new tool to do more with less. We're looking forward to moving ahead with this and other new technologies, because it strips away a few of the technical issues and allows us simply to create more effectively.
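The ceiling in question is easy to see with a little arithmetic: a 32-bit pointer can address at most 2^32 bytes, only 4 GiB, no matter how much RAM is installed, while 64-bit pointers remove that limit in practice. A quick Python sanity check of the numbers (stdlib only; the variable names are just for illustration):

```python
import ctypes

# Pointer width of the running interpreter, in bits (32 or 64).
pointer_bits = ctypes.sizeof(ctypes.c_void_p) * 8

GIB = 1024 ** 3

# Maximum addressable memory for each pointer width, in GiB.
addressable_32_gib = 2 ** 32 // GIB        # 4 GiB: the 32-bit ceiling
addressable_64_gib = 2 ** 64 // GIB        # 16 exbibytes, effectively unlimited

print(pointer_bits, addressable_32_gib, addressable_64_gib)
```

On the Opteron systems described above, large texture caches and render working sets could finally exceed that 4 GiB boundary once 64-bit operating systems and applications arrived.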
Daniel Gregoire is the visualization effects supervisor at JAK Films.