Christopher Harz takes a look at new innovations taking place in burgeoning 3D industries beyond entertainment.
Emerging 3D technology is opening up opportunities in many new areas, including television, as the effects for Battlestar Galactica demonstrate. Images courtesy of NVIDIA. © Sci Fi Channel.
Hardly a day seems to go by without some new application area for CG animation arising beyond the well-known fields of gaming and film vfx. It's interesting to speculate how and when all these 3D animation/graphics applications will meet in the middle: when will architects, videogame designers, film previsualization supervisors, vfx specialists, CAD/CAM engineers, medical and technical visualization experts, military simulator designers, virtual set builders, VR (Virtual Reality) and AR (Augmented Reality) animators and many others be able to speak much the same language, use the same tools and file formats, cross-pollinate what are still far-distant fields, and collaborate on projects with a shared workflow? Many non-graphics professionals are asking a further question: When will 3D graphics be available to the mainstream, to the many doctors, businessmen, eLearning educators, engineers and others who want to do presentations with animation and virtual environments, but do not have the budget to hire an ILM for their projects?
One of the side effects of the CG application expansion is that companies that specialized in one graphics market are now being faced with a wide range of market demands, forcing them either to offer a panoply of products or to focus on a specific niche in the expanding market space. One company that has decided to adapt to the broadening CG field is NVIDIA (www.nvidia.com). Once known mainly for mainstream consumer-level graphics boards, the company now offers everything from high-end workstation GPUs (Graphics Processing Units), such as its speed-demon Quadro FX line, to graphics chipsets for cell phones. In addition, NVIDIA, which once worried only about getting shelf space for its products, is now intimately connected with its customers, offering support and technical remedies for professional users who are pushing the envelope. It has gone so far as to develop a programming language, Cg (as in "C for graphics"), which directly addresses the GPUs in gaming machines and workstation systems such as render farms, and which is now supported in major 3D toolsets such as Maya, Softimage and 3ds max, as well as CAD programs such as AutoCAD and CATIA.
Cg is an example of a new generation of high-level shading languages that includes Microsoft's HLSL and the OpenGL Shading Language. Shaders, the programs that implement an effect on a set of pixels or vertices, have in the past been targeted to a specific platform. Having cross-platform shading capability means that different graphics applications, such as CAD or DCC (Digital Content Creation) tools, can now share 3D models that retain important characteristics such as surface textures and subsurface physical properties. The new shaders also allow on-the-fly iteration for designers, who can make changes and get instant feedback instead of having to wait for offline rendering. Lastly, this advance makes possible the re-purposing of 3D objects and spaces, so that a beautifully CAD-generated piece of furniture, for instance, can be used in a virtual film set, then in the associated game, and finally (in miniature) for the toys or action figures arising from a successful movie/game project. Eventually the object could be used for an unrelated project, such as by a teacher in a 3D virtual learning space, or an attorney creating a virtual room to present to a jury.
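To give a flavor of what such a shader looks like, here is a minimal sketch of a Cg fragment shader (the parameter names are illustrative, not drawn from any product mentioned above) that samples a texture and applies a tint to each pixel:

```
// Minimal Cg fragment shader sketch; all identifiers are illustrative.
float4 main(float2 uv              : TEXCOORD0,
            uniform sampler2D baseTex,
            uniform float4    tintColor) : COLOR
{
    // Sample the bound 2D texture at the interpolated UV coordinate.
    float4 texel = tex2D(baseTex, uv);
    // Modulate by a tint color supplied by the host application.
    return texel * tintColor;
}
```

Because the same high-level program can be compiled for different GPU profiles, an effect like this can travel with a model between toolsets rather than being rewritten for each platform.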
The drive for ever-greater realism is continuing unchecked. "Our customers have been telling us, 'I want it to be better every six months,'" says Ian Williams, senior applied engineer, professional graphics division, NVIDIA. "The level of expectations is accelerating very quickly, especially because of advances in the gaming industry. Because there is so much customer pull, we have to work very closely with them in our development." NVIDIA is clearly leveraging its core expertise in gaming consoles (its graphics and chipset controllers are in Microsoft's Xbox) and desktops to develop both high-end GPUs for workstations at one end of the capability range and compact graphics chips for PDAs and phones at the other. "Some companies became dedicated workstation developers without core graphics capabilities," notes Williams. "We found we needed a full range of graphics capabilities in all of our products. Of course, the high-end machines have many added features, but we're of the opinion that if someone uses a platform to make a living, then that is a workstation, no matter what size it is."
This design philosophy extends to laptops, which a few years ago had weak graphics sets of perhaps 16MB, useful at best for doing a little Photoshop. New laptops such as the Dell M60 have advanced 128MB chipsets (soon to be higher), capable of rendering and non-linear editing. "Laptops with wide screens and this kind of graphics power are now serious workstations for animators in the field," adds Williams.
The continual upgrade of GPUs would normally place a real logistics burden on studios, especially when a large number need to be upgraded at the same time. "A key feature of our products is that they all have the same drivers, all backwards compatible," says Williams, "so our clients never have to worry about that."
The increase in the power of GPUs is, ironically, not leading to a reduction in total processing time at studios. "We have a rule called Blinn's Law," notes Beth Loughney, general manager of NVIDIA's film group. "It states that work such as rendering frames takes a certain amount of time, up to a studio's pain point, say five hours. If the frames get rendered faster, then the artists will immediately go for higher levels of detail." Blinn's Law is related to the folk saying that great art is never finished, only abandoned, and reminds one of Through the Looking-Glass, where the Red Queen had to run faster and faster in order to stay in the same place.
Since, in spite of the huge increases in the speed and power of GPUs, NVIDIA's customers are still spending lots of time in the rendering/compositing cycle (the bar on fidelity and resolution keeps getting raised across the industry), the company decided to also focus on saving time in the total production cycle. It does this by assuring that its graphics resources process both accurately and in near-realtime, so that major production steps such as previsualization, shading and lighting, final rendering and compositing, vfx and color correction are accomplished without serious hiccups.
"Ten years ago the workflow was really stop and go for studios and gaming companies," says Loughney. "Many companies used proprietary toolsets, and the formats for different graphics professionals tended to be incompatible with each other, so there were major glitches with color, shape and texture discontinuities. Rendering of even simple scenes could take days, and would go through lengthy cycles of corrections, delayed feedback, and then more corrections. Today, the processing in the graphics sets can help assure the continuity of the content. And the director can now, for instance, sit next to the technical director as he's making lighting decisions and get instant feedback on different options, instead of having to wait for hours until a rendered version comes back. The TD no longer has to guess what the director means when he says something like, 'Make the color 10% richer'; he can now make an adjustment and ask, 'Is this what you mean?' The same is true when the director makes a compositing decision such as, 'Move that element five degrees to the left'; he can instantly see the result." To get closer to the feature film customer base, NVIDIA recently bought Ex Luna and morphed that company into its studio relations group.
Another great example of a cross-cultural company is Discreet/Autodesk, which not only expanded its market among game developers in the past year (its tools are now used in more than 80% of the top-selling game titles, including Grand Theft Auto: Vice City, Prince of Persia: The Sands of Time and the Star Wars series), but has also pushed ahead with new design and visualization tools, allowing 3ds max version 6 to be used by architects, industrial and mechanical designers and professionals in the medical and educational sectors. Recent design work with max (which was once used only by entertainment animators) includes the Institute of Contemporary Art in Boston, the Great Court at the British Museum in London, Burton Snowboards and the Culture Music Center in Copenhagen.
A major bonding of two disparate communities of practice occurred recently with the roll-out of a 3ds max plug-in that works with OpenFlight, the dominant toolset for the military simulation community; previously, files from such simulations had been incompatible with those of the entertainment community, with a tendency for data and attributes to be lost or destroyed during importing and exporting. It was the military, specifically a DARPA program named SIMNET (SIMulation NETwork), that started online gaming, and military simulations are still world class in areas such as 3D terrain modeling of real-world locations (including high-res versions of hundreds of miles of Iraq). At the same time, the military has fallen behind the videogame industry in many areas such as character development and storytelling; building rich, complex scenarios is not a core strength of military simulation designers, whereas it is meat and potatoes for game companies such as Electronic Arts or THQ. "The ability of these two communities to cross-fertilize should be highly beneficial to both, adding real-world (as opposed to mostly fantasy) terrain to one, and a more human element to the other," says Ken Pimentel, Discreet's director of business development. A dramatic indication of the coming together of these two very separate gaming communities took place at this year's I/ITSEC show (www.iitsec.org), the military's version of the videogame industry's E3 conference (www.e3expo.com), where Microsoft showed off its popular Flight Simulator program as a training sim that had been customized for Navy pilots!
It is clear, then, that new cross-platform toolsets and cultural exchanges are bringing formerly distant 3D communities of practice together. It is also clear that the hot action is in the vfx and game community; new technologies shown at this year's SIGGRAPH conference (www.siggraph.org) for the scientific and technical communities pale when compared to advances in 3D gaming. "The challenge is to effectively leverage game-style environments for educational and other collaborative virtual environments," states professor Paul Sparks, who teaches Human-Computer Interaction at Pepperdine University. The challenge Dr. Sparks poses is the central theme of the Serious Games Summit at the upcoming Game Developers Conference (www.gdconf.com), which will address how game developers can build bridges to new, non-entertainment markets and applications. It would make sense for graphics designers from technical communities to consider repurposing existing gaming resources and piggybacking on their technologies. But how can they actually access the wealth of ever-better 3D models and environments that the gaming community is generating?
The market for using 3D environments in non-entertainment communities is certainly there. It was voiced by the training community at this year's Training expo (www.trainingconference.com) and by scores of learning and presentation establishments that want to use graphics instead of text, but are frustrated by the daunting costs and development times of creating virtual environments from scratch. "Virtual environments for educational workspaces are too expensive and take too long to build," says Van Weigel, a prominent educator, developer of 2D learning environments (www.teach2learn.com) and author of Deep Learning for a Digital Age. "Though game technology and 3D environments have a lot to offer, we need to come up with faster, less expensive ways to create them."
Enter companies such as Turbo Squid (www.turbosquid.com), which offer libraries of 3D objects and environments at prices that are a tiny fraction of their original development costs. Turbo Squid has more than 80,000 royalty-free 3D models, textures, audio files and motion capture files available for downloading. What if a particular model that was used in a 3D game is not exactly suited for, say, a virtual classroom? "We now offer a custom 3D modeling service," says Dan Lion, vp of sales and marketing. "We have been receiving an increasing number of requests from clients about custom modeling, from changing a model they purchased online to designing a range of characters for a game," he notes. "At the same time, our vendors have expressed an interest in earning additional revenue between projects. This service is a logical extension of Turbo Squid that benefits both parties." Turbo Squid offers the service at no cost to clients. Someone who wants to create a 3D environment can enter his request online and have access to over 6,300 digital artists around the world to help him fulfill it. Although this is not quite yet a Kinko's of 3D animation (a place where professionals can come to let logistics be simply and quickly handled by others), it is an important step in that direction.
Thus, it seems to be a time when many different forms of 3D that were previously in different universes are coming together, which is especially fortunate given the trend toward collaboration. The days when all the workers for a major project lived in one building are long gone, replaced by work spread out over thousands of miles and many communities of practice for highly specialized tasks. Animators (and animation schools) should try to take advantage of this new blurring of borders and cross-train in multiple disciplines and different steps of the production process, to help the dialogue between the many sets of design teams involved and to help with the technology transfer from the gaming community into business and technical fields.
Let's hope that the day is not far away when a combination of simpler toolsets and the ability to re-purpose 3D resources will let many other professions take advantage of the production and learning possibilities of the rich and colorful environments created by 3D animators. Perhaps in the near future a teacher will be able to tap into a Kinko's of 3D animation, order a virtual classroom with furniture, a whiteboard, a video display and other features, and have the package delivered within days for thousands of dollars, instead of in years for hundreds of thousands of dollars. Such environments may not have the intense creative and artistic values of a Lord of the Rings film or a Star Wars game, but they will be light years away from the reams of text and dry prose still used for most presentations nowadays, and will offer animators opportunities to use their talents to brighten and enrich new worlds.
Christopher Harz is a program and business development executive for new media enterprises around the world, and covers topics such as the Next Gen Internet, vfx, online gaming and wireless media. As vp of marketing and production at Hollyworlds, he produced 3D games for films such as Spawn, The 5th Element, Titanic and Lost in Space, and for TV shows such as Xena: Warrior Princess. As svp of marketing and program development at Perceptronics, Harz helped build the first massive-scale online game worlds, including the $240 million 3D animated virtual world, SIMNET. He also worked on combat robots and war gaming at the Rand Corp., the American military think tank.