To better understand how game developers are approaching next-gen game development, I spoke with industry professionals ranging from my colleagues at Blue Fang Games to the developers of the software we use to make games. What is clear from these discussions is that the game industry is excited by a future with almost no limits on our imagination.
For both game developers and gamers, next week marks a milestone: the first next-gen videogame console launch. On Nov. 22, Microsoft will release the Xbox 360 system along with 18 games. The console launches first in North America and will reach the rest of the world within two weeks. Meanwhile, Sony's PlayStation 3 (PS3) and Nintendo's Revolution are expected early next year.
While the hardware and software manufacturers have years of hard work ahead of them, gamers are looking forward to some of the most realistic game graphics ever seen. Early looks at next-gen titles have shown images on par with some of the latest Hollywood VFX blockbusters. In fact, game engines have gotten so good that many games now display their cinematics machinima-style, via the game engine itself.
Kelly Scott, lead animator at Blue Fang Games, says, “With the graphical advances being made these days, I really can’t see the need for pre-rendered cinematics. For some titles they will remain as non-vital icing on the cake, but the whole trend of the gaming world these days is toward the immersive experience and having a distinct graphical difference interferes with that immersive illusion. If you want to be wowed by an impressive visual experience, go watch Final Fantasy: Advent Children, but you’re going for a whole different experience than a game that draws you in as part of the action. I think the gaming audience understands this and games that have a solid artistic game environment are never criticized for using it for their cut scenes.”
Shawn Robertson, lead artist at Irrational Games, adds, “With the next generation of consoles coming out, sometimes it’s hard to tell what is in-game and what is pre-rendered. With game engines getting more and more powerful, in-game assets are reaching the same level of detail that pre-rendered assets have. Because of this, I think we’ll see fewer pre-rendered cinematics being used. That being said, the pre-rendered cinematics that we will be seeing will be more over the top, showcasing things that may be too complicated to render in a game engine.”
Michel Kripalani, games industry manager for Autodesk (developer of 3ds Max), concurs that the role of pre-rendered cinematics will diminish: “The Xbox 360 and PS3 are fast enough to display great looking cut-scenes. The cost of generating anything pre-rendered will be difficult to justify. We will still see outstanding, pre-rendered movies that will look gorgeous, but they will be few and far between. They will likely be relegated to game trailers and introductory sequences.”
Alex Chouls, technical art director at Blue Fang Games, counters that pre-rendered cinematics have fewer dependencies on an evolving code base and can be more easily outsourced. “Obviously in-engine cinematics will continue to grow, especially as games become more sophisticated narratively, but for many games pre-rendered will still be the more practical solution.”
One thing seems certain: the rising costs of game development will force change on the industry. How directly that change will affect cinematics is still uncertain. As with most art forms, tools are an important part of the equation. The tools artists use to create cinematics and game animation will continue to play a pivotal role in both production and production budgeting. Recently, there have been some amazing advances in software, as well as some surprising consolidation among 3D software developers.
Joe Gunn, applications specialist at Autodesk, suggests, “There will always be room for speed improvements when you talk about rendering. With 3ds Max 8, the animation system that is in place is very robust and will just continue to improve as pipelines adopt the new feature set. During that process, that’s where we will find areas for improvement and will invest.”
Equally important, according to Gunn, are improvements in asset management and the ability of software to deal with larger datasets. “The production pipeline is growing larger even for very small facilities of just a few artists. Being able to manage the assets will become critical, and Autodesk stepped up to the plate by giving its Vault system away with 3ds Max 8. Also, with next-gen games, poly counts will be larger, thus increasing the datasets. Levels will become bigger and more detailed than ever. This is intensified by the industry adoption of normal map technology. The next-gen consoles can handle large datasets, but the source files used to create those larger datasets will be even larger. In some cases, we will see source files containing 20x the level of detail that the final game can display, which is already 8x-10x the previous generation. The issue of scale and how to manage all of this data is the core issue in game development today,” says Gunn.
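Taken at face value, Gunn's multipliers compound quickly. As a rough illustration (the 10 MB previous-generation asset is a made-up baseline; only the 8x-10x and 20x ratios come from his quote):

```python
# Illustrate how Gunn's multipliers compound, using a hypothetical
# 10 MB previous-generation asset as the baseline.
prev_gen_asset_mb = 10                    # hypothetical baseline size
for shipped_mult in (8, 10):              # next-gen shipped detail vs. last gen
    shipped = prev_gen_asset_mb * shipped_mult
    source = shipped * 20                 # source holds 20x the shipped detail
    print(f"{shipped_mult}x shipped detail: {shipped} MB in game, "
          f"{source} MB of source data")
```

In other words, the source data behind a single next-gen asset can hold 160x-200x the detail of its previous-generation shipped counterpart, which is why asset management tools like Vault become pipeline-critical.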
When asked about tools artists have yet to see, Robertson says he hopes for better tools for handling transitions between animations. “It’s a shame when you watch really nice animations fall apart because the transition between two animations never quite feels totally natural.”
Chouls is hopeful about animation tools that work at a higher level of abstraction (e.g., Director), perhaps incorporating the database-trained, learn-by-example techniques starting to emerge in academic research.
Bruce Blumberg, project director for the Synthetic Animal team at Blue Fang Games, explains, “We are using a mix of proprietary software as well as off-the-shelf solutions. Part of the problem is that as you push the envelope in terms of developing a new type of user experience, existing frameworks become less attractive. This isn’t because the existing solutions aren’t well engineered, but rather because they are too tightly bound to existing game experiences.”
Next-gen game development is sure to appeal to a wider audience than ever before. This is part of the natural maturing process for the medium, but it also poses some unique challenges to developers. While we’ll most likely always see new variations on classic gaming genres, there are many other ways to entertain a mass audience besides high-powered virtual gunplay. From a game developer’s perspective, these challenges aren’t limited to tools alone. Of primary concern are the astounding time it will take and the cost developers will incur to detail a level in a next-gen game.
Robertson wonders: “One challenge we are facing is how do we create all this art? You can’t get away with just an old-fashioned diffuse map anymore. Now you have to make sure the normal maps are correct, the specular masks are in, plus any number of maps to support new rendering capabilities. Now, if you want to create a piece of level art, like a piece of machinery, you can’t just make the low-poly mesh. You have to make the high-poly mesh, generate normal maps from that, and then apply them to the low-poly mesh. With new rendering features, time also has to be spent tweaking materials in-engine to get the right amount of specular, sub-surface scattering, or gloss. This has increased art production time quite a bit. I think more and more studios are looking to outsource modeling tasks for this reason.”
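The workflow Robertson describes ends with a normal-map bake. As a minimal sketch of the underlying idea (not any particular tool's algorithm; the function name and the finite-difference approach here are my own), this derives a tangent-space normal map from a height field and packs it into the familiar 8-bit RGB texture:

```python
# Minimal sketch of a normal-map bake: turn a 2-D height field into a
# tangent-space normal map packed into 0-255 RGB, as a baking tool does
# when transferring high-poly surface detail onto a low-poly mesh.
import numpy as np

def bake_normal_map(height, strength=1.0):
    """Convert a 2-D height field to an 8-bit tangent-space normal map."""
    # Finite-difference slopes along x and y.
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength
    # Per-texel normal (-dx, -dy, 1), then normalize to unit length.
    n = np.stack([-dx, -dy, np.ones_like(height)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Remap components from [-1, 1] to [0, 255] for texture storage.
    return np.rint((n * 0.5 + 0.5) * 255).astype(np.uint8)

# A flat height field yields the familiar flat-normal color (128, 128, 255).
flat = bake_normal_map(np.zeros((4, 4)))
```

Real bakers ray-cast from the low-poly surface to the high-poly mesh rather than differentiating a height field, but the packing convention (flat blue-purple texels encoding up-facing normals) is the same.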
In terms of other trends in game development, Blumberg sees cross-disciplinary teams and tools becoming more important. “Right now the industry seems too compartmentalized: designers do their thing, artists do theirs, and engineers do theirs. Great characters require everyone investing in a truly collaborative way. And, of course, this will require the development of new kinds of tools. Animators, for example, should be able to direct the in-game behavior and motion of their characters so as to reflect their vision for the character. We are doing a new type of game experience that relies on our ability to develop expressive animal characters. This presents challenges from animation to behavior to interaction.”
He offers a simple example: “Most animation tools are optimized for modeling bipeds, and their solution for modeling quadrupeds is, ‘That is easy, just have the biped walk on all fours.’ Well, it doesn’t take much of an eye for motion to realize that this is hardly a solution if your goal is to develop animal characters that move with the grace and agility of real animals. And, of course, modeling behavior is a huge challenge. But we have some interesting ideas along these lines, and, in fact, some of the next-gen capabilities may figure prominently in some of our solutions.”
“Interestingly, established film production companies are starting to explore alternative structures (e.g., ILM’s new art pipeline and campus structure) to encourage artist investment and creativity,” adds Chouls. “It remains to be seen if the game industry can leapfrog the assembly-line phase, or if it will have to learn all over again that artists are not invested as Ogre Toe Nail Painter #5. Another exciting trend in game animation is the tools that blend procedural and keyframed/mocap data. The best example is probably NaturalMotion’s endorphin. Havok is also focusing on character physics in its next version. Combine these kinds of tools with runtime engines, and you have a powerful animation pipeline for next-gen titles. We’re also seeing a growing focus on facial animation, which will be increasingly important in next-gen games.”
Kripalani suggests that the next five years will see game development’s focus shift from graphics display technology to simulation technology. “For years, developers have been struggling with how to get rich graphics onto the screen. Using PS2 and Xbox technology, many companies were allocating up to 70% of the available processing power just for this task. The new systems have anywhere from eight to 20 times the processing power of the last generation. Even assuming HDTV resolutions, it is predicted that only 50% of the total computing power will be required for graphics display. Therefore, there will be ample cycles to spend on AI, physics, characters, etc. Levels and worlds will feel less structured. Character animations and interactions will no longer feel pre-baked.”
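Kripalani's figures imply a striking jump in the cycles left over for simulation. A back-of-the-envelope check (treating last-gen total power as one unit and taking his 70%, 50%, and 8x-20x numbers at face value):

```python
# Back-of-the-envelope check of Kripalani's figures: if last-gen hardware
# is 1 unit of power with ~70% spent on graphics, and next-gen hardware is
# 8-20 units with only ~50% spent on graphics, how much more is left over
# for AI, physics, and characters?
last_gen_free = 1.0 * (1 - 0.70)             # 0.3 units for simulation
for multiplier in (8, 20):
    next_gen_free = multiplier * (1 - 0.50)  # units free for simulation
    print(f"{multiplier}x hardware -> roughly "
          f"{next_gen_free / last_gen_free:.0f}x the simulation budget")
```

By this rough estimate, developers would have on the order of 13x to 33x more processing headroom for AI, physics, and character behavior than on PS2-class hardware.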
As for last month’s shocking news that Autodesk had acquired Alias, speculation is rampant among the game developers who rely on those tools about what to expect next.
Not surprisingly, Gunn beams: “There has always been this rivalry between Max and Maya, and that wall has come down. No longer will I have to weigh the pluses and minuses of both packages. I can just use the best tools on the market and know they come from one place: Autodesk.”
By contrast, Robertson says: “I am a Max user, so you might think that I would be happy that Max won. Actually, I think it stinks. Competition is good; it only made both products better. Without the competition there, who knows where they are going to end up? I imagine calling customer support and being treated the way the cable company treats you when you call.”
Fred Galpern is currently the art manager for Blue Fang Games, located just outside Boston. He is also a part-time Maya instructor at Northeastern University. Since entering the digital art field more than seven years ago, Galpern has held management positions in several game and entertainment companies, including Hasbro and Looking Glass Studios. He began his art career as a comic book creator and also has professional graphic design experience.