Epic Games Founder Tim Sweeney Talks Tech

Tim Sweeney, the founder of Epic Games, talks about how advances in processing power will continue to advance games.

By John Gaudiosi

Twenty years ago, a very smart college kid started releasing shareware games that he made at his mom’s house, and he needed a catchy name for his “company.” He ended up choosing Epic MegaGames to make the operation sound bigger than it was.

That kid was Tim Sweeney. And at the 2012 D.I.C.E. Summit (Design, Innovate, Communicate, Entertain) in Las Vegas, Sweeney, the founder of Epic Games, was inducted into the Academy of Interactive Arts & Sciences Hall of Fame. Sweeney, who has pushed game technology forward with Unreal Engine 3, talks about how advances in processing power will continue to advance games.

John Gaudiosi: How have you seen game development improve since you first started programming?

Tim Sweeney: I wrote my first game, ZZT, back in 1991, and it wasn’t even attempting to approximate reality. It wasn’t until the first 3D games that we began to actually approximate reality through computer rendering. Doom is a great example of that. It’s a first-order approximation, where the scene is rendered by approximating a single bounce of light from each point in the world straight to your eye, without any intermediate effects.

As we got more computing power, we were able to reach the second-order approximation with Unreal, where we’re modeling two bounces of light. Light starts at a light source, bounces off a point in the world, and reaches the viewer’s eye; in between, it might encounter shadows, and color propagates throughout the environment. About 99 percent of the graphics you see in today’s games for Xbox 360 and PlayStation 3 are just using the second-order approximation. We’re just starting to get enough computing power now to reach a third-order approximation, which we displayed in our Samaritan demo at GDC last year.
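The idea of these “orders” can be sketched in a few lines of code. The snippet below is not Epic’s renderer; it is a minimal illustration, with made-up scalar values standing in for real spectral and material math, of how each additional bounce adds another term to the light that reaches the eye.

```python
# Minimal sketch (hypothetical values, not Epic's code): each "order of
# approximation" adds one more bounce of light to the estimate.

LIGHT_INTENSITY = 10.0   # hypothetical light source strength
ALBEDO = 0.5             # fraction of incoming light the surface re-emits

def first_order(visibility_to_eye=1.0):
    """Doom-style: one bounce, surface point straight to the eye,
    no shadows or color bleeding."""
    return LIGHT_INTENSITY * ALBEDO * visibility_to_eye

def second_order(shadow_factor=0.8, neighbor_albedo=0.3):
    """Unreal-style: light -> surface -> eye, with shadowing plus one
    extra bounce carrying color from a nearby surface."""
    direct = LIGHT_INTENSITY * ALBEDO * shadow_factor
    indirect = LIGHT_INTENSITY * neighbor_albedo * ALBEDO  # via one other surface
    return direct + indirect

if __name__ == "__main__":
    print("first-order :", first_order())    # direct light only
    print("second-order:", second_order())   # direct + one indirect bounce
```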

J.G.: How has computer power evolved over the 20 years you’ve been making games?

T.S.: Doom’s rendering approximation required about 10 million floating point operations per second of computing power. In 1998, Unreal required a billion. Our Samaritan demo required about 2.5 trillion floating point operations per second.
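Taking those three figures at face value, the jumps span several orders of magnitude. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Rough scaling of the figures Sweeney cites (rounded, order-of-magnitude only).
doom_flops      = 10e6     # ~10 million FLOPS (Doom-era approximation)
unreal_flops    = 1e9      # ~1 billion FLOPS (Unreal, 1998)
samaritan_flops = 2.5e12   # ~2.5 trillion FLOPS (Samaritan demo)

print(f"Doom -> Unreal:      {unreal_flops / doom_flops:,.0f}x")       # ~100x
print(f"Unreal -> Samaritan: {samaritan_flops / unreal_flops:,.0f}x")  # ~2,500x
print(f"Doom -> Samaritan:   {samaritan_flops / doom_flops:,.0f}x")    # ~250,000x
```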

We’ve already scaled performance across many orders of magnitude, but I think we have further to go still, because many aspects of the realistic scenes we see today require many bounces of light to simulate accurately. For example, the soft shading you see on skin is the combination of many different effects. There’s the oiliness of the skin, which reflects light from the environment off the surface and toward the viewer. There’s also light transmitting through the surface and through the three-dimensional volume within the skin, picking up and re-emitting colors to produce the subtle highlights you’d expect in a human face. We’re still far short of being able to achieve this in real time with complete, movie-level accuracy.

J.G.: How do you see advances in technology impacting the games we’ll see moving forward?

T.S.: Given sufficient computing power, we absolutely understand lighting and shadows and color and skin. We can expect that, over the next several decades, we’ll come very close to reality in computer graphics in these areas.

But we’re still a very long way from accomplishing that. I think we’re still about a factor of 2,000 short of being able to simulate these known aspects of light transmission throughout environments and representing completely accurate scenes. There are problems that we don’t even know how to solve given infinite computing power. These come in the form of simulating accurate human thought or movement or speech or any other aspect of human intelligence.

J.G.: What role will Moore’s Law play in gaming?

T.S.: Over the past 40 years, we’ve seen computing power double roughly every two years as transistors have been shrunk smaller and smaller, but we’re starting to run into trouble because our transistors are approaching the size of atoms. While you might be able to make a one-atom transistor, you certainly can’t split an atom in half and create smaller transistors. Nobody’s ever seen more than about three generations ahead in microprocessor manufacturing technology, so there are actually a lot of possibilities for the future to go beyond our current limits.
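Doubling every two years compounds dramatically over a 40-year span; the arithmetic behind that claim is simple to check:

```python
# Doubling every two years over 40 years, as described above.
years_per_doubling = 2
span_years = 40
doublings = span_years // years_per_doubling   # 20 doublings
growth = 2 ** doublings
print(f"{doublings} doublings -> roughly {growth:,}x more computing power")  # ~1,048,576x
```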

J.G.: What’s one possible direction you think things could go?

T.S.: Things could go vertical, where you stack multiple layers of chips on top of each other until you achieve a much higher amount of computing power. If you look at the number of transistors in a chip, it’s about 10,000 transistors by 10,000 transistors. That’s a really impressive number, but the stack is only one layer high now. If you made it as tall vertically as it is wide horizontally, that’s another huge increase in computing power. There’s also the promise of quantum computing coming up over the next few years, and in the last few years there have been a lot of practical advances in this area.
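Taking that thought experiment at face value (these are the quoted round figures, not a real chip’s transistor count):

```python
# Sweeney's vertical-stacking thought experiment, using his round numbers.
side = 10_000                  # ~10,000 transistors along each edge of a flat chip
per_layer = side * side        # ~100 million transistors in one layer
layers = side                  # stacked "as high vertically as it is horizontally"
stacked = per_layer * layers   # ~1 trillion transistors

print(f"one layer: {per_layer:,} transistors")
print(f"{layers:,} layers: {stacked:,} transistors "
      f"({stacked // per_layer:,}x more)")
```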

John Gaudiosi has been covering video games for the past 17 years for media outlets such as The Washington Post, CNET, Wired magazine and CBS.com. He is editor in chief of GamerLive.tv and a game columnist for Reuters and RhMinions.com. He is a frequent contributor to Digital Innovation Gazette.
