High-definition gaming is on the cusp of a visual evolution. The past year’s introduction and slow proliferation of 3D-enabled games, displays and laptops suggest that the next major frontier is on the horizon.
Whether 3D moves beyond a stylistic evolution and becomes a revolution, though, remains to be seen. As some developers and players note, the unique visual effect of 3D -- with the initial disorientation of viewing a scene with an illusion of depth and then continuing to direct the action -- can take some getting used to. However, the PC games that have made the jump to 3D run the gamut, including StarCraft II, Call of Duty: Black Ops, World of Warcraft and Duke Nukem Forever.
Leading graphics card manufacturers have released platforms that comprise cards, drivers and glasses that allow developers to optimize their games in 3D, or players to apply 3D to their existing games. Studies show that game ratings measurably increase as new effects such as 3D are added to a game. So now that 3D is here, adding it to a game may only help.
PC developers typically need to spend far less time adding 3D -- often just tweaking a game’s rendering effects -- than console developers, who may spend months rewriting a game engine from the ground up to support it. And Mick Hocking, a vice president at Sony Computer Entertainment Europe and the head of the company’s 3D initiative, says that while some of the technology used to produce high-quality 3D displays has existed for a long time, it’s only recently become available at a consumer price point.
With these things in mind, what do developers who are interested in 3D need to know?
Getting the Basics
A common misconception is that 3D only works for certain genres of games, like shooters that require judgments in depth, or slow-moving games that afford players more time to enjoy the view. But it’s more about figuring out how the specific aspects of 3D can be best applied to a given game.
“It’s not just about adding depth to a game,” says Hocking. The basic principle of 3D is displaying two separate images: one for the left eye and one for the right. One technique, called full-frame dual-camera 3D, renders the scene from two camera positions set a certain distance apart, viewed through shutter glasses. Another, called reprojection, 2D-to-3D conversion, or virtual 3D, creates the second image by offsetting the pixels of the original game frame to the left and right.
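The two approaches can be sketched in a few lines. Everything here -- the function names, the axis-aligned eye offset, and the linear disparity model -- is an illustrative assumption, not taken from any particular engine or SDK:

```python
# Sketch of the two stereo techniques described above (illustrative only).

def stereo_eye_positions(camera_pos, interaxial):
    """Full-frame dual-camera 3D: offset two eye cameras horizontally.

    camera_pos: (x, y, z) world position of the original mono camera
    interaxial: separation between the virtual eyes, in world units
    Returns the (left, right) eye positions; each eye renders a full frame.
    """
    x, y, z = camera_pos
    half = interaxial / 2.0
    return (x - half, y, z), (x + half, y, z)


def reprojection_disparity(depth, convergence_depth, scale):
    """Reprojection (2D-to-3D): signed horizontal shift for one pixel.

    depth: the pixel's normalized depth, 0.0 (near) to 1.0 (far)
    convergence_depth: depth that maps to the screen plane (zero shift)
    Positive results place the pixel behind the screen (positive parallax);
    negative results pull it out in front (negative parallax).
    """
    return scale * (depth - convergence_depth)


def reproject_pixel(pixel_x, disparity):
    """Split one source pixel into its left- and right-image x positions."""
    return pixel_x - disparity / 2.0, pixel_x + disparity / 2.0
```

The dual-camera path renders the whole scene twice per frame, roughly doubling rendering cost; reprojection synthesizes the second view from one rendered frame plus its depth information, which is cheaper but can leave gaps where no pixel data exists for the offset view.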
Achieving a comfortable level of depth means setting the left and right images at the proper distances to achieve positive parallax (depth in the screen) and negative parallax (depth in front of the screen). In the real world, eyes are parallel when viewing an object at the horizon. Exceeding comfortable limits by placing distant images too far left and right can result in an image that goes beyond parallel with the eyes, asking them to look outward. Bringing an object too far out of the screen compels the player’s eyes to rotate inward. In terms of depth, a basic balance must be met between excitement and comfort.
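The divergence limit above can be checked with back-of-envelope arithmetic: the on-screen separation between a distant object’s left and right images should not exceed the viewer’s interocular distance (roughly 65 mm), or the eyes are pushed beyond parallel. The function name and the 24-inch-monitor figures below are illustrative assumptions:

```python
# Estimate the largest on-screen separation, in pixels, that keeps the
# eyes at or inside parallel for objects at infinity (illustrative sketch).

def max_safe_disparity_px(screen_width_mm, horizontal_res, interocular_mm=65.0):
    """Separation beyond this forces the eyes to rotate outward."""
    mm_per_pixel = screen_width_mm / horizontal_res
    return interocular_mm / mm_per_pixel

# A 24-inch 16:9 monitor is roughly 531 mm wide and 1920 pixels across,
# giving a limit of about 235 px of separation for far-distant objects.
limit = max_safe_disparity_px(531.0, 1920)
```

The same separation in pixels covers more physical millimeters on a large living-room TV than on a desktop monitor, which is one reason comfortable depth settings depend on the display as well as the game.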
“Mostly when we produce 3D, we have the main object of interest near the plane of the screen,” says Hocking. “We have a nice sense of depth going in the screen, which is typically how you’ll play most of the game.”
Moving Into the Next Dimension
PC monitors have an advantage over TVs in being able to display 3D at 1080p60. Although glasses-free monitors and televisions are emerging, and passive polarized glasses present a less bulky option, the current standard is set by combinations of active shutter glasses and 120Hz 3D displays.
Hocking says that glasses-free screens currently suffer from limited viewing angles -- meaning that the viewer needs to sit at a sweet spot for the effect to work -- as well as limited depth and artifacts like shimmering or ghosting. There is still some debate about the level of eye strain caused by an active shutter setup. But passive 3D displays that use polarized glasses -- as in movie theaters -- cut the resolution in half, so games that rely on details such as text, maps and items will suffer from the more garbled image.
Anti-ghosting is another important consideration. A monitor with an insufficiently fast response time leaves a double image in the eye, a sort of shadow effect that strains the player. According to developer Phil Nowell of Ready at Dawn Studios, ghosting is also much less prominent on passive displays than on active ones. Considering that image resolution is especially important in many PC games, however, accepting an active display’s extra ghosting may be a necessary tradeoff. Artistic decisions can also affect ghosting: Jim Van Verth, an engine programmer at Insomniac Games, found that ghosting often occurs when a bright section of the screen sits next to a very dark one.
3D as an Art Form
3D presents a number of creative challenges and questions, which will only increase as more developers use it. Convergence -- the distance at which the left and right images align on the screen plane, which determines a scene’s range of depth -- affects both gameplay and cut scenes. The specific camera implementation in a game -- whether it’s a fully controllable first-person camera, a third-person camera at a fixed distance from the avatar, or a static isometric camera -- naturally makes this more or less complicated.
Using negative parallax to suddenly bring an image out of the screen is perhaps the archetypal 3D effect. Hocking recommends restricting this to dramatic moments, as it isn’t comfortable to view for long; it may be ideal for cut scenes, where users can’t control the camera. Bringing HUD or other important UI elements just out of the screen can also be a simple but effective use of 3D.
Another basic principle is easing the player into various levels of depth with subtle transitions. “We found that when we did a camera cut from a really deep scene, we needed to just flatten everything and slowly let it expand,” says Nowell. “Otherwise people go, ‘Ah, that was a camera cut; I’m playing a game.’” Objects protruding out from the sides of the screen are also visually disruptive, as they call attention to the real-world borders of the image.
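The flatten-then-expand transition Nowell describes can be sketched as a per-frame scale applied to the stereo separation after a cut; the smoothstep curve and half-second ramp here are illustrative assumptions, not a description of Ready at Dawn’s implementation:

```python
# Ease stereo depth back in after a camera cut (illustrative sketch).

def depth_scale_after_cut(t, ramp_seconds=0.5):
    """Return a factor from 0.0 (flat, 2D) to 1.0 (full depth).

    t: seconds elapsed since the camera cut. Multiplying the interaxial
    camera separation by this factor each frame flattens the image at
    the moment of the cut and smoothly restores depth afterward.
    """
    if t >= ramp_seconds:
        return 1.0
    x = t / ramp_seconds
    return x * x * (3.0 - 2.0 * x)  # smoothstep: eases in and out
```

Driving the interaxial separation (rather than convergence) toward zero is one simple way to "flatten everything," since zero separation makes the left and right images identical.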
Poorly implemented 3D “feels horrible to view,” says Hocking. “It can have many problems. There can be uncomfortable depth to view, and there can be misalignment between the images. There could be a poor-quality screen that’s being used. It could be adding 3D where 3D doesn’t really need to be added to the experience.”
On the other hand, when used the right way, even intrinsic aspects of 3D can make the difference between a promising scenario and an immersive encounter. A greater sense of depth has a tangible benefit to the player in a baseball or tennis game, for example, while a clearer sense of scale can change how a massive building or boss creature is perceived.
It’s a common sentiment that 3D game development, as a creative approach, is in its very earliest stage. Experts and developers speculate on using 3D to deliver a true sense of vertigo by controlling the rate of change of convergence planes or amplifying its shock value in survival horror games. In one innovative use of 3D technology, Sony is experimenting with allowing active shutter wearers to play together on one screen by having one player view the 2D left image and one player view the right. When it comes to 3D development, the horizon’s the limit.