Raising the Bar with 'L.A. Noire'

Find out how Depth Analysis' MotionScan was used on the hottest new video game.

L.A. Noire is more immersive and cinematic, thanks to more sophisticated gameplay and character animation. Courtesy of Rockstar Games.

L.A. Noire, the new detective procedural developed by Team Bondi and published by Rockstar Games, recreates L.A. circa 1947 as a living, breathing city (influenced by Hammett, Chandler, Ellroy and a host of noir movies). It offers a new kind of immersive experience, produced much like a movie in its detailed production design, and lets gamers play detective, solving cases by relying not only on physical clues but also on the testimony of victims, witnesses and suspects. Reading people is crucial to making the right choices as a detective: interrogate a character the wrong way and they may clam up or get upset, while accomplished liars force you to fall back on other evidence to build a case against them. To make this possible, Team Bondi utilized MotionScan, which produces character animation believable and nuanced enough to withstand such close scrutiny. Oliver Bao, head of R&D for Sydney, Australia-based Depth Analysis, discusses the technical advancements made with MotionScan.

Bill Desowitz: What is MotionScan and how was it created and implemented to meet these challenges?

Oliver Bao: Depth Analysis' MotionScan is an optical system used to capture the performance of actors in 3D, for video games, films and other applications. It was originally developed in tandem with the L.A. Noire game, as [Team Bondi] felt that no other systems on the market afforded them the opportunity to capture authentic performances.

MotionScan technology allows games to be incredibly photorealistic, mainly because we take a different approach (in a number of ways) to other capture technologies currently available to game developers. Firstly, when MotionScan is used to capture the performance of an actor, what you see is what you get. When you think of performance capture, you usually think of those "behind the scenes" features that show uncomfortable camera headsets attached to the actors' heads, tracking their delivery of a key moment in the game or movie. With our technology, there are no dots or markers on the actors' faces to help capture their facial motions. Instead, inside the Depth Analysis MotionScan studio, 32 two-megapixel HD cameras are carefully calibrated and aligned to record 360 degrees of the actor, who usually takes a seat in the middle of the studio for the take.
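Bao doesn't spell out the reconstruction math, but the core geometric idea behind any calibrated multi-camera rig can be sketched in a few lines of Python with OpenCV. The projection matrices and pixel coordinates below are made-up stand-ins for illustration, not Depth Analysis values, and a two-camera triangulation only hints at what a dense, markerless 32-camera system does.

```python
# Illustrative sketch: triangulating one 3D facial feature from two
# calibrated cameras. MotionScan uses 32 cameras and dense, markerless
# reconstruction; this shows only the underlying geometry.
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices produced by the calibration step
# Bao describes. Real values come from calibrating the rig with a target.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # camera 1 at origin
P2 = np.hstack([np.eye(3), np.array([[-0.1, 0, 0]]).T])  # camera 2, 10 cm baseline

# The same feature (say, a corner of the mouth) observed in each image,
# as 2xN arrays of normalized image coordinates (here N = 1).
pt1 = np.array([[0.25], [0.10]], dtype=np.float64)
pt2 = np.array([[0.20], [0.10]], dtype=np.float64)

# Triangulate to homogeneous 3D coordinates, then dehomogenize.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
X = (X_h[:3] / X_h[3]).ravel()
print("Reconstructed 3D point:", X)  # -> roughly (0.5, 0.2, 2.0)
```

Repeat that across dozens of views and effectively every visible point of the face, 30 times a second, and you get animated geometry of the whole performance, along with the enormous data volumes Bao discusses below.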

MotionScan is a new breakthrough in photorealism.

BD: What makes MotionScan a game-changer in terms of making the experience more detailed, more believable and more cinematic?

OB: We've managed to reproduce lifelike performances of actors. Compressing the data to fit on game discs and rendering it back at decent speed and quality are the hurdles that made this impossible before. We've demonstrated that what you see is what you get: actors have their performances reproduced so faithfully that you can lip-read what they're saying in L.A. Noire. This is the first time gamers have been able to enjoy believable acting on a console.
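Bao doesn't detail the codec, and Depth Analysis' actual compression is proprietary, but one standard way to shrink per-frame facial geometry is principal component analysis: store a small basis of face shapes plus a handful of per-frame weights. A minimal sketch with made-up sizes:

```python
# Illustrative PCA-style animation compression; the real MotionScan
# codec is proprietary and may work quite differently.
import numpy as np

frames, verts = 300, 5000              # hypothetical clip: 300 frames, 5000 vertices
A = np.random.rand(frames, verts * 3)  # stand-in for captured per-frame positions

mean = A.mean(axis=0)
U, S, Vt = np.linalg.svd(A - mean, full_matrices=False)

k = 20                       # keep only 20 basis shapes
weights = U[:, :k] * S[:k]   # per-frame coefficients (frames x k)
basis = Vt[:k]               # basis shapes (k x verts*3)

# Playback: reconstruct any frame as mean + weighted basis shapes.
frame42 = mean + weights[42] @ basis

raw = A.size
packed = mean.size + weights.size + basis.size
print(f"compression ratio ~{raw / packed:.1f}x")  # ~14x at these sizes
```

The appeal of this family of techniques for consoles is that decompression is just a small matrix multiply per frame, cheap enough to run for several on-screen characters at once.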

BD: What kind of research and R&D was involved to make this happen?

OB: We oversaw multiple areas of research over the past three years. These included computer vision for the processing algorithms; machine vision for data acquisition; a production pipeline for the capture process; tools for data/script management, rendering and data export; and high-performance computing infrastructure to capture, store and process the data.

You get more believable acting on a console, giving you the ability to read witnesses and solve crimes.

BD: What was involved and what tools did you use?

OB: In short, we wrote our software on Linux using C++ and scripting languages such as Python. We built the system from scratch using off-the-shelf hardware (cameras, servers, etc.) and tweaked settings to maximise throughput for the massive amount of data we handle. As Depth Analysis is a startup with limited resources, we developed our tools in house and tuned the processing pipeline to match our data structures and the time needed to generate data. And because we're limited by the processing power of consoles, we had to tweak the data compression so that it's small enough to fit on optical discs, yet not so resource-hungry that it prevents smooth rendering of multiple characters.
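A back-of-the-envelope calculation shows why that disc budget forces compression. None of the figures below are published Depth Analysis specs; they are assumed round numbers purely to make the arithmetic concrete.

```python
# Hypothetical budget: uncompressed per-frame face geometry vs. disc space.
fps = 30                         # playback rate
verts = 5000                     # assumed face mesh resolution
bytes_per_vert = 3 * 4           # x, y, z as 32-bit floats
seconds_of_dialogue = 5 * 3600   # assume ~5 hours of captured performance

raw_rate = fps * verts * bytes_per_vert   # bytes/sec, geometry only
raw_total = raw_rate * seconds_of_dialogue
print(f"raw geometry: {raw_rate/1e6:.1f} MB/s, {raw_total/1e9:.1f} GB total")
# ~1.8 MB/s and ~32 GB before textures or audio -- several times a
# dual-layer DVD (~8.5 GB), hence aggressive compression and streaming.
```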

BD: What's next?

OB: For Depth Analysis, we are partnering with new games and film studios for our next project, as well as developing full-body scanning as our next milestone. So stay tuned.

Bill Desowitz is senior editor of AWN & VFXWorld.
