No VR game-changer amid the gaming tables.
Early last month in the gambling haven of Macau, I attended a SIGGRAPH conference for the first time in a long time. My initial SIGGRAPH experience as a student volunteer in 1993 was eye-opening, and I was a regular attendee and occasional speaker for the next 15 years – including the very first SIGGRAPH Asia in Singapore in 2008. As an outsourced event, SIGGRAPH Asia has always been a distant relative of "SIGGRAPH SIGGRAPH" (as many folks refer to the "real" conference by comparison), but the initial offerings in Singapore and Yokohama were respectable.
Cut to today. Scarfing down a plate of Doritos at the SIGGRAPH Asia 2016 opening reception is a far cry from doing shots off the back of an SGI Onyx at the Nixon Library in the heady days of 1993 (though perhaps an appropriate analogy for the austerity arc of the graphics industry over the past two decades). Twenty years ago, there was a palpable sense that "anything is possible," even though much still was not. Now – at a time when anything essentially is possible – we seem to be holding back. These days, CGI is like Doritos: tasty but predictable. And the VR game-changer has yet to emerge.
I attended all four days of the SIGGRAPH Asia conference and decided to distill my notes into three key observations (I'm a big fan of The Rule of Three). The following takeaways are not necessarily indicative of the state of the art in Asia, but are reflective of the state of the art as filtered through this year’s SIGGRAPH Asia submissions and programming process.
1. TRADITIONAL MEDIA IS TREADING WATER.
CGI may have matured, but it's grown from a geeky wild child into a rather boring adult. The SIGGRAPH Asia Electronic Theater wasn't tedious (a dubious compliment, I know). My favorite Electronic Theater piece was Breaking Point from the Filmakademie Baden-Württemberg, featuring stellar production values and a clever twist. I also sat through the looping Animation Theater, grouped into categories including "Rising Stars (Student Films)," "Shorts & Features," and "Games & Commercials." It was the usual grab bag: a few interesting pieces, but nothing really innovative. I noticed more burly, mustachioed fathers than before (perhaps demonstrating that Cloudy with a Chance of Meatballs was more resonant than we thought), and the French seem to be in a deeper funk than usual. Here again, the Germans came to the comic rescue (who'da thunk it?) with the Filmakademie's crazy Pirate Smooch and the loopy 2D graphics and surreal social commentary of What They Believe – my favorite of the Animation Theater.
I hiked over to the Macau University of Science & Technology, hoping for some inspiration at the SIGGRAPH Asia Art Gallery, but was greeted by an anemic display with muddled, joyless "interactive" pieces (and in some cases, videos of physical works instead of the actual works themselves). I understand that this year's Art Gallery was an ad hoc affair, but still.
2. VIRTUAL REALITY IS TAKING BABY STEPS.
My motivation for attending SIGGRAPH Asia this year was to check out emerging developments in virtual reality. With 2016 widely touted as VR's "Launch Year," I was hopeful. But my hopes were dashed. Most of the major VR players were absent from the SIGGRAPH Asia Trade Show floor (the entirety of which would fit into a major studio's booth at SIGGRAPH in Los Angeles). While it's true that Asia is saturated with competing VR conferences and summits in any given week, the absence of the usual suspects (and even the unusual suspects) was puzzling. I gathered materials from the few Asian schools and labs present, but their VR/AR offerings were squarely on the computer science side, and rather pedestrian.
Tencent ruled the roost by default, although their AR research with Tsinghua University chiefly consists of replicating Google's developments while China keeps the G-men out. Tencent's work was impressive, but more for how fast they're following than for how boldly they're innovating.
The Emerging Technologies zone, normally my favorite destination at SIGGRAPH, featured not one paradigm shift. The adjacent VR Showcase had a few glimmers. Robert Chen, a former DreamWorks TD and now Effects Lead at Oculus Story Studio, gave a solid presentation on the "illustrative filmmaking" of Dear Angelica, enabled by Oculus' proprietary Quill VR paint program (which takes the functionality of Google Tilt Brush to the next level). I was also impressed by the folks at Rectifeye, who demonstrated an automated, app-driven VR focal system for the two-thirds of humans like myself without perfect eyesight who struggle to see clearly in today's "one-form-fits-all" VR headsets.
I eagerly attended the packed session on "VR Capture: Designing and Building an Open Source 3D-360 Video Camera," by Brian Cabral, Facebook's director of engineering. Surround360 is part of Facebook's strategy to bring VR to the public. The presentation began promisingly. Brian laid out the camera system's goals – a high-quality, reliable, durable, fully spherical (and almost fully stereoscopic), open and accessible, end-to-end system – and outlined the integrated advances in computer vision, CMOS sensors, color science, photogrammetry, optics and CGI. So far, so good. Then came Q&A and the forehead-slappers: an adoption-killing $35,000 USD price tag (slap!), and (incredibly) no audio (SLAP!). As any novice knows (even if they can't articulate it), binaural audio is essential to VR content – especially in a medium that requires the art of "indirection."
Facebook's brain fart on this front was lamely addressed with the observation that binaural audio isn't cheap or easy to implement, and they're hoping other developers will "optimize" the system (which is akin to manufacturing a Tesla without a steering wheel and hoping consumers will jury-rig one later). Such wishful thinking may come to fruition if Facebook is willing to donate their work-in-progress camera to university labs, but at $35,000 USD their deaf camera is DOA. An astonishing goal-line fumble for a social network.
3. THE UNCANNY VALLEY IS STEEP AND SLIPPERY.
As movies such as The Curious Case of Benjamin Button and Furious 7 have arguably demonstrated, it is possible to escape the Uncanny Valley of creepy virtual humans. However, like a black hole, the pull of the Uncanny Valley is powerful.
Ever since my graduate school days at The Ohio State University's Advanced Computing Center for the Arts and Design, I've been intrigued by the work of Nadia Magnenat Thalmann (founder of the University of Geneva's MIRALab) and Daniel Thalmann – both now also teaching at Nanyang Technological University in Singapore – on interactive virtual humans. The Thalmanns' SIGGRAPH Asia presentation, "Modeling Behavior for Social Robots & Virtual Humans," was an insightful update on their ongoing inquiries, and featured some very cool tech. Unfortunately, "realistic" robots such as the Thalmanns' "Nadine" tumble deep into the Uncanny Valley with their mask-like expressions and awkward cadences. It was hard to tell which was more unsettling: Nadine's "angry" voice or her eerie Adele karaoke. Furthermore, the limited range of "correct" interactions compels humans to communicate robotically themselves in order to be understood by their synthetic companions. At the other end of the spectrum, videos from Boston Dynamics of large robots inexorably rising to their feet after being knocked over by humans are unsettling on a different level (Japanese researchers at least have the sense to put cute faces on their service robots). In a chat with the Thalmanns during a break in their course, Nadia acknowledged the public's combined fascination with – and fear of – robots, but observed that, like atomic energy, "the genie is out of the bottle."
In the SIGGRAPH Asia session "Virtual Reality Meets Physical Reality: Modeling and Simulating Virtual Humans and Environments," researchers presented work on ambient crowd behaviors for virtual environments, including gaze behaviors to engage viewers in passing. Good stuff, but we again descended into the Uncanny Valley – this time on a collective level. The virtual passersby were too purposeful and isolated, like extras in the Justin Timberlake sci-fi film In Time. Crowd simulation for VR must organically emulate typical social dynamics such as distractions, loitering, couples and groups in order to be unobtrusively convincing.
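The ambient variety those researchers were reaching for can be illustrated with a toy sketch: rather than routing every background agent purposefully from A to B, assign each one a behavior drawn from a weighted mix of everyday activities. This is a hypothetical Python illustration of the idea, not the presenters' actual system; the behavior labels and weights are my own invention.

```python
import random

# Hypothetical mix of ambient behaviors for background crowd agents.
# Weighting most (but not all) agents toward purposeful walking, with a
# share of loiterers, conversational clusters, and passing glances, is
# what keeps a virtual crowd from reading as staged extras.
AMBIENT_BEHAVIORS = {
    "walk": 0.50,        # purposeful locomotion between goals
    "loiter": 0.20,      # standing around, window shopping, checking a phone
    "group_chat": 0.20,  # pairs or small groups in conversation
    "glance": 0.10,      # brief gaze toward the viewer in passing
}

def assign_behaviors(num_agents, weights=AMBIENT_BEHAVIORS, seed=None):
    """Return one behavior label per agent, drawn from the weighted mix."""
    rng = random.Random(seed)
    labels = list(weights)
    probs = list(weights.values())
    return [rng.choices(labels, probs)[0] for _ in range(num_agents)]

crowd = assign_behaviors(100, seed=42)
idle = sum(1 for b in crowd if b != "walk")
print(f"{idle} of 100 agents are doing something other than walking")
```

A production crowd system would of course layer transitions between these states (a loiterer rejoins the flow, a walker stops to glance), but even a static mix like this avoids the every-extra-on-a-mission look the session's demos fell into.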
Paul Debevec's keynote on "Achieving Photo-real Virtual Humans in Movies, Games and Virtual Reality" was a welcome bright spot. Paul is well-known for his work at the USC Institute for Creative Technologies in photogrammetry, global illumination, Light Stage scanning, HDRI, light field recording and facial animation capture – which has enabled films from The Matrix to Avatar to Gravity – and the large ballroom was packed. Paul's creative arc from childhood to academia to Hollywood was inspirational, and his stories ranged from the entertaining (portable Light Stage scanning of President Barack Obama) to the poignant (producing light field recordings of Holocaust survivors with USC's Shoah Foundation).
Paul wound down his keynote with these points:
- Digital technology will not replace real actors.
- Games and VR will drive automated performance technology.
- Live-action filmmaking will evolve into animated filmmaking & virtual production.
- There is still much work to be done in making the technology less expensive and more empowering for content creators.
- New challenges and opportunities lie ahead in areas we dare not yet imagine.
- Continued advances in digital technology hold serious implications for truth.
- With great power comes great responsibility.
And he concluded with this:
"We are inventing very powerful tools. They can be used to inform, inspire and entertain. They can be used to deceive, oppress and manipulate. We can't control how they will eventually be used. We can influence how they will first be used. We can help the public understand them. Don't just invent. Create! Don't just create. Invent!"
Inspirational words that hopefully will be reflected in the offerings at next year's SIGGRAPH Asia in Bangkok.