
Automated Anime and VR at SIGGRAPH Asia 2018

Exhibition hall highlights included algorithms that speed up 2D character animation production and more immersive virtual reality experiences.

Tokyo. I never need a reason to go to Tokyo. I know what you’re probably thinking: Shinjuku. Well, these days it’s the thoroughness of Tokyu Hands and the view from the Tokyo Skytree more than anything else…not to mention proper cold soba. Though I should mention that this trip also included an unexpectedly pleasant visit to a Hedgehog Café in Harajuku. More on that another time. Let’s get down to a little business.

For AWN, I had the pleasure of attending SIGGRAPH Asia 2018, held at the Tokyo International Forum in early December. My specific task was to check out the exhibition hall. Now, like any large convention, there’s just too much to look at and far too many people with whom to talk. Nevertheless, once I was on the floor, two areas of innovation stood out: automation and virtual reality (VR).

Obviously, this shouldn’t come as a surprise. Automation is pervasive. To use the words of Catullus, “odi et amo” (“I hate and I love”), since it is both the pleasure and the pain of a new kind of economy. And television commercials for the latest VR kits are now as common as those for Netflix and Amazon. So, automation and VR.

Since I’m usually writing about anime, the idea of introducing automation into the process of creating anime characters immediately struck me. In this case it was mostly about hair. Two research teams (Tokyo University of Science/Toyo University) have experimented with introducing algorithms to automate the hair waving process, i.e., the movement of hair in a light wind or strong breeze. For a character with a simple and repetitious color scheme, in this case Pikachu, one team has even taken steps toward automated coloring. Finally, another team (OLM Digital, Inc.) has successfully introduced a blur algorithm, a common technique to make objects moving at great velocities look more realistic.
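For readers curious what a blur step like that amounts to in practice, here is a minimal sketch in Python/NumPy of a naive directional motion blur: averaging copies of a frame shifted along an assumed velocity vector. It is purely illustrative, not OLM Digital’s technique, and the frame, velocity, and sample count are all invented for the example.

```python
import numpy as np

def naive_motion_blur(frame, velocity=(8, 0), samples=8):
    """Approximate motion blur by averaging copies of `frame` shifted along
    `velocity` (dx, dy). `frame` is an (H, W, C) float array in [0, 1].
    Illustrative sketch only, not the production technique shown at SIGGRAPH."""
    acc = np.zeros_like(frame, dtype=np.float64)
    for i in range(samples):
        t = i / max(samples - 1, 1)            # 0..1 along the motion path
        dx = int(round(velocity[0] * t))
        dy = int(round(velocity[1] * t))
        # Wraparound shift keeps the example short; a real pipeline would pad instead.
        acc += np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return (acc / samples).astype(frame.dtype)

# Usage: smear a synthetic "fast-moving ball" horizontally.
frame = np.zeros((64, 64, 3))
frame[28:36, 28:36] = 1.0                      # a white square standing in for the ball
blurred = naive_motion_blur(frame, velocity=(12, 0), samples=12)
```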

I know machine/deep learning and neural networks well. I have used them, and continue to use them, in my own work. For my datasets and the problems at hand, machine learning and the general process of automation are introduced because they are the right solution to the problem: automation performs a task a human either couldn’t do or simply didn’t want to do because the amount of time required is immense. In terms of the animation pipeline, are the tasks of producing waving hair, of coloring characters, or of creating a blurry effect similarly unwanted or overly burdensome?

I suppose the answer may vary, depending on the animator. Certainly the animation pipeline itself is a complex process worth making more efficient, e.g. the early introduction of computers as a means of speeding up how quickly animation is created. Yet we are no longer talking about tools or user interfaces for the artist. We’re talking about modeling (and thus automating) the act of the artist. Even in tiny, incremental steps, that’s a big deal. Adding efficiency is great. Saving time is great. I get it. But there’s a difference between saving time and working less, especially as the consultant/contract model of employment keeps expanding. Yes, automated blurring, hair waving, and coloring are not game changers, nor harbingers of a forthcoming A.I. takeover of the anime industry. They’re simply a reminder of how pervasive automation is. And yes, if these kinds of automated processes make animators happier and more productive, that’s cool.

Still, let’s focus on the creative process for a minute. This kind of automation has great potential to introduce large-scale repetition into the final product. That could be very bad. Most viewers might overlook that the blurring effect on a flying soccer ball is identical across multiple episodes and multiple anime series, or that Pikachu always seems to be the same color, but anime characters already look too much alike. If every character now moves in exactly the same way, down to their hair in a light wind, we’re looking at artistic mediocrity. Don’t get me wrong. The concept of templates, or “templating,” is very beneficial in digital development and production. But when it comes to putting pen to paper or tablet, technology needs to enhance the innate creativity of the artist, both digitally and virtually, not hinder or limit it in the name of efficiency. Even Pikachu shouldn’t always be the exact same shade.

As for VR, one question, I think, still persists for the general public. How immersive is it? Do I feel like I’m in the Matrix? Or do I feel like I’m watching a 3D movie with more than a few seconds of 3D? Perhaps the answer is somewhere in between.

Art Plunge is one of the first VR experiences I tried at SIGGRAPH. Designed to make the user feel “transported to the inner worlds of famous paintings,” the experience puts you in the very sky of Van Gogh’s Starry Night, or even in Mona Lisa’s room. As a VR interpretation of famous works of art, I have to say it works. Am I surprised? No. Anyone who has had the pleasure of experiencing what Magic Leap is capable of knows how advanced immersive VR technology already is. Yet Art Plunge is also indicative of the kind of “quick fix” VR that will increasingly become available off the shelf. That is, the immersion is limited to the movement of your head and the use of a handheld device. It is more entertainment than a virtual world in which one artificially lives and breathes.

That type of VR is coming, however. It’s just a matter of time and hardware, and much more than a headset is required. Engaging one’s senses is the challenge. And so there was a long line to sit in the “throne-like” chair system of FiveStar VR, which offers a multi-sensory experience by having users engage their legs, arms, and, of course, their eyes to move around a virtual tourist site. Multiple groups were also adapting motion-capture technology so that users wear a reduced amount of kit – not full body suits – to move their virtual character with their physical body. Regardless of the hardware, large or small, these immersive environments are impressive – for a few generations of gamers, this also gives a whole new meaning to the idea of playing The Sims. Be that as it may, you don’t quite “feel” like you’re in the Matrix. Feel is the word to focus on here.

While the techniques for creating VR environments are now relatively straightforward, introducing the senses into those environments is the next critical step. Hence all the body movement on the exhibition floor. Still, the real “Holy Grail” is being able to feel like you’re in the environment without necessarily moving your limbs. Enter a variety of haptic tech, such as Hap-Link (The University of Electro-Communications), which allows users to feel differences in texture, such as soft and hard. Lotus (National Chiao Tung University), on the other hand, is more interested in your nose than your fingertips: now you can smell in your VR environment. Yet perhaps the most ambitious research on display was the “Self-Umbrelling” effect from researchers at Nagoya City University, which strives to add a sense of real-time gravity by manipulating your point of view. Their research is based on data and studies of out-of-body experiences (OBE).

Automation and VR. We’re getting there, or somewhere. Somewhere between William Gibson’s Neuromancer and The Peripheral.