Ellen Wolff previews some of the presentations that may presage the hot tools of tomorrow.
Believable bubbles, credible clones and cascading curls are just a few of the topics explored in the technical papers chosen for SIGGRAPH '08. While the halls of the annual ACM conference overflow with people selling all kinds of CG products, it is in the Papers sessions that one gets a glimpse of what next year's hot tools could be.
The Papers also provide an interesting benchmark for the growth of the industry as a whole, because this was a record-setting year. According to Papers Chair Greg Turk, "The number of submissions that we had -- 518 -- was a record. We've been knocking on 'the 500 door' for a year or two, but we've never had quite this many papers before." Ninety of those papers were ultimately chosen for the conference, and Turk notes that the relatively low acceptance rate of 16-20% reflects how high the bar is for SIGGRAPH. Despite the volume of submissions, he adds, "Every paper was reviewed by at least five people, which is more than most conferences. I had 65 primary reviewers, each of whom reviewed 18 to 20 papers. We wanted to be thorough, and give good feedback to the authors."
Turk, who is from Georgia Tech, led an international team of reviewers who determined this year's selected papers. His Papers Committee comprised Adam Finkelstein and Tom Funkhouser from Princeton University, Jessica Hodgins from Carnegie Mellon University and Markus Gross from ETH Zürich. They were joined by an extensive group of Area Coordinators that included experts from Autodesk, Microsoft, NVIDIA, DreamWorks, Disney and Pixar. As Turk explains, "It's an international committee. We usually have more academics, but we try to get representation from industry, from hardware vendors to special effects."
Sims the Word
Perhaps not surprisingly in a year when the Academy of Motion Picture Arts and Sciences bestowed major Sci Tech honors on fluid simulation breakthroughs, several papers chosen for SIGGRAPH address this topic. "There are a lot of great papers in fluid simulation," says Turk, who's an expert in this area of research himself. "We had a ton of physically-based animation submissions, including fire and smoke, as well as water. I was also struck by how many papers we have about hair -- both in hair animation and in the realistic rendering of hair. Two papers are about the multiple scattering of light on hair. They show beautiful backlit hair that looks so natural, it notches up the realism. Those multiple scattering methods will eventually trickle into the digital visual effects world."
The viability of this type of investigation is a reflection of today's improved hardware and software, observes Turk. "Ten years ago, you couldn't think about doing something as complicated as these computations. Now that computers are a lot faster and algorithms are more sophisticated, you can consider doing multiple scattering of light for hair."
Cloth simulation was also a popular topic (and an essential aspect of creating believable digital doubles). Turk singles out one paper in particular from Cornell University researchers, "Simulating Knitted Cloth at the Yarn Level," which demonstrates the simulation of a knitted scarf in exceptionally realistic detail. "People didn't think before that they'd need to simulate each strand. These guys did it. They found that cloth has different physical properties when you simulate each strand, rather than treating cloth like a stretchy sheet."
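To get a feel for what "simulating each strand" means, here is a toy Python sketch of a single strand modeled as a chain of point masses joined by springs, sagging under gravity. The constants and setup are invented for illustration; this is a sketch of strand-level dynamics in general, not the Cornell paper's method.

```python
# Toy mass-spring "yarn" strand: point masses joined by stiff springs,
# stepped with damped explicit Euler. Constants are illustrative only.

GRAVITY = -9.8      # force on y per unit mass
REST_LEN = 0.01     # rest length of each yarn segment
STIFFNESS = 500.0   # spring constant
DAMPING = 0.98      # velocity damping per step
DT = 0.001          # time step

def step(points, velocities):
    """Advance the strand one time step. points[0] is pinned in place."""
    n = len(points)
    forces = [[0.0, GRAVITY] for _ in range(n)]
    for i in range(n - 1):
        (x0, y0), (x1, y1) = points[i], points[i + 1]
        dx, dy = x1 - x0, y1 - y0
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        f = STIFFNESS * (dist - REST_LEN)      # Hooke's law along the segment
        fx, fy = f * dx / dist, f * dy / dist
        forces[i][0] += fx; forces[i][1] += fy         # pull i toward i+1
        forces[i + 1][0] -= fx; forces[i + 1][1] -= fy # pull i+1 toward i
    for i in range(1, n):                      # index 0 stays pinned
        velocities[i][0] = (velocities[i][0] + DT * forces[i][0]) * DAMPING
        velocities[i][1] = (velocities[i][1] + DT * forces[i][1]) * DAMPING
        points[i][0] += DT * velocities[i][0]
        points[i][1] += DT * velocities[i][1]

# A horizontal strand of 10 masses, pinned at the left end:
points = [[i * REST_LEN, 0.0] for i in range(10)]
velocities = [[0.0, 0.0] for _ in range(10)]
for _ in range(2000):
    step(points, velocities)
# The free end sags under gravity while the pinned end stays put.
```

The point of per-strand simulation is that emergent behavior (stretch, sag, curl) comes from the segment-level forces rather than from a sheet approximation.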
So the digital characters of the future may be well-dressed and well-coiffed, but will they appear more believable in close-ups than they do now? Turk considers the question, saying, "One of the things I thought was striking about this year's papers is that there are two pretty compelling papers about faces. If you'd asked me what would have been a fascinating topic this year, I wouldn't have been able to guess 'faces,' but two research groups, working completely independently (in New York and Israel) came up with very different takes on working with human faces."
"One of them, 'Data Driven Enhancement of Facial Attractiveness,' I think will be noticed and talked about and probably controversial. These researchers took photographs, and using training data from user studies, they tried to have the computer learn what a cross section of people consider attractive facial forms.
"Then they applied that, using machine learning techniques, to semi-automatically recognize where the eyes and nose are in a photograph. Then -- in 2D -- they do a warping of those features and produce another face that looks just as realistic, but has many of the characteristics that the 'more attractive' faces have, like a more symmetrical mouth. It's nothing new to try to beautify a face, but now there are more automated ways of doing it. A logical question will be 'Can we do it in 3D?'"
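The warping step Turk describes starts from landmark positions. As a hedged illustration (the hand-made symmetric template and the function names here are invented stand-ins for the paper's learned "attractive" model), one can nudge detected 2D landmarks toward a target configuration before warping the photo to follow them:

```python
# Toy version of the 2D "beautification" idea: move each detected facial
# landmark a fraction of the way toward a target template, then a warp of
# the photo would follow the moved landmarks. Template values are invented.

def nudge_landmarks(landmarks, template, alpha=0.3):
    """Move each (x, y) landmark a fraction alpha toward the template."""
    return [
        ((1 - alpha) * x + alpha * tx, (1 - alpha) * y + alpha * ty)
        for (x, y), (tx, ty) in zip(landmarks, template)
    ]

# Detected mouth corners, slightly asymmetric about x = 0:
detected = [(-1.2, 0.0), (1.0, 0.1)]
# Symmetric "attractive" template for the same two points:
template = [(-1.1, 0.05), (1.1, 0.05)]

adjusted = nudge_landmarks(detected, template, alpha=0.5)
# adjusted ≈ [(-1.15, 0.025), (1.05, 0.075)] -- halfway to symmetric
```

Keeping alpha well below 1.0 is what preserves the person's identity while shifting toward the template's proportions.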
The other human face technique that caught Turk's eye was "Faces in Reflectance," which manipulated photos of people like Gwyneth Paltrow and Denzel Washington by replacing some of their features. "It's a bizarre thing to do," admits Turk, "but when you read the paper you find there is a reason to do this. For example, if you have a photograph of a street scene and you don't have permission to use the photo of a person in the scene -- you can 'de-identify' them, and the resulting photos look real. I think 'de-identify' is a word they made up for this paper!"
Techniques for creating diverse physical characteristics may become increasingly appealing to creators of digital crowds, especially since researchers at this year's SIGGRAPH have studied what attributes people notice when they watch digital clones. In a paper called "Clone Attack! Perception of Crowd Variety," researchers from Dublin's Trinity College considered the ways in which an impression of variety can be created in crowd simulation. When the researchers tested people's perceptions, they found that cloned appearances are far easier to detect than cloned movements. They established that cloned models can be masked by color variation, random orientation, and motion, and they believe that their insights will help artists create more realistic, heterogeneous crowds.
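That practical takeaway can be sketched in a few lines: give every clone instance its own color tint, facing direction, and animation phase, so identical models stop reading as copies. The instance fields below are illustrative assumptions, not the researchers' actual system:

```python
# Sketch of masking crowd clones with per-instance variation: every agent
# shares one mesh and one walk cycle, but gets its own tint, orientation,
# and phase offset. Field names are made up for this illustration.
import random

def vary_clones(n, seed=42):
    rng = random.Random(seed)   # fixed seed keeps the crowd reproducible
    instances = []
    for _ in range(n):
        instances.append({
            "tint_rgb": tuple(rng.uniform(0.6, 1.0) for _ in range(3)),
            "heading_deg": rng.uniform(0.0, 360.0),   # random orientation
            "walk_phase": rng.uniform(0.0, 1.0),      # desynchronized motion
        })
    return instances

crowd = vary_clones(200)
# 200 agents, one mesh, one walk cycle -- but no two render alike.
```

Because appearance clones are easier to spot than motion clones, the cheap tint and orientation variation does most of the perceptual work.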
Real Time Concerns
A cursory glance at the SIGGRAPH '08 papers shows a not-surprising concern with real-time CG. Turk observes, "There's always a balance between real time and 'slow, but beautiful.' The people who are pushing things to be faster are certainly looking at computer games as one possible application of their work. Computer games' relationship with academia and technical papers is quite different from its relationship with the special effects industry. The computer games people are not always going after the academics with PhDs who are research-oriented. A lot more often they're looking for really good coders who have strong technical backgrounds but are not necessarily interested in research.
"Part of that is that there are not that many research groups in computer games. There are many more research groups in houses like ILM, Pixar and PDI/DreamWorks. It's only the huge computer game companies that may have a few researchers. So one unfortunate side effect of that is we don't normally see as many research contributions at SIGGRAPH in the technical papers from computer game companies. But there was one paper that I was delighted to see this year because it comes from a computer game company, Electronic Arts, called 'Real-Time Motion Retargeting to Highly Varied User-Created Morphologies.' The paper is very cool."
This paper relates to an animation authoring tool used for EA's soon-to-be-released Spore, an Internet-based game that allows users to create a species of creatures and grow them from cellular to full-scale life forms. Turk summarizes the gist of the idea, saying, "If you create a creature with a strange limb configuration or unusual number of limbs and you've taught the creature -- in a generic way -- how to walk, you can lift that walking motion to another creature who may not have the same body shape or even the same number of limbs." Turk speculates that "Some of these techniques might be used for motion pictures as well. If you've got hundreds of creatures in a special effects shot and some of them are different, you could at least start with something automatically generated."
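One simplified way to picture "lifting" a gait across limb counts is to express the walk as per-limb phase offsets on a shared cycle; the same cycle then drives two legs or six. This toy sketch is an assumption-laden simplification of the idea, not the EA paper's algorithm:

```python
# Toy gait retargeting: describe a walk as a shared cycle plus per-limb
# phase offsets, so the same "walk" transfers to any number of limbs.
import math

def limb_phases(num_limbs):
    """Evenly stagger limbs around one gait cycle (0..1)."""
    return [i / num_limbs for i in range(num_limbs)]

def limb_lift(phase, t):
    """Foot height for a limb at cycle time t (0..1): the foot swings
    up and down during the first half of its phase-shifted cycle."""
    local = (t - phase) % 1.0
    return math.sin(2 * math.pi * local) if local < 0.5 else 0.0

# The same gait drives a biped and a hexapod:
# limb_phases(2) -> [0.0, 0.5]            (alternating legs)
# limb_phases(6) -> six offsets, 1/6 apart
biped = [limb_lift(p, 0.25) for p in limb_phases(2)]
hexapod = [limb_lift(p, 0.25) for p in limb_phases(6)]
```

The appeal for film crowds is the same as for Spore: author the motion once, then generate plausible variants for bodies the animator never saw.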
SIGGRAPH is famous for presenting papers that spark an "aha" reaction among audiences, such as when Henrik Wann Jensen presented his marble Venus de Milo demonstration of subsurface scattering several years ago. Turk isn't sure if we'll see a similar reaction this year, but his vote for an eye-opening idea goes to real time, gradient-based techniques for image manipulation. "There are two papers about this particular technique, and I think people will sit up and take notice."
"Real Time Gradient Domain Painting" (from researchers at Carnegie Mellon University) talks about how colors and tones change spatially at the pixel level -- it's a painting tool that uses differences in intensity. "Diffusion Curves" (from Adobe and the University of Washington) applies a similar idea to vector graphics in programs like Illustrator. "Both have beautiful results," says Turk. "They're totally interactive techniques, and I can easily see new painting tools being developed that use this idea. Tomorrow's matte painters may use this sort of technique."
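The gradient-domain idea is easiest to see in one dimension: edit the differences between neighboring pixels, then reintegrate to recover intensities (in 2D that reintegration becomes a Poisson solve). A minimal sketch of the concept, not either paper's code:

```python
# 1-D gradient-domain editing: work on pixel *differences*, then
# reintegrate. In 1-D a running sum replaces the 2-D Poisson solve.

def gradients(pixels):
    """Differences between neighboring pixel intensities."""
    return [b - a for a, b in zip(pixels, pixels[1:])]

def integrate(first_pixel, grads):
    """Rebuild intensities from a starting pixel and its gradients."""
    out = [first_pixel]
    for g in grads:
        out.append(out[-1] + g)
    return out

row = [10, 10, 10, 80, 80, 80]   # a hard edge between 10 and 80
g = gradients(row)                # [0, 0, 70, 0, 0]
g[2] = 20                         # soften the edge's gradient
softened = integrate(row[0], g)   # [10, 10, 10, 30, 30, 30]
```

Painting with gradients rather than colors is why these tools blend strokes so seamlessly: the solver spreads every edit smoothly across the image.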
Turk acknowledges that while companies like EA and Adobe are represented among this year's tech papers, academics represent a larger share. "Academics need to publish through peer review at conferences like SIGGRAPH. For people doing special effects in feature films, the reward is making a beautiful movie. I do think it's wonderful when people in industry also take the time to publish what they do because it benefits everyone. But not everybody takes that extra step, and that's why I think that we see more papers every year from academics." But he asserts that the benefits do trickle down to the creative community. "Researchers delve into the tough topics. What some people might think is a 2% difference in results can turn out to be much more significant. Then everybody wants it!"
For a complete list of links to the selected papers of SIGGRAPH '08, visit: http://kesen.huang.googlepages.com/sig2008.html
Ellen Wolff is a Southern California-based writer whose articles have appeared in Daily Variety, Millimeter, Animation Magazine, Video Systems and the website CreativePlanet.com. Her areas of special interest are computer animation and digital visual effects.