SIGGRAPH 2017 Technical Papers to feature computer science discoveries in fluid simulation, fabrication, geometry processing, and more.
CHICAGO, IL -- SIGGRAPH 2017, the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques, announces the acceptance of more than 125 technical papers that will be presented during this year’s conference. SIGGRAPH 2017 will mark the 44th International Conference and Exhibition on Computer Graphics and Interactive Techniques and will be held July 30 through August 3, 2017, in Los Angeles.
Submissions to the Technical Papers program are received from around the world and feature high-quality, never-before-seen scholarly work; submitters are held to extremely high standards in order to qualify. SIGGRAPH 2017 accepted 127 juried technical papers (out of 439 submissions) for this year’s showcase, an acceptance rate of roughly 29 percent. Forty papers from ACM Transactions on Graphics (TOG), the foremost peer-reviewed journal in the graphics field, will also be presented. As per SIGGRAPH tradition, the papers were chosen by a highly qualified peer jury made up of members from academia, alongside a number of field experts.
“Among the trends we noticed this year was that research in core topics, such as geometry processing or fluid simulation, continues while the field itself broadens and matures,” said SIGGRAPH 2017 Technical Papers program chair Marie-Paule Cani. “The 14 accepted papers on fabrication now tackle the creation of animated objects as well as static structures. Machine learning methods are being applied to perception and extended to many content synthesis applications. And topics such as sound processing and synthesis, along with computational cameras and displays, are opening exciting new directions.”
Of the juried papers, the percentage breakdown by topic area is as follows: 30 percent modeling, 25 percent animation and simulation, 25 percent imaging, 10 percent rendering, 4 percent perception, 3 percent sound, and 3 percent computational cameras and displays.
Inside Fluids: Clebsch Maps for Visualization and Processing
Authors: Albert Chern, California Institute of Technology; Felix Knöppel, Technische Universität Berlin; and Ulrich Pinkall, Technische Universität Berlin
Clebsch maps encode vector fields, such as those coming from fluid simulations, in the form of a function that encapsulates information about the field in an easily accessible manner. For example, vortex lines and tubes can be found by iso-contouring. This paper provides an algorithm for finding such maps.
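To give a flavor of the idea (using the classical two-function Clebsch representation rather than the spherical Clebsch maps the paper introduces), a Clebsch field v = φ∇ψ has vorticity ω = ∇φ × ∇ψ, so vortex lines lie along intersections of level sets of φ and ψ and can be found by iso-contouring. A minimal numerical sketch, with an illustrative choice of φ and ψ:

```python
import numpy as np

# Toy sketch of the classical (two-function) Clebsch representation
# v = phi * grad(psi). The grid size and field choices below are
# illustrative assumptions, not taken from the paper.
n = 48
x = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
h = x[1] - x[0]

phi = X**2 + Y**2          # level sets: cylinders around the z-axis
psi = Z                    # level sets: horizontal planes

def grad(f):
    return np.array(np.gradient(f, h))

gphi, gpsi = grad(phi), grad(psi)
# Vorticity of a Clebsch field: omega = grad(phi) x grad(psi).
omega = np.cross(gphi, gpsi, axis=0)

# Vortex lines are intersections of iso-surfaces of phi and psi
# (horizontal circles here), and omega should be divergence-free.
div = sum(np.gradient(omega[i], h)[i] for i in range(3))
max_div = np.abs(div[2:-2, 2:-2, 2:-2]).max()
```

Here iso-contouring φ at a constant and ψ at a constant would trace out circular vortex lines, which is exactly the "easily accessible" structure the encoding provides.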
Multi-Species Simulation of Porous Sand and Water Mixtures
Authors: Andre Pradhana Tampubolon, University of California, Los Angeles; Theodore Gast, University of California, Los Angeles; Gergely Klar, DreamWorks Animation; Chuyuan Fu, University of California, Los Angeles; Joseph Teran, Walt Disney Animation Studios, Disney Research, University of California, Los Angeles; Chenfanfu Jiang, University of California, Los Angeles; and Ken Museth, DreamWorks Animation
This multi-species model for simulation of gravity-driven landslides and debris flows with porous sand and water interactions uses the material point method and mixture theory to describe individual phases coupled through a momentum exchange term.
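The momentum-exchange coupling can be illustrated with a deliberately tiny toy (all masses, drag coefficient, and time step below are invented for illustration and have nothing to do with the paper's material point method): two phases fall under gravity while a drag term transfers momentum between them, driving their velocities together without creating or destroying total momentum.

```python
# Hypothetical toy: two phases ("sand", "water") coupled only through
# a momentum-exchange (drag) term. Values are illustrative assumptions.
m_s, m_w = 2.0, 1.0        # phase masses (kg)
k = 5.0                    # drag coefficient
g = -9.81                  # gravity (m/s^2)
v_s, v_w = 0.0, -3.0       # initial phase velocities (m/s)
dt, steps = 1e-3, 2000     # explicit Euler over 2 seconds

for _ in range(steps):
    drag = k * (v_w - v_s)             # force on sand from water
    v_s += dt * (g + drag / m_s)       # sand momentum update
    v_w += dt * (g - drag / m_w)       # water feels equal/opposite force

# The exchange term is internal: it relaxes the relative velocity to
# zero while total momentum changes only by the gravity impulse.
rel = abs(v_s - v_w)
```

Because the drag force on one phase is exactly the negative of the force on the other, the coupling redistributes momentum rather than adding any, which is the essential property of a momentum-exchange term.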
Real-Time User-Guided Image Colorization with Learned Deep Priors
Authors: Richard Zhang, University of California, Berkeley; Jun-Yan Zhu, University of California, Berkeley; Phillip Isola, University of California, Berkeley; Xinyang Geng, University of California, Berkeley; Angela S. Lin, University of California, Berkeley; Tianhe Yu, University of California, Berkeley; and Alexei A. Efros, University of California, Berkeley
This paper proposes a deep learning approach for user-guided image colorization. The system maps a grayscale image, along with sparse, local user “hints,” directly to an output colorization. The CNN propagates user edits by fusing low-level cues with high-level semantic information learned from large-scale data.
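A sketch of what "grayscale image plus sparse local hints" might look like as a network input (the exact channel layout here is an assumption for illustration, not the paper's specification): the grayscale channel is stacked with sparse color-hint channels and a mask marking where the user has clicked.

```python
import numpy as np

# Illustrative sketch: assembling a colorization network's input from a
# grayscale channel, sparse color "hints", and a hint mask. The
# 1+2+1 channel layout is an assumption made for this example.
H, W = 64, 64
gray = np.random.rand(1, H, W).astype(np.float32)   # lightness channel
hints = np.zeros((2, H, W), dtype=np.float32)       # sparse color hints
mask = np.zeros((1, H, W), dtype=np.float32)        # 1 where a hint exists

def add_hint(y, x, a, b):
    # User clicks pixel (y, x) and picks a color (a, b).
    hints[:, y, x] = (a, b)
    mask[0, y, x] = 1.0

add_hint(10, 12, 0.4, -0.2)
add_hint(40, 33, -0.1, 0.6)

# The network would consume all channels at once, letting it fuse the
# local hints with semantic cues from the grayscale content.
net_input = np.concatenate([gray, hints, mask], axis=0)  # (4, H, W)
```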
Dip Transform for 3D Shape Reconstruction
Authors: Kfir Aberman, Tel Aviv University, Advanced Innovation Center for Future Visual Entertainment; Oren Katzir, Tel Aviv University, Advanced Innovation Center for Future Visual Entertainment; Qiang Zhou, Shandong University; Zegang Luo, Shandong University; Andrei Sharf, Advanced Innovation Center for Future Visual Entertainment, Ben-Gurion University of the Negev; Chen Greif, The University of British Columbia; Baoquan Chen, Shandong University; and Daniel Cohen-Or, Tel-Aviv University
This paper presents a 3D acquisition and reconstruction method based on the Archimedes submerged-volume equality, employing fluid displacement as the shape sensor. Because the liquid requires no line of sight, it penetrates cavities and hidden parts, as well as transparent and glossy materials, thus bypassing the visibility and optical limitations of scanning devices.
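The underlying measurement can be mimicked on a voxelized shape (a toy single-orientation version; the paper reconstructs geometry from many such dips at different orientations): lowering the shape into liquid and recording displaced volume as a function of depth yields a monotone "dip curve" whose final value is the total volume.

```python
import numpy as np

# Toy illustration of the dip-transform measurement: a voxelized sphere
# is dipped along z and the displaced (submerged) volume is recorded
# per water level. Grid size and radius are illustrative assumptions.
n = 64
ax = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
voxel = (2.0 / (n - 1)) ** 3                 # volume of one voxel
solid = (X**2 + Y**2 + Z**2) <= 0.5**2       # sphere of radius 0.5

# Displacement curve: solid volume below each water level z.
curve = np.array([(solid & (Z <= z)).sum() * voxel for z in ax])
```

The curve never decreases as the dip deepens, and its endpoint approximates the sphere's volume (4/3)π(0.5)³; the paper's contribution is inverting many such curves back into the full 3D shape.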
Dynamics-Aware Numerical Coarsening for Fabrication Design
Authors: Desai Chen, Massachusetts Institute of Technology; David Levin, University of Toronto; Wojciech Matusik, Massachusetts Institute of Technology; and Danny Kaufman, Adobe Research
This paper presents a simulation-driven optimization framework that, for the first time, automates the design of highly dynamic mechanisms. The key contributions are a method for identifying fabricated material properties for efficient predictive simulation, a dynamics-aware coarsening technique for finite-element analysis, and a material-aware impact response model.
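The spirit of coarsening can be shown with a one-dimensional analogy (the paper coarsens full 3D finite-element models; everything below is an invented illustration): a fine chain of springs is replaced by a single coarse spring whose stiffness is fitted so that a simple static test, force in, end displacement out, matches the fine model.

```python
# Toy flavor of numerical coarsening: fit one coarse spring to
# reproduce a fine chain's static response. Stiffness values are
# illustrative assumptions, not from the paper.
k_fine = [120.0, 80.0, 200.0, 150.0]   # fine-model spring stiffnesses

def end_displacement(stiffnesses, force):
    # Springs in series carry the same force; displacements add.
    return sum(force / k for k in stiffnesses)

force = 10.0
target = end_displacement(k_fine, force)   # fine-model static test

# Identify the coarse parameter from the test: one spring that gives
# the same end displacement (equivalently, 1 / sum of compliances).
k_coarse = force / target

coarse_disp = end_displacement([k_coarse], force)
```

The coarse model is far cheaper to simulate yet reproduces the calibrated behavior, which is the trade the paper pursues for dynamic, fabricable mechanisms.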
Registration is now open for SIGGRAPH 2017. To view badge levels and pricing, visit the conference website. Early registration savings end June 9, 2017.
Source: ACM SIGGRAPH