
SIGGRAPH 2008: New Tech Demos

Eric Post reports back from SIGGRAPH's newly renamed New Tech Demos (formerly Emerging Technologies) to tell us where the cutting edge is headed.

Will digital catch catch on? The University of Electro-Communications hopes so. Courtesy of SIGGRAPH.

This year the Emerging Technologies section of SIGGRAPH was renamed New Tech Demos, and VFXWorld took a stroll through the exhibits in L.A. last week with the chair, Mk Haley, an Imagineer at Walt Disney. When asked about the most exciting aspect of the exhibit, Haley was quick to say, "Seeing the concepts go to production!" But not just any production: she wants to see products that are user oriented (ergonomic) as well as consumer oriented (quality).

Haley, who has been with ACM-SIGGRAPH since 1989 and holds SIGGRAPH Pioneer membership, is the perfect person to chair the New Tech Demos because of her interest in robotics and new technology.

This year, 50% of the exhibits were juried and 50% curated. One of the main themes was Cultural Heritage. Three zones or categories emerged: Tabletop Devices, Haptic Feedback and Displays & Robotics.

Haptic feedback devices are technologies that provide tactile feedback to a person through motion or pressure.

Maglev Haptics! Butterfly Haptics' New User-Interface Technology features a handle that the user grasps and moves through six degrees of freedom. The friction-free device provides force and torque feedback through the handle and lets the user supply position and orientation input to an application. Soon to be in production, the Carnegie Mellon University project, sponsored by the National Science Foundation, is expected to be marketed as affordable hardware by Butterfly Haptics L.L.C.

Another haptic technology is Programming Robots by Haptic Means, a way of issuing commands to a small robot by touching parts of it and pressing buttons on the machine: touch the left side of the robot's head, for example, and it turns its head and moves to the left.

One exhibited technology enables the sensation of being slashed with a sword or pierced with a stick; it's a tool to help actors with their facial and body expressions during fight scenes. This is definitely an application for game development to make games more realistic, and there may be medical applications as well.

Digital Sports Using 'Bouncing Star' Rubber Ball Comprising IR and Full-Color LEDs and Acceleration Sensor is a ball that withstands strong shocks and can be used for sports in low-light conditions. The ball changes color when it bounces, and a floor display of light patterns changes as the ball passes through the lit area. It may seem like a sophisticated way to play "catch" with your friends, but Haley pointed out, "It's really addicting."

For those party times when you're ready for a drink, there's the Landscape Bartender: Landscape Generation Using a Cocktail Analogy. This enjoyable device generates landscapes on a big screen by sensing the ingredients and weight of a cocktail that the bartender makes. Expect to see a nice sunrise island display when the bartender makes a Tequila Sunrise.

If you don't drink alcohol, that's fine: there's the Latte Art Machine, which creates images on the surface of your favorite latte by utilizing inkjet technology and manipulating the layer of bubbles on top of the drink.

Ground-penetrating radar digitizing ants, creepy? Courtesy of SIGGRAPH. © Tamagochan.

Atta Texana Leafcutting Ant Colony: A View Underground marks the first time ground-penetrating radar has been used to digitize a moving, living ant colony. The technology makes it possible to explore an anthill without destructive testing methods. Haley said, "In the past, ant hills were poured with concrete and this was destructive. Now ground radar allows the colony to be explored." The radar data is presented in 3D, and the viewer (you and me) is scaled down to ant size. Exploring anthills this way brings a very different perspective on how the colony lives and functions.

Haley said that robotics was "moving more towards service type helpers for people. This is popular in Japan." Her hope is to see more robotics that can perform services for the disabled and elderly, "like the Human Washing Machine." While not a SIGGRAPH display, the Human Washing Machine is a shower that does it all for you: just step inside and stand there, and it washes, soaps, rinses and dries you.

Facial movements are the toughest to master because of the many different muscles that move to make different expressions. Animatronics for Control of Countenance Muscles in the Face Using Moving Units takes on the challenge of a realistic robotic humanoid head and face that creates eye and facial movement by moving muscles in the face. Overlaid with silicone rubber, the unit currently has 26 actuators that can create 58 different facial expressions. Overlaid with a muscle texture, it has excellent medical and teaching potential.

The Confucius Computer: Transforming the Future Through Ancient Philosophy is an excellent tool that lets students ask the computer questions and receive an answer that Confucius would give if he were here in person.

Haley said she would also like to see an online survey to find out what the next big challenge will be. For now, making robotics lighter, faster and more sensitive is more a refining process than a big challenge. Where the service-oriented side of robotics takes the industry may be where the ideas, and with them the bigger challenges, lie. One area not yet well explored is the audio of the future: in what new ways will music and sound be brought to users? She would also like to see better stereographics.

What do the robotic developers want? Advances in artificial intelligence. The Furby had its day. Ugobe's Pleo, an interactive toy dinosaur that you can touch, pet and play with, is one of Haley's favorites.


Rome Reborn digitally recreates the city circa 320 A.D. All Rome Reborn images © The Board of Visitors of the University of Virginia 2008. 

However, there was no doubt that this year's most magnificent technology was the display of Rome Reborn, the brainchild of Bernard Frischer, director of IATH (the Institute for Advanced Technology in the Humanities). The goal is to digitize the entire city of Rome at the height of its civilization in 320 A.D. and make an interactive realtime environment. Though computers and technology are new to our era, Frischer said he is not the first to undertake the stunning idea of recreating the city.

Throughout time, there have been several attempts to recreate Rome. In the 15th century, Rome was reborn in words in historical documents. In the 16th century, 2D pictures of Rome began to surface. By the 18th century, 3D models of some of the buildings were produced. From about 1900 to 1943, Paul Bigot produced a 1:400 scale model of the entire city. In 1930, under Mussolini, the Italians were inspired to produce a 60-foot 1:250 scale model covering 90% of the city, and it is this model that is the source for Frischer's Rome Reborn.

In the 1970s, Frischer saw the model and was awed by it. He was a photographer and loved technology; he even did some excavating at the villa where the poet Horace lived, and his family was involved in cinema. "It was not continuous work. I would take time to do other things and pursue school and come back to this project as technology and opportunity allowed. It wasn't until August of 2008 that we were able to put the project on the Internet and still protect the IP rights of those involved."

The model, made of plaster of Paris, is in a museum in Rome and will soon be moved to a new location. While Frischer's company had insurance and good technology, the museum refused to allow any overhead devices, so the model had to be laser-imaged from the sides. "It had to be this way. Even with insurance, there's no way to replace the model. All the builders have passed on, and the methods of making it may be very difficult if not impossible to reproduce." Gabriele Guidi of the Politecnico di Milano collaborated with the University of Virginia's IATH on the project; his team's main job was to reverse-model the scan of the 60-foot city.

Like all models, the 60-foot city was designed to be viewed by an audience from specific locations. "So you can see where the detail is more involved in areas that are in view and less detailed where people won't be able to see it." This meant the photogrammetry team had work to do. The model has 7,000 buildings and 100,000 objects. The idea was to scan and digitize it, texture it, and make it interactive. Of course, that required collaboration with several companies and universities.

Pascal Mueller, Ph.D., is one of the authors of Procedural's new CityEngine modeler, a program that can create cities ten times faster than existing methods. A temple, for example, can be parameterized, making it easy to change its height, shape, size and so on. CityEngine draws on ideas from the scripted geometry of MEL and CAD tools.

Being able to build the 3D version with procedural methods was a great advantage. "We think now that there were no chimneys in Rome, so with the procedural it's easy: we just take them out." As expected, many houses, temples and other buildings were similar, so changing shape or size procedurally is the better way to go.
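The idea behind that "just take them out" remark can be sketched in a few lines. This is a minimal, hypothetical illustration of parameterized procedural modeling in the spirit of CityEngine, not its actual API: each building is a set of parameters plus rules, so one rule change (such as removing chimneys) regenerates every building, and changing a width respaces every column automatically.

```python
# A toy sketch of parameterized procedural buildings (hypothetical names;
# not CityEngine's real API).
from dataclasses import dataclass

@dataclass
class Temple:
    width: float               # footprint width in meters
    depth: float               # footprint depth in meters
    column_count: int          # columns along the front facade
    has_chimney: bool = False  # a rule that can be toggled city-wide

    def columns(self):
        """Evenly space column center points across the front facade."""
        spacing = self.width / (self.column_count - 1)
        return [(i * spacing, 0.0) for i in range(self.column_count)]

def build_city(temples, chimneys=False):
    # One rule change ("no chimneys in Rome") regenerates every building.
    return [Temple(w, d, n, has_chimney=chimneys) for (w, d, n) in temples]

city = build_city([(30.0, 60.0, 8), (20.0, 40.0, 6)])
print(len(city[0].columns()))  # prints: 8
```

Widen the first temple to 40 meters and its eight columns respace themselves; no hand-editing of geometry is needed, which is the appeal for a city of 7,000 buildings.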

The 60-foot model was used in many documentaries and even two movies: Quo Vadis with Peter Ustinov as Nero in the story of the burning of Rome, and the Oscar-winning Gladiator.

Frischer, who is quite persuasive in attracting other talent to the project, invited everyone to come see a remarkable website: "Making History Interactive" (www.caa2009.org). Coming to SIGGRAPH was also a challenge: it cost $10,000 to hire students, every bench, workstation and display had to be hand built, and travel and hotel added another $10,000.

There are other applications for this technology. Imagine government GIS systems, or the county assessor's office, moving to 3D imagery of cities for zoning and planning. Imagine being able, after a few years of changes, to show a time-lapse of a city's growth. Certainly, with Hollywood's love of city disaster movies, there is a bright future for this technology.

Rome Reborn 1.0 is loaded onto an IBM Cell server. Barry Minor of IBM explained the machine, which is built on Cell processors: 14 Cell blades, each with two processors, and each processor with nine cores (one core for the OS, the others vector cores). That gives the box 252 cores, 14 of which run Fedora 7, a 64-bit Linux system, leaving 238 available for Rome Reborn 1.0 and realtime crunching. Users can fly through, change illumination and shading, or view soft shadows with textures turned off. The box has displayed, and rendered in realtime, models of up to 320 million polygons (an interactive Boeing 777 model that is 25 gigabytes in size); Rome 1.0 has about 150 million polygons. IBM's software is called iRT, for interactive Ray Tracer.
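The core budget above is simple arithmetic, and it checks out; here it is spelled out, using only the numbers from the article:

```python
# Core budget of the Cell server described above (figures from the article).
blades = 14
processors_per_blade = 2
cores_per_processor = 9   # 1 OS-capable core + 8 vector cores per processor

total_cores = blades * processors_per_blade * cores_per_processor
os_cores = 14             # one core per blade runs the Linux OS
render_cores = total_cores - os_cores

print(total_cores, render_cores)  # prints: 252 238
```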

The Coliseum has never looked so good.

Rome Reborn 2.0 was put together by mental images on a Sun workstation with RealityServer, a scalable, server-based 3D web-services software platform that supports Autodesk, Softimage and all of the high-end CAD programs.

Frischer pointed out that the Coliseum, though procedural in its polygon build, is overlaid with a substantial number of streaming JPEGs, making the fly-around even more impressive for a realtime program.

One might wonder how the colors and textures of Rome can be recreated after so many years. Frischer said that there are many marble and textured remains that archaeology has been able to restore to give a reasonably good recreation of what Rome looked like in its day. "The Romans loved color," Frischer stressed, and he believes that Rome was indeed a very colorful city.

Among the ways the project is being applied is a Google map built on the same database, using a GPU renderer.

One application of Rome Reborn pairs a large walk-around map on the floor with a handheld screen that senses GPS and gyroscopic movement. When the user reaches a particular spot, the screen displays that spot's features, and turning to face a different direction turns the display. The device can also be pointed at any image of the old ruins to show a 3D view of what the building looked like.

Another way to view Rome is on a super-sized wide-screen display. Mersive is a company whose technology lets low-cost projectors be combined into large, high-resolution displays. Rome Reborn was shown using multiple high-definition projectors on a single screen; the software stitches the images and blends them without seams. Rome displayed in panorama projection is a mere 27 million pixels.
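The arithmetic behind such a blended projector wall is straightforward: each pair of neighboring projectors sacrifices an overlap strip for edge blending, so the effective resolution is less than the raw sum. The grid size and overlap below are illustrative guesses, not Mersive's actual configuration:

```python
# Effective resolution of a blended multi-projector wall (illustrative
# numbers; the article does not state the actual grid or overlap).
def wall_pixels(cols, rows, px_w, px_h, overlap):
    """Pixels in a cols x rows projector grid whose neighbors
    overlap by `overlap` pixels on each shared edge for blending."""
    width = cols * px_w - (cols - 1) * overlap
    height = rows * px_h - (rows - 1) * overlap
    return width * height

# e.g. a hypothetical 5 x 3 grid of 1080p projectors, 200-pixel blend zones
print(wall_pixels(5, 3, 1920, 1080, 200))  # prints: 24992000
```

A grid on roughly that scale lands in the same ballpark as the 27-million-pixel panorama mentioned above.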

Frischer said there is potential for virtual sims, but they are still expensive, so these may not take shape any time soon. Programs such as Second Life have been kicked around, and there is a possibility of one day creating avatars and/or social interaction so that peer reviewers can meet and be present from any location within Rome Reborn. Certainly students of history will enjoy this technology if they can walk the streets with their professors.

Eric Post is an attorney, journalist, computer graphic artist, helicopter pilot/mechanic and former pastor. Although he is a traditional artist, he enjoys modeling and landscape scenes in CG and uses various applications for medical illustrations at his office. From 2004-2006, Post was senior technical editor for the Renderosity magazine and e-zine. From 2006-2007, Post served as a staff writer for the Renderosity Front Page News, and edited various Renderosity publications.
