Focus on SIGGRAPH: Eyetronics

This year's SIGGRAPH keynote address was given by scientist, inventor and visionary Ray Kurzweil, who discussed "The Human-Machine Merger: Why We Will Spend Most of Our Time in Virtual Reality in the 21st Century." Sound far-fetched? Then take a look at Eyetronics' ShapeSnatcher demonstration.

This year's SIGGRAPH keynote address, "The Human-Machine Merger: Why We Will Spend Most of Our Time in Virtual Reality in the 21st Century" will be given by scientist, inventor and visionary Ray Kurzweil. According to Kurzweil, by the 2020s virtual reality will no longer be the crude depiction that we see today. Instead, it will be so realistic that it will be difficult to distinguish from our material world experience.

Indeed, neural and other implants have already been used to counteract impairments caused by Parkinson's disease, multiple sclerosis, deafness, blindness and paralysis. The leap from life-assisting to life-enhancing and recreational applications no longer seems so great. Evidence for Kurzweil's reasoning can be found in Eyetronics' ShapeSnatcher demonstration. The ability to render real-life objects or people as virtual 3D objects in an hour or less is surely an essential part of Kurzweil's highly evolved machine-system scenario.

Lynn. © Eyetronics.

The speed, economy and simplicity with which Eyetronics representative Nick Tesi captured the likeness of volunteer Lynn were truly amazing. Using only a slide projector, a consumer video camera with 3.5-inch floppy output (Tesi used the Sony DCR-TRV900 three-CCD digital video camcorder), the ShapeSnatcher calibration box and the ShapeSnatcher and ShapeMatcher software running on a PC, he rendered 640x480-resolution images of a virtual Lynn in very little time. Because the projector and camera were already set up, the photography took only 15 minutes. The software processing took 45 minutes, longer than usual, because Tesi paused to explain each step.

The fact that all of these components are available to consumers, and that the software is priced like consumer electronics, demonstrates that high-end graphics are within anyone's reach. Users need no special skills or training to produce adequate results. This technology is a vast improvement over the days when artists chose digital models by browsing a supplier's catalog of available objects. Those objects were generic and rarely exactly what the artist needed, so modifications had to be made. With ShapeSnatcher, if the object exists, you can capture it.

The process appeared to be quite simple. Tesi positioned Lynn before the projector and "snapped" her picture with the video camera, deliberating slightly over the focus of the special ShapeSnatcher grid slide projected onto her face. Once he was certain she was in focus and in full view, the rest of the process proceeded quickly. He took three pictures of Lynn -- frontal, left profile and right profile -- then a shot of the calibration box, which she held in front of her face. There were no special lighting requirements, since the illumination came from the slide projector itself.

Calibration. © Eyetronics.

After that, it was a short walk to the computer to download the 640x480 images from the floppy disk. The first step was to establish 3D spatial coordinates using the image of the calibration box and ShapeSnatcher. The calibration box works from known references, such as its planes being at 90-degree angles to each other and the measured distances between the circles printed on it. This information is used to calculate parameters such as camera-to-object distance and focal length. Once these parameters are established, object surface information can be deduced from the deformation of the grid projected on the object's surface.
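To make the geometry concrete: under a pinhole-camera model, two circle pairs of the same physical spacing lying on planes at a known depth offset (which the box's 90-degree construction provides) are enough to recover both quantities. The Python sketch below is purely illustrative -- it is not Eyetronics' calibration routine, and every measurement in it is hypothetical.

```python
def calibrate(p_near, p_far, spacing_mm, depth_offset_mm):
    """Recover camera-to-object distance and focal length (in pixels).

    p_near, p_far: pixel spacing of the same physical circle spacing as
    measured on the near plane and on a plane depth_offset_mm further away.
    """
    # Pinhole model: pixel_spacing = f * spacing / Z, therefore
    # p_near * Z = p_far * (Z + depth_offset)  =>  solve for Z, then f.
    z_near = p_far * depth_offset_mm / (p_near - p_far)
    f_px = p_near * z_near / spacing_mm
    return z_near, f_px

if __name__ == "__main__":
    # Hypothetical numbers: circles 40 mm apart appear 52 px apart on the
    # near plane and 44 px apart on a plane 100 mm deeper.
    z, f = calibrate(52.0, 44.0, 40.0, 100.0)
    print(f"camera-to-object distance ~{z:.0f} mm, focal length ~{f:.0f} px")
```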

Having established spatial references, the next step was to transform Lynn's frontal portrait into a mesh map. After some preliminary cleanup, Tesi was ready to apply the same treatment to the left and right profiles. Hair requires a special strategy, so if the artist is not prepared for that, it is best to remove data up to the hairline. Jagged edges should also be removed, because that makes merging the different mesh surfaces cleaner and easier.
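One common form of this cleanup -- dropping triangles whose edges stretch across unreliable regions such as hair or silhouette boundaries -- can be sketched in a few lines. This is a generic heuristic, not ShapeSnatcher's own cleanup tool, and the threshold is something an artist would tune by eye.

```python
import math

def clean_mesh(verts, tris, max_edge):
    """Drop triangles with any edge longer than max_edge.

    Overlong edges typically bridge unreliable data (hair, silhouettes),
    so removing them also trims the jagged fringe before merging.
    """
    def edge_ok(a, b):
        return math.dist(verts[a], verts[b]) <= max_edge

    return [t for t in tris
            if all(edge_ok(t[i], t[(i + 1) % 3]) for i in range(3))]
```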

Object capture is limited to the viewing angle of the camera lens, so getting a 180-degree view of a head necessitates three viewpoints. Tesi could have used a wider lens but, as any portrait photographer knows, the result is less than flattering and is not an accurate representation of the subject.

Blending the three wire-frame meshes with ShapeMatcher was surprisingly easy. In a few minutes, Tesi had the completed facial mesh map. Even the areas of overlap were seamless. The entire map was displayed as a conforming, rectangular grid, even in the highly detailed areas such as the mouth, nose and eyes.
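Once the three scans are registered to one another, the overlap can be closed by welding vertices that land within a small tolerance of each other. The bucket-based weld below is a standard generic technique and only a rough stand-in for whatever ShapeMatcher actually does; the tolerance is an assumed parameter in the scan's own units.

```python
def weld(verts, tol=0.5):
    """Merge vertices that fall into the same tol-sized grid cell.

    Returns the reduced vertex list plus a map from old indices to new;
    triangle lists are then re-indexed with
    tris = [[index_map[i] for i in t] for t in tris].
    """
    merged, index_map, buckets = [], [], {}
    for v in verts:
        key = tuple(round(c / tol) for c in v)  # quantize into a grid cell
        if key not in buckets:
            buckets[key] = len(merged)
            merged.append(v)
        index_map.append(buckets[key])
    return merged, index_map
```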

Render. © Eyetronics.

Once the texture map was applied to the geometry, the illusion was complete. The projected grid lines disappeared. Both mesh map and texture map were created from the same image, yet when rendered, there were no grid lines on the virtual face. Since the mesh map and the texture map are one image, it is easy to filter out the grid pattern on the surface and smooth the texture. (It is also possible to turn off the grid projection and take pictures of separate textures. This technique might give interesting animated effects.) The original reason for recording the geometry and texture map within the same image was to guarantee that there would be no "slippage" between the two.
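Since the grid's image position is, in principle, known from the recovered mesh, removing it from the texture is tractable. The sketch below uses a plain median filter as a stand-in: a window slightly wider than the grid stroke replaces line pixels with nearby skin values while leaving larger features largely intact. Eyetronics' actual filtering is proprietary, and the `scipy` dependency and line-width value here are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_grid(texture: np.ndarray, line_width_px: int = 3) -> np.ndarray:
    """Suppress thin projected grid lines in a single-channel texture.

    For an RGB texture, apply this per channel.
    """
    return median_filter(texture, size=2 * line_width_px + 1)
```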

Model data then could be output in several formats: OBJ (Alias|Wavefront), 3DS (3D Studio Max), DXF (AutoCAD), HRC (Softimage 3.7), IV (Open Inventor 2.1), LWO (LightWave 3D Object), WRL (VRML 2.0) and, of course, SS3D (ShapeSnatcher 1.0).
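The simplest of those formats, Wavefront OBJ, is plain text, which makes it easy to see what such an export contains. A minimal writer -- a generic sketch, not Eyetronics' exporter -- assuming 0-based triangle indices and optional per-vertex UVs (OBJ itself counts from 1) might look like this:

```python
def write_obj(path, verts, tris, uvs=None):
    """Write vertices, optional UVs and triangle faces as Wavefront OBJ."""
    with open(path, "w") as f:
        for x, y, z in verts:
            f.write(f"v {x} {y} {z}\n")
        for u, v in uvs or []:
            f.write(f"vt {u} {v}\n")
        for a, b, c in tris:
            if uvs:  # v/vt index pairs; OBJ indices start at 1
                f.write(f"f {a+1}/{a+1} {b+1}/{b+1} {c+1}/{c+1}\n")
            else:
                f.write(f"f {a+1} {b+1} {c+1}\n")
```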

The main objective accomplished, Tesi was free to experiment with various mesh map resolutions using ShapeReducer. Even at low resolutions, the images maintained detail in the critical mouth, nose and eye areas. In other 3D packages, this kind of overall model reduction would have meant the loss of significant detail in convoluted areas. Here, however, the mesh remained "adaptive": there were more subdivisions in convoluted areas, but no more than needed. The ratio of subdivisions between areas of greater and lesser detail remained proportional.

Often there are greater extremes in mesh subdivision between more and less detailed areas, sometimes so much so that the mesh has to be "balanced" in a painstaking manual process. If the object were to be animated or morphed, this kind of subdivision disparity -- extremely tiny and extremely large grid cells on the same surface -- would result in holes or wrinkles. ShapeReducer sidesteps that problem with an intelligent scheme for deciding where geometric resolution can safely be decreased.
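A toy version of such detail-aware reduction collapses only edges whose surrounding triangles are nearly coplanar, so flat cheeks shed vertices while the nose, mouth and eyes keep theirs. The greedy, quadratic-time sketch below illustrates the idea only; ShapeReducer's actual algorithm is proprietary, and the cost threshold here is an arbitrary assumption.

```python
import math

def tri_normal(verts, tri):
    """Unit normal of a triangle given as three vertex indices."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (verts[i] for i in tri)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    length = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return nx / length, ny / length, nz / length

def edge_cost(verts, tris, a, b):
    """0 when every triangle around the edge is coplanar, higher otherwise."""
    normals = [tri_normal(verts, t) for t in tris if a in t or b in t]
    if len(normals) < 2:
        return 1.0  # boundary edge: treat as detail, never collapse
    return 1.0 - min(sum(p * q for p, q in zip(normals[i], normals[j]))
                     for i in range(len(normals))
                     for j in range(i + 1, len(normals)))

def decimate(verts, tris, target_tris, max_cost=0.05):
    """Greedily collapse the flattest edges until target_tris remain."""
    tris = [list(t) for t in tris]
    while len(tris) > target_tris:
        edges = {tuple(sorted((t[i], t[(i + 1) % 3])))
                 for t in tris for i in range(3)}
        a, b = min(edges, key=lambda e: edge_cost(verts, tris, *e))
        if edge_cost(verts, tris, a, b) > max_cost:
            break  # only near-flat regions get simplified
        for t in tris:  # half-edge collapse: b folds into a
            for i in range(3):
                if t[i] == b:
                    t[i] = a
        tris = [t for t in tris if len(set(t)) == 3]  # drop degenerates
    return tris
```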

This makes it a great tool for creating virtual objects for games, multimedia, Internet entertainment, e-commerce and information systems. Fast-loading, low-resolution models that retain their character are achievable.

The implications are widespread. Archiving would take less digital storage space: museum artifacts, industrial and manufacturing prototypes and their revisions, and architectural elements could all be stored compactly. The camera is the only limitation; the better the camera, and hence the image quality, the better the model. So high-quality, high-resolution models for movies, science and medicine are also possible.

Eyetronics recently expanded into the United States, opening an American office represented by Nick Tesi. For more information on Eyetronics products and services, visit the company's Web site, or contact Tesi by phone at (800) 205-9808, or by e-mail at nick.tesi@eyetronics.com. If outside the U.S., call +32-16-29-83-43, or e-mail info@eyetronics.com.

Juniko Moody is a regular contributor to VFXPro.com.