Continuing our excerpts from the Inspired 3D series, Tom Capizzi delves into modeling resources.
Read Part 1 of Inspired 3D: Modeling Resources.
3D Scan Data
The main difference between digitizing a model and using 3D scan data is that the modeler is usually the one operating the digitizing stylus, whereas most 3D scanning devices and software are too specialized and complex for modelers to operate themselves. This is not to say that modelers cannot learn to operate this equipment if they want to; it is only that the equipment is not as automatic as the manufacturers would like customers to believe. The best scans are done by people who have been using the equipment for many years. When it comes to 3D scan data, the modeler must understand what it is used for and how it is obtained. It is not important, however, that the modeler operate the 3D scan system personally.
A 3D scanner samples points from the surface of the 3D object. The output that can be expected from a 3D scanner normally consists of a polygonal mesh that is the actual size and shape of the object being scanned (Figure 12). In recent years, scanners have become more sophisticated, expanding their capability to capture color and texture data as well as 3D geometry. The properties of an object that can be captured during scanning include shape, size, color and texture.
3D scanning is a term that describes several different technologies. Recent developments in 3D data acquisition from physical objects are more like photography than actual scanning. However, because the data produced by these processes is similar to the data gathered by traditional 3D scanners, these processes are included in this section as well. The two primary scanning technologies used for computer modeling are laser scanning and structured light scanning. Other methods of acquiring 3D data exist, but if the modeler understands the basics of these two main types, the other techniques will be easier to grasp if they are needed for a project.
Laser scanners use a laser, a device that emits a single beam of intensely directed light. The laser is mounted in a moving head and gathers 3D information by bouncing its light back into one or more cameras mounted in the same head. A laser diode and stripe generator can be used to project laser light onto the object (as a single point, a line or multiple lines). The line is viewed at an angle by the cameras, so that height variations in the object appear as changes in the shape of the line. The resulting captured image of the stripe is a profile that contains the shape of the object.
Laser scanners use the principle of triangulation to gather 3D data. A laser line or pattern is projected onto a part. An optical camera offset from the laser source views the laser light on the object being scanned. The 3D data is obtained by calculating the differences between the points measured closer to the laser and the points farther away from the laser.
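To make the triangulation concrete, here is a minimal sketch in Python. The function name, the baseline value and the angle conventions are illustrative assumptions, not taken from any particular scanner:

```python
import math

def triangulate(baseline, laser_angle, camera_angle):
    """Return (x, z): position of the laser spot relative to the laser source.

    baseline      -- distance between laser emitter and camera
    laser_angle   -- angle of the laser beam above the baseline, in degrees
    camera_angle  -- angle at which the camera sees the spot, in degrees
    """
    a = math.radians(laser_angle)
    c = math.radians(camera_angle)
    # Depth follows from the two right triangles sharing the height z:
    # baseline = z*cot(a) + z*cot(c)  =>  z = baseline / (cot(a) + cot(c))
    z = baseline / (1 / math.tan(a) + 1 / math.tan(c))
    x = z / math.tan(a)  # offset along the baseline from the laser source
    return x, z

# A point seen at 45 degrees by both laser and camera 100 units apart
# sits halfway along the baseline, 50 units away.
print(triangulate(100.0, 45.0, 45.0))  # (50.0, 50.0) up to float rounding
```

A point measured closer to the scanner produces a larger angle difference than one farther away, which is exactly the height variation the camera reads from the laser stripe.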
Advantages of laser scanning include fast 3D data acquisition and no contact with the object. The physical prop does not need to have a surface hard enough to be digitized manually, so soft objects can easily be scanned. Because there is no contact, fragile models can be scanned as well. Laser scanners quickly sample a large number of points. Only the parts of the object that are in the line of sight of the scanner can be measured. Multiple scanner positions are required to cover the complete object. Sometimes laser scanners work in cooperation with part turntables that enable the part to be precisely moved instead of the scanner.
Because an optical device is making the final measurements, laser systems have a hard time on shiny objects that reflect a lot of light and dark objects that absorb a lot of light. Problem parts can be painted white or sprayed with a white powder to make them more visible to the laser.
Some laser scanners have large scanning heads that are rotated around an object, capturing all the surface data, geometry and textures in one pass. Some laser scanners utilize a turntable or similar device that moves the model in front of the digitizing head of the laser scanner. In the case of the turntable, the laser still needs to move vertically to capture the 3D sections of the object being rotated in front of the digitizing head. There are other configurations as well, but what they all have in common is that the laser is moved in some way relative to the object to capture 3D data.
A modeler who is required to use data received from a laser scanner, or from any other scanning facility, must pay careful attention to certain things in order to get satisfactory data. The modeler must clearly define the expectations to the person operating the scanning equipment. The modeler needs to ensure that the data has no gaps or visible seams; that criterion needs to be clearly spelled out to the vendor. If the vendor needs to deliver texture maps as well as 3D data, that should be spelled out as well. The best way to define expectations is to put them in writing.
A studio on a limited budget may have some latitude when it comes to locating a scanning vendor. Normally, however, a studio should use a vendor that has a good reputation and has been used by many other clients. Although the equipment manufacturers that sell the scanners are quick to tell customers that it is the equipment that creates the quality data, the truth is that the people operating the equipment are the ones creating the quality data. This is identical to the situation visual artists find themselves in every day. Every time a new tool is introduced to the digital production market, there is a claim that this new tool will automate the job of the digital artist. The fact is, the artist, not the equipment or the software, is the one creating the work. For this reason, the studio contracting out the scanning work needs to use the best people for the job, not necessarily the best equipment.
A modeler who is responsible for the collection and final use of 3D scan data represents the studio as a buyer. As a buyer, the modeler will be responsible for ensuring that the data is usable, as well as the following tasks:
- Inspecting the data received to ensure that there are no holes in it. Check carefully for any gaps or seams in the surface of the data. Figure 13 shows problems at the fingers and the sides of the head.
- Setting the display of the software being used to view the data so that polygons are drawn single sided. The modeler should make sure that the polygons across the entire surface of the model all face the same direction.
- Examining the wireframe (nonshaded) view of the data to see if there is floating data resting inside the exterior surface.
- Examining the surface data to ensure that, when merged scans are used in a single data set, two or more layers of data are not resting on top of each other. This causes many problems when the modeler is trying to use the data.
After the data is received and approved, it is usually too late to fix the kinds of things mentioned in the preceding list. This is why it is crucial to make sure these things are checked immediately after the data is collected. Figure 14 shows scan data after it has been cleaned up.
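The hole and reversed-face checks in the list above can be sketched in a few lines of Python. This is an illustrative edge-counting sketch, not any vendor's actual inspection tool; the function name and return values are assumptions:

```python
from collections import Counter

def mesh_report(triangles):
    """Count boundary and inconsistent edges in a triangle list.

    triangles -- list of (i, j, k) vertex-index triples.
    In a clean, watertight mesh with consistent winding, every edge is
    shared by exactly two triangles, once in each direction.
    """
    directed = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            directed[e] += 1

    boundary = inconsistent = 0
    seen = set()
    for (a, b), n in directed.items():
        if (a, b) in seen:
            continue
        seen.add((a, b)); seen.add((b, a))
        opposite = directed.get((b, a), 0)
        if n + opposite == 1:
            boundary += 1          # edge used once: hole in the surface
        elif n > 1 or opposite > 1:
            inconsistent += 1      # same direction twice: flipped face or overlap
    return boundary, inconsistent
```

A single triangle reports three boundary edges; a quad built from two consistently wound triangles reports four boundary edges and no inconsistencies, while flipping one of the two triangles immediately registers as an inconsistent edge.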
A few companies are known for producing the best laser scanners. Following is a discussion of these well-known and respected manufacturers.
Cyberware is one of the oldest and most widely used laser scanner manufacturers on the market today. Cyberware has equipment located at scanning centers in many large cities and makes products that address specific 3D scanning applications. Cyberware makes several high-resolution scanners for capturing 3D data and texture data from models and sculptures. These scanners are not considered specialized scanners, but they are considered some of the highest quality scanners for general use. These high-resolution scanners operate by using a turntable to move models in front of the digitizing head. The data that is acquired from these scanners has to be acquired in multiple passes. The major reason for this is that the models will generally have an arbitrary shape that has occlusions (where one part of the model interferes with the scanner's ability to see the part of the model beneath it) and surfaces that cannot be seen unless the model is rotated to another angle.
Data that is acquired from 3D scanners is commonly sampled in multiple passes. The problem usually lies in assembling the data into one continuous polygonal mesh after the multiple passes have been completed. Cyberware was one of the first companies to develop software that makes this task simple and easy to perform.
Two other products that Cyberware is famous for are the head scanner and full body scanner (Figure 15). These scanners are specialized for sampling 3D data from the human body, and they generate color texture information as well as 3D models of the people being scanned (Figure 16). The color information can be stored as texture maps in the traditional sense, or the color information can be stored per vertex in the database. In the latter case, thousands of vertices packed very close together give the impression of continuous color across the 3D mesh, but there is no map associated with the data.
The problem with these scanners is that the digitizing head rotates around the body on the vertical axis. This captures roughly 90 percent of the information required to digitally re-create the person being scanned. The remaining 10 percent, however, is located on the top surfaces of the human body, such as the tops of the shoulders and head. Whenever 3D data taken directly from these scanners is viewed, it is obvious that the tops of the heads have to be completely redone before the data can be used as a digital model.
Also, because the scanners are so specialized for the purpose of gathering head and body data, they are almost useless for anything else. This is why most places that provide scanning services using Cyberware scanners need to have more than one scanner available to the customer.
Cyberware has pioneered many technologies related to 3D scanning. The original program that was developed by this company to create NURBS surfaces from scanned data is called CySurf (originally known as Ntest). This program seems simple when compared to some of the other programs available for surfacing today, but this functionality of wrapping NURBS surfaces onto polygonal surfaces was the first step toward making scan data truly usable for a film and video production pipeline.
Less expensive laser scanning options are available. These devices offer flexibility and lower cost, but, as stated previously, they are instruments that require a certain amount of skill and labor to get usable results. These machines are not office copiers that automatically spit out 3D data quickly and easily. That said, this is not rocket science: any modeler who is motivated and interested can learn to do laser scanning.
Polhemus makes a unique scanner called FastSCAN. This product can deliver quick and fairly high quality scans when the user is experienced and has some basic knowledge of the scanner's settings. The software used in this system has the remarkable capability to merge multiple passes accurately and easily. This scanner also has the distinction of being very inexpensive and easy to use compared to the Cyberware scanners. The scanner itself is a handheld wand that is passed over the object being scanned. Multiple passes randomly gathered by hand are assembled and put into accurate world-space using the magnetic calibration system (Figure 17).
This scanner calibrates itself using the Polhemus magnetic tracking system. The problem with the magnetic tracking base is that metal objects adversely affect any magnetic system, whether it is used for motion capture, digitizing or scanning. This wouldn't be such a big deal except that many objects have metal in them. When a design needs to be executed as a full physical sculpture prior to capturing the form for 3D modeling, the sculpture has to be specially designed without any metal armatures. In some cases, even the building that houses the system adversely affects the results of the scanner.
The Minolta VIVID line of scanners are true laser scanning apparatuses. The operator runs the device from a PC connected to the scanner. The object being scanned is placed in front of the scanner at a fixed distance, and the scanner is then activated. In less than one second, the scanner passes a laser over the portion of the object's surface that is visible to it. Because this scanner is basically a box that sits in front of the object and cannot see the entire model at once, the operator needs to rotate the model to scan additional views. As with most scanners that require multiple passes, the passes need to be assembled into one continuous, clean, watertight mesh. This takes time and skill and sometimes requires an additional software package to clean up the geometry. Unlike the Polhemus FastSCAN, this scanner also captures texture mapping information as well as 3D geometry.
A variation of laser scanning that is used almost exclusively for very large scanning applications is lidar (light detection and ranging). This type of scanner is also called a time-of-flight laser scanner. This technology was originally developed for scientific and military applications (Figure 18). It has become very useful for digitizing large structures because of its inherent ability to sample a large number of points within a short amount of time.
Lidar has been instrumental in addressing several problems related to integrating live-action photography with 3D digital imagery. Normally when the visual effects studio receives background plates for digital effects, some guesswork is involved when creating digital environments based on the background photography. Although it is usually not necessary to re-create the environment so it can be rendered photorealistically, it is normally required to build objects that will be used to cast shadows on and create objects for tracking purposes. If the lidar data is acquired correctly, it addresses many of these problems.
The person using the data, however, needs to pay careful attention to criteria associated with the acquisition of lidar data. Before approving any data that is received from a lidar capture session, the modeler responsible for using the data should make sure that the data meets the following criteria:
1. With each 3D data set, there should be an object in the scene of a known size. For example, if the modeler is receiving data that describes a large mountain, it would be impossible to determine the size of that mountain unless there was an object in the scene, such as a box that is five feet in all dimensions, to give scale to the entire scene. Without an object that establishes scale, a lot of guessing is involved when trying to use the data. Figure 19 shows a lidar scan of a hillside with small green balls placed in strategic areas toward the top of the hill. These balls are a specific size, so the modeler knows how large the scene is, and the balls are also used to line up additional scans of the same scene.
2. The data that is received should be placed relative to a true horizontal plane. Although the Cartesian coordinate system cannot be exactly duplicated in the real world, there should be some attempt to ensure that the data at least rests in a horizontal position.
3. Whenever possible, the objects that are seen in the background plate should also be visible in the scanned environment. This includes any props, buildings and other stationary objects that can be seen in the background images. Figures 20 and 21 show how recognizable objects in a scene can help verify a database for scale, position and accuracy. Objects with obscure shapes, such as rocks and plants, present a greater challenge when checking for scale, position and accuracy.
4. If possible, the lidar data should also include 3D data of the camera that was used to shoot the background plates. This can be useful in determining the actual distance between the camera and some of the identifiable objects in the scene.
5. The data received should be clean. Many times, the lidar data set received by the modeler comprises many individual snapshots of data taken at the site of principal photography. When these individual small data sets are assembled into one large data set, many opportunities arise for holes and reversed faces to be introduced into the final data. Upon first viewing the data, the modeler should make sure all the faces are single sided and inspect the data carefully for holes and reversed faces.
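The scale issue in item 1 comes down to a simple ratio: measure the reference object in scan units, divide its known size by that measurement, and scale every point. A hypothetical sketch in Python (the function names are illustrative):

```python
def scale_factor(measured_size, true_size):
    """Uniform scale that maps scan units to real-world units.

    measured_size -- size of the reference object as it appears in the scan
    true_size     -- its known physical size (e.g. a 5-foot calibration box)
    """
    return true_size / measured_size

def rescale(points, factor):
    """Apply a uniform scale to a list of (x, y, z) points."""
    return [(x * factor, y * factor, z * factor) for x, y, z in points]

# A calibration ball known to be 1.0 ft across measures 0.25 units in the
# scan, so every coordinate is multiplied by 4 to reach real-world scale.
f = scale_factor(0.25, 1.0)
print(rescale([(1.0, 2.0, 0.5)], f))  # [(4.0, 8.0, 2.0)]
```

Without a known-size object in the scene, there is no way to derive this factor, which is why the calibration balls in Figure 19 matter so much.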
Structured Light Scanners
A new technology, structured light scanning, provides a potentially low-cost alternative to higher-priced laser scanning hardware. The tools required for acquiring data with structured light can be as simple and inexpensive as a digital camera and a slide projector, or as elaborate and expensive as a structured light digitizing head and a complicated scanning rig.
What differentiates structured light scanning from laser scanning is that the light read by the software interpreting the changes in the 3D surface is not laser light but standard white light, readily available from a slide projector lamp or a similar source, such as a halogen lamp.
3D data from a structured light scanner can be acquired in more than one way. One way is to use multiple cameras that interpret the data using stereovision. Another way is to utilize a single camera and software to extract the 3D information from the pattern of structured light on the physical object.
Structured light scanners typically project a predefined and calibrated light pattern onto the 3D surface of the object to be modeled. The pattern of light is distorted by the variation of the object's surface. The software in the structured light scanning system triangulates the differences in the light pattern on the distorted surface to calculate 3D geometry.
Like some laser scanners, structured light scanners capture complete surfaces from a particular point of view. Multiple passes are made to acquire the data from multiple points of view (Figure 22). These multiple passes are combined into a single, seamless data set using merging software that is usually included with the purchase of the scanning package (Figure 23).
NURBS Surfacing of Polygonal Surfaces
Applying NURBS surfaces to scanned polygonal data sets has become very sophisticated in the last few years. Several software packages can accomplish this task. Some are more successful than others, and some are more expensive than others. These modeling programs take raw scan data as input. The final output is a series of NURBS patches that have parametric alignment and geometric tangency.
Because a lot of cleanup is involved when working with scan data, programs available for creating NURBS surfaces from this data generally include tools that enable the user to clean up the data prior to creating the surfaces. This type of software works by starting with a dense polygonal mesh and using specialized algorithms to fit a NURBS surface to the details of the scanned data. A modeler witnessing this process for the first time will see an impressive sight. The form and detail of the scan data are transferred to the surface as if by magic. When used correctly, this type of software can create amazingly accurate representations of real-life objects using NURBS surfaces that can be used in animation, manufacturing and design. When the model is complete, it is a fully realized NURBS patch model that has been created in a fraction of the time that it would take to build a patch model by hand.
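The fitting step can be illustrated on a small scale. Production packages fit networks of trimmed NURBS patches with specialized algorithms; the sketch below shows only the underlying least-squares idea, using a single cubic Bezier curve (the simplest relative of a NURBS surface) fitted to a run of dense sample points. All names and the uniform parameterization are illustrative assumptions:

```python
def fit_cubic_bezier(points):
    """Least-squares fit of one cubic Bezier segment to a dense point run.

    points -- list of (x, y) samples along a scanned profile, in order.
    End control points are pinned to the first and last samples; the two
    interior control points are solved from the normal equations.
    """
    n = len(points)
    ts = [i / (n - 1) for i in range(n)]  # uniform parameterization
    p0, p3 = points[0], points[-1]

    c11 = c12 = c22 = 0.0
    rx1 = ry1 = rx2 = ry2 = 0.0
    for (x, y), t in zip(points, ts):
        a1 = 3 * (1 - t) ** 2 * t   # Bernstein basis for control point 1
        a2 = 3 * (1 - t) * t ** 2   # Bernstein basis for control point 2
        # residual after removing the fixed endpoint terms
        bx = x - ((1 - t) ** 3 * p0[0] + t ** 3 * p3[0])
        by = y - ((1 - t) ** 3 * p0[1] + t ** 3 * p3[1])
        c11 += a1 * a1; c12 += a1 * a2; c22 += a2 * a2
        rx1 += a1 * bx; ry1 += a1 * by
        rx2 += a2 * bx; ry2 += a2 * by

    det = c11 * c22 - c12 * c12
    p1 = ((c22 * rx1 - c12 * rx2) / det, (c22 * ry1 - c12 * ry2) / det)
    p2 = ((c11 * rx2 - c12 * rx1) / det, (c11 * ry2 - c12 * ry1) / det)
    return p0, p1, p2, p3
```

The free control points are pulled toward the dense samples, which is the same principle the surfacing packages apply patch by patch in two parameter directions.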
A few of the software packages that apply NURBS surfaces to scanned polygonal data sets are Raindrop Geomagic, Paraform, RapidForm and Cyberware CySlice. Some of these packages have advantages over the others. Paraform seems to have been able to create scan data cleanup tools, curve building tools, and surfacing tools that are very powerful and easy to use. Some of the work produced using Paraform is shown in Figures 24 and 25. Paraform has been used in many films, including Hollow Man, End of Days and Harry Potter and the Sorcerer's Stone.
The process of applying NURBS surfaces onto dense scan data is usually done in the following steps:
1. The scan data is imported into the system. The file format used for importing the data depends on which software is being used for surfacing and which file format was exported from the scanner.
2. The scan data is cleaned up. During the cleaning process, all the holes in the scan data are sealed, and the surfaces are made free from discontinuities (noise) and irregularities caused by the scanning process.
3. The boundaries for each surface are laid out. Some software has defined a workflow for the surfacing operation so that only one surface can be made at a time. Some products, however, enable the user to lay out the entire model before creating any surfaces. Creating all the boundaries for all the surfaces can be advantageous. When this type of model is built, the success of the model depends largely on how the surfaces are laid out. A lot of time is wasted if the modeler has to reorganize the surfaces after the model is built. Paraform, one of the packages mentioned earlier, is built so that the patches can be laid out prior to surfacing. Paraform also has one of the most impressive toolkits on the market for laying out the exact parameterization of each surface and determining tangency conditions with accuracy and control.
4. The surfaces themselves are built. Sometimes, as is the case with Paraform, an additional step is required before surfacing, which requires the construction of a spring. A spring is a mesh that stretches between the curves that identify the surface boundaries. This mesh determines the density and accuracy of the surface that will be built on top of the spring. Although this step takes additional time, the benefits are paid back to the modeler by providing additional control. During the process of surfacing these boundary curves, there will be some areas where the curves, for some reason or another, are not exactly right for creating a surface. These conditions are places where the curves cannot meet exactly or have small breaks. At this point, the modeler must replace the curve or edit the curve to get the surface to build. Again, Paraform has excellent curve-editing capabilities.
Wrapping NURBS surfaces onto dense polymeshes holds the promise of creating complex patch models from objects that already exist. The only limitation to this technology, it seems, is that it will not create objects that do not already exist. The models that are imported into these packages do not necessarily have to be scan data, however. Many polygonal objects can be quickly and easily built and imported into these packages. By building a rough polygonal model and using a smoothing function to increase its resolution, the modeler can quickly create a complex patch model from a low-resolution polygonal model. Using this methodology, modelers can create anything they desire with this software.
FreeForm Modeling and Surfacing
Modelers often throw up their hands and say, "If I could only touch the data inside the computer; if I could only use my hands." The SensAble Technologies FreeForm modeling system addresses this concern. The system uses a specialized piece of hardware with a haptic interface. Haptics is the aspect of FreeForm's user interface design that uses force feedback to create the impression that the modeler is sculpting a model inside the computer with his or her hands.
Technology like this would not have been possible a few years ago. Faster graphics cards and faster processors have allowed this software and hardware combination to create amazing 3D forms in the computer.
The data created by this device is similar to scan data. When exported to other packages, it consists of thousands of small triangles. There are a couple of notable distinctions, however. One is that the exported data is very clean. Another is that the data is positioned exactly on the construction coordinates used for the digital modeling process.
The reason the data is extremely clean is that what is exported is a polygonal representation of a theoretical volume within the FreeForm software. The database calculated within FreeForm uses voxels to determine the volume of the object. Voxels are theoretical bits of 3D information that make up a 3D scene in much the same way that pixels fill up the 2D screen of the computer. These voxels contain information about the 3D sculpture in the FreeForm program, including the coarseness of the object, the shape of the object and the color of the object. (Coarseness determines how hard or soft the virtual clay is.) Because voxels are true 3D entities, sculptures created in FreeForm can be painted all the way through the object. These colors cannot be exported as texture maps, but they can be exported as colored point data on the surface of the object.
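The voxel idea itself is simple to sketch. The code below is a hypothetical illustration rather than anything from FreeForm: it maps a point cloud into a sparse voxel grid, and shrinking the voxel size is the digital analog of hardening the clay so it can hold finer detail:

```python
def voxelize(points, voxel_size):
    """Map a point cloud into a sparse set of occupied voxel cells.

    points     -- iterable of (x, y, z) coordinates
    voxel_size -- edge length of each cubic voxel; smaller voxels
                  resolve finer detail
    """
    occupied = set()
    for x, y, z in points:
        cell = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        occupied.add(cell)
    return occupied

# Two nearby points land in the same coarse voxel but in separate fine ones.
pts = [(0.1, 0.1, 0.1), (0.4, 0.4, 0.4)]
print(len(voxelize(pts, 1.0)))   # 1 voxel at coarse resolution
print(len(voxelize(pts, 0.25)))  # 2 voxels at fine resolution
```

A real voxel sculpting system stores per-voxel attributes (shape, color, hardness) rather than a bare occupancy flag, but the spatial bookkeeping is the same.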
The coarseness of the sculpture in FreeForm is a very important aspect of using this program. Coarseness values correspond to the size of the voxels. Large voxels have a large coarseness value. If the virtual clay that is being sculpted has a very large coarseness value, the clay is very soft and can be easily sculpted. Soft clay, however, will not hold detail very well. As the sculpting continues, the hardness of the clay must be increased to get additional detail. Hardening the clay means reducing the size of the voxels. When the sculpture is finished, the clay will normally not be very coarse. The model shown in Figure 27 was built using the following steps:
1. Use an orthographic drawing or photograph to create a background image or image plane in FreeForm.
2. Make a large solid shape with a large coarseness size.
3. Make a curve on the front face of the shape that is used to cut the profile. Cut the profile with this curve.
4. Make a curve on the top to cut the top profile and cut the top profile with that curve.
5. Round off the edges, creating the basic 3D form of the sculpture.
6. Decrease the coarseness one step and add detail.
7. Decrease the coarseness another step and add more detail. Continue adding detail until the sculpture is complete (Figure 26).
8. Create surfaces on the model using curves and surfaces available in the surfacing toolbox.
9. Export the NURBS surfaces into the animation program.
The FreeForm modeling process differs from other methods of creating 3D models because it can create models of arbitrary complexity (Figure 27), and, unlike most other modeling packages, texture and graphics can be created directly in 3D.
Another benefit to the system is that it has a built-in NURBS surfacing program, making it a fully functional stand-alone model creation package. The NURBS surfacing package is very intuitive and reliable and the NURBS surfaces that are exported from FreeForm are very clean. The NURBS surfacing software is not as flexible as the Paraform software, but it provides about 85% of the functionality and a fantastic sculpting package as well.
Drawings, digitizing, scanning and sculpting have not replaced modeling yet. They remain tools in the modeler's toolbox. So much about the model is tied directly to animation and rendering that the modeler must base decisions about how to make the model on information specific to the production pipeline, not on restrictions imposed by the data acquisition process. One day, these processes may be the way all modeling is done. In the meantime, knowing about these technologies and how they work can save the modeler hours of time. The finished product, at least for now, remains the work created by the modeler.
To learn more about character modeling and other topics of interest to animators, check out Inspired 3D Modeling and Texture Mapping by Tom Capizzi; series edited by Kyle Clark and Michael Ford: Premier Press, 2002. 266 pages with illustrations. ISBN 1-931841-49-7 ($59.99). Read more about all four titles in the Inspired series and check back to VFXWorld frequently to read new excerpts.
Tom Capizzi is a technical director at Rhythm & Hues Studios. He has teaching experience at such respected schools as Center for Creative Studies in Detroit, Academy of Art in San Francisco and Art Center College of Design in Pasadena. He has been in film production in L.A. as a modeling and lighting technical director on many feature productions, including Dr. Doolittle 2, The Flintstones: Viva Rock Vegas, Stuart Little, Mystery Men, Babe 2: Pig in the City and Mouse Hunt.
Series editor Kyle Clark is a lead animator at Microsoft's Digital Anvil Studios and co-founder of Animation Foundation. He majored in film, video and computer animation at USC and has since worked on a number of feature, commercial and game projects. He has also taught at various schools, including San Francisco Academy of Art College, San Francisco State University, UCLA School of Design and Texas A&M University.
Michael Ford, series editor, is a senior technical animator at Sony Pictures Imageworks and co-founder of Animation Foundation. A graduate of UCLA's School of Design, he has since worked on numerous feature and commercial projects at ILM, Centropolis FX and Digital Magic. He has lectured at the UCLA School of Design, USC, DeAnza College and San Francisco Academy of Art College.