Animation World Magazine, Issue 2.5, August 1997


Animation and Visualization
of Space Mission Data

by William B. Green and Eric M. DeJong
California Institute of Technology, Jet Propulsion Laboratory

Editor's Note: In recent weeks we have all been captivated by the images returning to earth from Mars. Here William B. Green and Eric M. DeJong describe how images are created from thousands of miles away. To learn more about JPL and NASA's space missions you can visit http://www.jpl.nasa.gov.

Figure 1.

Caltech's Jet Propulsion Laboratory (JPL) has been processing digital image data returned from remote sensing instruments on spacecraft since the Mariner 4 spacecraft flew by Mars in 1965. Many of the digital image processing techniques now routinely used in desktop publishing and computer graphics systems were designed originally to process and enhance images returned from space. JPL, other NASA centers, universities, and the Department of Defense made significant contributions to the development of this technology.

In the past fifteen years, sophisticated processing capabilities have been developed to support scientific analysis of remotely sensed imagery. The use of three-dimensional perspective rendering, achieved by merging elevation data with two-dimensional sampled imagery, has become a valuable tool for image interpretation and geological analysis. Animated sequences of rendered imagery provide dramatic, scientifically precise "fly-over" simulations that capture the public's attention while providing a visual aid to scientists attempting to understand the nature and evolution of the earth and other objects in the solar system. More recently, capabilities have been developed to support mission planning by integrating spacecraft models from Computer Aided Design (CAD) systems with remotely sensed imagery, enabling visualization of mission scenarios for current and future deep space exploration missions. This article describes the basic methods used at JPL's Multimission Image Processing Laboratory (MIPL) and Digital Image Animation Laboratory (DIAL) to produce a variety of animation and visualization products from imagery returned by NASA spacecraft.

Figure 2.

Acquiring Image Data From Space
Figure 1 shows the flow of data for a typical planetary exploration mission. Remote sensing data from instruments on the spacecraft are returned to earth receiving stations in digital form, and transferred to data processing facilities that acquire the data and convert individual telemetry segments into scientific data records. The data processing paths for NASA earth observation missions are similar. For imaging instruments, image data records are created that contain the basic pixel data (decompressed if necessary) plus additional information including engineering data (camera temperature, voltages, etc.), navigation data (spacecraft location and orientation when the image was acquired), ephemeris data (the positions of planets, the sun, and other objects such as the moons orbiting other planets when the image was acquired), and camera geometry (where the camera was pointing, the view angle, etc.).
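
In rough outline, such a record is simply pixel data bundled with its supporting metadata. The Python sketch below is purely illustrative; the field names are hypothetical, and the actual JPL record formats are mission-specific and far more detailed.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class ImageDataRecord:
        # A hypothetical image data record; illustrative only.
        pixels: np.ndarray                 # decompressed pixel data
        camera_temperature: float          # engineering data
        voltages: dict                     # engineering data
        spacecraft_position: np.ndarray    # navigation data
        spacecraft_attitude: np.ndarray    # navigation data
        ephemeris: dict                    # sun, planet, and moon positions
        camera_pointing: np.ndarray        # camera geometry: boresight direction
        view_angle: float                  # camera geometry: field of view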

The engineering data are used to remove the camera signature from the returned imagery and to convert the data to physical units (e.g., brightness). The geometry data are used later in the animation process to construct various views of the surface. The formation of these data records is shown in the boxes labeled "real time" and "systematic" in Figure 1. Note that it is often necessary to construct image data records from telemetry data acquired at different times or at different ground receiving stations. Archival digital data products are produced at various stages of the processing stream and preserved for long-term scientific study. Specialized enhanced products are also generated to support detailed scientific analysis, and public information office (PIO) products are generated for dissemination to the press and made available via the Internet.
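
As a rough illustration of the first of these steps, the sketch below subtracts a dark-current frame and divides out a flat-field pattern before scaling to physical brightness. The arrays and the gain factor here are hypothetical; real camera-signature removal is modeled per instrument from the engineering data.

    import numpy as np

    def calibrate(raw, dark, flat, gain):
        # Remove the camera's dark-current signature, divide out the
        # pixel-to-pixel sensitivity (flat-field) pattern, and scale
        # the corrected counts to physical brightness units.
        corrected = (raw.astype(float) - dark) / np.clip(flat, 1e-6, None)
        return gain * corrected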

Figure 3.

Basic Image Rendering
Once the image data has been converted to physical units and the geometry is understood, it is possible to generate perspective view and animation products. This was first done at JPL in the early 1980's by a team led by Kevin Hussey. Hussey's team produced L.A. - the Movie, an animated sequence that simulated a fly-over of Southern California using multispectral image data acquired by the Landsat earth orbiting spacecraft. The remotely sensed imagery was rendered into perspective projections using digital elevation data sets available for the area within a Landsat image. Figure 2 illustrates the basic process. The upper left image shows one band extracted from the Landsat image. A segment from the image has been selected for rendering, and the perspective viewpoint has been defined as shown by the green and blue graphics overlay. The upper right image is a gray scale representation of the elevation data available for the image segment, with the same perspective viewpoint indicated. The elevation along the blue path in these images is shown graphically in the lower left image. Once the animation producer is satisfied with the viewpoint and perspective, the scene is rendered in 3D perspective as shown in the lower right image.
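
The geometry behind the rendering step can be sketched briefly. In the toy Python renderer below, each terrain sample (image brightness plus elevation) is treated as a 3D point and projected through a hypothetical pinhole camera, with a z-buffer keeping the nearest sample at each output pixel. Production renderers texture-map triangles or ray-cast instead, but the projection is the same idea; all names and parameters here are illustrative.

    import numpy as np

    def render_perspective(image, elev, cam, target, f=700.0, shape=(480, 640)):
        # Build an orthonormal camera basis looking from cam toward target.
        fwd = target - cam
        fwd = fwd / np.linalg.norm(fwd)
        right = np.cross(fwd, np.array([0.0, 0.0, 1.0]))
        right = right / np.linalg.norm(right)
        down = np.cross(fwd, right)

        # Treat each pixel (x, y) with its elevation z as a 3D point.
        h, w = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pts = np.stack([xs, ys, elev], axis=-1).reshape(-1, 3) - cam

        # Pinhole projection: u = f*x/z, v = f*y/z in camera coordinates.
        z = pts @ fwd
        zs = np.maximum(z, 1.0)              # guard against divide-by-zero
        u = (f * (pts @ right) / zs + shape[1] / 2).astype(int)
        v = (f * (pts @ down) / zs + shape[0] / 2).astype(int)
        ok = (z > 1.0) & (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])

        # Z-buffered splat (slow but clear): nearest sample wins each pixel.
        out = np.zeros(shape)
        zbuf = np.full(shape, np.inf)
        flat_img = image.reshape(-1)
        for i in np.flatnonzero(ok):
            if z[i] < zbuf[v[i], u[i]]:
                zbuf[v[i], u[i]] = z[i]
                out[v[i], u[i]] = flat_img[i]
        return out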

The scientist or animation director sketches out a desired flight path, as shown in Figure 3. The flight path is defined by a set of "key frames," each characterized by a specific viewing geometry and viewpoint; software interpolates between the key frames defined along the flight path to render the intermediate frames that make up the final animation. The animator controls the simulated speed of the fly-over by specifying the number of frames to be interpolated between each pair of key frames. Figure 4 shows one frame from L.A. - the Movie: the Rose Bowl, with JPL in the background against the San Gabriel Mountains. The vertical scale is exaggerated by a factor of 2.5 to reveal small scale features.
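
A minimal sketch of the interpolation, assuming each key frame is reduced to a camera position and a look-at target (a hypothetical simplification of the full viewing geometry): blending linearly between successive key frames yields the intermediate frames, and the number of steps between each pair sets the apparent speed. Production systems use splines and smooth rotational interpolation rather than straight-line blends.

    import numpy as np

    def interpolate_path(key_frames, steps_between):
        # key_frames: list of dicts with 'position' and 'target' arrays.
        # steps_between: frames inserted between each pair of key frames;
        # more steps means a slower apparent fly-over speed.
        frames = []
        for a, b in zip(key_frames, key_frames[1:]):
            for t in np.linspace(0.0, 1.0, steps_between, endpoint=False):
                frames.append({
                    'position': (1 - t) * a['position'] + t * b['position'],
                    'target':   (1 - t) * a['target'] + t * b['target'],
                })
        frames.append(key_frames[-1])
        return frames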

Figure 4.

Planetary And Earth Applications
Rendering and interpolation algorithms have improved since the era of L.A. - the Movie. In recent years, MIPL and DIAL have collaborated to produce a variety of fly-over sequences of planetary and earth imagery. Project scientists have found it invaluable to obtain three-dimensional perspective views of remote planets and their satellites. The use of stereo imagery, generally acquired from aircraft, has been widespread in the geology community for many years. A three-dimensional view of the surface aids analysis of surface features, the evolution of the surface, and the nature of surface disturbances that are volcanic or seismic in origin. Three-dimensional rendered animated imagery has become useful in planetary exploration for the same reasons.

Figure 5 shows a perspective view of Maat Mons, a large volcano on Venus. The Magellan mission mapped the surface of Venus using Synthetic Aperture Radar (SAR) in the early 1990's. Elevation information was provided by radar on board the spacecraft, by analysis of stereo image coverage of the surface, and by data acquired by earlier missions to Venus. Surface color has been incorporated into this image, based on limited radiometric measurements obtained by a Soviet lander spacecraft on Venus in the 1970's. The lander was ultimately crushed by the atmospheric pressure, but survived long enough to provide a limited sampling of surface color.

Figure 5.

In the mid 1970's, two Viking landers and two Viking orbiter spacecraft provided thousands of images of Mars from orbit and from two separate landing sites. The orbital imagery provided stereo coverage of significant portions of the Martian surface. Elevation computed from stereo imagery enabled perspective rendering and animation of portions of the Martian surface. Figure 6 shows a single three-dimensional image produced in this manner.
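
The underlying relation is standard stereo triangulation: for a rectified image pair, the range to a surface point is inversely proportional to its parallax (disparity) between the two views. A minimal sketch, with hypothetical numbers:

    # Classic stereo relation for a rectified image pair:
    #     range = focal_length * baseline / disparity
    def range_from_disparity(disparity_px, focal_px, baseline_m):
        # Larger parallax between the two images means a closer point.
        return focal_px * baseline_m / disparity_px

    # e.g., hypothetical values: a 4-pixel disparity, a 2000-pixel focal
    # length, and a 50 km baseline between the two orbital viewpoints.
    # Range differences across the scene yield the surface relief.
    range_m = range_from_disparity(4.0, 2000.0, 50000.0)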

The Space Shuttle has carried synthetic aperture radar systems on three separate occasions, obtaining high resolution radar imagery of the earth's surface. The third mission, referred to as SIR-C (Shuttle Imaging Radar-C), provided coverage of the Mammoth Mountain area of California in 1994. Figure 7 shows a three-dimensional perspective view created from SAR imagery acquired by the SIR-C radar system. SAR imagery requires different interpretation than imagery acquired by a more conventional imaging system. Brightness differences in SAR imagery represent differences in surface texture and in the orientation of surface features, rather than the color or reflectance of the surface. Bright features are oriented normal to the direction in which the radar signal travels, since surfaces facing the beam reflect it strongly back toward the radar. Dark features are generally aligned with the direction of radar signal travel. Differing textures also reflect the radar beam differently. This is illustrated in Figure 8, which shows a false color perspective projection of the same area. Here, false color is used as an interpretive aid to highlight differences in surface feature orientation and surface texture, making this rendered representation an extremely useful tool for scientific interpretation.
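
To see why orientation dominates, consider a toy backscatter model in which each terrain facet's return scales with the cosine of the angle between its surface normal and the direction back toward the radar. This is only a sketch with illustrative names; real backscatter also depends on wavelength, polarization, and surface roughness.

    import numpy as np

    def sar_brightness(normals, to_radar):
        # normals:  (..., 3) unit surface normals from the elevation model
        # to_radar: unit vector from the surface toward the radar
        # Facets facing the radar return strongly (render bright); facets
        # aligned with the beam direction return weakly (render dark).
        cos_angle = np.einsum('...k,k->...', normals, to_radar)
        return np.clip(cos_angle, 0.0, 1.0)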


Figure 6.

Mission Planning
Visualization and animation are also useful for mission planning and mission operations. CAD models of spacecraft can be incorporated with remotely sensed imagery in animations to illustrate spacecraft trajectories and data acquisition strategies. Animations are also used during flight operations to explain planned mission events to the press and, through the news media, to the public. Figure 9 shows one frame extracted from an animation of the Galileo spacecraft's approach to Jupiter in December 1995. The spacecraft was rendered from a CAD model obtained from the spacecraft design team. The star background is produced from a standard reference star catalog, and the Jupiter image was acquired by the Hubble Space Telescope. The spacecraft trajectory and planet motion models were derived for the animation from mission navigation files and command sequence files.

Figure 7.

A Growing Role
Visualization and animation are becoming increasingly important tools in planetary exploration. High speed computing equipment and increasingly sophisticated software systems make it possible to produce the types of products shown in this article on rapid time scales. These products are extremely useful in science analysis during flight operations, and are coming to play a larger role in supporting future mission planning and data acquisition strategies.

Acknowledgements
The authors wish to acknowledge the following individuals for their contributions to the figures: Raymond J. Bambery, Jeffrey R. Hall, Shigeru Suzuki, Randy Kirk, Alfred McEwen, Myche McAuley, and Paul Andres. In addition, the work of many other individuals within JPL's Science Data Processing Systems Section, and of many individuals supporting JPL's flight projects, makes it possible to acquire the data sets used to produce the types of products shown here; their effort is hereby acknowledged. The work described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

Figure 8.

William B. Green is Manager of the Science Data Processing Systems Section at Caltech's Jet Propulsion Laboratory. He has responsibility for design, development, implementation and operation of ground-based systems used to process science instrument data returned by NASA's planetary and earth observation spacecraft. Current activities include processing imaging and multispectral data returned by the Galileo spacecraft now in orbit around Jupiter, preparations for processing images of Saturn and its moons from the Cassini mission to be launched in late 1997, and processing of stereoscopic images of the surface of Mars acquired by the Mars Pathfinder lander in July 1997. The Section is also involved in supporting flight and ground software development and development of data reduction systems for a variety of earth remote sensing instruments to be flown as part of NASA's Mission to Planet Earth. The Section produces a variety of digital, film and video products; these include CD-ROM and photoproduct archival databases, and animations and "fly-over" sequences of planets and other solar system objects. The Section also develops and maintains a variety of Internet image database browsers, providing public access to large planetary image databases resident at JPL.

He is the author of two textbooks, Digital Image Processing--A Systems Approach and Introduction to Electronic Document Management Systems, and numerous technical papers. He has taught image processing at Harvard University, California State University at Northridge, and George Washington University. Mr. Green is a Senior Member of IEEE.



Figure 9.

Dr. Eric M. DeJong is a Planetary Scientist with the Earth and Space Sciences Division of the NASA Jet Propulsion Laboratory and a Visiting Associate at Caltech. His major research is the creation of image and animation products for NASA Space and Earth Science missions. He led the visualization efforts for the Voyager Neptune, Magellan Venus, and Galileo Earth missions and for the Hubble Space Telescope Saturn and SL9 observations, and has participated in the scientific analysis of observations from these missions. He is the principal investigator for the Solar System Visualization (SSV) Project, which was selected by NASA as one of three NASA science projects featured at the World Space Congress for the International Space Year. He and his team created planetary image sequences for the IMAX films Journey to the Planets, Destiny in Space and L5: First City in Space.

William B. Green.




© 1997 Animation World Network