
The Digital Eye: HDRI to Hit the Fans

In this month's Digital Eye, Peter Plantec looks into the rapidly evolving area of HDRI, or High Dynamic Range Imaging.

Image courtesy of Deron Yamada. © 2004 DYA367.

When combining real images with 3D and other vfx, a wide dynamic range of recorded light detail becomes ever more critical. The term HDRI, for High Dynamic Range Imaging (or sometimes High Dynamic Range Illumination, depending on your application), has been around for decades, but it is evolving rapidly, with much promise for our ever-expanding range of entertainment applications.

Vfx technical wizard, graphics research associate professor at USC and all-round interesting guy Paul Debevec is probably the best-known originator of HDRI technology. His presentation back at SIGGRAPH 97 demonstrated how to create HDR images by stacking regular images taken over a range of F stops. He's currently studying the way light passes through human skin and flesh, using an amazing LED light-array sphere and HDRI, but more on that in an upcoming VFXWorld article.

Way back in 1985 at Lawrence Berkeley National Laboratory, Greg Ward's early work on the Radiance rendering system led to the development of the first widely used HDR image format. Later, he developed the HDR extension for the JPEG format, and he's currently an HDRI consultant at Anyhere Software in Albany, California, where he's working on designs for new HDRI displays.

Both Paul and Greg were helpful in expanding my understanding of HDRI. Dynamic range refers to the ratio between the largest and smallest values in a natural range of values. For light in a scene, that's the ratio between the brightest and the darkest values present.

Ordinary film tends to group the top range of bright things into white (blown out) and the darkest range into black. Thus film records a scene by compressing the dynamic range significantly. This means detail in clouds is lost as are the details in deep shadows.

As an aside, if you wonder why you love Ansel Adams' pictures of western landscapes, with their awesome clouds: he was one of the very first photographers to discover ways to maximize the dynamic range of his black-and-white film images using color filters. Notice the contrast and detail in his bright clouds and dark landscapes. Black-and-white film actually has a wider dynamic range than color film.

Some standard digital cameras can provide a slightly greater dynamic range than film, but it's still pathetically limited. Debevec showed us that we can record much of the detail in the deep shadows and the bright sky by compiling a special image created from a series of images shot at different exposures or different light levels. For example, if you shoot a given scene at F22, you're likely to get fair detail in the clouds, but the deeper shadows will be black. If you then shoot the same scene at F1.8, your sky will be blown out, but you'll have a lot of detail in those deep shadow regions. Now shoot a series of 10 images, one full F stop apart, and you'll collect enough information about that scene to create an image with good detail in both the clouds and the shadows. Unfortunately, typical 8-bit image formats like JPEG can't contain all that HDRI detail, so tonal mapping is used to simulate it. That is, the details in the clouds are taken from the high F stop images, while detail in the shadows is taken from the low F stop images, and they're mapped onto the very limited 8-bit JPEG image.
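If you're curious what that compiling step actually does, here's a minimal sketch in Python with NumPy. It assumes the images are already aligned, normalized to 0-1 and have a linear sensor response; a real pipeline, like the one in Debevec's SIGGRAPH 97 paper, would first recover the camera's response curve. The function name and the hat-shaped weighting are mine, purely for illustration.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge a bracketed series into one HDR radiance map.

    images: list of float arrays in [0, 1], same shape, assumed to
            have a linear sensor response (an illustrative shortcut).
    exposure_times: relative exposure time for each image.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat-shaped weight: trust mid-tones, distrust pixels that
        # are nearly black (noisy) or nearly white (clipped).
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)   # each image's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-6)
```

Each exposure votes on the true radiance of a pixel, and the votes from well-exposed pixels count most, which is how the cloud detail and the shadow detail both survive into one image.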

Truly extending dynamic range is going to take a lot more data. To pack it all in, HDRI image formats don't use the standard integer format (kinda like counting on your fingers) but, rather, floating-point math. Avoiding an abstruse discussion of mantissas being multiplied by exponents, let's just say floating point can very precisely express the huge range of values HDRI requires. A true HDRI image might take 10-20MB to store, while a tonal mapped JPEG version might take 50KB. Your visual cortex fills in all the missing information by psychologically extrapolating what should be there. It's amazing how forgiving the mind can be. But when we deliver real HDR images to the brain, it rejoices. Before we get into how that can be accomplished, let's take a closer look at HDRI.
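As a concrete example of how clever packing keeps those files manageable, here's a small Python sketch of the shared-exponent RGBE encoding behind Ward's Radiance .hdr format: one 8-bit mantissa per color channel plus one shared 8-bit exponent, so a pixel costs four bytes instead of twelve for full floats. Treat it as a sketch of the idea, not a complete file writer.

```python
import math

def float_to_rgbe(r, g, b):
    """Pack one linear RGB pixel into Radiance-style 4-byte RGBE:
    three 8-bit mantissas sharing a single 8-bit exponent."""
    v = max(r, g, b)
    if v < 1e-32:                      # too dim to represent
        return (0, 0, 0, 0)
    m, e = math.frexp(v)               # v = m * 2**e, with 0.5 <= m < 1
    scale = m * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_to_float(r8, g8, b8, e8):
    """Unpack 4-byte RGBE back to linear floating-point RGB."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e8 - 128 - 8)  # 2**(e-128) / 256
    return (r8 * f, g8 * f, b8 * f)
```

A quick sanity check: `float_to_rgbe(1.0, 0.5, 0.25)` gives `(128, 64, 32, 129)`, and `rgbe_to_float(128, 64, 32, 129)` returns the original values, yet the same four bytes could just as easily hold a pixel thousands of times brighter.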

Your eye can process a range of about 10,000:1 in luminance values out of the actual analog of true values out there. The typical monitor can only manage about 255:1. Tonal mapping is a way to reproduce the general appearance of an HDR image on a low dynamic range display. It's done by scaling the luminance value of each pixel in an HDR image so that it can be displayed on your typical display device. Think of it this way: the detailed contrast and color data in the clouds are mapped in over the blown-out (all values of 1) areas, and the contrast/color values in the shadows are laid over the zeroed-out black shadows. That makes the clouds a little darker and the shadows a little brighter, so we can see the detail more nearly as we would with the naked eye. Unfortunately, the typical display still has only a very limited range of contrast, which our brain compensates for in interpreting these images.
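To make that scaling less abstract, here's a toy global tonal mapping operator in Python, loosely in the spirit of Reinhard's well-known operator. The key value and luminance weights are conventional choices, not anything specific to the tools mentioned in this article.

```python
import numpy as np

def tone_map(hdr, key=0.18):
    """Compress an HDR image's luminance into [0, 255] for display.

    hdr: linear-light RGB array of shape (H, W, 3).
    key: overall brightness target (0.18 is a common default).
    """
    # Per-pixel luminance (Rec. 709 weights).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Scale so the image's log-average luminance lands on the key value.
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))
    scaled = key * lum / log_avg
    # Compress: very bright values approach 1 instead of clipping to it.
    compressed = scaled / (1.0 + scaled)
    # Rescale each channel by the luminance ratio to preserve color.
    ldr = hdr * (compressed / np.maximum(lum, 1e-6))[..., None]
    return np.clip(ldr * 255.0, 0, 255).astype(np.uint8)  # gamma omitted
```

The divide-by-one-plus-itself curve is the whole trick: shadows pass through nearly untouched while the brightest values are squeezed toward white instead of blowing out.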

Here's a tonal mapped image I took from my home in Colorado:


The shot on the left is a typical 8-bit digital photo shot from my deck. The second was compiled in Photoshop from a series of nine shots, each exposed at a progressively different F value; these were combined into a 32-bit image, then down-converted to 8 bits. Note the cloud and shadow detail. All images unless otherwise noted courtesy of Peter Plantec.

I didn't even use a tripod; I just balanced the camera on my deck railing and shot a bracketed series of images over nine F stops. BTW, some cameras can automatically shoot a bracketed series and compile HDRI files internally.

Brightside Technologies is now offering a patented system in which a standard CCD captures two images at different exposures on each shutter opening, significantly increasing the dynamic range and contrast of standard video sequences. Panavision claims its Genesis digital cinema cameras also provide a significantly wider dynamic range and more cleanly differentiated colors than film.

In another approach, you can combine a low-sensitivity sensor with a closely spaced high-sensitivity sensor in a hybrid pixel matrix, capturing a wide dynamic range for each pixel in your shot. This requires specially designed sensors, of course. For example, Fuji's SuperCCD S3 Pro camera has a chip with both types of sensors in a very simple but effective array.

The small sensors are the less sensitive ones. Such arrays yield slightly lower resolution with a big increase in the image's dynamic range. © Fuji Film.
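In software terms, merging the two readings for each pixel might look something like the sketch below. The sensitivity ratio and clip threshold are made-up illustrative numbers, not Fuji's.

```python
import numpy as np

def merge_dual_sensors(high, low, sensitivity_ratio=16.0, clip=0.95):
    """Merge paired photosite readings into one HDR pixel value.

    high: the large, sensitive photosite (clips in bright areas).
    low:  the small, less sensitive photosite next to it.
    sensitivity_ratio: how much brighter the scene must be to saturate
                       the small sensor (illustrative value).
    """
    # Where the sensitive pixel is still unclipped, trust it;
    # otherwise fall back to the small pixel, rescaled to match.
    return np.where(high < clip, high, low * sensitivity_ratio)
```

The highlights come from the small sensor and everything else from the large one, which is why the array trades a little resolution for a lot of dynamic range.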

Clearly, this is a hot area of development, with plenty of emerging technology that will change the face of digital photography and cinematography a great deal over the next few years. I've spoken with other vfx people, and everyone seems to be looking forward to these developments. There is general agreement that the more dynamic range information in the images, the better the effects will be.

I encourage you to experiment with HDRI using Photoshop CS2, creating your own 32-bit HDR images and down-converting them to 8-bit JPEGs. It's not difficult, and as you learn more about adjusting the parameters, your work will improve. If you don't have Photoshop, there are also some excellent freeware HDRI applications, including HDRShop from USC, EXRDisplay from ILM and Photosphere from Greg Ward.

Greg told me he's been consulting on a very exciting development at Brightside Technologies in Canada. Hot on the scene at SIGGRAPH 2006 was their new HDR monitor. It can display a very wide range of brightness and contrast, which allows more accurate display of true HDRI, but more on this in a moment. First, what are some ways HDRI impacts vfx?

Light Probes

A light probe image is a panoramic or omni-directional HDR image that records incident light at one spot in a scene.

Traditionally, probes have been shot using a large polished chrome ball or mirror ball placed in the center of the scene, but that too has evolved. Professionals today are stitching together complex and detailed 360-degree panoramic images that are used to place 3D assets into hyper-realistic scenes. The data in the image is then used to calculate all the light in the scene. Greg Downing is a vfx consultant who has pioneered techniques in panoramic HDRI imagery. He was helpful to me in understanding how these remarkable spherical images are shot. He has detailed instructions in his DVD training from Gnomon Workshop, if you're interested in the serious details. He also has a fascinating website (www.gregdowning.com/HDRI/stitched/) where, for free, he explains how to shoot true spherical panoramas in HDR, using Stitcher to put them together. The cool thing is that, with the right software, the huge dynamic range allows you to adjust your lighting for day or night shots without much trouble, maintaining the rich detail. These images can also be used to calculate the light, reflections and shadows in 3D vfx work.
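To give a feel for how rendering software actually uses these panoramas, here's a tiny Python sketch that looks up the incident light arriving from a given direction in a latitude-longitude HDR panorama. Mapping conventions vary from package to package; this one is just an example, not any particular tool's API.

```python
import math

def sample_latlong(probe, direction):
    """Look up incident radiance from a lat-long HDR panorama.

    probe: H x W x 3 array of linear radiance values (e.g. loaded
           from a stitched spherical HDR image).
    direction: unit vector (x, y, z), with y pointing up.
    """
    x, y, z = direction
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)    # azimuth -> column
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi  # elevation -> row
    h, w = probe.shape[:2]
    col = min(int(u * w), w - 1)
    row = min(int(v * h), h - 1)
    return probe[row, col]
```

A renderer repeats that lookup for thousands of directions per shading point, which is how one stitched image ends up lighting, reflecting off and shadowing an entire 3D asset.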

I created this simple image in Vue 5 Infinite using its image-based environment tools. The entire background and ground are part of an original HDR spherical light probe provided with Vue 5. The only geometry used is the foam-walled car.

A few months ago, I interviewed Paul Franklin from Double Negative in the U.K. about their use of HDRI on Batman Begins. I think it's a good example of using the 32-bit OpenEXR file format in vfx work. Basically, Double Negative dispatched a location team to Chicago, which was to be the prototype for Gotham. More than a million individual images were shot and combined into more than 200 spherical panoramas. Using their proprietary dnPhotofit tool set, they extracted the data necessary to create an entire 3D cityscape with half a million individual buildings. They then used the HDR images to texture the buildings and adjusted the lighting to match the director's requirements. The results were spectacular. Few moviegoers would believe that Gotham was not a real city. Double Neg's several proprietary HDRI tool sets made it all possible on a reasonable schedule and within budget.

Still, we don't watch our movies and TV in HDRI. Wouldn't it be great if we could avoid down-conversion with tonal mapping and display HDRI the way it was meant to be viewed? Well, at last we can.

HDRI Display Technology

While I was talking with Greg Ward recently, he got me excited about what's on the horizon for our viewing pleasure:

"I think the next big thing in HDRI is going to be in entertainment. Right now, with the switch to high-definition TVs, we have a huge opportunity to integrate into the equipment some of the new technology capable of displaying hi-def HDRI images."

Yeah, but what about the cost?

"It's coming down rapidly. In fact, the LED technology is being developed for use in laptops because it provides a much greater luminance range at a lower power drain than fluorescent bulbs. That kind of massive market will surely drive down the cost, so that wall-mounted video displays will be able to show HDR video."

I'm wondering if people care. A lot of people don't even see a need for HDTV. Do you think it will make that big a difference?

"I'm a big stickler on quality and image fidelity. HDRI will bring to video what hi-fi and then digital sound brought to audio. It's taking longer, but once you experience it, you won't ever want to go back."

Try to get your mind around what an HDRI display might be like. Ordinary flat-panel LCD displays have a couple of fluorescent tubes behind the LCD panel providing uniform light, with the image controlled entirely by the variable transparency of each colored LCD cell. The overall transparency of the panel usually runs between 3% and 8%, so most of that light is absorbed by the panel itself in creating the image, which is why the brights are not all that bright.

HDRI displays are different. They use a hybrid of LED and LCD technology. The LED portion consists of an array of ultra-bright white (or tricolor) LEDs as the light source behind the LCD panel. That alone increases the brightness and contrast while reducing power consumption. But to significantly increase the dynamic range, we need more. This is accomplished by taking the luminance information in the HDR image and using it to control the brightness of the individual LEDs in the back array. Each LED controls the brightness of a small area of the overall image, so the LED image is lower resolution than the front color image. Thus a low-resolution, somewhat blurry, grayscale version of the overall picture is displayed behind the LCD panel. This arrangement results in a massive increase in contrast ratio and dynamic range. For example, the typical display we work with has a real contrast ratio of about 300:1, whereas the HDRI display has a contrast ratio closer to 50,000:1. Furthermore, we get about a 100-fold increase in the differential between the brightest brights and the darkest darks. The resulting images are awesome to behold. It's a much bigger improvement than just increasing the resolution to HD status.
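For the algorithmically inclined, here's a bare-bones Python sketch of that dual-modulation idea: drive each backlight LED from its block of the HDR image, and let the LCD in front show whatever ratio remains. The grid size is arbitrary, and real display drivers also model the optical blur each LED spreads across its neighbors; this is just to show the split.

```python
import numpy as np

def dual_modulation(hdr_lum, led_grid=(24, 32)):
    """Split an HDR luminance image into a coarse LED backlight
    image and a fine LCD transmittance image (a minimal sketch).

    hdr_lum: 2-D array of linear luminance values.
    led_grid: (rows, cols) of the LED array behind the panel.
    """
    gh, gw = led_grid
    h, w = hdr_lum.shape
    bh, bw = h // gh, w // gw                   # pixels per LED block
    img = hdr_lum[:gh * bh, :gw * bw]           # crop to a whole grid
    blocks = img.reshape(gh, bh, gw, bw)
    # Drive each LED at its block's peak: this is the low-resolution,
    # blurry grayscale image described above, and it guarantees the
    # LCD never needs a transmittance greater than 1.
    led = blocks.max(axis=(1, 3))
    backlight = np.repeat(np.repeat(led, bh, axis=0), bw, axis=1)
    # The LCD shows the ratio of the wanted image to the backlight;
    # the two multiply optically to reproduce the HDR luminance.
    lcd = img / np.maximum(backlight, 1e-6)
    return led, lcd
```

Because the final brightness is the product of the two layers, the contrast ratios multiply too, which is where numbers like 50,000:1 come from.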

I predict that HDRI HDTV is only a few years away. In five years, it will be standard to shoot both TV and movies in HDRI using the new multi-sensor-per-pixel HDRI cameras. As we see the spectacular new images, we will hunger for them. This opens some interesting new worlds for vfx houses to create sequences with a whole new level of spectacularity (my word), with much more attention paid to light detail. Truly, it will be like looking through a window at an alternate reality.

Peter Plantec is a best-selling author, animator and virtual human designer. He wrote The Caligari trueSpace2 Bible, the first 3D animation book written specifically for artists. He lives in the high country near Aspen, Colorado. Peter's latest book, Virtual Humans, is a five-star selection at Amazon after many reviews.