Just a few weeks ago, while thinking about different topics for the blog, it dawned on me that HDRI is the basis for a lot of different visual effects assets, yet a lot of people don't know what it actually is or how it works.
So what is HDRI?
HDRI stands for High Dynamic Range Imaging. Simply stated, it is an umbrella term for image formats, and the techniques to create them, that support a higher dynamic range than the "standard".
So, what is dynamic range, and what's the standard? Dynamic range (in this context, for imaging - it is also used for audio) is the ratio, or "distance" between the smallest and largest luminance values, i.e. the black and white point, as it is displayed (or captured) by a device.
There is no current "standard" of dynamic range. Different pieces of equipment allow for different ranges, but none of the standard acquisition or display methods come close to the (perceived) dynamic range of the human eye.
Dynamic range can be measured in f-stops, density, or contrast ratio. As a rule of thumb, 14 f-stops equal about 4 density points, or a ratio of roughly 10,000:1. The scale is logarithmic: density 1, or a 10:1 contrast ratio, equals about 3.3 stops.
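Since the scale is logarithmic, the conversions between the three units are just powers and logs. Here is a minimal sketch of those rules of thumb (the function names are my own, just for illustration):

```python
import math

def stops_to_ratio(stops):
    """Each f-stop doubles the light, so the contrast ratio is 2^stops."""
    return 2 ** stops

def ratio_to_density(ratio):
    """Density is the base-10 logarithm of the contrast ratio."""
    return math.log10(ratio)

def ratio_to_stops(ratio):
    """Stops are the base-2 logarithm of the contrast ratio."""
    return math.log2(ratio)

print(stops_to_ratio(14))       # 16384 -- roughly the "10,000:1" rule of thumb
print(ratio_to_density(16384))  # ~4.2  -- roughly 4 density points
print(ratio_to_stops(10))       # ~3.3  -- density 1 is about 3.3 stops
```

This also shows why the rules of thumb are only approximate: 14 stops is exactly 16,384:1, not 10,000:1.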
Bit depth implies a contrast ratio, but a given contrast ratio does NOT necessarily imply that bit depth.
8 bits (=2 to the power of 8=256) equals a 256:1 contrast ratio. This is the current standard of your VGA graphics card.
14 bits (supported by some camera RAW formats) therefore equals a 16,384:1 contrast ratio.
But this conversion is a bit misleading, since an (advertised) 2,000:1 contrast ratio on a monitor does not suddenly give you an (approximately) 11-bit depth. You will still only have 256 steps, mapped between your black and white point. The black and white point are simply further apart, which means you're likely to get banding effects in certain situations.
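A quick way to see why stretching the same 256 steps over a wider range causes banding: the luminance jump between adjacent code values grows with the range. A minimal sketch (linear simplification, ignoring gamma encoding, which real displays use):

```python
def step_size(contrast_ratio, bits=8):
    """Luminance jump between adjacent code values, in linear ratio units.
    Assumes the range is divided evenly -- a simplification; real displays
    apply a gamma curve."""
    levels = 2 ** bits              # 256 levels for 8 bits
    return contrast_ratio / (levels - 1)

print(step_size(256))   # ~1.0 -- one step per unit of ratio
print(step_size(2000))  # ~7.8 -- each step is almost 8x coarser
```

Same number of steps, much coarser steps: smooth gradients are where that coarseness shows up as visible bands.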
But why do we need or want more than that range?
While the human eye can only distinguish about 16 million colors (so 8 bits per channel, roughly 16.7 million colors, should be fine), it can "capture" a theoretical range of about 1,000,000:1, or about 20 f-stops. The eye does that by constantly re-adjusting its "exposure" through the iris; its static range is only about 100:1 (about 6.5 f-stops).
So, in order to get closer to those 20 f-stops, we have to pull a few tricks. That's where HDRI comes into play. The usual method for photographically capturing that range is to put your camera on a tripod and shoot a series of multiple exposures: a minimum of 3 exposures, shot at a 2-f-stop difference (normal exposure, +2 stops, and -2 stops). I usually do 5 exposures, and in extreme light situations up to 9. The dynamic range of a camera chip changes with the ISO (as a rule of thumb, the higher the ISO number, the lower the dynamic range). A Nikon D3 or Canon 5D, for instance, has a 14-bit analog/digital converter. That again is a bit misleading, because it does not mean we're getting 14 stops (= 14 bits) of dynamic range. In practice, you're looking at 5-9 stops per exposure, depending on the ISO setting.
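The bracket itself is simple arithmetic: each stop doubles or halves the exposure time. A small sketch of what a symmetric bracket around a base shutter speed looks like (a hypothetical helper, not tied to any camera or the Promote Control):

```python
def bracket(base_shutter, exposures=5, step_stops=2):
    """Return shutter speeds (in seconds) for a symmetric exposure bracket.
    Each f-stop doubles (or halves) the exposure time."""
    half = exposures // 2
    return [base_shutter * 2 ** (step_stops * i) for i in range(-half, half + 1)]

# 5 exposures around 1/60 s at 2-stop steps:
# 1/960, 1/240, 1/60, 1/15, ~1/4 s -> covers base exposure +/- 4 stops
for t in bracket(1 / 60):
    print(f"{t:.5f} s")
```

A 5-exposure bracket at 2-stop steps spans 8 stops end to end, which (added to the 5-9 stops each single exposure captures) is how the merged result reaches well past what one frame can hold.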
In order to do these exposures, I would recommend a handy little device called the Promote Control.
It is basically a remote control that works with a multitude of digital still cameras and allows you to automatically set a number of exposures and exposure steps, and warns you if any of them are out of range. I'm using it because there are limitations to most cameras. For "2012", I used the Nikon D3. The Nikon allows for up to 9 exposures, but only in 1-stop increments. Currently, I'm working with the Canon 5D Mark II, which allows 2-stop increments, but only 3 exposures. And it would be tedious to set each exposure or set of exposures manually, especially on location, when time is usually short. Another option is to use a laptop with remote-control software (Canon ships with it), but that's also not practical in most environments.
There are a bunch of different software tools to combine multiple exposures into one HDR. Two file formats seem to have become the standard: hdr and exr. I'm using the exr standard (OpenEXR was created by Industrial Light & Magic), and the Photomatix software by HDRsoft. HDRshop and, of course, good old Adobe Photoshop are also popular. All of them have their advantages and disadvantages. Photoshop is probably the most powerful software for HDRI, since you have all the manual correction and filtering options you could ever wish for. My main reason for using Photomatix was that I needed to shoot HDRs for "2012" out of a helicopter. At first I thought it wasn't possible, but I tried it anyway. I had the helicopter hovering over a mountain and shot 5-9 exposures (in one second with the Nikon). Photomatix has a built-in pattern recognition that aligns the pictures, and it actually works (Photoshop's is more iffy, and HDRshop doesn't have one). The disadvantage of Photomatix vs Photoshop, for instance, is that tone mapping usually results in slightly strange "off-color" images, and the tools to prevent or correct that are limited. They have some kind of Technicolor-like quality that I think can actually be cool in an artistic sense, but not so useful if you need accurate color representation for CG textures, for instance.
Tone mapping is the process of mapping one set of color values to a different set of values, pretty much what a look-up table (LUT) does. For HDRI, this means "compressing" a multitude of exposures into an 8-bit or 16-bit format while giving the appearance of a higher range. This can be extremely useful when you have to output the result of an HDR image to a format that doesn't support a 32-bit (per channel) bit depth, or if you want to use it for textures and simply can't afford 32-bit textures because of render times, file sizes, etc.
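To make the "compressing" part concrete, here is a minimal global tone-mapping sketch in the style of the well-known Reinhard operator. This is not what Photomatix (or Photoshop) actually does internally, just an illustration of squeezing unbounded HDR luminance into 8 bits:

```python
def tone_map(luminance_values):
    """Map HDR luminance (any positive float, midtones around 1.0)
    into 8-bit code values, Reinhard-style."""
    out = []
    for L in luminance_values:
        Ld = L / (1.0 + L)          # compresses [0, infinity) into [0, 1)
        out.append(round(Ld * 255)) # quantize to an 8-bit code value
    return out

# Highlights 1000x brighter than the midtones still land inside 8 bits,
# while shadow detail near zero keeps usable separation:
print(tone_map([0.05, 1.0, 50.0, 1000.0]))  # [12, 128, 250, 255]
```

Notice the trade-off: the curve spends most of its code values on shadows and midtones and flattens out in the highlights, which is exactly the "appearance of a higher range" at the cost of highlight separation.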
This is an extreme example to show that no single exposure can show all details in the highlights around the sun, AND the shadow detail in the foreground trees. Just running this through the Photomatix software and compressing it into an 8-bit file gives you this result:
This is just an uncorrected output, to give an example what the software does under extreme circumstances.
The areas we use HDRIs for are: textures for CG models, sky domes as reflection maps, sky domes as HDR lighting maps, and live action backgrounds. For textures, we usually only need three exposures, since a building wall, for instance, doesn't have a dynamic range high enough to warrant more. For sky lighting maps, we definitely use 7-9 exposures, same for reflection maps. For live action backgrounds (meaning the background for a greenscreen or bluescreen shoot), we usually use 3-5 exposures. We wouldn't necessarily use a tone-mapped file as an actual background for a greenscreen shoot, since that would look fake. But I like to have the range, so we have more leverage in adjusting our backgrounds and playing with the lighting to make it more dramatic.
Here is an example of such a background:
Again, this is only an uncorrected 8-bit representation of the actual image, since a full 32-bit file could not be displayed here.
Here are three samples of the original exposures:
No single exposure can give us the details we need in the stained glass windows, and in the shadow areas on the back columns.
Since digital imaging acquisition standards constantly evolve, I'm pretty sure the term HDRI itself may become obsolete some day, when every imaging device will, by default, have a dynamic range greater than that of the human eye.
For more info:
http://www.hdrsoft.com/ (photomatix software)
http://projects.ict.usc.edu/graphics/HDRShop/ (hdrshop software)
http://www.adobe.com/ (photoshop software)