HDR, or High Dynamic Range, is often associated with bracketing: shooting the same scene at several (usually 3–5) exposures and then combining them in post-processing. This is one way, but not the only way, to overcome the disadvantages of digital photography.
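To give a rough feel for what "combining exposures" means, here is a minimal sketch of exposure fusion in Python. The weighting idea (favor pixels near mid-gray, distrust clipped ones) follows Mertens-style "well-exposedness"; the pixel values, exposure labels, and sigma are invented for the example, not taken from any real camera.

```python
import math

# Three hypothetical exposures of the same 4-pixel scene, values in 0..1.
# The dark frame keeps the highlights, the bright frame keeps the shadows.
under = [0.02, 0.10, 0.45, 0.80]   # underexposed: sky well exposed
normal = [0.05, 0.30, 0.90, 1.00]  # mid exposure: highlights start to clip
over = [0.20, 0.70, 1.00, 1.00]    # overexposed: shadows well exposed

def well_exposedness(v, sigma=0.2):
    """Weight pixels near mid-gray (0.5) highest, clipped pixels lowest."""
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(*frames):
    """Per-pixel weighted average of the bracketed frames."""
    fused = []
    for pixels in zip(*frames):
        weights = [well_exposedness(v) for v in pixels]
        total = sum(weights)
        fused.append(sum(w * v for w, v in zip(weights, pixels)) / total)
    return fused

result = fuse(under, normal, over)
```

In the fused result, the clipped white pixels of the brighter frames are pulled back toward the detail held in the dark frame, and the crushed shadows are lifted toward the bright frame, which is the whole point of bracketing.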
First I’ll try to explain what I actually mean by that. A camera, contrary to common belief, “sees” very differently from a human eye.
The eye has a sharp focus range of only about 1–2 degrees, hence you see only a few letters of your newspaper really sharply and move your eyes while reading – a camera can take an in-focus shot of the whole newspaper page.
The eye does not capture a single moment, but records a continuous “video” – a camera takes a picture, “frozen” in time. This can be a very short exposure to freeze a water droplet in mid-air, a longer exposure to make a choppy ocean look like fog, or a very long exposure to catch enough light to make the Milky Way visible in the night sky.
The eye doesn’t take in a wide scene at once; the brain fills in the gaps with information about the environment to build a wider picture of the actual surroundings – a camera, depending on the lens, can capture 180 degrees or more in one shot.
The eye has a lens and an aperture – a camera has those too, and that is about where the similarities end.
The eye, in combination with the brain, can recover the shadows in the foreground when looking towards a sunset and darken the sky in the same scene, so you see the whole scene in even light; this ability is called dynamic range. The camera doesn’t do an equally good job in the same situation.
What the human brain does in such a situation is pretty much what a photographer does in post processing, also called editing.
Your brain edits a scene as it sees it. So the purist call for “unedited images” rests on a myth: there is no such thing as an unedited image, not even the things we see with our own eyes. Those are simple facts, not just an opinion.
What we call “reality” is only a one-sided perception of our surroundings. Every animal sees things differently. Insects see images made up of hundreds of single images, one from each facet of their compound eyes, and they can’t see infrared light. But they do see ultraviolet light, so their color perception is quite different from ours. A cat has superior night vision, a snail sees only light and shadow, and so forth. A monochrome photograph, for that matter, is actually something pretty abstract, due to the lack of colors.
You see, reality is a very complex and a very individual thing. We can't even imagine how any other being perceives a scene. There are even differences among us humans, some are color-blind, others need glasses, etc.
This article is about dynamic range, which is one small part of the subjects mentioned above.
Dynamic range means the ability to take in even a very strongly contrasted scene in some sort of even light, keeping detail in both the brightest and the darkest parts.
For example, you are on a beach with someone who is standing in front of the setting sun. You will still recognize this person, but you will also see the lit-up sky in all its colorful beauty. Then you take out your phone or some other camera and take a picture. Looking at the result, you will notice that either the person is reduced to a silhouette while the sky looks perfect, or you see the person, but the sky is blown out (too bright, nearly white).
This is because a camera is capable of “seeing” a lower dynamic range than our eyes, and because of the differences in “seeing” described above. Your eyes focus only on what they actually “point” at; the rest is filled in with side information by your brain, “edited” as you go. The brain can fill the gaps because your eyes constantly scan the scene, seeing the sky from different angles and in different lighting.
The camera needs to cover a much bigger area, and its “brain”, actually its operating system, tries to even out the lights and shadows. Even though its internal computer is pretty fast, it’s much harder and more computationally intensive to do this for such a big area (remember, the eye only focuses on about 1–2 degrees). In the older days of photographic film or glass plates it was much the same: you had one type of film in your camera, but such a contrast-rich scene might actually need two or more different types of film, more or less light-sensitive for different areas of the scene.
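The silhouette-or-blown-sky trade-off can be sketched numerically. This is a hedged toy model, not any real sensor: the invented scene spans roughly 20 stops of luminance, while any single exposure setting has to clip somewhere, so one end of the scene is always lost.

```python
import math

# Toy scene luminances in arbitrary linear units: deep shadow under the
# person, the person's face, the bright sky, the sun's halo.
scene = [0.001, 0.05, 30.0, 1000.0]

def capture(luminance, exposure, max_level=1.0):
    """One exposure: scale the light, then clip at the sensor's ceiling."""
    return [min(l * exposure, max_level) for l in luminance]

expose_for_sky = capture(scene, exposure=0.001)  # sky kept, person near black
expose_for_face = capture(scene, exposure=10.0)  # person kept, sky blown out

# Scene contrast in stops (doublings of light): log2 of the brightest
# luminance over the darkest.
scene_stops = math.log2(max(scene) / min(scene))
```

Exposing for the sky leaves the face a silhouette; exposing for the face drives the sky and sun to the same featureless white. Bracketing, or a single flat RAW with careful editing, is how you get both halves back.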
To overcome this, we do the same as our brain does. We edit.
Back in the day the photographer could recover a blown-out sky with filters in the darkroom. In digital photography we use software.
Sometimes it’s enough to just press the shutter, and the software on your camera or phone does this basic edit without you even knowing it (yes, every form of digital camera does this). But this only goes so far and is limited by the levels the programmer implemented.
Often we need to enhance the image more, to get closer to what we actually saw when we were shooting.
In older days we had photo negatives. In our digital age we have cameras that can shoot in “RAW”.
Negatives and RAW images hold a maximum of information about the scene photographed. The RAW image has one additional advantage: it shows pretty much the image as it is, with the colors not reversed as in a negative. But a RAW photo taken for a single-frame HDR image can look dull and flat – and it actually has to, if you want to keep all the detail. A RAW image, as the name suggests, isn’t necessarily meant to reflect our reality, but to give us as much information about our reality as possible (similar to a negative, which also doesn’t reflect what we really saw).
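That “dull and flat” look is, roughly speaking, low contrast: the midtones are compressed so that neither the shadows nor the highlights clip. A basic contrast edit then pushes the midtones apart again. Here is a hedged sketch using a simple S-curve (the smoothstep polynomial); the pixel values are invented for illustration, and real RAW editors use far more sophisticated curves.

```python
# A "flat" RAW-style rendering: all values huddle around the midtones,
# so shadow and highlight detail both survive, but the image looks dull.
flat = [0.25, 0.40, 0.50, 0.60, 0.75]

def s_curve(v):
    """Smoothstep S-curve: darkens shadows, brightens highlights,
    leaves mid-gray (0.5) untouched."""
    return v * v * (3 - 2 * v)

contrasty = [s_curve(v) for v in flat]
```

The curve stretches the tonal range back out without ever clipping to pure black or white, which is exactly why the capture is kept flat in the first place.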
The image shown below is meant to point out what my camera saw, and what my own reality looked like (as close as I could get to it).
For this I chose a more subtle contrast, not something as strong as a sunset, just to show the differences in texture etc.
Processing such images, by the way, is not just slapping a filter over the whole thing, as you would do in your phone apps. As there are different parts in an image – some brighter or darker, with different color casts, etc. – each area needs attention and each area needs to be treated differently. That’s exactly what our brain does on the go when we are looking at a certain scene.
I can’t show the whole process in one article, so a before-after comparison will have to do.
The first image shows the scene as I saw it, out in a storm on the South Coast of Wellington; the second image shows what the camera “saw” – a screenshot of the RAW image, shot as “flat” as possible to retain all the information about the texture and lighting.