I know, you might see lots of images, especially on social media, showing all those beautiful Milky Way arcs.
But what you might not know is one important fact:
The Milky Way, as we see it from our point of view in this galaxy, is no arc at all!
This arc shape comes from a few conditions that arise when capturing our galaxy in photographs.
First, the Milky Way, in reality, is a disk-shaped spiral. We are pretty far from the center, on its outskirts, as shown in this image:
This means we are sitting on the brim of a plate. When we look towards the Milky Way center, we do so from this brim. The Milky Way center, or core, is that beautiful big area with all its colors. It’s usually visible in the southern hemisphere from around February to October.
Back to our position… As mentioned, we sit on the outskirts of our disk-shaped galaxy. Staying with my plate example, looking towards the core of our galaxy shows it as a pretty straight line, naturally, just as a plate looks like a line when you view it from the side.
If you are out on a clear night, in a really dark place, you will see the Milky Way with the naked eye. You will also notice that it IS actually a straight line.
So how do we get those mysterious arcs then?
Well, this happens due to the way a camera is used, what sort of lens you have on, and the camera's angle of view.
The curvature of our planet also plays a role (sorry, flat-earthers), as our horizon isn't a straight line in the first place.
There are two main techniques to shoot such an arc, given the Milky Way sits in a somewhat horizontal position, neither too close to nor too far from the horizon. The greater its distance from the horizon, the more pronounced the arc gets:
The first one is to use a very wide-angle (or even fish-eye) lens, something around 15mm focal length on a full-frame camera. With this sort of setup the arc is pretty obvious and can be captured in a single shot. The arc effect comes from the lens distortion. It’s very obvious if the center of the lens points somewhere between the horizon and the Milky Way. Without correcting this distortion, you will see the horizon curving downward and the Milky Way bending upward.
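To get a feel for that bending, here is a small numeric sketch – a minimal model assuming an idealized equidistant fisheye (r = f × θ) and made-up values, not any specific lens. A band of sky that a rectilinear lens would render as a perfectly straight line comes out curved:

```python
import numpy as np

# Sketch: an equidistant fisheye (r = f * theta) bends a band that a
# rectilinear lens would render as a straight line. Illustrative values only.
f = 15.0  # focal length in mm (hypothetical ultra-wide)

def fisheye(v):
    """Project a direction vector onto the sensor; camera looks along +z."""
    v = v / np.linalg.norm(v)
    theta = np.arccos(v[2])            # angle off the optical axis
    phi = np.arctan2(v[1], v[0])       # direction around the axis
    r = f * theta                      # equidistant fisheye model
    return r * np.cos(phi), r * np.sin(phi)

# Directions along a band 30 degrees above the lens axis, all in one flat
# plane through the observer (a straight line for a rectilinear lens):
for az_deg in (-60, -30, 0, 30, 60):
    v = np.array([np.tan(np.radians(az_deg)), np.tan(np.radians(30.0)), 1.0])
    x, y = fisheye(v)
    print(f"azimuth {az_deg:+3d}  ->  x = {x:+6.1f} mm, y = {y:+5.2f} mm")

# y peaks in the middle (~7.9 mm) and drops toward the edges (~5.1 mm):
# the straight band comes out of the lens as an arc.
```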
Now when you post-process such an image, most programs will find the actual horizon and straighten it, which curves the galaxy even more – and the arc is born.
In some cases the software might adjust to the galaxy instead, creating a downward ‘arc’ of the horizon, which can usually be corrected, depending on the software used. Like in the image below, which shows the Milky Way correctly but the horizon curved.
The second technique is to shoot a panorama with a somewhat longer lens (say, 24mm on a full frame). Naturally this doesn't work in a single shot. The advantage is that you catch more detail. The disadvantage is that you need to be fast: as Earth rotates, the Milky Way ‘moves’ across the sky during the shooting. This technique is often used by more advanced astro shooters.
When you stitch your individual shots together, you'll see distortion effects similar to those in single-shot images (fish-eye/super-wide-angle lens). This is because the combined field of view 'mimics' that of a fish-eye. However, you get a much higher resolution.
Some of the images above are single shots, some are panoramas. Because the effect is the same, I didn't bother splitting this up any further.
The top image is a panorama of 19 individual shots; image #4 is a panorama of 24 shots in 2 rows.
You see something similar when you shoot a panorama with your phone, for example parallel to a street. The street will show a pretty weird angle in some of those shots. This is due to the same effect, shooting several images from different angles – because that’s exactly what the phone does, too.
You might also have seen images of the Milky Way rising steeply from the horizon. In those you won’t get any arc. This is due to said steep angle, very roughly around 90 degrees to the horizon. With that, the software only needs to straighten the horizon, which has no visible effect on such a steep Milky Way.
Those images below are vertical panoramas of 5 or more individual shots.
First, what does <macro> mean?
Macros are essentially photographs of very small objects, shot very close and magnified – like an insect, a flower, or part of a plant, showing lots of detail that escapes the eye when you walk past.
There are actually <real macros> and <close-up photography>. What most of us label as macro is, more often than not, actually close-up.
The difference is in the gear. A real macro uses a lens with a magnification factor of 1x1 or higher; anything below that is technically close-up.
1x1 means that if you photograph a rice grain 10 mm long with a 1x1 lens, the image of the rice grain will also take up 10 mm on your sensor.
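As a quick sanity check, here is that arithmetic in a few lines of Python (the grain size is from the example above; the 36 mm sensor width is the standard full-frame assumption):

```python
grain_mm = 10.0                  # the rice grain from the example
sensor_width_mm = 36.0           # full-frame sensor width

for magnification in (0.5, 1.0, 2.0):    # close-up, "real" macro, super-macro
    image_mm = grain_mm * magnification  # size of the grain on the sensor
    share = image_mm / sensor_width_mm
    print(f"{magnification}x: {image_mm:.0f} mm on sensor "
          f"({share:.0%} of the frame width)")
```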
Real macro lenses are pretty expensive. Many lenses carrying the name <macro> are actually below 1x1. But close-ups aren’t any less impressive.
I don’t own any dedicated macro gear myself, so I make do with what’s at hand:
First, any lens needs a minimum focusing distance to the object you want to photograph.
This is what makes the magnification factor difficult. Say you have a nice big zoom lens, so you could just zoom in on an object until you have it as close as you like. Good plan, but a 150-600 mm lens (my biggest one) needs some 5-10 meters of distance at 600 mm. It's hard to even see a bug at that distance.
So, a long lens might not be the way then. But there is a way to overcome such issues, without investing too much.
Extension tubes. They physically increase the distance between lens and sensor (or film, in the old days). They fit between your lens and the camera body.
The result is that you can get considerably closer to the object you want to photograph, but the tubes also flatten the depth of field (DOF) considerably.
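How much closer, in magnification terms? A rough thin-lens approximation (with the lens focused at infinity) says the added magnification is about the tube length divided by the focal length. A sketch, with the lens's native magnification as an assumed value, not a spec I'm quoting:

```python
focal_mm = 50.0
native_magnification = 0.21      # assumed value; check your lens's spec sheet

for extension_mm in (12, 20, 36, 68):    # single tubes and a full stack
    m = native_magnification + extension_mm / focal_mm
    print(f"{extension_mm:2d} mm of tubes on a {focal_mm:.0f} mm lens "
          f"-> roughly {m:.2f}x")

# A full 68 mm stack on a 50 mm lens lands around 1.6x - "real macro"
# territory, from tubes that contain no glass at all.
```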
For my personal setup, I found a fast 50mm Canon prime lens to be the best solution. The one I got is the EF 50mm F1.8, actually my cheapest lens; I think it’s under $250 new. There is also an F1.4, but that one is considerably more pricey and less sharp at the edges.
You might find that the kit lens that came with your camera is actually sufficient; for a start, I suggest setting it to around 50mm. You can experiment with that later.
I also use a mid-priced set of extension tubes: metal, with all the camera's contacts properly forwarded to the lens. The price should be around $100 for a set of 3. Extension tubes contain no optical elements whatsoever; they are just tubes.
You will also want to use a tripod. At such a small scale the slightest movement shows – though this can be a cool effect.
Any macro or close-up also needs more light, partly because being very close to an object casts your shadow on it, and partly because of how the optics work.
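The optics part can be put into numbers. A standard approximation says the effective f-number is the set f-number times (1 + magnification), so the closer you get, the more light you lose – a sketch:

```python
import math

f_number = 2.8
for m in (0.2, 0.5, 1.0, 1.5):             # magnification
    n_eff = f_number * (1 + m)             # effective f-number at that scale
    stops_lost = 2 * math.log2(1 + m)      # light loss in stops
    print(f"{m}x: effective f/{n_eff:.1f}, "
          f"about {stops_lost:.1f} stops less light")

# At 1:1 you lose about 2 full stops - one reason macro needs extra light.
```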
So you have a camera with a sufficient lens, extension tubes, good lighting and a hopefully still object. The rest is pretty much experimenting. A wide-open aperture flattens DOF to a millimeter or less, which can produce a nice creamy bokeh effect. Not using a tripod can cause some motion blur, which can add interesting effects.
The angle on the object can cause some cool spatial effect.
Closing the aperture will widen DOF, showing more of the object itself, but will also need more light.
Focusing can be pretty difficult in such a setup. I found it easier to turn off auto-focus. You can actually focus by changing your distance to the object, which is quite useful if you shoot handheld: just move closer until the part you want is in focus, then hit the trigger. On a tripod you can use manual focus.
I hope this helps some of you. As we are still in lockdown for a while, this sort of photography might help to keep you busy, and it’s perfectly safe as you can do it at home.
Here are some of my recent experiments:
Here are the lens and extension tubes I use:
Depth of field, or DOF, describes how much of a scene in a photograph is in focus, as a range of distances from the camera.
Sounds complicated, but it’s actually not that bad.
Let’s assume you want to take a picture of a tree in a landscape, and the tree is, say, 10 meters away from your position.
If you want the foreground, tree and background all in focus, this would be a deep DOF.
If you want to <isolate> the tree from foreground and background, it’s a shallow DOF.
There are of course lots of variables in between.
So, how do you generally control DOF?
For one, the aperture in your lens does that, similar to the iris in your eye (one of the few actual similarities).
The further you open the aperture, the shallower the DOF.
The settings for this are a bit confusing though. A low <F-number> means wide open; a high <F-number> means tighter – and with that, a deeper DOF.
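The numbers run "backwards" because the f-number is the focal length divided by the aperture diameter, and the light gathered scales with the square of that diameter. A quick sketch:

```python
focal_mm = 50.0
for n in (1.4, 2.0, 2.8, 4.0, 5.6, 8.0):       # standard full stops
    diameter = focal_mm / n                     # aperture diameter in mm
    rel_light = (1.4 / n) ** 2                  # light relative to F 1.4
    print(f"F {n}: {diameter:4.1f} mm opening, "
          f"{rel_light:.2f}x the light of F 1.4")

# Each full stop multiplies the F-number by sqrt(2) and halves the light.
```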
Another factor to control DOF is the distance to your subject.
A wide open aperture has less effect the greater the distance between you and the subject is.
For example, you shoot a portrait, your model is 1 meter from your position. You have a <fast> lens, which means the aperture can open very wide, somewhere around F 1.4-3.
If you open it to, say, F 2.8 (the widest my lenses go), the DOF will be extremely shallow. This means the tip of your model’s nose is in focus and all the rest fades into the out-of-focus area. For such short distances you would need to close the aperture more, to F 5 or so.
If your model is some 6-10 meters away, the effect is far less severe. You can open the aperture wider; the model stays in focus, but anything closer or further is out of focus. This <isolation> effect is often used in portrait photography and some landscape setups.
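Those two scenarios can be checked with the standard depth-of-field formulas. A sketch assuming an 85 mm portrait lens and the usual 0.03 mm full-frame circle of confusion (your lens and numbers will differ a bit):

```python
def dof_limits(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus (standard thin-lens formulas)."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm   # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far

for distance_mm in (1000, 6000):               # the 1 m and 6 m cases above
    near, far = dof_limits(85, 2.8, distance_mm)
    print(f"{distance_mm / 1000:.0f} m at F 2.8: "
          f"DOF ~ {(far - near) / 10:.0f} cm")

# ~2 cm at 1 m (just the nose tip), ~80 cm at 6 m (the whole model).
```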
It’s also an interesting effect in macro/close-up photography. In fact, for macro and close-up, DOF can be a serious issue. Due to the physical closeness to the subject, the depth can shrink to a matter of millimeters, which makes focusing rather difficult. There are techniques to overcome such issues, and I’ll write about those later on.
Here is an example of a portrait. Using a long lens, the real physical distance was some 4 meters, so I could open the aperture up to F 2.8 without losing any focus in the model's face.
In the second image, my subject was about one meter away. With the aperture opened to F 2.8, you see the strong effect: the tip of the beak in full focus, everything else out of focus. In this image it works well enough.
The third image is a typical landscape. This setting needs a deep DOF, so the aperture was much tighter, at F 8. All parts of the image are pretty much in focus.
Here is one recent high dynamic range photograph:
And here are the underlying individual images:
I think it’s time to clarify another myth about photography.
As a continuation of my last article, I thought I’d explain a bit more about the differences between eye vision and camera vision, and a bit about <reality> (=perception) itself.
At first glimpse, the human eye and a camera seem to have a lot in common. Both have a lens, an aperture (iris) and a sensor (retina).
But that’s where similarities end.
A camera <sees>, or records, an instant – a status quo of its environment. This instant is a certain span of time, longer or shorter, during which the sensor is exposed to light. All the information is <written> onto the sensor.
The human eye, though, sees a continuous video. It doesn’t record anything, so everything we see is real-time.
This gives the camera the ability to show things you can’t see without a camera. Here are some examples:
Water droplets in mid-air, very short exposure, 1/1000 second or so.
<Milky> water (or waterfalls), longer exposure, 1 or more seconds.
Stars, the Milky Way, star trails, very long exposure, some 15-30 seconds or longer.
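Where do those 15-30 seconds come from? A common rule of thumb, the so-called "500 rule" (a rough guide, not an exact law), divides 500 by the effective focal length to estimate the longest exposure before the stars start trailing:

```python
def max_exposure_s(focal_mm, crop_factor=1.0):
    """'500 rule' of thumb: longest exposure before visible star trails."""
    return 500 / (focal_mm * crop_factor)

for focal in (15, 24, 50):
    print(f"{focal} mm full frame: about {max_exposure_s(focal):.0f} s")

# ~33 s at 15 mm, ~21 s at 24 mm, ~10 s at 50 mm. Expose longer on
# purpose and the stars draw trails instead of points.
```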
Thing is, the human brain can’t really distinguish individual images within very fast image flows. So you see splashing water, but not really individual droplets. The brain also isn’t designed to <collect> image information over a longer period of time, so in a waterfall you also see just splashing water, not the trails of droplets.
Stars in the night sky aren’t very bright. Not to human eyes anyway, because they aren’t very light-sensitive (we aren’t nocturnal creatures). Collecting the light on an electronic sensor lets it accumulate over several seconds, which makes the whole sky brighter.
Camera and eye also focus differently.
The human eye has a focus range of roughly 1-2 degrees. If you hold out your thumb and look at your thumbnail, concentrating on it, you will notice that everything around it is actually blurry. The sharp part is your actual focus window. The rest of the environment is filled in by the brain's memory of it.
A camera, on the other hand, can show the whole range of its vision in perfect focus, unless you flatten the depth of field to mimic human eyesight. I have a fish-eye lens covering 180 degrees diagonal in one image.
Another big difference is dynamic range.
The human eye has a high dynamic range (=HDR) compared to a camera. Dynamic range is the capability to see details in shadows as well as in bright parts of a setting.
A good example is a sunset with a person in the foreground. In such a setting you will see the sunset in all its beautiful colors and are still able to recognize the person in the foreground. This happens thanks to the slim focus range I mentioned above: you look at the person directly and the brain adjusts the background lighting, so you see both – the person and the light show in the background.
A camera, however, will either adjust to the person or to the sky. You might have noticed this if you ever tried to take a photo in such a setting: either the sky looks awesome but the person is just a silhouette, or the person looks good but the sky is blown out (meaning far too bright, even pure white). It depends on where you place the focus point, either on the person or on the background.
To overcome this issue, one can take multiple images with different exposure times, usually three: one underexposed, one normal, one overexposed. Those three images can then be combined into one single image. The result shows the foreground as detailed as the background. The image above shows such a final image, and the three individual images.
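The combining step can be sketched in a few lines with OpenCV, here using Mertens exposure fusion, which needs no exposure-time metadata (the file names are placeholders, and this is one of several possible merge methods, not necessarily the one used for the image above):

```python
import cv2
import numpy as np

# Fuse a 3-shot bracket into one evenly lit image.
bracket = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]
fused = cv2.createMergeMertens().process(bracket)   # float output, roughly 0..1
result = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("merged.jpg", result)
```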
Things like that aren’t <tricks>. What I said about HDR is done by your brain as well; hence you are able to see the way you do. It also does not mean the other techniques I mentioned above are <tricks> or that they aren’t <real>. It only means you use a different type of vision, one your eye can’t reproduce by itself.
Both camera and human eye see <reality>, but they have different ways of seeing it.
Both camera and human eye (or any other known eye) produce not an image but electric signals. In both cases those signals are interpreted (actually edited) to make sense and create a visual image. In a camera the first part of this process happens within the camera, in its operating system, and the second part, if needed, on a computer. In a human, these things happen within the brain, which technically is pretty much a computer as well.
There are a lot more ways of seeing in nature: cats see much better in darkness; insects see a compound image delivered by hundreds of individual eyes; most insects can’t see infra-red light but do see ultraviolet light, so colors look entirely different to them; most bats <see> via ultrasound; snails see only bright and dark.
Even for some humans, <reality> looks different: a colorblind person still sees <reality>, only in different colors, but it is nonetheless real.
You see, reality is a pretty subjective thing. It’s also a purely human concept, not a natural one. It only becomes important through some form of language: without communication, reality as we define it has no meaning whatsoever. It’s utterly irrelevant for a fly to know how a cat sees, or the other way around.
Humans, on the other hand, have the ability to talk and to share their individual perceptions of their environment. One way of doing this is via art – visual, musical, verbal, etc. Because every individual experiences their environment differently, there are always some who like a given piece of art and some who don’t.
However, this has no effect whatsoever on the way the actual creator experienced the reality they <reproduced>, or that inspired them to create a certain piece of art. That makes the job of an art critic pretty redundant.
It also has no effect on the underlying reality itself.
So saying something is not <real> is a pretty bold statement, based on the single opinion of a single individual who wasn't even present – nothing more and nothing less.
The same is true for any piece of art. You can say you like or dislike it, but stating that something is or isn’t art is just an individual opinion. A single opinion, however, can’t be a universal fact.
... or "developing", in the older days!
Here are two examples: an unedited photo from the past and an unedited photo from our digital age (yeah, that's right, the second "image" is part of the program's textual interpretation of a digital raw photo, only one of several hundred pages per photo, by the way... that's what a digital image really looks like):
This is a subject that regularly pops up on my Facebook pages, with people commenting things like "ah, that's a filter", etc.
No, it's not, and I'll try to explain why in this article.
First, a fact most people are not aware of: every photograph ever taken has been "edited" (=developed). In the old days, a camera exposed a glass or tin sheet covered in light-sensitive chemicals to light through a lens; later a piece of film, and nowadays an electronic sensor.
In ALL cases you get a photo negative, though these days it's a so-called "raw image". Any such negative has to be transferred to another medium (usually paper), and during this process the images are fine-tuned to correct contrast, exposure and other properties. All pre-digital photographs you might have seen in old magazines were subjected to this process.
A similar process happens in your phone or point-and-shoot camera. The sensor receives light signals and translates those into computer-readable language. In those "lesser" cameras, this "raw image" is "adjusted" to the settings you chose on the camera during the shoot (most people use the "auto mode" for this). The "raw image data" is discarded when the automatic editing process saves the final image. Yes, this is already an actual editing process, even though nobody notices it. The "auto mode" is just a general set of settings (and adjustments), implemented by the programmer who wrote the operating system for your camera or phone – and any digital camera or phone runs on an operating system.
Essentially it's pretty close to a Polaroid image: sometimes good, sometimes bad, but ALWAYS an (automated) edit, regardless of whether you hit any additional button or not!
More sophisticated cameras can shoot and save in raw mode, which produces an image similar to the old negatives, though without inverted colors. It contains all possible image information, regardless of which presets (auto mode, manual mode, white balance, ISO, contrast data, etc.) you selected beforehand. Hence those raw images can easily reach 20, 30, 50 megabytes or even more, compared to a final JPEG of 5-10 megabytes. I never shoot auto mode, only manual mode, to have full control of the actual shot.
Those raw images, due to the mentioned properties and presets, can look dull, too bright, too dark, etc., and need to be processed (=edited=developed), like a negative in the old days.
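For the curious, here is roughly what that development step looks like when done explicitly in code rather than by the camera's hidden auto mode. This uses the rawpy library as one example; the file name and settings are placeholders, not my actual workflow:

```python
import rawpy
import imageio

# "Develop" a raw file with explicit choices instead of baked-in auto settings.
with rawpy.imread("photo.CR2") as raw:
    rgb = raw.postprocess(
        use_camera_wb=True,    # keep the white balance chosen at the shoot
        no_auto_bright=True,   # don't let the library auto-brighten
        output_bps=8,          # 8 bits per channel for a normal JPEG/PNG
    )
imageio.imwrite("developed.jpg", rgb)
```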
A so-called (modern-day) filter is nothing more than a program preset applied to the whole image. On your phone, such a filter usually reduces the overall quality of the image dramatically. This mostly goes unnoticed, as those images are usually viewed only on your phone or on social media, where they are too small for the flaws to be seen. They become pretty obvious, though, when printed beyond A4 format. Images taken with proper gear can be printed much bigger – I did billboards up to 6 meters wide...
So-called filters are, technically, editing, but a photographer would usually choose a more subtle approach. We don't just squash a filter on top of an image; we work out details, contrasts, saturation etc. in the parts of an image that need adjustment.
Nothing you can do in an editing program is new. All the techniques and adjustments originate in the darkroom of the old days – this is also a widely ignored fact.
Below you see a raw file right out of the camera, and the edited version of the same image.
You see, the raw is a bit too dark overall, and it lacks contrast.
This was taken in the afternoon on a bright day, so the real light conditions were rather harsh. Hence I slightly underexposed during the shoot, to avoid the strong sun glare, but had to restore brightness in some parts of the image.
Cameras in general "see" differently from the human eye. A camera is bound to its settings across the whole picture; the human eye can adjust to the circumstances much faster.
The following image was one of my very first tries in really creating something new from some existing images. I have come a long way since, but this one might be good to show an actual workflow.
This space isn't really big enough to explain everything in detail, but I'll try to give you a bit of an insight. This image is composed of more than 100 individual layers.
The first two images show the initial images I started with. Both were taken in Trentham on the same day in early 2017. The first one was a simple portrait idea, the second a location shot to show my models on the day.
I thought, why not combine them? The result is the third image, with a first clean-up in image 4.
Then I thought something was missing, started browsing the web, and found a two-dimensional (right-hand side) wing on a free download page. At the moment I'm learning to make my own 3D digital props, but I'm not that far yet.
So I had one wing, and mirrored it on another layer to get a left wing.
Each of those wings needed to be converted into a 3D layer within Photoshop, to change its position and adjust the perspective, plus masking around the model. For this, the 3D layers need to be converted into smart objects. The result is image 5.
The claws at the end of the wings didn't look right to me, so I created some of my own, one layer each so they could be adjusted individually – that makes 8 additional layers in image 6.
I wanted to add some diffused, foggy background lighting, which you see in image 7.
The matching shadows are added in image 8.
In image 9 I added some additional lighting and sun rays, to get a little more foggy feel to the image.
The final image, #10, is mainly cleaned up around the edges of the model, with some motion added to the wings, some more fog, as well as final adjustments to general lighting, shadows, depth of field, a general clean-up and straightening of the whole thing.
As you see, Photoshop isn't just a tool to throw some filters across a whole image, like most phone apps do. It's about layers, combining them, applying filters to parts of layers, grouping such layers, merging and re-filtering the result... The list, and the possibilities, are endless. There are countless ways just to darken or brighten a single image. Pretty much all those techniques come from the darkroom of the old days; the only difference in digital editing is the speed, sometimes... This image, from shooting to the final version, took me well over 20 hours, excluding the actual shooting.
Thank you Lia, for the shooting!
Like many of my colleagues, I use a tablet for editing my photos.
By "tablet" I don't mean something like an iPad. Mine is a Wacom: a black surface and a stylus replacing the mouse (or rather complementing it, as both work). It's more like the touch pad on a notebook, but far bigger and more precise – 50 cm wide, as it spans 4 of my 5 screens.
My computer runs Windows 10, with an i7 CPU, 32 GB of RAM and the latest Wacom drivers. The tablet can be used via wireless as well as a tethered connection.
And that's where the trouble starts: the wireless. I can't tell if this happens to Mac users too, but combined with a Windows operating system, the wireless connection works rather patchily. That means the stylus actions sometimes freeze, sometimes for a fraction of a second (which is annoying), sometimes for seconds or minutes (which makes work impossible).
Coming from an IT background, I thought of driver issues etc., but found the culprit to be the wireless module itself. It obviously (even according to Wacom) picks up interference from Bluetooth as well as WiFi routers, phones and the other wireless devices everyone has around. The USB port the wireless receiver is connected to doesn't seem to be an issue: I have 6 USB buses on my mainboard, but the problems occur even if I dedicate the fastest one to the receiver alone.
This isn't great, but a good workaround is to just use it tethered. Had I known this beforehand, I could have saved the 50-odd dollars the wireless option cost on top of the tablet.
Another thing I noticed is a frequent reset of saved settings – this also seems to have stopped since I started using it tethered.
The touch sensitivity of the stylus depends on the Microsoft Ink feature, but that has some annoying side effects when you move the stylus to other parts of the screen. Sliders in Photoshop react oddly (or not at all), as the touch intensity apparently isn't enough to "hold" the slider – things like that. The easy workaround is to switch off Microsoft Ink in the Wacom settings and do without touch sensitivity. It would be a great feature, but in this case Microsoft is the showstopper.
The touch pad feature is cool, generally, but if your Wacom is big and you naturally rest your hands on it often, it gets confused between your hands and the stylus. So I switched that off too.
The buttons on the left would be helpful, but they are in the way more often than not, and their programmed settings get deleted frequently too – here tethered use doesn't help.
All in all, I wouldn't give up my pad and stylus, but I find it disappointing that so many great features are made useless because Wacom and Microsoft obviously refuse to talk to each other. And no, I won't switch to a Mac! They might be great, but not for my type of use.
The last thing is another workaround, also for people with a larger tablet.
Human skin tends to be a bit sticky sometimes, which makes it difficult to draw straight lines or nice smooth curves with a stylus when your hand needs to slide over the tablet surface. I found some white gloves of the kind often used in museums or archives (to protect paper in particular). Gloves make your fingers bulky though, so I just cut the thumb, index and middle fingers off the glove I wear on my stylus hand. This makes things much easier and lines smoother, and the silly look might have some entertainment value.
I hope this little article is helpful for some of you, now have fun taking photos!
Hi there, please have a look at the WPS Impact magazine (here is the link to the April 2019 issue).
There are some interesting articles by fellow photographers, with more by me to come.
On page 35 of this issue you'll find some guidelines about street photography, written by myself.
HDR, or High Dynamic Range, is often associated with bracketing, which means shooting the same scene at several (usually 3-5) exposures and then combining them in post-processing. This is one way, but not the only way, to overcome the disadvantages of digital photography.
First I’ll try to explain what I actually mean by that. A camera, contrary to common belief, “sees” very different compared to a human eye.
The eye has a focus range of about 1-2 degrees; hence you see only a few letters of your newspaper really sharply and move your eyes while reading – a camera can take an in-focus shot of the whole newspaper page.
The eye does not capture a single moment, but records a continuous “video” – a camera takes a picture, “frozen” in time. This can be a very short exposure to freeze a water droplet in mid-air, a longer exposure to make a choppy ocean look like fog, or a very long exposure to catch enough light to make the Milky Way visible in the night sky.
The eye doesn’t see a wide scene, but the brain fills in the gaps with information about the environment, to get a wider picture of the actual surroundings – a camera, depending on lenses, can capture up to 180 degrees or more in one shot.
The eye has a lens and aperture – a camera has those too, which is where the similarities end.
The eye, in combination with the brain, can recover the shadows in a foreground while looking towards a sunset, and darken the sky in the same scene, so you see the scene in even light; this is called dynamic range – the camera doesn’t do an equally good job in the same situation.
What the human brain does in such a situation is pretty much what a photographer does in post processing, also called editing.
Your brain edits a scene as it sees it. So the purist call for "unedited images" is an actual myth. There is no such thing as an unedited image, not even the stuff we see with our own eyes. Those are simple facts, not just an opinion.
What we call “reality” is only a one-sided perception of our surroundings. Every animal sees things differently. Insects see images made up of hundreds of single images seen by their hundreds of eyes, and they can’t see infra-red light. But they do see ultraviolet light, so their color perception is pretty different from ours. A cat has superior night sight, a snail sees only light and shadow, and so forth. A monochrome photograph is actually something pretty abstract, due to the lack of colors.
You see, reality is a very complex and a very individual thing. We can't even imagine how any other being perceives a scene. There are even differences among us humans, some are color-blind, others need glasses, etc.
This article is about dynamic range, which is one small part of the subjects mentioned above.
Dynamic range means the ability to see even a very strongly contrasted scene in some sort of even light.
For example, you are on a beach with someone standing in front of the setting sun. You will still recognize this person, but you will also see the lit-up sky in all its colorful beauty. Then you take out your phone or some other camera and take a picture. Looking at the result, you will notice that either the person is reduced to a silhouette while the sky looks perfect, or you see the person but the sky is blown out (too bright, nearly white).
This is because a camera is capable of "seeing" a lower dynamic range than our eyes – it comes down to those differences in “seeing”. Your eyes focus only on what they really “point” at; the rest is filled in with side information by your brain, “edited” as you go. The brain can fill the gaps because your eyes constantly record the scene and see the sky at all different angles and lightings.
The camera needs to cover a much bigger area, and its “brain”, actually its operating system, tries to even out the lights and shadows. Even though its internal computer is pretty fast, it’s much harder and more computing-intensive to do this for such a big area (remember, the eye only focuses on about 1-2 degrees). In the old days of photo film or glass plates this was pretty much the same: you had one type of film in your camera, but such a contrast-rich scene might actually need 2 or more different types of film, more or less light-sensitive for different areas of the scene.
To overcome this, we do the same as our brain does. We edit.
Back in the day, the photographer could recover a blown-out sky with filters in the darkroom. In digital photography we use software.
Sometimes it’s enough to just hit the trigger, and the software on your camera or phone does this basic edit without you even knowing it (yes, every form of digital camera does this). But this goes only so far and is limited by the levels the programmer implemented.
Often we need to enhance the image more, to get closer to what we actually saw when we were shooting.
In older days we had photo negatives. In our digital age we have cameras which can shoot in “RAW”.
Negatives and RAW images hold a maximum of information about the scene photographed. The RAW image has one additional advantage: it shows pretty much the image as it is, with the colors not reversed as in a negative. But a RAW photo taken for a one-frame HDR image can look dull and flat – and it actually has to, if you want to keep all the details. A RAW image, as the name suggests, isn’t necessarily meant to reflect our reality, but to give us as much information about our reality as possible (similar to a negative, which also doesn't reflect what we really saw).
The image shown below is meant to point out what my camera saw, and what my own reality looked like (as close as I could get to it).
For this I chose a more subtle contrast, not something as strong as a sunset, just to show the differences in texture etc.
Processing such images, by the way, is not just smashing a filter over the whole thing, as you would do in your phone apps. As there are different parts in an image – some brighter, some darker, with different color casts etc. – each area needs attention and each area needs to be treated differently. That’s exactly what our brain does on the go when we are looking at a certain scene.
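As a tiny sketch of that difference: a global filter touches every pixel the same way, while a local edit works through a mask. Here the shadows are lifted and the highlights left alone – a toy example with placeholder file names, not my actual workflow, which uses far more careful masks:

```python
import cv2
import numpy as np

img = cv2.imread("flat_raw_export.jpg").astype(np.float32) / 255.0
luminance = img.mean(axis=2, keepdims=True)         # rough per-pixel brightness
shadow_mask = np.clip(1.0 - 2.0 * luminance, 0, 1)  # 1 in deep shadows, 0 elsewhere
lifted = img + 0.25 * shadow_mask                   # brighten only the shadows
cv2.imwrite("edited.jpg", (np.clip(lifted, 0, 1) * 255).astype(np.uint8))
```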
I can’t show the whole process in one article, so a before-after comparison will have to do.
The first image shows the scene as I saw it out in a storm on the South Coast of Wellington, the second image shows what the camera “saw”, a screenshot of the RAW image, shot as “flat” as possible to get all the detail information about the texture and lighting.
How a panorama image is made – and I'm not talking about in-camera or phone apps. Those apps work on a different level, are less precise and less detailed, and often cause surprising effects, like a stretched dog that was running through the shot.
A panorama image is basically a series of photos taken at varying angles, usually along a horizontal line. However, there are vertical panoramas too, for example to show the Milky Way.
The easiest form is just one row of images, though my biggest panoramas consist of up to 5 rows and more than 100 single shots. It really depends on the lens you use and the final angle you want to achieve. More rows also increase the level of detail dramatically, as well as the final resolution of the image.
Ideally you will use a tripod with a panning head, so the start point of each photo and the vertical angle stay constant. More practiced photographers can do it handheld.
All camera settings should be manual and fixed, including focus. So you decide where your main subject will be and focus on it before you start shooting. Auto mode would change the light in each shot, giving you a bunch of brighter and darker images and an inconsistent overall lighting.
The single shots need to overlap each other by at least 30%; I prefer about 60%. The larger overlap helps if one of the shots isn't right (out of focus, etc.), as it can then simply be left out.
Personally I prefer a 24 mm (wide-angle) lens for such images (on a full-frame camera). Most kit lenses are zooms covering a similar wide-angle focal length, depending on your camera.
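Focal length and overlap together decide how many frames a row needs. A quick estimate, assuming a 36 mm wide full-frame sensor held horizontally (real counts vary a bit with how you frame the ends):

```python
import math

def shots_needed(focal_mm, pano_deg, overlap=0.6, sensor_width_mm=36.0):
    """Frames for one panorama row, given lens field of view and overlap."""
    hfov = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
    step = hfov * (1 - overlap)          # fresh angle gained per frame
    return max(1, math.ceil((pano_deg - hfov) / step) + 1)

print(shots_needed(24, 180))                 # ~5 frames at 60% overlap
print(shots_needed(24, 180, overlap=0.3))    # ~4 frames at the minimum 30%
```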
Those photos are then loaded into panorama software (e.g. Photoshop), which "stitches" them together. The software removes most distortions and adjusts the edges, lighting etc. In photo-film times this had to be done in the darkroom, by aligning the lenses of the projectors and so on. Again, everything you can do in Photoshop has its origin in the darkroom.
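Outside Photoshop, the same stitching step can also be done programmatically. A minimal sketch with OpenCV's high-level Stitcher, as a stand-in rather than my actual tool (file names are placeholders):

```python
import cv2

frames = [cv2.imread(f"pano_{i:02d}.jpg") for i in range(1, 6)]
status, pano = cv2.Stitcher_create(cv2.Stitcher_PANORAMA).stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)    # distortion-corrected, blended result
else:
    print("stitching failed, status code:", status)
```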
Once stitched, you will have to fine-tune the image: removing steps in the horizon (a very common issue), removing a bird that is half in one frame and not showing at all in the next, etc., as well as adjusting lighting, white balance and so forth. This, too, used to be done in the darkroom, using filters, different sorts of paper etc.
Here is an example of a low-light, single-row panorama; this one is of course available as a print:
And here is an example of how it would look "stitched manually", without panorama software correcting distortions etc. In this version I simply overlaid the original photos without changing the initial shots. You can also see doubled edges on buildings, another result of distortion:
Here are the single images used in the stitching process. You will notice image #7 is out of focus, but because I overlap my single shots by more than the recommended 30%, I could simply leave that one out without losing any detail.