The following image was one of my very first attempts at really creating something new from existing images. I have come a long way since, but this one might be good to show an actual workflow.
This space isn't really big enough to explain everything in detail, but I'll try to give you a bit of an insight. This image is composed of more than 100 single layers.
The first two images show the initial images I started with. Both were taken in Trentham, on the same day in early 2017. The first one was a simple portrait idea, the second one a location shot to show my models on the day.
I thought, why not combine them? The result is the third image, with a first clean-up in image 4.
Then I thought something was missing, started browsing the web, and found a two-dimensional right-hand wing on a free download page. At the moment I'm learning to make my own 3D digital props, but I'm not that far yet.
So I had one wing, and mirrored it on another layer to get a left wing.
Each of those wings needed to be converted into a 3D layer within Photoshop, to change its position, adjust the perspective, and mask around the model. For this, the 3D layers need to be converted into Smart Objects. The result is image 5.
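As a side note, the mirroring step is the simplest of these transformations; outside Photoshop it can even be done in a few lines of code. Here is a minimal sketch in Python (the pixel values are made-up placeholders, purely to illustrate the idea) that flips an image grid horizontally:

```python
# Mirror a tiny "wing" image horizontally to get the left wing from the right.
# The pixels here are just placeholder 0/1 values; a real image would be an
# RGB array, but the geometry is identical.
right_wing = [
    [0, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]

# Flipping horizontally means reversing each row of pixels.
left_wing = [row[::-1] for row in right_wing]

for row in left_wing:
    print(row)
```

Photoshop's Flip Horizontal does exactly this, just on a layer with millions of pixels instead of nine.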
The claws at the ends of the wings didn't look right to me, so I created some of my own, one layer each so they could be adjusted individually. That makes 8 additional layers in image 6.
I wanted to add some diffused, foggy background lighting, which you see in image 7.
The matching shadows are added in image 8.
In image 9 I added some additional lighting and sun rays, to give the image a little more of a foggy feel.
In the final image, #10, I mainly cleaned up around the edges of the model, added some motion to the wings and some more fog, and made final adjustments to the general lighting, shadows and depth of field, followed by a general clean-up and straightening of the whole thing.
As you see, Photoshop isn't just a tool to throw some filters across a whole image, like most phone apps do. It's about layers: combining them, applying filters to parts of layers, grouping such layers, merging and re-filtering the result... The list of possibilities is endless. There are countless ways just to darken or brighten one single image. Pretty much all those techniques come from the darkroom of the old days. The only difference in digital editing is the speed, sometimes... This image, from shooting to the final version, took me well over 20 hours, excluding the actual shoot.
Thank you Lia, for the shooting!
Like many of my colleagues, I use a tablet for editing my photos.
By "tablet" I don't mean something like an iPad or similar. Mine is a Wacom: a black surface and a stylus that replaces the mouse (or rather complements it, as both work). It's more like the touch pad on a notebook, but far bigger and more precise; at 50 cm wide it spans 4 of my 5 screens.
My computer runs Windows 10 with an i7 CPU, 32 GB of RAM and the latest Wacom drivers. The tablet can be used via a wireless as well as a tethered connection.
And that's where the trouble starts: the wireless. I can't tell if this happens to Mac users too, but combined with a Windows operating system, the wireless connection works rather patchily. That means the stylus actions sometimes freeze, sometimes for a fraction of a second (which is annoying), sometimes for seconds or minutes (which makes work impossible).
Coming from an IT background, I first suspected driver issues etc., but found the culprit to be the wireless module itself. It obviously (even according to Wacom) picks up interference from Bluetooth as well as WiFi routers, phones and the other wireless devices everyone has around. The USB port the wireless receiver is connected to doesn't seem to be an issue. I have 6 USB buses on my mainboard, but those problems occur even if I dedicate the fastest one to the receiver alone.
This isn't great, but a good workaround is to just use it tethered. Had I known this beforehand, I could have saved the 50-odd dollars the wireless option costs on top of the tablet.
Another thing I noticed is a frequent reset of saved settings; this also seems to have stopped since I started using it tethered.
The pressure sensitivity of the stylus depends on the Windows Ink feature, but that has some annoying side effects when you move the stylus to other parts of the screen. Sliders in Photoshop react oddly (or not at all), as the touch intensity obviously isn't enough to "hold" the slider, things like that. The easy workaround: switch off Windows Ink in the Wacom settings and do without pressure sensitivity. It would be a great feature, but in this case Microsoft is the showstopper.
The touch pad feature is generally cool, but if your Wacom is big and you naturally rest your hands on it often, it gets confused between your hands and the stylus. So I switched that off too.
The buttons on the left would be helpful, but they are in the way more often than not, and their programmed settings get deleted frequently too; here the tethered use doesn't help.
All in all, I wouldn't give up my pad and stylus, but I find it disappointing that so many great features are made useless because Wacom and Microsoft obviously refuse to talk to each other. And no, I won't switch to a Mac! They might be great, but not for my type of use.
The last thing is another workaround, also for people with a larger tablet.
Human skin tends to be a bit sticky sometimes, which makes it difficult to draw straight lines or nice smooth curves with a stylus when your hand needs to slide over the tablet surface. I found some white gloves, often used in museums or archives (to protect paper in particular). Gloves make your fingers bulky though, so I just cut the thumb, index and middle finger off the glove I wear on my stylus hand. This makes things much easier and lines much smoother, and the silly look might even have some entertainment value.
I hope this little article is helpful for some of you, now have fun taking photos!
Hi there, please have a look at the WPS Impact magazine (here the link to the April 2019 issue).
There are some interesting articles by fellow photographers, and more by me to come.
On page 35 of this issue you'll find some guidelines about street photography, written by me.
HDR, High Dynamic Range, is often associated with bracketing, which means shooting the same scene in several (usually 3-5) exposures and then combining them in post processing. This is one way, but not the only one, to overcome the disadvantages of digital photography.
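To illustrate the core idea behind combining bracketed exposures, here is a deliberately simplified sketch in Python (not how any particular HDR software actually works, and the pixel values are made up): each output pixel is taken mostly from the exposure in which that pixel is best exposed, i.e. closest to mid-grey.

```python
# Simplified merge of three bracketed exposures (pixel values 0-255).
# Each output pixel is a weighted average across the exposures, where an
# exposure gets more weight the closer its pixel sits to mid-grey (128),
# so well-exposed pixels dominate the result.

def merge_exposures(exposures):
    merged = []
    for pixels in zip(*exposures):  # the same pixel across all exposures
        weights = [1.0 / (1.0 + abs(p - 128)) for p in pixels]
        total = sum(weights)
        merged.append(round(sum(p * w for p, w in zip(pixels, weights)) / total))
    return merged

# One row of pixels from a dark, a normal and a bright exposure of one scene:
under  = [10, 40, 90, 120]
normal = [60, 128, 200, 250]
over   = [140, 220, 255, 255]

merged = merge_exposures([under, normal, over])
print(merged)
```

Real HDR software does far more (alignment, ghost removal, tone mapping), but the principle of favoring well-exposed pixels is the same.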
First I’ll try to explain what I actually mean by that. A camera, contrary to common belief, “sees” very differently from a human eye.
The eye has a focus range of about 1-2 degrees, hence you see only a few letters of your newspaper really sharply and move your eyes while reading – a camera can take an in-focus shot of the whole newspaper page.
The eye does not capture a single moment, but records a continuous “video” – a camera takes a picture, “frozen” in time. This can be a very short exposure to freeze a water droplet in mid-air, or a longer exposure to make a choppy ocean look like fog, or a very long moment to catch enough light to make the Milky-Way visible in the night sky.
The eye doesn’t see a wide scene, but the brain fills in the gaps with information about the environment, to get a wider picture of the actual surroundings – a camera, depending on lenses, can capture up to 180 degrees or more in one shot.
The eye has a lens and aperture – a camera has those too, which is where the similarities end.
The eye, in combination with the brain, can recover the shadows in the foreground when looking towards a sunset, and darken the sky in the same scene, so you see the scene in even light – this ability is called dynamic range. The camera doesn’t do an equally good job in the same situation.
What the human brain does in such a situation is pretty much what a photographer does in post processing, also called editing.
Your brain edits a scene as it sees it. So the purists' call for "unedited images" is an actual myth. There is no such thing as an unedited image, not even the things we see with our own eyes. Those are simple facts, not just an opinion.
What we call “reality” is only a one-sided perception of our surroundings. Every animal sees things differently. Insects see images made up of hundreds of single images from their compound eyes, and while they can’t see infra-red light, they do see ultraviolet light, so their color perception is pretty different from ours. A cat has superior night sight, a snail sees only light and shadow, and so forth. A monochrome photograph, due to its lack of colors, is actually something pretty abstract.
You see, reality is a very complex and a very individual thing. We can't even imagine how any other being perceives a scene. There are even differences among us humans, some are color-blind, others need glasses, etc.
This article is about dynamic range, which is one small part of the subjects mentioned above.
Dynamic range means the ability to see even a very strongly contrasted scene in some sort of even light.
For example, you are on a beach with someone who is standing in front of the setting sun. You will still recognize this person, but you will also see the lit up sky in all its colorful beauty. Then you take out your phone or some other camera and take a picture. Looking at the result, you will notice either the person is reduced to a silhouette but the sky is looking perfect, or you see the person, but the sky is blown out (too bright, nearly white).
This is due to the lower dynamic range a camera is capable of “seeing”, compared to our eyes – it comes back to those differences in “seeing”. Your eyes focus only on what they really “point” at; the rest is filled in with side information by your brain, “edited” as you go. The brain can fill the gaps because your eyes constantly record the scene and see the sky at all different angles and lighting.
The camera needs to cover a much bigger area, and its “brain”, actually its operating system, tries to even out the lights and shadows. Even though its internal computer is pretty fast, it’s much harder and more computationally intensive to do this for such a much bigger area (remember, the eye only focuses on about 1-2 degrees). In the old days of photo film or glass plates, this was pretty much the same. You had one type of film in your camera, but such a contrast-rich scene might actually need 2 or more different types of film, more or less light-sensitive for different areas of the scene.
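To put rough numbers on dynamic range: it is usually measured in stops, where each stop means a doubling of light. A tiny sketch in Python (the ratios below are hypothetical round numbers purely for illustration; real values vary by sensor, scene and how you count):

```python
import math

# Dynamic range in stops = log2(brightest / darkest luminance distinguishable).
def stops(brightest, darkest):
    return math.log2(brightest / darkest)

# Hypothetical, illustrative brightness ratios:
camera_ratio = 2 ** 12   # a sensor resolving roughly 12 stops in one shot
eye_ratio    = 2 ** 20   # eye plus brain, adapting across the scene

print(stops(camera_ratio, 1))  # 12 stops
print(stops(eye_ratio, 1))     # 20 stops
```

The gap between those two numbers is exactly what bracketing and editing try to bridge.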
To overcome this, we do the same as our brain does. We edit.
Back in the day the photographer could recover the blown out sky with filters in the dark-room. In digital photography we use software.
Sometimes it’s enough to just press the shutter, and the software in your camera or phone does this basic edit without you even knowing it (yes, every form of digital camera does this). But this only goes so far and is limited by the levels the programmer implemented.
Often we need to enhance the image more, to get closer to what we actually saw when we were shooting.
In older days we had photo negatives. In our digital age we have cameras which can shoot in “RAW”.
Negatives and RAW images hold a maximum of information about the scene photographed. The RAW image has one additional advantage: it shows pretty much the image as it is, with the colors not reversed as in a negative. But a RAW photo taken for a single-frame HDR image can look dull and flat – and it actually has to, if you want to keep all the details. A RAW image, as the name suggests, isn’t necessarily meant to reflect our reality, but to give us as much information about our reality as possible (similar to a negative, which also doesn’t reflect what we really saw).
The image shown below is meant to point out what my camera saw, and what my own reality looked like (as close as I could get to it).
For this I chose a more subtle contrast, not something as strong as a sunset, just to show the differences in texture etc.
Processing such images, by the way, is not just smashing a filter over the whole thing, like you would do on your phone apps. As there are different parts in an image, some brighter or darker, different color casts etc, each area needs attention and each area needs to be treated differently – that’s exactly the same as what our brain does on the go when we are looking at a certain scene.
I can’t show the whole process in one article, so a before-after comparison will have to do.
The first image shows the scene as I saw it out in a storm on the South Coast of Wellington, the second image shows what the camera “saw”, a screenshot of the RAW image, shot as “flat” as possible to get all the detail information about the texture and lighting.
How is a panorama image done? I'm not talking about in-camera or phone apps here. Those apps work on a different level; they are less precise and less detailed, and often cause surprising effects, like a stretched dog that was running through the shot.
A panorama image is basically a series of photos, taken in varying angles usually on a horizontal line. However, there are vertical panoramas too, for example to show the Milky-Way.
The easiest form is just one row of images, though my biggest panoramas consist of up to 5 rows and more than 100 single shots. It really depends on the lens you use and the final angle you want to achieve. More rows also increase the level of detail dramatically, as well as the final resolution of the image.
Ideally you will use a tripod with a panning head, so the start point of each photo and the vertical angle stay constant. More practiced photographers can do it handheld.
All camera settings, including focus, should be manual and fixed. So you decide where your main subject will be and focus on it before you start shooting. Auto mode would change the exposure in each shot, giving you a bunch of brighter and darker images and resulting in inconsistent overall lighting.
The single shots need to overlap each other by at least 30%; I prefer an overlap of about 60%. The larger overlap helps if one of the shots isn't right (out of focus etc.), as it can then simply be ignored.
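The arithmetic behind overlap and shot count is simple: each frame after the first only adds the non-overlapping part of its field of view. A small sketch in Python (the 73-degree horizontal field of view used below is roughly that of a 24 mm lens on a full frame camera; the numbers are illustrative):

```python
import math

def shots_needed(target_angle, fov, overlap):
    """Frames needed to cover target_angle degrees, given each frame's
    horizontal field of view (fov, degrees) and the fractional overlap
    between neighboring frames (e.g. 0.30 for 30%)."""
    if target_angle <= fov:
        return 1
    step = fov * (1 - overlap)  # new angle gained per additional frame
    return math.ceil((target_angle - fov) / step) + 1

# A 180-degree panorama with a ~73-degree lens (about 24 mm on full frame):
print(shots_needed(180, 73, 0.30))  # with 30% overlap: 4 shots
print(shots_needed(180, 73, 0.60))  # with 60% overlap: 5 shots
```

As you can see, the bigger overlap costs only one extra frame here, which is a cheap price for being able to throw away a bad shot.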
Personally I prefer a 24 mm (wide-angle) lens for such images (on a full frame camera). Most kit lenses are zoom lenses covering a similar wide-angle focal length, depending on your camera.
Those photos are then loaded into panorama software which "stitches" them together (e.g. Photoshop). The software removes most distortions and adjusts the edges, lighting etc. In photo-film times this had to be done in the darkroom, by aligning the lenses of the projectors and so on. Indeed, everything you can do in Photoshop has its origin in a darkroom.
Once stitched, you will have to fine-tune the image: removing steps in the horizon (a very common issue), removing a bird that is half in one frame and not showing at all in the next, etc., as well as adjusting lighting, white balance and so forth. This too used to be done in the darkroom, using filters, different sorts of paper etc.
Here's an example of a low-light single-row panorama; this one is of course available as a print:
And here's an example of how it would look if "stitched manually", without panorama software correcting distortions etc. In this version I simply overlaid the original photos, without changing the initial shots. You can also see some doubled edges on buildings in this version, another result of distortion:
Here are the single images used in the stitching process. You will notice image #7 is out of focus, but because I overlap my single shots for panoramas by more than the recommended 30%, I could just leave this one out without losing any detail.