Friday, February 24, 2012

Exposure - How to Get it Right Most of the Time

With automatic cameras and their wonderful exposure setting systems it is not hard to get a good picture under normal circumstances. The ease with which even the simplest, least expensive point and shoot cameras can take a reasonable picture is astonishing. These little cameras are amazingly sophisticated, especially when you consider the low prices.

But it is when the not so normal circumstance presents itself that many newer photographers are at a loss. Strong side light, backlight, very bright scenes, low light action shots, water reflecting bright highlights, sunrises/sunsets, stage performances - these are just a few scenarios that can be challenging for a photographer who does not have a firm grasp of how to interpret the conditions and set the camera exposure accordingly. Camera manuals are of little help, since they are written for the non-technical user and for "average" lighting situations. Unless you take the initiative to investigate on your own how exposure works, you are likely to be in the "dark" as far as how it all comes together.

Back in the day, before cameras had built-in metering systems, a photographer would use a printed "exposure calculator" like one of the ones shown in this link http://www.mathsinstruments.me.uk/page67.html - or they would wing it, estimating the settings from a printed guide that relied on "rules of thumb" to arrive at a close approximation of a good exposure. Kodak used to include an exposure guide in the box with each roll of film that looked like this:

Believe it or not, following these guides resulted in pretty decent exposures. But for really accurate results in challenging light, pros and serious amateurs would turn to electronic light meters to measure light and translate the measurements into camera settings.

It was not until the early 60s (1960s, that is) that a Japanese camera manufacturer by the name of  Topcon introduced a single lens reflex camera with a through the lens metering system. Up until then some of the fancier cameras were equipped with external light meters, some of which were mechanically coupled to the shutter speed and aperture setting mechanisms. But the meters were not very sensitive to the extremes of black and white - and it was difficult to measure reflected light accurately. Cameras with interchangeable lenses presented another challenge, since the reflected light measured from a wide angle was not necessarily the same as the light from a narrow telephoto shot given the meter's fixed angle of view.

At the time many light meters were like the one pictured at the right, set up to measure the light  falling on a subject rather than the light reflected by it. This type of metering is called Incident Metering. The hemispherical piece on the top of the meter - the Lumisphere - would capture the light and present it to the meter sensor as having the same luminance as an 18% gray card. This was actually pretty clever, since the reflectances of the elements in the scene could not affect the reading. This is important as the meter and its scales were calibrated for 18% reflectance to render it as middle gray. So taking a reflected reading of an 18% gray card and an incident reading of the light falling on that card in the same setting would result in exactly the same exposure recommendations.

As the technology improved, reflected light meters became more accurate and sensitive. A German company named Gossen engineered a series of extremely sensitive reflected light meters that had a little Lumisphere so that you could still take incident readings. They were somewhat modular, with attachments that you could add to measure light in a narrower view, through a microscope, etc. Later models included a flash option.

Gossen Luna Pro Incident/Reflective Light Meter


Lumisphere in place for incident reading


Lumisphere moved aside, exposing sensor for reflected readings

Today, nearly all modern portable cameras use some form of reflected light metering that measures the light coming through the lens and falling on the digital imaging sensor or, in the case of a film camera, the film plane. Professionals working in large format film photography with natural light often rely on a handheld meter like the one above, or on something like the Sekonic Digital Master L-758DR Light Meter pictured below. It can accurately pinpoint and measure a small specific element in a scene using a very narrow angle of view, usually 1 degree, it can function as an incident meter, and it can be triggered by a flash system, so it can perform incident readings of flash lighting. And it does all this over a range of brightness far greater than what any digital camera can measure, with an accuracy of 0.1 f stop.

Using a light meter required a bit of thought in order to get good results - and in most situations it didn't much matter whether you used incident or reflected metering. With incident readings, 95% of the time you could use the meter's recommendation without any exposure compensation and get a good image. With reflected readings, you could measure any part of the scene and, with experience, decide how bright you wanted the metered area to appear in your image, then compensate appropriately. The incident reading was more foolproof, while the reflected reading required more experience but gave you more control.

Consider the following example of a picture of a pair of cats, one white and one black.

If you were to measure the reflected light - either using a camera or a light meter - in a way that isolates the entire cat, the black cat reading would tell you there is not a lot of light and suggest a slower shutter speed, a wider lens opening, or a higher ISO (more sensitive to light) to allow more light to hit the film or camera sensor; the white cat reading would suggest the opposite. For argument's sake, a black cat might reflect 1 1/2 stops less light than middle gray, and the white cat 1 1/2 stops more. If you were to use the white cat's reading as a reference, you would have to add 1 1/2 stops more exposure - either by opening up the lens or lengthening the shutter speed. This would bring the tonal value of the cat from the middle gray the light meter assumes to a brighter value - along with everything else in the scene. You could likewise use the black cat as a reference and decrease the exposure - experience and sample measurements will help you place the value of anything you read with a reflected meter where it belongs.

In contrast, an incident meter would only read the amount of light hitting the subject, disregarding the brightness difference between the two subjects. So the setting for a picture of the black cat would be no different than for the white cat. The dark cat would reflect less light and appear dark; the light cat would appear light. Using the exact recommendation would result in a perfectly exposed image in most cases.

This is an important concept upon which all exposure is based. You CAN use a reflected light meter to accurately expose an image, but this is where experience and common sense come into play. It helps to think of the world in terms of shades of gray. To be more specific - 11 patches of gray, from complete black (Step 0) to complete white (Step 10), with nine more patches in between. These are spaced "one f stop" apart, which simply means that moving from black to white, each step reflects twice as much light as the previous one. The table below shows typical picture elements and what their values might be:




The table works on the premise that the average scene has a brightness range that generally does not exceed 11 f stops. This is a good thing, since most digital cameras have trouble recording an image when the brightness range goes over 10 stops. When encountering a scene with an unusually wide brightness range, the photographer must decide what is more important - highlights or shadows - and adjust exposure accordingly. Modern camera metering systems read the entire viewfinder's worth of tonal values, then do some very complex interpretations of what they read: taking an average of the entire scene, sometimes giving greater weight to the center area or to what the camera is focusing on, or taking into consideration the brightest areas and adjusting exposure to avoid overexposing them. But the one thing a meter cannot evaluate is what the subject matter is. A camera or handheld meter cannot tell that the light it sees is coming from a black cat - and will suggest a camera setting that renders the black cat at value V on the chart, when in fact it is probably closer to III. It would do the same thing if it read a white wall. That decision is left to the photographer.
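The doubling premise behind the gray scale is easy to sketch in code. This is my own illustration (not taken from the table): eleven patches, one f stop apart, each reflecting twice the light of the one below it.

```python
import math

# 11 gray patches, Step 0 (black) through Step 10 (white), one f stop apart:
# each step reflects twice as much light as the previous one.
reflectance = [2 ** step for step in range(11)]  # relative units of light

# Middle gray sits near Step 5; the whole scale spans 10 doublings of light.
stops_black_to_white = math.log2(reflectance[-1] / reflectance[0])
print(stops_black_to_white)  # 10.0
```

Eleven patches one stop apart span ten doublings - which is why a scene that stays on this scale is just barely recordable by a camera with about ten stops of dynamic range.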

A very useful tool is an 18% gray card - as long as you understand that it may give you an erroneous reading, but it will do so in a linear fashion. What I mean is that it could read 1/2 stop brighter, but it will affect all your settings by the same 1/2 f stop. The reason for this, according to Thom Hogan, is that meters are actually calibrated to 12% reflectance, not 18%, making all readings off by about 1/2 stop. But, as always, your mileage can differ, so it's always best to test your card under typical lighting situations and check the camera's histogram (not the software's histogram) to see if the reading is dead center. If it is off to one side, you have to dial in enough exposure compensation to bring it back to center. You can visit http://www.bythom.com/graycards.htm for a more detailed explanation.
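Hogan's calibration point can be checked with one line of arithmetic (my calculation, not his): the gap between an 18% card and a meter calibrated to 12% is log base 2 of their ratio, in stops.

```python
import math

# Stops of difference between an 18% gray card and a meter calibrated to 12%.
offset_stops = math.log2(18 / 12)
print(round(offset_stops, 2))  # 0.58 - roughly the half stop described above
```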

I will use the next few lines to make some general statements about how shutter speed, lens opening and ISO interact, and how that affects your exposure setting, with the intent of following up with greater detail in future posts.

ISO + Shutter Speed + Fstop = Correct Exposure - they must always be in balance, and this is ALWAYS true. If you use a lower ISO (less sensitive) you need to open the lens or slow down the shutter speed. Remember, the Fstop number is the ratio of the focal length to the opening of the lens, so as you increase the F number the lens opening gets smaller. Just to totally confuse you, the shutter speed numbers on your camera represent the denominator of the fraction of a second that the shutter is open and admitting light to the film or sensor - so if the camera says 250, it assumes that you know it means 1/250 of a second, 4 would mean 1/4 second, and so on.
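One way to see the balance is through exposure value (EV): settings that produce the same EV produce the same image brightness. The helper below is my own sketch using the standard EV formula, not anything from a camera manual.

```python
import math

def exposure_value(f_number, shutter_seconds, iso):
    """Standard exposure-value formula. Settings with equal EV are 'in
    balance' - they let the same total light reach the film or sensor."""
    return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100)

# 1/250 s at F8, ISO 100 balances 1/125 s at F8 with the ISO halved to 50:
ev_a = exposure_value(8, 1 / 250, 100)
ev_b = exposure_value(8, 1 / 125, 50)
print(abs(ev_a - ev_b) < 1e-9)  # True
```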

Remember that each f stop represents a doubling or halving of the light getting to the film or sensor. I'll start with ISO values, since these are a bit more intuitive. An ISO value of 200 is 2x as sensitive as 100. A value of 400 is 2x as sensitive as 200. To go from an ISO of 100 to 400 means that you are doubling twice - or 2 f stops.

Shutter speeds are similar. It's fairly straightforward to understand that if your shutter is set to 1000 (1/1000 sec) and you change it to 500 (1/500 sec) you will be letting in 2x as much light. If you slow it down to 1/250, you will be letting in 2x again more light - moving from 1/1000 to 1/250 you are adjusting the light by 2 f stops.
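Because ISO values and shutter durations both scale linearly with light, the number of stops between any two of them is just a base-2 logarithm. A tiny sketch (the helper name is mine, not a camera function):

```python
import math

def stops_between(a, b):
    """F stops of light between two linear quantities - ISO values, or
    shutter durations in seconds. Each doubling is one stop."""
    return math.log2(b / a)

print(stops_between(100, 400))                 # ISO 100 -> 400: 2.0 stops
print(round(stops_between(1/1000, 1/250), 6))  # 1/1000 s -> 1/250 s: 2.0 stops
```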

Now things get a little hairy. Lens fstop numbers are not intuitive, since they represent a numerical ratio of the focal length to the effective diameter of the lens opening. In the simplest of examples, a 200 mm lens with a maximum opening of 100 mm in diameter would be listed as F2. If the same focal length were an F4 lens, the opening would be 50 mm in diameter. The difficulty is introduced when you realize what your high school geometry teacher was trying to get you to learn - the AREA of a 100 mm diameter circle is 4 times the area of a 50 mm circle, so it lets in 4x more light - or 2 f stops. With lenses, the standard sequence is 2 - 2.8 - 4, with each interval representing one f stop. So changing from a lens opening of F4 to F2.8 would double the light coming in, or one f stop, and again going from F2.8 to F2. Because of these relationships, if you double the exposure time but close the lens down by one f stop, the image will have the same brightness. You could also speed the shutter up by one stop and double the sensitivity (ISO), and end up in the same place exposure-wise.
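The geometry is easy to verify in code. This sketch assumes only the thin-lens relationship described above (f number = focal length / opening diameter) and the fact that circle area scales with the diameter squared.

```python
import math

focal_length = 200.0            # mm
d_f2 = focal_length / 2.0       # F2 opening: 100 mm across
d_f4 = focal_length / 4.0       # F4 opening: 50 mm across

area_ratio = (d_f2 / d_f4) ** 2  # area scales with diameter squared
stops = math.log2(area_ratio)    # each doubling of light is one stop
print(area_ratio, stops)         # 4.0 2.0
```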

But what constitutes "correct" exposure? Typically it is the setting where you are able to capture all the information possible in your image. Which begs the question: "How much is enough? Too much? Not enough?" A better working definition of correct exposure is the setting that will capture the information the photographer wants to show.

This image of a Bufflehead was a particularly challenging exposure situation - a mostly dark bird, with bright white markings, bright sun that was low in the sky causing deep, long shadows, and water. It was shot with a 600 F4 and a 1.4x extender, which meant that the largest lens opening possible was effectively F5.6, but to provide better image quality I needed F8. So that set things up for a shutter speed that was short enough to stop the wave action and any random small movements from the bird. The end result was to adjust to a  higher ISO - 1000 in this case - to ensure that all of the above conditions were satisfied.




Even with all of the above in place, there was still a looming challenge that I decided I would not try to solve in the field. The brightness range was greater than what my camera could record. The rule of thumb is if you want any detail in the white areas, take care not to overexpose them. But that meant that all of the dark areas would have been "lost in the mud." So I decided to compromise a bit of the highlight detail in order to get the subtle iridescence from the neck and sides of the head, and show the all important eye. The dark areas did in fact go to "mud," but I was able to selectively lighten, or "dodge," the darker areas to reveal the texture and color of the plumage.

In the interest of keeping things as simple as possible, I will describe, in broad terms, what happens when you tinker with the three elements of exposure.

ISO - the less sensitive (lower number) you use, the less noise/grain you will have in your final image. You will have greater detail and sharpness, and a broader "dynamic range" (more about this in a future post).

Shutter Speed - slower speeds let in more light but have less "motion-stopping" capability. This is not necessarily a bad thing - you want a longer exposure to show things like fireworks, headlights of cars in traffic at night, or star trails - or for the special technique where the photographer purposely uses a slow shutter speed and pans the camera with a moving subject, showing the subject relatively blur-free while totally blurring the background, thus giving the impression of extreme speed. On the other hand, if you want to stop the beating wings of a hummingbird you'd better use as fast a shutter speed as possible.

Aperture/Lens Opening - big openings let in lots of light; however, only the most specialized lenses are sharpest at their widest opening. If you see a lens that is F2.8 or as fast as F1.4, there is a good chance that the designer made that lens tack sharp at that opening. Many lenses have a sweet spot at F5.6-F11 where they are sharpest. Another phenomenon is depth of field - when you focus at a specific point, at what distance do things begin to look sharp in front of the focal point, and at what distance behind it do they become unacceptably out of focus? Smaller openings (larger numbers) give you the advantage of a deeper depth of field, while bigger openings (smaller numbers) provide only a very shallow zone of sharpness. You have all seen pictures where the subject is nicely sharp and the background is all blurry and soft. That is a dead giveaway that the lens was pretty wide open. Telephoto or long focal length lenses have shallower depth of field than wide angle lenses at the same distance. But at the same magnification (image size on the sensor) the depth of field is exactly the same. All this means is that at the same lens opening a 200 mm lens at 20 ft is going to have the same depth of field as a 100 mm lens at 10 ft.
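The same-magnification claim can be checked numerically. This sketch uses a common thin-lens depth-of-field approximation with an assumed 0.03 mm circle of confusion (a typical full-frame value); the focal lengths and distances match the example above.

```python
def depth_of_field_mm(focal_mm, distance_mm, f_number, coc_mm=0.03):
    """Approximate total depth of field: DOF ~ 2*N*c*(1 + m) / m^2,
    where m is the magnification at the focused distance."""
    m = focal_mm / (distance_mm - focal_mm)  # magnification
    return 2 * f_number * coc_mm * (1 + m) / m ** 2

# 200 mm at 20 ft vs 100 mm at 10 ft - same magnification, same F8:
ft = 304.8  # mm per foot
dof_200 = depth_of_field_mm(200, 20 * ft, 8)
dof_100 = depth_of_field_mm(100, 10 * ft, 8)
print(dof_200 == dof_100)  # True - identical depth of field
```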

As you can see, there are a lot of things that must come together in order to get consistent results, but the most important thing is to "own" the fundamentals. These building blocks of exposure, once grasped with confidence, will allow everything else to fall into place. You will intrinsically know what is possible without giving it a second thought, and what you need to do with your settings to get the finished product looking the way you want.





Monday, February 20, 2012

HDR - How to Expand the Dynamic Range of your DSLR



You've seen it on your smartphone, maybe you've seen it on your point and shoot camera - HDR. You may have even used it without knowing what it is or how it works. Hopefully this will help you gain a better understanding of HDR and it will open up some possibilities for seriously better pictures in certain situations.

In simple terms, HDR - or High Dynamic Range - is a way to capture a range of brightnesses that is beyond the camera's capacity to record in a single exposure.

In this first image you can see that the camera's exposure recommendation results in pasty-looking clouds lacking any tonality or detail, and the image seems darker than it should be for a sunny day. The shadow areas lack the detail and "punch" that were present in the original scene. This is quite typical, as the camera's metering system tries to cope with the super-bright sky elements - it tends not to overexpose the sky too badly, sacrificing the darker areas, since you can always recover shadow information - sorta.


This is the middle image in the HDR sequence.



The next shot shows what happens when you underexpose the scene by 2 stops. The clouds look pretty good, but everything else has gone to pot.



This shot shows all the shadow areas with rich detail and nicely exposed, but everything else is washed out.




While I might be able to work with the image that is underexposed by only one stop, and possibly the one that is completely dark,  I would need to take heroic steps to introduce fill light and highlight recovery and dial in large amounts of brightness. But the result will be noisy in the shadows, and it will lack the overall vibrance of the original scene. This scene probably had a brightness range of 13-14 fstops.

In practice, the very best professional digital cameras can faithfully record up to 10 fstops of dynamic range. What this means is that if you were to use a light meter to read the light coming from the darkest area of a scene in which you want to show some detail, then you read the light from the brightest area with detail, there would be no more than 10 fstops difference between the two readings.
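Put another way, the test is simple subtraction between two spot readings. The EV numbers below are hypothetical, purely for illustration:

```python
# Hypothetical spot-meter readings, in EV (not measured from any real scene):
shadow_ev, highlight_ev = 5, 16

scene_range = highlight_ev - shadow_ev   # stops between darkest and brightest detail
camera_range = 10                        # roughly what the best DSLRs can record
print(scene_range, scene_range <= camera_range)  # 11 False - the scene won't fit
```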

Most of you won't have a light meter, but most cameras offer a spot metering mode that will allow you to narrowly and precisely select tiny areas for exposure evaluation. As the name implies, it measures light from just a small spot in the center of the image, rather than the entire screen. For the most part this is fairly accurate, and if you are familiar with Zone System metering, it can help you nail the exposure - but that is a topic for a future post.

Another way to think about this is to look at your camera's histogram. If the histogram is all stretched out and making full contact with both the left and right sides, there is a good likelihood that you are going to lose detail and texture in both the highlights and the shadows. To a small degree, shadows can be "lifted" or lightened in Adobe Camera Raw or any reasonable raw converter using commands named "Fill" or "Shadow Recovery." To a lesser degree, ACR can rebuild detail from highlights that are not severely blown. It does this by looking at each component of RGB (the red, green and blue channels) and copying the detail in the least blown-out channel to add to the other two. But the shadows will have noise, and the highlights will look fairly pasty.
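To make the least-blown-channel idea concrete, here is a much-simplified sketch of my own - it is NOT ACR's actual algorithm, just the general principle: where a channel has clipped at the maximum value, borrow the value of the channel that survived.

```python
CLIP = 255  # 8-bit maximum, used here for simplicity

def recover(pixel_row):
    """pixel_row: list of (r, g, b) tuples. Replace clipped channel values
    with the value of the least-blown channel in the same pixel."""
    out = []
    for r, g, b in pixel_row:
        best = min(r, g, b)  # the least blown-out channel value
        out.append(tuple(best if v >= CLIP else v for v in (r, g, b)))
    return out

row = [(255, 250, 248), (255, 255, 240)]
print(recover(row))  # [(248, 250, 248), (240, 240, 240)]
```

Note how the second pixel, with two clipped channels, ends up flat gray - which is exactly why rebuilt highlights "look fairly pasty."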

If you have one of those situations where you have to get both extremes, HDR will allow you to combine multiple exposures at different exposure settings, blending them into a single ultra-wide contrast 32 bit image. A typical HDR image will consist of two or more images - one or more that are underexposed to preserve the highlights and one or more that are overexposed to preserve the shadows.
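The merging step can be sketched with a toy example - again my own illustration, not any product's algorithm: scale each bracketed frame back to scene-referred light using its EV offset, ignore clipped pixels, and average what remains.

```python
def fuse(brackets, evs):
    """brackets: list of frames (lists of 0-255 pixel values), one per EV
    offset. Undo each frame's exposure shift and average the usable pixels."""
    fused = []
    for i in range(len(brackets[0])):
        samples = []
        for frame, ev in zip(brackets, evs):
            v = frame[i]
            if 5 <= v <= 250:                  # skip crushed or blown pixels
                samples.append(v / (2 ** ev))  # undo the exposure shift
        fused.append(sum(samples) / len(samples))
    return fused

under = [10, 60, 128]   # -1 EV frame: darks crushed, brights preserved
over  = [40, 240, 255]  # +1 EV frame: darks preserved, brights blown
print(fuse([under, over], [-1, 1]))  # [20.0, 120.0, 256.0]
```

The fused value 256.0 exceeds the 8-bit range - which is why the merged result must live in a 32 bit file until it is tone mapped back down.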

There are some technical hurdles to be overcome, however. The resulting ultra-wide contrast image is a very large 32 bit file, which cannot be displayed on a conventional monitor or printed. Rather than going into a technical description here, an excellent explanation of bit depth, 32 bit files and how they relate to HDR photography can be found at http://www.cambridgeincolour.com/tutorials/bit-depth.htm

So lets get on to the nitty gritty of HDR photography - from planning a shot to finished image.

If you think a scene is a candidate for HDR, take a test shot at the camera's recommended exposure settings and look at the histogram and the image in preview. If the image looks normal and the histogram does not touch the left and right sides, HDR is not going to make a difference. There are other software packages, such as Lucis Art and Topaz Adjust, that can give you the HDR "look" by manipulating local contrast and shifting and boosting colors to enhance detail and shadow rendition.

The HDR candidate has a histogram that looks like this:


Clearly, a single image will not be able to capture all the data at either extreme of the tonal range.  You can see how the camera tends to underexpose - most of the image's information is concentrated on the left side.

It's best to shoot your scene on a tripod, although I take many HDR scenes hand-held. I use RAW file format to ensure I record all that the camera can see. I set the camera to aperture priority and manual focus, and if the lens/camera has image stabilization or vibration reduction, I turn it off. This way the camera will not re-acquire focus, adjust the aperture, or do anything else that might affect image backgrounds by changing the depth of field.


The easiest way to take multiple exposures with most DSLRs is to use automatic bracketing. Many Nikons have the ability to bracket up to 9 exposures, up to one f stop apart, in a rapid sequence by holding down the shutter for the entire series.


Most entry-level pro cameras allow you to do this with a single shutter press, automating the process. In a Nikon D200 there is a single button on the back that controls bracketing. The D300 requires you to assign the function to one of the assignable buttons. If you have a remote shutter release (wireless or tethered) use it, along with mirror lock up - this will minimize any human-based vibrations especially on night exposures.

Either camera will allow you to use an intervalometer, so that you can automatically record the exposure sequence in a "hands off" fashion. For those not familiar with this function, an intervalometer is a built-in function that allows you to record images on a predetermined schedule, within either a given time frame or number of pictures taken. If your camera does not offer the automation described above, you can always take a shot and adjust the shutter speed manually between frames.

If you are careful not to move, you can shoot a 3-5 shot HDR scene without using a tripod. I have even done HDR scenes at night, using 1 second exposures, braced against a solid surface, and breathing normally.

But planning and shooting the image is only half the fun. You get to "play" with it and turn it into something special and uniquely yours in the next part - the post processing.

There are several paths you can use to process your HDR images -

·         Photoshop CS3 and above (Merge to HDR command, Tone Mapping command)
·         Photomatix Pro (standalone and Photoshop plugin)
·         Nik HDR Efex (Photoshop plugin)
·         Enfuse (Lightroom plugin)
·         Artizen HDR (standalone)
·         DynamicPhoto HDR (standalone)
·         EasyHDR (standalone, easy interface, entry level)
·         Essential HDR (standalone, entry level)
·         HDR Darkroom (standalone, easy interface)
·         HDR Photo Studio (standalone)
·         Luminance HDR (standalone, free, difficult interface)
·         HDR Pro
·         Picturenaut (free, entry level, no frills) - and I am sure there are others.  
The interpretation of an HDR image is very personal - each artist's eye is different, and most software packages give you a very broad set of adjustments to do just about anything you want.

Personally I use Photomatix Pro - it provides enough adjustments to render images that are realistically natural, yet enough adjustability to get really creative. I have tried Photoshop's Merge to HDR and tone mapping, but I found that I am spoiled by the flexibility of Photomatix. Here is a workflow that I often use.

This is a set of 5 that I took for the bridge scene above.


In Photomatix, I browse for them - Photomatix will allow you to select multiple images.



After pressing OK, you will get a dialogue box that allows you to make some adjustments and determine how you want to handle things like image ghosts (things that moved during the exposure sequence), noise reduction, alignment, color space, etc.



Pressing OK will merge the images into a single, 32 bit image and display it in the Image Editor.


It looks pretty awful - most displays cannot show an image with such color depth and range of brightness. Tone mapping will take all the information and do a decent job of assigning colors and values that are displayable/printable. I save this intermediate image, in full 32 bit, as a .HDR file. Next I select ToneMapping / Fusion to open the Tone Mapping Editor.



You can see the image now looks a lot better. The histogram is looking pretty good too. Here you get to have fun - play with all of the  adjustment sliders to familiarize yourself with what they do.



I usually start by adjusting the black, white, gamma, saturation and smoothing.


There is a small option panel called Lighting Adjustments that will take you through a series of 5 presets that I sometimes use.


You can also select from a set of presets by clicking on the strip of images at the bottom of the screen.



Next I scroll down the settings menu to display the Advanced Options and adjust the sliders to get things looking exactly the way I want them to.




When done, I save the image as a 16 bit TIFF file so I can edit and adjust things a bit more in Photoshop.


Photomatix will provide a default file name that is a combination of all the names of the component files.

I use the Open As option to open the image as a RAW file in ACR. I usually adjust perspective and brightness, sometimes tone down (reduce the saturation of) the colors in both ACR and after opening it in Photoshop, then I perform final sharpening. At this point I either save as a 16 bit TIFF, or as a JPG, which is a lot smaller and can usually be sent as an email attachment or uploaded to a website.

Tip: You can create a pseudo HDR by saving three versions of a RAW image from ACR, each with a different exposure compensation value: -1, 0 and +1. Then process them normally in Photomatix. Also, you can open a single image in Photomatix and apply tone mapping adjustments. The results will not be as dramatic, but it can produce some pretty fine images.



Sunday, February 19, 2012

Sunrise over Greenwich Point - HDR



Out of the house at 6:15 AM to catch a sunrise - and maybe some nice bird photographs this morning. Weather forecast called for calm winds, clear sky, and the Sun peeking out from below the horizon at 6:44.
I just barely made it in time to grab about 3 shots. I like to do sunrises/sunsets in HDR because the light of the sun is very intense and would produce an exposure setting that skews everything to the dark side in an effort to avoid overexposing the sun. The HDR above is a 5 exposure shot - 2 under, 1 under, a middle exposure, 1 over and 2 over. If you are curious as to what the heck I am talking about, check in tomorrow when I post a how-to tutorial on HDR photography.

Here are a few more HDR shots from this morning.



Friday, February 10, 2012

AF-On Button - A Better Way to Focus



Damn, my camera didn't focus where I wanted it to! Has this ever happened to you? Automatic focus, no matter how sophisticated it is, can be both a blessing and a curse. When it works, it does a great job - until it doesn't. Here is a quick tip to help you get more in-focus images. It involves re-thinking and retraining how you focus. Sounds a lot worse than it actually is.

By default, most DSLR cameras come set up so that a half press on the shutter will enable VR, set the exposure, and acquire focus. Generally, this is way cool, but sometimes you want to make some of those decisions yourself.

Focus reliability is my big pet peeve with such an arrangement, particularly when shooting with a lens that is wide open in less than optimum light, or when you want to isolate the subject from its background - and you need to be very precise with your focus selection.

On the more advanced Nikon camera bodies you will find a menu selection that allows you to disable focus on shutter press,



and another selection that allows you to assign the focus activation to a different button - usually the AF-On button.



Why bother?

Let's consider what happens when you are shooting people at an event. You frame the face and use the focus aids in the viewfinder to get the eyes in focus, then you reframe the image to make a reasonable composition, at which point you press the shutter - and you get a perfectly focused torso. This happens because the camera re-acquires focus when you press the shutter.

You could use single servo focus, so that once you acquire focus it remains locked until the picture is taken. But I can't tell you how many times I left the camera on that setting, then started to shoot a moving subject, and ended up with a whole series of images that were out of focus, simply because I forgot to set the camera back to continuous focus.

When you disengage the focus from the shutter release and assign it to the AF-On button, you have the freedom of activating focus only when you need to. A single press of the button will allow the camera to behave as if it were in single servo mode - acquiring focus only when pressed, and not re-acquiring focus as you release the shutter. Yet if I want to follow a subject that is moving around, I can hold the button and "track" the subject, and still have the flexibility to use it in "single servo" mode with a single press - without ever having to switch the camera from single to continuous or vice versa.

Another benefit is battery life. If you use VR lenses, with the default setting, each time you half press you are focusing and activating the VR, which really chews through batteries. Separating the two functions only engages VR when you press the shutter. You will get anywhere from 25% to 50% more shots by using the AF-On button to focus.

It does feel very awkward at first, especially if you are one of those photographers who also set the audible "beep" to let you know you are in focus. To best take advantage of the AF-On button, you will have to train your thumb to focus, use your eye in the viewfinder to check the dot that shows you have acquired focus, and use your index finger just to release the shutter. Trust me, once you learn this new way, you will never go back to the old. In a matter of a couple of outings, you will be comfortable with the new method and your percentage of "keepers" will improve. There is a reason why sports photographers almost universally prefer this method of focusing over the default one.

Thursday, February 9, 2012

Panoramas in Photoshop


Panoramas are wonderful ways to repeal the laws of physics – at least when it comes to optical designs. Here is the issue: your eyes can view around 180 degrees horizontally and about 135 degrees vertically. There is not a single camera/lens system that can capture that in a single image without all sorts of distortion. Well, sorta.
Post-processing does offer a solution - panoramic stitching. Yes, I know, many point and shoot cameras, smartphone cameras and other photographic devices offer a crude version of in-camera stitching. But creating a pano in a dedicated panorama application, or using the Photomerge process in Photoshop, will yield much better results, with seamlessly smooth transitions between the component images. And, in the case of Photoshop, the process is straightforward and fairly easy. You don't necessarily need any fancy camera, tripod, or tripod head to do it, although I will admit that those accessories do make things easier and help improve your results.
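Under the hood, stitching boils down to aligning the frames and then blending them where they overlap. As a toy NumPy illustration of the blending half (my own sketch, not Photoshop's algorithm, and assuming the frames are already aligned), two horizontally overlapping grayscale strips can be feather-blended like this:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally overlapping images (H x W arrays).

    `overlap` is the width in pixels shared by the right edge of
    `left` and the left edge of `right`. A linear ramp weights the
    contribution of each frame across the shared region.
    """
    h, w_l = left.shape
    _, w_r = right.shape
    out = np.zeros((h, w_l + w_r - overlap))
    # Non-overlapping parts copy straight across.
    out[:, :w_l - overlap] = left[:, :w_l - overlap]
    out[:, w_l:] = right[:, overlap:]
    # Linear feather: weight slides from all-left to all-right.
    ramp = np.linspace(0.0, 1.0, overlap)
    out[:, w_l - overlap:w_l] = (1 - ramp) * left[:, w_l - overlap:] + ramp * right[:, :overlap]
    return out
```

The linear ramp is what hides the seam: instead of a hard cut, each frame's contribution fades out gradually across the shared region - the "seamlessly smooth transitions" mentioned above.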

Continue reading to see how I did the pano of the mouth of San Francisco Bay looking north towards the Marin Headlands.

I started by taking several test shots to check the histogram for exposure. I looked at both ends of the histogram to see if anything was touching either side, which would have meant lost information in the shadows or highlights. This would have been the time to make any exposure compensation adjustments. Happy with the results, I took my sequence of shots, being careful to overlap each image about 50% with the prior one. I also turned on the grid display to ensure that my horizon remained at more or less the same level in the viewfinder.


Once in front of the computer I opened Adobe Bridge to identify and highlight the images. Right-clicking on one of the images, I opened all of them in ACR (Adobe Camera Raw).


In ACR I was able to look them over, pick a representative image, and apply the lens correction, clarity, sharpening, black and white levels, exposure adjustment, etc. to get it looking pretty good.

Then I used the Select All button (top left) followed by the Synchronize button to apply the adjustments to all of the images. With all images still selected, I pressed the Open Images button at the bottom right. This loaded the images into Photoshop as individual files.


The next step was to create individual layers from each image and combine them in a single file. Photoshop provides the File-Automate-Photomerge command, which brings up a dialogue where you select the type of merge (Auto Panorama in this case) and the source of the files - I used the Add Open Files option - and check the Blend Images Together box.


This results in the following screen - note the layers in the layer palette. They are masked to display only the part that each contributes to the entire image.




Using the Layer-Flatten Image command I merged all the layers into one file, which Photoshop names Untitled_Panorama1 by default. At this point I no longer needed the contributing files, so I removed them without saving.


You can see that the horizon is pretty straight, but I did tilt the camera a bit as I changed my body position from right to left, leaving some areas to crop away and some areas that can be filled in with Content-Aware Fill.
Selecting the Crop tool from the palette, I cropped away the left and right borders to clean up the sides, but left a little white space above and below the image so that I could fill it and end up with a slightly taller image.




With the Magic Wand selection tool picked, I clicked in the white space at the top left of the image to select it. Then, using Select-Modify-Expand,



I added an 8-pixel expansion to the border. This helps ensure a seamless fill.

Then I used Edit-Fill-Content-Aware to fill the white area. I repeated this on the other areas until done.
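Incidentally, the Expand step is essentially a morphological dilation of the selection mask: the selected region grows outward by n pixels so that the fill slightly overlaps the surrounding image. A rough NumPy sketch of the idea (my own illustration, not Photoshop's implementation):

```python
import numpy as np

def expand_mask(mask, n):
    """Grow a boolean selection mask outward by n pixels
    (one 4-neighbor dilation pass per pixel of growth)."""
    out = mask.copy()
    for _ in range(n):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # spread selection downward
        grown[:-1, :] |= out[1:, :]   # spread selection upward
        grown[:, 1:] |= out[:, :-1]   # spread selection rightward
        grown[:, :-1] |= out[:, 1:]   # spread selection leftward
        out = grown
    return out
```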




There were still a few areas that needed a little additional work - in particular the top edge and the top left corner. The Clone Stamp tool came in handy for removing the branches left hanging in the air by the Content-Aware Fill procedure.








And this is the final product in Photoshop:




That’s pretty much all there is to it. Once you get the hang of it, you can process a RAW-based pano of 5-7 frames that require minimal adjustment in about 5 minutes.












Sunday, February 5, 2012

Focus Stacking



When I am really close to a subject, depth of field is usually an issue. Under normal circumstances I need to choose what to have in focus. It has become customary to use this characteristic creatively - you will see images that have only a small portion in crisp focus, with the rest blurred. Back in the day, if I wanted a subject in complete focus from front to back, I could always use my Sinar F view camera with its tilting lens/film planes to increase depth of field, similar in function to the one below.

http://commons.wikimedia.org/wiki/User:Jacopo188

Nowadays I often use a technique that combines a series of images - or slices - of the subject, each one taken at a different point of focus, from front to back. Then I combine them in post-processing to produce an image that is razor sharp throughout - with seemingly unlimited depth of field. This is known as Focus Stacking.
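The core idea can be sketched in a few lines of NumPy: score every pixel of every slice for local sharpness, then take each output pixel from the slice where it scores highest. This is my own simplified illustration (the commercial tools use far more sophisticated alignment and blending), assuming pre-aligned grayscale frames:

```python
import numpy as np

def focus_stack(slices):
    """Merge a list of grayscale frames (H x W arrays), each focused
    at a different depth, by taking every pixel from the frame where
    it is locally sharpest (largest Laplacian magnitude)."""
    stack = np.stack(slices)                       # (N, H, W)
    # Discrete Laplacian as a crude per-pixel sharpness measure.
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = lap.argmax(axis=0)                      # sharpest slice per pixel
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)]
```

Real stackers also register and rescale the frames first, to compensate for the magnification changes discussed later in this post.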

There are several applications that do this - CombineZP, Picolay and Tufuse (all free), as well as Helicon Focus, Zerene Stacker, and Photoshop CS4 and CS5, which are commercial applications, some offering free trials.

I will go through a quick illustration in Photoshop CS5 to show how I did the Day Lily above.

When photographing a subject that you intend to focus stack, ideally you should be on a sturdy tripod equipped with a horizontal arm that can get you close to the subject, like this one from http://store.tabletopstudio-store.com/hoarmfortr.html



and use a rack-and-pinion macro focusing rail that lets you precisely adjust the distance between the subject and the camera, like this exquisitely crafted one from fellow photographer and master machinist Kyrstof Hejnar, which can be found at http://www.hejnarphotostore.com/:






It is possible to obtain reasonable results by simply adjusting the focus, but this approach does have its downside. When you change focus, you change the optics, and as a result you may change the character of the out-of-focus background (the bokeh). The perspective changes, and the magnification changes, since you are extending the lens-to-subject distance - elements that are farther from the camera will diminish in size. Also, most modern lenses feature internal focusing, which adjusts the optical formula to focus closer without altering the external parts of the lens; as you close in, the lens shortens its effective focal length, which also affects image magnification. Moving the camera to change the focus point avoids all of this, since the focus setting itself never changes - subjects will be rendered more consistently. Luckily, adjusting for the variances in subject size/magnification is within the capabilities of the focus stacking applications, and some correct better than others.
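To put a number on the magnification shift, the thin-lens approximation is enough (a simplified model - real internal-focus lenses deviate from it). With focal length f and subject distance d measured from the lens, magnification is m = f/(d - f), so changing either the focus distance or the effective focal length changes m:

```python
def magnification(focal_mm, distance_mm):
    """Thin-lens magnification: m = f / (d - f), with d the
    subject distance from the lens (simplified model)."""
    return focal_mm / (distance_mm - focal_mm)

# A hypothetical 100 mm macro lens: refocusing 20 mm farther back
# changes magnification by roughly 9 percent.
near = magnification(100, 300)   # 0.5x
far = magnification(100, 320)    # about 0.45x
```

That few-percent size change between slices is exactly what the stacking applications have to correct for.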

Shoot the images in RAW format. If you are not familiar with RAW, look at my earlier post on the subject and make use of all of the helpful information available on the Internet. Shooting RAW allows me to adjust a single image for exposure, contrast, lens aberrations and distortions, white balance, black and white levels, sharpening, etc. - and apply the adjustments made on that one image to the rest of the images - a real time saver.

I start by setting up my camera for the lowest ISO and my lens for its sharpest aperture with good depth of field, and since I am using a tripod, I just allow the shutter speed to fall where it needs to for a proper exposure - which in this particular case was ISO 200, f/8 and 1/500 sec. I use aperture priority to let the camera determine the correct shutter speed, make a mental note, set the camera to manual exposure using the camera's suggestion, and turn off autofocus. Next I manually focus on the nearest point that I want to see sharp and take a shot. I advance the focus a little to the rear, making sure that I overlap the previous shot's zone of focus, and take another shot. I repeat this until I get just beyond the farthest area I want in focus.
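For reference, the equivalence of exposure settings can be checked numerically. The camera-side exposure value is EV = log2(N²/t), and any aperture/shutter pair with the same EV admits the same amount of light. A quick sketch using the aperture and shutter speed from this shoot:

```python
import math

def ev(f_number, shutter_s):
    """Camera-side exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# The settings from this shoot: f/8 at 1/500 sec.
base = ev(8, 1 / 500)                    # about 15 EV

# Opening up exactly one stop (dividing the f-number by sqrt(2))
# while halving the exposure time admits the same total light:
same = ev(8 / math.sqrt(2), 1 / 1000)    # identical EV
```

Nominal f-numbers like 5.6 are rounded, so real-world pairs match to within a small fraction of a stop.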

When I get back to my computer I copy the contents of the memory card to the hard drive, open Adobe Bridge, and identify the images I want to stack. I hold the Shift key as I highlight the images. When I have finished my selection, I right-click inside any one of them and select Open in Camera Raw from the fly-out menu.


All the images will be opened as a set in Adobe Camera Raw. After looking over one of the images and making my adjustments, I press Select All at the top left of the screen, then Synchronize, to apply the adjustments to the entire set. I press OK in the next screen to apply the changes. With all the images still selected, I press the Open Images button at the bottom to bring them into Photoshop.



All of the images are brought in as individual files. In the next step I combine them into one file as layers.



The File-Automate-Photomerge command brings up the dialogue below, where you select the Auto layout and Add Open Files, with no other boxes checked. If the Blend Images Together box is checked, uncheck it. Pressing OK will create the layers.



I highlight all the layers, then execute Edit-Auto-Blend Layers;



in the next dialogue, I pick Stack Images and check Seamless Tones and Colors.




After a few seconds (longer if you are merging lots of layers) the final product emerges. I remove the contributing images without saving them, then flatten the layers using Layer-Flatten Image.




I then apply the usual sharpening, cropping and other adjustments, just as if it were an ordinary single image.




Focus Stacking is a very useful technique that I use when shooting in close quarters, but it can be equally useful with larger static subjects at longer distances - a large ship, a freight train, landscapes, etc. It produces images that would otherwise be impossible and, as you can see, is not very hard to accomplish. See you soon!