Tuesday, March 13, 2012

Dreaded Lens Flare? Here's How I Deal with It.

Ask any photographer about lens flare - you will probably get an ugly expression accompanied by some choice expletives that I will not print here. It's a law of photography (that probably needs repeal) that you should always keep the sun at your back or side. The reasons are to provide better lighting on your subject, and to avoid - DREADED LENS FLARE - from ruining your shot.

I like to break rules, especially one like this. The many photographs of the setting or rising sun show that I am not alone here. Sunlight filtering through the trees will also wreak havoc on images made with lenses that are prone to flaring, which is why these images are often taken just before sunrise or just after sunset. If I like the light and the composition, I will shoot right into the sun if that is what it takes to get what I want, then fix it later. There is a mood, an element of "drama," that results when you shoot into the light in these conditions and that is hard to capture if you simply follow the rules. You can fix certain "features" in postprocessing, but you can't fix a bad composition that results from trying to avoid shooting into the light.



Needless to say, there are some obvious challenges here. First is the amount of light coming from the sun. It will fool your meter and make everything go to silhouette if you are not careful. HDR (covered in my earlier post) will take care of the extremely wide contrast range, and you can then work in Photoshop with a combination of tools to tame the flare.

Flare has two components: a strong color cast and a change in luminance. Where the flare occurs will determine how I go about fixing it.

Flare in an evenly toned sky or detail-less surface is the easiest. Just create a duplicate background layer and use the patch tool to select an adjoining area of sky and move the patch over the area that needs to be fixed. Done!
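If you like to tinker outside of Photoshop, the same idea can be roughed out in code. Below is a minimal sketch using OpenCV's inpainting, which fills a masked region from its surroundings, much as the patch tool does. The file names and the mask rectangle are made-up placeholders, not part of my actual workflow.

# Rough, hypothetical analogue of patching flare in a featureless sky with OpenCV.
import cv2
import numpy as np

img = cv2.imread("flared_sky.jpg")                 # placeholder file name
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:220, 400:560] = 255                       # placeholder rectangle over the flare spot

# Fill the masked region from surrounding pixels (similar in spirit to the Patch tool)
fixed = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("flared_sky_fixed.jpg", fixed)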



Flare that shows up in detail areas is more difficult. Here I employ a two-step process. First I remove the color cast. I start by creating a duplicate layer and selecting it, then I select a brush from the tool palette,




change the brush's blend mode to color;




then I select the eyedropper tool;



and set its sample size to 11x11 pixels;




I then select the brush tool again, and using the right click to enable the eyedropper, I sample a similarly colored area to use as my brush color.




I then paint over the flared area until all the green, yellow or magenta (or other color) is "neutralized." By using the color blend mode on the brush, the detail and texture are left intact, and I am only replacing colors.
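For readers who want to see roughly what the color blend mode is doing under the hood, here is a small sketch that approximates it by working in Lab: the luminance channel (the detail and texture) is kept, and only the color channels inside the flared area are replaced with the sampled color. The file name, mask rectangle and sample coordinates are placeholders, not values from my image.

# Approximation of a color-blend-mode repaint: keep luminance, replace chroma.
import cv2
import numpy as np

img = cv2.imread("flared_detail.jpg")              # placeholder file name
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

# Sample a clean, similarly colored area (like sampling with the eyedropper)
sample_a, sample_b = lab[300, 50, 1], lab[300, 50, 2]   # placeholder coordinates

# Mask covering the flared area (here a hypothetical rectangle)
mask = np.zeros(img.shape[:2], dtype=bool)
mask[200:400, 500:800] = True

# Replace chroma only; the L channel (luminance, i.e. detail) is untouched
lab[..., 1][mask] = sample_a
lab[..., 2][mask] = sample_b

cv2.imwrite("flare_color_neutralized.jpg", cv2.cvtColor(lab, cv2.COLOR_LAB2BGR))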


Now you see it,



and now you don't!



Once I am satisfied with the color removal, I flatten the layers. My second step involves adjusting the difference in luminance and/or reconstructing the brighter area.


I decide what tool to use - content-aware fill, clone, patch, or burn-in - based on the type of repair I intend to make. If there is detail and the difference in tone is not that bad, I will burn in the area, which is the approach I took with this image.



Photoshop provides dodge and burn tools, which let you limit their effective range to shadows, mid-tones or highlights. I usually bypass these and create a new "dodge and burn" layer: I fill it with 50% gray, set the blend mode to overlay or soft light, and use a black brush set at 15% opacity and 15% flow, painting directly on the 50% gray layer. This darkens the area smoothly with minimal effect on color saturation or hue. The reason this works so nicely is that both overlay and soft light are contrast enhancing, but the closer the tone is to middle gray, the less the effect - anything that is middle gray is completely transparent to the layer below it. It is easy to build up density with black or white and be very precise about where you are dodging and burning. If you want to check your progress, turn the gray layer's visibility on and off. If you have to back off on an adjustment, use the opposite color brush - white to fix black mistakes and vice versa. If you don't like what you have done at all, just delete the layer and start over again.
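If you are curious why 50% gray disappears under these blend modes, the arithmetic makes it obvious. Here is a tiny sketch of the standard overlay formula with values normalized to 0-1; the sample numbers are made up purely for illustration.

# Why a 50% gray layer is invisible under the Overlay blend mode.
def overlay(base, blend):
    """Standard Overlay blend: contrast enhancing, neutral at middle gray."""
    if base < 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)

for base in (0.2, 0.5, 0.8):
    unchanged = overlay(base, 0.5)    # 50% gray on top leaves the base tone alone
    darkened = overlay(base, 0.35)    # painting low-opacity black pulls it down smoothly
    print(f"base={base:.2f}  with 50% gray -> {unchanged:.2f}  with darker paint -> {darkened:.2f}")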

Here is the final image:






Monday, March 12, 2012

What You Saw is NOT What You Got?

Consider the following scenarios. You are careful to set a proper white balance when you take the picture, then you come home, load up Photoshop (or the image editing program of your choice), and you get it looking perfect. Then you print and something goes wrong - it looks like someone switched your files between the moment you press Print and the moment you see the printhead moving back and forth creating an image. The results are just WRONG. Colors are different, the orange shirt is now salmon-colored, people are red-faced, a white dog is orange, the purple flower is now a bright shade of magenta, etc.



The immediate response is to go back to your image editor or your printer dialogue and, based on what you see in the print, start making adjustments - a little less magenta, a little more cyan, add some yellow - wasting tons of ink and paper in the process - and when it's over you are wondering if your printer is broken.

Or, there is a different problem. Everything looks great on the screen; that cloudless blue sky is deeply saturated and perfectly smooth from edge to edge. You look at the print and it seems as if someone dropped coffee grounds during the print process. This is sensor dust. But there is a problem - you go back to the screen image and you can't see the spots. My guess is that you are using either a laptop or an inexpensive LCD/LED display with a limited bit depth. Without getting too technical, unless specifically noted in the specifications, all-purpose displays such as the sub-$200 flat panels or those in laptops are not capable of displaying the more subtle gradations in tone and color. Sometimes this will come across as banding in a sky, where you have a very gentle shift in hue, saturation and luminosity from the horizon to the top of the frame. Other times minute dust particles may have settled on your camera's sensor, blocking the light directly beneath each particle and causing a "dust shadow" to appear. But on the bargain monitors, the bit depth is not enough to differentiate the subtle changes, making the dust spots invisible on the display. Most printers have a wider gamut and are able to print this information. The better the printer, the better its ability to reproduce everything that is in the image - including the dust spots. Here is an example of dust spots:


If you don't see the dust spots, then there is a good chance you have one of "those" monitors. If you are serious about image quality - whether sending images to others electronically or making prints - then it's probably a good idea to put a replacement display into the budget.

Well, as far as the first situation is concerned, there's an app for that. More accurately, there is a combination of hardware and software that you can purchase to address the difference between displayed and printed colors and tones. There is a class of products called monitor/printer profiling applications that will fix the majority of the issues. These consist of either a colorimeter or a spectrophotometer that measures your monitor's native colors and gray tones on a test pattern, creates a table of values that corrects the differences between what the monitor shows and the neutral standard, and then builds a monitor profile that loads when you start your computer, making all the necessary adjustments. This way a green on screen will look like the green that will be printed - more or less. It's hard to do any image editing and color correction without at least the display being calibrated.


With the color accuracy of your display now under control, you have to address the print. If you use a printer manufacturer's inks and paper (and printer profiles if provided), or if you use a printing service, you are likely to get decent results - not perfect by any means, but reasonable. Most inexpensive printing services use a hybrid technique, employing a digital projection onto silver halide emulsion paper that is then processed in conventional wet chemistry. The more expensive houses use high quality, color profiled and calibrated inkjet printers. Each paper type offered has its own profile, and the printers use anywhere from 8 to 12 different pigmented inks - providing the widest color gamut and dynamic range possible. Very few affordable processes can even begin to approach the quality obtained from a properly processed image printed with 12 colors on rag paper in a color managed workflow.



Spyder Studio with monitor and printer profiling

Luckily the tools used to do this at the pro or commercial level are available in scaled down versions for the consumer. Datacolor and Xrite both offer affordable profiling solutions that work with most displays and printers as well as more expensive ones for professional printers and photographers. These create profiles for either printers or displays, or both.

The dust bunny situation involves being able to see the dust before you print. This requires a display that is capable of showing it. At present only a handful of display types can do this - those which use e-IPS, S-IPS, P-IPS, H-IPS, AS-IPS, H2-IPS and UH-IPS panels. IPS stands for In Plane Switching, a display technology developed by LG Philips in 1996. LG makes nearly all the IPS panels currently available in the marketplace. For all intents and purposes the S, H and P-IPS panels are the ones to look for. The e-IPS is an adaptation of the technology to lower the cost, and more often than not it can only display a color depth of 6 bits per color, or 2^6 x 2^6 x 2^6 = 64 x 64 x 64 = 262,144 colors simultaneously. This is not good for photo editing applications. The way 6 bit panels create 16.2 million colors is by rapidly switching between two colors at each pixel, creating the illusion of greater color depth - they look fine for general applications, but not being able to see all the information without this switching presents problems when you need to spot subtle things like dust and other artifacts. Sometimes you will see a panel specification stated in terms of a percentage of a color space - something like 72% of sRGB. Most printers can now print close to 100% of sRGB, which means that the prints will show more color and tonal variation than your display, which in simple terms explains why you won't be able to see the dust bunnies on a $200 LCD/LED panel. With the exception of some Apple products and the Lenovo Thinkpad X220, nearly all laptop displays are 6 bit.

The image below is an exaggeration, but it is a good way to illustrate the difference between high and low bit depth displays. The dithering is the switching that takes place - this is a static image, but you should still be able to see the banding. The rightmost color is typical of what you will see on an 8 bit per color (x3 = 24 bit) display.




At the very minimum you should be looking for a panel that can display 8 bits per color, or 2^8 x 2^8 x 2^8 = 256 x 256 x 256 = 16,777,216 simultaneous colors. These typically can display a color space as large as sRGB, considered the minimum for photo editing, and the pricier versions can display up to Adobe RGB, a bigger color space. A rule of thumb is to get the largest color space you can afford, but not less than 98% of sRGB. The specs that are meaningless to you are speed, brightness, contrast ratio, etc. These are all well beyond what you need. In some cases flat panel displays can be too bright, making them difficult to profile.

There are a few very costly 10 bit panels, which, if you do the math, can display 2^10 x 2^10 x 2^10 = 1024 x 1024 x 1024 = 1,073,741,824 simultaneous colors. These are absolutely breathtaking, but be prepared to spend more than $1100 for a 27" panel. The problem is that unless you can create a 10 bit workflow, such a display is overkill. Most cameras are 8 bit, as are printers, and there are few photo editors that can work in 10 bits. It is easy to see that a 10 bit panel would be unnecessary.
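If you want to sanity-check the color-count arithmetic above, or see why a 6 bit panel bands on a smooth gradient, a few lines of code will do it. This is purely illustrative; it is not how any particular panel handles its color, and the 6 bit version simply truncates rather than dithers.

# The color-count arithmetic, plus a quick look at why low bit depth causes banding.
import numpy as np

for bits in (6, 8, 10):
    print(f"{bits}-bit per channel: {(2**bits)**3:,} simultaneous colors")

# A smooth 0..255 ramp, then what survives if each channel only has 6 bits
ramp = np.arange(256, dtype=np.uint8)
six_bit = (ramp >> 2) << 2          # drop the two least significant bits
print("distinct levels in the ramp:", len(np.unique(ramp)))      # 256
print("distinct levels at 6 bits:  ", len(np.unique(six_bit)))   # 64, hence visible banding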

You can find a list of popular IPS displays with street prices here. The ASUS PA238Q seems to be the least expensive 8 bit panel that offers full sRGB coverage, at $300. I suggest that you look for reviews or a list of specifications for any display you are interested in to ensure that it is suitable for your purposes.



Once you are able to see things like dust spots, you need to be able to remove them, and there are two methods to accomplish this. Each has its good and bad points. The more conservative but costlier approach is to send your camera in for a sensor cleaning. Give it to someone else to do, and if something gets messed up in the process, they will (hopefully) take care of things. This can cost $50-$100 and you can be without your camera for several weeks.


dry and wet sensor cleaning system
You can always purchase a blower, dust brush and wet-cleaning swabs and solutions for around the lower end of what it costs to send the camera out. A blower and a brush should be standard equipment, since the majority of sensor dust is removable with these gentler tools. Use the mirror lock-up function to expose the sensor and, using a light to see what you are doing, use a blower intended for this purpose to gently blow the specks away. Sometimes you need a little "gentle persuasion" to get the more stubborn specks off. Under no circumstances should you use compressed air products, which use unfiltered air and can blast microgrit across your sensor, permanently etching it in the process. Actually, you would not etch the sensor itself, but the low-pass filter that sits in front of it. In any case, you would be looking at a costly repair, typically in excess of $200.


If you are daring enough, you might try the wet method. This uses a swab of lint-free material attached to a paddle that you dampen with a cleaning solution and wipe once across the sensor. Any time you touch the sensor you run the risk of scratching the filter, so you need to be extra careful and resign yourself to the $200 or higher repair should things go wrong. I have done it three times on my D200 with no damage, but everyone's mileage is different. If you are at all nervous about this, just send it in.


Friday, February 24, 2012

Exposure - How to Get it Right Most of the Time

With automatic cameras and their wonderful exposure setting systems it is not hard to get a good picture under normal circumstances. The ease with which even the simplest, least expensive point and shoot cameras can take a reasonable picture is astonishing. These little cameras are amazingly sophisticated, especially when you consider the low prices.

But it is when the not so normal circumstance presents itself that many newer photographers are at a loss. Strong side light, backlight, very bright scenes, low light action shots, water reflecting bright highlights, sunrises/sunsets, stage performances - these are just a few scenarios that can be challenging for a photographer that does not have a firm grasp of how to interpret the conditions and set the camera exposure accordingly. Camera manuals are of little help, since they are written for the non-technical user and for "average" lighting situations. Unless you take the initiative to investigate how exposure works on your own, you are likely to be in the "dark" as far as how it all comes together.

Back in the day, before cameras had built-in metering systems, a photographer would use a printed "exposure calculator" like one of the ones shown in this link http://www.mathsinstruments.me.uk/page67.html or would wing it, using a printed guide that relied on "rules of thumb" to arrive at a close approximation of an exposure setting. Kodak used to include an exposure guide in the box with each roll of film that looked like this:

Believe it or not, following these guides resulted in pretty decent exposures. But for really accurate results in challenging light, pros and serious amateurs would turn to electronic light meters to measure light and translate the measurements into camera settings.

It was not until the early 60s (1960s, that is) that a Japanese camera manufacturer by the name of  Topcon introduced a single lens reflex camera with a through the lens metering system. Up until then some of the fancier cameras were equipped with external light meters, some of which were mechanically coupled to the shutter speed and aperture setting mechanisms. But the meters were not very sensitive to the extremes of black and white - and it was difficult to measure reflected light accurately. Cameras with interchangeable lenses presented another challenge, since the reflected light measured from a wide angle was not necessarily the same as the light from a narrow telephoto shot given the meter's fixed angle of view.

At the time many light meters were like the one pictured at the right, set up to measure the light  falling on a subject rather than the light reflected by it. This type of metering is called Incident Metering. The hemispherical piece on the top of the meter - the Lumisphere - would capture the light and present it to the meter sensor as having the same luminance as an 18% gray card. This was actually pretty clever, since the reflectances of the elements in the scene could not affect the reading. This is important as the meter and its scales were calibrated for 18% reflectance to render it as middle gray. So taking a reflected reading of an 18% gray card and an incident reading of the light falling on that card in the same setting would result in exactly the same exposure recommendations.

As the technology improved, reflected light meters became more accurate and sensitive. A German company named Gossen engineered a series of extremely sensitive reflective light meters that had a little Lumisphere so that you could still take incident readings. They were somewhat modular, with attachments you could add to measure light in a narrower view, through a microscope, and so on. Later models included a flash option.

Gossen Luna Pro Incident/Reflective Light Meter


Lumisphere in place for incident reading


Lumisphere moved aside, exposing sensor for reflected readings

Today, nearly all modern portable cameras use some form of reflected light metering system that measures the light coming through the lens and falling on the digital imaging sensor or, in the case of a film camera, the film plane. Professionals working in large format film photography with natural light often rely on a handheld meter like the Sekonic Digital Master L-758DR pictured below. It can accurately pinpoint and measure a small, specific element in a scene using a very narrow angle of view, usually 1 degree; it can function as an incident meter; and it can be triggered by a flash system, so it can take incident readings of flash lighting. And it does this over a range of brightness that is far greater than what any digital camera can measure, with an accuracy of 0.1 f stop.

Using a light meter required a bit of thought in order to get good results - and in most situations it didn't much matter whether you used incident or reflected metering. In the case of incident readings, you could use the meter's reading 95% of the time without any exposure compensation and get a good image. With reflected readings you could measure any part of the scene, decide from experience how bright you wanted the metered area to appear in your image, and compensate appropriately. The incident reading was more foolproof, while the reflected reading required more experience but gave you more control.

Consider the following example of a picture of a pair of cats, one white and one black.

If you were to measure the reflected light either using a camera or a light meter where you are able to isolate the entire cat, the black cat reading would tell you there is not a lot of light and suggest that you use a slower shutter speed or a wide lens opening or a high ISO (more sensitive to light) to allow more light to hit the film or camera sensor, and vice versa for the white cat. For argument's sake, a black cat might reflect 1 1/2 stops less light than middle gray, and the white cat 1 1/2 stops more. If you were to use the white cat's reading as a reference, you would have to add 1 1/2 stops more exposure - either by opening up the lens or lengthening the shutter speed. This would bring the tonal value of the cat from the middle gray the light meter assumes, to a brighter value - along with everything else in the scene. You could use the black cat as a reference and decrease the exposure - experience and sample measurements will help you to place the value of anything that you read with a reflectance meter in the right place.

In contrast, an incident meter only reads the amount of light hitting the subject, disregarding the brightness difference between the two subjects. So the setting for a picture of the black cat would be no different than for the white cat. The dark cat would reflect less light and appear dark; the light cat would appear light. Using the meter's exact recommendation would result in a perfectly exposed image in most cases.

This is an important concept upon which all exposures are based. You CAN use a reflected light meter to accurately expose an image, but this is where experience and common sense come into play. It helps to think of the world in terms of shades of gray - to be more specific, 11 patches of gray, from complete black (Step 0) to complete white (Step 10), with nine more patches in between. These are spaced "one f stop" apart, which simply means that moving from black to white, each step reflects twice as much light as the previous one. The table below shows typical picture elements and what their values might be:




The table works on the premise that the average scene has a brightness range that generally does not exceed 11 f stops. This is a good thing, since most digital cameras have trouble recording an image when the brightness range goes over 10 stops. When encountering a scene with an unusually wide brightness range, the photographer must decide what is more important - highlights or shadows - and adjust exposure accordingly. Modern camera metering systems read an entire viewfinder's worth of tonal values, then do some very complex interpretations of what they read: taking an average of the entire scene, sometimes giving greater weight to the center area or to whatever the camera is focusing on, or taking into consideration the brightest areas and adjusting exposure to avoid overexposing them. But the one thing a meter cannot evaluate is the subject matter. A camera or handheld meter cannot tell that the light it sees is coming from a black cat - it will suggest a camera setting that renders the black cat at value V on the chart, when in fact it probably belongs closer to III. It would do the same thing if it read a white wall. That decision is left to the photographer.
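Here is a small worked example of the idea, using the rough cat numbers from above: the meter wants to render whatever it reads as middle gray, so you compensate by the subject's distance, in stops, from 18% reflectance. The reflectance values are illustrative assumptions, not measurements.

# Placing a reflected-light reading: compensation = stops away from middle gray.
from math import log2

def compensation_stops(subject_reflectance, middle_gray=0.18):
    """Positive result = add exposure; negative = reduce exposure."""
    return log2(subject_reflectance / middle_gray)

white_cat = 0.18 * 2**1.5    # assumed to be about 1.5 stops brighter than middle gray
black_cat = 0.18 / 2**1.5    # assumed to be about 1.5 stops darker

print(f"white cat reading: {compensation_stops(white_cat):+.1f} stops")   # +1.5 (add exposure)
print(f"black cat reading: {compensation_stops(black_cat):+.1f} stops")   # -1.5 (reduce exposure)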

A very useful tool is an 18% gray card - as long as you understand that it may give you an erroneous reading, but it will do so in a linear fashion. What I mean is that it could read 1/2 stop brighter, but it will affect all your settings by the same 1/2 stop. The reason for this, according to Thom Hogan, is that meters are actually calibrated to 12% reflectance, not 18%, making all readings off by about 1/2 stop. But, as always, your mileage can differ, so it's always best to test your card under typical lighting situations and check the camera's histogram (not the software's histogram) to see if the reading is dead center. If it is off to one side, you have to dial in enough exposure compensation to bring it back to center. You can visit http://www.bythom.com/graycards.htm for a more detailed explanation.

I will use the next few lines to make some general statements about how shutter speed, lens opening and ISO interact, and how that affects your exposure setting, with the intent of following up with greater detail in future posts.

ISO + Shutter Speed + Fstop = Correct Exposure - they must always be in balance, and this is ALWAYS true. If you use a lower ISO (less sensitive), you need to open the lens or slow down the shutter speed. Remember, the Fstop number is the ratio of the focal length to the diameter of the lens opening, so as you increase the F number the lens opening gets smaller. Just to totally confuse you, the shutter speed numbers on your camera represent the denominator of the fraction of a second that the shutter is open and admitting light to the film or sensor - so if the camera says 250, it assumes that you know it means 1/250 of a second, 4 means 1/4 second, and so on.

Remember that each f stop represents a doubling or halving of the light getting to the film or sensor. I'll start with ISO values, since these are a bit more intuitive. An ISO value of 200 is 2x as sensitive as 100. A value of 400 is 2x as sensitive as 200. To go from an ISO of 100 to 400 means that you are doubling twice - or 2 f stops.

Shutter speeds are similar. It's fairly straightforward to understand that if your shutter is set to 1000 (1/1000 sec) and you change it to 500 (1/500 sec) you will be letting in 2x as much light. If you slow it down to 1/250, you will be letting in 2x again more light - moving from 1/1000 to 1/250 you are adjusting the light by 2 f stops.

Now things get a little hairy. Lens fstop numbers are not intuitive, since they represent the ratio of the focal length to the effective diameter of the lens opening. In the simplest of examples, a 200 mm lens with a maximum opening of 100 mm in diameter would be listed as F2. If the same focal length were an F4 lens, its opening would be 50 mm in diameter. The difficulty is introduced when you realize what your high school geometry teacher was trying to get you to learn - the AREA of a 100 mm diameter circle is 4 times the area of a 50 mm circle and would let in 4x more light, or 2 f stops. With lenses, the standard sequence runs 2 - 2.8 - 4, with each interval representing one fstop. So changing from a lens opening of F4 to F2.8 doubles the light coming in, or one Fstop, and the same again going from F2.8 to F2. Because of these relationships, if you double the exposure time (slow the shutter by one stop) but close the lens down by one Fstop, the image will have the same brightness. You could also shorten the shutter speed by one stop and double the sensitivity (ISO), and end up in the same place exposure-wise.
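One way to convince yourself that these trade-offs really do balance is to turn each setting into stops of light and add them up. The little sketch below does exactly that; the reference point is arbitrary and the settings are made-up examples.

# Each exposure setting converted to stops of light, so equivalent combinations
# sum to (almost exactly) the same total.
from math import log2

def exposure_stops(iso, shutter_denom, fstop):
    """Stops of light relative to an arbitrary reference of ISO 100, 1 second, f/1."""
    return log2(iso / 100) - log2(shutter_denom) - 2 * log2(fstop)

a = exposure_stops(iso=400, shutter_denom=250, fstop=8)     # ISO 400, 1/250 s, f/8
b = exposure_stops(iso=400, shutter_denom=500, fstop=5.6)   # one stop faster shutter, one stop wider lens
c = exposure_stops(iso=200, shutter_denom=125, fstop=8)     # half the ISO, twice the exposure time

# The three agree to within about 0.03 stop; the tiny difference is only because
# the marked value "5.6" is a rounded version of the exact square root of 32.
print(round(a, 2), round(b, 2), round(c, 2))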

But what constitutes "correct" exposure? Typically it is where you are able to capture all the information possible in your image. Which begs the question - how much is enough? Too much? Not enough? A better working definition of correct exposure is the setting that will capture the information the photographer wants to show.

This image of a Bufflehead was a particularly challenging exposure situation - a mostly dark bird, with bright white markings, bright sun that was low in the sky causing deep, long shadows, and water. It was shot with a 600 F4 and a 1.4x extender, which meant that the largest lens opening possible was effectively F5.6, but to provide better image quality I needed F8. So that set things up for a shutter speed that was short enough to stop the wave action and any random small movements from the bird. The end result was to adjust to a  higher ISO - 1000 in this case - to ensure that all of the above conditions were satisfied.




Even with all of the above in place, there was still a looming challenge that I decided I would not try to solve in the field. The brightness range was greater than what my camera could record. The rule of thumb is that if you want any detail in the white areas, take care not to overexpose them. But that meant that all of the dark areas would have been "lost in the mud." So I decided to compromise a bit of the highlight detail in order to get the subtle iridescence from the neck and sides of the head, and show the all-important eye. The dark areas did in fact go to "mud," but I was able to selectively lighten, or "dodge," the darker areas to reveal the texture and color of the plumage.

In the interest of keeping things as simple as possible, I will describe, in broad terms, what happens when you tinker with the three elements of exposure.

ISO - the less sensitive the setting (lower number), the less noise/grain you will have in your final image. You will have greater detail and sharpness, and a broader "dynamic range" (more about this in a future post).

Shutter Speed - slower speeds let in more light but have less "motion-stopping" capability. This is not necessarily a bad thing - you want a longer exposure to show things like fireworks, headlights of cars in traffic at night, or star trails, or for the special technique where the photographer purposely uses a slow shutter speed and pans the camera with a moving subject, rendering the subject relatively blur-free while totally blurring the background, giving the impression of extreme speed. On the other hand, if you want to stop the beating wings of a hummingbird you'd better use as fast a shutter speed as possible.

Aperture/Lens Opening - big openings let in lots of light; however, few lenses other than the most specialized designs are sharpest at their widest opening. If you see a lens that is F2.8, or as big as F1.4, there is a good chance that the designer made that lens tack sharp at that opening. Many lenses have a sweet spot at F5.6-F11 where they are sharpest. Another consideration is depth of field: when you focus at a specific point, at what distance do things begin to look sharp in front of the focal point, and at what distance behind it do they become unacceptably out of focus? Smaller openings (larger numbers) give you the advantage of a deeper depth of field, while bigger openings (smaller numbers) provide only a very shallow zone of sharpness. You have all seen pictures where the subject is nicely sharp and the background is all blurry and soft. That is a dead giveaway that the lens was pretty wide open. Telephoto or long focal length lenses have shallower depth of field than wide angle lenses do at the same distance, but at the same magnification (image size on the sensor) the depth of field is exactly the same. All this means is that at the same lens opening a 200 mm lens at 20 ft is going to have the same depth of field as a 100 mm lens at 10 ft.
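That last claim is easy to check with the usual close-range depth of field approximation, DoF = 2 x N x c x (m + 1) / m^2, where N is the f number, c the circle of confusion and m the magnification: at the same magnification and f number, the focal length drops out. The sketch below assumes a 0.03 mm circle of confusion and simple thin-lens math, so treat the numbers as ballpark only.

# Ballpark check that depth of field depends on magnification and f-number,
# not focal length. The circle of confusion (0.03 mm) is an assumption.
def magnification(focal_mm, distance_mm):
    return focal_mm / (distance_mm - focal_mm)       # thin-lens approximation

def dof_mm(f_number, m, coc_mm=0.03):
    return 2 * f_number * coc_mm * (m + 1) / m**2    # total depth of field in mm

m_200 = magnification(200, 20 * 304.8)   # 200 mm lens at 20 ft
m_100 = magnification(100, 10 * 304.8)   # 100 mm lens at 10 ft

print(round(dof_mm(4, m_200)), round(dof_mm(4, m_100)))   # same DoF at f/4 (about 216 mm each)
print(round(dof_mm(8, m_200)))                            # stopping down to f/8 doubles it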

As you can see, there are a lot of things that must come together in order to get consistent results, but the most important thing is to "own" the fundamentals. These building blocks of exposure, once grasped with confidence, will allow everything else to fall into place. You will know intuitively what is possible without giving it a second thought, and what you need to do with your settings to get the finished product looking the way you want.





Monday, February 20, 2012

HDR - How to Expand the Dynamic Range of your DSLR



You've seen it on your smartphone, maybe you've seen it on your point and shoot camera - HDR. You may have even used it without knowing what it is or how it works. Hopefully this will help you gain a better understanding of HDR and it will open up some possibilities for seriously better pictures in certain situations.

In simple terms, HDR - or High Dynamic Range - is a way to capture a range of brightnesses that is beyond the camera's capacity to record in a single exposure.

In this first image you can see that the camera's exposure recommendation results in pasty-looking clouds lacking any tonality or detail, and the image seems darker than it should be for a sunny day. The shadow areas lack the detail and "punch" that were present in the original scene. This is quite typical: as the camera's metering system tries to cope with the super-bright sky elements, it tends not to overexpose the sky too badly and so it sacrifices the darker areas, on the theory that you can always recover shadow information - sorta.


This is the middle image in the HDR sequence.



The next shot shows what happens when you underexpose the scene by 2 stops. The clouds look pretty good, but everything else has gone to pot.



This shot shows all the shadow areas with rich detail and nicely exposed, but everything else is washed out.




While I might be able to work with the image that is underexposed by only one stop, and possibly the one that is completely dark,  I would need to take heroic steps to introduce fill light and highlight recovery and dial in large amounts of brightness. But the result will be noisy in the shadows, and it will lack the overall vibrance of the original scene. This scene probably had a brightness range of 13-14 fstops.

In practice, the very best professional digital cameras can faithfully record up to 10 fstops of dynamic range. What this means is that if you were to use a light meter to read the light coming from the darkest area of a scene in which you want to show some detail, then you read the light from the brightest area with detail, there would be no more than 10 fstops difference between the two readings.

Most of you won't have a light meter, but most cameras offer a spot metering mode that will allow you to narrowly and precisely select tiny areas for exposure evaluation. As the name implies, it measures light from just a small spot in the center of the frame, rather than the entire screen. For the most part this is fairly accurate, and if you are familiar with Zone System metering, it can help you nail the exposure - but this is a topic for a future post.

Another way to think about this is to look at your camera's histogram. If the histogram is stretched out and making full contact with both the left and right sides, there is a good likelihood that you are going to lose detail and texture in both the highlights and the shadows. To a small degree shadows can be "lifted" or lightened in Adobe Camera Raw or any reasonable raw converter using commands named "Fill" or "Shadow Recovery." To a lesser degree ACR can rebuild highlight detail from highlights that are not severely blown. It does this by looking at each component of RGB (the red, green and blue channels) and copying the detail in the least blown out channel to the other two. But the shadows will have noise and the highlights will look fairly pasty.

If you have one of those situations where you have to get both extremes, HDR will allow you to combine multiple exposures at different exposure settings, blending them into a single ultra-wide contrast 32 bit image. A typical HDR image will consist of two or more images - one or more that are underexposed to preserve the highlights and one or more that are overexposed to preserve the shadows.

There are some technical hurdles to be overcome, however. The resulting ultra-wide contrast image is a very large, 32 bit file, which can not be displayed on a conventional monitor or printed. Rather than going into a technical description here, an excellent description of bit depth, 32 bit files and how they relate to HDR photography can be found at http://www.cambridgeincolour.com/tutorials/bit-depth.htm
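To make the merge-and-tone-map idea a little more concrete, here is a minimal sketch using OpenCV. This is not what Photomatix or Photoshop do internally, just an illustration of the same concept; the file names and exposure times are placeholders for a real bracketed sequence.

# Sketch of merging a bracketed sequence into a 32-bit image, then tone mapping it.
import cv2
import numpy as np

files = ["bracket_-2.jpg", "bracket_0.jpg", "bracket_+2.jpg"]       # placeholders
times = np.array([1/1000, 1/250, 1/60], dtype=np.float32)           # matching exposure times, seconds
imgs = [cv2.imread(f) for f in files]

# Merge the exposures into a single 32-bit floating point radiance map
hdr = cv2.createMergeDebevec().process(imgs, times=times)

# Tone-map the 32-bit result down to something a normal display can show
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("merged_tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))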

So let's get on to the nitty gritty of HDR photography - from planning a shot to finished image.

If you think a scene is a candidate for HDR, take a test shot at the camera's recommended exposure settings and look at the histogram and the image in preview. If the image looks normal and the histogram does not touch the left and right sides, HDR is not going to make a difference. There are other software packages, such as Lucis Art and Topaz Adjust, that can give you the HDR "look" by manipulating local contrast and shifting and boosting colors to enhance detail and shadow rendition.

The HDR candidate has a histogram that looks like this:


Clearly, a single image will not be able to capture all the data at either extreme of the tonal range.  You can see how the camera tends to underexpose - most of the image's information is concentrated on the left side.

It's best to shoot your scene on a tripod, although I take many HDR scenes hand-held. I use the RAW file format to ensure I record everything the camera can see. I set the camera to aperture priority and manual focus, and if the lens/camera has image stabilization or vibration reduction, I turn it off. This way the camera will not re-acquire focus, adjust the aperture, or do anything else that might affect the image backgrounds by changing the depth of field.


The easiest way to take multiple exposures with most DSLRs is to use automatic bracketing. Many Nikons have the ability to bracket up to 9 exposures, up to one Fstop apart, in a rapid sequence by holding in the shutter for the entire series.


Most entry-level pro cameras allow you to do this with a single shutter press, automating the process. In a Nikon D200 there is a single button on the back that controls bracketing. The D300 requires you to assign the function to one of the assignable buttons. If you have a remote shutter release (wireless or tethered) use it, along with mirror lock up - this will minimize any human-based vibrations especially on night exposures.

Either camera will allow you to use an intervalometer, so that you can record the exposure sequence automatically in a "hands off" fashion. For those not familiar with this function, an intervalometer is a built-in function that records images on a predetermined schedule, within either a given time frame or a set number of pictures. If your camera does not offer the automation described above, you can always take a shot and adjust the shutter speed manually between frames.

If you are careful not to move, you can shoot a 3-5 shot HDR scene without using a tripod. I have even done HDR scenes at night, using 1 second exposures, braced against a solid surface, and breathing normally.

But planning and shooting the image is only half the fun. You get to "play" with it and turn it into something special and uniquely yours in the next part - the post processing.

There are several paths you can use to process your HDR images -

·         Photoshop CS3 and above (Merge to HDR command, Tone Mapping command)
·         Photomatix Pro (standalone and Photoshop plugin)
·         Nik HDR Efex (Photoshop plugin)
·         Enfuse (Lightroom plugin)
·         Artizen HDR (standalone)
·         DynamicPhoto HDR (standalone)
·         EasyHDR (standalone, easy interface, entry level)
·         Essential HDR (standalone, entry level)
·         HDR Darkroom (standalone, easy interface)
·         HDR Photo Studio (standalone)
·         Luminance HDR (standalone, free, difficult interface)
·         HDR Pro
·         Picturenaut (free, entry level, no frills) - and I am sure there are others.  
The interpretation of an HDR image is very personal - each artist's eye is different, and most software packages give you a very broad set of adjustments to do just about anything you want.

Personally I use Photomatix Pro - it provides enough adjustments to render images that are realistically natural, yet enough adjustability to get really creative. I have tried Photoshop's Merge to HDR and tone mapping, but I found that I am spoiled by the flexibility of Photomatix. Here is a workflow that I often use.

This is a set of 5 that I took for the bridge scene above.


In Photomatix, I browse for them - Photomatix will allow you to select multiple images.



After pressing OK, you will get a dialogue box that allows you to make some adjustments and determine how you want to handle things like image ghosts (things that moved during the exposure sequence), noise reduction, alignment, color space, etc.



Pressing OK will merge the images into a single, 32 bit image and display it in the Image Editor.


It looks pretty awful - most displays cannot show an image with such color depth and range of brightness. Tone mapping will take all the information and do a decent job of assigning colors and values that are displayable/printable. I next save this intermediate image, in full 32 bit, as a .HDR file. Then I select ToneMapping / Fusion to open up the Tone Mapping Editor.



You can see the image now looks a lot better. The histogram is looking pretty good too. Here you get to have fun - play with all of the  adjustment sliders to familiarize yourself with what they do.



I usually start by adjusting the black, white, gamma, saturation and smoothing.


There is a small option panel called Lighting Adjustments that will take you through a series of 5 presets that I sometimes use.


You can also select from a set of presets by clicking on the strip of images at the bottom of the screen.



Next I scroll down the settings menu to display the Advanced Options and adjust the sliders to get things looking exactly the way I want them to.




When done, I save the image as a 16 bit TIFF file so I can edit and adjust things a bit more in Photoshop.


Photomatix will provide a default file name that is a combination of all the names of the component files.

I use the Open As option to open the image as a RAW file in ACR. I usually adjust perspective and brightness, sometimes tone down (reduce the saturation of) the colors in both ACR and Photoshop, and then I perform final sharpening. At this point I either save as a 16 bit TIFF, or as a JPG, which is a lot smaller and can usually be sent as an email attachment or uploaded to a website.

Tip: You can create a pseudo HDR by saving three versions of a RAW image in ACR - each with a different exposure compensation value - -1, 0 and +1. Then you process normally in Photomatix. Also, you can open a single image in Photomatix and apply tonemapping adjustments. The results will not be as dramatic, but it can produce some pretty fine images.



Sunday, February 19, 2012

Sunrise over Greenwich Point - HDR



Out of the house at 6:15 AM to catch a sunrise - and maybe some nice bird photographs this morning. Weather forecast called for calm winds, clear sky, and the Sun peeking out from below the horizon at 6:44.
I just barely made it in time to grab about 3 shots. I like to do sunrises/sunsets in HDR because the light of the sun is very intense and would produce an exposure setting that skews everything to the dark side in an effort to avoid overexposing the sun. The HDR above is a 7 exposure shot - 2 under, 1 under, a middle exposure, 1 over and 2 over. If you are curious as to what the heck I am talking about, check in tomorrow when I post a how-to tutorial on HDR photography.

Here are a few more HDR shots from this morning.



Friday, February 10, 2012

AF-On Button - A Better Way to Focus



Damn, my camera didn't focus where I wanted it to! Has this ever happened to you? Automatic focus, no matter how sophisticated it is, can be both a blessing and a curse. When it works, it does a great job - until it doesn't. Here is a quick tip to help you get more in-focus images. It involves re-thinking and retraining how you focus. That sounds a lot worse than it actually is.

By default, most DSLR cameras come set up so that a half press of the shutter will enable VR, set the exposure, and acquire focus. Generally, this is way cool, but sometimes you want to make some of those decisions yourself.

Focus reliability is my big pet peeve with such an arrangement, particularly when shooting with a lens that is wide open in less than optimum light, or when you want to isolate the subject from its background - and you need to be very precise with your focus selection.

On the more advanced Nikon camera bodies you will find a menu selection that allows you to disable focus on shutter press,



and another selection that allows you to assign the focus activation to a different button - usually the AF-On button.



Why bother?

Let's consider what happens when you are shooting people at an event. You frame the face and use the focus aids in the viewfinder to get the eyes in focus, then you reframe the image to make a reasonable composition, at which point you press the shutter - and you get a perfectly focused torso. This happens because the camera re-acquires focus when you press the shutter.

You could use single servo focus, so that once you acquire focus it remains locked until the picture is taken. But I can't tell you how many times I left the camera on that setting, then started to shoot a moving subject, and ended up with a whole series of images that were out of focus, simply because I forgot to set the camera back to continuous focus.

When you disengage focus from the shutter release and assign it to the AF-On button, you have the freedom of activating focus only when you need to. A single press of the button lets the camera behave as if it were in single servo mode - acquiring focus only when the button is pressed, and not re-acquiring focus when you press the shutter release. Yet if I want to follow a subject that is moving around, I can hold the button and "track" the subject, and still have the flexibility to use it in "single servo" mode with a single press - without ever having to switch the camera from single to continuous or vice versa.

Another benefit is battery life. If you use VR lenses, with the default setting, each time you half press you are focusing and activating the VR, which really chews through batteries. Separating the two functions only engages VR when you press the shutter. You will get anywhere from 25% to 50% more shots by using the AF-On button to focus.

It does feel very awkward at first, especially if you are one of those photographers who also sets the audible "beep" to confirm focus. In order to take best advantage of the AF-On button, you will have to train your thumb to focus, use your eye in the viewfinder to check the dot that shows you have acquired focus, and use your index finger just to release the shutter. Trust me, once you learn this new way, you will never go back to the old. Within a couple of outings you will be comfortable with the new method and your percentage of "keepers" will improve. There is a reason why sports photographers almost universally prefer this method of focusing over the default one.

Thursday, February 9, 2012

Panoramas in Photoshop


Panoramas are wonderful ways to repeal the laws of physics – at least when it comes to optical designs. Here is the issue. Your eyes can view around 180 degrees horizontally and about 135 degrees vertically. There is not a single  camera/lens system that can do that  in a single image without all sorts of distortion. Well, sorta.
Post-processing does offer a solution - panoramic stitching. Yes, I know, many point and shoot cameras, smartphone cameras and other photographic devices offer a crude version of in-camera stitching. But creating a pano in a dedicated panorama application or using the Photomerge-Panorama process in Photoshop will yield  much better results, with seamlessly smooth transitions between the component images. And, in the case of Photoshop, the process is straightforward and fairly easy. You don’t necessarily need any fancy camera, tripod, tripod head etc to do it, although I will admit that those accessories do make things easier and help improve your results.
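For the curious, the core of what Photomerge does - aligning overlapping frames, warping them, and blending the seams - can be sketched in a few lines with OpenCV's stitcher. This is only an illustration of the idea, not the Photoshop process, and the file names are placeholders.

# Minimal panorama stitch from an overlapping left-to-right sequence.
import cv2

files = ["pano_01.jpg", "pano_02.jpg", "pano_03.jpg", "pano_04.jpg", "pano_05.jpg"]
images = [cv2.imread(f) for f in files]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched_panorama.jpg", pano)
else:
    print("Stitching failed, status code:", status)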

Continue reading to see how I did the pano of the mouth of San Francisco Bay looking north towards the Marin Headlands.

I started by taking several test shots to check the histogram for exposure. I looked at both ends of the histogram to see if anything was touching either side which would have meant that I could have lost information in shadows or highlights. This would have been the time to make any exposure compensation adjustments. Happy with the results, I took my sequence of shots, being careful to overlap each image about 50% with the prior one. I also turned on the grid display to ensure that my horizon remained at more or less the same level in the viewfinder.


Once in front of the computer I open Adobe Bridge to identify and highlight the images. Using the right click on one of the images I open all of them in ACR.


In ACR I was able to look them over, pick a representative image, and apply the lens correction, clarity, sharpening, black and white levels, exposure adjustment, etc. to get it looking pretty good.

Then I used the Select All button (top left) followed by the Synchronize button to apply the adjustments to all of the images. With all images still selected, I pressed the Open Images button at the bottom right. This loaded the images into Photoshop as individual images.


The next step was to combine the individual images into a single file, with each image on its own layer. Photoshop provides the File-Automate-Photomerge command, which brings up a dialogue where you select the type of merge (Auto Panorama) and the source of the files - in this case I used the Add Open Files option - and check the Blend Images Together box.


This results in the following screen - note the layers in the layer palette. They are masked to display only the part that each contributes to the entire image.




Using the Layer-Flatten Image command I merged all the layers into one file which Photoshop names Untitled_Panorama1 by default. At this point I no longer need the contributing files so I can remove them without saving them.


You can see that the horizon is pretty straight, but I did tilt the camera a bit as I changed my body position from right to left, leaving some croppable areas, and also some areas that can be augmented with Content Aware Fill.
Selecting the Crop Tool from the palette, I cropped the left and the right borders away to clean up the sides, but left a little white space above and below the image so that I could fill it and end up with a slightly taller image.




With the Magic Wand selection tool picked I clicked in the white space on the top left of the image to select it. Using Select-Modify-Expand, 



I added an 8 pixel expansion to the border. This helps ensure a seamless fill. 

Then I used Edit-Fill-Content aware to fill the white area. I repeated this on the other areas until done.




There were still a few areas that needed a little additional work - in particular the top edge and the top left corner. The Clone Brush came in handy as a way of removing the branches hanging in the air that were created during the Content Aware Fill procedure.








And this is the final product in Photoshop




That’s pretty much all there is to it. Once you get the hang of it, you can process a RAW-based pano comprised of 5-7 frames that require minimal adjustment in about 5 minutes.