I’ve been asked a few times how to take colorful landscape photographs, and ended up writing an answer on Quora and republishing it here. It’s always best to get the shot right in the camera rather than relying on post-processing. While I always shoot RAW, as that gives me the most leeway to fix things later and add artistic effects, it’s a bad habit to lean on RAW processing to fix things you can get right in the camera.
The good news? There are two common reasons for washed out, low contrast images, and only one extra trick that’s worth knowing to get vivid, colorful photographs.
Exposure is the most common error, so I will cover that first. We need to expose for the mid-tones of the image to get saturated colors. Highlights and anything at the right of the luminance histogram tend to white. The more over-exposed your image, the lighter the colors and the more washed out it looks.
This landscape is properly exposed per the camera, but looks washed out. The histogram shows that the pixels are bunched up to the right, with lots of high RGB-value pixels:
High RGB-value pixels are basically shades of white, the opposite of saturated colors, so the histogram shows that the image is washed out because it is mostly composed of white-ish pixels. We can reduce the exposure value (EV) to fix this at capture: the simplest way is to use the exposure compensation dial, but lowering the ISO, using a faster shutter speed, or stopping down the aperture will work too.
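Exposure compensation is just doubling or halving the light reaching the sensor, one stop at a time. Here’s a minimal sketch of the arithmetic; the metered shutter speed is a made-up example, not taken from the images in this post:

```python
# Each stop of exposure compensation doubles (+1 EV) or halves
# (-1 EV) the light reaching the sensor. At a fixed ISO and
# aperture, the whole adjustment lands on the shutter speed.

def shutter_for_ev(base_shutter_s, ev_delta):
    """Shutter speed (in seconds) after applying `ev_delta` stops of
    compensation; negative EV means less light, so a faster shutter."""
    return base_shutter_s * (2.0 ** ev_delta)

# Metered at 1/125 s; dialing in -2 EV to protect the mid-tones:
print(shutter_for_ev(1 / 125, -2))  # 0.002, i.e. 1/500 s
```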
Here’s that same scene about -2 EV from the previous image:
And here’s the histogram with the actual exposure values for this image:
Now you can see that there is a little clipping in the blacks (that’s what the red triangle at the top-left indicates) and we also have no pixels at all with high luminance (the gap between the blue pixel “mountain peak” and the right-hand edge of the histogram shows this). Technically, this picture is not exposed correctly — some detail is being lost in the darkest shadows, and we’re not using the full range of the sensor. But that’s why photographers make exposure decisions, not cameras. The second image conveys the saturated colors of the lake and the trees, and we don’t care about the lost shadow detail in the lakeside on the right.
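The two things the histogram reveals here, black clipping and an empty right-hand region, are easy to check programmatically. This sketch uses a synthetic 8-bit luminance array rather than a real photo:

```python
import numpy as np

# Synthetic 8-bit luminance channel standing in for a real image;
# values are capped below 200, so there are no bright pixels at all.
rng = np.random.default_rng(0)
luma = rng.integers(0, 200, size=(100, 100), dtype=np.uint8)

clipped_black = np.count_nonzero(luma == 0)    # pixels lost to pure black
clipped_white = np.count_nonzero(luma == 255)  # pixels blown to pure white
right_gap = luma.max() < 255                   # gap at the histogram's right edge

print(clipped_black, clipped_white, right_gap)
```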
Now that we understand where saturated colors come from, we can understand why we’d use other exposure-reducing techniques like filters (see below). But first…
The second most important reason for lack of saturation is taking a photo in bad light. Typically the worst light is at noon, because illuminated surfaces are going to tend to white as they’re getting a lot of light from the fiery orb in the sky. If you compensate for this in exposure, the shadows will simply be black because the camera sensor cannot capture the enormous light difference between the two. So, take the photo when the light is softer and the dynamic range can be captured by the camera.
The photographer Syl Arena summarizes this well: “If you want to create interesting light, you have to create interesting shadows.”
This is why landscape photographers scout the location to understand where the sun will be and what path it will take, and have smartphone apps that tell them sunrise and sunset at that location. Get up early for soft morning light, know when the “golden hour” is when the setting sun washes the scene with warm light, and learn how to use the “blue hour” after the sun has set.
Notice the very different moods of the same scene captured at different times of the day. Think about the emotion you want to evoke.
And this is just ambient light. There’s also a whole world of lighting for photography using strobes (flash) and, increasingly, continuous lights that can help you create interesting light and interesting shadows.
If we can get good exposure and good light, then we are 90% of the way there, and can now look at getting the last 10% of saturation and contrast possible at capture.
All landscape photographers have at least one polarizing filter. It cuts out reflected light, which has the effect of cutting haze in a landscape image and really making clouds “pop”. Reducing reflected light also increases contrast and saturation. All modern cameras use circular polarizing filters. You should get one for each of your landscape lenses.
Note that the effect of a polarizing filter is one thing that cannot be fixed later in RAW processing. I know, I know, there are “haze cutter” presets/actions for Lightroom and Photoshop but they basically work by fiddling with the local contrast in an image. The polarizing filter globally screens out light you don’t want in your image, and that isn’t possible to take out in post.
You can also buy “graduated neutral density” filters. You can use these to selectively reduce exposure for situations where a standard exposure would blow out some portion of the image — the sky for example — while the rest of the image (and those mid-tones) is properly exposed. Neutral density (ND) filters are basically translucent gray glass or plastic. They simply block some amount of the light passing through them. Graduated neutral density filters are clear on one edge and gray on the other, with a transition in opacity between the two.
These are examples of fairly sharp changes, but you can also buy filters with a more gradual transition from gray to clear. Using a filter holder mounted on the end of your lens, you can position the clear part of the filter on the part of the image that you’re exposing for, and use the gray part on the sky or whatever needs to come down in exposure.
You can get a very similar effect as a graduated ND filter in RAW processing. Basically it’s an exposure gradient applied to the image. It isn’t as good as doing it at the time you take the shot, because it is limited to the exposure leeway of the RAW file (typically 2.5 EV) and it introduces more noise into the image.
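The post-processing version of the effect amounts to multiplying each row of the image by an EV-dependent factor. Here’s a minimal sketch with a synthetic linear-light image; the two-stop gradient and the image values are illustrative:

```python
import numpy as np

# Synthetic linear-light image: bright sky at the top (1.0),
# darker ground at the bottom (0.2).
img = np.linspace(1.0, 0.2, 400)[:, None] * np.ones((400, 600))

# Exposure gradient from -2 EV at the top row to 0 EV at the bottom,
# mimicking a soft-edged graduated ND filter.
ev = np.linspace(-2.0, 0.0, img.shape[0])[:, None]
graded = img * (2.0 ** ev)

print(graded[0, 0], graded[-1, 0])  # 0.25 (sky pulled down two stops), 0.2 (unchanged)
```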
Here’s an example of using a graduated ND filter. Without the filter, the sky is pretty boring while the chapel and graveyard are well exposed:
And after, the graduated ND filter reduces the sky exposure revealing some interesting clouds and saturated blue sky. It’s also giving us more saturation in the red color of the chapel roof line:
Both of these were taken with a polarizing filter, which is why the cloud texture is so visible in the sky, and it helps increase the overall contrast of the image.
Photography began with the concept of capturing light in a single plane — the idea of what’s in and out of focus in an image has been central to photographic composition for more than 100 years. The photographer’s job has been to direct the viewer’s attention by deciding which subjects are in focus, while ensuring that the background (and less frequently, the foreground) are appropriately blurred. The Lytro camera turns that entire concept on its head by delivering a “living image” where the viewer can change the point of focus to explore the image.
When my Lytro arrived, I couldn’t wait to try it out. What follows is my review of the 1.0.0 version of the Lytro camera and 1.0.0 software following my attempts to take good photographs.
I’d love to embed some Lytro photos in this post, but that doesn’t seem to work. So, instead, here’s a link to my Lytro gallery (opens in a new window): Lytro example
The Lytro camera “out of box” experience is very well done and will be very familiar to anyone who owns an iPod or iPhone. It comes in a white box, with the camera held firmly in place with plastic inserts. Underneath the camera is a white cardboard box containing the paperwork, lens cap, cleaning cloth, wrist strap and USB cable.
The camera itself is small — just 4.4 inches long — and arrives with some charge in the battery so you can start shooting immediately. It’s shaped like a square tube, with the lens at one end and a small viewing screen at the other:
A small power button on the bottom turns on the camera, and the shutter button is on the top. Zooming is accomplished by dragging a finger left and right on the rear top edge of the camera. It’s a little clunky, as it can be hard to set just the right composition, but it works.
The camera is fully automatic: it sets shutter speed and ISO automatically with a fixed f/2 aperture. You can tap on the rear screen to tell the camera the exposure metering point and it will do its best. The Lytro needs a wide aperture to capture the best light field — the direction of the light as well as its intensity and color. There’s a little clicking sound when you point the camera at a very bright object like the sky on a sunny day, and exposure is reduced. I suspect it’s inserting a neutral density filter inside the lens barrel to reduce the light intensity.
In “Everyday mode” (the default), the Lytro takes photos instantaneously when you press the shutter button. Lytro makes much of this “instant capture” capability in its marketing, because a light field camera doesn’t have to focus. In theory, yes. In practice, not so much, which is why there’s “Creative mode”, accessed by swiping a finger upwards on the rear screen and tapping a small icon, allowing you to focus the camera. The physics of optics has crashed the “focus free” light field party here: the camera has to focus the lens so that the captured light field has a useful range of potential focus planes.
The net? The Lytro camera requires very careful framing and composition of subjects to give the viewer an interesting set of subjects separated by distance. While the camera might be point and shoot, the composition work is anything but instantaneous! Creative mode offers more flexibility in composing the subjects by distance (handy if they are inanimate) and careful control of the camera to ensure it is properly “focused”. The square (1:1) aspect ratio of the image doesn’t help with composition, making it hard to get everything into the frame. The camera would be easier to use if it had a wide aspect ratio, as there would be more room in the frame for the subjects.
This is really the crux of the challenge presented by the Lytro: how to use it to produce an effective photograph that is interesting and aesthetically pleasing for the viewer. Many of the examples on the Lytro website are contrived compositions with near and far elements to show off the novelty of being able to re-focus after the fact. If that is all there is for light field images, then its impact on photography has been dramatically over-hyped.
Plugging the camera into your Mac (Windows support just shipped) allows you to install the Lytro Desktop software — a nice touch. Once installed, you unplug and replug the camera to download images. You can then view the images — though not at full size — and change the focus point. You can also share images on Facebook and Lytro’s own website, and export JPEGs (so I’m told — I have never managed to find this function in the software). The only effective way to show an image is through the Flash file hosted on the Lytro website, which is severely limiting.
Lytro touts the camera’s low-light capabilities, but I think that’s a mistake. There is significant noise, banding, lines and other odd artefacts in low light. Shutter speeds are low, even at f/2, so camera shake is a real problem, and there’s no flash integration.
Overall, the Lytro camera is intriguing more for future possibilities than what it can do today. Version 1.0.0 is pretty basic, and effective composition is extremely time-consuming and requires full cooperation of your subjects. With light field photography, we ought to be able to produce 3D photographs, change the viewer’s position, tilt and shift the plane of focus to our liking… there are many more possibilities than capabilities in the current software. Today, I find myself leaving the Lytro at home more often than I take it out, and that’s a shame.
Thinking about buying a new digital camera? The best advice I can give you is to avoid maxing out megapixels: you’ll get sharper photographs with less grain (noise). This seems counter-intuitive, but it’s the effect of camera technology running into the physical limitations of lenses and of light itself.
David Goldstein has written a full-length paper that explains the physics, but the key take-away on megapixels is simple: beyond a point, adding more of them makes pictures noisier and softer, not better.
Why is this? Fundamentally, camera manufacturers have to balance increasing resolution from more megapixels with two competing forces: noise from the digital camera sensor and softness caused by lens diffraction. You’ve probably seen that photos taken at ISO 1600 on your camera are much more grainy than those taken at ISO 100 — that’s what noise looks like in your pictures. At the same time, the lens aperture (f-stop) puts an upper limit on how much detail can be resolved by the camera. Overall, more megapixels don’t automatically mean better pictures.
Noise is a result of the fundamental physics of light and the way that digital color cameras capture images. Light is made up of photons, with more photons meaning brighter light. The digital camera sensor counts the number of photons that arrive at each pixel to build the overall digital image. In a 21-megapixel 35mm digital camera such as the Canon 5D Mark II, there are about 20 photons hitting each sensor pixel in the darkest (shadow) areas of the image, assuming a perfect lens. That’s not very many, and it gets worse because of the way digital cameras deal with color. A red, green or blue filter covers each pixel so that only light of that color is detected there. Dividing those 20 photons among the three colors, that’s just 6 or 7 photons arriving at each pixel.
Even with a super-efficient sensor, this means there’s a lot of noise, because the random variation in photon arrival (shot noise) is large relative to such a small count. If we add more pixels to the sensor, each pixel has to be smaller, so the number of photons arriving at each pixel goes down, which means more noise. That’s why adding more megapixels makes photos grainier, especially in shadow areas.
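Because photon arrival follows Poisson statistics, the shot-noise signal-to-noise ratio of a pixel collecting N photons is roughly the square root of N. A quick sketch of why 6 or 7 photons is so much worse than a brighter pixel; the 700-photon figure is an illustrative mid-tone value of my choosing, not from the paper:

```python
import math

# Photon arrival follows Poisson statistics: the standard deviation
# of a count of N photons is sqrt(N), so the signal-to-noise ratio
# is N / sqrt(N) = sqrt(N).
def shot_noise_snr(photons):
    return math.sqrt(photons)

print(shot_noise_snr(7))    # ~2.6  -> deep-shadow pixel, very noisy
print(shot_noise_snr(700))  # ~26.5 -> brighter pixel, much cleaner
```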
When it comes to image resolution, adding more megapixels helps the camera resolve more detail – up to a point. Adding more pixels reduces the pixel size on the sensor, which also increases the effects of diffraction caused by the lens. Diffraction is a fact of life: light spreads into a circular pattern (the Airy disk) as it passes through an aperture such as the iris of a lens. Diffraction makes pictures look “soft” and lacking in sharpness, and the softness becomes more noticeable as sensor pixels get smaller, because the diffraction circles spread over more pixels.
Diffraction increases as the aperture gets smaller: as the f-stop number rises and the physical aperture of the lens shrinks, the blur grows. The point where diffraction starts to noticeably affect picture quality is called the diffraction limit. The diffraction limit for a 21-megapixel full-frame (35mm) sensor is f/10, which means that at f/11 or beyond the image gets softer and softer. The limit is f/8 for a 10-megapixel APS-sized sensor, and f/2.8 for a 12-megapixel pocket camera. Most pocket cameras don’t have a lens that offers that wide an aperture!
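Those diffraction-limit figures can be roughly reproduced from the Airy disk diameter, which is about 2.44 × wavelength × f-number. The 550 nm green-light wavelength, the “Airy disk spans two pixels” criterion, and the pixel pitches below are my assumptions, not values from Goldstein’s paper:

```python
# Estimate the f-stop at which diffraction becomes visible by asking
# when the Airy disk diameter (~2.44 * wavelength * f-number) grows
# to span roughly two sensor pixels.

def diffraction_limit_fstop(pixel_pitch_um, wavelength_um=0.55):
    # Solve 2.44 * wavelength * N = 2 * pixel_pitch for the f-number N.
    return 2 * pixel_pitch_um / (2.44 * wavelength_um)

print(round(diffraction_limit_fstop(6.4), 1))  # ~9.5: 21 MP full frame, close to f/10
print(round(diffraction_limit_fstop(5.7), 1))  # ~8.5: 10 MP APS-C, close to f/8
print(round(diffraction_limit_fstop(1.6), 1))  # ~2.4: 12 MP pocket camera, close to f/2.8
```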
In other words, while adding megapixels initially increases camera resolution, there comes a point when it leads to noticeably softer pictures (as well as more noise). For example, in this New York Times review of a new, tiny Samsung ST80 14.2 megapixel pocket camera, the reviewer notices that “people and scenery in the background looked murky, and the photos lacked crispness”. Recall that to get a sharp picture on a pocket camera of 12 megapixels, you need f/2.8 or better aperture to avoid softness from diffraction. The ST80 has 14.2 megapixels and a maximum aperture of f/3.3, so pictures are guaranteed to be soft from diffraction – in addition to the extra noise from pixels that are so tiny you can fit 50 million of them in an area the size of your fingernail.
With today’s cameras, don’t obsess over megapixels, and stick to lower pixel counts where possible — especially with pocket cameras, where the limitation in picture quality is more likely to be the lens rather than the sensor.
Think twice about:
To find a camera with a low megapixel count, use DPreview.com’s Camera Statistics pages. For more than 1600 cameras, DPReview lists the number of megapixels per square centimeter (cm²) — lower is better. For example, the Canon 1D Mark IV has just 3.1 megapixels per cm², as you might expect for high-end professional gear, and the 5D Mark II has just 2.4 megapixels per cm². The Canon SX210 pocket camera has 50 megapixels per cm², as it’s a 14.1-megapixel pocket camera with a tiny sensor. Instead, consider the 10-megapixel Canon S95, with a pixel density of 23 megapixels per cm².
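Pixel density is simple to compute yourself from the megapixel count and the sensor dimensions. The sensor sizes below are approximate figures I’ve assumed, which is why the S95 comes out near, but not exactly at, DPReview’s listed 23:

```python
# Megapixels per square centimeter from the pixel count and the
# sensor's width and height in millimeters (sizes are approximate).
def mp_per_cm2(megapixels, width_mm, height_mm):
    area_cm2 = (width_mm / 10.0) * (height_mm / 10.0)
    return megapixels / area_cm2

print(round(mp_per_cm2(21.1, 36.0, 24.0), 1))  # ~2.4 for the full-frame 5D Mark II
print(round(mp_per_cm2(10.0, 7.44, 5.58), 1))  # ~24 for the Canon S95 (1/1.7" sensor)
```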
On a recent trip to Asia, I was browsing an electronics duty free store in Tokyo airport, chock full of the latest cameras. Not being able to read Japanese, the only recognizable characters in the displays were the numbers — the number of megapixels, for the most part.
And that made me think, because this figure has become so important to digital camera sales. More megapixels sounds like a great idea, because it means more resolution for your pictures — they’re clearer and can be made into larger prints – right? Well, not quite. It’s possible an older camera with fewer megapixels actually produces cleaner, more detailed pictures.
More megapixels are not necessarily better, because of physical constraints. The size of the sensor chip that collects the light and makes the image is fixed, so adding more megapixels means that the individual photo sites (pixels) on the sensor get smaller. This is a problem because, in general, smaller pixels mean more noise. Noise shows up like film grain in a picture – it’s a measure of the difference between the actual light level and that measured by the photo site. In other words, it’s an error in the light reading. Smaller photo sites are prone to more errors for physical reasons that are complicated to explain… and I’m not going to go into that level of detail here. The net is that more megapixels mean smaller light sensor sites, and therefore more “grain” in your pictures.
Professional digital SLRs have “full frame” (i.e., 35mm-sized) sensors, which is one reason why professionals prefer them – the image is “cleaner” since there is less noise. So if this is the case, why the obsession with cramming more megapixels onto the sensor? In the early days of digital cameras, the manufacturers were able to reduce the noise level of the sensor sites at the same time as putting more of them into the same space, through better manufacturing and design technology. So there really was an improvement in the camera. We got used to more megapixels meaning better pictures. The problem is that this isn’t always the case any longer, and yet we expect the same improvement.
This is particularly the case for compact (palm-sized) digital cameras, which have teeny tiny sensor pixels. They’re made at the limits of chip manufacturing technology, and adding more megapixels definitely means more noise – grainier pictures – for these cameras. Manufacturers include a lot of “noise reduction” software in the cameras to compensate, which reduces noise at the expense of making the pictures blurrier and lacking in definition (detail).
“Pro-sumer” SLR cameras, like the Canon 40D, are a different story. Here, Canon has been able to increase the quality of the sensor sites and the tiny micro-lenses that sit over them, so that going from 10.1 megapixels (40D) to 18 megapixels is accompanied by an overall reduction in noise. Yes, more megapixels but less noise. The question is whether this can continue for the follow-up to the 50D that is inevitably in the works.
The net: when choosing a camera, especially a compact, consider whether it has too many megapixels by finding out how much noise and blurring (from noise reduction) there is in the final pictures. You might be better off with an older, cheaper camera with fewer megapixels. Check out a good review website like www.dpreview.com to find out how the camera’s pictures perform versus the competition.
Judging by the comments on photography sites, many people have yet to upgrade from Adobe Photoshop CS3 to CS4, or from Lightroom 1.x to 2.x. That’s a problem because Adobe doesn’t update the Camera Raw plug-in for older versions of Photoshop, and those updates are the only way to get support for new camera models. The same applies to Lightroom, where a new version of the program is required.
The Canon 50D isn’t supported in Lightroom 1.4 — only version 2.x. While the 50D is supported in Camera Raw 4.6, which means you can use it with Photoshop CS3, owners of the 5D Mark II are not so lucky: no support in Lightroom 1.4 and no support in Photoshop CS3 either; it’s supported in Camera Raw 5.2 which only works with Photoshop CS4.
To solve this problem, use the latest version of Adobe’s free DNG (Digital Negative) converter to translate the RAW files from newer cameras into .DNG files. Lightroom 1.4 and Photoshop CS3 can open any DNG file, regardless of the original camera type. The DNG converter can be found here, and there are PC and Mac variants. It is updated at the same time as the Camera Raw plug-in for Photoshop whenever new camera support is added.
It works in batch mode — you point it at a directory (folder) full of RAW files and it grinds away creating DNGs in another directory. Taking the long view, DNG is probably a better file format for archiving images because it is open, unlike the proprietary camera-makers’ RAW file formats.