11 Ways to Improve the Sharpness of Your Images, Part 5: Image Stabilization

Should you turn image stabilization off when shooting from a tripod? We’re going to put it to the test and find some interesting results.

This is the fifth in a series of articles on how to improve the sharpness of our images. In previous articles in the series we’ve examined: Optical Design of the Lens (Part 1), Missed Focus (Part 1), Subject Motion (Part 1), Camera Shake (Part 2), Depth of Field (Part 2), Noise (Part 3), Atmospheric Disturbance (Part 3), Mirror Slap (Part 4), and Diffraction (Part 4).

In the current article, we’re going to check out the 10th factor that can affect image sharpness: image stabilization.

Image Stabilization

Admittedly, the whole point of vibration reduction/image stabilization systems is to improve image sharpness. But it turns out that that may not be quite the whole story. There are circumstances under which image stabilization can actually degrade the quality of your images.

What Actually Causes the Loss of Sharpness?

To understand what can potentially cause a loss of sharpness we first need to understand a bit about how vibration reduction systems work.

How Are Images Stabilized?

Vibration reduction systems are designed to help mitigate camera shake. They do this by actively moving one or more elements in the light path to compensate for the small motions of the camera. The parts that move are typically either an optical element or group within the lens, the image sensor, or a combination of the two. We’ll focus on lens-based vibration reduction/image stabilization/vibration compensation (VR/IS/VC) here, but many of the same considerations apply to in-body image stabilization (IBIS) as well.

In order to mitigate camera shake, it’s necessary to do two things: determine how the camera is moving and compensate for that motion. The first task is accomplished with a pair of accelerometers (Nikon) or gyro sensors (Canon) in the lens that, when paired with an embedded microcomputer, measure motion in two orthogonal directions a thousand times per second. The accelerometers are similar to the ones found in your smartphone that allow it to respond to tilting or shaking.

That information can then be used to alter the projection of the image in just such a way as to compensate for the motion of the camera. This compensation exploits an optical quirk of lenses. Notice, in the image pair below, that when the lens is moved to the right, the location of the distant pine tree actually shifts to the left.

Moving a lens transverse to the optical path shifts the location at which the image is projected in the opposite direction.

In a VR/IS-enabled lens, one of the central lens elements is suspended within the lens barrel. A pair of actuators are used to move the lens element so as to keep the projection of the image stable with respect to the sensor, even when the camera is moving. Pretty slick, actually.
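To get a feel for the magnitudes involved, here’s a rough back-of-the-envelope sketch (in Python, with illustrative numbers, not specifications for any particular lens) of how far the projected image moves when the camera rotates slightly. The stabilizing element has to shift the projection by this same amount in the opposite direction:

```python
import math

def image_shift_mm(focal_length_mm, shake_deg):
    """Image-plane displacement produced by rotating the camera by shake_deg."""
    return focal_length_mm * math.tan(math.radians(shake_deg))

# A tiny 0.05-degree wobble on a 500mm lens (values chosen for illustration):
shift = image_shift_mm(500, 0.05)
print(f"{shift:.3f} mm")  # roughly 0.44 mm -- many pixel-widths on a typical sensor
```

Notice that the required correction scales with focal length, which is one reason long telephotos are so unforgiving of camera shake.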

How Might Image Stabilization Cause Degradations in Sharpness?

Now that we know the basic components of a vibration reduction system, we’re in a better position to think about how they may contribute to a loss of sharpness. At first glance, there seems to be relatively little concrete information on the subject. Nikon’s website, as well as a host of other sources, suggests that you turn VR off when using a tripod, but doesn’t provide an explanation as to why.

The Popular Explanation

By far the most frequent explanation I’ve run across is the potential development of a feedback loop. The idea goes like this. Say we’re making a relatively long exposure, somewhere from 1/10th of a second (a period corresponding to the resonant frequency of many telephoto lenses) out to a few seconds. We depress the shutter release to start the exposure, or perhaps we forget to lock up the mirror. Either can cause some initial movement of the lens. In response, the VR/IS system would move the floating lens element in an effort to compensate. But the lens element isn’t weightless; in fact, it probably weighs a good bit more than the mirror. As a result, the motion of the element, meant to mitigate the initial vibration, could actually cause additional oscillations. The accelerometers could then pick these up, causing the VR system to try to compensate for its own behavior.

To me, this explanation seems possible with early image stabilization systems, but a little unlikely. Engineers have been doing motion control with feedback loops for a long time and such a glaring oversight seems implausible (for a basic introduction to a simpler version of motor control, see here). I need to do a little more testing to track this down further.
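The stability concern itself is real, though: any control loop that acts on stale measurements can oscillate if its corrections are too aggressive. A toy simulation (pure Python, with invented numbers; this is not how any actual VR firmware works) shows why a well-tuned loop converges while an over-eager one feeds on its own motion:

```python
def simulate(gain, steps=50, delay=1):
    """Toy compensation loop: the controller sees the error `delay` steps late."""
    error = 1.0                      # initial disturbance (e.g. shutter-press shake)
    history = [error] * (delay + 1)  # measurement pipeline with built-in lag
    for _ in range(steps):
        error -= gain * history[-(delay + 1)]  # correct based on a stale reading
        history.append(error)
    return history

# A modest gain damps the disturbance; an aggressive gain plus lag runs away.
print(abs(simulate(0.5)[-1]))            # tiny: the loop settles
print(max(abs(e) for e in simulate(1.8)))  # huge: the loop amplifies its own motion
```

With zero delay, even a large gain would be stable; it’s the combination of lag and aggressive correction that destabilizes the loop, which is exactly the oversight the popular explanation attributes to VR designers.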

More Likely Explanation in Many Cases

While researching this article, I ran across a comment by Russell McMahon in a post on Stack Exchange. He suggested that electronic drift may be the cause of image stabilization failures during long exposures. With that bit of insight to go on, I found an in-depth article on how the Olympus image stabilization system works, and, indeed, McMahon’s suggestion seems to be on the mark. Gyro sensors produce a small bias voltage even when no rotation is occurring, and this voltage can drift over time in a way that can’t easily be compensated for. The result is that the microcomputer slowly moves the internal lens element to compensate for motion that isn’t actually occurring. This bias drift is likely a limiting factor in the accuracy of image stabilization systems during long, tripod-steadied exposures. Integrating the noise in the sensor outputs also produces a random drift over time, with the standard deviation of the inferred speed proportional to the square root of the exposure duration.
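That square-root behavior falls out of basic random-walk statistics: each noisy sensor sample adds an independent error, so the integrated error grows with the square root of the number of samples. A quick Monte Carlo sketch (Python; the sample rate and noise level are invented for illustration) demonstrates the scaling:

```python
import random
import statistics

def drift_std(duration_s, rate_hz=200, noise=0.001, trials=500, seed=1):
    """Std. dev. of the integrated error after summing noisy, zero-mean
    sensor samples for duration_s seconds (a simple random walk)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        total = 0.0
        for _ in range(int(duration_s * rate_hz)):
            total += rng.gauss(0.0, noise)  # each sample adds independent noise
        finals.append(total)
    return statistics.stdev(finals)

# Quadrupling the exposure roughly doubles the accumulated drift (sqrt scaling):
print(drift_std(4.0) / drift_std(1.0))  # close to 2.0
```

This is why drift that’s invisible at 1/500th of a second can smear an 8-second exposure.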

Tamron, like Olympus, uses gyro sensors in their Vibration Compensation (VC) system, and as we’ll see in a moment, there are potentially some issues when VC is engaged during long exposures.

What Does This Loss of Sharpness Look Like?

The purpose of an image stabilization system is to mimic camera shake exactly, just in the opposite direction from the actual motion. When the actual motion and the compensating motion don’t precisely cancel, we’re left with what’s effectively residual camera shake. The impact on the final image should, therefore, be identical to that of traditional camera shake, with the entire field of view moving in a correlated way during the exposure.

We can see what this looks like in an 8-second test shot taken with the Tamron 15-30mm f/2.8 (G1). I had neglected to turn Vibration Compensation off during an early test exposure while composing an image of the night sky. Notice that each star has been turned into a short streak and that each streak has an identical shape.

Interestingly, just as the satellite trail here told us something about the motion of a lens following a mirror slap, the star streaks tell us something about the motion of the Vibration Compensation system during the exposure. We know from the direction of the streaks that the internal lens element moved diagonally. Further, from the fairly uniform luminosity along the streaks, we know that the motion was likely slow and consistent across the full duration of the 8-second exposure, rather than a rapid movement to a new stable point (which would have yielded a bright spot with a faint tail).
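The brightness-along-the-streak reasoning can be made concrete: brightness at each point of a streak is proportional to how much of the exposure the star spent there. A small sketch (Python, with fabricated motion paths) shows the two signatures in question:

```python
def streak_profile(positions, bins=10):
    """Share of the exposure a star spends in each segment of its streak.
    `positions` are samples of the star's image position, equally spaced in time."""
    lo, hi = min(positions), max(positions)
    width = (hi - lo) / bins
    counts = [0] * bins
    for p in positions:
        counts[min(int((p - lo) / width), bins - 1)] += 1
    return counts

# Slow, steady drift: time (and so brightness) spread evenly along the streak.
steady = streak_profile([i * 0.1 for i in range(100)])
# Rapid jump to a new stable point: a bright endpoint with a faint tail.
jump = streak_profile([min(float(i), 9.9) for i in range(100)])
print(steady)
print(jump)
```

The first profile is flat, like the streaks in the test shot; the second piles nearly all of the exposure into one bright endpoint.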

With Vibration Compensation turned off — and an improvement in composition — the photograph below was captured a few minutes later.

Nasim Mansurov over at Photography Life has also done a few experiments with Nikon’s 300mm f/2.8G VR II. This is a beast of a lens that was first introduced back in 2009, and it carries a version of Nikon’s Vibration Reduction system a generation older than the one found in many of their newer lenses. The article is a little light on some details, but he came to the conclusion that VR on the 300mm degraded sharpness fairly significantly, and that the severity depended on how long the VR had been engaged.

I recently made a number of test shots with the Nikon 500mm f/5.6 PF ED VR, which was introduced in late 2018 and previously reviewed here. It looks like many of the VR issues during long exposures may have been addressed in their latest generation system. The shots were all taken with a Nikon D810 body using the electronic front curtain shutter mode.

The figure below looks at the effect of VR with and without the mirror locked up in a series of exposures made at 1/20th of a second. Panel (A) is the gold standard with the mirror locked up, a remote shutter release used, and VR turned off. Note that three images were taken at each setting to make sure the results were consistent and reproducible. Invariably, they were. Note, also, that Nasim’s assertion that the efficacy of the VR system depended on how long it had been engaged for was tested. No difference was found with the 500mm lens between shots made immediately and those in which the VR system was allowed to stabilize for a few seconds.

With the mirror locked up, the use of VR, (B), may lead to a degradation in sharpness, but it’s very slight. Panel (C) shows the effect of mirror slap without VR and, (D), with VR. In the latter case, the VR system isn’t able to completely compensate for mirror slap, but it’s a dramatic improvement over the image quality without it (we saw the same result in this earlier article in the series).

The last two panels show what happens when a tripod is used to steady a shot, but with a manual shutter release rather than a remote one (this would be a pretty standard configuration, for example, when shooting wildlife in the field). In panel (E) the VR is turned off and, in panel (F), turned on. Again, the image quality is dramatically improved by the VR, even though a tripod is used.

The same series of experiments was then repeated at 4- and 15-second exposures, all using a remote shutter release. The results are shown in the figure below. Again, panels (A) and (C) represent the gold standard with mirror lockup and no VR. There is a slight degradation in the quality of the 4-second shots with the mirror unlocked (not unexpected). The mirror seems to have had little impact on the 15-second images (also not unexpected). In each case, the use of VR may have a slight effect, but it’s subtle. There’s a potential improvement with VR from (E) to (F) and the potential for a slight degradation from (A) to (B), (C) to (D), and (G) to (H).

Note that the top row of each figure is (almost) equivalent to shooting with a mirrorless body in silent mode (with the exception that the rear curtain was released during these experiments).

How Can We Manage It?

First, if we’re hand-holding a shot, it’s pretty much always a good idea to have the image stabilization system engaged. Any adverse effects are likely to pale in comparison to the potential impact of unmitigated camera shake.


When shooting from a tripod, the answer is that it depends on the lens and/or body we’re shooting with. It’s almost certainly a good idea to do a little experimentation with each of your own bits of kit that have active image stabilization elements to see how they behave under different conditions. If I’m shooting long, tripod-based exposures with the Tamron, I turn Vibration Compensation off every single time. With Nikon’s newer 500mm, I don’t hesitate to leave VR on. The degradation is somewhere between minimal and non-existent in the worst-case scenario, and under the vast majority of common shooting conditions it offers a huge potential for improvement. For example, even if I’m shooting from a tripod using a remote release, if I don’t take the time to lock the mirror up on a DSLR before each shot, I’m likely far better off with the VR on.

So, play around a little bit. Do some testing on your own. The rule of thumb that you should always turn VR/IS off when using a tripod is outdated at best. There are many conditions under which VR offers dramatic opportunities to improve image quality, even with a tripod. Just make sure you’re aware of the cases where it doesn’t help with your specific gear.

