Dead/Stuck/Hot Pixels

Billy

Shot some long exposures last night. When I looked at the JPEG photos on the computer I could see a few red dots (and a few green and blue ones) visible against the dark sky. The dots were at the same X-Y coordinates in every photo. RGB made me think Bayer filter and blown pixels. A quick web search shows slightly varying ideas, but mostly the same theme... probably stuck or hot pixels, especially since I was shooting 30-second exposures.

I shoot in JPEG+RAW. When I opened the RAW files in PSE7, as usual the photo opened up in the RAW editor. I noticed the red/green/blue dots would be there for about a second, and then boom... gone. It's like PSE knows what they are and immediately gets rid of them ("paints" them the color of the adjacent pixels). When I toggle between two RAW pictures in the RAW editor, the colored dots always show up again at first, but then go away again. I was expecting to have to brush these dots away, but PSE did it automatically.

Below are the two photos (JPEG and RAW). You can see the red dot in the first photo. From the left edge of the photo, count to the 4th street light on the bridge, then travel straight up until you hit blue sky. That's where the red dot is. Notice it's gone in the bottom photo. The top photo is unedited in PSE; the bottom image was sharpened, straightened, and cropped, that's all. There's also a blue dot at the very top of the top image, about 2/3 from the left, but that dot was cropped out of the bottom image, so there's no comparison available for that one. There are probably others; it's kind of a Where's Waldo. Anyway, if you're shooting long exposures, keep your eyes open for these little dudes.


[JPEG image: IMG6161-XL.jpg]

[RAW image: IMG6161edited-1-XL.jpg]
 
Many cameras have a mode for shooting long exposures which, when you close the shutter, takes another exposure of the same duration but with the shutter kept closed. This produces a baseline of any noise produced by the sensor, in the exact same temperature/pressure/etc. conditions, against what should be black. The camera then subtracts the second exposure from the first and the result becomes the image (JPEG or RAW). I highly recommend long exposures in this mode, as well as sticking to a lower ISO (less sensitivity = less noise). On Canon DSLRs it's called "Long Exposure Noise Reduction" and is Custom Function II-1 (C.Fn II-1).

Now, before Doug jumps in ... yes, there's a difference between noise and stuck pixels. However, given the simple subtractive way this reduction works, if the red pixel is stuck on when the "black" image is taken, it will still remove that spot when it does the noise reduction.
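
Here's a minimal sketch of that subtraction in NumPy (the frames and the stuck-pixel value are made up; the camera does this internally on the raw counts):

```python
import numpy as np

# Made-up stand-ins for the two frames, as raw sensor counts.
rng = np.random.default_rng(0)
light_frame = rng.poisson(200, (4, 6)).astype(np.int32)  # lit exposure
dark_frame = rng.poisson(20, (4, 6)).astype(np.int32)    # shutter-closed frame

# A stuck-on pixel reads high in BOTH frames...
light_frame[1, 2] = 4095
dark_frame[1, 2] = 4095

# ...so the simple subtraction cancels it. Clip because negative
# intensities are impossible.
corrected = np.clip(light_frame - dark_frame, 0, None)
print(corrected[1, 2])  # 0 -- the stuck pixel is removed
```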

BTW, this is just something that Camera Raw does in Photoshop. It's able to use heuristics and correct the pixel when rendering the image from the RAW file.
 
Note to the reader: Noise analysis of digital cameras and noise reduction processing is a complex technical topic. So my response will necessarily be somewhat technical. There are some good articles at http://www.clarkvision.com/photoinfo/#part_4. Much of this reply is derived from those articles.

Many cameras have a mode for shooting long exposures which, when you close the shutter, takes another exposure of the same duration but with the shutter kept closed. This produces a baseline of any noise produced by the sensor, in the exact same temperature/pressure/etc. conditions, against what should be black. The camera then subtracts the second exposure from the first and the result becomes the image (JPEG or RAW). I highly recommend long exposures in this mode,
Note that the noise is random in both the open-shutter and closed-shutter images. Thus you have
* p1 = s1 + n1
* p2 = n2
* n1 and n2 come from random distributions and are generally unequal
* thus p3 = p1 - p2 = s1 + (n1 - n2) != s1
where p1 = picture_1, s1 = signal_1, and n1 = noise_1 (all shutter open);
p2 = picture_2 and n2 = noise_2 (all shutter closed, the noise estimate);
and p3 = the final estimated picture (ideally = s1).

This simple subtraction can yield negative-intensity pixels in p3, which are clearly impossible. The actual noise-subtraction process is more complex than simple subtraction.
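
A quick simulation (with invented noise magnitudes) shows both effects -- the residual noise and the impossible negative pixels:

```python
import numpy as np

rng = np.random.default_rng(0)
s1 = np.full(100_000, 2.0)           # faint true signal (dark sky), in electrons
n1 = rng.normal(0, 5, s1.shape)      # noise draw in the open-shutter frame
n2 = rng.normal(0, 5, s1.shape)      # independent draw in the dark frame

p3 = (s1 + n1) - n2                  # p1 - p2

# The random parts don't cancel: the residual noise is sqrt(2)*5 ~ 7.07,
# and many faint pixels come out negative.
print(p3.std())         # ~7.07, not 0
print((p3 < 0).mean())  # a large fraction of the pixels go negative
```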

In addition, Canon does a lot of noise removal by comparing each pixel to nearby pixels and altering pixels that it thinks are noisy. (No pure noise image required.) This process reduces the resolution. Some cameras allow the user to set the amount of noise reduction.

I have tried Canon's noise subtraction on 30 sec exposures with my cameras and could see no difference so I don't use it.

as well as sticking to a lower ISO (less sensitivity = less noise)
Not so simple--it can only be true if you can let more light onto the sensor.

A note on terminology and the nature of noise:
Expected value (written E[x]) is essentially the average of x over an infinite number of measurements. However, we only have one measurement of x, which will generally differ from E[x]. If all measurements of x yielded E[x], there would be no noise problem... (The difference between a measurement of x and E[x] is the noise.)

A bit of background:
A sensor well (pixel) holds electrons--the number of electrons is proportional to the measured image intensity (including noise).
1) shot (signal) noise: Some proportion of photons (particles of light) produce electrons--the noise is proportional to the square root of the expected number of electrons (this is a random process in the number of photons hitting a well and the conversion of photons to electrons).
2) leakage effects: A random number of electrons will leak into a well--the expected number will be affected by the temperature and is proportional to the time.
3) read noise: There is noise in the analog electronics used to measure the number of electrons in a well. This is constant. For this discussion, I will assume that the digital intensity value associated with a pixel is a measurement of the number of electrons in the well (ie the actual number plus some random noise). (Actual cameras use some mixture of analog gain adjustment and digital scaling.)
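
A toy model of these three sources for a single well (the leak rate and read-noise magnitudes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_pixel(signal_e, exposure_s, leak_rate=3.0, read_sigma=4.0):
    """One simulated well readout, in electrons."""
    shot = rng.poisson(signal_e)                   # 1) shot noise ~ sqrt(signal)
    leakage = rng.poisson(leak_rate * exposure_s)  # 2) leakage grows with time
    read = rng.normal(0.0, read_sigma)             # 3) constant read noise
    return shot + leakage + read

samples = [measure_pixel(1000, 30) for _ in range(10_000)]
print(np.mean(samples), np.std(samples))  # mean ~1090, std ~sqrt(1000+90+16)
```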

A sensor well can hold up to some maximum number of electrons. (~66K for a Canon 5D II DSLR full size sensor and ~8K for a Canon S70 point and shoot. The number is approximately proportional to the pixel well area.) For the lowest ISO (let's assume 100 for this discussion), the measured number of electrons in a full well is scaled to produce a maximum intensity value (255 for an 8-bit intensity value). For an ISO of 200 the scale factor would be doubled and you would only be using half the range of the sensor and so on up to some maximum practical ISO.
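
As a rough sketch of that scaling (a simplification, since as noted real cameras mix analog gain and digital scaling):

```python
FULL_WELL = 66_000  # electrons; approximate figure quoted above for the 5D II

def to_8bit(electrons, iso):
    # ISO 100 maps a full well to 255; ISO 200 doubles the scale factor,
    # so only half the well's range is usable before clipping.
    scale = (iso / 100) * 255 / FULL_WELL
    return min(255, round(electrons * scale))

print(to_8bit(66_000, 100))  # 255 -- full well at base ISO
print(to_8bit(33_000, 200))  # 255 -- a half-full well already maxes out at ISO 200
```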

Now back to the original conjecture: lower ISO yields lower noise:
In bright light the shot noise will dominate the other noise sources--halving the ISO (which halves the scale factor) and doubling the amount of light hitting the sensor reduces the noise by a factor of sqrt(2).

But at lower light, you can only lower the ISO if you can still take the picture (ie can increase the aperture or exposure time):
* ISO/2, same exposure time, +1 stop aperture
The expected value of the signal doubles, the expected value of the shot noise goes up by sqrt(2), and the scale factor is halved. The expected values of the time- and temperature-based (leakage) noise and the read noise stay the same. After the scaling, the shot noise goes down by sqrt(2) and the leakage and read noises go down by a factor of 2. Thus the noise in the image is reduced.
* ISO/2, exposure time x2, same aperture
The expected value of the signal doubles, the expected value of the shot noise goes up by sqrt(2), and the scale factor is halved. The expected values of the time- and temperature-based (leakage) noises double and the read noise stays the same. The noise is reduced only if the shot and read noises are not dominated by the leakage noise (which is only true at shorter times).

Thus lowering the ISO reduces the noise only in bright enough light or for short enough exposures.
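
Putting rough numbers on the two scenarios above (all magnitudes invented; noises added in quadrature, with the leakage and read noises treated as fixed electron counts):

```python
import math

def image_noise(signal_e, leak_e, read_e, k):
    """Total noise after scaling by k. Shot noise variance = signal;
    leakage and read noise are given directly in electrons."""
    return k * math.sqrt(signal_e + leak_e**2 + read_e**2)

base   = image_noise(1000, 20, 5, k=1.0)  # baseline exposure
wider  = image_noise(2000, 20, 5, k=0.5)  # ISO/2, +1 stop aperture
longer = image_noise(2000, 40, 5, k=0.5)  # ISO/2, 2x time: leakage doubles

print(round(base, 1), round(wider, 1), round(longer, 1))  # 37.7 24.6 30.1
```

Both ISO/2 cases come out quieter here, but doubling the exposure time also doubled the leakage; with a long enough exposure the leakage term would dominate and erase the benefit.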

Doug
 
Now, before Doug jumps in ... yes, there's a difference between noise and stuck pixels.
Yes.

However, given the simple subtractive way this reduction works, if the red pixel is stuck on when the "black" image is taken, it will still remove that spot when it does the noise reduction.
No. A stuck pixel will yield the same value in both images--thus after subtracting, the intensity will always be 0 (or close to 0 taking the read noise into account). An intensity at or near 0 is not always correct. However, bad pixel detection (see the comments on Camera Raw below) may then be able to synthesize a reasonable value for the bad pixel.

You can check or record the stuck pixels by:
1) take 1st image at minimum ISO and with the lens cap on
2) take 2nd image of a brightly lit white wall/card, way overexposed, large aperture*, out of focus. (The goal is to saturate every pixel.)

* if you use a small aperture you may be finding the dirt particles on your sensor. (Use a normal exposure if you want to find the dirt...)

Any pixel that is not dark (0 or close to 0) in the 1st image is bad and any pixel that isn't saturated in the 2nd image is bad. Short times will tend to probe stuck pixels and long times will also find leaky pixels. The results could be temperature dependent.

I believe some software can use such images to locate bad pixels.
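
A sketch of what such software might do with those two test frames (thresholds invented):

```python
import numpy as np

def find_bad_pixels(dark_frame, white_frame, dark_max=8, sat_min=250):
    """Flag pixels that aren't near 0 in the lens-cap frame, or that
    aren't saturated in the blown-out white frame."""
    stuck_on = dark_frame > dark_max     # should be dark but isn't
    stuck_off = white_frame < sat_min    # should be saturated but isn't
    return np.argwhere(stuck_on | stuck_off)

dark = np.zeros((4, 6), dtype=np.uint8)
white = np.full((4, 6), 255, dtype=np.uint8)
dark[2, 3] = 200   # a hot/stuck-on pixel
white[0, 1] = 30   # a dead/stuck-off pixel
print(find_bad_pixels(dark, white))  # [[0 1]
                                     #  [2 3]]
```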


BTW, this is just something that Camera Raw does in Photoshop. It's able to use heuristics and correct the pixel when rendering the image from the RAW file.
CR is simply observing that a pixel is very different from nearby pixels and responds by setting the pixel value to the average (or a similar function) of the surrounding pixels. This will also erase a legitimate 1 pixel spot that differs significantly from the nearby pixels. (Most digital cameras have anti-aliasing (blurring) filters that make this unlikely.)
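
One plausible implementation of that heuristic -- a guess for illustration, not Camera Raw's actual algorithm -- replaces any pixel that is far from its 3x3 neighborhood median:

```python
import numpy as np
from scipy.ndimage import median_filter

def despeckle(img, threshold=50):
    """Paint any pixel differing from its 3x3 neighborhood median by
    more than `threshold` with that median value."""
    med = median_filter(img, size=3)
    out = img.copy()
    outliers = np.abs(img.astype(int) - med.astype(int)) > threshold
    out[outliers] = med[outliers]
    return out

sky = np.full((5, 5), 40, dtype=np.uint8)
sky[2, 2] = 255               # a lone bright dot (hot pixel... or a real star)
print(despeckle(sky)[2, 2])   # 40 -- the dot is painted over either way
```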

Doug
 
Oh yes, there is also quantization noise...
4) quantization noise: the error in representing the number of electrons in a limited number of bits. This can appear in images as "posterization"--contour-like edges in regions of slowly changing intensity.
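
A quick illustration of posterization from coarse quantization:

```python
import numpy as np

# Quantize a smooth gradient to very few levels; the long runs of
# identical values are the contour-like "posterization" bands.
gradient = np.linspace(0.0, 1.0, 16)
levels = 4  # deliberately coarse
quantized = np.round(gradient * (levels - 1)) / (levels - 1)
print(np.round(quantized, 2))
```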

There are probably a few more noise sources that I have missed...

Doug
 