Just an FYI: it turns out binning 2x2 on the camera and losing the two LSBs really doesn't affect SNR significantly at all.

Yes Frank, I agree the link I mentioned has a simplified explanation of what happens when binning 2x2 on the camera, and how averaging four 16-bit pixels requires 18 bits, so a very small error is introduced when truncating the result to 16 bits. The s/sqrt(12) factor is the expected RMS quantization error of the camera sensor's digitizer. That is already included in the read noise and is not the truncation error we are talking about here (which is caused by pixel averaging and tossing the fractional part of the result), so I'm a bit confused about why you are talking about the sqrt(12) factor. In any case, I think we both agree that, as you say, the increase in noise caused by binning on the camera is "extremely small - and negligible".

Speaking of rigor: I'm not doubting your conclusion, but I would like to see some rigor behind the assumption that smaller pixels result in significantly better subframe registration than larger binned pixels (with their higher associated SNR). Is there a source you can point me to here? I'm wondering at what point the diminishing returns of smaller pixels become insignificant, i.e., if you could use 0.14" pixels on your EdgeHD11 for registration, would you?

Thanks, Dave
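The truncation error Dave describes can be illustrated with a quick Monte Carlo - a toy sketch, not any camera's actual firmware. Averaging four 16-bit values produces two extra fractional bits; truncating back to an integer leaves an error uniformly distributed over {0, 0.25, 0.5, 0.75} ADU, whose RMS is about 0.47 ADU - tiny next to a read noise of several ADU:

```python
import math
import random

random.seed(1)
errors = []
for _ in range(100_000):
    pixels = [random.randrange(0, 65536) for _ in range(4)]   # four 16-bit samples
    exact = sum(pixels) / 4        # the 18-bit-precision average
    truncated = sum(pixels) // 4   # what gets stored back into 16 bits
    errors.append(exact - truncated)

rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print(round(rms, 3))   # close to sqrt((0 + 0.0625 + 0.25 + 0.5625) / 4) ~ 0.468 ADU
```

So the truncation adds roughly half an ADU of RMS error, which adds in quadrature to the (much larger) read noise - consistent with the "doesn't affect SNR significantly" conclusion.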
> Yes Frank, I agree the link I mentioned has a simplified explanation of what happens when binning 2x2 on the camera, and how averaging four 16-bit pixels requires 18 bits and a very small error is introduced when truncating the result to 16 bits. The s/sqrt(12) factor is the expected RMS quantization error of the camera sensor digitizer. That's already included in the read noise and is not the truncation error we are talking about here (caused by pixel averaging and tossing the fractional part of the result). So I'm a bit confused why you are talking about the sqrt(12) factor. In any case, I think we both agree that as you say, the increase in noise caused by binning on the camera is "extremely small - and negligible".

Hi Dave-

Yes - the discretization is normally included in the read noise and need not be added, but I did it to show it is playing a role even in the unbinned case. You could figure out what the intrinsic analog read noise is prior to discretization, and it would be a tiny bit less than 3.5; discretization then brings it up to 3.5.

The s/sqrt(12) applies any time a signal is discretized into steps of s units. If you take the digital values for four pixels and add them, the sum will have error/noise as described above, at 7.016e. But if you then drop the last two bits, you are discretizing the result in steps of 4 ADU, or 3.2e. That will be an additional noise term that adds in quadrature - making a final 7.076e. So you know that dropping the last two bits is a tiny effect. (The s/sqrt(12) is the standard deviation of a uniform distribution of width s.)

How many bits can you tolerate dropping? Well - if this is a good deep-sky subexposure, then read noise should be a small part of the noise, and sky background noise should be much larger in order to do its "swamping." So if the total read noise is 7.016e and you want to swamp it by 5x with background noise (or 10x or whatever), the sky background noise is about 35e. How many bits can you drop to equal that?
s/sqrt(12) = 35e -> s = 121e ≈ 152 ADU, i.e. 7+ bits. So you could take the sum of four pixels and then drop 7 bits, and still the background noise would dominate the noise introduced by discretization. If you swamp by 10x, it is about 300 ADU, or 8+ bits.

This is based on regarding the discretization as a purely random noise term - but in reality it could lead to posterization for large amounts of truncation. You can have purely random noise that is visually tolerable, compared to structured noise that isn't - despite the same RMS deviation. A side point in all this is that the inherent dynamic range of a sensor is of little importance when you are sky-background limited - the dynamic range is intentionally squashed by the sky background noise.

As for the pixel size question - which is the original point of this thread - the two factors overlooked with regard to optimal sampling in astro imaging are:

1) The pixels don't sample at discrete points as required by the Nyquist theorem. Instead they represent averages over a square region - and that loses bandwidth. This point is rarely made in audio contexts, and only advanced texts describe it in the imaging context - but it is obviously happening.

2) Additional bandwidth is lost when aligning and stacking the exposures - and most people aren't even aware this is happening, because it happens under the covers in the stacking software. The only way to avoid this blurring is to allow some kind of sharpening interpolation - but that is a form of "cheat" that boosts bandwidth artificially and no longer represents the original discrete values at each point in the image (which I assume plays a role in why PI no longer recommends it; I always avoided such things). You can similarly deconvolve and sharpen the final image and create arbitrary high-frequency information that shouldn't be there.
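Frank's numbers can be sanity-checked in a few lines. One assumption: the 0.8 e-/ADU gain is inferred from "4 ADU, or 3.2e" in the post; everything else is quoted directly from it:

```python
import math

gain = 0.8              # e- per ADU (inferred from "4 ADU, or 3.2e")
read_noise_sum = 7.016  # e-, noise of the four-pixel digital sum

# Dropping the last 2 bits discretizes the sum in steps of 4 ADU = 3.2 e-;
# the resulting s/sqrt(12) quantization noise adds in quadrature.
trunc = (4 * gain) / math.sqrt(12)
total = math.sqrt(read_noise_sum**2 + trunc**2)
print(round(total, 3))   # ~7.077 e-, i.e. a negligible increase over 7.016 e-

# "How many bits can you drop?" -- find the step s whose quantization
# noise equals the swamping sky background noise.
for swamp in (5, 10):
    sky = swamp * read_noise_sum          # 5x swamp -> ~35 e- background noise
    s_adu = sky * math.sqrt(12) / gain    # step size in ADU
    print(swamp, round(s_adu), math.floor(math.log2(s_adu)))
```

This reproduces the quoted figures: roughly 152 ADU (7+ bits) for 5x swamping and roughly 304 ADU (8+ bits) for 10x.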
If you use 1:1 drizzle, it is very similar to bilinear interpolation - and you can see examples of how that blurs an image when it shifts or rotates slightly. Nearest neighbor doesn't really blur each exposure, but its end result is to introduce some bloat in the final aligned stack.

A side piece of evidence is that many people thought they were "optimally" imaging with 9um pixels - but when they switched to CMOS they saw much more detail in the star shapes, revealing collimation and alignment problems. That immediately tells you the original sampling was losing bandwidth that was only capturable with much smaller pixels.

Frank
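The registration-blur point can be seen in a deliberately tiny 1-D sketch (not anyone's actual stacking code): shifting a one-pixel "star" by half a pixel with linear interpolation preserves its total flux but spreads it over two pixels, which is exactly the bandwidth loss described above.

```python
import math

signal = [0.0, 0.0, 1.0, 0.0, 0.0]   # a "star" one pixel wide
shift = 0.5                          # sub-pixel registration shift

shifted = []
for i in range(len(signal)):
    # sample the original signal at position i - shift via linear interpolation
    x = i - shift
    i0 = math.floor(x)
    frac = x - i0
    left = signal[i0] if 0 <= i0 < len(signal) else 0.0
    right = signal[i0 + 1] if 0 <= i0 + 1 < len(signal) else 0.0
    shifted.append((1 - frac) * left + frac * right)

print(shifted)   # [0.0, 0.0, 0.5, 0.5, 0.0]: flux preserved, detail lost
```

Averaging many such subexposures with random sub-pixel shifts compounds this: each frame is slightly blurred by its own interpolation, so the stack is softer than any single frame.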