Binning in astrophotography · [Deep Sky] Acquisition techniques · Francois Theriault

FrancoisT 1.91
I have read a lot and watched a lot of videos explaining the why and reasons behind binning and its benefits on working around the limitations of your equipment. 
I have been musing lately about binning in astrophotography and the maximum resolution available in imaging. There are two main factors that determine how much resolution and clarity can be obtained by your imaging train. Local seeing put aside, the formulas I could find are the resolution:

Resolution = (camera pixel size / telescope focal length) x 206.265

(the ideal resolution range for OK seeing is 0.7 to 2 arcseconds/pixel), and the Dawes limit:

DL = 116 / telescope aperture in mm

With these two formulas, I went through a series of calculations to satisfy myself I was getting the maximum performance out of my imaging train.

Ritchey-Chretien @ 1x1 binning: focal length 1625 mm, aperture 203 mm, camera pixel size 3.8 µ
Calculated resolution: 0.48 arcsec/pixel; calculated Dawes limit: 0.57 arcsec

Ritchey-Chretien @ 2x2 binning: focal length 1625 mm, aperture 203 mm, camera pixel size 3.8 µ
Calculated resolution: 0.96 arcsec/pixel; calculated Dawes limit: 0.57 arcsec

Refractor: focal length 900 mm, aperture 100 mm, camera pixel size 3.8 µ
Calculated resolution: 0.87 arcsec/pixel; calculated Dawes limit: 0.49 arcsec

Example ---> Target: Ring Nebula, apparent size = 230 x 230 arcsec

Size of target on sensor:
RC @ 1x1: 230 arcsec / 0.48 arcsec/pixel = 480 pixels
RC @ 2x2: 230 arcsec / 0.96 arcsec/pixel = 239 pixels
Refractor: 230 arcsec / 0.87 arcsec/pixel = 264 pixels

Dawes limit (smallest detail detectable):
RC @ 1x1: 0.57 arcsec / 0.48 arcsec/pixel = 1.18 pixels. With this configuration, the smallest detail would occupy 4 pixels as it overspills 1 pixel on the sensor. Indicates oversampling.
RC @ 2x2: 0.57 arcsec / 0.96 arcsec/pixel = 0.59 pixels. With this configuration, the smallest detail would occupy 1 pixel.
Refractor: 0.49 arcsec / 0.87 arcsec/pixel = 0.56 pixels. With this configuration, the smallest detail would occupy 1 pixel.
In the above comparison, the Ritchey-Chretien at 1x1 binning would be ruled out, since the resolution of 0.48 arcsec/pixel is below the recommended range of 0.7 to 2 arcsec/pixel, indicating oversampling.

So, by binning the Ritchey-Chretien at 2x2, we fall within the "acceptable" range of resolution between 0.7 and 2.0 Arcsec / pixel.

However, from the above comparison, it becomes apparent that both the RC at 2x2 and the refractor at 1x1 have very similar results. The RC comes in at 0.59 pixels for the smallest detail detectable and a target size of 239 pixels. The refractor comes in at 0.56 pixels for the smallest detectable detail and 264 pixels in size.

With binning at 1x1 on the shorter refractor, some may argue that we are "wasting" sensor space, as the target would only occupy a very small portion of the overall field of view. The fact is, however, that the target, when cropped, is the same size in both configurations.
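For anyone who wants to rerun the numbers, here is a minimal Python sketch of the two formulas above, with the RC-at-1x1 figures plugged in (rounding explains the small differences from the table):

def image_scale(pixel_um, focal_mm):
    # Image scale in arcsec/pixel.
    return pixel_um / focal_mm * 206.265

def dawes_limit(aperture_mm):
    # Dawes limit in arcsec.
    return 116.0 / aperture_mm

scale = image_scale(3.8, 1625)    # ~0.48 arcsec/pixel
dawes = dawes_limit(203)          # ~0.57 arcsec
print(230 / scale)                # Ring Nebula: ~477 pixels across (the table rounds to 480)
print(dawes / scale)              # smallest resolvable detail: ~1.18 pixels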
_The3D_ 1.81
·  6 likes
Resolution is only part of the equation. Considering that with modern CMOS cameras you can resample post-acquisition and achieve the same results as native 2x2 binning (CMOS cameras simply resample in software), I would always shoot at bin 1 unless storage space/processing power are a concern.
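To illustrate binning after the fact: a 2x2 software bin is just a block sum (or average) over the image array. A minimal NumPy sketch, assuming a mono frame (the function and variable names are mine, not from any camera driver):

import numpy as np

def software_bin2x2(img):
    # 2x2 software binning by block summation; trims odd edges if present.
    h, w = img.shape
    img = img[:h - (h % 2), :w - (w % 2)]
    h2, w2 = img.shape
    # Sum in float to avoid overflowing 16-bit data.
    return img.astype(np.float64).reshape(h2 // 2, 2, w2 // 2, 2).sum(axis=(1, 3))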
FrancoisT 1.91
Sorry,
The calculations came out as garbled. Here is an image of the calculations.
Calculations.jpg
Rustyd100 4.13
Wow, what a great description of the science behind all of this!

I use a slightly more practical approach in actual application.

I scale after processing, as Emilio suggests. The cool thing is that post-binning doesn't have to be by half. It can be by any percentage. I generally find the smallest stars and simply reduce until they are 2x2 pixels. I tend not to like the appearance of single-pixel stars. This means the smallest possible detail in the image will not be lost, as it can't shrink beyond 2x2.

Noise drops proportionately and no formulas needed. The image is exactly the right size, which saves server space by being the minimum file size necessary.
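The scale factor for that kind of arbitrary-percentage reduction is just 2 divided by the measured FWHM of the smallest stars, in pixels. A tiny sketch (the 4.6 px star size is purely illustrative):

def downscale_factor(smallest_star_fwhm_px, target_px=2.0):
    # Scale factor that shrinks the smallest stars to roughly target_px pixels.
    return target_px / smallest_star_fwhm_px

print(downscale_factor(4.6))   # e.g. 4.6 px stars -> resample to ~0.43x of the original size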

In Indiana USA, at 800 feet, the guiding and seeing are the biggest influence on my resolution. It is always more than the theoretical limit of the scope. This week saw a good night, as I averaged about 0.65 arcsec in guiding. On the bench, the scope is capable of 0.557 arcsec (at center)... a pretty good match. But the camera's 0.335 arcsec/pixel over-samples this configuration by about 2 times. It would be expected that I could reduce the image significantly to shrink the visibly minuscule stars to 2x2.

The primary way to improve resolution is to move to better seeing conditions (the closest Bortle 2 to me would be central Maine). Only then would it be worth buying a scope/mount with higher theoretical resolution (a favorite for me would be the ASA H400). Such a purchase would be foolish here in Indiana, as image results would not improve due to limited seeing in Bortle 5 (not to mention that I'd also have no more retirement savings). 

Since conditions vary every night, I use this decidedly less sophisticated method with good results.
aabosarah 6.96
·  2 likes
I am by no means an expert and would like others to chime in, but the one thing missing from this comparison is your SNR. The resolution is only part of the equation. When you bin 2x2 with your RC on that CMOS sensor, you are doubling your SNR (not quadrupling it, because you are doubling your read noise for quadruple the signal). Yes, the resolution itself may be the same, and assuming you are seeing limited, maybe the level of "clarity" is the same, but the actual details, especially dimmer details, should be better seen in the bin 2 image from the RC.
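To put rough numbers on the "double, not quadruple" point: summing 4 pixels quadruples the signal but only doubles the shot noise and the read noise (they add in quadrature), so per-pixel SNR roughly doubles. A quick sketch with illustrative values:

import math

def snr(signal_e, read_noise_e):
    # SNR of a single pixel with shot noise plus read noise.
    return signal_e / math.sqrt(signal_e + read_noise_e**2)

S, RN = 100.0, 1.7                 # illustrative: 100 e- signal, 1.7 e- read noise
single = snr(S, RN)
binned = snr(4 * S, 2 * RN)        # 4 pixels summed: 4x signal, 2x read noise
print(binned / single)             # ~2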
jrista 8.59
·  5 likes
Binning and downsampling are actually not the same thing. Binning is a simple summation (or direct nearest-neighbor averaging), and could be accomplished in PixInsight with the IntegerResample process.

Resampling uses an interpolative kernel to combine neighboring pixels in a more advanced, and usually much more pleasing way. You can use the Resample tool in PixInsight to try this. You can compare the results of an IntegerResample to a normal Resample, and the differences will be quite stark, and heavily weighted in favor of a proper Resample with a kernel. With Resampling, you can often choose your kernel, and different kernels will produce different results, and sometimes one kernel will work better for certain data, and another for a different kind of data (i.e. higher or lower resolution, noisier/lower SNR vs. cleaner/higher SNR).

My preference is generally Mitchell-Netravali resampling, which produces an exceptional result on most of my images. I usually aim to downsample by a clean 2x factor, although as someone else mentioned earlier in the thread, you can resample by any scale factor you want. With CMOS cameras as high resolution as they are these days, resampling is a great way to improve SNR and clean up your images without actually losing any meaningful data.
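Not PixInsight code, but the distinction can be sketched in Python: an IntegerResample-style bin is a plain block average, while a kernel resample interpolates (here scipy's cubic spline stands in for a Mitchell-Netravali-type kernel, which scipy does not provide):

import numpy as np
from scipy.ndimage import zoom

def integer_resample(img, factor=2):
    # Block-average downsample, analogous to an integer resample / software bin.
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    h2, w2 = img.shape
    return img.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))

def kernel_resample(img, scale=0.5, order=3):
    # Interpolated downsample with a cubic spline kernel (order=3).
    return zoom(img.astype(np.float64), scale, order=order)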

One of my own techniques is to drizzle 2x, do initial deconvolution and perhaps a first pass of noise reduction, on the full scale 2x drizzled image. Then downsample once, and do the rest of my processing on the "normal sized" image. After all the rest of my processing, or maybe most of it, I'll downsample again by 2x to my final image size.

Regarding SNR benefits of binning vs. resampling... I don't think that's all that easy to calculate. Hardware binning would be similar to integer resampling, although with CCDs you get an added benefit, to one degree or another, of lowering read noise (it's not really a 4x reduction as often stated, there are various sources of noise even in charge transfer, but you do get a reduction in read noise with hardware charge binning).

Resampling with a kernel is a bit more complicated, and that can redistribute different amounts of signal from many pixels. So calculating exact SNR benefits is going to be a lot harder, and will be dependent on the kernel used, the amount you are resampling, etc. Visually, I usually like the results of resampling better than straight binning or integer resampling... with one key exception: legitimately read noise limited exposures, where hardware binning with a CCD can indeed improve SNR by reducing read noise at the point of read. Even then, if I had both hardware binning and resampling, I'd probably use a combination of both to get the best results.
dkamen 6.89
·  1 like
Hi Francois,

There is an error in your calculations. The Dawes limit for the 100mm refractor should be double the Dawes limit for the 200mm RC.

Now, regarding the topic of binning vs downsampling: binning is something that happens during the capturing phase, even if it's software binning like in CMOS sensors. Downsampling is something that is done (usually) to the final integration, or the final processed image.

This means they are impacting the processing pipeline and the end result in different ways.

For example, if you are significantly oversampled, unbinned subs may be completely worthless because registration will be unable to detect stars. Various weighting algorithms that rely on noise evaluation and star shape will also be way off. And of course, unbinned data is 4x more bytes, which is of very questionable value if you are going to scale the image down in the end.

On the other hand binning will be a problem if it leads to significant undersampling. No reason to bin a camera with 5 micron pixels on a 360mm refractor, for example. 

The best advice I've read on the subject is: binning is a means to an end, i.e. proper sampling given your optics/camera/seeing combination. Do not be afraid to use it.
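A tiny sanity check in that spirit: given pixel size, focal length, bin factor and expected seeing, how many pixels land across a seeing-limited star? The seeing value below is purely illustrative, and the 3-pixel target is just the rule of thumb discussed later in this thread:

def samples_across_fwhm(seeing_fwhm_arcsec, pixel_um, focal_mm, bin_factor=1):
    # Pixels spanned by the seeing FWHM at this image scale.
    scale = pixel_um * bin_factor / focal_mm * 206.265
    return seeing_fwhm_arcsec / scale

print(samples_across_fwhm(2.5, 3.8, 1625, bin_factor=1))   # ~5.2 px across the FWHM (oversampled)
print(samples_across_fwhm(2.5, 3.8, 1625, bin_factor=2))   # ~2.6 px, close to a 3 px target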
FrancoisT 1.91
Thank you all for your feedback. Looks like I will have to experiment more with this.

The basic point I was trying to portray here is that a binned image at long focal length occupies the same "real estate" on the sensor as an unbinned image at a shorter focal length. This is of course hardware binning at capture time, not software upsampling or downsampling.

As far as SNR, I image from my backyard in an urban area at Bortle 8. So my conditions are typically bad regardless. Let's just say that I do a lot of narrowband to get around the light pollution that is prevalent in my area.

Thanks all for your input.
jrista 8.59
·  1 like
Francois Theriault:
Thank you all for your feedback. Looks like I will have to experiment more with this.

The basic point I was trying to portray here is that a binned image at long focal length occupies the same "real estate" on the sensor as an unbinned image at a shorter focal length. This is of course hardware binning at capture time, not software upsampling or downsampling.

As far as SNR, I image from my backyard in an urban area at Bortle 8. So my conditions are typically bad regardless. Let's just say that I do a lot of narrowband to get around the light pollution that is prevalent in my area.

Thanks all for your input.

What camera do you have? Not all cameras support hardware binning. If it is a CMOS camera, at BEST, you might get voltage binning, but usually CMOS "binning" is just integer resampling in the driver. In that case, I would say avoid it, entirely, and use resampling in post instead as you'll get better results. 

If you have a CCD that truly supports binning, then it could be useful, as it does reduce read noise. CCDs usually had high enough read noise that binning offered some real value here, although at the same time, CCDs usually had bigger pixels as well, so binning would be a negative there (based on your image scales, I suspect you probably have a CMOS camera.)
FrancoisT 1.91
John,
I image using a CMOS. My camera is a ZWO ASI1600MM Pro.

I started out years ago with an SBIG ST8300, which was at the time a CCD. A real workhorse that allowed me to get my feet wet. Unfortunately, it died a quiet death from being overworked.

As you mentioned, the CMOS camera has a driver binning, so you are correct, resampling is the way to go. 

Thanks
jhayes_tucson 22.40
·  3 likes
Binning is useful for two things:
1) Increasing signal at the expense of sampling rate.  This is very useful for guide cameras.
2) Decreasing the image sampling rate to better match the seeing and the telescope.

If you want to understand how to pick the optimum sampling rate for any telescope under different seeing conditions, please review the talk that I gave at AIC in 2022.  If you haven't registered, it's free to join and get in.  You can find the presentation here:  

https://www.advancedimagingconference.com/articles/secrets-long-focal-length-imaging-john-hayes

I go through all of the factors that go into computing the optimum sampling along with some easy to use charts on slides 43 and 44 to find the sensor that best matches your telescope and your seeing conditions.  You can listen to the presentation or you can just go through the slides.

Binning a CMOS camera does have one downside.  Since cameras transmit only 16 bit images and 2x2 binning adds another 2 bits of information, arithmetic binning in the camera loses 2 bits of information.  The easiest way to fix that is to bin in processing using floating point (or just more bits).  That strategy doesn't reduce bandwidth requirements in data transmission but it does improve SNR in the result.
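The bit bookkeeping behind that: four 16-bit values can sum to an 18-bit number, so an in-camera sum squeezed back into 16 bits has to drop 2 LSBs, while binning afterwards in floating point keeps them. A toy illustration (the ADU values are arbitrary):

import numpy as np

pixels = np.array([60000, 58001, 61003, 59502], dtype=np.uint32)  # four 16-bit ADU values
exact_sum = pixels.sum()                    # 238506 -> needs 18 bits
in_camera = exact_sum >> 2                  # squeezed back into 16 bits, 2 LSBs lost
in_post = pixels.astype(np.float64).sum()   # float binning in post keeps full precision
print(exact_sum, in_camera, in_post)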

John
jrista 8.59
·  2 likes
Francois Theriault:
John,
I image using a CMOS. My camera is a ZWO ASI1600MM Pro.

I started out years ago with an SBIG ST8300, which was at the time a CCD. A real workhorse that allowed me to get my feet wet. Unfortunately, it died a quiet death by overworking it. 

As you mentioned, the CMOS camera has a driver binning, so you are correct, resampling is the way to go. 

Thanks

Aye, no point in binning. Resampling in post will give you far more flexible options, and you simply don't have to worry about or complicate your acquisition process.
dkamen 6.89
I respectfully disagree. A 1x1 sub from the ASI1600 is 32MB, a 2x2 is 8MB. Having 75% less data to carry around (equivalently: being able to process four times as many subs with the exact same computing resources in the exact same time) is quite a strong point.

600 subs produce a much better result than 150 subs. I doubt any sophisticated downsampling algorithm can make up for the difference. In fact, I am pretty certain the opposite holds true: the 600 sub stack when upsampled (which is after all what algorithms like M-T were designed for) will be considerably better.
AstroLux 8.03
·  1 like
I respectfully disagree. A 1x1 sub from the ASI1600 is 32MB, a 2x2 is 8MB. Having 75% less data to carry around (equivalently: being able to process 400% more subs with the exact same computing resources in the exact same time) is quite strong a point. 

600 subs produce a much better result than 150 subs. I doubt any sophisticated downsampling algorithm can make up for the difference. In fact, I am pretty certain the opposite holds true: the 600 sub stack when upsampled (which is after all what algorithms like M-T were designed for) will be considerably better.

What does the number of subs have to do with binning?
You are not producing more images by binning 2x2... (sure, you may save some time because transferring the files is quicker)

Nobody here doubts that 600 subs in a stack will be better than 150 subs in a stack. 
After 600 subs I would say depending on your image train you would be hitting diminishing returns.
aabosarah 6.96
John Hayes:
Binning is useful for two things:
1) Increasing signal at the expense of sampling rate.  This is very useful for guide cameras.
2) Decreasing the image sampling rate to better match the seeing and the telescope.

If you want to understand how to pick the optimum sampling rate for any telescope under different seeing conditions, please review the talk that I gave at AIC in 2022.  If you haven't registered, it's free to join and get in.  You can find the presentation here:  

https://www.advancedimagingconference.com/articles/secrets-long-focal-length-imaging-john-hayes

I go through all of the factors that go into computing the optimum sampling along with some easy to use charts on slides 43 and 44 to find the sensor that best matches your telescope and your seeing conditions.  You can listen to the presentation or you can just go through the slides.

Binning a CMOS camera does have one downside.  Since cameras transmit only 16 bit images and 2x2 binning adds another 2 bit of information, arithmetic binning in the camera looses 2 bits of information.  The easiest way to fix that is to bin in processing using floating point (or just more bits).  That strategy doesn't reduce bandwidth requirements  in data transmission but it does improve SNR in the result.

John

Thank you for the link! Hopefully will have some time this weekend to study it.
FrancoisT 1.91
All,
I feel that we are getting off topic here.

The original premise of the post was to present to the community that when binning (hardware binning, in this case) the target occupies the same real estate in the overall final image.

Calculations-2.jpg

In the case of the Ritchey-Chretien, my original calculations show that the Ring Nebula at the "native" 1x1 binning occupies 480 pixels on the sensor.
HOWEVER, the Dawes limit indicates that the smallest detail possible with this setup is 1.18 pixels. Therefore, the smallest detail would, in reality, occupy 4 pixels, since it overspills a single pixel. 

The native resolution of the camera is 4656 x 3520, for a total of 16,389,120 pixels (16 MP).
The target would be 480 x 480 pixels = 230,400 pixels.
Thus, the target occupies 1.4% of the available area on the sensor.

If we bin at 2x2, each pixel is now the equivalent of 4 pixels at the "native" 1x1 binning.

The target now occupies 239 pixels.
Checking the Dawes limit, the smallest feature that can be resolved is now the equivalent of 0.59 pixels, so for all intents 1 pixel.

AND
 The size of the target is now 239 x 239 = 57,121 pixels.
The binned frame shrinks accordingly, since we are doubling the size of the pixels. It is now 2328 x 1760 = 4,097,280 pixels (4 MP).
So the target is 57,121 / 4,097,280 = 1.4% of the available area on the sensor.
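The same bookkeeping in a few lines of Python, using the figures above (sensor 4656 x 3520, target 480 px across unbinned, 239 px across binned 2x2):

def fraction_of_sensor(target_px, width_px, height_px):
    # Fraction of the frame area covered by a square target.
    return target_px**2 / (width_px * height_px)

unbinned = fraction_of_sensor(480, 4656, 3520)
binned = fraction_of_sensor(239, 4656 // 2, 3520 // 2)
print(f"{unbinned:.1%}  {binned:.1%}")   # ~1.4% in both cases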

For software binning, that is a whole different discussion. I simply do not know enough about upsampling and downsampling to contribute anything  here.

And as far as taking and processing a whole batch of data in individual subs, I recently burnt out a computer by processing too many runs of subs. Processor overheated, damaged the motherboard and seized up 2 of my hard drives. That is a different post however.

So as far as processing 300 + subs, I really am not that excited about it...
jrista 8.59
Francois Theriault (FrancoisT):
All,
I feel that we are getting off topic here.

The original premise of the post was to present to the community that when binning - hardware in this case, the image produced occupies the same real estate on the overall final image.

Calculations-2.jpg

In the case of the Ritchey-Chretien, my original calculations show - that the Ring nebula at the "native" 1x1 occupies 480 pixels on the sensor.
HOWEVER, the Dawes limit indicates that the smallest detail possible with this setup is 1.18 pixels. Therefore, the smallest detail would, in reality, occupy 4 pixels, since it overspills a single pixel. 

The native resolution of the camera is 4656 x 3520, for a total of 16,389,120 pixels total (16 MB).
The target would be 480 x 480 pixels = 230,400 pixels.
Thus, the target occupies 1.4% of the available area on the sensor.

If we bin at 2x2, each pixel is now the equivalent of 4 pixels at the "native" 1x1 binning.

The target now occupies 239 pixels.
Checking the Dawes limit, the smallest feature that can be resolved is now the equivalent of 0.59 pixels, so for all intents 1 pixel.

AND
 The size of the target is now 239 x 239 = 57,121 pixels.
The binned sensor reduces in size, since we are doubling the size of the pixels. It is now 2328 x 1760  = 4,097,280 pixels (4 MB).
So the target is 57,121 / 4,097,280 = 1.4% of the available area on the sensor.

For software binning, that is a whole different discussion. I simply do not know enough about upsampling and downsampling to contribute anything  here.

And as far as taking and processing a whole batch of data in individual subs, I recently burnt out a computer by processing too many runs of subs. Processor overheated, damaged the motherboard and seized up 2 of my hard drives. That is a different post however.

So as far as processing 300 + subs, I really am not that excited about it...


So, there is the resolving limit of the optics, and then there is the sampling rate of what the optics resolved. The two are not the same thing. You generally, for best results when it comes to detail, WANT to OVER-sample to a certain degree. There have been many debates on this topic. Suffice it to say, I've (if I have had the opportunity) aimed for about a 3x sampling rate across the FWHM. Some people will strictly adhere to Nyquist rate, which is primarily regarding "two dimensional" signals (i.e. audio), and would require sampling at 2x across the FWHM. Image signals have an additional dimension, however, and then there is the fact that we are registering and stacking many individual subs, among other things, all of which can benefit from sampling beyond Nyquist rate. Somewhere between 3-4 samples across the FWHM will usually net you an optimal balance of resolution vs. SNR. 

Your current table very simplistically divides the Dawes limit by the raw image scale, and you've got ~1.18 pixels. Sampling your stars with just one pixel, or close to it, is not really going to be sufficient. Even a 0.57" size star, "spilling over" into the adjacent pixels, is actually NOT well sampled, it is undersampled. If you really wanted that star to be WELL sampled, or more optimally sampled, then you want the FWHM produced by that star to be sampled by THREE to FOUR pixels, meaning the entire diameter of the star would span maybe 5-7 pixels in total.

I would say, in your case, you most certainly don't need to worry about sampling too much at the Dawes limit of your scope. The Dawes limit is one thing, but it has more to do with the maximum ability of a scope to separate two closely spaced point sources. IMO a more interesting figure would be the actual size of the Airy disc and Airy pattern. The Airy pattern is going to be convolved with seeing, which will produce a star spot that is most likely, under most circumstances (unless you are blessed with wonderful seeing all the time!!) going to be even larger. What is the size of that star spot? What is its FWHM? Ideally, you would be sampling that FWHM with your sensor, with around 3 pixels, and across the entire diameter by more than that. If you are sampling your average star spot by 4+ pixels, then you could downsample by at least 2x, if not more, without any meaningful loss in details.
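Put as a formula, the image scale that gives N samples across a star of a given FWHM is simply FWHM / N. A quick sketch (the 2.5" FWHM is purely illustrative):

def scale_for_sampling(star_fwhm_arcsec, samples_across_fwhm=3.0):
    # Image scale (arcsec/pixel) that puts N pixels across the star's FWHM.
    return star_fwhm_arcsec / samples_across_fwhm

print(scale_for_sampling(2.5, 3))   # ~0.83 arcsec/pixel
print(scale_for_sampling(2.5, 4))   # ~0.63 arcsec/pixel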
jhayes_tucson 22.40
Jon Rista:
Francois Theriault (FrancoisT):
All,
I feel that we are getting off topic here.

The original premise of the post was to present to the community that when binning - hardware in this case, the image produced occupies the same real estate on the overall final image.

Calculations-2.jpg

In the case of the Ritchey-Chretien, my original calculations show - that the Ring nebula at the "native" 1x1 occupies 480 pixels on the sensor.
HOWEVER, the Dawes limit indicates that the smallest detail possible with this setup is 1.18 pixels. Therefore, the smallest detail would, in reality, occupy 4 pixels, since it overspills a single pixel. 

The native resolution of the camera is 4656 x 3520, for a total of 16,389,120 pixels total (16 MB).
The target would be 480 x 480 pixels = 230,400 pixels.
Thus, the target occupies 1.4% of the available area on the sensor.

If we bin at 2x2, each pixel is now the equivalent of 4 pixels at the "native" 1x1 binning.

The target now occupies 239 pixels.
Checking the Dawes limit, the smallest feature that can be resolved is now the equivalent of 0.59 pixels, so for all intents 1 pixel.

AND
 The size of the target is now 239 x 239 = 57,121 pixels.
The binned sensor reduces in size, since we are doubling the size of the pixels. It is now 2328 x 1760  = 4,097,280 pixels (4 MB).
So the target is 57,121 / 4,097,280 = 1.4% of the available area on the sensor.

For software binning, that is a whole different discussion. I simply do not know enough about upsampling and downsampling to contribute anything  here.

And as far as taking and processing a whole batch of data in individual subs, I recently burnt out a computer by processing too many runs of subs. Processor overheated, damaged the motherboard and seized up 2 of my hard drives. That is a different post however.

So as far as processing 300 + subs, I really am not that excited about it...


So, there is the resolving limit of the optics, and then there is the sampling rate of what the optics resolved. The two are not the same thing. You generally, for best results when it comes to detail, WANT to OVER-sample to a certain degree. There have been many debates on this topic. Suffice it to say, I've (if I have had the opportunity) aimed for about a 3x sampling rate across the FWHM. Some people will strictly adhere to Nyquist rate, which is primarily regarding "two dimensional" signals (i.e. audio), and would require sampling at 2x across the FWHM. Image signals have an additional dimension, however, and then there is the fact that we are translating and stacking many individual subs, among other things, all of which can benefit from sampling beyond Nyquist rate. Somewhere between 3-4 samples across the FWHM will usually net you an optimal balance of resolution vs. SNR. 

Your current table very simplistically divides the Dawes limit by the raw image scale, and you've got ~1.18 pixels. Sampling your stars with just one pixel, or close to it, is not really going to be sufficient. Even a 0.57" size star, "spilling over" into the adjacent pixels, is actually NOT well sampled, it is undersampled. If you really wanted that star to be WELL sampled, or more optimally sampled, then you want the FWHM produce by that star to be sampled by THREE to FOUR pixels, meaning the entire diameter of the star would span maybe 5-7 pixels in total.

I would say, in your case, you most certainly don't need to worry about sampling too much at the Dawes limit of your scope. The Dawes limit is on thing, but it has more to do with the maximum ability of a scope to separate two closely spaced point sources. IMO a more interesting figure would be the actual size of the Airy disc and Airy pattern. The Airy patterns is going to be convolved by seeing, which will produce a star spot that is most likely, under most circumstances (unless you are blessed with wonderful seeing all the time!!) going to be even larger. What is the size of that star spot? What is its FWHM? Ideally, you would be sampling that FWHM with your sensor, with around 3 pixels, and across the entire diameter by more than that. If you are sampling your average star spot by 4+ pixels, then you could downsample by at least 2x, if not more, without any meaningful loss in details.

Jon,
While I agree with the general message that you are conveying to the OP, you've slightly scrambled up the optics here.

1) The Nyquist limit applies to audio signals AND to image data!  It is a simple proof in Fourier space that works in N dimensions showing how sampling relates to the band limit of any linear, shift-invariant system.  It is also very easy to show that sampling at a rate of more than two samples per cycle at the bandwidth limit does not add any additional information to the output--no matter how hard you try!  That's called "oversampling".  Oversampling merely decreases SNR without adding to image "detail".  This is a very well known result and if there are any debates about it, it is not among professionals or those who have actually done the mathematics (which, again, is very simple).  Trust me:  The engineering teams that designed both Hubble and JWST understand this stuff really well.  It's a proof that's done in the first semester of any graduate level Fourier analysis course.

2) It is simple to show that the Nyquist limit for an optical system requires 4.88 point samples across the Airy Disk.  Remember that the FWHM of the central peak in the Bessel function (which is what the Airy disk is) is narrower than the diameter of the first ring so that translates into 2.05 samples across that central peak to be at the Nyquist limit.  But, that's not relevant unless you are in space AND you are using a point sensor; neither of which applies.

3) Dawes limit is an experimentally derived limit for visual observers.  The eye is not an integrating detector so what you can see with your eye has little to do with what happens with long exposure imaging of faint objects using an integrating detector with finite size.  That's why I posted my AIC talk!  It explains how the sensor size, the optics, and the atmospheric seeing can be combined using MTF analysis to determine the optimum sensor size for any passive imaging system not in space.  I've distilled the analysis into a simple formula that works well down to about 0.3" seeing and if you have better seeing than that, you'll need to go back to Nyquist and do a more rigorous MTF analysis.  The goal of my analysis was to eliminate old wives' tales about how to achieve optimum sampling and it does that.  I can guess from your comments that you probably haven't reviewed it.  I think that you will enjoy it so take the time and go check it out.
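A quick numeric check of the 4.88 figure from point 2 above (not from John's slides, just the standard diffraction relations): the incoherent cutoff frequency is 1/(lambda x f-ratio), so the Nyquist sample spacing is lambda x f-ratio / 2, and the Airy disk diameter of 2.44 x lambda x f-ratio divided by that spacing gives 4.88 regardless of wavelength or focal ratio:

wavelength_um = 0.55                               # illustrative: 550 nm
f_ratio = 8.0                                      # e.g. the RC discussed above, roughly f/8
airy_diameter_um = 2.44 * wavelength_um * f_ratio  # first dark ring, ~10.7 um at the sensor
nyquist_pitch_um = wavelength_um * f_ratio / 2.0   # half the period at the cutoff frequency, ~2.2 um
print(airy_diameter_um / nyquist_pitch_um)         # 4.88, independent of the values chosen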

John
jrista 8.59
John Hayes:
Jon Rista:
Francois Theriault (FrancoisT):
All,
I feel that we are getting off topic here.

The original premise of the post was to present to the community that when binning - hardware in this case, the image produced occupies the same real estate on the overall final image.

Calculations-2.jpg

In the case of the Ritchey-Chretien, my original calculations show - that the Ring nebula at the "native" 1x1 occupies 480 pixels on the sensor.
HOWEVER, the Dawes limit indicates that the smallest detail possible with this setup is 1.18 pixels. Therefore, the smallest detail would, in reality, occupy 4 pixels, since it overspills a single pixel. 

The native resolution of the camera is 4656 x 3520, for a total of 16,389,120 pixels total (16 MB).
The target would be 480 x 480 pixels = 230,400 pixels.
Thus, the target occupies 1.4% of the available area on the sensor.

If we bin at 2x2, each pixel is now the equivalent of 4 pixels at the "native" 1x1 binning.

The target now occupies 239 pixels.
Checking the Dawes limit, the smallest feature that can be resolved is now the equivalent of 0.59 pixels, so for all intents 1 pixel.

AND
 The size of the target is now 239 x 239 = 57,121 pixels.
The binned sensor reduces in size, since we are doubling the size of the pixels. It is now 2328 x 1760  = 4,097,280 pixels (4 MB).
So the target is 57,121 / 4,097,280 = 1.4% of the available area on the sensor.

For software binning, that is a whole different discussion. I simply do not know enough about upsampling and downsampling to contribute anything  here.

And as far as taking and processing a whole batch of data in individual subs, I recently burnt out a computer by processing too many runs of subs. Processor overheated, damaged the motherboard and seized up 2 of my hard drives. That is a different post however.

So as far as processing 300 + subs, I really am not that excited about it...


So, there is the resolving limit of the optics, and then there is the sampling rate of what the optics resolved. The two are not the same thing. You generally, for best results when it comes to detail, WANT to OVER-sample to a certain degree. There have been many debates on this topic. Suffice it to say, I've (if I have had the opportunity) aimed for about a 3x sampling rate across the FWHM. Some people will strictly adhere to Nyquist rate, which is primarily regarding "two dimensional" signals (i.e. audio), and would require sampling at 2x across the FWHM. Image signals have an additional dimension, however, and then there is the fact that we are translating and stacking many individual subs, among other things, all of which can benefit from sampling beyond Nyquist rate. Somewhere between 3-4 samples across the FWHM will usually net you an optimal balance of resolution vs. SNR. 

Your current table very simplistically divides the Dawes limit by the raw image scale, and you've got ~1.18 pixels. Sampling your stars with just one pixel, or close to it, is not really going to be sufficient. Even a 0.57" size star, "spilling over" into the adjacent pixels, is actually NOT well sampled, it is undersampled. If you really wanted that star to be WELL sampled, or more optimally sampled, then you want the FWHM produce by that star to be sampled by THREE to FOUR pixels, meaning the entire diameter of the star would span maybe 5-7 pixels in total.

I would say, in your case, you most certainly don't need to worry about sampling too much at the Dawes limit of your scope. The Dawes limit is on thing, but it has more to do with the maximum ability of a scope to separate two closely spaced point sources. IMO a more interesting figure would be the actual size of the Airy disc and Airy pattern. The Airy patterns is going to be convolved by seeing, which will produce a star spot that is most likely, under most circumstances (unless you are blessed with wonderful seeing all the time!!) going to be even larger. What is the size of that star spot? What is its FWHM? Ideally, you would be sampling that FWHM with your sensor, with around 3 pixels, and across the entire diameter by more than that. If you are sampling your average star spot by 4+ pixels, then you could downsample by at least 2x, if not more, without any meaningful loss in details.

Jon,
While I agree with the general message that you are conveying to the OP, you've slightly scrambled up the optics here.

1) The Nyquist limit applies to audio signals AND to image data!  It is a simple proof in Fourier space that works in N dimensions showing how sampling relates to the band limit of any linear, shift-invarient system.  It is also very easy to show that sampling at a rate of more than two samples per cycle at the bandwidth limit does not add any additional information to the output--no matter how hard you try!  That's called, "oversampling".  Oversampling merely decreases SNR without adding to image "detail".  This is a very well known result and if there are any debates about it, it is not among professionals or those who have actually done the mathematics (which again, is very simple.).  Trust me:  The engineering teams that designed both Hubble and JWST understand this stuff really well.  It's a proof that's done in the first semester of any graduate level Fourier analysis course.

2) It is simple to show that the Nyquist limit for an optical system requires 4.88 point samples across the Airy Disk.  Remember that the FWHM of the central peak in the Bessel function (which is what the Airy disk is) is narrower than the diameter of the first ring so that translates into 2.05 samples across that central peak to be at the Nyquist limit.  But, that's not relevant unless you are in space AND you are using a point sensor; neither of which applies.

3) Dawes limit is an experimentally derived limit for visual observers.  The eye is not an integrating detector so what you can see with your eye has little to do with what happens with long exposure imaging of faint objects using an integrating detector with finite size.  That's why I posted my AIC talk!  It explains how the sensor size, the optics, and the atmospheric seeing can be combined using MTF analysis to determine the optimum sensor size for any passive imaging system not in space.  I've distilled the analysis into a simple formula that works well down to about 0.3" seeing and if you have better seeing than that, you'll need to go back to Nyquist and do a more rigorous MTF analysis.  The goal of my analysis was to eliminate old-wives tales about how to achieve optimum sampling and it does that.  I can guess from your comments that you probably haven't reviewed it.  I think that you will enjoy it so take the time and go check it out.

John

Hi John,

I missed the link earlier. I'll see if I can give it a read tomorrow. Thanks!

I do know that Nyquist applies.... (Although, I may have mis-remembered prior instances where you mentioned the 4.88x factor...I guess previously, I may have thought that was across the FWHM, not the disc...) When it comes to optics, I defer to you as you definitely have more knowledge there. That said...I guess I would dispute the notion that sampling more than 2x offers zero benefits. At least, considering the exact nature of what we do...

In practice, I think there are benefits to sampling greater than that (than 2.05x across the FWHM), specifically for multi-frame stacked astrophotography. In my experience, sampling the FWHM (now, I guess this would NOT be of the airy disc, and would instead be the resolved spot for my optics... which is primarily dominated by seeing) by about three to four pixels has given me better results when it comes to what we do with astrophotography, which is to register (in particular) and then stack. The translations, rotations, and possibly other transformations of each frame in order to properly register them (which may also try to correct for distortion differences across the frames, which can occur due to dithering, etc.) are the main reason why I find, in practice, sampling beyond 2x has benefits. Less, and I have found that I end up with more artifacts from the registration process, and sadly, those artifacts can often cause other issues with background signal quality in the final stack as well. To be fair, the vast majority of my experience here is PixInsight. Its StarAlignment process offers a number of different interpolation algorithms. The only one that doesn't suffer from any artifacts at all (unless the data is indeed oversampled!) is a cubic kernel, but that tends to soften details and BLOAT stars, so I never use it. Depending on the exact nature of the data, one of the others will usually minimize the artifacts if the data is less than optimally sampled, but usually will not eliminate the issue.

I have tried to use the term well-sampled over the years, rather than oversampled... the latter obviously has specific connotations. I understand the theory that if you sample at Nyquist, and apply a proper convolution filter, then you should be able to recover all of the relevant information... I think that would apply to a single frame, though, correct? When it comes to all the transformations that occur with the registration process (especially some of the more advanced options in a program like PixInsight which can apply localized corrections for distortion and things like that), however, I get less blocky, cleaner, rounder, smoother stars if I sample beyond 2x. IIRC, in the past, I determined my optimal to be around 3.3x across the FWHM of my spots (although exactly how I arrived at that is now lost some five years in my past!) These aren't airy discs though, they are seeing-convolved spots. In any case, this is probably something that could be experimentally demonstrated. I still haven't had a chance to do any imaging this year (every time I have time, it's cloudy and either snowing, or now raining....) I am going to be picking up the extender for the FSQ106, which would let me image at f/3.6, f/5 and f/8, and I may be able to gather a set of data at each scale and share my results.

Anyway, I'll take a peek at your article tomorrow (hopefully!) You've shared your thoughts on how an MTF analysis provides better results in this area in the past, so I'm interested in seeing this distilled analysis.
DaveDE 0.00
John Hayes:
Binning a CMOS camera does have one downside.  Since cameras transmit only 16 bit images and 2x2 binning adds another 2 bit of information, arithmetic binning in the camera looses 2 bits of information.  The easiest way to fix that is to bin in processing using floating point (or just more bits).  That strategy doesn't reduce bandwidth requirements  in data transmission but it does improve SNR in the result.


Just an FYI: it turns out binning 2x2 on the camera and losing the two LSBs really doesn't affect SNR significantly at all.

https://forums.sharpcap.co.uk/viewtopic.php?p=34676#p34676

Dave
jhayes_tucson 22.40
John Hayes:
Binning a CMOS camera does have one downside.  Since cameras transmit only 16 bit images and 2x2 binning adds another 2 bit of information, arithmetic binning in the camera looses 2 bits of information.  The easiest way to fix that is to bin in processing using floating point (or just more bits).  That strategy doesn't reduce bandwidth requirements  in data transmission but it does improve SNR in the result.


Just an fyi, turns out binning  2x2 on the camera and losing the two LSBs really doesn't affect SNR significantly at all.

https://forums.sharpcap.co.uk/viewtopic.php?p=34676#p34676

Dave

Ok.  You have to first accept the basic assumption that simple shift register integer operations aren't being done in the camera (which I'm not sure is always the case) and then be happy with using what amounts to 14-bit data from each pixel.  In the case of arithmetic 2x2 binning, read noise is twice as high as for 1x1 binning, which adds additional uncertainty to the LSBs, but I'm not one to automatically want to toss out signal just because of higher uncertainty.  Tossing that data requires a little bit more exposure to regain the SNR.  Maybe not by much, but why bother with a 16 bit camera if you don't care about it?

John
Freestar8n 1.51
·  1 like
Just an fyi, turns out binning  2x2 on the camera and losing the two LSBs really doesn't affect SNR significantly at all.

https://forums.sharpcap.co.uk/viewtopic.php?p=34676#p34676

Dave


That link is a very simplified description of what happens when you truncate, and it can be made more rigorous.  Instead of talking about "average" error introduced, it's more useful to talk about the standard deviation of the error, and when you discretize a signal into steps of size, s, the error introduced is s/sqrt(12) - which is much less than the "average" error of s/2 or whatever.  Knowing the error as a standard deviation allows you to combine it in quadrature with read noise - and the result is a very real, but very small, contribution when summing 4 pixels at gain 0.8.  And it makes no difference if you round down or up or whatever.

This discretization noise happens even when you don't bin, because the intrinsic read noise will be added in quadrature with the discretization noise of g/sqrt(12), where g is the gain as e/adu.

For gain 0.8 and read noise 3.5, the total read noise with discretization error is slightly bloated to 3.508.  If you sum 4 pixel values exactly, the noise is doubled to 7.016, but if you then discretize it in steps of 4 (3.2e), corresponding to dropping the final two bits, you end up with noise in the sum of 7.076 - which is a real but extremely small - and negligible - increase on the exact sum.  (I welcome corrections on the math).
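Those quadrature sums can be reproduced in a few lines (gain 0.8 e-/ADU and read noise 3.5 e- as stated above; this is just a numeric check, not a claim about any particular camera):

import math

gain = 0.8                                   # e-/ADU
rn = 3.5                                     # e- read noise per pixel

quant = gain / math.sqrt(12)                 # quantization noise of one read, ~0.23 e-
per_pixel = math.hypot(rn, quant)            # ~3.508 e-
sum4 = 2 * per_pixel                         # exact sum of 4 pixels, ~7.016 e-
truncated = math.hypot(sum4, 4 * gain / math.sqrt(12))  # drop 2 LSBs (steps of 3.2 e-), ~7.076 e-
print(per_pixel, sum4, truncated)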

So - it's not good to think in terms of "those low bits are just noise" because those bits also have signal.  The entire set of bits represents the signal and noise - and discretizing or truncating at any level will increase the total noise - but by an amount much smaller than the step size due to the 1/sqrt(12) factor - and the fact that the noise adds in quadrature with the other sensor noise terms (here, read noise).

This describes the impact of discretization of the signal itself, while the original topic of this thread relates to *spatial* binning and discretization of the image and its impact on resolution - and the situation is very similar.  There is always blurring happening on the scale of the pixels, and smaller pixels, in arc-sec, will result in less total blurring in the final result.  This is because the process of aligning and stacking multiple exposures requires shifting and interpolation - on the scale of the pixels - prior to stacking.  And that results in a blur contribution *on the scale of the pixels*.  Smaller pixels means less blur and a smaller fwhm *in the aligned and stacked result*.  There is no sudden point where smaller pixels cease to have resolution benefit because this blur is always happening - just as there is always error introduced by discretization at any size of signal step.

This is also why it is best to defer the final binning or smoothing until the last stage of processing - so the alignment can be done using the original unbinned pixels.  It's also why, for max resolution, you should never bin during acquisition - even though the impact of discretization noise is small.  But if you aren't after max detail - you can go ahead and bin and use any size pixels you want, with a corresponding trade off of pixel SNR for detail.

The amount of  blur depends on the type of interpolation used when stacking, but recently PI switched to recommending 1:1 drizzle over things like Lanczos - and that will definitely result in blur being introduced to each exposure in the stack.  I prefer to use small pixels and nearest neighbor, for a number of reasons, hence I use 0.28" pixels with EdgeHD11 and get stacked fwhm's in the low 1".  That would never be possible with the typically recommended 0.5-1" pixels for such an SCT.

Frank
IrishAstro4484 5.96
Emilio Frangella:
resolution is only  part of the equation, considering that with modern CMOS cameras you can resample post acquisition and achieve the same results as a native 2x2 binning (CMOS cameras simply resample in software) i would always shoot at bin1 unless storage space/processing power are a concern.

Yeah, I've always used bin 1; apart from the file size being unnecessarily large, I don't see a disadvantage.
DaveDE 0.00
John Hayes:
You have to first accept the basic assumption that simple shift register integer operations aren't being done in the camera (which I'm not sure is always the case) and then be happy with using what amounts to a 14 bit data from each pixel.


I don't know about all cameras, but in the case of the ASI6200 the averaging is done with 18 bit registers and the fractional part (2 LSBs) is discarded, leaving 16 bits. This has been confirmed by my testing and correspondence with ZWO. Simple shift register operations in the camera electronics are trivial compared to all the other digital signal formatting and processing going on. But yes, if file size is of no concern then there's no advantage to binning in the CMOS camera. I just wanted to clarify that the loss of SNR vs binning or resampling in post processing in many (probably most) cases is insignificant.

Dave
jhayes_tucson 22.40
·  1 like
Just an fyi, turns out binning  2x2 on the camera and losing the two LSBs really doesn't affect SNR significantly at all.

https://forums.sharpcap.co.uk/viewtopic.php?p=34676#p34676

Dave


That link is a very simplified description of what happens when you truncate, and it can be made more rigorous.  Instead of talking about "average" error introduced, it's more useful to talk about the standard deviation of the error, and when you discretize a signal into steps of size, s, the error introduced is s/sqrt(12) - which is much less than the "average" error of s/2 or whatever.  Knowing the error as a standard deviation allows you to combine it in quadrature with read noise - and the result is a very real, but very small, contribution when summing 4 pixels at gain 0.8.  And it makes no difference if you round down or up or whatever.

This discretization noise happens even when you don't bin, because the intrinsic read noise will be added in quadrature with the discretization noise of g/sqrt(12), where g is the gain as e/adu.

For gain 0.8 and read noise 3.5, the total read noise with discretization error is slightly bloated to 3.508.  If you sum 4 pixel values exactly the noise is doubled to 7.016, but if you then discretize it in steps of 4 (3.2e), corresponding to dropping the final two bits, you end up with noise in the sum of 7.076 - which is a real but extremely small - and negligible - increase on the exact sum.  (I welcome corrections on the math).

So - it's not good to think in terms of "those low bits are just noise" because those bits also have signal.  The entire set of bits represents the signal and noise - and discretizing or truncating at any level will increase the total noise - but by an amount much smaller than the step size due to the 1/sqrt(12) factor - and the fact that the noise adds in quadrature with the other sensor noise terms (here, read noise).

This describes the impact of discretization of the signal itself, while the original topic of this thread relates to *spatial* binning and discretization of the image and its impact on resolution - and the situation is very similar.  There is always blurring happening on the scale of the pixels, and smaller pixels, in arc-sec, will result in less total blurring in the final result.  This is because the process of aligning and stacking multiple exposures requires shifting and interpolation - on the scale of the pixels - prior to stacking.  And that results in a blur contribution *on the scale of the pixels*.  Smaller pixels means less blur and a smaller fwhm *in the aligned and stacked result*.  There is no sudden point where smaller pixels cease to have resolution benefit because this blur is always happening - just as there is always error introduced by discretization at any size of signal step.

This is also why it is best to defer the final binning or smoothing until the last stage of processing - so the alignment can be done using the original unbinned pixels.  It's also why, for max resolution, you should never bin during acquisition - even though the impact of discretization noise is small.  But if you aren't after max detail - you can go ahead and bin and use any size pixels you want, with a corresponding trade off of pixel SNR for detail.

The amount of  blur depends on the type of interpolation used when stacking, but recently PI switched to recommending 1:1 drizzle over things like Lanczos - and that will definitely result in blur being introduced to each exposure in the stack.  I prefer to use small pixels and nearest neighbor, for a number of reasons, hence I use 0.28" pixels with EdgeHD11 and get stacked fwhm's in the low 1".  That would never be possible with the typically recommended 0.5-1" pixels for such an SCT.

Frank

Nicely said, Frank.  We agree on this point.

John