Short vs. Long Exposures · [Deep Sky] Acquisition techniques · andrea tasselli

andreatax 7.76 · 12 likes
Not so long ago there was a rather long discussion on AB about the merits and demerits of short exposures for DSO imagery, with one fellow clocking an amazing number of short exposures for both M81 and M51, and one of the questions that came up was: how different are they if both have the same integrated duration? By happenstance (I forgot to turn off auto-saving in the NINA preview) I collected a rather large number of 5s exposures; 112, to be exact (once I pruned the worst ones, FWHM-wise), and since I followed up with the actual imaging plan I can compare the results (same everything, one set shot immediately after the other). The (regular) imaging run uses 3 min integrations, so 3 integrations are roughly equivalent to 112 x 5s (560s vs 3x180=540s), and here are the results:
image.png
It's easy to guess which one is which (or just read the header), and this is proof positive that very short exposures are a very bad idea for DSOs. Not only are the results way noisier (112xRON vs 3xRON), but the overall depth is worse. This is with an IMX533 sensor, i.e. a low-noise, high-efficiency modern CMOS.
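A minimal noise model makes the 112xRON vs 3xRON point concrete. This is only a sketch, not the actual data: the sky rate and read noise below are assumed, illustrative values for an IMX533-class setup.

```python
import math

def stack_noise(n_subs, t_sub, sky_rate, read_noise):
    """Per-pixel background noise (in e-) of a stack of n_subs exposures.

    Shot noise grows with total exposure time, while read noise is paid
    once per sub, so its contribution grows with sqrt(n_subs).
    """
    total_time = n_subs * t_sub
    return math.sqrt(sky_rate * total_time + n_subs * read_noise ** 2)

# Assumed, illustrative values: 1.5 e-/s/px sky and 1.5 e- read noise
sky, rn = 1.5, 1.5
short = stack_noise(112, 5, sky, rn)   # 112 x 5 s = 560 s
long_ = stack_noise(3, 180, sky, rn)   # 3 x 180 s = 540 s
print(f"112 x 5s : {short:.1f} e-")
print(f"3 x 180s : {long_:.1f} e-")
print(f"noise ratio (short/long): {short / long_:.2f}")
```

With these assumed numbers the short-sub stack comes out noticeably noisier at essentially equal total time, purely from paying the read noise 112 times instead of 3.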
Alan_Brunelle
Interesting Andrea.  And thanks for posting.

I would have guessed that the many, very short exposures would be limited in depth.  My thinking has been that if exposures are too short, information is simply lost for those faint image features that fall too far below the realm of the noise variations.  Just think of the few photons being sampled by the camera under those conditions.  I might suggest that this can eventually be overcome with many short exposures, and your 112 subs may not qualify as "many" for proponents of this method.

What I see in your right image is not just that it seems noisier, but that the noise appears to have a pattern.  Roughly striations, disjointed, coming from "noise elements" that seem aligned roughly from upper right to lower left.  This seems to come from contributions from darker pixels or groupings of dark pixels.  The dark groupings sometimes seem clumped into features not necessarily aligned.  Between these ridges of dark seem to be lighter island areas whose noise is similar to, or maybe even less than, the noise in the left image.  Not sure what to make of this.

I appreciate your data and it supports how I work, but I hope others can flesh out the arguments for or against the approach of many short exposures in this thread.  Seems like a fun challenge!  I personally do not see the need to operate at this extreme end of the exposure spectrum.  In the end, when I set up exposures with my fast optics, I am more concerned about overexposing those bright areas and stars.  I try never to go above max (upper clipping) for more than a few dozen pixels, and only when they are distributed among a number of the brightest stars.  Being new to this hobby (I think coming up on 5 years now), and having only ever worked with CMOS cameras, I never subscribed to super long exposures.  But then I use the rule I mentioned and have mostly worked with f/4 - f/2 optics!
Carande 1.20 · 2 likes
This recent YouTube video addressed this very issue and might be of interest.  https://youtu.be/T0JDvllCaV4?feature=shared
Alan_Brunelle · 4 likes
Richard Carande:
This recent YouTube video addressed this very issue and might be of interest.  https://youtu.be/T0JDvllCaV4?feature=shared

Hi Richard,

I just looked at that video and I do not think that experiment was well set up.  It was a one-shot-color camera, which I am familiar with, and he was comparing 10 min subs vs. 1 minute subs using a RASA.  That is an f/2 system and I cannot understand how anyone would use 10 minute subs with an f/2 optic.  When I started out working with my RASA 11, I did do up to 3 minute subs, but was not aware of the issues of saturating the sensor.  After I learned more, my exposure times for f/2 systems almost always came down to around a minute, plus or minus a bit.  I often work at 45 sec.  And with 5 hrs of integration time for the Astrobin Survey work, I can see objects at 19th magnitude or even better.  I work in Bortle 4 skies.  This guy was doing his work in Bortle 7 skies?!  I can't imagine that he should be working at anything over a minute exposure time under normal circumstances with the gain he was using.  This, I think, is very different from what Andrea was intending, which is basically using almost Lucky Imaging techniques for deep sky objects.

The only thing that I think the video demonstrated was that longer subs are subjected to more blurring due to wind effects on the stability of his setup.

Best,
Alan
jhayes_tucson 22.44 · 6 likes
What gain was used with the IMX533 and how much read noise does it have?  The only way that you'll have much success stacking really short exposures is if the read noise is low enough that, when stacked, the total read noise is well below the photon noise.  That doesn't appear to be the case in this comparison.

For long exposure imaging, I typically use unity gain to maximize well depth.  That gain setting would be terrible for short exposure imaging because of the relatively high read noise.

John
SemiPro 7.67 · 1 like
We live in a world where your optimal sub exposure times can be both longer and shorter than you think. It also highly depends on the light pollution, your telescope and what camera you are using.

If your imaging system is fast enough, you could very well find yourself shooting LRGB to the tune of 30 second sub exposures. Conversely, for narrowband you are still injecting a lot of stacking noise even at 20 minute exposure times.

If we forget about super high dynamic range objects for a moment, what you want to do is to try to find the shortest possible sub-exposure length that does not adversely affect the noise added to the total integration from stacking.

Shorter subs will always be better, because they are less saturated, less prone to guiding problems, less prone to satellite streaks, etc. The problem is just that if you go too short for your system you are adding noise that could have been avoided.

image.png

Here are some of my calculated sub-exposure times. The 'extra noise' is the extra noise from stacking all the sub-exposures.

It doesn't surprise me that a stack of 5 second subs injected a lot of noise. Even at F/2 I am running at least 15 seconds on the L channel.
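The "shortest sub that doesn't hurt" idea above can be sketched with the usual stacking-noise formula (the approach popularized in Robin Glover's talks, mentioned later in this thread). The sky rates and read noise below are assumed values for illustration, not the figures from the table.

```python
def optimal_sub_exposure(read_noise, sky_rate, noise_increase=0.05):
    """Shortest sub length (s) whose stack stays within `noise_increase`
    (0.05 = 5%) of the noise of a single ideal long exposure.

    Ratio of stack noise to ideal noise = sqrt(1 + RN^2 / (t * sky));
    set this equal to (1 + E) and solve for t.
    """
    return read_noise ** 2 / (sky_rate * ((1 + noise_increase) ** 2 - 1))

# Assumed sky rates in e-/s/px: bright broadband sky vs 3nm narrowband
for label, sky in [("L at f/2", 1.5), ("Ha 3nm", 0.05)]:
    t = optimal_sub_exposure(read_noise=1.5, sky_rate=sky)
    print(f"{label}: ~{t:.0f} s per sub")
```

With these assumptions the broadband case lands in the ~15 s ballpark while the narrowband case runs to several minutes, matching the general shape of the argument: faster systems and brighter skies push the optimum down, narrowband pushes it way up.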
AccidentalAstronomers 11.14 · 8 likes
Despite the excellent explanations, I still find all this very confusing. The overriding factor for me is not read noise or extra noise, it's brain noise. I'm old and slow now. So dealing with a ton of different exposure times would be too much for me to organize. I've got three to five rigs going at any given time, so I've just settled in on three exposure times: (1) 60s for RGB stars for narrowband; (2) 180s for LRGB for broadband; and (3) 300s for narrowband, all at gain 2750 on the Moravian C5 and C3, and gain 100 on the ASI6200. I don't know whether this is optimal, but it seems to work for me.
andreatax 7.76
John Hayes:
What gain was used with the IMX533 and how much read noise does it have?  The only way that you'll have much success stacking really short exposures is if the read noise is low enough that, when stacked, the total read noise is well below the photon noise.  That doesn't appear to be the case in this comparison.

For long exposure imaging, I typically use unity gain to maximize well depth.  That gain setting would be terrible for short exposure imaging because of the relatively high read noise.

John

I'm in e- counting mode, so on my camera the gain is 200 and the RON is at its minimum; see below for a sensor analysis (of the actual camera):
image.png
andreatax 7.76
Alan Brunelle:
What I see in your right image is not just that it seems noisier, but that the noise appears to have a pattern. Roughly striations, disjointed, coming from "noise elements" that seem aligned roughly from upper right to lower left. This seems to come from contributions from darker pixels or groupings of dark pixels. The dark groupings sometimes seem clumped into features not necessarily aligned. Between these ridges of dark seem to be lighter island areas whose noise is similar to, or maybe even less than, the noise in the left image. Not sure what to make of this.


The pattern might be due to the fact that the short shots were unguided (I was measuring mirror shift) and, given their length, I didn't think to run CosmeticCorrection beforehand, which the results (or rather the background) might have benefited from. Although, to be honest, with the amount of movement there shouldn't be a lot of cross-correlation. But my main concern wasn't how the background is behaving but rather how the image dynamics are affected.
Bennich 1.91
I actually started looking into this as well... with all the bad weather, I have started looking at all the other little things I can optimize on my different setups.

Robin Glover (as you most likely already know) has done a lot of writing and speaking on this topic. 
There are multiple other threads on this here on AB and CN. 
One example is this one - https://www.astrobin.com/forum/c/astrophotography/deep-sky/robin-glover-talk-questioning-length-of-single-exposure/

This morning I did a sensor analysis on my ASI2600MM pro. 
Screenshot 2024-04-22 at 12.54.23.png

If I do the math on my camera and a Bortle 4.5-ish sky, I end up at about 120 sec per exposure as the optimal exposure length for LRGB.
I will investigate this a bit more over the summer - as the nights get shorter and shorter here in DK.
cioc_adrian
andrea tasselli:
Not so long ago there was a rather long discussion on AB about the merits and demerits of short exposures for DSO imagery, with one fellow clocking an amazing number of short exposures for both M81 and M51, and one of the questions that came up was: how different are they if both have the same integrated duration? By happenstance (I forgot to turn off auto-saving in the NINA preview) I collected a rather large number of 5s exposures; 112, to be exact (once I pruned the worst ones, FWHM-wise), and since I followed up with the actual imaging plan I can compare the results (same everything, one set shot immediately after the other). The (regular) imaging run uses 3 min integrations, so 3 integrations are roughly equivalent to 112 x 5s (560s vs 3x180=540s), and here are the results:
image.png
It's easy to guess which one is which (or just read the header), and this is proof positive that very short exposures are a very bad idea for DSOs. Not only are the results way noisier (112xRON vs 3xRON), but the overall depth is worse. This is with an IMX533 sensor, i.e. a low-noise, high-efficiency modern CMOS.

*** Not really a good comparison. In order to obtain the same SNR in both situations, you must gather more 5s subs. Basically, if you go below the optimal sub exposure you need more total subs to reach the same SNR. I'll dig up the equation, it's somewhere in my python astro scripts.
IF for some reason you want to compare strictly equal total exposure times then your findings are correct and expected. But you can compute this stuff, no need to waste clear nights.
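The equation alluded to above isn't given in the thread, but a standard sky-limited SNR model captures the idea of "how much more total time do shorter subs need?". A sketch with assumed sky rate and read noise values:

```python
def time_to_match(t_short, t_long, total_long, sky_rate, read_noise):
    """Total seconds of t_short subs needed to match the stacked SNR of
    `total_long` seconds shot as t_long subs (faint, sky-limited target).

    For a faint object, SNR^2 scales as T / (sky + RN^2 / t); equate the
    two configurations and solve for the short-sub total time T.
    """
    penalty = lambda t: sky_rate + read_noise ** 2 / t
    return total_long * penalty(t_short) / penalty(t_long)

# Assumed values: 1.5 e-/s/px sky, 1.5 e- read noise
T = time_to_match(5, 180, 540, sky_rate=1.5, read_noise=1.5)
print(f"~{T:.0f} s of 5 s subs ({T / 5:.0f} subs) to match 540 s of 180 s subs")
```

Under these assumptions the 5 s subs need roughly 30% more total time than the 180 s subs to reach the same SNR, which is exactly why equal-total-time comparisons favor the longer subs.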
dkamen 6.89
I ran a similar experiment during the last full moon. The target was M3 and I shot 5432x2" exposures (gain 300) vs 26x60" exposures (gain 101). Short exposures revealed more stars with better colours, and the globe looks "fatter", which is to be expected at 7X the integration time. But the background was extremely noisy. On the other hand, long exposures resulted in tighter and better corrected stars after BlurX, much to my surprise. I think this is because the picture was closer to the kind of data BlurX has been trained on.

Here are the integrated stacks (2432 is a typo, it was of course 5432):

unstretched.png

And the final images, made to look as similar as possible (quite amazing, actually, that the short exposure version was "hiding" all those stars):

processed.png

Like I said in the description of the right-side image when I uploaded it in my gallery, the target type lends itself to the short exposures technique: you basically have stars  situated against an empty background. Stars benefit from short exposures because they won't burn, will not elongate and so on. And the empty background renders FPN irrelevant because it can be darkened/denoised to oblivion. 

BUT the technique would not work so well for another target whose dimmer parts you do care for (an emission nebula for instance). Those dim parts would be hurt pretty bad by the FPN and by anything you do in post to fight the FPN. 

Also, even though the short exposures version reveals more stuff, you cannot really call it a major success if you take into account that it used 7X the integration time, 8X the capture time and 208X the disk space. If I took another 30x60" subs or so, I bet the "long-exposure" globe would look just as fat and the only real advantage of the short exposure version would be better colour preservation in a handful of big stars. Short exposures produce something better, but FAR from your money's worth of better, so to speak. 

So short exposures have their merits under certain circumstances: if the target is bright, if you are undermounted and so on.  But they also have important drawbacks and I would never recommend them for your average nebula. 

Cheers,
Dimitris
CygnusBob
While the use of short exposures may not result in the highest SNR image, it can increase the sharpness of the final image.  DSO lucky imaging lets you stack the images that have lower FWHM values.  Generally seeing can vary quite a bit during the night.  Take a look at the output of seeing monitors at various remote sites.  Also, if the mount tracking is less than perfect, aligning all of the short exposures will result in minimizing the image blur due to mount periodic error, drift, etc.

Bob
coolhandjo 1.91
For me I only ramp up exposure time on faint targets. I usually shoot 300 sec. But amp it up to 900 sec on faint targets.

I noticed the longer subs require a very different post processing technique.

Less BE. Less Gradient Removal. And a softer stretch.

Reason is there is a lot of faint detail in the background that gets captured in these tools and the results are noisy and drastic.
TareqPhoto 2.94
Timothy Martin:
Despite the excellent explanations, I still find all this very confusing. The overriding factor for me is not read noise or extra noise, it's brain noise. I'm old and slow now. So dealing with a ton of different exposure times would be too much for me to organize. I've got three to five rigs going at any given time, so I've just settled in on three exposure times: (1) 60s for RGB stars for narrowband; (2) 180s for LRGB for broadband; and (3) 300s for narrowband, all at gain 2750 on the Moravian C5 and C3, and gain 100 on the ASI6200. I don't know whether this is optimal, but it seems to work for me.

I was experimenting in the past, trying to settle on my ideal exposures, and I think I am following your path as well: I was using 300 sec for my NB and about 1-2 minutes for broadband. I didn't try RGB star images, but after a long stop I am getting back to imaging and will put together a dedicated setup just for RGB stars; I was thinking of 30s or 60s maximum for that, depending on the scope and sky. I don't like to limit myself much, but I also don't want to overdo it like some do, with 20-30 min NB exposures and maybe 5 minute RGB exposures; that is too much for me, really. So with my cameras, new and old, I will settle on a fixed exposure I can use every time on any setup.
gnnyman 4.52 · 1 like
Alan Brunelle:
Richard Carande:
This recent YouTube video addressed this very issue and might be of interest.  https://youtu.be/T0JDvllCaV4?feature=shared

Hi Richard,

I just looked at that video and I do not think that experiment was well set up.  It was a one-shot-color camera, which I am familiar with, and he was comparing 10 min subs vs. 1 minute subs using a RASA.  That is an f/2 system and I cannot understand how anyone would use 10 minute subs with an f/2 optic.  When I started out working with my RASA 11, I did do up to 3 minute subs, but was not aware of the issues of saturating the sensor.  After I learned more, my exposure times for f/2 systems almost always came down to around a minute, plus or minus a bit.  I often work at 45 sec.  And with 5 hrs of integration time for the Astrobin Survey work, I can see objects at 19th magnitude or even better.  I work in Bortle 4 skies.  This guy was doing his work in Bortle 7 skies?!  I can't imagine that he should be working at anything over a minute exposure time under normal circumstances with the gain he was using.  This, I think, is very different from what Andrea was intending, which is basically using almost Lucky Imaging techniques for deep sky objects.

The only thing that I think the video demonstrated was that longer subs are subjected to more blurring due to wind effects on the stability of his setup.

Best,
Alan

I totally agree - I have the RASA 11 as well and would never ever take exposures of 10 min... that is, in my opinion, just not reasonable. My exposure range for the RASA is between 30 seconds and 120 seconds; in very, very rare cases I do 180 seconds, that's it. I am working in a Bortle 3-4 area and up to now I have never felt the desire to go as far as 10 minutes/sub.

CS
Georg
MuslimAstronomer 0.00
Interesting comparison @andrea tasselli. I would be curious to see another comparison with guiding and dithering executed on the 5s integration as the observed noise pattern in so many subs may have been influenced by the (I'm assuming) lack thereof during NINA's auto saving preview. 

I'm also of the "longer exposure" school of thought where appropriate - though this is delicately balanced in my London skies.
astrospaceguide 2.41
Just my 2 cents... but I use a RASA 11 and take 5 min subs in NB and usually 30-60s subs in RGB.  It collects fast at f/2 and easily saturates the stars to the point where you have no color other than at the very edge of the halos.  I've been experimenting with 5 second subs, lots of them, stacking just enough to overcome the camera noise and getting more color across stars... transplanting those RGB stars onto the narrowband images... it's been working well.  From Bortle 3-4.  I can see it highly depends on the target, the goal of the image, etc.
andreatax 7.76 · 1 like
Hamza Ilyas @Muslimastronomer:
Interesting comparison @andrea tasselli. I would be curious to see another comparison with guiding and dithering executed on the 5s integration as the observed noise pattern in so many subs may have been influenced by the (I'm assuming) lack thereof during NINA's auto saving preview. 

I'm also of the "longer exposure" school of thought where appropriate - though this is delicately balanced in my London skies.

Thanks for your comments.

I'm not really sure that would make much sense in the overall context. If I were to dither every integration, the overhead time could easily surpass the integration time for short exposures of 5s, and the disadvantage would be even larger w.r.t. long integrations. In the time the short integrations were captured, the total displacements (in pixels) were 81 px in one direction and -17 px in the other; that's the effect of both mirror shift and RA and Dec drift, which I think should be large enough to avoid walking noise. Yet that doesn't rule out other fixed-pattern noise surfacing (and I don't have an auto-correlation value to measure the drift against). But on the flip side, I never dither, so what is good (or not) for the goose is good (or not) for the gander, if you take my meaning...
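The overhead arithmetic behind "dithering every 5 s sub costs more than the exposure itself" is easy to sketch. The 15 s dither-and-settle cost below is an assumed figure, not one from the thread:

```python
def imaging_efficiency(t_sub, dither_overhead, dither_every=1):
    """Fraction of wall-clock time spent exposing when each dither
    (including settling) costs `dither_overhead` seconds and happens
    every `dither_every` subs."""
    exposing = dither_every * t_sub
    return exposing / (exposing + dither_overhead)

# Assumed 15 s dither-and-settle cost
print(f"5 s subs, dither every sub:   {imaging_efficiency(5, 15):.0%}")
print(f"180 s subs, dither every sub: {imaging_efficiency(180, 15):.0%}")
print(f"5 s subs, dither every 10:    {imaging_efficiency(5, 15, 10):.0%}")
```

With these assumptions, per-sub dithering wastes most of the night for 5 s subs while barely denting 180 s subs; dithering only every N-th short sub recovers much of the lost time.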
jrista 8.59 · 2 likes
andrea tasselli:
Not so long ago there was a rather long discussion on AB about the merits and demerits of short exposures for DSO imagery, with one fellow clocking an amazing number of short exposures for both M81 and M51, and one of the questions that came up was: how different are they if both have the same integrated duration? By happenstance (I forgot to turn off auto-saving in the NINA preview) I collected a rather large number of 5s exposures; 112, to be exact (once I pruned the worst ones, FWHM-wise), and since I followed up with the actual imaging plan I can compare the results (same everything, one set shot immediately after the other). The (regular) imaging run uses 3 min integrations, so 3 integrations are roughly equivalent to 112 x 5s (560s vs 3x180=540s), and here are the results:
image.png
It's easy to guess which one is which (or just read the header), and this is proof positive that very short exposures are a very bad idea for DSOs. Not only are the results way noisier (112xRON vs 3xRON), but the overall depth is worse. This is with an IMX533 sensor, i.e. a low-noise, high-efficiency modern CMOS.

I always love seeing visual comparisons to go along with the theory. 

In this case, it's a pretty raw and direct apples-to-apples comparison, and I think that apples-to-apples comparisons are too often shirked in favor of apples-to-oranges when oranges better suit one of the approaches. ;)

I think, though, that there may not be consistency in what "short" vs. "long" exposures are... In this case, it's 5 seconds vs. 180 seconds. There are two things there... one, the difference is DRAMATIC, so the more dramatic difference in the visual results should be expected to a degree. The other is that at 5s it's unlikely you are swamping the read noise to even a reasonable degree, and that's probably the key impacting factor here.

For me, 3m subs are not particularly long. Maybe not particularly short, either. Maybe I am just old school, but when I think of "long" exposures, I think back to when I was doing 10 minute subs, or even the 15, 20, 30 minute subs that a lot of the CCD imagers used when I first got into the hobby years ago. Back then, 5 minutes was "short" and anything under 60 seconds was just read noise, basically.

Now, 5 seconds, interestingly, is actually viable thanks to the very low read noise CMOS cameras have these days. That said, I think that such short exposures are probably better paired with an appropriately high gain setting, to minimize the potential read noise. In your case, half an electron of difference in read noise between the gain you used and the minimum possible may not seem like much, but when you're only getting a handful of electrons in each sub, that half an electron could make a difference (dynamic range notwithstanding.)

It would be interesting to see, though, when "very short" exposures become viable, and at what level of read noise and DR. Your 3 minute subs look to be thoroughly swamping the read noise. More than is necessary, probably, given the quality of the background signal there. At what point do the gains of "longer" exposures fall off? Is it 3 minutes, or 2, or 1m30s? At what point do "short" exposures start to look like longer exposures... 10s, 30s, 60s?

A 5s exposure isn't just short, it's very short. And even with CMOS cameras, unless your LP is psychotically bright, it's doubtful that such short exposures would swamp the read noise to a reasonable degree. That doesn't mean, though, that some form of short exposure wouldn't be viable. There can be benefits to acquiring lots of shorter subs. For one, it allows much finer-grained culling of less-than-ideal subs, to optimize the final stack for some particular characteristic (i.e. star roundness, or detail sharpness, etc.) This kind of stack optimization is less viable with longer exposures (barring exceptional equipment like absolute-encoder mounts and the like.)

Anyway, I love a good visual example! Thanks for sharing.
Alexn 0.00
It's really one of those things though, isn't it...

This is discussed at length, so often... There are videos where someone will show mathematically that, provided your total integration time is the same and you're using a camera whose read noise is low enough not to factor in dramatically, the results come out essentially the same.

I started Astrophotography with a SBIG ST10XME... The read noise on that camera was quite intense, and while it would calibrate out reasonably well, it would still swamp signal on most targets, despite the ST10XME having a higher QE than nearly any other camera ever produced (including newer CMOS cameras). Back in the days of these CCD's, your only course of action was to shoot the longest possible exposures, to build more signal per sub, so that when you stacked it, you were stacking strong SNR images, and the accumulation of read noise in the final integrated image was significantly less than the signal that was collected.  It was NOTHING to be running 1800s exposures in narrowband and at least 600s in LRGB to negate the per-frame noise... This was true from the ST7 and 8 all the way through to the STL-11000K, and STX-8300, even the STX-L 16802 and 16200 CCD's... 

I've recently moved from a KAF-8300 CCD to the IMX-294 CMOS, and I've been amazed in the quality of image I've been able to obtain using relatively short subs (180 - 300s), and then taking two to three times the sub exposure count to reach a similar total integration time. 

I will say this though... longer subs go deeper - it's as simple as that... total integration equalisation will get you 90% of the way there, but there are details, faint stars and background galaxies that simply do not resolve in 3 min subs, so no matter how many of them you stack, they just don't appear, or appear incredibly weak...

I have not tested with my CMOS camera, but, I'd be looking at running 180s vs 600s subs (average CMOS exposure time vs average CCD exposure time) at the same gain and offset, and running say, 3hrs of 3min subs, and 3hrs of 10 min subs.

Providing your rig is capable of long exposures without star elongation etc, I'd be very very surprised to find that the longer exposures did not produce a better overall image...

The factors in there to consider are:
Dark current - Longer exposures hurt if your camera has high dark current - but darks should resolve that.
Read noise - There will be a dramatic improvement each way, More subs with significantly low read noise, longer subs with significantly high read noise.
Well depth - CMOS cameras typically do not have the well depth the old CCDs had, so longer subs tend to saturate CMOS pixels sooner. You can mitigate this with gain settings etc., but then you lose sensitivity as a result, and sometimes you increase your read noise too... 
Sky Quality - If you're in Bortle 7/8, good luck with 600s LRGB subs...
Mount/Tracking accuracy - My general belief is that if you can run 10 consecutive 180s subs, you can run 1800s subs, there are times however, where this isn't the case, and losing a 30 minute sub HURTS if something goes wrong.... 


I am, and always have been of the mind that you run the longest subs that your equipment and sky allow... 
I expose long enough to fill wells, provided the mount accuracy and guiding will support the sub duration. with my rig, 10mins for broadband in dark skies, 5mins at home, 20mins for NB in dark skies, and 10mins at home. (depending on the moon)..

But then I've seen phenomenal stuff produced with 1 and 2 min subs... and if you lose one to a weird issue, vibration, etc - big deal, but if you lose a 30 min sub to a gust of wind at the 20 minute mark, you're pretty mad...
jrista 8.59 · 1 like
Alex Nicholas:
It's really one of those things though, isn't it... 

This is discussed at length, so often... There are videos where someone will show mathematically that, provided your total integration time is the same and you're using a camera whose read noise is low enough not to factor in dramatically, the results come out essentially the same.

I started Astrophotography with a SBIG ST10XME... The read noise on that camera was quite intense, and while it would calibrate out reasonably well, it would still swamp signal on most targets, despite the ST10XME having a higher QE than nearly any other camera ever produced (including newer CMOS cameras). Back in the days of these CCD's, your only course of action was to shoot the longest possible exposures, to build more signal per sub, so that when you stacked it, you were stacking strong SNR images, and the accumulation of read noise in the final integrated image was significantly less than the signal that was collected.  It was NOTHING to be running 1800s exposures in narrowband and at least 600s in LRGB to negate the per-frame noise... This was true from the ST7 and 8 all the way through to the STL-11000K, and STX-8300, even the STX-L 16802 and 16200 CCD's... 

I've recently moved from a KAF-8300 CCD to the IMX-294 CMOS, and I've been amazed in the quality of image I've been able to obtain using relatively short subs (180 - 300s), and then taking two to three times the sub exposure count to reach a similar total integration time. 

I will say this though... longer subs go deeper - it's as simple as that... total integration equalisation will get you 90% of the way there, but there are details, faint stars and background galaxies that simply do not resolve in 3 min subs, so no matter how many of them you stack, they just don't appear, or appear incredibly weak...

I have not tested with my CMOS camera, but, I'd be looking at running 180s vs 600s subs (average CMOS exposure time vs average CCD exposure time) at the same gain and offset, and running say, 3hrs of 3min subs, and 3hrs of 10 min subs.

Providing your rig is capable of long exposures without star elongation etc, I'd be very very surprised to find that the longer exposures did not produce a better overall image...


I am, and always have been of the mind that you run the longest subs that your equipment and sky allow... 
I expose long enough to fill wells, provided the mount accuracy and guiding will support the sub duration. with my rig, 10mins for broadband in dark skies, 5mins at home, 20mins for NB in dark skies, and 10mins at home. (depending on the moon)..

But then I've seen phenomenal stuff produced with 1 and 2 min subs... and if you lose one to a weird issue, vibration, etc - big deal, but if you lose a 30 min sub to a gust of wind at the 20 minute mark, you're pretty mad...

There should be a point of equalization, though, between a CCD with high read noise and long exposures, and a CMOS with low read noise and moderate exposures. If you swamp the read noise by the same ratio, then you should NOT be finding that CMOS is incapable of capturing and rendering faint details the same. So if you are aiming for say 10xRN^2 criterion, and you are dithering well with both systems, then both systems should be capable of resolving exactly the same details.

With some CMOS cameras, you actually have better characteristics than some of the popular CCDs. The IMX455 for example, at its highest DR HCG mode, has more dynamic range than the KAF-16803 and its gigantic pixels. That means with optimal exposures with both cameras, there should be absolutely no handicap with the IMX455 vs. the KAF-16803, and in fact you should be able to use longer exposures without clipping anything using the CMOS than with the CCD.

There IS going to be a per-sub difference between these two cameras. If you aim for 10xRN^2, then you would need ~30e- signal per CMOS frame, and 810e- signal per CCD frame. The CCD frame (individual single frame here) has TWENTY SEVEN (27) TIMES more signal! On a PER FRAME basis, yes, the CCD frames will look better than the CMOS frames. Stack 27 of the CMOS frames, though, and you shouldn't see any difference in the total amount of signal captured or SNR. The CHARACTERISTIC of the noise won't be the same...because the CMOS camera pixels are resolving much finer details, and the noise profile will be finer grained as well (which can make the background noise profile look more coarse.)

If you integrate the same total amount of time, then both cameras should be producing the same SNR. There should not be any reason why a CMOS camera wouldn't capture the same faint signals. I know that a lot of people think that if you don't capture at least some photons on an object in each frame, then you won't be able to resolve it at all. This is false. You can capture a "fraction" of a photon per frame (which really means only capturing a photon every few frames) from a very faint object, and so long as you accumulate enough total signal on that object in your stack to overcome the total amount of noise, you WILL resolve that object, with one key exception (in a moment.) 
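The fraction-of-a-photon point can be checked with a quick Monte Carlo sketch. Everything here is illustrative: the flux and read-noise numbers are assumptions, and `simulate_stack` is a made-up single-pixel model, not a real pipeline.

```python
import math
import random

random.seed(42)  # reproducible sketch

def sample_poisson(lam):
    """Knuth's Poisson sampler (fine for small lambda)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

def simulate_stack(n_subs, flux_per_sub=0.3, read_noise=1.0):
    """Mean of n_subs readings of one pixel: Poisson photon
    counts at flux_per_sub e-/sub plus Gaussian read noise."""
    total = sum(sample_poisson(flux_per_sub) + random.gauss(0.0, read_noise)
                for _ in range(n_subs))
    return total / n_subs

# A pixel averaging 0.3 photons per sub (a photon only every ~3 subs)
# still converges toward its true flux as the stack gets deeper:
for n in (10, 100, 2000):
    print(n, round(simulate_stack(n), 3))
```

The shallow stacks bounce around wildly, but the deep stack settles near the true 0.3 e-/sub flux even though most individual subs recorded zero photons from the "object."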

Here is a stack that shows continued faint signal revelation through 400 CMOS subs stacked:



You can see numerous objects and stars that were not present in the shorter stacks, that do show up eventually as the stack gets deeper. These were all calibrated and well dithered, which I think is a critical factor here (dithering in particular, insufficient or ineffective dithering can be a limiting factor.) I also think that both dark and optimal flat calibration is key as well, since both correct for forms of FPN (which is another limiting factor.) 

The key potential limiting factor when it comes to revealing really faint objects is scattering within the scope. Refractors, especially those with nanocoatings, do better here than reflectors. Scattering by the scope can scatter the rare and occasional photons from very faint objects so that they just become part of the incoherent background signal. This could be a factor in why switching from a CCD to a CMOS might look like you have lost the ability to resolve faint objects, if you also switched from a scope that managed scattering better to one that managed it more poorly.

A properly multicoated refractive optic will reflect and scatter less than a normally ground mirror. Even a very finely ground high grade mirror will not necessarily perform as well from a scattering standpoint. A nanocoated refractive optic will scatter/reflect less than 0.05% of the light, and they offer the best performance here. The above was acquired with a 600mm f/4 Canon lens that uses nanocoatings on internal element surfaces...the same object imaged with say one of the commonly used newtonian telescopes, with the same camera, wouldn't go as deep, and may in fact have distinct limitations on how deep it COULD go... That would be regardless of the camera used.
Like
TareqPhoto 2.94
...
· 
·  1 like
Jon Rista:
Alex Nicholas:
Its really one of those things though, isn't it... 

This is discussed at length, so often... There are videos where someone shows mathematically that, provided your total integration time is the same and you're using a camera whose read noise is low enough not to factor in dramatically, the stacked results come out essentially equivalent.

I started astrophotography with an SBIG ST10XME... The read noise on that camera was quite intense, and while it would calibrate out reasonably well, it would still swamp the signal on most targets, despite the ST10XME having a higher QE than nearly any other camera ever produced (including newer CMOS cameras). Back in the days of these CCDs, your only course of action was to shoot the longest possible exposures, to build more signal per sub, so that when you stacked them you were stacking strong-SNR images, and the accumulated read noise in the final integrated image was significantly less than the signal collected. It was NOTHING to be running 1800s exposures in narrowband and at least 600s in LRGB to negate the per-frame noise... This was true from the ST7 and 8 all the way through to the STL-11000K, and STX-8300, even the STX-L 16802 and 16200 CCDs...

I've recently moved from a KAF-8300 CCD to the IMX294 CMOS, and I've been amazed at the quality of image I've been able to obtain using relatively short subs (180-300s), then taking two to three times the sub count to reach a similar total integration time.

I will say this though... longer subs go deeper - it's as simple as that... total integration equalisation will get you 90% of the way there, but there are details, faint stars and background galaxies that simply do not resolve in 3 min subs, so no matter how many of them you stack, they just don't appear, or appear incredibly weak...

I have not tested this with my CMOS camera, but I'd look at running 180s vs 600s subs (average CMOS exposure time vs average CCD exposure time) at the same gain and offset: say, 3 hrs of 3 min subs against 3 hrs of 10 min subs.

Providing your rig is capable of long exposures without star elongation etc, I'd be very very surprised to find that the longer exposures did not produce a better overall image...


I am, and always have been, of the mind that you run the longest subs that your equipment and sky allow...

I still enjoy your images with those old cameras; sounds like I shouldn't give up my old cameras then. Keep going. I will see what kind of exposures I can do with all my cameras, old or new.
Like
CygnusBob
...
· 
·  1 like
Take a look at my M51 image. A higher resolution version can be found at my gallery here on AstroBin. With the help of BlurXTerminator, I am getting a FWHM of ~1 arc-second. The OTA was an 8-inch SCT. The image was created from 10-second exposures with a luminance filter; a total of 2240 exposures were used in the final image shown. The camera was an ASI533MM Pro at a gain setting of 200 in ZWO speak (actually a linear gain of 10x), so I would expect the read noise to be ~1.3 electrons RMS. While the final SNR may not be great, the system is generating resolution close to what could be produced if the OTA were in space, thanks to the benefits of DSO lucky imaging. I only used 50% of the exposures collected during the real-time lucky imaging acquisition process.

M51Decon_5_4_2024CN.jpeg
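The best-percent selection step of DSO lucky imaging described above can be sketched like this. The frame names and FWHM values are hypothetical, and `select_best` is an illustrative helper; a real pipeline would measure FWHM from star profiles in each sub.

```python
def select_best(frames, keep_fraction=0.5):
    """Keep the sharpest keep_fraction of (name, fwhm_arcsec)
    frames, lowest FWHM first."""
    ranked = sorted(frames, key=lambda f: f[1])
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:n_keep]

frames = [("sub_001", 1.4), ("sub_002", 0.9), ("sub_003", 2.1),
          ("sub_004", 1.1), ("sub_005", 0.8), ("sub_006", 1.7)]

print([name for name, _ in select_best(frames)])
# prints the three sharpest subs: sub_005, sub_002, sub_004
```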
Like
jrista 8.59
...
· 
Tareq Abdulla:
I still enjoy your images with those old cameras; sounds like I shouldn't give up my old cameras then. Keep going. I will see what kind of exposures I can do with all my cameras, old or new.

Thank you.

I am of the pretty strong opinion that camera technology, really, is less important than the quality of the photons you are capturing. By that I mean polluted vs. not. Polluted skies are devastating to image quality, and IMHO getting away from the light pollution, or eliminating it, is the single best thing any imager can do for their astrophotography.

That means either using narrowband filters, which are an option for imaging under light-polluted skies, or finding and using a decent dark site (which is often FAR closer than the gray/black zones, since cameras have STATIC sensitivity while human eyesight has DYNAMIC sensitivity). In years past, I determined that outside of some of the more densely populated areas in the eastern half of the US (and similarly in the EU), most people probably live within an hour of a reasonably and sufficiently dark site for good quality astrophotography (for broadband, OSC or RGB, as well as narrowband).

If people can find and use a decent dark site, it will be more transformative to their astrophotography than any camera. This is not to say that technology...cameras, scopes, don't play a role...they do. But, for most people, the difference between a light polluted back yard and a decent dark site is often 15-25x, which usually far outpaces any relative differences between Camera A or B, or telescope A or B. The differences between cameras and telescopes would still make a difference at a given imaging site...so once you have a dark site you can use, then depending on your specific goals, a bigger scope, or a better sensor, could then allow you to optimize your results for your specific goals. So cameras and scopes (and mounts) DO matter...just, IMO, not as much as the difference between light polluted skies and dark skies.
Like
 