Let's discuss about dark, bias, dark-flats... [Deep Sky] Acquisition techniques · Daniel Arenas

Jbis29 1.20
“The statistical distribution that describes counting processes is Poisson statistics and it has the property that the standard deviation of the distribution (the uncertainty) is given by the square root of the mean, in our case the number of photons that arrive over the duration of the exposure.”

@Luca Marinelli so if I understand Poisson statistics correctly, then if over the duration of one image pixel A captures 20 photons, the expected standard deviation (and I assume accounted for) would be equivalent to about 4.5 photons? (I realize this would be read out in electrons, but for simplicity's sake I kept them as "photons".)

So is it true then that the camera's processor is processing these values in that statistical framework? Such that the uncertainty is written into the equation and is added into the process (i.e. noise is added because the variable is in the equation) by the equation? You can lower the overall uncertainty by using a cooled camera these days, which would reduce the uncertainty in the read noise. And it seems like older CCD cameras were the better option because of the linear readout at a confined point off the sensor, instead of the CMOS sensor. But can you lower the uncertainty by removing the variable from the equation?

I'm loving this discussion, btw. I hope my ignorance isn't bothersome.

kuechlew 7.75
Joseph Biscoe IV:
“The statistical distribution that describes counting processes is Poisson statistics and it has the property that the standard deviation of the distribution (the uncertainty) is given by the square root of the mean, in our case the number of photons that arrive over the duration of the exposure.”

@Luca Marinelli so if I understand Poisson statistics correctly, then if over the duration of one image pixel A captures 20 photons, the expected standard deviation (and I assume accounted for) would be equivalent to about 4.5 photons? (I realize this would be read out in electrons, but for simplicity's sake I kept them as "photons".)

So is it true then that the camera's processor is processing these values in that statistical framework? Such that the uncertainty is written into the equation and is added into the process (i.e. noise is added because the variable is in the equation) by the equation? You can lower the overall uncertainty by using a cooled camera these days, which would reduce the uncertainty in the read noise. And it seems like older CCD cameras were the better option because of the linear readout at a confined point off the sensor, instead of the CMOS sensor. But can you lower the uncertainty by removing the variable from the equation?

I'm loving this discussion, btw. I hope my ignorance isn't bothersome.

By cooling you can only reduce the thermal noise, i.e. the additional electrons created in the pixel by heat. There is no way to reduce the noise created by the Poisson distribution of the signal. What you can achieve, though, is an improvement of the signal to noise ratio. Since the signal increases linearly with exposure time while the noise increases only with its square root, doubling the exposure will lead to an increase of SNR by a factor of sqrt(2) = 1.414...

Be aware that longer exposure time increases (!) noise. It's the SNR that gets better, not the noise.
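
A quick numpy sketch of that scaling, with a made-up photon rate (the numbers are invented, only the square-root behaviour matters):

import numpy as np

rng = np.random.default_rng(0)
rate = 50                    # made-up mean photon arrival rate per second for one pixel

for exposure in (60, 120):   # seconds; the second run doubles the exposure
    counts = rng.poisson(rate * exposure, size=100_000)   # many repeats of the same exposure
    signal = counts.mean()
    noise = counts.std()
    print(f"{exposure:4d} s   signal ~ {signal:7.0f}   noise ~ {noise:5.1f}   SNR ~ {signal / noise:5.1f}")

# Doubling the exposure doubles the signal while the noise only grows by ~sqrt(2),
# so the SNR improves by ~sqrt(2); the noise itself got larger, not smaller.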

Clear skies
Wolfgang

Jbis29 1.20
John Hayes:
Joseph Biscoe IV:
@John Hayes thanks for taking the time to clarify. I really do appreciate that. I have a lot to learn. I want to learn. I enjoy delving into the details and I have a need to understand how things work. I'll read whatever you'd suggest, starting with what you've sent so far. I understand that what we see as noise in the light frame is uncertainty. And that uncertainty is relative to the collector, i.e. the sensor. So I'm wondering, if I understand the physics of image capturing, the camera's sensor "reads" the values of released electrons at each pixel. (Photons rain down on the sensor, a photon releases an electron, the sensor reads how full each pixel is of electrons and outputs that value based on bit depth.) Why, or maybe I should ask how, does the processor see a group of electrons as "uncertain"?

Once again, thank you!

Ah...that's a very good question and you are the perfect straight man, Joe!  Let's do a thought experiment.  Wait for a rainy day, then take 10 identical bottles, position them in a line, and put a piece of wood over all ten to cover them.  Let's assume that it starts to rain and that the rain comes down with perfect uniformity over the bottles.  Now uncover the bottles for 5 minutes to gather water and cover them all at precisely the same time.  That's your exposure time.  Rain drops falling into a bottle are discrete events that perfectly mimic how photons arrive at your sensor, and both are described by Poisson statistics.  If you very carefully weigh each bottle to see precisely how much water, and hence how many drops, it gathered, you'll find a small variation between each of the 10 bottles.  That variation will turn out to be the square root of the average number of drops over the ten bottles, and that's the uncertainty in the average number of drops that you can expect to gather in any one bottle.  We call the average number of drops "the signal," and we call the uncertainty in what we measure in any one bottle "the noise."  So in discrete events that are driven by Poisson statistics, noise will grow as the square root of the average signal, and the SNR will also be given by the square root of the average signal.  That means that the more drops (or photons) you gather, the more the SNR will increase.  An important takeaway is that ALL measurements include uncertainty--there is no such thing as a "perfect" one-time measurement, and that applies whether you are measuring the length of a piece of wood, the brightness of a star using the cameras onboard JWST, or a gravitational wave using LIGO.
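
For anyone who wants to run the bottle experiment without waiting for rain, here is a small numpy sketch with a made-up drop count:

import numpy as np

rng = np.random.default_rng(1)
mean_drops = 10_000                            # made-up average number of drops per bottle
bottles = rng.poisson(mean_drops, size=10)     # ten bottles, uncovered for the same 5 minutes

print("drops per bottle:", bottles)
print("average (the 'signal'):", bottles.mean())
print("spread  (the 'noise') :", round(bottles.std(ddof=1), 1))
print("sqrt(average)         :", round(float(np.sqrt(bottles.mean())), 1))

# With only 10 bottles the measured spread is a rough estimate; raise size= and the
# spread converges to sqrt(mean), exactly as Poisson statistics predicts.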

I should add that a sensor is a bit more complicated than a bottle gathering rain drops because the incident photon has to interact with the semiconductor to produce a photo-electron and that photo-electron has to be measured by the electronics (which is ultimately where read noise comes from).  So you have to be a little careful about how you interpret the numbers if you want to convert ADU numbers from your sensor into photon noise.

Hopefully that makes sense.  Understanding the difference between signals and noise is the critical bridge needed to correctly understand how image calibration works.

John

This makes perfect sense!  If I understand correctly, the "noise" we see in images is the result of the standard deviation written into Poisson statistics. That is, "uncertainty is the square root of the average." There's no getting around the uncertainty because it is part and parcel of Poisson statistics.

Freestar8n 1.51
Joseph Biscoe IV:
This makes perfect sense!  If I understand correctly, the "noise" we see in images is the result of the standard deviation written into Poisson statistics. That is, "uncertainty is the square root of the average." There's no getting around the uncertainty because it is part and parcel of Poisson statistics.

If you take a single raw exposure with a typical sensor, a dominant noise term will be FPN (fixed pattern noise) - which is ultimately a form of manufacturing defect and has nothing to do with Poisson statistics.  Each pixel behaves differently and that difference shows up as noise in the image.

It is indeed also a form of uncertainty in that the particular offset of each pixel is not known a priori in a single image.  But since it is constant in time, you have the opportunity to average many darks and determine each offset - and then subtract it.  This is because imaging sensors have both a spatial dimension and a time dimension.  The FPN in a sensor is spatially random-ish, but temporally constant.  So you can effectively "take a picture of the noise" by averaging - and then subtracting.

FPN is still a noise term in the image even though it can be measured and subtracted.   It still has a form of uncertainty in that you don't know it a priori.  But that doesn't prevent you from measuring it and removing it - because it is constant in time, unlike Poisson noise.

The question in this thread is how good those masters need to be - and that amounts to asking how well the masters exactly capture the pure FPN variation across the pixels.  A single dark will contain the FPN - plus or minus the random noise that is present in any exposure.  In the case of read noise it is fairly Gaussian, and in the case of dark current noise it is Poisson.  In general it will be a mixture with an approximately constant sigma.  By averaging many darks into a master, that residual noise is reduced to near zero, which allows you to subtract the FPN cleanly - with a corresponding noise reduction.

Once you calibrate each frame you then stack many frames - and at that point you are dealing with a mixture of Gaussian read noise, dark current shot noise, and astro signal shot noise.  All of those noise terms will reduce in the average as sqrt(N) regardless of being Poisson or Gaussian.  But before you can do that averaging you need to remove the FPN - and that's what the masters are for.
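
A small simulation of that two-stage idea, with made-up noise levels, showing the master dark converging to the fixed pattern as 1/sqrt(N):

import numpy as np

rng = np.random.default_rng(2)
npix = 100_000
fpn = rng.normal(0, 20, npix)      # fixed pattern: random across pixels, constant in time
sigma = 10                         # per-frame random noise (read + dark current shot noise)

def master_dark(n_frames):
    # average n_frames simulated dark exposures
    return (fpn + rng.normal(0, sigma, (n_frames, npix))).mean(axis=0)

for n in (1, 4, 16, 64):
    residual = master_dark(n) - fpn        # error of the master relative to the true pattern
    print(f"{n:3d} darks: residual noise ~ {residual.std():5.2f}   (1/sqrt(N) predicts {sigma / np.sqrt(n):5.2f})")

# Subtracting a well-averaged master removes the FPN from each light and leaves only
# that light's own random noise, which then averages down when you stack.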

Frank

lucam_astro 9.15
Joseph Biscoe IV:
“The statistical distribution that describes counting processes is Poisson statistics and it has the property that the standard deviation of the distribution (the uncertainty) is given by the square root of the mean, in our case the number of photons that arrive over the duration of the exposure.”

@Luca Marinelli so if I understand Poisson statistics correctly, then if over the duration of one image pixel A captures 20 photons, the expected standard deviation (and I assume accounted for) would be equivalent to about 4.5 photons? (I realize this would be read out in electrons, but for simplicity's sake I kept them as "photons".)

So is it true then that the camera's processor is processing these values in that statistical framework? Such that the uncertainty is written into the equation and is added into the process (i.e. noise is added because the variable is in the equation) by the equation? You can lower the overall uncertainty by using a cooled camera these days, which would reduce the uncertainty in the read noise. And it seems like older CCD cameras were the better option because of the linear readout at a confined point off the sensor, instead of the CMOS sensor. But can you lower the uncertainty by removing the variable from the equation?

I'm loving this discussion, btw. I hope my ignorance isn't bothersome.

John's example with the water drops is perfect and it also presents some of the subtlety of different averaging processes that are being discussed in this thread.

For the moment, focus on the read-out of a single pixel on the sensor to remove ambiguity between spatial and temporal signal fluctuations. If you take one exposure, you will end up with a certain signal measured on that sensor pixel. Convert it back to a number of incident photons; let's say it corresponds to 110 photons. There is no uncertainty here. You measured 110 photons, not 100, not 120. You then measure a second exposure and you find it's 90 photons.  Now you can start getting an idea of the statistics of that measurement (the book by Taylor that John recommended earlier is an excellent reference on this topic). Repeat the measurement 50 times and all of a sudden you are going to find that a counting process with an average count of 100 will have a standard deviation of 10.

The average described above is an ensemble average, in the language of statistical mechanics. You pull a value for the signal at the sensor pixel out of the same physical condition and measure the statistical variation of that signal as you repeat the experiment many times. In this case you could say that this is a temporal average over many hours of exposures split into the relevant subexposures.

Now let's talk about what happens when you look at the whole image, and I hope I don't get into hot water with John and Frank. Take a smooth image and consider a small patch. The image intensity varies slowly on the scale of the patch size. On the other hand, if you have what we call a "noisy" image, the image intensity varies significantly from pixel to pixel, possibly with a random component. This "noise" is spatial in nature. You can measure it on a single image; you don't need a whole stack of frames.

You can define the signal to noise ratio for a pixel in the patch in two ways: 1) take the signal value at that pixel for many exposures, average it, and divide it by the standard deviation of the signal over the many exposures; 2) take a single exposure, average the signal over the patch around the pixel of interest, and divide it by the standard deviation of the signal over the patch. These are both acceptable definitions of SNR, and for uncalibrated data they will NOT give the same answer, because they are driven by different physical processes (when the two kinds of average do agree, a stochastic process is called ergodic).

The image calibration process attempts to remove this spatial variance across the image due to spatial variation of dark current (dark frames) and gain (flats) at the expense of a bit of extra temporal noise (variance from sub to sub). This extra variance can be made arbitrarily small by choosing an appropriate number of dark and flat frames as originally presented by John early in this thread.
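
A small numpy sketch of those two SNR definitions on simulated, uncalibrated subs (all numbers made up):

import numpy as np

rng = np.random.default_rng(3)
n_subs, npix = 50, 10_000
sky = 100.0                                            # flat, featureless patch of signal
fpn = rng.normal(0, 15, npix)                          # pixel-to-pixel offsets, constant in time
subs = sky + fpn + rng.normal(0, 10, (n_subs, npix))   # uncalibrated subs: signal + FPN + random noise

# 1) temporal SNR at one pixel: mean over exposures / std over exposures
pixel = subs[:, 0]
print("temporal SNR:", round(pixel.mean() / pixel.std(ddof=1), 1))

# 2) spatial SNR in one exposure: mean over the patch / std over the patch
frame = subs[0]
print("spatial  SNR:", round(frame.mean() / frame.std(ddof=1), 1))

# The spatial estimate also sees the fixed pattern, so the two numbers disagree until
# calibration removes it -- the ergodicity point made above.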

Jbis29 1.20
@Luca Marinelli It's all making sense now. It's a pretty amazing concept when you get the full picture. If I'm understanding you right (which I think I am), the right number of flats or darks will calibrate to the best ratio of uncertainty to temporal noise. That is, the uncertainty in the reading will calibrate out, but temporal noise may be added. If you exceed the "right" number of frames (in this case 16), you will introduce more temporal noise than is necessary.

AstroDan500 4.67
A lot of good information, and I will research a lot that I don't really understand, but here are real-world examples.
ASI294 color camera, (3) 300 sec + (8) 600 sec for 95 minutes total. Gain 120.
First image with no calibration.
Second image with calibration.
11 light frames, calibrated with (5) 0.75 sec flats, (5) 0.75 sec dark flats, and just (2) 300 sec dark frames.
Simple APP stack converted to JPEG.
North America Nebula.
Imaged in Bortle 8 with a William Optics GT71 scope and a 2" Radian Ultra filter.
The difference is pretty clear without pixel peeping.
[Attached images: nocalib.jpg, calib.jpg]

lucam_astro 9.15
Joseph Biscoe IV:
@Luca Marinelli It's all making sense now. It's a pretty amazing concept when you get the full picture. If I'm understanding you right (which I think I am), the right number of flats or darks will calibrate to the best ratio of uncertainty to temporal noise. That is, the uncertainty in the reading will calibrate out, but temporal noise may be added. If you exceed the "right" number of frames (in this case 16), you will introduce more temporal noise than is necessary.

No, there is no such thing as exceeding the "right" number of frames. If you look at John's figure earlier in this thread you'll see that at each pixel, the temporal standard deviation of the dark signal goes like 1/sqrt(N), where N is the number of exposures, so more frames will always reduce the uncertainty of the dark signal at each pixel. John's point was that after 16-32 exposures you have already reduced this variance to a very small number and additional time investment in shooting dark frames will not translate into a meaningful increase in calibration quality.
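
Putting numbers on the diminishing returns (this is just 1/sqrt(N), nothing camera-specific):

import numpy as np

for n in (1, 4, 16, 32, 64, 500):
    print(f"{n:3d} darks -> residual uncertainty {1 / np.sqrt(n):.1%} of a single dark's random noise")

# 16 frames already brings the residual down to 25%; going from 32 to 500 frames only
# moves it from ~18% to ~4%, a lot of extra capture time for very little gain.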

Jbis29 1.20
Luca Marinelli:

No, there is no such thing as exceeding the "right" number of frames. If you look at John's figure earlier in this thread you'll see that at each pixel, the temporal standard deviation of the dark signal goes like 1/sqrt(N), where N is the number of exposures, so more frames will always reduce the uncertainty of the dark signal at each pixel. John's point was that after 16-32 exposures you have already reduced this variance to a very small number and additional time investment in shooting dark frames will not translate into a meaningful increase in calibration quality.

@Luca Marinelli  Ah, ok. I was thinking that the uncertainty and the standard deviation were different. They are the same? Thank you for clarifying.

jhayes_tucson 22.40
Luca Marinelli:
John's example with the water drops is perfect and it also presents some of the subtlety of different averaging processes that are being discussed in this thread. [...]

Luca,
Thank you for such a clear amplification of how this stuff works and I appreciate your effort to help improve signal and decrease noise in this discussion.  

Here's something else that I wanted to mention.  The one thing that I have never liked about Janesick's book is that he mixes up signals, signal-modulation, and noise in how things are named, which can be confusing.  (To be fair, I think that some of this is historical.)  His book is also about how to characterize sensors by looking at parameters that vary across the sensor from pixel to pixel.  FPN, or "Fixed Pattern Noise," is one such example.  FPN is caused by non-uniform responsivity between the individual pixels, which is called PRNU (Pixel Response Non-Uniformity).  PRNU is a multiplicative effect (like vignetting) and it represents a variation in signal-modulation.  As such, FPN is not a noise term in the traditional sense, which is why it can be removed by direct division.  It represents uncertainty in a signal across the sensor due to variation in responsivity between pixels.  It is an important part of why flat calibration is so important.  Noise in our flat master represents the actual noise in the FPN signal-modulation that we remove.  We call it "Fixed Pattern Noise" only because we are following Janesick's convention.  Just remember that it is not a "true" noise term and it doesn't act like a noise term in the way that it contributes mathematically to uncertainty in the measurement--even though it may appear at first glance to add a random "noise-like granularity" across an image.  It comes from the sensor, not the measurement.
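
A toy illustration of that multiplicative behavior and why straight division removes it (made-up responsivity figures and an idealized, noise-free flat):

import numpy as np

rng = np.random.default_rng(4)
npix = 10_000
prnu = 1 + rng.normal(0, 0.02, npix)            # per-pixel responsivity, a multiplicative factor
sky = 1000.0                                    # made-up uniform illumination, in electrons

light = rng.poisson(sky * prnu).astype(float)   # a light frame: shot noise on the modulated signal
master_flat = prnu                              # idealized flat, already normalized to unit mean

print("pixel-to-pixel std before flat division:", round(light.std(), 1))
print("pixel-to-pixel std after  flat division:", round((light / master_flat).std(), 1))

# The multiplicative pattern divides out cleanly; what remains is essentially the
# sqrt(1000) ~ 32 e- of shot noise, which no calibration frame can remove.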

John

HegAstro 11.91
It seems very odd and misleading to refer to PRNU as noise. It modulates the signal in a similar fashion to optical vignetting. While you can "reverse" these modulations through flats, it is still true that the SNR in these areas will be different. For example, in the case of vignetting, you can visually remove it through flats, but it is still true that those areas have lower SNR due to the higher proportion of shot noise on account of the truly lower incident signal.

Ultimately, the true noise sources, in the sense of being random and following a statistical distribution, are read noise, shot noise, and dark current noise. The way to reduce them involves taking advantage of physics and statistics - in the case of shot noise, by increasing total exposure time or otherwise collecting more signal; in the case of read noise, by making it a small portion of your total noise by optimizing single exposure time; and in the case of dark current noise, by lowering sensor temperature.
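
A rough per-sub noise budget along those lines; the read noise, dark current, sky level, and exposure below are made-up but plausible figures for a cooled CMOS camera, not measurements:

import numpy as np

sky_signal   = 400.0    # e- of sky background per pixel per sub
read_noise   = 3.5      # e- RMS
dark_current = 0.002    # e-/s/pixel
exposure     = 300.0    # s

shot_noise = np.sqrt(sky_signal)
dark_noise = np.sqrt(dark_current * exposure)
total = np.sqrt(shot_noise**2 + read_noise**2 + dark_noise**2)

print(f"shot {shot_noise:.1f}  read {read_noise:.1f}  dark {dark_noise:.2f}  total {total:.1f} e-")

# With a reasonably exposed sub the shot noise dominates; cooling keeps the dark term
# negligible and a long enough sub keeps read noise a small fraction of the total.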

Jbis29 1.20
@John Hayes I found Photon Transfer as a PDF. Thanks for the resource. I've already begun reading.

kuechlew 7.75
The legal way to get it, printed or as a PDF, is Photon Transfer | (2007) | Janesick | Publications | SPIE.
You may find some cheaper used copies, of course.

Clear skies
Wolfgang

Freestar8n 1.51
It seems very odd and misleading to refer to PRNU as noise. It modulates the signal in a similar fashion to optical vignetting. While you can "reverse" these modulations through flats, it is still true that the SNR in these areas will be different. For example, in the case of vignetting, you can visually remove it through flats, but it is still true that those areas have lower SNR due to the higher proportion of shot noise on account of the truly lower incident signal.

Ultimately, the true noise sources, in the sense of being random and following a statistical distribution, are read noise, shot noise, and dark current noise. The way to reduce them involves taking advantage of physics and statistics - in the case of shot noise, by increasing total exposure time or otherwise collecting more signal; in the case of read noise, by making it a small portion of your total noise by optimizing single exposure time; and in the case of dark current noise, by lowering sensor temperature.

Some texts include PRNU as a form of pattern noise and others don't.  It is fundamentally different from dark current FPN because it isn't constant in each exposure and depends on the actual image in an exposure.  If you have no light signal at all there is no noise contribution from PRNU, but the dark current FPN is always there - and by definition it is always the same.

Nonetheless, PRNU results in pattern noise in each exposure and it can be greatly reduced with good flats.  Some people regard flats as simply for correction of vignetting, but they also serve to remove pattern noise in the lights.  For scientific work, the quality of the flats may end up being what limits the quantitative errors in the measurements.

As for the idea of "true noise sources" - FPN may not behave like a normal noise source since it is constant in time, but it is still a very real noise term in an image.  I have recommended a number of texts on sensor noise over the years and Janesick is just one of them.  But a good text should describe FPN and show it as another sensor term that adds in quadrature with other sensor noise sources in an image, such as read noise.  It is just unfortunate that when CCD imaging first became popular for amateurs, the texts that came out did not discuss this - and many made unfortunate claims such as "noise only increases during calibration."

As for the focus of this thread on Poisson noise - it's important to know that this square root property only applies to a signal that amounts to a count of arriving objects in a given time - such as photons or electrons.  ADU is not a count of arriving objects - so the noise of a Poisson signal, in ADU, will not be the square root of the signal, in ADU.  This may be confusing, but there is a gain factor involved.  I have seen numerous write-ups on the web where someone takes the square root of ADU - and that is a big mistake.
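
A small worked example of the gain conversion Frank describes (the gain value here is made up; use your own camera's figure):

import numpy as np

gain = 0.25             # e-/ADU (assumed); i.e. 4 ADU recorded per photo-electron
signal_adu = 8000.0     # measured sky level in ADU

signal_e = signal_adu * gain          # convert to electrons first
noise_e = np.sqrt(signal_e)           # Poisson noise lives in electrons
noise_adu = noise_e / gain            # then convert the noise back to ADU

print(f"correct noise: {noise_adu:.0f} ADU    naive sqrt(ADU): {np.sqrt(signal_adu):.0f} ADU")

# sqrt(8000 ADU) ~ 89 ADU is wrong here; the Poisson noise is sqrt(2000 e-) ~ 45 e-,
# which is ~179 ADU at this gain.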

Also - in terms of reducing the noise in an averaged stack - there is no need to view the noise as Poisson because any type of noise would reduce in the average as 1/sqrt(N).  The only requirement is that the noise has constant statistics for each pixel and is uncorrelated between exposures.  In reality the noise at each pixel won't be purely Poisson - but the 1/sqrt(N) still applies, where N is the number of frames.

I think it is important to understanding the calibration process to view it in two separate stages.  First you average many darks to create a master dark that accurately captures the inherent FPN - and that error goes as 1/sqrt(N) of the number of dark frames.  Then you apply that dark to each light and average many lights so the noise in each light - a mixture of Gaussian and Poisson noise but minimal FPN - will reduce as 1/sqrt(N).

Frank

D_79 1.43
John Hayes:
Daniel Arenas:
Thanks, John,

I assume that you're doing manual stacking. I'm using WBPP 2.4.5 to stack, following Adam Block's videos.
In that case, if I build a library of bias and stack all my data again with flats, darks and bias but no dark-flats, how can I tell whether there's any kind of improvement or not? Just visually, when stretching the master light with ScreenTransferFunction, or are there some parameters I can look at with a process in PixInsight to compare both master lights?

With the chart you shared with us I think it's clear that more than 16 subs are not necessary, 20 for those who want round figures. But once more, is there any way to test this with my camera, to see whether there's any positive variation? Maybe with statistics, is there any easy way??

I very much appreciate your contribution to this thread; in fact, I think all of us do.

Daniel,
Yes, I use manual stacking--mainly because I like to check everything as I go...and I find subtle problems all the time.  In your case, you can test everything by simply running WBPP with 16 flats, darks and bias frames and then do it with a lot more frames in the calibration files.  You can then compare the two results both visually and with one of the noise evaluation tools in PI.  Do the same with dark flats and no dark flats.  That will tell you how well the calibration process is working.

Good luck with it!

John

All right, I did it with 24 frames (of each type, except lights) and I stacked in 3 different ways. Then I ran two PixInsight scripts. For each test I'll show you the calibration diagram from the stacking script WBPP 2.4.5, the master light stretched with ScreenTransferFunction, the Noise Evaluation (CFA Bayer), and the SNR (Noise Evaluation and SNR are the scripts in PixInsight that you can find under Image Analysis).

My dedicated camera is a ZWO ASI2600MC Pro.

24 Darks + 24 Flats + 24 Dark-flats + no Bias:


[Screenshots: 01-D_DF.png, Captura_D_DF_F.png, Captura de pantalla 2022-08-22 133108.png, Captura de pantalla 2022-08-22 133343.png]

Now, 24 Darks + 24 Flats + 24 Bias + no Dark-flats:

[Screenshots: 02-B_D.png, Captura D_F_B.png, Captura de pantalla 2022-08-22 133758.png, Captura de pantalla 2022-08-22 133945.png]

And the last one with all the sub frames.

24 Darks + 24 Flats + 24 Dark-flats + 24 Bias:

[Screenshots: 02-B_D_DF.png, Captura de D, DF, B, F.png, Captura de pantalla 2022-08-22 134455.png, Captura de pantalla 2022-08-22 134628.png]

So, these are the results. They aren't so different, are they?
Can someone help me interpret the best "mode" (with or without dark-flats, and with or without bias)?
It seems that the SNR values are slightly better in the calibration with dark-flats and no bias, but I don't know if that's the best parameter to compare, or maybe the noise evaluation is the best one.

Thank you for your help in advance.

Clear skies!

Freestar8n 1.51
Thanks for the examples by @Daniel Arenas and @Dan Kearl

One thing to try is to use bias and darks that have a constant value equal to the mean of a single bias or dark - ideally after removing the genuine hot and cold (dead) pixels.

If there is no FPN to remove (which is what people were led to think for two decades) then there is no need to average multiple darks.  The offset value at each pixel is the same because all pixels are the same - so you can average a single dark and thereby average millions of values instead of 16 or so.
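
A minimal sketch of that constant master dark, assuming numpy and a hypothetical single dark frame already loaded as an array:

import numpy as np

def constant_master_dark(single_dark, clip=5.0):
    # Every pixel of the master gets the sigma-clipped mean of one dark exposure,
    # so genuine hot and dead pixels don't bias the estimate.
    med = np.median(single_dark)
    mad = np.median(np.abs(single_dark - med)) * 1.4826     # robust sigma estimate
    keep = np.abs(single_dark - med) <= clip * mad
    return np.full_like(single_dark, single_dark[keep].mean(), dtype=float)

# e.g.  master = constant_master_dark(dark_pixels)   # dark_pixels: your one dark as a numpy array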

If you then do calibration with that dark - it will have virtually no residual error because the N is so high in the average.  But it won't capture any FPN at all.

If the sensor really has no dark current FPN then there is no reason this wouldn't work even better than a large number of darks.  And it is certainly simpler since it just involves one dark exposure.

So if you can create a constant dark and constant bias in this manner - it might be hard to tell if the results after calibration are better or worse, or about the same.  If you really have no FPN at all, the results should be better - if only slightly.  It all depends on how much FPN there is in the sensor - and that isn't usually known or spec'd very well.  Only things like read noise and mean dark current are stated.

Sometimes a sensor will indicate "system noise" - and that really does capture the total noise in an exposure, including FPN.  But it isn't commonly stated.

Frank

D_79 1.43
Thanks, @Freestar8n, what is FPN?

In the screen captures I tried to show different noise evaluation data, as @John Hayes explained a few posts before. But all the numbers seemed very similar, and I don't know if there are really important differences between one and the other (with bias but without dark-flats, or with dark-flats but without bias).

Then I'll try whether 16 or 20 subs give similar results to 50, for example. I find the explanation from @John Hayes very logical and consistent.

Greetings and clear skies.

Daniel.

Freestar8n 1.51
Daniel Arenas:
Thanks, @Freestar8n, what is FPN? [...]

Hi - FPN is the reason people create master darks and master bias.  If your sensor didn't have it - you wouldn't need to go to all the trouble, or worry about how many exposures to take.  It stands for Fixed Pattern Noise - and it is a variation in pixel values that is embedded in every exposure - and is the same in each exposure.  In a single dark you will see that each pixel is slightly warm or cold relative to others - and it's that way in every exposure.  But it may be hard to see because there is a separate random noise that adds to each pixel in each exposure.

For some people this is confusing terminology because the random noise seems more like "noise" than something that is constant in each exposure.  But when you take a single exposure, anything that causes pixels to have an offset relative to each other is a form of noise in the image.  It happens that the exposure will have a mixture of fixed pattern noise that is constant across exposures, and random noise that is different in each exposure.  They are both noise terms in the image and in the sensor.

The purpose of capturing a master dark is to get a good estimate of that fixed part of the noise in each pixel - because that is a noise term you can simply subtract off.  And that is why people go to all this trouble.  I don't think it helps to say in general that more dark exposures will improve the final result, because what the dark exposures are doing is converging to the FPN that is present - so it can be subtracted off in each exposure.  What you are left with is the random noise present in each exposure, which you can't do anything about except average away in many exposures.  But you can do something about the FPN - which is why people take master darks and master bias.  But they may not know its purpose is strictly to deal with FPN.  It lets you directly subtract a problematic noise term and thereby reduce the noise that is present prior to stacking.

So - if you don't have FPN you don't need to bother with master bias or master dark.  You are saying that all the pixels have the same mean value, and on top of that there is some random noise.  If that is the case, then take a single dark and average all the pixels - and use that value for all pixels in the master dark.  You are done and it only took one dark exposure.

But for most sensors there is some FPN and master darks/bias are beneficial.  But for people doing comparisons of different methods - the difference may not show very well, especially if you dither well.

This description won't be found in typical amateur imaging write ups or web pages, but it is consistent with the terminology and models in more advanced sensor textbooks and journal articles.  Ideally it should just make sense, because without FPN I see no reason at  all to create a master dark from many exposures.  Typical write ups just say you need to average many of them or there will be residual shot noise.  But why are you doing it in the first place if all the pixels have the same mean value?

Frank

HotSkyAstronomy 2.11
John Hayes:
CONCLUSION
For most of us doing traditional long exposure imaging with stacks in the range of 15-100 subs, taking more than about 16 darks is a waste of time.  The same applies to bias data as well.  It won't hurt anything to use 50-100 dark or bias frames, but you are kidding yourself if you think that it is improving your results.

Rule for most situations:  Use 16 darks to construct your master dark or master bias files and you'll be fine.


Dark Noise Theory for 100 images 1-21-21.jpg

Funny question, but, theoretically speaking, what would this graph look like for, say, 500+ frames per calibration master? "asking for a friend"

jhayes_tucson 22.40
Here are the charts that I computed a while back for 500 images.

Dark Noise Theory 10-17-16.jpg


When I first derived this result, a number of folks on CN didn't "believe" it--or more likely didn't understand it.  While everyone argued over it, one enterprising fellow actually took some measurements and I've plotted his data against the prediction for 500 subs.  As you can see, the agreement is excellent and that pretty much ended the arguing.


Dark Noise Theory Vs Measurement 500 pts.jpg

At 500 calibration frames, you only gain maybe around 5% when calibrating a 50-sub stack and 10% when calibrating a 500-frame stack, compared to taking 50 calibration frames, which is 10x fewer frames!  For most stacks of 100 frames or less, 16-20 frames is plenty.  Taking more data won't hurt you, but you are wasting your time.

John

skybob727 6.08
Well, I've tried to keep up on all 4 pages here. All very interesting, and I will keep doing my 10-15 darks and flats and around 20-30 bias. My question here, and I hope John chimes in, is about dark-flats. My understanding is that dark-flats were only needed to calibrate out the amp-glow from your dark frames. Now the backlit cameras state they have NO amp-glow, and I don't see any in mine, so why are so many people still doing them if there is no amp-glow to remove? Seems like a waste of time.

andreatax 7.46
Bob Lockwood:
Well, I've tried to keep up on all 4 pages here. All very interesting, and I will keep doing my 10-15 darks and flats and around 20-30 bias. My question here, and I hope John chimes in, is about dark-flats. My understanding is that dark-flats were only needed to calibrate out the amp-glow from your dark frames. Now the backlit cameras state they have NO amp-glow, and I don't see any in mine, so why are so many people still doing them if there is no amp-glow to remove? Seems like a waste of time.

Your flats most likely will undercorrect (or is it overcorrect? Whichever it is, it will screw up your flat-fielding). Bias or dark-flats need to be removed from the master flat. I'd rather go with the latter.

There has been a recent thread on this.

AstroDan500 4.67
andrea tasselli:
Your flats most likely will undercorrect (or is it overcorrect? Whichever it is, it will screw up your flat-fielding). Bias or dark-flats need to be removed from the master flat. I'd rather go with the latter.

Yes to this. I have no scientific knowledge on this, except that my flats do not work well without dark flats. The dark flats have nothing to do with amp glow as far as I know. I use a few darks even with the 2600MM and 2600MC, which supposedly have no amp glow, as they seem to help with the flats and dark flats also.
Calibration frames seem like a very small time expense compared to this overall hobby.

jhayes_tucson 22.40
Bob Lockwood:
Well, I've tried to keep up on all 4 pages here. All very interesting, and I will keep doing my 10-15 darks and flats and around 20-30 bias. My question here, and I hope John chimes in, is about dark-flats. My understanding is that dark-flats were only needed to calibrate out the amp-glow from your dark frames. Now the backlit cameras state they have NO amp-glow, and I don't see any in mine, so why are so many people still doing them if there is no amp-glow to remove? Seems like a waste of time.

Hi Bob,
The two main reasons that you might need dark-flats are:

1) Your flat exposures are very long.  Remember that dark current is linear with exposure time.  Flats taken with an exposure under, say, 5 seconds generate very little (essentially zero) dark current with a cooled camera.  In that case, the primary advantage of dark flats is to provide the bias offset.  On the other hand, if your flats are made with, say, 600 s exposures, you need to subtract the dark signal (a rough numerical sketch follows after this list).

2) If you do not use bias offsets in your calibration process, dark-flats will provide the offset needed to properly calibrate your subs.
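
A rough numerical sketch of point 1), using a made-up dark current figure for a cooled sensor:

dark_current = 0.002       # e-/s/pixel, an assumed value for a cooled camera

for exposure in (2, 600):  # a short flat vs. a very long one, in seconds
    dark_signal = dark_current * exposure
    print(f"{exposure:4d} s flat -> ~{dark_signal:.3f} e- of dark signal "
          f"(shot noise ~{dark_signal ** 0.5:.2f} e-)")

# At a couple of seconds the dark term is negligible and the bias offset is all you need;
# at hundreds of seconds it is no longer safe to ignore, hence dark-flats.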

I personally never use dark-flats, and because I include the bias offset, the calibration process works perfectly.

Amp-glow is an RBI effect that comes from NIR light emitted by an amplifier on the sensor chip.  It will be the same for lights, darks, and (I believe) flats.  Amp-glow should be removed by calibration and I don't think that you need dark-flats to make that happen; but, I have to admit that I haven't worked it out to be totally sure of myself on this point.

John

HegAstro 11.91
John Hayes:
[...] Amp-glow is an RBI effect that comes from NIR light emitted by an amplifier on the sensor chip.  It will be the same for lights, darks, and (I believe) flats.  Amp-glow should be removed by calibration and I don't think that you need dark-flats to make that happen; but, I have to admit that I haven't worked it out to be totally sure of myself on this point.

John, at least for the 294MM, the magnitude of the amp glow depends on temperature, and you will over- or under-correct it if you use the incorrect temperature. From this, I think it depends on time as well. That's why proper flat calibration requires dark flats - though if the time is short enough, it may not make a huge difference.