Let's discuss about dark, bias, dark-flats... [Deep Sky] Acquisition techniques · Daniel Arenas

Jbis29 1.20
...
· 
@Arun H. of course you’re right about the dedicated cams. There’s no comparison when it comes to control and repeatability. I’m looking at upgrading by spring next year. I will try it both ways (dark optimization/no dark optimization, just the exposure signal for lights and flats) and see. But at least now I know what’s actually going on with calibration…thanks!
Like
Alan_Brunelle
...
· 
·  1 like
John Hayes:
Sean van Drogen:
I read that for the 2600MM for instance if cooled sufficiently bias alone is enough as dark current is insignificant for this camera.

Boy, I don't know who is writing that stuff but it is simply not true.  The IMX455 chip has pretty low dark current but it sure isn't zero.  My typical exposure is 10 minutes with a QHY600M (which uses the same chip) and the dark current is quite obvious in the form of warm pixels.  You can get away with ignoring dark calibration, but you are putting the load on the stacking filter to remove all that stuff, and that may be okay as long as you understand what you are doing and you are using sufficient dithering.

John



One thing I have observed with respect to calibration of the IMX571/455 is that if you bias-calibrate your darks, the resulting master dark gives suspect results. (Previously I stated clipping, and this was incorrect.)  I've also seen similar issues when calibrating light frames with both a master dark and a master bias.  Now I am only using a master bias for flat calibration. 

For light calibration I am using a Master Dark (uncalibrated) and a Master Flat (Bias Calibrated).  I've found this to produce the most consistent results. 

I did find it interesting in your previous post that 16 darks and 16 bias were enough to properly calibrate most reasonable data sets.  I've always made a master dark using 30 darks and 250 bias.  Why?  I don't know.  Just the way I learned about it.  I'm actually glad to see that you don't think it's totally necessary.  Considering how large the files are for the IMX455, I really hate the idea of having to capture and process so many frames for calibration.  Pretty cool...

I'll probably not add much to this thread, but I too use many bias frames with my QHY268C (same sensor as the 2600).  But I do so because I make a superbias frame.  Seems to work well.  With this camera, I do sky flats.  So nothing odd with this camera with short flats.  But that's with a limited number of camera modes used so far.
Edited ...
Like
andymw 11.01
...
· 
·  1 like
Yes, with the exception that it is best to have your flats be at the same gain as your lights.


I would agree, and I wish I could afford a dim enough flat panel to allow me to do that.  The only reason I use a lower gain is that it generates a better flat overall for my particular sensor.
Like
jhayes_tucson 22.40
...
· 
·  5 likes
Joseph Biscoe IV:
@John Hayes thank you so much! I need to divorce the idea of calibrating out noise and start realizing the math that’s going on here. It really helps to understand also how to capture darks and troubleshoot. 

when using the “output pedestal” is there a way to calculate the correct value when using image integration?

It's really important to understand right up front that you calibrate out unwanted signals and unwanted signal-modulation.  Calibration always ADDS noise!  The trick is to figure out how to minimize the amount of noise that you add.  One other important thing to lock in is that noise represents uncertainty in the measurement--nothing more.

As for the output pedestal, you rarely need to worry about it.  You'll know that you need it if your registered images show a moiré pattern, and you are most likely to see that with NB data.  In that case, you simply add, say, 20-50 ADU to raise the floor safely above zero.
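A minimal numpy sketch of why that helps (the numbers and array names are made up purely for illustration): dark subtraction can push faint pixels slightly below zero, where they get clipped, and a small pedestal keeps the whole noise distribution above the floor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical faint NB sub and master dark, in ADU
light = 500.0 + rng.normal(0.0, 8.0, size=(100, 100))   # background + noise
master_dark = np.full((100, 100), 498.0)                 # dark + bias level

no_pedestal   = np.clip(light - master_dark, 0, 65535)        # some pixels clip at zero
with_pedestal = np.clip(light - master_dark + 30.0, 0, 65535) # 30 ADU pedestal preserves them

print(np.sum(light - master_dark < 0))          # pixels that would have been clipped
print(np.sum(light - master_dark + 30.0 < 0))   # essentially none
```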

John
Like
Jbis29 1.20
...
· 
@John Hayes thanks so much! This has really helped! The last thing I wonder about is the “linear” range of the camera. How do you know when you’re there? When the histogram peak is just to the right of the left edge?
Like
jhayes_tucson 22.40
...
· 
·  2 likes
Joseph Biscoe IV:
@John Hayes thanks so much! This has really helped! The last thing I wonder about is the “linear” range of the camera. How do you know when you’re there? When the histogram peak is just to the right of the left edge?

You can fully characterize your camera by measuring the photon transfer curve and I know a few amateurs who have done this.  It's not a hard measurement, but the rest of us who are too lazy to go through the exercise depend on manufacturer-supplied performance data for our cameras.  Early CMOS sensors were famous for being extremely non-linear, but fortunately that's been fixed.  Modern CMOS astro-cameras are VERY linear and most are sufficiently linear up to around 90% of saturation.  If you expose your flats to keep the peak of the histogram below around 75% of the max value in your camera, you should be operating safely well within the linear range of the response curve.  Having said that, I'll add that some CMOS sensors may exhibit wonky behavior at very short exposure values, so it's a good idea to operate near the middle of the response curve using exposures greater than a second or two.
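If you want a quick sanity check on your flats, here is a small sketch (assuming 16-bit FITS output, using astropy; the file name is just a placeholder) that tests whether the histogram peak sits below ~75% of full scale:

```python
import numpy as np
from astropy.io import fits

def flat_peak_ok(path, bit_depth=16, limit=0.75):
    """True if the flat's histogram peak is below `limit` of full scale."""
    data = fits.getdata(path).astype(np.float64)
    full_scale = 2 ** bit_depth - 1
    counts, edges = np.histogram(data, bins=256, range=(0, full_scale))
    peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    return peak < limit * full_scale, peak

# ok, peak_adu = flat_peak_ok("Flat_L_001.fits")   # placeholder file name
```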

For those more interested in understanding sensor characteristics and characterization, a VERY good reference is: "Photon Transfer", James R Janesick, SPIE Press, 2010 (2nd Ed).  This is a technical reference complete with a lot of equations but it is very well written and quite readable for anyone with a technical background.  It's one of my favorite references for understanding sensor characteristics.

John
Edited ...
Like
Jbis29 1.20
...
· 
@John Hayes thanks so much I’ll give it a look!
Like
HegAstro 11.91
...
· 
·  2 likes
I'll add a couple of examples. Here is a characterization of the ASI1600 sensor (I believe it is a Toshiba sensor). You can see it is quite linear:

https://www.cloudynights.com/topic/554803-more-asi1600mm-cool-statistics-linearity-and-more-zwo-settings/

And John Upton's characterization of the Sony sensor in the 294MC (similar to the 294MM), which illustrates the non-linearity at short exposure times John Hayes was talking about:

https://www.cloudynights.com/topic/636301-asi294mc-calibration-–-testing-notes-thoughts-and-opinions/

You can start to see it becomes linear after a 3 second exposure - this is the reason I keep my flat times above 4s and don't take biases. As he notes, there is a way to take a bias with this sensor, but it requires some work and is not necessary if you use darks of the same time/temp as lights and flats.
Edited ...
Like
Jbis29 1.20
...
· 
@Arun H. thank you! Much appreciated!
Like
Freestar8n 1.51
...
· 
·  2 likes
Joseph Biscoe IV:
I need to divorce the idea of calibrating out noise and start realizing the math that’s going on here. It really helps to understand also how to capture darks and troubleshoot.

I encourage you to re-marry the idea of calibrating out noise - because that is exactly what is going on in calibration.  Unfortunately the amateur descriptions of how this works are lacking in details - and end up missing the whole point.

Any description of image calibration that doesn't explicitly mention fixed pattern noise, or FPN, will create a false impression of the how and why of master frames - but it isn't hard to follow with the right terminology.

If we just focus on doing a master dark subtraction - of course it will reduce the noise in an exposure.  That is how DSLR in-camera noise reduction works.  Just take an exposure, then subtract a dark of the same duration and - voila - less noise.  It's because each exposure has a fixed, randomish spatial noise pattern that repeats in every exposure.  There is no need to worry about whether it is "really" noise or not - because it is automatically a noise term if it obscures the image that we are trying to capture.

If the dark current shot noise in each exposure has sigma=2e at each pixel, but the pattern noise across the sensor has an overall sigma of 10e, then the total noise in each exposure is sqrt(2^2 + 10^2) =  10.2e - and FPN completely dominates the noise.

If we now subtract a single dark from that exposure, the FPN term cancels completely because it is constant in each exposure, but the shot noise increases by a factor of sqrt(2) since it is random.  So the final noise after doing a single dark subtract is:

N = sqrt(2^2 + 2^2 + (10 - 10)^2)  = 2.8e 

And this is a huge reduction in noise with just a single dark subtract.

If instead of subtracting just a single dark you subtract an average of many darks - the noise in that average dark will be that same FPN of 10e plus only a tiny amount of residual random noise, because the random part is averaged out over multiple frames.  In the limit the best you can do is:

NLimit = sqrt(2^2 + 0^2 + (10-10)^2) = 2e

and this is just a single exposure with its inherent random dark current shot noise that you can't do anything about.

But you have greatly reduced the noise in the image by creating a master dark that is a very pure capture of the inherent FPN.
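A quick numpy sketch with the same made-up numbers (2e of random noise per frame, 10e of fixed pattern) reproduces those figures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

fpn = rng.normal(0.0, 10.0, n)                   # fixed pattern, identical in every frame
light = fpn + rng.normal(0.0, 2.0, n)            # one exposure: FPN + random noise
single_dark = fpn + rng.normal(0.0, 2.0, n)      # a single dark: same FPN, fresh random noise
master_dark = fpn + rng.normal(0.0, 2.0 / np.sqrt(64), n)   # a 64-dark average

print(np.std(light))                 # ~10.2e: FPN dominates
print(np.std(light - single_dark))   # ~2.8e: FPN cancels, random noise adds in quadrature
print(np.std(light - master_dark))   # ~2.0e: approaches the single-exposure limit
```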

This should be easy to follow, and it should make sense - and you can see that a description that never explicitly talks about the role of FPN avoids the elephant in the room.  Which is a bad thing if the elephant is what it's all about.

Frank
Like
Jbis29 1.20
...
· 
@Freestar8n  thank you. It makes total sense. Thanks for the explanation. Any further reading you’d recommend?
Like
Die_Launische_Diva 11.14
...
· 
·  1 like
Here is a nice guide:

http://www.astropy.org/ccd-reduction-and-photometry-guide/v/dev/notebooks/01-05-Calibration-overview.html

You can experiment with the code online by using the Binder service. It just needs some patience.

I think that a lot of confusion originates from the wrong use of the word "noise".
Like
jhayes_tucson 22.40
...
· 
·  3 likes
Joseph Biscoe IV:
@Freestar8n  thank you. It makes total sense. Thanks for the explanation. Any further reading you’d recommend?

Joe,
Be careful.  Unfortunately, Frank's explanation does not make "total sense"!  Many years ago, Frank was good enough to recommend the book, "Photon Transfer" to me but it looks like he still hasn't actually read it.  Janesick defines FPN as a symptom of PRNU, which is the variation in responsivity between pixels across the sensor.  FPN is most definitely NOT due to dark current!  There is something called Dark(FPN) and it looks like that's what Frank is talking about.  Dark current is characteristic of each pixel so it forms a fixed pattern and we use that fact to subtract its signal from each image.  His statement, "There is no need to worry about whether it is "really" noise or not - because it is automatically a noise term if it obscures the image that we are trying to capture." lies at the core of a fundamental technical disagreement that I've had with him for years.  Noise is NOT something that obscures the image; noise is simply uncertainty in the measurement.  Don't lose sight of that fact.

Simply subtracting a single dark frame from a sub is an effort to subtract an estimate of the dark current signal from the image.  The problem with subtracting a single frame is that the estimate of the dark signal using only a single frame isn't very good, and when you do that, you add 41% of the dark noise back into your image (as shown in the plots that I posted).  This approach is indeed used in DSLRs simply because the magnitude of the dark signal with a non-cooled sensor causes a more undesirable problem than the effect of the increased noise added by subtracting a poor estimate of the dark signal.  Furthermore, with an uncooled camera the only approach that makes sense is to take a single dark frame right after taking the image--most likely at the same sensor temperature.  With a cooled sensor, we can control the temperature so that we can average many frames in the calibration data to produce a better estimate of the signal and to reduce the noise by the sqrt() of the number of frames.
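To put numbers on that trade-off, here is a small sketch with an arbitrary 2e of dark noise per sub (only the scaling matters, not the value):

```python
import numpy as np

sigma_dark = 2.0   # dark noise in a single sub, electrons (arbitrary)

# Subtracting a single dark: the sub's and the dark's noise add in quadrature
print(np.hypot(sigma_dark, sigma_dark) / sigma_dark)   # 1.41 -> the ~41% penalty

# Subtracting an N-frame master: the master's random noise is sigma_dark / sqrt(N)
for n in (4, 16, 64):
    print(n, np.hypot(sigma_dark, sigma_dark / np.sqrt(n)) / sigma_dark)
    # 1.12, 1.03, 1.01 -> the added noise becomes negligible
```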

An excellent reference about the statistics that apply to image calibration is, "An Introduction to Error Analysis, The Study of Uncertainties in Physical Measurements", by John Taylor.  This book used to be available as a PDF online but now it looks like you have to order it.  You can save some money by buying it used here:  https://www.discoverbooks.com/An-Introduction-to-Error-Analysis-The-Study-of-U-p/093570275x.htm?cond=0004&gclid=CjwKCAjwo_KXBhAaEiwA2RZ8hCq0PnZ_KCi4rpjg0Mb9_KW_wMwbc2LCnN-6CwBX1FrGn28jw23IBhoC9F8QAvD_BwE.  This book is an easy read and it's one of the best I've read on the statistics of measurements, which is what we are doing when we take and calibrate an image.  Highly recommended.

I feel compelled to add that I am indeed an amateur astronomer but as a professional optical engineer and professor of optics, I'm pretty sure that my description of how this works is not "lacking in details" (since I referenced the math behind the plot that I posted) or that I am "missing the whole point" by carefully describing how this stuff works.  I'm trying my best to provide an accurate, clear answer to the question that you asked.


John
Edited ...
Like
HegAstro 11.91
...
· 
·  3 likes
I think that a lot of confusion originates from the wrong use of the word "noise".

John Hayes defined it very well. Noise is simply uncertainty in the estimate of a quantity. Our master darks are estimates of the mean dark current under given conditions, and the error in this estimate reduces as the square root of the number of frames. This is a fundamental concept in statistics and works in much the same way as our estimate of the proportion of black and red balls in a box reducing in error and increasing in confidence with increased sampling.

The error in this estimate reduces very quickly at first - for example, halving between 4 and 16 frames - but reducing it by a further 50% would require 64 frames: diminishing returns. This residual error in our estimate of the mean dark current ADDS in quadrature to the existing sources of noise in our lights; the larger those sources of noise, the less productive it is to take very large numbers of darks, since their contribution to error will be low in comparison. All this is illustrated very well in the graph John Hayes shared at the beginning of this thread.
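In numbers, a one-line illustration of that scaling (nothing camera-specific, just the 1/sqrt(N) behaviour):

```python
import numpy as np

for n in (4, 16, 64, 256):
    print(n, 1.0 / np.sqrt(n))   # 0.5, 0.25, 0.125, 0.0625: each 4x more frames only halves the error
```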
Edited ...
Like
Jbis29 1.20
...
· 
@Die Launische Diva thank you so much.
Like
Die_Launische_Diva 11.14
...
· 
·  1 like
I think that a lot of confusion originates from the wrong use of the word "noise".

John Hayes defined it very well. Noise is simply uncertainty in the estimate of a quantity. Our master darks are estimates of the mean dark current under given conditions and the error in this estimate reduces as the square root of the number of frames. This is a fundamental concept in statistics  and works in a similar manner to our estimate of the number of black and red balls in a box reducing in error and increasing in confidence  with increasing sampling.

The error in this estimate reduces very quickly at first - for example, halving between 4 and 16 frames - but reducing it by a further 50% would require 64 frames: diminishing returns. This residual error in our estimate of the mean dark current ADDS in quadrature to the existing sources of noise in our lights; the larger those sources of noise, the less productive it is to take very large numbers of darks, since their contribution to error will be low in comparison. All this is illustrated very well in the graph John Hayes shared at the beginning of this thread.

Exactly! I made that comment with Frank's post in mind. Even light pollution is a signal; by taking many light frames we are trying to estimate it and reduce its accompanying uncertainty. But that is the subject of another thread. Taking many calibration frames is all about increasing the precision and accuracy of the corresponding quantities entering the calibration equation, and as @John Hayes says, there is a law of diminishing returns in that process.
Edited ...
Like
Jbis29 1.20
...
· 
@John Hayes thanks for taking the time to clarify. I really do appreciate that. I have a lot to learn. I want to learn. I enjoy delving into the details and I have a need to understand how things work. I’ll read whatever you’d suggest, starting with what you’ve sent so far. I understand that what we see as noise in the light frame is uncertainty. And that uncertainty is relative to the collector, i.e. the sensor. So I’m wondering, if I understand the physics of image capture: the camera’s sensor “reads” the values of released electrons at each pixel. (Photons rain down on the sensor, a photon releases an electron, the sensor reads how full each pixel is of electrons and outputs that value based on bit depth.) Why, or maybe I should ask how, does the processor see a group of electrons as “uncertain”? 


once again thank you!
Like
andymw 11.01
...
· 
·  1 like
I'm an engineer/mathematician by background and would love to spend time getting into the theory behind all this stuff.  I am, however also a pragmatist.

I've found the following:

* Integrating 10 dark frames into a master dark for each of my exposures (30s, 90s, 180s and 300s) works fine.  Usually once a year and takes a couple of hours.
* Taking 20 flat frames and dark flats per filter every major imaging session works fine (and hardly takes any time .. minutes rather than hours).
* Forget bias frames.

N.B. I do have a mono cooled CMOS camera which has relatively low noise, hence my recommendations above.

I just like to keep it simple, because there are so many other things to worry about with this hobby.
Edited ...
Like
andreatax 7.46
...
· 
Joseph Biscoe IV:
Why, or maybe I should ask how, does the processor see a group of electrons as “uncertain” ?

There isn't such a thing. The uncertainty is in the outcome of the measurement. If the original value of the signal before being measured is 1, then the uncertainty in the measurement might make that value slightly larger or smaller than 1.  Extended across an area, this variation is spatially seen as "noise".
Like
Jbis29 1.20
...
· 
andrea tasselli:
Joseph Biscoe IV:
Why, or maybe I should ask how, does the processor see a group of electrons as “uncertain” ?

There isn't such a thing. The uncertainty is in the outcome of the measurement. If the original value of the signal before being measured is 1, then the uncertainty in the measurement might make that value slightly larger or smaller than 1.  Extended across an area, this variation is spatially seen as "noise".

I think I understand; correct me if I’m wrong. The uncertainty is found in the inability of the camera to accurately measure each pixel. Is that related then to QE?
Like
lucam_astro 9.15
...
· 
·  2 likes
Joseph Biscoe IV:
@John Hayes thanks for taking the time to clarify. I really do appreciate that. I have a lot to learn. I want to learn. I enjoy delving into the details and I have a need to understand how things work. I’ll read whatever you’d suggest, starting with what you’ve sent so far. I understand that what we see as noise in the light frame is uncertainty. And that uncertainty is relative to the collector, i.e. the sensor. So I’m wondering, if I understand the physics of image capture: the camera’s sensor “reads” the values of released electrons at each pixel. (Photons rain down on the sensor, a photon releases an electron, the sensor reads how full each pixel is of electrons and outputs that value based on bit depth.) Why, or maybe I should ask how, does the processor see a group of electrons as “uncertain”? 


once again thank you!

There is uncertainty in various parts of the imaging process. The arrival of photons in the solid angle viewed by one pixel on your camera through the optics is a statistical process. Just like when you wait for the subway: if it's supposed to run every ten minutes, sometimes two trains are 9 minutes apart and sometimes 11 minutes. The statistical distribution that describes counting processes is Poisson statistics, and it has the property that the standard deviation of the distribution (the uncertainty) is given by the square root of the mean - in our case, the number of photons that arrive over the duration of the exposure.

The conversion of photons to electrons is also a statistical process with a mean efficiency given by the quantum efficiency of the sensor; some of the photons are lost or absorbed via scattering processes. Read noise is a summary of all the uncertainty in the readout of the sensor. Again, electrons generated from photoconversion are ultimately counted and converted to analog-to-digital units (ADU). Preamp gain is going to have small temporal fluctuations, losses are statistical, sometimes there will be one extra electron and sometimes one fewer. All of this adds up to the total read noise of the system. Because there are many mostly uncorrelated contributing sources to read noise, its statistics end up being described by a Gaussian distribution (central limit theorem).
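A small numpy sketch of what that looks like (hypothetical numbers: an average of 100 photo-electrons per pixel and 1.5e RMS read noise):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
mean_e = 100.0       # average photo-electrons collected per pixel (hypothetical)
read_noise = 1.5     # Gaussian read noise, electrons RMS (hypothetical)

electrons = rng.poisson(mean_e, n)                       # photon arrival: Poisson counting
measured = electrons + rng.normal(0.0, read_noise, n)    # readout adds Gaussian noise

print(np.std(electrons))   # ~10.0 = sqrt(100): shot noise is the square root of the mean
print(np.std(measured))    # ~10.1 = sqrt(100 + 1.5**2): independent noise adds in quadrature
```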

Luca
Edited ...
Like
andreatax 7.46
...
· 
·  1 like
Joseph Biscoe IV:
I think I understand; correct me if I’m wrong. The uncertainty is found in the inability of the camera to accurately measure each pixel. Is that related then to QE?


It is the whole process that is affected by uncertainty, as described in the post above. Even the intrinsic nature of the measurement of energy from a quantum is affected by uncertainty (although admittedly this is a pretty small value, on the order of the Planck constant divided by 4π, if I recall it right). Bottom line is that noise is intrinsic to the measurement process (of which astronomical imagery is just one example) and cannot be avoided.
Like
jhayes_tucson 22.40
...
· 
·  2 likes
Joseph Biscoe IV:
@John Hayes thanks for taking the time to clarify. I really do appreciate that. I have a lot to learn. I want to learn. I enjoy delving into the details and I have a need to understand how things work. I’ll read whatever you’d suggest, starting with what you’ve sent so far. I understand that what we see as noise in the light frame is uncertainty. And that uncertainty is relative to the collector, i.e. the sensor. So I’m wondering, if I understand the physics of image capture: the camera’s sensor “reads” the values of released electrons at each pixel. (Photons rain down on the sensor, a photon releases an electron, the sensor reads how full each pixel is of electrons and outputs that value based on bit depth.) Why, or maybe I should ask how, does the processor see a group of electrons as “uncertain”? 


once again thank you!

Ah...that's a very good question and you are the perfect straight man Joe!  Let's do a thought experiment.  Wait for a rainy day, then take 10 identical bottles, position them in a line, and put a piece of wood over all ten to cover them.  Let's assume that it starts to rain and that the rain comes down with perfect uniformity over the bottles.  Now uncover the bottles for 5 minutes to gather water and then cover them all at precisely the same time.  That's your exposure time.  Rain drops falling into a bottle are discrete events that perfectly mimic how photons arrive at your sensor, and both are described by Poisson statistics.  If you very carefully weigh each bottle to see precisely how much water, and hence how many drops, it gathered, you'll find a small variation between each of the 10 bottles.  That variation will turn out to be the square root of the average number of drops over the ten bottles, and that's the uncertainty in the number of drops that you can expect to gather in any one bottle.  We call the average number of drops "the signal," and we call the uncertainty in what we measure in any one bottle "the noise."  So for discrete events that are driven by Poisson statistics, noise will grow as the square root of the average signal and the SNR will also be given by the square root of the average signal.  That means that the more drops (or photons) you gather, the more the SNR will increase.  An important takeaway is that ALL measurements include uncertainty--there is no such thing as a "perfect" one-time measurement, and that applies whether you are measuring the length of a piece of wood, the brightness of a star using the cameras onboard JWST, or a gravitational wave using LIGO. 
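If you want to see the bottle experiment play out numerically, here is a tiny sketch (the rate is made up: an average of 10,000 drops per bottle over the 5 minutes):

```python
import numpy as np

rng = np.random.default_rng(3)
mean_drops = 10_000                     # average drops per bottle (made up)
bottles = rng.poisson(mean_drops, 10)   # ten identical bottles, Poisson arrivals

print(bottles)              # each bottle lands near, but not exactly at, 10,000
print(bottles.std())        # scatter on the order of sqrt(10000) = 100 drops
print(np.sqrt(mean_drops))  # the "noise"; SNR = mean / sqrt(mean) = 100
```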

I should add that a sensor is a bit more complicated than a bottle gathering rain drops because the incident photon has to interact with the semiconductor to produce a photo-electron and that photo-electron has to be measured by the electronics (which is ultimately where read noise comes from).  So you have to be a little careful about how you interpret the numbers if you want to convert ADU numbers from your sensor into photon noise.

Hopefully that makes sense.  Understanding the difference between signals and noise is the critical bridge needed to correctly understand how image calibration works.

John
Like
andymw 11.01
...
· 
Joseph Biscoe IV:
I think I understand; correct me if I’m wrong. The uncertainty is found in the inability of the camera to accurately measure each pixel. Is that related then to QE?


As an example:  My ASI1600MM Pro has a pretty low read noise of 1.2e, but also a low QE of 60%.  I'm OK with that as the electronics are not adding much noise, but it does mean I have to take a lot more exposures than someone with a camera that captures photons more efficiently ... i.e. my sensor only sees 6 out of 10 photons that hit it.  Many more modern cameras have a 90% QE, so their owners can either use shorter exposure times or take fewer exposures than I.

Other things you will no doubt investigate are full well depth and the ADC (analogue-to-digital converter) bit count.  Generally, the bigger the better; however, you can get stunning images with just a 12-bit ADC (again due to statistics/stacking), so don't fret too much about those.
Edited ...
Like
Freestar8n 1.51
...
· 
·  1 like
I think the main reason there is a hangup regarding noise vs. signal is that people are looking at literature focusing on a single sensor measurement rather than an image.  When talking about noise in images, the concept of noise is generalized because you are talking about a multitude of sensors and not just one.

That is why you would never see the term "FPN" in normal discussions of laboratory practice - but a glance at any journal or text specific to ccd/cmos sensors will have it front and center.  And since it is described as a Noise term - and since it is established usage in the literature - and since it should not be confusing at all - I consider the matter completely settled.  Noise and signal are determined from context - and if that isn't already obvious I don't know what more I can say to help.  But  obviously if anyone objects to the term "FPN" - I hope they don't use it or recommend texts, such as Janesick, that also use it.

The calibration routines are ultimately based on a noise model of how the sensors behave - and the model I prefer is the following:

ADU(i, j, t) = B + FPN(i, j) + n(i, j, t) + S(i, j)

This says that the ADU value at a given i, j pixel at time t will be the sum of a constant overall offset or bias, B, plus a constant offset *for that pixel* FPN(i,j) and then a time varying noise term that is typically Gaussian or Poisson or some mixture.  The key thing is that the n term is the only one that changes between exposures.  There may be some variation of the sigma of that noise term across the pixels but it is usually small.  In the case of read noise - it is just a constant sigma for all pixels.  The S(i, j) represents the astro signal that is being captured.  It will have its own shot noise but that isn't important here.

We can further specify that B and FPN are defined so that the mean of FPN is 0 across the image.  The FPN will have some sigma, sig(FPN) across the image, and the time-varying noise term will have some sigma, sig(n) across the image in a given exposure.

If you didn't think the FPN term is present then there is no need to capture master darks.  You can just take the average of all the pixels and that will be the offset for each pixel.  But in reality, each pixel has its own offset - and that offset needs to be captured in a good master dark that averages many exposures.

So you see that the idea of averaging many darks into a master makes no sense if you don't have that FPN term.  But if you do have that term, you had better describe the critical role it plays in the image calibration process.

For those of you who have measured the gain and read noise of a sensor by taking two flats and two bias frames - that also relies on the model above.  If you didn't have that FPN term you could just subtract the mean from a single bias and use the resulting sigma.  But the calibration guides are smarter than that and require the subtraction of two separate bias frames so the FPN cancels out - perfectly.  And the resulting sigma is smaller and captures the actual read noise.
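For anyone curious, here is a sketch of that two-flat/two-bias measurement (the synthetic-sensor numbers at the bottom are made up purely to show that the recovery works):

```python
import numpy as np

def gain_and_read_noise(flat1, flat2, bias1, bias2):
    """Classic two-flat / two-bias estimate; differencing frames cancels the FPN term."""
    signal = 0.5 * (flat1.mean() + flat2.mean()) - 0.5 * (bias1.mean() + bias2.mean())
    flat_diff_var = np.var(flat1 - flat2)   # FPN cancels in the difference
    bias_diff_var = np.var(bias1 - bias2)
    gain = 2.0 * signal / (flat_diff_var - bias_diff_var)   # e-/ADU
    read_noise = gain * np.sqrt(bias_diff_var / 2.0)        # electrons RMS
    return gain, read_noise

# Synthetic check with made-up sensor parameters
rng = np.random.default_rng(4)
true_gain, true_rn, level_e = 1.6, 3.2, 20_000            # e-/ADU, e-, e-
fpn = rng.normal(0.0, 50.0, (512, 512))                   # per-pixel fixed offset, ADU

def make_bias():
    return 500.0 + fpn + rng.normal(0.0, true_rn / true_gain, fpn.shape)

def make_flat():
    return make_bias() + rng.poisson(level_e, fpn.shape) / true_gain

print(gain_and_read_noise(make_flat(), make_flat(), make_bias(), make_bias()))
# -> roughly (1.6, 3.2)
```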

Frank
Edited ...
Like
 