Let's discuss about dark, bias, dark-flats... [Deep Sky] Acquisition techniques · Daniel Arenas

HegAstro 11.91
I don't know what's hard about it. Calibration is the most routine and most reliable part of my workflow. I have a family of master darks corresponding to the various exposure times and temperatures I use. The issue, in my opinion, is that many of the tutorials you see on the web were written when CCD was king. CMOS behaves a bit differently, but once you understand it, it is no more difficult and no less reliable than CCD. And a lot cheaper!
Overcast_Observatory 20.43
Bob Lockwood:
This has been a very interesting read, and I was happy just reading what everyone had to say, and I'm sure I'll get flak for this. Is calibrating CMOS images as difficult as everyone makes it sound? I still use CCD, and all I do is darks and flats, 20 darks and 10 flats. I let CCDStack do what it needs to do and I'm done. I don't look at the math or numbers; I visually inspect each calibrated sub, and if they look good I combine them into their single L, R, G, B, or S, H, O file, do a color combine, and move it to PS. If the math/numbers say they're not good, but I can't see it visually in the color image, with or without enlarging to where I see pixels, why should I be concerned? Do my images turn out good? I think so. Are they perfect? Absolutely not.



It's not hard, but some CMOS cameras do have quirks. The newest IMX455/571/533 sensors are as easy as CCD. There are a couple of approaches, as has been mentioned, and I've tried several. I used the word academic before, as I can't see any real difference in my data.
andymw 11.01
Sorry in advance, but I haven't read all the previous posts.

I do the following:

* I take a set of master darks maybe once a year (maybe only 10 frames each)
* I take flats and dark flats for each filter once every major imaging session (i.e. I may be on a target over several nights where I am not altering the imaging train, so one set taken on any of those nights will work).  Typically 20 frames each as they are quite quick to take.
* No bias frames

That's it really and seems to work fine.
jhayes_tucson 22.40
Daniel Arenas:
Thanks John,

I assume that you're doing manual stacking. I'm using WBPP 2.4.5 to stack, following Adam Block's videos.
In that case, if I build a library of bias frames and stack all my data again with flats, darks, and bias but no dark-flats, how can I tell whether there's any improvement? Just visually, when stretching the master light with ScreenTransferFunction, or are there parameters I can check with some process in PixInsight to compare both master lights?

With the chart you shared with us, I think it's clear that more than 16 calibration frames are not necessary (20 for those who want round figures). But once more, is there some kind of test I can do with my camera to see whether there's any positive variation? Maybe with statistics? Is there an easy way?

I very much appreciate your contribution to this thread; in fact, I think all of us do.

Daniel,
Yes, I use manual stacking--mainly because I like to check everything as I go...and I find subtle problems all the time.  In your case, you can test everything by simply running WBPP with 16 flats, darks and bias frames and then do it with a lot more frames in the calibration files.  You can then compare the two results both visually and with one of the noise evaluation tools in PI.  Do the same with dark flats and no dark flats.  That will tell you how well the calibration process is working.
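For the comparison John suggests, any robust noise estimate will do. Below is a generic sketch in Python (not PI's actual NoiseEvaluation algorithm) using a MAD-based sigma estimate on synthetic stacks; all numbers and array shapes are invented for illustration:

```python
import numpy as np

def mad_noise(img):
    """Robust noise estimate: 1.4826 * median absolute deviation ~ Gaussian sigma."""
    med = np.median(img)
    return 1.4826 * np.median(np.abs(img - med))

rng = np.random.default_rng(0)
# Two hypothetical master lights, e.g. calibrated with 16 vs. 64 darks:
stack_a = 100.0 + rng.normal(0.0, 2.0, (512, 512))
stack_b = 100.0 + rng.normal(0.0, 1.9, (512, 512))

print(f"stack A noise estimate: {mad_noise(stack_a):.3f}")
print(f"stack B noise estimate: {mad_noise(stack_b):.3f}")
```

Running both master lights through the same estimator gives a single number per stack to compare, which is less subjective than eyeballing a stretched image.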

Good luck with it!

John
Jbis29 1.20
Great info here. Makes for a very interesting read. I wish the Handbook of Astronomical Image Processing were available on Kindle… I image with an uncooled DSLR, so calibration ideas surrounding that seem to stretch from Star Wars to Star Trek. Still learning, though. All this has really helped. I'd never considered that it would be wise not to think of bias as a set pattern no matter the temp. I hadn't thought about temperature as a variable with it.
Jbis29 1.20
John Hayes:
It's probably not as often as it seems but I feel like I answer this question on a monthly basis.  First, Berry and Burnell have a good discussion about  image calibration in their book, "The Handbook of Astronomical Image Processing"  (https://www.amazon.com/Handbook-Astronomical-Image-Processing/dp/0943396824).    They discuss the "Rule of 5" to limit added noise in a single sub to 10% of a single bias frame but they don't address the statistics of stacked images.  Back in April of 2016, I worked out the statistics of calibration noise when subtracting unwanted additive signals in a stacked image.  Additive signals include both darks and bias signals.  You can find that thread here in post #33:  https://www.cloudynights.com/topic/534493-the-statistics-of-image-calibration/page-2?hl=%20calibration.  (You can safely ignore the rest of the discussion in that thread.  There are a lot of incorrect statements about this stuff that muddle up the conclusions.  Also don't respond on that thread--I won't see it).  It is possible to modify the results to cover flat division as well, but I didn't do that.  The equation shows a couple of interesting things.

1) The amount of noise reduction depends on both the number of dark frames used to calibrate each sub AND the number of light frames in the stack.

2) Past a certain point, using more dark or bias frames achieves only incremental improvement.  This is the rule of diminishing returns.

I've attached a plot below showing the result for a stack of up to 100 subs.  We can quickly see a couple of things.

1) After about 15 subs in the stack, the noise contribution diminishes very slowly.

2) Keep in mind that for most cameras, dark signal is typically a small value, which means that keeping the added noise due to the dark subtraction to below 30% - 50% is sufficient to keep the calibration noise below the photon noise in the stack.

3) For a stack of, say, 30 subs, going from 16 darks to 32 darks only gains an additional decrease in noise of another 8% (or so).  Using 64 darks only gains an additional 3%-4%, which will be completely unnoticeable in the result.  Working with a larger stack makes things better, but not by enough to justify the effort needed to take more dark data.  For those who are using exposures in the range of 600s-1200s, taking darks can require a significant amount of time. There's no reason to spend more time in the dark than necessary taking calibration data.

CONCLUSION
For most of us doing traditional long exposure imaging with stacks in the range of 15-100 subs, taking more than about 16 darks is a waste of time.  The same applies to bias data as well.  It won't hurt anything to use 50-100 dark or bias frames, but you are kidding yourself if you think that it is improving your results.

Rule for most situations:  Use 16 frames to construct your master dark or master bias files and you'll be fine.
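The diminishing returns can be illustrated with the 1/sqrt(M) behavior of the master dark alone. This toy sketch uses a hypothetical read noise value and deliberately ignores the stack-size dependence in John's full equation; it only shows why doubling the dark count past ~16 buys very little:

```python
import math

sigma_read = 3.0  # e- RMS per dark frame; a made-up camera value

# Noise remaining in a master dark built from M frames falls as 1/sqrt(M),
# so each doubling of M removes a smaller and smaller absolute amount:
for m in (4, 8, 16, 32, 64):
    master_noise = sigma_read / math.sqrt(m)
    print(f"{m:3d} darks -> master dark noise {master_noise:.3f} e-")
```

Going from 16 to 32 frames shaves only ~0.22 e- off the master in this example, and that residual is further diluted by the photon noise of the light stack.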


[Attached plot: Dark Noise Theory for 100 images 1-21-21.jpg]


As for your other two questions:
1) Bias is important to ensure that the proper offset is removed when you do calibration.  No camera, even the new ones, has identically zero offset, and that can be the cause of minor calibration errors.  Bias is super easy to subtract, so don't ignore it.

2) As long as your flat data is A) taken within the linear region of the sensor response, B) your camera has fairly low dark current, and C) your flat exposures are less than a minute (or so), flat-darks are rarely needed.  If you are doing sky flats, you might need dark flats; however, if you are using a light panel and your exposures are between 2s and 60s, you almost never need dark flats--for most modern CMOS cameras.


John

This is great! I'm constantly looking for replies with this depth. Thank you!
Jbis29 1.20
One thing I don’t understand is the “camera offset” how does this relate to calibration etc…? 

thanks for any help.
jhayes_tucson 22.40
Joseph Biscoe IV:
One thing I don’t understand is the “camera offset” how does this relate to calibration etc…? 

thanks for any help.

Good question!  Offset is there to prevent any pixel from reading zero (or below).  There are generally two offsets.  The first is set up by the sensor/camera manufacturer and it's typically not something that you can adjust.  The second is in software.  Again, the software value is there to doubly ensure that all pixel values are transmitted with values above zero.  The value of the software offset is typically set in the ASCOM driver and stored in the FITS header.  If all of your light and calibration frames (bias, darks, and flats) are taken with the same offset values and you use the correct calibration procedure, the offsets won't have any effect on the output.  Some calibration routines (PI is one of them) include an option to remove the software offset before calibration, which allows skipping bias correction when desired.  I'm not familiar with every program but I wouldn't be surprised if some of them just automatically removed the software offset.  Either way, you have to be careful that you don't generate zeros when darks are subtracted, and that's why PI offers an option in the ImageCalibration tool to add an output pedestal.  That is a very important feature that avoids creating a moiré pattern (by the interpolation routine used in registering data) when calibrating NB data.
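A quick synthetic sketch of the clipping problem and the pedestal fix; all values here are invented, not from any real camera:

```python
import numpy as np

rng = np.random.default_rng(1)

# Faint narrowband background sitting at almost the same level as the dark:
light = rng.normal(500.0, 5.0, 10_000)   # ADU, hypothetical
master_dark = np.full(10_000, 500.0)
pedestal = 100.0                          # hypothetical output pedestal, ADU

# Without a pedestal, roughly half the background pixels clip to zero:
naive = np.clip(light - master_dark, 0.0, None)
# With a pedestal, the noise distribution survives intact above zero:
padded = np.clip(light - master_dark + pedestal, 0.0, None)

print("clipped pixels without pedestal:", int((naive == 0).sum()))
print("clipped pixels with pedestal:   ", int((padded == 0).sum()))
```

The half-clipped distribution in `naive` is exactly the kind of artifact the registration interpolator turns into a moiré pattern; the pedestal preserves the full noise distribution for later processing.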

John
HegAstro 11.91
I've also wondered about how "bias" actually works in things like DSLRs.

People recommend using ridiculously short shutter speeds like 1/5000s or 1/8000s to take biases. 

In actual fact, for DSLRs, the time the sensor is active is controlled by the flash sync speed and shutter speeds faster than that are accomplished by closing the second curtain before the first has cleared the sensor. Think a slit traversing the sensor. Of course, this also means the sensor is active a lot longer than 1/8000s or 1/5000s during these exposures - can be as much as 1/160s for some cameras.
jhayes_tucson 22.40
Arun,
As long as the sensor is active for a "very short" exposure you are good...and you are totally right that using an exposure of 1/8000 second on a DSLR isn't the same as exposing the sensor for that same period.  Remember that bias really only shows two main things:  1) Read noise and (most important) 2) Offset.  Master bias frames are just there to remove any offset.

John
Jbis29 1.20
John Hayes:
Joseph Biscoe IV:
One thing I don’t understand is the “camera offset” how does this relate to calibration etc…? 

thanks for any help.

Good question!  Offset is there to prevent any pixel from reading zero (or below).  There are generally two offsets.  The first is set up by the sensor/camera manufacturer and it's typically not something that you can adjust.  The second is in software.  Again, the software value is there to doubly ensure that all pixel values are transmitted with values above zero.  The value of the software offset is typically set in the ASCOM driver and stored in the FITS header.  If all of your light and calibration frames (bias, darks, and flats) are taken with the same offset values and you use the correct calibration procedure, the offsets won't have any effect on the output.  Some calibration routines (PI is one of them) include an option to remove the software offset before calibration, which allows skipping bias correction when desired.  I'm not familiar with every program but I wouldn't be surprised if some of them just automatically removed the software offset.  Either way, you have to be careful that you don't generate zeros when darks are subtracted, and that's why PI offers an option in the ImageCalibration tool to add an output pedestal.  That is a very important feature that avoids creating a moiré pattern (by the interpolation routine used in registering data) when calibrating NB data.

John

Ok, so the offset is there as a safety net to keep random pixels from being output as a "0" value, or as "no information". I'm a relatively new PI user working slowly through Adam Block's website training. How do you check for the software offset in PI? I noticed a while back that I had some very strange flats. I started using a filter (Optolong L-enhance 2") and my usual way of obtaining flats (white laptop screen, white t-shirt) I think may not have been in the bandpasses, because I had some zero pixels when I used the readout mode on the weird flats.

When using the "output pedestal", is this the same as assigning an "offset" to the output? Is that the right way of looking at it?
Jbis29 1.20
I've also wondered about how "bias" actually works in things like DSLRs.

People recommend using ridiculously short shutter speeds like 1/5000s or 1/8000s to take biases. 

In actual fact, for DSLRs, the time the sensor is active is controlled by the flash sync speed and shutter speeds faster than that are accomplished by closing the second curtain before the first has cleared the sensor. Think a slit traversing the sensor. Of course, this also means the sensor is active a lot longer than 1/8000s or 1/5000s during these exposures - can be as much as 1/160s for some cameras.

This is mind blowing. So flash sync speed is the fastest the sensor can operate? So then do the bias properties change as you decrease exposure time, say from 1/4000s to the length of my normal lights, 240s-300s? And is there any way to mathematically define this change? I'm assuming the book @John Hayes mentioned earlier would help with this, but I travel extensively, so Kindle is best for me.
HegAstro 11.91
Joseph Biscoe IV:
I've also wondered about how "bias" actually works in things like DSLRs.

People recommend using ridiculously short shutter speeds like 1/5000s or 1/8000s to take biases. 

In actual fact, for DSLRs, the time the sensor is active is controlled by the flash sync speed and shutter speeds faster than that are accomplished by closing the second curtain before the first has cleared the sensor. Think a slit traversing the sensor. Of course, this also means the sensor is active a lot longer than 1/8000s or 1/5000s during these exposures - can be as much as 1/160s for some cameras.

This is mind blowing. So flash sync speed is the fastest a sensor can operate? So then do the bias properties change as you decrease exposure time? Say from 1/4000s to the length of my normal lights, 240s-300s? And is there any way to mathematically define this change? I’m assuming the book @John Hayes mentioned earlier would help with this but I travel extensively so kindle is best for me.

As John mentioned, so long as the exposure time is short enough, it will be just as good as setting a shutter speed of 1/5000s. The intention of a bias frame is to determine and subtract the offset. This is easy to understand once you realize what calibration achieves: the purpose of calibration is to produce a file in which each pixel value is some multiplicative constant times the number of photons incident on that pixel, while also correcting for spatial and pixel-response non-uniformities by using flat frames. To determine the offset, you simply set the exposure time to be as short as the sensor allows, so that other factors like dark current do not add on to the offset - even 1/160s is pretty short. Longer exposure times will add dark current to the pixel values, but generally this is significant only after many seconds.
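A one-pixel toy model of the point Arun is making; every number here is hypothetical:

```python
# One-pixel sketch of what calibration recovers; all numbers invented.
photons = 1000.0
gain = 0.8        # ADU per photon (hypothetical)
offset = 100.0    # bias/offset, ADU
dark = 12.0       # mean dark current accumulated over the exposure, ADU

light_frame = gain * photons + dark + offset   # what the camera reports
master_dark = dark + offset                    # dark at same time/temperature

# Subtracting the matched master dark leaves a value proportional to photons:
calibrated = light_frame - master_dark
print(calibrated / gain)   # → 1000.0
```

Once the additive terms (dark + offset) are removed, the remaining value is the multiplicative constant times the photon count, which is exactly the property flats then rely on.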
Jbis29 1.20
@Arun H. Thank you so much! I appreciate your help. That makes total sense. I think… haha. Why would you want to subtract the offset? Is this done to be left with only the desirable information? I've been doing these calibration steps under the impression that it's simply to remove the noise.
HegAstro 11.91
No, calibration does not and cannot remove noise. It can only correct for things that can be predicted or modeled: offset, mean dark current, spatial non-uniformities due to the optics or dust, and predictable non-uniformities in how pixels respond.
kuechlew 7.75
John Hayes:
Dan Kearl:
John - agree with all of what you said. But isn't the bias signal already contained in the dark frame? So using a master dark that does not have the bias subtracted from it renders the subtraction of bias irrelevant, wouldn't it? Of course, the dark frames would have to be at the same time/temperature. I do know that for some sensors, such as the one used in the 294MC/MM and possibly also the ASI 1600, the "bias" signal depends on exposure time, see for instance this analysis by John Upton:

https://www.cloudynights.com/topic/636301-asi294mc-calibration-%E2%80%93-testing-notes-thoughts-and-opinions/

Subtracting a bias in these cases seems more trouble than it is worth so I simply use matched time/temperature darks. Attached, for reference his conclusions, specific of course, to this sensor:

Dan,
The calibration equation is [(light+dark+bias) - (dark+bias)]/[flat+flat_dark+bias].  If flat_dark is ~0, that's not needed.  Bias represents read noise+offset and if that total is ~0, then it's not needed either.  However if it's not zero, the component in the denominator will cause a small error.  If your calibration files look flawless, then you are good to go, but that's not always the case.  I personally think that bias correction is super easy.  It's trivial to take the data and then it's just a checkbox to do the calculation.  How much trouble is that?

...

John

This always confuses me. From the Covington source cited above:

"Bias, the fact that pixel values don't start at zero. Even with no light and no exposure, each pixel typically has a nonzero value. Bias has to be measured separately if your software wants to convert a dark frame to a different exposure time than it was taken with. Otherwise, you do not need bias frames because dark frames include bias."

If I take a 5 min light frame at 0 degrees Celsius and then capture a  5 min dark frames at the same temperature they both contain the bias - within the  statistical variance of a single frame. By creating a master dark from a number of such dark frames I'll reduce the statistical variance and the master dark still contains the bias. So if I calibrate the light frame with this master dark I get rid of the dark and the bias contributions. Or am I missing something?
Of course I can subtract a master bias both from the light frame and from the master dark, but this will just change the equation (light + dark + bias) - (masterdark with bias) to (light + dark + bias - masterbias) - (masterdark with bias - masterbias), which is the same. The same basically holds for the flat and flat-dark at the same temperature and duration. Am I missing something?
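Wolfgang's cancellation argument checks out with toy numbers (all hypothetical, in ADU):

```python
# Toy single-pixel values, invented for illustration (ADU):
light_signal, dark, bias = 850.0, 40.0, 100.0

light_frame = light_signal + dark + bias   # as captured
master_dark = dark + bias                  # bias still inside
master_bias = bias

# Route 1: subtract the bias-containing master dark directly
route1 = light_frame - master_dark
# Route 2: subtract the master bias from both, then subtract the bias-free dark
route2 = (light_frame - master_bias) - (master_dark - master_bias)

print(route1, route2)   # the bias terms cancel either way
```

For the light-frame side of the equation the two routes are algebraically identical, which is why the remaining debate in the thread is only about the bias term hiding in the flat.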

How accurate are you with your flats? If you're refocusing during the night, do you take flats for each focus position? I'm not talking about different filters but different focus positions for the same filter during an imaging session.  

Thank you for this fruitful discussion and clear skies
Wolfgang
HegAstro 11.91
If I take a 5 min light frame at 0 degrees Celsius and then capture a  5 min dark frames at the same temperature they both contain the bias - within the  statistical variance of a single frame. By creating a master dark from a number of such dark frames I'll reduce the statistical variance and the master dark still contains the bias. So if I calibrate the light frame with this master dark I get rid of the dark and the bias contributions. Or am I missing something?


You are absolutely correct. Correcting with a master dark from which a bias has not been subtracted will subtract both the correct bias and mean dark current. You have to be sure that the master dark is taken at the same time and temperature as the light frame for this to work. What you are doing is in fact what I do. I do not bother with separate biases at all, and simply correct my lights with master darks and my  flats with flat-darks, in both cases the darks being matched for time and temperature as the light or flat that it is correcting. This is, in my opinion, safer for CMOS sensors than attempting to take a separate bias, the reason being that the behavior of some of these CMOS sensors at very short exposure times (such as what you'd use for dedicated bias) is slightly different than at the exposure times we typically use for lights and even flats. Using this method, all you are relying on is that the sensor behaves repeatably at a given time and temperature which is pretty much always the case.


One advantage of taking biases is the ability to scale darks. For example, if you could take a good bias, and you had a (Bias subtracted) dark that was 60 seconds and wanted one for 180 seconds, you would simply multiply the 60 second dark by three. You can't do this unless you subtract the bias.
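The scaling point can be sketched with invented numbers; `dark_rate` and `bias` are hypothetical:

```python
# Scaling a 60 s dark to 180 s; all values invented for illustration (ADU).
bias = 100.0        # offset, does not grow with exposure time
dark_rate = 0.2     # dark current, ADU per second
dark_60 = bias + dark_rate * 60          # a measured 60 s dark
true_dark_180 = bias + dark_rate * 180   # what a real 180 s dark would read

# Wrong: scaling a dark that still contains bias scales the bias too
wrong = dark_60 * 3
# Right: remove the bias, scale only the dark current, restore the bias
right = (dark_60 - bias) * 3 + bias

print(wrong, right, true_dark_180)
```

Only the dark current grows with exposure time; the offset does not, which is why the bias must come out before multiplying.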
Jbis29 1.20
@Arun H. OK, it's all making sense now. I shoot with an uncooled DSLR, so getting darks that match up with my lights is not doable. Not only does my lights' temperature change, but the darks' does as well. So it would be better for me to scale darks, or use "dark optimization" in WBPP. But that being said, the bias could be a master bias made from frames that are less than 1s long. I could match my flash sync speed to keep the variation and "stress" off the sensor?
kuechlew 7.75
If I take a 5 min light frame at 0 degrees Celsius and then capture a  5 min dark frames at the same temperature they both contain the bias - within the  statistical variance of a single frame. By creating a master dark from a number of such dark frames I'll reduce the statistical variance and the master dark still contains the bias. So if I calibrate the light frame with this master dark I get rid of the dark and the bias contributions. Or am I missing something?


You are absolutely correct. Correcting with a master dark from which a bias has not been subtracted will subtract both the correct bias and mean dark current. You have to be sure that the master dark is taken at the same time and temperature as the light frame for this to work. What you are doing is in fact what I do. I do not bother with separate biases at all, and simply correct my lights with master darks and my  flats with flat-darks, in both cases the darks being matched for time and temperature as the light or flat that it is correcting. This is, in my opinion, safer for CMOS sensors than attempting to take a separate bias, the reason being that the behavior of some of these CMOS sensors at very short exposure times (such as what you'd use for dedicated bias) is slightly different than at the exposure times we typically use for lights and even flats. Using this method, all you are relying on is that the sensor behaves repeatably at a given time and temperature which is pretty much always the case.


One advantage of taking biases is the ability to scale darks. For example, if you could take a good bias, and you had a (Bias subtracted) dark that was 60 seconds and wanted one for 180 seconds, you would simply multiply the 60 second dark by three. You can't do this unless you subtract the bias.

Thank you for this confirmation and explanation Arun!

Clear skies
Wolfgang
jhayes_tucson 22.40
Joseph Biscoe IV:
Ok, so the offset is there as a safety net from outputting random pixels as a “0” value or as “no information “ I’m a relatively new PI user working slowly through Adam Blocks website training. How do you check for the software offset in PI? I noticed awhile back that I hade some very strange flats. I started using a filter (Optolong L-enhance 2” ) and my usual way of obtaining flats- white laptop screen, white t-shirt, - I think may not have been in the bandpasses because I had some zero pixels when I used the readout mode on the weird flats.

When using the “output pedestal” this is the same as assigning an “offset” to the output? Is that the right way of looking at that?

As I said, the offset value is written in the FITS header--you can just go read it.  The output pedestal generated by the ImageCalibration tool adds a pedestal to the calibrated output.  It is there solely to prevent interpolation errors with data that may contain zeros in the image.  Those zeros can be generated by subtracting darks that aren't perfectly matched to the light data.  They shouldn't exist, but it's not an uncommon problem when imaging with NB filters, simply because the data in the dark regions may be VERY, VERY close to the dark level in the master dark file.  Noise (which is simply an uncertainty in the measurement) may cause the zeros to appear, and that's a problem for the interpolation routine.  Adding a pedestal is a simple fix that doesn't damage the quality of the image in any way.

John
jhayes_tucson 22.40
This always confuses me. From the Covington source cited above:

"Bias, the fact that pixel values don't start at zero. Even with no light and no exposure, each pixel typically has a nonzero value. Bias has to be measured separately if your software wants to convert a dark frame to a different exposure time than it was taken with. Otherwise, you do not need bias frames because dark frames include bias."

If I take a 5 min light frame at 0 degrees Celsius and then capture a  5 min dark frames at the same temperature they both contain the bias - within the  statistical variance of a single frame. By creating a master dark from a number of such dark frames I'll reduce the statistical variance and the master dark still contains the bias. So if I calibrate the light frame with this master dark I get rid of the dark and the bias contributions. Or am I missing something?
Of course I can deduct a master bias both from the light frame and from the master dark but this will just change the equation (light + dark + bias) - (masterdark with bias) to (light + dark + bias - masterbias) - (masterdark with bias - masterbias) which is the same. The same basically holds for the flat and flat dark at same temperature and same duration. Am I missing something?

How accurate are you with your flats? If you're refocusing during the night, do you take flats for each focus position? I'm not talking about different filters but different focus positions for the same filter during an imaging session.  

Thank you for this fruitful discussion and clear skies
Wolfgang

Yes, you are missing something.  When you subtract the dark signal from the light data, you also subtract the bias offset, but you are forgetting about the bias offset contained in the flat data.  If bias isn't zero, that small offset causes a small error in the calibrated result that will vary with brightness.  Of course if the bias offset is very small, it won't make much of a difference.  Just be aware that if the bias offset isn't small, you can generate noticeable errors.

John
HegAstro 11.91
John Hayes:
When you subtract the dark signal from the light data, you also subtract the bias offset, but you are forgetting about the bias offset contained in the flat data.  If bias isn't zero, that small offset causes a small error in the calibrated result that will vary with brightness


John - this is the reason why I also calibrate my flat with a flat-dark of the same time/temp as the flat: to get rid of the bias. The dark current at typical flat exposure times is of course negligible, but the bias in a flat can be significant. On my 294MM, the bias at gain 120 is about 1900 ADU, I believe, which is a not-insignificant fraction of the 18000-20000 ADU I target for a flat.

Wolfgang is also subtracting a flat-dark from his flat, which would take care of the concern you bring up.
andymw 11.01
FWIW: I know I don't have in-depth knowledge of how this all works

(although I do get the basics: (Light - MasterDark) / MasterFlat × mean(MasterFlat), etc.)

From what I can tell:

* Taking darks at the same gain, exposure time, offsets and temp as your lights works.
* Taking dark flats at the same gain, exposure time, offsets and temp as your flats works.
* The flats do not need to be at the same gain as your lights.
* If you do the above you don't need bias frames as the bias is included in the darks.

Just trying to simplify things for folks.

I take my flats at gain 0, as I'm using an LED panel with two layers of T-shirt and need to increase the exposure time to get into the linear portion of my CMOS sensor.  My lights, however, are typically at gain 139 (unity gain for me).  I try to get my master flats at around 20000 ADU. It seems to work fine.

N.B. The above all works fine as long as you have a big enough camera offset in the first place to make sure you don't end up with negative values that lead to clipping of the data.  For my ASI1600MM Pro a value of 50 for the offset works fine for lights at gain 139.  Your camera manufacturer will have recommendations for offset values at different gains.
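Andy's calibration expression, (Light - MasterDark) / MasterFlat × mean(MasterFlat), can be sanity-checked on a synthetic one-row example (all values invented):

```python
import numpy as np

# One row of pixels with vignetting; every number is hypothetical.
true_sky = 1000.0
vignette = np.linspace(1.0, 0.6, 100)   # optical falloff across the frame
dark_level = 110.0                      # dark + offset, ADU

light = true_sky * vignette + dark_level
master_flat_raw = 20000.0 * vignette + dark_level

flat = master_flat_raw - dark_level     # dark-subtracted master flat
calibrated = (light - dark_level) / flat * flat.mean()

print(calibrated.std())   # ~0: the vignetting is fully removed
```

Because the same `dark_level` is subtracted from both the light and the flat before division, the vignetting cancels and the calibrated row is flat; skipping either subtraction would leave a residual gradient.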
Jbis29 1.20
...
· 
@John Hayes thank you so much! I need to divorce the idea of calibrating out noise and start focusing on the math that's going on here. It really helps in understanding how to capture darks and how to troubleshoot.

When using the "output pedestal", is there a way to calculate the correct value when using image integration?
HegAstro 11.91
Andy Wray:
* Taking darks at the same gain, exposure time, offsets and temp as your lights works.
* Taking dark flats at the same gain, exposure time, offsets and temp as your flats works.
* The flats do not need to be at the same gain as your lights.
* If you do the above you don't need bias frames as the bias is included in the darks.


Yes, with the exception that it is best to have your flats at the same gain as your lights. The reason is that flats correct not just for optical non-uniformities, but also for pixel response non-uniformity, and this can, at least in theory, be dependent on gain. That said, I have used flats taken at different gains than my lights on a few occasions and it has worked. YMMV.

There was a question about DSLR calibration. I can only tell you what I did with my DSLRs when I used them; I don't use them anymore. Correcting with biases on my Canon 5D Mark IV always seemed to leave some residual banding. What seemed to work better was forgetting about bias and just correcting with darks as described above, using the bias contained in the darks. Yes, the darks could not be temperature matched, but I'd always take a set of 10 or so darks at the end of my imaging session and this seemed to be good enough. Ultimately, the predictability and convenience of cooled cameras won out, which is why I don't use DSLRs any more.
 