About the calibration and usage of cameras with the Gsense 4040 sensor

WeberPh
Dear all,

for almost exactly two years now we (Dr. Karl Remeis-Observatory Bamberg) have been using a Moravian C4-16000 EC, which houses the Gsense 4040 sensor from Gpixel, for our university astronomy lab course and for public outreach. Initially we had some problems getting good data with this camera, or rather this sensor, but after understanding the inner workings of this sCMOS sensor a bit better we are now able to produce satisfying results, and I would like to share some of these aspects.

I know this is a very long post, but I hope it will be helpful to some people.

First of all, it should be noted that the camera body of the C4-16000 EC itself is an outstanding piece of equipment and a joy to use.

When looking around online in the usual forums and here on Astrobin you can find many images/posts where people describe having problems calibrating images taken with this sensor (some say using it is "a pain"). My suspicion is that most camera manufacturers do not communicate very well that two aspects of this sensor are very important to understand, both before taking it under the sky and before tackling the calibration. To their credit, Moravian specifically are very upfront in their operating manual about both aspects I want to discuss. All other manuals for Gsense 4040 based cameras I could find online do not, in my opinion, provide the necessary information.

The dual gain nature and calibrating Gsense 4040 images
The impressive feat of the Gsense 4040 is that with its fairly large 9 micron pixels it achieves a, for this size, very low read noise of only ~4 electrons RMS (this is also small compared to Sony's IMX455/571 etc. series, because the pixel area is ~5.7x larger), while at the same time retaining a large full well capacity of around 60k electrons. But to achieve this a bit of technical trickery is required. Natively the sensor only provides 12 bit ADCs for readout, which would be a big disadvantage compared to older CCDs like the KAF-16803 with their native 16 bit readout. However, the Gsense 4040 can digitize the charges in the semiconductor twice, once with a fairly high gain and once with a fairly low gain. As far as I can tell the specific values for those gain settings vary from camera manufacturer to camera manufacturer. For the C4-16000 from Moravian the high gain is fixed at 0.85 electrons/ADU and the low gain at 19.5 electrons/ADU. If I interpret the manual of the FLI KL 4040 correctly, those values can be chosen arbitrarily by the user there. For the image read out with the high gain mode the read noise is fairly small, namely the ~4 electrons RMS stated above, but at the same time the full well capacity is fairly small, in the case of the C4-16000 only around ~3500 electrons. So using only the high gain mode would result in low noise images which blow out really quickly. The opposite, in a sense, results from taking images in the low gain mode: the full well is increased to the ~60k electrons stated above, but the read noise reaches an immense 34.5 electrons RMS (C4-16000). This means there is a lot of room for highlights in the image, but at the same time the darker areas drown in noise.
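To put numbers on this trade-off, here is a quick back-of-envelope dynamic range calculation (full well divided by read noise, expressed in stops), using the C4-16000 figures quoted above. This is just my own arithmetic, not a manufacturer specification:

import math

def dyn_range_stops(full_well_e, read_noise_e):
    # Dynamic range in stops (powers of two)
    return math.log2(full_well_e / read_noise_e)

print(f"high gain: {dyn_range_stops(3500, 4.0):.1f} stops")    # ~9.8
print(f"low gain:  {dyn_range_stops(60000, 34.5):.1f} stops")  # ~10.8
print(f"HDR:       {dyn_range_stops(60000, 4.0):.1f} stops")   # ~13.9

So the HDR combination buys roughly 3 to 4 stops of dynamic range over either mode alone.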
To get the best of both worlds the Gsense 4040 (or rather the camera firmware) uses both of these modes to read out the same exposure twice: once with the high gain and once with the low gain. It then checks for each pixel individually which of the two digitized values to keep in the final image stored to disk. If the brightness value of a given pixel is below or equal to a given threshold it keeps the value from the high gain mode; if it is above, it keeps the value from the low gain mode.
A detail to note here is that, because of the lower gain, the raw values from the low gain readout are much smaller than those from the high gain readout for a given pixel. To correct for this, the brightness values from the low gain readout are first scaled up to the full 16 bit range such that the values of both read modes line up on the linearity scale of the sensor (I have been told that this is actually done using a polynomial to ensure the linearity of the sensor, since this varies from device to device). After this is done the values below the mentioned threshold should be the same for the high gain and low gain readouts, but of course the read noise from the low gain image is much larger. Therefore the values from the high gain image are kept below this threshold.
Above this threshold, however, the high gain image starts to blow out. Therefore the value from the low gain image is kept.
This entire procedure is called "high dynamic range" mode, or HDR in short.
In case of the C4-16000 this threshold is fixed at 3600 ADU which gives a bit of room to spare up to the theoretically maximal value of 4095 ADU resulting from the 12 bit readout.

So let's go through some examples. Please understand that the following numbers were chosen arbitrarily by me and do not necessarily reflect a real case; they are only meant to illustrate the behaviour.
Case 1: The high gain mode gives a value of 2500 for a specific pixel. For the same pixel the low gain mode gives a value of 107. This value is transformed to 16 bit and corrected for the differences in gain which results in a value of 2490. The result for the high gain mode (2500) is below the threshold of 3600 so for this particular pixel this value of 2500 is kept.
Case 2: The high gain mode gives a value of 4052 for a specific pixel, i.e. given the 12 bit depth it is almost blown out. For the same pixel the low gain mode gives a value of 1780 which is transformed to 16 bit and corrected for the differences in gain which results in a value of 28480. In this case the value of the high gain is well above the threshold of 3600 therefore the value of the low gain mode is kept for this pixel, so it is not blown out.
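The same selection logic, expressed as a small sketch (my own illustration; note that the simple linear 12-to-16 bit scale factor here is just a stand-in for the device-specific linearity polynomial mentioned above):

import numpy as np

THRESHOLD = 3600   # C4-16000 HDR switch-over point in high gain ADU
SCALE = 16         # naive 12->16 bit scale; real cameras use a per-device polynomial

def combine_hdr(hi_12bit, lo_12bit):
    # Keep the native high gain value below/at the threshold,
    # the scaled low gain value above it.
    lo_16bit = lo_12bit * SCALE
    return np.where(hi_12bit <= THRESHOLD, hi_12bit, lo_16bit)

print(combine_hdr(np.array([2500, 4052]), np.array([107, 1780])))  # -> [ 2500 28480]

This reproduces both cases: the first pixel keeps its high gain value of 2500, the second is replaced by the scaled low gain value of 28480.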

Quick side note here: Since the values of the high gain mode are placed natively in the image, the brightness steps in this range really occur in single ADUs, i.e. 1, 2, 3, 4, ... 4095. Because the 12 bit readout of the low gain mode is scaled up to 16 bit, this is not true for the low gain image: the brightness values increase in much larger steps. I just checked in one of our 16 bit transformed flat fields (see below) and the average brightness step seems to be 21 ADU, i.e. the values go like ... 28470, 28491, 28512 ... This is not an issue, however, because when stretching the image in post processing we "squash" the bright areas together, so these steps don't show up as posterization.
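As a quick sanity check on that step size (my own back-of-envelope; the interpretation is a guess): a pure 12-to-16 bit stretch would give steps of about 16 ADU, while scaling by the ratio of the two gains would give about 23 ADU. The measured ~21 ADU lies between the two, consistent with the per-device linearity polynomial mentioned earlier rather than a plain bit shift.

print(65535 / 4095)   # ~16.0 ADU per step for a pure bit-depth stretch
print(19.5 / 0.85)    # ~22.9 ADU per step when scaling by the gain ratio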

Of course for imaging we really want to use this HDR mode because it gives us low read noise and high full well, i.e. the high dynamic range we're looking for. But at the same time it completely mixes two different readouts in the exposures stored to disk. This is what people tend to do, and then they run into a very interesting problem. If the HDR mode is also used to take flat fields, and the usual rule of aiming for somewhere around half saturation (~33000 ADU) is followed, only the low gain value is stored to disk for every pixel of the flat fields. This is absolutely fatal, because when imaging the night sky most pixels are fairly dark, meaning the high gain value is stored. Applying this flat field then effectively applies the flat of the low gain mode to data from the high gain mode, which leads to wrong results, usually described as "fixed pattern noise". This is especially obvious for the Gsense 4040 because it contains 4 distinct sets of ADCs, which cause the very distinct 4-quadrant pattern of the sensor; see this stack of 100 high gain bias frames as an example:
bias.jpg
When using flats taken this way, the only correctly calibrated pixels in the lights are those which were stored from the low gain readout, i.e. bright stars etc. Incidentally, when using the HDR mode to capture bias, dark, and flat dark frames, those essentially contain only values from the high gain mode, because no light reaches the sensor and, apart from hot pixels, nothing reaches the threshold where the low gain value is stored. So when those frames are applied to lights taken in the HDR mode, the high gain values in the lights are properly corrected for bias and dark current, while those taken from the low gain readout are wrongly calibrated.

What is sometimes proposed as a possible solution is to take flats of very low brightness, i.e. below the threshold where the low gain values are kept, but still in the HDR mode. This ensures that at least most values in the flats come from the high gain mode, which can then be correctly applied to the dark areas in the lights. But it also means that calibration with the wrong gain is applied to the brighter areas, for example stars, where the low gain readout was kept.

So what can be done about this? We have to make sure to dynamically correct each pixel in the light with the calibration files from the correct read mode. To do this we first have to take those calibration frames. The C4-16000 makes it possible not only to choose the HDR mode for readout, but also to explicitly set the driver to store the native 12 bit high gain image or the 16 bit transformed low gain image. The latter is very important: the untransformed 12 bit low gain image is also available, but since only the 16 bit transformed low gain values enter the final HDR light, it is of no use here; only the 16 bit transformed low gain mode is. I hope that other camera manufacturers also expose these settings in their drivers, but I cannot verify this.
We now have to take a full set of calibration frames for each of these two read modes (dark, flat, and flat dark or bias, whichever you prefer). The usual rules apply for darks and bias frames (same temperature, no light, and the same exposure time for the darks). For the flats we have to make sure to get the brightness levels right. For the 12 bit high gain flats we have to aim for somewhere around half the saturation of the 12 bit scale, i.e. 0.5 * 2^12, which ends up being around 2000 ADU; I usually bump it up to 2500. Given the high gain and low ADU count this means the exposure times are fairly short. This is normal, but you have to make sure you don't run into issues with the flicker of flat panels: we had to use ND filters to push the exposure times to many times the flicker period, otherwise we got banding in the flats. For the 16 bit transformed low gain mode, however, the story is different. The maximum is now given by the usual 2^16, half of which leads back to the old rule of ~33000 ADU; 30000 or 35000 are fine as well.
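As a small helper, one can sanity check the flat targets and exposure times like this (a sketch under my own assumptions: the ~100 Hz flicker figure is typical for mains-driven panels but not measured, and the margin factor is a conservative guess):

HALF_SAT_HI = 2**12 // 2   # ~2048 ADU target for native 12 bit high gain flats
HALF_SAT_LO = 2**16 // 2   # ~32768 ADU target for 16 bit transformed low gain flats

PANEL_FLICKER_HZ = 100.0   # assumption: twice a 50 Hz mains frequency

def flat_exposure_ok(t_sec, margin=50.0):
    # Exposure should span many flicker periods to average out banding
    return t_sec > margin / PANEL_FLICKER_HZ

print(HALF_SAT_HI, HALF_SAT_LO)
print(flat_exposure_ok(0.02))  # False: 20 ms is too short, expect banding
print(flat_exposure_ok(1.0))   # True: 1 s spans ~100 flicker periods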
To recap: We now have
  • 12 bit high gain darks
  • 12 bit high gain bias or flat darks
  • 12 bit high gain flats
  • 16 bit transformed low gain darks
  • 16 bit transformed low gain bias or flat darks
  • 16 bit transformed low gain flats

and sufficiently many in each category; I'd recommend at least 20 of each. Now we have to separate these two sets and create a set of master calibration frames for each read mode individually. This can be done using the usual techniques, for example WBPP in PixInsight. Just make sure you do not mix any frames from different read modes (a small sketch for sorting the files by read mode follows below). The story usually goes something like this: stack all flat darks, calibrate the flats using this master flat dark, stack all flats to get a master flat, and stack all darks to get a master dark. But again: do this for each read mode individually. In the end you should have
  • a 12 bit high gain master dark
  • a 12 bit high gain master flat
  • a 16 bit transformed low gain master dark
  • a 16 bit transformed low gain master flat

Of course this is only valid for one filter. If more filters are involved, or even different exposure times for the HDR lights, more of these master calibration files must be created.
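To avoid mixing read modes (or filters) when building the masters, it can help to sort the files up front. A minimal sketch using astropy; note that the header keyword name "READOUTM" is an assumption on my part, so inspect one of your own FITS headers to see what your driver actually writes:

import shutil
from pathlib import Path
from astropy.io import fits

src = Path("calframes")
for f in src.glob("*.fits"):
    # "READOUTM" is a placeholder keyword; check your headers first
    mode = str(fits.getheader(f).get("READOUTM", "unknown")).strip().replace(" ", "_")
    dest = src / mode
    dest.mkdir(exist_ok=True)
    shutil.move(str(f), str(dest / f.name))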
We can now use these files to properly calibrate the HDR lights. To recap: The pixels in the HDR images with values smaller than the threshold (3600 ADU in our case) come from the 12 bit high gain mode, therefore these calibration frames must be applied to these pixels. Pixels above this threshold come from the 16 bit transformed low gain mode, therefore these calibration frames must be applied to these pixels. I will illustrate how to accomplish this using PixInsight for a single light and afterwards show how this can be done in batch to all lights simultaneously.

We open a single HDR light. We also open the 12 bit high gain master dark and name it "dark_hi", the 12 bit high gain master flat and name it "flat_hi", the 16 bit transformed low gain master dark and name it "dark_lo", and the 16 bit transformed low gain master flat and name it "flat_lo". We then open PixelMath and enter the following expression:
iif($T <= 3600/65535,
  ($T - dark_hi) * mean(flat_hi) / flat_hi,
  ($T - dark_lo) * mean(flat_lo) / flat_lo
)
This expression checks whether the pixel value in the HDR light is below (or equal to) the threshold (3600 ADU in my case) and then performs the calibration of this pixel using its counterparts in the 12 bit high gain calibration files. If the pixel value is above the threshold it applies the 16 bit transformed low gain calibration instead. The PixelMath process of course runs this procedure for every pixel in the HDR light, and in this way we get a nicely calibrated HDR light. Here is what my GUI looks like before this single calibration:
Screenshot_2024-02-28_00-12-30.png
And here the result after applying the process (STF adjusted):
Screenshot_2024-02-28_00-12-52.png
Unfortunately some gradients from our city sky are visible, but no other pattern from the sensor remains. Specifically, the quadrants are completely gone, while they are still a bit visible in the "before" image. Here is a full resolution jpg of the result for your inspection:
cal.jpg
I think this can be called clean, ignoring the gradient from our city sky.

Calibrating all lights this way by hand is of course not really feasible. But this is where PixInsight's ImageContainer comes in very handy. In the UI you can right click and open this dialog. I won't go into detail about how to use it in general; resources can be found online. Suffice it to say, you can add the existing HDR lights here and set an output directory. Then create a new instance of this container (drag the triangle in the lower left-hand corner of the UI) and apply the PixelMath process to this instance. PixInsight will then perform the entire calibration of the HDR lights.
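As an alternative to the ImageContainer route, the same batch calibration can be scripted outside PixInsight. A minimal sketch with NumPy and astropy, under the assumptions that the frames are stored as raw 16 bit ADU (hence the threshold of 3600 instead of the normalized 3600/65535 used in PixelMath) and that the placeholder file names are adapted to your setup:

import glob
from pathlib import Path
import numpy as np
from astropy.io import fits

THRESHOLD = 3600  # ADU, the C4-16000 HDR switch-over point

# Master frames as produced above; the file names are placeholders
dark_hi = fits.getdata("master_dark_hi.fits").astype(np.float64)
flat_hi = fits.getdata("master_flat_hi.fits").astype(np.float64)
dark_lo = fits.getdata("master_dark_lo.fits").astype(np.float64)
flat_lo = fits.getdata("master_flat_lo.fits").astype(np.float64)

Path("calibrated").mkdir(exist_ok=True)
for path in sorted(glob.glob("lights/*.fits")):
    light = fits.getdata(path).astype(np.float64)
    # Below/at the threshold: high gain calibration; above: low gain calibration
    cal = np.where(light <= THRESHOLD,
                   (light - dark_hi) * flat_hi.mean() / flat_hi,
                   (light - dark_lo) * flat_lo.mean() / flat_lo)
    fits.writeto(Path("calibrated") / Path(path).name,
                 cal.astype(np.float32), overwrite=True)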

Afterwards you can add these calibrated lights as the input of WBPP and proceed as normal, of course not doing any calibration anymore.

If you are handy with WBPP's keywords you can easily prepare the required set of calibration frames automatically as well, without mixing the different read modes and for a variety of filters (that's what I usually do).

I have processed several images from this camera this way (see my gallery) and the calibration is usually not an issue at all. Once again I'd like to point out that the manual of the C4-16000 gives a very nice overview of this topic as well; for the practical part, Moravian of course shows how to do the calibration using their own software package "SIPS".


The residual bulk image effect
Even if the calibration outlined in the previous section is performed correctly, there is still the possibility of a residual pattern in images from the Gsense 4040 sensor. It arises from something called the "residual bulk image" (RBI) effect. To understand what happens we need to briefly discuss the basics of how CCD and CMOS sensors work. When a photon hits the semiconductor of the sensor it can be absorbed, generating a free electron in the process. Over the duration of the exposure these free electrons accumulate, and when the exposure is over they get "counted" by the ADC(s) of the sensor. This results in a number for each pixel: the higher the number, the more electrons were counted in this pixel, the more photons, i.e. more light, must have hit this pixel, and therefore the brighter the pixel. What can happen in front illuminated sensors is that some photons travel further down into the substrate of the sensor, where they can still be absorbed and generate free electrons. However, those electrons are not caught by the readout process and "linger" around for a while because they are caught in so-called "charge traps". These charge traps are not uniformly distributed over the semiconductor of the sensor; their density is determined by the growth structure of the silicon crystal. Over the course of a comparatively long time (sometimes even hours) these electrons leak back from the charge traps into the actual pixel structures of the sensor and generate "ghost images", whose structure depends on what caused the previous stronger illumination of the sensor. This effect becomes especially noticeable when taking darks, or when capturing faint objects after taking flats or broadband images of bright objects.

What does this mean in practice?

Scenario 1: Bright stars
When one or more bright stars fall on the sensor, the charge traps at the locations of these stars get filled. When dithering is employed, the locations of these bright stars shift in subsequent frames, but the charge traps stay at the same location. Therefore the electrons leak back into the pixels at the old location, causing a ghost image of the bright star. The same is of course true after a meridian flip or when switching targets. Here is an example of such a case. On the left-hand side is the image before and on the right-hand side after a meridian flip. Note the faint dot in the right image where the star used to be in the left image.

Screenshot_2024-02-28_00-55-27.png

Usually these occurrences are not a big issue, because pixel rejection during stacking, after correct star alignment, can easily remove these bright areas.

Scenario 2: Changing filter from broadband to narrowband
The situation is much worse when going from a state where a lot of light reaches the sensor to one where faint structures are supposed to be captured. I have experienced this a lot when a broadband image, especially luminance, was taken, followed by a switch to the 3.5nm hydrogen filter: the image is extremely polluted by weird structures caused by the filled charge traps in the sensor's substrate. The following sequence illustrates the behaviour with the OIII filter. It shows subsequent 10 minute exposures using this filter after a single 5 second luminance exposure for plate solving (STF adjusted between frames, otherwise they would go from very bright to very dark):

Especially this edge on the left is very bizarre and I don't know the exact reason for it. Calibrating frames with such pollution is very difficult, if not impossible. Most cameras using the Gsense 4040 come with an LED for the so-called "near infrared preflash". This mechanism can be used to illuminate the entire sensor with strong IR light, followed by a number of readouts, before each exposure. In theory this fills all the charge traps consistently. When doing this for the dark frames as well, it should be possible to simply subtract this effect and get a clean calibration. However, the additional electrons from the RBI cause additional Poisson noise (aka shot noise) which pollutes the image further. I tried to use this method to calibrate images over multiple nights, but the results were very unsatisfactory: the growth structure of the silicon crystal was clearly visible after stacking multiple hours of data.

What can be done about this? The key is careful planning. Single exposures, even for luminance, should be at least 5 minutes long to get enough "real" electrons into the pixel structure, and filters should be changed as rarely as possible. The rate at which the electrons leak back into the pixel structure depends on the temperature of the sensor: the colder the sensor, the longer it takes. At -20 °C it can easily take up to 2 hours (!) after a filter change until the RBI disappears completely.

The same is of course true for taking dark frames: it must be ensured that all charge traps are empty beforehand. So it might be necessary to wait ~2 hours before taking the first dark frame.


Summary
I know this is a very long post and I am thankful for every single person who made it down here!
The key points are:
  • Stick to the calibration procedure I outlined above
  • Take exposures of at least 5 minutes
  • Change filters as rarely as possible
  • After switching from broadband to a narrowband filter it can take up to 2 hours until you get useful data

When doing so the C4-16000 can deliver absolutely stunning performance, especially compared to KAF-16803 based cameras (not to mention the MUCH faster readout and download), and I'm sure that every other Gsense 4040 based camera can do the same.

I would love to hear your thoughts on this.

Ciao and clear skies,
Philipp
Philippe.BERNHARD
Thanks. We have replaced our G4-16000 with a C4-16000-EC on our CDK24 and made a script for calibration.
But the sensor is not easy.
We ordered a C5-100M which we should receive in a couple of months.
WeberPh
Philippe BERNHARD:
Thanks. We have replaced our G4-16000 with a C4-16000-EC on our CDK24 and made a script for calibration. But the sensor is not easy. We ordered a C5-100M which we should receive in a couple of months.

Don't you expect quite a bit of oversampling with the IMX461? Of course you can always rebin.
Philippe.BERNHARD
One of our colleagues uses the IMX461 on his CDK1000 and CDK700 with 2x2 binning.
This sensor can do hardware binning (but limited): in reality it is hardware binning on the vertical pixels and software binning on the horizontal pixels.