First attempt on drizzling - when and how? [Deep Sky] Processing techniques

DanyJrt 1.43
Hi all,

After processing almost 40 hours of data on M106 from a Bortle 4 sky, I expected far more detail to appear in the galaxy.

This image was processed using BlurX, LHE, MLT and an unsharp mask on the L layer before combining into HaRGB, but zooming in, I still find the galaxy too blurry for all the hours of exposure I gave it.
The acquisition went smoothly, with good seeing and no wind during all nights of capture (total guiding RMS always below 0.7).

Convinced I can do better on the processing part, I am now investigating ways to pull out more details from my images and will attempt to drizzle my data before reprocessing it.

From my understanding, in order to drizzle, I need:
- to dither frequently (I do every 3 frames, by 5 pixels)
- to be undersampled

The camera I use is the ASI2600MM Pro (pixel size 3.76 microns, resolution 6248 × 4176) and the scope is the Esprit 120 (120 mm aperture, 840 mm focal length, f/7; no reducer used in this example).

According to Astronomy.tools, using the formula (Pixel Size / Telescope Focal Length) × 206.265:
With this setup, the resolution is about 0.92"/pixel, so slightly oversampled.
With the x0.77 reducer applied, the resolution is 1.21"/pixel, so neither over- nor undersampled.
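As a quick sanity check of those numbers, here is a minimal Python sketch using only the values from this thread (the x0.77 reducer is applied to the focal length):

```python
# Pixel scale in arcsec/pixel: (pixel size in um / focal length in mm) * 206.265
PIXEL_UM = 3.76   # ASI2600MM Pro pixel size, microns
FL_MM = 840.0     # Esprit 120 native focal length, mm

def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel."""
    return pixel_um / focal_mm * 206.265

native = pixel_scale(PIXEL_UM, FL_MM)          # ~0.92 "/px at native focal length
reduced = pixel_scale(PIXEL_UM, FL_MM * 0.77)  # ~1.20 "/px with the x0.77 reducer
print(f'native: {native:.2f}"/px, reduced: {reduced:.2f}"/px')
```

(Small differences from the Astronomy.tools result with the reducer come down to rounding.)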

According to the RC-Astro MTF analyzer, I am undersampled in both setups, which convinced me to try drizzling and then compare results (drizzle x2, crop factor 0.9). Once done, I believe I then have to resample the image to 50%.

For those using the Esprit 120 and ASI2600, do you drizzle all of your pictures? If not, when do you?

Finally, regarding the RC-Astro MTF analyzer, one input that affects the sampling is the seeing conditions (from 0.5 to 4.00 arcsec).
I don't know how to calculate this, but I have read it is often defined by the FWHM.
How do I calculate it?
In PI, I would extract the L component of a channel, then run the FWHMEccentricity analysis script on the stacked image (after running WBPP in non-drizzle mode), and use this value to set up a manual PSF in BlurX. But should I run the script on a single frame (before stacking) to obtain a seeing value to input into the RC-Astro MTF analyzer?

Thanks in advance for any tips and insights!
andreatax 7.56
It would be right to say not only undersampled but significantly undersampled, with an average FWHM of less than 2.5 pixels. And your reference FWHM is the final, stacked and unprocessed Luminance value as extracted by the FWHMEccentricity script (multiplied by the image scale). Keep in mind that your aperture is small and your image scale already large at ~0.9"/px, so I'd not pin my hopes on any improvement. If you want better resolution, get a large reflector and you'll be good.

Incidentally I don't think that site is going to give you any guidance on whether you should be drizzling (I'll assume we're talking 2x here) unless you are in the business of splitting doubles. And given the results I bet your seeing is worse than 2".
AstroNovixion 0.00
Hi DanyJrt,

Commenting since I have a VERY similar set up (Esprit 120ED) and even a VERY similar image but in OSC (2600MC Duo) from B7+.  I'm rather new so please take what I say with a grain of salt. I think you captured a lot of the nicer details of the galaxy such as the dust around the core. Looking around the other Astrobin images with the Esprit 120ED on this galaxy, there aren't too many more details to pull out of the outer regions. I think Andrea covered drizzling well. I personally understood it as if your pixel scale was much larger than your seeing/guiding (limiting factor) then you can benefit from drizzling. So for us, it would be <0.45" seeing and guiding.
Commenting since I have a VERY similar setup (Esprit 120ED) and even a VERY similar image, but in OSC (2600MC Duo) from B7+. I'm rather new, so please take what I say with a grain of salt. I think you captured a lot of the nicer details of the galaxy, such as the dust around the core. Looking at other Astrobin images of this galaxy taken with the Esprit 120ED, there aren't too many more details to pull out of the outer regions. I think Andrea covered drizzling well. I personally understood it as: if your pixel scale is much larger than your seeing/guiding (the limiting factor), then you can benefit from drizzling. So for us, it would be <0.45" seeing and guiding.
andrea tasselli:
It would be right to say not only undersampled but significantly undersampled, with an average FWHM less than 2.5 pixels. [...]

Hi Andrea,

Please correct me if I misread: using FWHMEccentricity, I'm getting a FWHM of around 2.7-3 pixels fully stacked and ~1.8 pixels on single subs, at 0.9"/px. Would this mean I'm seeing limited / my seeing is ~2" (assuming guiding is good, etc.)? And that I am still at a pixel scale that can benefit from good seeing?
andreatax 7.56
Joel Lee:
Please correct me if I misread... I'm getting FWHM around 2.7-3 pixels fully stacked and ~1.8 pixels single subs at 0.9"/px... [...]


Your randomly sampled seeing might be as low as 1.6" (lucky you!), but it is the integrated seeing that matters since, after all, this is extracted from the luminance layer your image is based upon. If the integrated FWHM were of the order of 2.5 px or less, then you'd be pixel-scale limited, not seeing limited, and thus "eligible" to benefit from drizzling at 2x. Or from increasing your image scale, if you can afford it in terms of SNR and $$$. At your integrated FWHM of ~2.7" you'd do well to keep everything the same and use CFA drizzle to extract the maximum amount of detail from your images.
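For anyone following along, this rule of thumb can be sketched in a few lines of Python (my paraphrase of the advice above, not an official formula; the 2.5 px threshold and 0.9"/px scale are the values discussed in this thread):

```python
SCALE = 0.9  # "/px for the ASI2600 + Esprit 120 setup discussed above

def seeing_arcsec(fwhm_px, scale=SCALE):
    # FWHMEccentricity reports FWHM in pixels; multiply by the image scale
    # to estimate the delivered seeing in arcseconds.
    return fwhm_px * scale

def drizzle_2x_worthwhile(integrated_fwhm_px, threshold_px=2.5):
    # Below ~2.5 px you are pixel-scale limited, so 2x drizzle can help;
    # above it you are seeing limited and drizzle buys little.
    return integrated_fwhm_px < threshold_px

print(seeing_arcsec(1.8))          # best single subs: ~1.6"
print(drizzle_2x_worthwhile(2.7))  # integrated ~2.7 px: seeing limited
```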
AstroNovixion 0.00
andrea tasselli:
Your randomly sampled seeing might be as low as 1.6" (lucky you!) but it is the integrated seeing that matters... [...]

Thanks Andrea! Looks like I got some reading and experimenting to do about stacking using CFA drizzling. Glad to hear I can push for higher resolution on good days around here.
HotSkyAstronomy 2.11
Joel Lee:
Thanks Andrea! Looks like I got some reading and experimenting to do about stacking using CFA drizzling. Glad to hear I can push for higher resolution on good days around here.

I've found drizzle VarShape = 1.5 with a drop shrink of 0.7-0.8 works best for seeing-limited images, as long as you are dithering every frame at a somewhat larger scale; my settings should help a lot. You might lose maybe 5 minutes of integration overall to guider settling times, but the end result will be MUCH sharper.
DanyJrt 1.43
andrea tasselli:
It would be right to say not only undersampled but significantly undersampled, with an average FWHM less than 2.5 pixels. And your reference FWHM is the final, stacked  and unprocessed Luminance value as extracted by FWHMEccentricity script (multiplied by the image scale). Keep in mind that your aperture is small and your image scale already large at ~0.9"/px so I'd not pin my hopes on any improvement. If you want better resolution get a large reflector and you'll be good.

Incidentally I don't think that site is going to give you any guidance on whether you should be drizzling (I'll assume we're talking 2x here) unless you are in the business of splitting doubles. And given the results I bet your seeing is worse than 2".

Thanks for your reply, Andrea.
It looks like my seeing was indeed worse than 2", probably a bit of haze in the higher parts of the sky.
I ran the FWHMEccentricity script on all the masters to find the median values:
- 3.19 px on the blue channel
- 3.19 px on the Ha channel
- 3.00 px on the R channel
- 2.47 px on the G channel
- 4.10 px on the L channel
I am well above 2.5, so I understand drizzling might not be relevant with this setup.

I plan to use a new setup for my next project, pairing the same camera (ASI2600MM) with the Redcat 51 this time. The pixel scale should be 3.1"/pixel, so significantly undersampled; I believe drizzle will make sense with this rig.


In order to combine theory with practice, while reading your comments this weekend I also re-ran WBPP in drizzle mode, just to compare with the non-drizzled data and see for myself how much better or worse it would be.
The workflow has been slightly modified and is as follows:
- run WBPP on each channel, activating drizzle x2 (and crop factor x0.9)
- star align and dynamic crop all channels
- RGB combination
- run GraXpert on RGB, L and Ha
- SCNR on RGB
- run BlurX on RGB, L and Ha (drizzling + BlurX worked well to pull out finer details in the galaxy dust lanes, but also added extra noise across the entire image)
- resample the images (50%) to average out the noise
- run NoiseX on RGB, L and Ha
- run StarX on RGB, L and Ha
- stretch.
And from here on things start to get shaky.

Drizzle.jpg

Dusty regions are cleaner than in the non-drizzled image, even before applying LHE/MLT/unsharp mask. This is why, when posting the thread, I was sure I could get more out of this image.
Though, while carefully stretching with GHS and playing with local intensity, color posterization appears very quickly on the edges of the galaxy, even before the image is fully stretched.

Is this normal because I ran the drizzle test on a data set that shouldn't be drizzled (as mentioned earlier), or is it due to a mistake in the workflow? I'd like to understand this, as I will be trying to drizzle again in the future with the Redcat.

aaronh 1.81
DanyJrt:
Though, while carefully stretching with GHS and playing with local intensity, color posterization appears very quickly on the edges of the galaxy, even before having fully stretched the image.

Is STF set to use 24-bit Lookup Tables? If not, posterisation is expected when working with linear data.
DanyJrt 1.43
Trying different things, it appears I have to stretch less to avoid color posterization. It's difficult to see what data I am losing in comparison to a more stretched version of the image.

Working further on the L layer of the drizzled data, I tried my best to pull out the faint details.
On this attempt, I only used LHE and sharpening, versus the first version of the image where I also applied MLT.
MLT didn't work out on the drizzled data: it created an unnatural look, some weird artifacts and a loss of contrast.

I didn't integrate the Ha data into the RGB before combining with L on the drizzled data, so don't mind this difference.
But here's the result of the drizzled processed data versus non-drizzled:

Drizzle2.jpg

Without zooming in too much, I believe the difference is quite noticeable. 
What do you think ?
DanyJrt 1.43
Aaron H.:
Is STF set to use 24-bit Lookup Tables? If not, posterisation is expected when working with linear data.

STF is disabled as this is a stretched image.
andreatax 7.56
DanyJrt:
Without zooming in too much, I believe the difference is quite noticeable.
What do you think?

I don't think they are quite comparable in terms of equal process -> different outcomes. Some features are actually absent in the non-drizzled version, which one might suspect is due to SXT being too aggressive in one case. Use SN++ on the linear image to preserve more of those features. Besides, I also think you are processing this image way too much. If you want to increase contrast, just use an unsharp mask with appropriate masks to obtain the desired result.
DanyJrt 1.43
andrea tasselli:
I don't think that they are quite comparable in terms of equal process -> different outcomes. [...]

Good observation! I agree with you, SXT is inconsistent in feature removal. I'll try again with StarNet++.

Regarding processing, I kept track of the values input into each process, which allowed me to replicate the same process between the non-drizzled and the drizzled data, to make sure I compare apples to apples. The only difference is that MLT was removed from the drizzled workflow.

Finally, about over-processing the image, I find it quite difficult to know exactly when to stop, in particular when processing a galaxy, where we want to see crisp details, versus a nebula that can be a bit more blurry. I guess we lose objectivity when spending too much time working on an image.
How do you know when to stop? Do you use a benchmark when processing your images? Stop processing for a while and come back with a fresh eye?
andreatax 7.56
How do you know when to stop? Do you use a benchmark when processing your images? Stop processing for a while and come back with a fresh eye?


It may take me years to be fully settled on any one given image, so, as they say, YMMV. That said, I feel you're straining to get as much detail as possible from your images. I always keep the same basic process, which is very streamlined and minimalistic; only the final touches change from subject to subject. So my best advice is to sit tight, come back in a couple of days, and try to see it in a new light. Here is what I do:

0. BXT, if deserved (i.e. it only applies if the PSF is large enough to benefit from it).
1. Extract stars with either SN or SXT, depending on which one gives the best result in terms of preserving the non-stellar portion of the image.
2. Blot out imperfections and leftover star haloes from the background of the starless image.
3. Stretch the starless image with one's preferred choice of algorithm (based on the dynamics of the image).
4. Arcsinh of the above to bring out colors. Possibly in combination with HT to darken the background and/or shift the mid-point of the image.
5. A combination of masked applications of NXT at different levels to remove noise without creating an unnatural/posterized/blotchy background, i.e., some noise needs to remain. Also, we want to preserve detail and contrast, so a light hand is the way to go here.
6. CurveTransformation to remove color casts in the background.
7. GraXpert to remove large splotches and additional color gradients that might have been created by stretching the image (careful here, though).
8. UM to increase contrast in high-signal/low-noise areas, if required. Always with masks.
9. Get the stretched stars back in and see how it looks. The whole of it.

Go back to points 5 to 8 if the result does not appear "natural", and by natural I mean a good daytime picture of a landscape. Come back in a couple of days with a fresh, not strained, eye and see whether it still matches your perception of "beauty".
DanyJrt 1.43
andrea tasselli:
It may take me years to be fully settled on any one given image, so as they say YMMV. [...] Here is what I do: [...]

Thank you for sharing, Andrea, much appreciated.

I noticed you mention GraXpert quite late in your workflow. I believe best practice is to run it as early as possible. Any reason why?
andreatax 7.56
I noticed you mention GraXpert quite late in your workflow. I believe best practice is to run it as early as possible. Any reason why?


I assumed that the image to be processed was already flattened, and GraXpert doesn't always yield the best results, more so with galaxies.
IrishAstro4484 5.96
DanyJrt:
Hi all,

After processing almost 40 hours of data from a Bortle 4 sky of M106, I expected way more details to appear in the galaxy. [...]

I've used drizzle integration to very good effect with my Redcat 51/2600MC Pro. It certainly improves my stars, but I am not entirely sure it adds a lot of detail. I think the key with drizzle integration is that the data needs to be significantly undersampled to get any tangible benefit. The downside is that the stacked image can be noisier, so it works best with a lot of sub-frames. Also, file sizes get very large, and so does processing time. PixInsight crashed a few times on my last project too. The proof is in the pudding, so if you have the time you can always experiment!