[RCC] M42 The Orion Nebula · Requests for constructive critique · Omiros Angelidis

Gomarofski 0.90
I am seeking honest feedback from the masters of acquisition and processing, beyond the blown-out core, where I managed to eliminate the Trapezium altogether.

https://www.astrobin.com/6v61k9/

I know that beauty is subjective, but I am not seeking a likeable photo. I am seeking a realistic and not overprocessed result, yet I always seem to miss that balance!

Thank you and clear skies!

Omiros
C.Sand 2.33
Did you dither? It looks somewhat like walking noise; if you did dither, you may want to increase your dither distance. Also, there's that artifact in the center right; I'm unsure whether that is again from not dithering or something physical.

In terms of processing, in my opinion it is a little bit "cooked".
andreatax 7.72
I don't think it is either really realistic or free of over-processing; quite the contrary. And the framing doesn't help either. I'd start all over again and keep it simple.
Gomarofski 0.90
C.Sand:
Did you dither? It looks somewhat like walking noise; if you did dither, you may want to increase your dither distance. Also, there's that artifact in the center right; I'm unsure whether that is again from not dithering or something physical.

In terms of processing, in my opinion it is a little bit "cooked".

Thanks and appreciated! For this set, unfortunately, I didn't dither, and there are also some satellite trail remnants which I noticed afterwards! Thanks for picking up on the artifact, which I didn't realize was there! It must be from the overprocessing after applying the HDRMT. I'll start all over again! Thanks
Gomarofski 0.90
andrea tasselli:
I don't think it is either really realistic or free of over-processing; quite the contrary. And the framing doesn't help either. I'd start all over again and keep it simple.

Thanks Andrea! I agree that the framing doesn't help, as I didn't want to play with the ESPRIT rotator. Thanks for your feedback! In all honesty I didn't believe it was so bad, but I'll retry M42 as it is a good target to learn the process. Actually, it was my first try at splitting the RGB channels and processing them individually. Perhaps I went way too far with the LHE and curves. I would appreciate it if you could give me some hints on the PI workflow.

Appreciated.
C.Sand 2.33
Omiros Angelidis:
Did you dither? It looks somewhat like walking noise; if you did dither, you may want to increase your dither distance. Also, there's that artifact in the center right; I'm unsure whether that is again from not dithering or something physical.

In terms of processing, in my opinion it is a little bit "cooked".

Thanks and appreciated! For this set, unfortunately, I didn't dither, and there are also some satellite trail remnants which I noticed afterwards! Thanks for picking up on the artifact, which I didn't realize was there! It must be from the overprocessing after applying the HDRMT. I'll start all over again! Thanks

No problem. And again, dithering would solve a lot of the issues I have with the image, aside from proper HDR of course, which I think would help with the overprocessed-y look. Also, you should dither. Did I mention dithering? I'm a fan of dithering.
Gomarofski 0.90
Omiros Angelidis:
Did you dither? It looks somewhat like walking noise; if you did dither, you may want to increase your dither distance. Also, there's that artifact in the center right; I'm unsure whether that is again from not dithering or something physical.

In terms of processing, in my opinion it is a little bit "cooked".

Thanks and appreciated! For this set, unfortunately, I didn't dither, and there are also some satellite trail remnants which I noticed afterwards! Thanks for picking up on the artifact, which I didn't realize was there! It must be from the overprocessing after applying the HDRMT. I'll start all over again! Thanks

No problem. And again, dithering would solve a lot of the issues I have with the image, aside from proper HDR of course, which I think would help with the overprocessed-y look. Also, you should dither. Did I mention dithering? I'm a fan of dithering.

Did you say dithering? Oh, dithering. That's what I thought. So I will dither. Now I just need to learn how to properly set the correct step based on my guide/main scope ratio. After the M42 set I tried dithering on M106, but I just set it to 3, and I am pretty sure based on my math that I should increase it further!
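
Writing the arithmetic down helps me; here is a rough sketch of the conversion (the pixel sizes and focal lengths are placeholders for illustration, not necessarily my gear). Guiding software takes the dither step in guide-camera pixels, so the desired shift on the main sensor gets scaled by the ratio of the two image scales:

```python
# Rough sketch: convert a desired dither (in main-camera pixels) into the
# guide-camera pixel step the guiding software expects. Placeholder numbers.

def image_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcsec/pixel."""
    return 206.265 * pixel_um / focal_mm

main_scale = image_scale(pixel_um=3.76, focal_mm=840)   # hypothetical imaging rig
guide_scale = image_scale(pixel_um=2.9, focal_mm=200)   # hypothetical guide scope

desired_main_px = 10  # aim for roughly 10 main-camera pixels between subs
dither_guide_px = desired_main_px * main_scale / guide_scale

print(f"main {main_scale:.2f} arcsec/px, guide {guide_scale:.2f} arcsec/px")
print(f"dither step = {dither_guide_px:.1f} guide pixels")
```

With these placeholder numbers the step comes out just over 3 guide pixels, which is why I suspect my setting of 3 is on the low side.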

Thanks
andreatax 7.72
Omiros Angelidis:
Thanks Andrea! I agree that the framing doesn't help, as I didn't want to play with the ESPRIT rotator. Thanks for your feedback! In all honesty I didn't believe it was so bad, but I'll retry M42 as it is a good target to learn the process. Actually, it was my first try at splitting the RGB channels and processing them individually. Perhaps I went way too far with the LHE and curves. I would appreciate it if you could give me some hints on the PI workflow.

Appreciated.


First, I would check your flat-fielding, as it doesn't look too flat to me. There are dark areas where there should be none, so I'd start with that.

Secondly, I'd separate the stars from the nebula. Starnet++ might be your best option in the linear phase, but experiment with SXT if you have it.

Apply MaskedStretch with the default settings and see what comes out. You might need to lower your target background. Depending on the results, you might want to apply ArcsinhStretch or curves and HistogramTransformation at some point to modulate your dynamics. Not having the unstretched image handy, I can't be too sure which path is best. But the main point is that you need to modulate your darkest levels so that they are just visible, and your brightest highlights so that they are not burnt out. IOW, you need to bring out as much dynamic range as possible, keeping in mind that the dust belongs to the background and the core belongs to the foreground. And the colors should be properly shown, the OIII areas around the M42 core especially.

ArcsinhStretch for the stars, plus HistogramTransformation to keep them under control, and a mask to keep the halos in check.

This for starters.
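
If it helps to see what ArcsinhStretch is doing, here is a toy numpy version of the underlying idea (a sketch only, not the actual PixInsight implementation): every channel of a pixel is multiplied by the same luminance-derived factor, which is why faint signal gets lifted strongly while star colour ratios survive.

```python
import numpy as np

def arcsinh_stretch(img: np.ndarray, stretch: float = 50.0) -> np.ndarray:
    """Colour-preserving arcsinh stretch of a linear RGB image in [0, 1]."""
    lum = img.mean(axis=-1, keepdims=True)  # crude luminance proxy
    eps = 1e-8                              # guard against division by zero
    factor = np.arcsinh(stretch * lum) / ((lum + eps) * np.arcsinh(stretch))
    return np.clip(img * factor, 0.0, 1.0)

# e.g.: stretched = arcsinh_stretch(linear_rgb, stretch=100)
```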
andreatax 7.72
Omiros Angelidis:
Did you say dithering? Oh, dithering. That's what I thought. So I will dither. Now I just need to learn how to properly set the correct step based on my guide/main scope ratio. After the M42 set I tried dithering on M106, but I just set it to 3, and I am pretty sure based on my math that I should increase it further!

Thanks


Dither won't help you here and I can't see obvious issues related to the absence thereof. Disclaimer: I am against dithering and I have never used it in my rigs.
C.Sand 2.33
andrea tasselli:
Omiros Angelidis:
Did you say dithering? Oh, dithering. That's what I thought. So I will dither. Now I just need to learn how to properly set the correct step based on my guide/main scope ratio. After the M42 set I tried dithering on M106, but I just set it to 3, and I am pretty sure based on my math that I should increase it further!

Thanks


Dither won't help you here and I can't see obvious issues related to the absence thereof. Disclaimer: I am against dithering and I have never used it in my rigs.

In my experience dithering will at the very least get rid of the satellite trails, at the cost of maybe one or two fewer exposures a night (depending on your exposure length, of course). The real answer, of course, would be to dither and see if the issue is still there.
andreatax 7.72
C.Sand:
In my experience dithering will at the very least get rid of the satellite trails, at the cost of maybe one or two fewer exposures a night (depending on your exposure length, of course). The real answer, of course, would be to dither and see if the issue is still there.


I get plenty of satellite trails, in fact multiple ones at times, and I have never had issues getting rid of them in post-processing as long as I have enough subframes for the rejection algorithm to work its magic; hence short exposures are the way to go. And let's not forget that mounts (at least those without high-res encoders) drift. There are good reasons to use dithering in some situations, but in all the instances I can recall (on my rigs) it was related to walking noise (and DSLRs).
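
To illustrate why the subframe count is what matters for trail rejection, here is a toy kappa-sigma clipping stack in numpy. It is only a sketch; real integration tools use more refined estimators, but the principle is the same: a trail is an outlier at each pixel it crosses, and with enough frames the per-pixel statistics throw it out.

```python
import numpy as np

def kappa_sigma_stack(subs: np.ndarray, kappa: float = 2.5, iters: int = 3) -> np.ndarray:
    """Mean-stack an (N, H, W) cube of registered subs, rejecting per-pixel
    outliers (satellite trails, planes, cosmic rays) beyond kappa sigma."""
    data = subs.astype(np.float64)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iters):
        kept = np.where(keep, data, np.nan)
        mu = np.nanmean(kept, axis=0)       # per-pixel mean of surviving subs
        sigma = np.nanstd(kept, axis=0)     # per-pixel scatter
        keep = np.abs(data - mu) <= kappa * sigma
    return np.nanmean(np.where(keep, data, np.nan), axis=0)
```

The more subframes per pixel, the better the mean and sigma estimates, and the cleaner the rejection; that is the whole argument for many short subs.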
C.Sand 2.33
andrea tasselli:
I get plenty of satellite trails, in fact multiple ones at times, and I have never had issues getting rid of them in post-processing as long as I have enough subframes for the rejection algorithm to work its magic; hence short exposures are the way to go. And let's not forget that mounts (at least those without high-res encoders) drift. There are good reasons to use dithering in some situations, but in all the instances I can recall (on my rigs) it was related to walking noise (and DSLRs).

If that works for you with short exposures on bright objects, that's fine. Personally I shoot 5 min subs for most narrowband objects, so dithering is basically a must. I'm not following the mount drift point; can you explain why you mentioned that?

Other reasons I dither are hot pixels and enabling effective drizzling. I don't use dark frames, so dithering is the most reliable way to eliminate any issues that would be fixed by darks, without the cost of darks. And of course drizzling basically requires dithering.
andreatax 7.72
C.Sand:
If that works for you with short exposures on bright objects, that's fine. Personally I shoot 5 min subs for most narrowband objects, so dithering is basically a must. I'm not following the mount drift point; can you explain why you mentioned that?

Other reasons I dither are hot pixels and enabling effective drizzling. I don't use dark frames, so dithering is the most reliable way to eliminate any issues that would be fixed by darks, without the cost of darks. And of course drizzling basically requires dithering.


I don't see the need for longer exposures, be it with NB filters or broadband, if read noise is negligible compared to all other sources of noise, most importantly background noise and trails. As for the mount drift, that's a fact: all mounts do, to some extent. Well, all that I have used, anyhow. Unless you have a high-res encoder and a good map of your local sky. Drizzle is way overrated, and 99.9% of the time you will not get any effective increase in resolution (unless it's CFA drizzle you're talking about). As for darks, they are easy and cheap to get, so why not use them? As far as I know there is no "cost" to them; that is what cloudy nights are for.
C.Sand 2.33
andrea tasselli:
I don't see the need for longer exposures, be it with NB filters or broadband, if read noise is negligible compared to all other sources of noise, most importantly background noise and trails. As for the mount drift, that's a fact: all mounts do, to some extent. Well, all that I have used, anyhow. Unless you have a high-res encoder and a good map of your local sky. Drizzle is way overrated, and 99.9% of the time you will not get any effective increase in resolution (unless it's CFA drizzle you're talking about). As for darks, they are easy and cheap to get, so why not use them? As far as I know there is no "cost" to them; that is what cloudy nights are for.


I'm still not understanding the mount drift thing. Yes, mounts drift. That's why you recenter. I don't see how this relates to all this.

Sure, read noise is negligible at short exposures, but the ratio of target to background is affected by longer exposures. Why wouldn't you want a better ratio? (The answer, of course, is time, but as with integration time there is always a limit.) The weighting software now is good enough to take note of the improved ratio.

Personally, I see the benefits of drizzling in my images. For very widefield/undersampled images drizzle has been wonderful, and for deeper-sky images I don't use it for the resolution boost but just for the improved star shapes, as I am still undersampled (though not as much).

Using the IMX533/571 sensors there's no amp glow, so there is no *explicit* reason for darks beyond hot pixels, which of course I take care of in drizzling. To be entirely honest I forget the entire reasoning for no darks, but if I remember correctly the gist of it was that they introduce random noise. While not extensive, if I can get rid of the issues they correct for anyway, without introducing that random noise, why not?
andreatax 7.72
Said simply: you don't recenter. Besides, there is the issue of re-centering when the shift is less than the effective resolution of the guide scope. But in my case I just don't. Automatic dithering at no cost.

Read noise does not depend on the length of the exposure; it just happens every time you read the sensor. And the sensor is perfectly linear for both subject and background (well, as long as it is in its linear response range, which is quite high), so doubling the exposure doubles both the wanted and unwanted signal. The only reason for having overlong exposures (up to the point where read noise is equal to the background noise) is if the signal is less than the background noise at the given exposure. Notice you could still recover the signal, since its average integrated value is not zero, so it may still be worth doing. This was especially true with old CCDs; I'm not sure it applies with modern CMOS.
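
To put a number on "up to the point where read noise is equal to the background noise": a common rule of thumb is to pick the sub length at which the sky shot-noise variance exceeds the read-noise variance by some factor k, with k around 5 to 10. A rough sketch with placeholder values, not anyone's actual rig:

```python
# Rule-of-thumb minimum sub length: expose until sky shot-noise variance
# is at least k times the read-noise variance. All numbers are placeholders.

def min_sub_seconds(read_noise_e: float, sky_e_per_s: float, k: float = 10.0) -> float:
    """Shortest sub (seconds) where sky noise swamps read noise by factor k."""
    return k * read_noise_e ** 2 / sky_e_per_s

print(min_sub_seconds(read_noise_e=1.5, sky_e_per_s=2.0))   # ~11 s: bright broadband sky
print(min_sub_seconds(read_noise_e=1.5, sky_e_per_s=0.02))  # ~1125 s: tight narrowband
```

Which also shows why narrowband is the one case where long subs genuinely pay.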

I had a look at your page, and while I can't comment on star shapes and such, I notice you are distinctly oversampled in some images of your gallery. You can get the same effect by just increasing the image size in PI, which is pretty good at it; you'll be surprised.

As for dark signal, well, it adds up, so it may not be insignificant for very long integrations, and having a good master dark adds essentially zero noise to your image, especially compared to Poisson noise or background noise. But to each their own...
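
To back the "essentially zero noise" claim with arithmetic: the noise a master dark injects into each calibrated sub is the master's own noise, which shrinks with the square root of the number of dark frames averaged. A sketch with placeholder numbers:

```python
import math

# Noise (electrons) that subtracting a master dark adds to each sub.
# Placeholder values; substitute your own sensor figures.
def master_dark_noise(dark_e_per_s: float, t: float, rn: float, n_darks: int) -> float:
    per_frame = math.sqrt(dark_e_per_s * t + rn ** 2)  # shot + read noise per dark
    return per_frame / math.sqrt(n_darks)              # averaging N darks

print(master_dark_noise(dark_e_per_s=0.002, t=120, rn=1.5, n_darks=50))  # ~0.22 e-
```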
C.Sand 2.33
andrea tasselli:
Said simply: you don't recenter. Besides, there is the issue of re-centering when the shift is less than the effective resolution of the guide scope. But in my case I just don't. Automatic dithering at no cost.


This is (or could be) what introduces walking noise, though with your shorter exposures your camera may not show it. The point of dithering is random movement, not consistent drift.
andrea tasselli:
Read noise does not depend on the length of the exposure; it just happens every time you read the sensor. And the sensor is perfectly linear for both subject and background (well, as long as it is in its linear response range, which is quite high), so doubling the exposure doubles both the wanted and unwanted signal. The only reason for having overlong exposures (up to the point where read noise is equal to the background noise) is if the signal is less than the background noise at the given exposure. Notice you could still recover the signal, since its average integrated value is not zero, so it may still be worth doing. This was especially true with old CCDs; I'm not sure it applies with modern CMOS.

Yes, read noise comes every time you read the sensor, which is exactly why longer exposures are better. In a two-minute sub, if you get x from read noise, 2x from background, and 4x from target, we can say that (linearly) in a four-minute sub you'd get x, 4x, and 8x. If we compare equal total exposure time, we see that two of our two-minute subs have "14x" data with 2x noise, while our single four-minute sub has "13x" data with 1x noise. Now of course this is oversimplified, but over a large dataset this difference in weight is what improves the image, because that read noise stays consistent. Of course, all this assumes staying below saturation.
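
Strictly speaking, uncorrelated noise sources add in quadrature rather than linearly, so here is the same two-subs-versus-one comparison done that way (all rates are made-up placeholders). With a low-read-noise CMOS the long sub wins by only a hair, which I suspect is the real crux of our disagreement:

```python
import math

def stack_snr(n_subs: int, t: float, target: float, sky: float, rn: float) -> float:
    """SNR of n_subs co-added subs of t seconds each. target and sky are
    rates in e-/s, rn is read noise in e- per frame; variances add."""
    signal = n_subs * target * t
    noise = math.sqrt(n_subs * (target * t + sky * t + rn ** 2))
    return signal / noise

rn, sky, target = 1.5, 1.0, 2.0  # placeholder values
print(stack_snr(n_subs=2, t=120, target=target, sky=sky, rn=rn))  # two 2-min subs
print(stack_snr(n_subs=1, t=240, target=target, sky=sky, rn=rn))  # one 4-min sub
```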


Edit, since I forgot to ask: in which images were you seeing oversampling? And I'll stack something with darks to compare, if you'd like. It will be a while, though.
andreatax 7.72
The point about exposure length, and shorter ones at that, is that if the background noise is higher than the read noise, there is no point in increasing the exposure further. Which is why NB imaging can get away with longer exposures: if the band is really tight, the background, and hence its noise, is low compared to read-out noise. Still, you pay in increased potential issues with passing clouds, bouts of wind, various hiccups, and MASSIVE plane lights crossing your FOV.

As for your images, here's what I think is oversampled: IC342, Crescent, M1, M33, M106, Thor's Helmet, IC434. Obviously I don't know whether the originals were oversampled or not, just what they are in your gallery.
C.Sand 2.33
andrea tasselli:
The point about exposure length, and shorter ones at that, is that if the background noise is higher than the read noise, there is no point in increasing the exposure further. Which is why NB imaging can get away with longer exposures: if the band is really tight, the background, and hence its noise, is low compared to read-out noise. Still, you pay in increased potential issues with passing clouds, bouts of wind, various hiccups, and MASSIVE plane lights crossing your FOV.

As for your images, here's what I think is oversampled: IC342, Crescent, M1, M33, M106, Thor's Helmet, IC434. Obviously I don't know whether the originals were oversampled or not, just what they are in your gallery.

I think we're getting off topic from the image. If you'd like to DM, I'm happy to, but I don't want to spam Omiros more than we have already. On the oversampling: I don't see the same thing you see.
Gomarofski 0.90
andrea tasselli:
Omiros Angelidis:
Thanks Andrea! I agree that the framing doesn't help, as I didn't want to play with the ESPRIT rotator. Thanks for your feedback! In all honesty I didn't believe it was so bad, but I'll retry M42 as it is a good target to learn the process. Actually, it was my first try at splitting the RGB channels and processing them individually. Perhaps I went way too far with the LHE and curves. I would appreciate it if you could give me some hints on the PI workflow.

Appreciated.


First, I would check your flat-fielding, as it doesn't look too flat to me. There are dark areas where there should be none, so I'd start with that.

Secondly, I'd separate the stars from the nebula. Starnet++ might be your best option in the linear phase, but experiment with SXT if you have it.

Apply MaskedStretch with the default settings and see what comes out. You might need to lower your target background. Depending on the results, you might want to apply ArcsinhStretch or curves and HistogramTransformation at some point to modulate your dynamics. Not having the unstretched image handy, I can't be too sure which path is best. But the main point is that you need to modulate your darkest levels so that they are just visible, and your brightest highlights so that they are not burnt out. IOW, you need to bring out as much dynamic range as possible, keeping in mind that the dust belongs to the background and the core belongs to the foreground. And the colors should be properly shown, the OIII areas around the M42 core especially.

ArcsinhStretch for the stars, plus HistogramTransformation to keep them under control, and a mask to keep the halos in check.

This for starters.

Thanks Andrea, and apologies for the late response! I missed the comments, and I see that you had quite a constructive discussion with C.Sand! The whole discussion will definitely provide some food for thought for me as a newbie! I will redo Orion applying less aggressive methods, following your guidance for a start.
Gomarofski 0.90
andrea tasselli:
The point about exposure length, and shorter ones at that, is that if the background noise is higher than the read noise, there is no point in increasing the exposure further. Which is why NB imaging can get away with longer exposures: if the band is really tight, the background, and hence its noise, is low compared to read-out noise. Still, you pay in increased potential issues with passing clouds, bouts of wind, various hiccups, and MASSIVE plane lights crossing your FOV.

As for your images, here's what I think is oversampled: IC342, Crescent, M1, M33, M106, Thor's Helmet, IC434. Obviously I don't know whether the originals were oversampled or not, just what they are in your gallery.

I think we're getting off topic from the image. If you'd like to DM, I'm happy to, but I don't want to spam Omiros more than we have already. On the oversampling: I don't see the same thing you see.

Guys, feel free to spam it as much as you want! It's productive to have arguments and different ways of doing things! We all learn in the process. Thanks for your comments!
Gomarofski 0.90
andrea tasselli:
Omiros Angelidis:
Thanks Andrea! I agree that the framing doesn't help, as I didn't want to play with the ESPRIT rotator. Thanks for your feedback! In all honesty I didn't believe it was so bad, but I'll retry M42 as it is a good target to learn the process. Actually, it was my first try at splitting the RGB channels and processing them individually. Perhaps I went way too far with the LHE and curves. I would appreciate it if you could give me some hints on the PI workflow.

Appreciated.


First, I would check your flat-fielding, as it doesn't look too flat to me. There are dark areas where there should be none, so I'd start with that.

Secondly, I'd separate the stars from the nebula. Starnet++ might be your best option in the linear phase, but experiment with SXT if you have it.

Apply MaskedStretch with the default settings and see what comes out. You might need to lower your target background. Depending on the results, you might want to apply ArcsinhStretch or curves and HistogramTransformation at some point to modulate your dynamics. Not having the unstretched image handy, I can't be too sure which path is best. But the main point is that you need to modulate your darkest levels so that they are just visible, and your brightest highlights so that they are not burnt out. IOW, you need to bring out as much dynamic range as possible, keeping in mind that the dust belongs to the background and the core belongs to the foreground. And the colors should be properly shown, the OIII areas around the M42 core especially.

ArcsinhStretch for the stars, plus HistogramTransformation to keep them under control, and a mask to keep the halos in check.

This for starters.

Dear @andrea tasselli, hello!

I tried less aggressive processing, and I would appreciate hearing from you whether this is on a better track, as I feel that I am not getting the full potential out of the 7+ hours of data.

https://www.astrobin.com/6v61k9/B/

If you want, and have the luxury of time, I could share the master light with you to experiment with.

PS: I don't have any background in traditional photography, and as such I'm on a very steep learning curve, but I really enjoy every single minute of it. Harsh criticism is only received positively!

Thanks
jrista 8.59
Omiros Angelidis:
I am seeking honest feedback from the masters of acquisition and processing, beyond the blown-out core, where I managed to eliminate the Trapezium altogether.

https://www.astrobin.com/6v61k9/

I know that beauty is subjective, but I am not seeking a likeable photo. I am seeking a realistic and not overprocessed result, yet I always seem to miss that balance!

Thank you and clear skies!

Omiros

You seem to have a general issue with the final result. I'm curious: what processing did you perform, and how?
Gomarofski 0.90
Jon Rista:
Omiros Angelidis:
I am seeking honest feedback from the masters of acquisition and processing, beyond the blown-out core, where I managed to eliminate the Trapezium altogether.

https://www.astrobin.com/6v61k9/

I know that beauty is subjective, but I am not seeking a likeable photo. I am seeking a realistic and not overprocessed result, yet I always seem to miss that balance!

Thank you and clear skies!

Omiros

You seem to have a general issue with the final result. I'm curious: what processing did you perform, and how?

Thanks Jon for the generalization and honesty ;)
Here is the processing flow (more or less). 

WBPP stacking
STF preview
Gradient correction 
SPCC
SCNR
Crop
BlurX
NoiseX
Histogram Transformation with EZsuite as I never get a good grasp on the manual mode. I know that this is fundamental but still...
HDRMT to get the trapezium
StarX
To the stars image:
--> Slight saturation bump
--> Slight Deconvolution
Then to the starless image I split RGB
Curves transformation with the same settings to all three channels. Darkening the background and bumping up the brights
Local Histogram Equalization (first pass 32, second pass 128), slight increments to avoid oversharpening
Unsharp tool not more than 60-70
Recombination of RGB
SCNR again
NoiseX for another smooth pass with selective range masking to the nebula
Colour saturation
Save as PNG/tiff/xisf 

Thanks

Omiros
C.Sand 2.33
Omiros Angelidis:
I tried less aggressive processing, and I would appreciate hearing from you whether this is on a better track, as I feel that I am not getting the full potential out of the 7+ hours of data.

https://www.astrobin.com/6v61k9/B/


At the very least, I think you've made a huge improvement over your first image. Good job. 

I think you could gain a bit more from better HDR; that might be part of the "not getting the full data". Here is an HDR script from a friend of mine; he is nuts at AP. He worked on that recent DSC 1k+ hr Andromeda and is a big contributor to the knowledge in AWA as far as I know. Here are instructions and such for using the script (at the time of posting this is V2.0, so this may not be completely comprehensive in the future):

https://astrouri.com/pixinsight/scripts/iHDR/


Omiros Angelidis:
Here is the processing flow (more or less). 

[...]

Thanks

Omiros

Comments on the processing workflow below; I'm numbering them so you can keep track more easily.

WBPP stacking
STF preview
Rec 1. Do crop here
Gradient correction 
SPCC
Rec 2. SCNR
Crop
BlurX
Rec 3. NoiseX
Rec 4. Histogram Transformation with EZsuite as I never get a good grasp on the manual mode. I know that this is fundamental but still...
Rec 5. HDRMT to get the trapezium
StarX
Rec 7. To the stars image:
--> Slight saturation bump
--> Slight Deconvolution
Rec 6. Then to the starless image I split RGB
Curves transformation with the same settings to all three channels. Darkening the background and bumping up the brights
Local Histogram Equalization (first pass 32, second pass 128), slight increments to avoid oversharpening
Unsharp tool not more than 60-70
Recombination of RGB
Rec 2. SCNR again
Rec 3. NoiseX for another smooth pass with selective range masking to the nebula
Colour saturation
Save as PNG/tiff/xisf 

Rec 1: Please crop before basically anything else. While rare, stacking artifacts or other issues can cause background extraction and other processes to give worse results than if you had cropped first. Again, this isn't super common, but it's an easy thing to do, and as far as I know there's no reason not to. This does mean that you'll have to plate-solve again for SPCC. Alternatively, you can turn on autocrop in WBPP if you're happy with what it spits out, but it sounds like autocrop has been running slow recently?

Rec 2: I don't like SCNR. You're just completely removing data. In small amounts I think it can be useful, but background neutralization and balancing curves have always given me results where I don't need it.

Rec 3: NoiseX is really powerful, but I would personally refrain from running it twice (especially on higher settings). I prefer DeepSNR methods to NoiseX, but again, use them sparingly and take care that they aren't making fake structures. For example, I like this guide by another friend of mine: https://www.nightphotons.com/guides/single-channel-denoise DeepSNR is "only" for mono images, but you can do tricks to get around that. In stacking, if you drizzle 1x with drop shrink 0.35 and var shape 1.5, that should work, but it's not always consistent. Now of course you do have to dither and drizzle for this, which seems to be a little controversial. This is of course a lot of work when you have NoiseX already, so probably skip it; I'm just throwing it out there in case you want to explore.

Rec 4: The EZ suite is sorta outdated; it used to be great and is still effective for some things, but I strongly suggest learning GHS (Generalized Hyperbolic Stretch). It's a bit complicated, but it is overall one of the best stretching tools.

Rec 5: See Mr. Sketchpad (Uri)'s script above and the info I've mentioned there.

Rec 6: This ties into Recs 2 & 4 in ways. Imo you can do all (or at least most) of the stretching of individual channels necessary to avoid SCNR and splitting the channels in GHS. Not necessary for stars.

Rec 7: Your stars look pretty good at a glance, so this is more optional; maybe it'll be helpful to someone else. It's sorta out of order here, but I wanted to put it last because it's the "last" thing you do. Linking another guide from Charlie: https://www.nightphotons.com/guides/star-addition Charlie goes into more depth there than might be necessary, so I'm giving a simplified version; I don't know if it works completely, but in my lazy processing it's been what I've used, to decent results at least. This is essentially what Charlie does in "Combining Stars with the Starless Via Relinearization (Manually)".
Here's my short guide:

So you have your starless image and your stars-only image, both stretched/saturated to your liking. Use HistogramTransformation to "de-stretch" both by setting midtones to 0.999, and apply it to the starless and stars images. Combine the two with PixelMath. Then "re-stretch" by setting midtones to 0.001 and applying that to the combined image. You should end up with recombined stars that look pretty good!
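
Since the midtones transfer function has a closed form, the whole trick can also be sketched in a few lines of numpy; I believe PixelMath exposes the same function as mtf(m, x). Treat this as a sketch of the idea, assuming starless and stars are float arrays in [0, 1], not a drop-in replacement for the steps above:

```python
import numpy as np

def mtf(x: np.ndarray, m: float) -> np.ndarray:
    """Midtones transfer function on [0, 1] data; mtf(., m) and
    mtf(., 1 - m) are inverses of each other."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def recombine(starless: np.ndarray, stars: np.ndarray) -> np.ndarray:
    """De-stretch both images toward linear (midtones 0.999), add them,
    then re-stretch with the inverse transform (midtones 0.001)."""
    combined = np.clip(mtf(starless, 0.999) + mtf(stars, 0.999), 0.0, 1.0)
    return mtf(combined, 0.001)
```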
jrista 8.59
Omiros Angelidis:
Jon Rista:
Omiros Angelidis:
I am seeking honest feedback from the masters of acquisition and processing, beyond the blown-out core, where I managed to eliminate the Trapezium altogether.

https://www.astrobin.com/6v61k9/

I know that beauty is subjective, but I am not seeking a likeable photo. I am seeking a realistic and not overprocessed result, yet I always seem to miss that balance!

Thank you and clear skies!

Omiros

You seem to have a general issue with the final result. I'm curious: what processing did you perform, and how?

Thanks Jon for the generalization and honesty ;)
Here is the processing flow (more or less). 

WBPP stacking
STF preview
Gradient correction 
SPCC
SCNR
Crop
BlurX
NoiseX
Histogram Transformation with EZsuite as I never get a good grasp on the manual mode. I know that this is fundamental but still...
HDRMT to get the trapezium
StarX
To the stars image:
--> Slight saturation bump
--> Slight Deconvolution
Then to the starless image I split RGB
Curves transformation with the same settings to all three channels. Darkening the background and bumping up the brights
Local Histogram Equalization (first pass 32, second pass 128), slight increments to avoid oversharpening
Unsharp tool not more than 60-70
Recombination of RGB
SCNR again
NoiseX for another smooth pass with selective range masking to the nebula
Colour saturation
Save as PNG/tiff/xisf 

Thanks

Omiros

So, probably an unpopular opinion... but I have never cared for what star removal generally does to images. There may be some techniques that avoid the problems I've usually run into (even with, say, SXT), but I see signs of the same problems in a lot of other images. Since this can be a potentially degrading step, I would maybe try to eliminate it, and skip the RGB star replacement.

I would also probably do your linear processing, then just do a simple HT stretch and see what you've got at that point. Before doing any post-stretch processing, I would make sure that you are in fact satisfied with your pre-stretch processing. If you are not, then I would keep backing up through your processing history, and adjust the settings of each tool, until you ARE satisfied. Once you are satisfied with your linear processing, I would then try processing the stretched image (which at that point should be a lot simpler, really).

Processing that is often degrading in nature includes things like star removal, local histogram/local contrast enhancement, SHARPENING (!!!), and improper stretching (which can reveal too much noise if it's not done right).

My experience with NXT, BXT and SXT is more limited, as I just started using them. That said, I've found SXT to generally be destructive to finer details, and I haven't yet been able to overcome that (but it's similar to pre-AI techniques, which were also somewhat destructive). I have found that NXT should probably be run FIRST, BEFORE BXT. NXT can soften details, not a lot, but it can. In contrast, BXT is designed to enhance details. You are probably generally better off running NXT first, then BXT to restore some of the loss in detail caused by NXT. So I would swap the order of those two.

I also find that if NXT is done properly, it shouldn't need to be run more than once, at least not at a high strength. Sometimes I may run a second pass of NXT later in the process just to lightly clean up some of the background noise revealed by stretching, but if you apply it properly in the first place, that should be less necessary. ACDNR can also be used for that final-pass cleanup, and sometimes it does a better job. You shouldn't need to run SCNR more than once... if you do that properly, you should only need curves to manage color, and proper curves application should not introduce any color casts. Some of these things are post-stretch processing, though, and for now I would avoid post-stretch processing until you are completely satisfied with your pre-stretch processing.