Data culling and background standardizations · Pleiades Astrophoto PixInsight · Blaine Gibby

BlaineGibby 2.39
Two things I have been trying to improve on to continuously push my astrophotography: 

1. What is the best way to cull data using PixInsight? Right now I have my lights integration set to Winsorized sigma clipping with a minimum weight of 0.08, a sigma high of 3.31, and a sigma low of 3.00. (I actually don't know what any of that means.) However, I rarely see WBPP rejecting frames. I use Blink before I integrate. How do I cull data based on FWHM?

2. What are good ways to standardize the black point? At what point in a workflow is this done? Ideally I'd like a neutral dark grey, but I usually end up with some sort of tint.

Thanks in advance
andreatax 7.76
1. That doesn't cull anything. If you want to entirely remove a frame from the stack based on some parameter's value, then you need to assign weights to the frames using SubframeSelector.

2. There is no "good way". BackgroundNeutralization does that for you; if it looks odd, it's because your frame/background isn't flat, or the noise is high. Or both.
C.Sand 2.33
SubframeSelector is the best tool for culling. I cull primarily on eccentricity and FWHM. I forget the exact value for eccentricity (I think it's roughly 0.45?), but if it's greater than 0.6 it's never good. FWHM values depend on your equipment, but it shouldn't be too hard to find an average value you get in good-to-normal conditions and just base it around that.
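The threshold idea above can be sketched as a simple filter over SubframeSelector-style measurements. The frame names, metric values, and exact limits below are invented for illustration, not taken from this thread:

```python
# Hypothetical per-frame measurements (like SubframeSelector's table).
frames = [
    {"name": "sub_001.fit", "fwhm": 2.1, "eccentricity": 0.42},
    {"name": "sub_002.fit", "fwhm": 3.8, "eccentricity": 0.65},  # bloated, elongated stars
    {"name": "sub_003.fit", "fwhm": 2.4, "eccentricity": 0.48},
]

ECC_LIMIT = 0.6    # above this, star elongation is plainly visible
FWHM_LIMIT = 3.0   # pick this from your own average on a good-to-normal night

# Keep only frames that pass both limits; everything else is culled.
kept = [f["name"] for f in frames
        if f["eccentricity"] <= ECC_LIMIT and f["fwhm"] <= FWHM_LIMIT]
# kept -> ["sub_001.fit", "sub_003.fit"]; sub_002.fit is culled
```

In SubframeSelector itself this would be an approval expression rather than a script, but the logic is the same.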
jrista 8.59
Blaine Gibby:
Two things I have been trying to improve on to continuously push my astrophotography: 

1. What is the best way to cull data using PixInsight? Right now I have my lights integration set to Winsorized sigma clipping with a minimum weight of 0.08, a sigma high of 3.31, and a sigma low of 3.00. (I actually don't know what any of that means.) However, I rarely see WBPP rejecting frames. I use Blink before I integrate. How do I cull data based on FWHM?

2. What are good ways to standardize the black point? At what point in a workflow is this done? Ideally I'd like a neutral dark grey, but I usually end up with some sort of tint.

Thanks in advance

1. This is not culling, this is outlier pixel rejection. Different thing.
2. Background neutralization, linear alignment (manual, with PixInsight), linear fit.

IF you want to CULL, that is rejection of FRAMES, not pixels. You do that with Blink (visually, manually) and with SubframeSelector (which can rate frames on statistics, reject outliers, and weight based on your specified criteria).
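To make "weight based on your specified criteria" concrete, here is a minimal sketch of a SubframeSelector-style weighting expression: each metric is min-max normalized across the run, then combined. The metric values and the 50/30/20 split are assumptions for illustration, not a recommended recipe:

```python
fwhm = [2.0, 2.6, 3.4]     # arcsec; lower is better
ecc  = [0.40, 0.55, 0.48]  # lower is better
snr  = [12.0, 15.0, 9.0]   # higher is better

def norm(xs, lower_is_better=False):
    """Min-max normalize to [0, 1], flipping so 1 is always 'best'."""
    lo, hi = min(xs), max(xs)
    scaled = [(x - lo) / (hi - lo) for x in xs]
    return [1 - s for s in scaled] if lower_is_better else scaled

# Combine the normalized metrics; the 0.5/0.3/0.2 mix is arbitrary.
weights = [0.5 * f + 0.3 * e + 0.2 * s
           for f, e, s in zip(norm(fwhm, True), norm(ecc, True), norm(snr))]
# The sharpest, roundest frame ends up with the highest weight.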
ScottBadger 7.61
I only cull the very worst, more in the stack gets better noise reduction at least, right? Anyhow, I leave it to weighting (psf signal weight) and rejection (deviant student….or whatever it’s called) to do the picking, and on a pixel by pixel basis. I find that rejection algorithm isn’t as good as Winsorized for satellite trails, but what’s left is usually faint enough to not show after stretching. Adam Block has a good tutorial on integration that goes over the settings for the different algorithms, but it may require membership to access.

Cheers,
Scott
CCDnOES 5.21
Scott Badger:
only cull the very worst, more in the stack gets better noise reduction at least, right?


Since the advent of tools like NoiseX, that is not quite as relevant as it used to be (and recognize I have been doing this since 1993), especially for high res imaging. The reason?

You can now remove much of the excess noise that can come from using fewer frames by using the improved NR tools.

And since this allows using only the frames with better Seeing/Tracking/FWHM, you will improve your final resolution and tools like BlurX will work better with a better starting point. 

There are limits to this approach but, within reason, it has served me well.

Of course it depends on your goal. If you are looking for faint diffuse nebulosity at a short to moderate FL, more images is almost always better. OTOH if you are doing a small galaxy or Planetary at a long focal length, S/N is often one of the least important things and FWHM is king.
scottstirling 0.90
My trial and error has shown me that averaged integration by PSFSignalWeight with a good pixel rejection algorithm can result in a better image quality result than any of the individual subframes for criteria such as FWHM and eccentricity as well as signal to noise ratio.

The more subframes contributing to an integration, the greater the probability of optimal image quality results by averaged integration weighted by PSFSignalWeight enhanced with a pixel rejection algorithm.

The more subframes contributing to an integration, the broader range of subframe image quality criteria values (such as FWHM, Eccentricity, Noise, etc), that can be acceptable and supportive to a higher quality end result.

Then, if the subframes were dithered and one does drizzle integration (after ImageIntegration with "generate drizzle data" checked), the resulting image quality will usually be even higher in multiple measurable metrics.
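A toy version of the pipeline described above, weighted averaging plus per-pixel outlier rejection, can be sketched with NumPy. The tiny 3x3 "subframes", the weights, and the MAD-based rejection rule are illustrative stand-ins, not PixInsight's actual math:

```python
import numpy as np

rng = np.random.default_rng(0)
stack = 100.0 + rng.normal(0, 1, (5, 3, 3))  # 5 aligned, noisy subframes
stack[2, 1, 1] = 5000.0                      # cosmic ray / satellite hit

weights = np.array([1.0, 0.8, 0.9, 0.7, 1.0])  # e.g. PSFSignalWeight per frame

med = np.median(stack, axis=0)
# Robust sigma from the median absolute deviation, so a huge outlier
# cannot inflate its own rejection threshold.
sigma = 1.4826 * np.median(np.abs(stack - med), axis=0)
keep = np.abs(stack - med) < 3.0 * sigma + 1e-6

# Weighted mean over the non-rejected pixels only.
w = weights[:, None, None] * keep
result = (stack * w).sum(axis=0) / w.sum(axis=0)
# result[1, 1] lands near 100, not near the ~1080 naive mean.
```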

Scott S
ScottBadger 7.61
Bill McLaughlin:
Since the advent of tools like NoiseX, that is not quite as relevant as it used to be (and recognize I have been doing this since 1993), especially for high res imaging. The reason?

You can now remove much of the excess noise that can come from using fewer frames by using the improved NR tools.

And since this allows using only the frames with better Seeing/Tracking/FWHM, you will improve your final resolution and tools like BlurX will work better with a better starting point. 

There are limits to this approach but, within reason, it has served me well.

Of course it depends on your goal. If you are looking for faint diffuse nebulosity at a short to moderate FL, more images is almost always better. OTOH if you are doing a small galaxy or Planetary at a long focal length, S/N is often one of the least important things and FWHM is king.

Agreed, though.....I've also found the X's like a lot of signal to work with, so a balance?

Interestingly, for an image I'm working on now (M106), the highest PSFSW value was also the worst FWHM!.....and at 9", the sub was well into even my cull zone. I happened to notice when I sorted by PSFSW to pick frames to integrate a LN ref image and saw that I had disapproved most of the top 10, all because of terrible FWHMs. Not sure I've seen anything like it before, but at that kind of seeing I can't even plate-solve, so I don't usually have data that bad. Seeing must have changed dramatically after I started the sequence and went to bed (the time stamp of that sub is 3:30). Seeing, not focus, I believe, since the stars aren't donuts, and I think they would be if focus were that far out.

Cheers,
Scott
CCDnOES 5.21
Scott Badger:
the highest PSFSW value was also the worst FWHM


When things look weird I also go have a look at star count. That tends to find images where the transparency may have dropped, affecting other numbers oddly.
ScottBadger 7.61
Bill McLaughlin:
When things look weird I also go have a look at star count. That tends to find images where the transparency may have dropped, affecting other numbers oddly.

I do the same, but for this sub the number of stars was right on the median......

Cheers,
Scott
jrista 8.59
Scott Badger:
I only cull the very worst, more in the stack gets better noise reduction at least, right? Anyhow, I leave it to weighting (psf signal weight) and rejection (deviant student….or whatever it’s called) to do the picking, and on a pixel by pixel basis. I find that rejection algorithm isn’t as good as Winsorized for satellite trails, but what’s left is usually faint enough to not show after stretching. Adam Block has a good tutorial on integration that goes over the settings for the different algorithms, but it may require membership to access.

Cheers,
Scott

Well, I guess it depends on whether and how much each sub actually contributes to the signal of your target(s) of interest. If a sub is contributing more pollutant signal than target signal, or if the sub is bad in some way (i.e. bad tracking or something during that time), then it's not going to contribute quality signal. I had ONE frame in my latest image that had a star jump issue. Despite the pixel rejection I used during integration, that one sub actually left artifacts in the final image. Not sure how the sub missed my culling efforts, but it did. Goes to show how much one bad sub can impact the stack.

It depends a lot on your goals. If some subs detract from your goals, it's probably better to exclude them than to include them for a small addition to SNR. Remember, as the stack depth gets deeper, each additional sub has less and less impact on the final result. When you are stacking hundreds of subs, you can actually afford to toss more than a few and have minimal impact. While it is undeniably harder to deal with integrating hundreds (or even thousands) of subs, there are distinct benefits to doing so that can allow greater optimization of the final result. When each sub represents a smaller percentage of the total signal to start with, tossing many of them, especially when your stack is already very deep, is probably preferable to the minimal additional SNR keeping them might preserve.
ScottBadger 7.61
Jon Rista:
It depends a lot on your goals. If some subs detract from your goals, it's probably better to exclude them than to include them for a small addition to SNR. Remember, as the stack depth gets deeper, each additional sub has less and less impact on the final result. When you are stacking hundreds of subs, you can actually afford to toss more than a few and have minimal impact.

All sounds good, though unless it's a target I like well enough to image multiple seasons, it's more like 50-100 subs per channel….Anyhow, again a balance, but it seems like many cull too aggressively, especially when stack numbers are even lower. The number of times even satellite trails are mentioned as part of someone's culling routine…..

Interesting that a single sub with  double stars caused an issue. Maybe because the core is so bright? Brighter than a satellite trail, say?

Cheers,
Scott
ScottBadger 7.61
Actually, I don’t think being brighter makes much sense as a reason. Could it be because you’re dithering and on other subs there are stars in the same location as the doubled stars in that one sub?

Cheers,
Scott

edit: never mind…. because they’re aligned when stacked that wouldn’t be an issue.
jrista 8.59
Scott Badger:
Actually, I don’t think being brighter makes much sense as a reason. Could it be because you’re dithering and on other subs there are stars in the same location as the doubled stars in that one sub?

Cheers,
Scott

edit: never mind…. because they’re aligned when stacked that wouldn’t be an issue.

I suspect it is because not all of the pixels of these doubled stars fell outside of the rejection criteria. So some parts were rejected, and other parts that fell below the criteria were not. I didn't notice it at first, as it was usually just small clusters of pixels around specific stars. Once I combined the channels, these clusters showed up notably blue, but I had to be zoomed in enough to see that they were not natural structures. Instead of going all the way back to the beginning and re-integrating, I just clone-stamped the issues out and continued on. I will eventually fix the integration, but that will require a lot of additional re-work of my earlier processing steps, so I haven't done it yet.
jrista 8.59
Scott Badger:
Jon Rista:
It depends a lot on your goals. If some subs detract from your goals, it's probably better to exclude them than to include them for a small addition to SNR. Remember, as the stack depth gets deeper, each additional sub has less and less impact on the final result. When you are stacking hundreds of subs, you can actually afford to toss more than a few and have minimal impact.

All sounds good, though unless it's a target I like well enough to image multiple seasons, it's more like 50-100 subs per channel….Anyhow, again a balance, but it seems like many cull too aggressively, especially when stack numbers are even lower. The number of times even satellite trails are mentioned as part of someone's culling routine…..

Interesting that a single sub with  double stars caused an issue. Maybe because the core is so bright? Brighter than a satellite trail, say?

Cheers,
Scott

Oh, I agree, a lot of stuff can simply be pixel rejected. Throwing away a sub because of a sat trail is throwing away good data.

For me, it's when there is a notable and broad-scale issue. Most of the time, this is when thin clouds just thick enough to obscure object signal pass through. This usually results in a large-scale, frame-filling (or mostly filling) change in the overall image. There may be little bits of object signal here or there, but if there is a major change in the dominant signal, then this will be detrimental to your stack. So I reject these.

If I have tracking issues, resulting in trailing, jumps, etc. then I reject those as well.

Cosmic ray strikes, sat trails, anything that can simply be corrected with outlier pixel rejection, though, I definitely keep.
CCDnOES 5.21
Jon Rista:
Throwing away a sub because of a sat trail is throwing away good data.


Even really bright trails that typically leave a residue in the final integrated image can often be dealt with by using the clone tool on the affected raw image to clone away the trail. Sounds weird, since one would not normally clone on a raw image, but it works. It can be done very crudely, since all you are trying to do is bring the affected area closer to the surrounding average. The final integration takes care of the rest and you don't waste the data.
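A crude numerical version of the clone-on-the-raw-sub trick: flag the trail pixels and push them down to the frame's background level, so integration's rejection has an easy job. The frame, trail brightness, and mask threshold below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
sub = 100.0 + rng.normal(0, 2, (8, 8))  # fake calibrated subframe
sub[3, :] = 900.0                       # bright satellite trail across row 3

trail = sub > 500                       # rough mask of the bright trail
background = np.median(sub[~trail])     # the "surrounding average", roughly
sub[trail] = background                 # crude clone: it doesn't need to be pretty
# Row 3 now sits at the background level instead of blowing out.
```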
jrista 8.59
Bill McLaughlin:
Jon Rista:
Throwing away a sub because of a sat trail is throwing away good data.


Even really bright trails that typically leave a residue in the final integrated image can often be dealt with by using the clone tool on the affected raw image to clone away the trail. Sounds weird, since one would not normally clone on a raw image, but it works. It can be done very crudely, since all you are trying to do is bring the affected area closer to the surrounding average. The final integration takes care of the rest and you don't waste the data.

I am not sure why a bright trail would be problematic. In fact, the brighter the trail, the more easily it should be rejected, as its deviation from the mean should be so much greater (and thus be guaranteed to fall outside of the rejection criteria for most algorithms). If you are not fully rejecting bright trails, then I would have to figure your rejection criteria are incorrectly configured. The brighter the outlier, the easier and more guaranteed its rejection should be, most of the time. This is why cosmic ray strikes are so easy to reject...they usually result in super bright clusters or structures of pixels that are WAY beyond the mean level of those pixels throughout the stack.

The most problematic trails are the FAINT ones, that are harder to identify as outliers relative to the mean for each pixel covered by the trail. However, if you are meticulous with your rejection criteria, you can usually tune them to reject even fainter trails like that. You might also reject some pixel values that you should probably keep, but there are algorithms you can use that will clamp rather than reject outlier pixels (i.e. make them equal to either the largest (for high outliers) or smallest (low outliers) non-rejected pixels in the distribution). This is not a perfect solution, but it can minimize the impact of pixel value loss during rejection.
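The clamp-instead-of-reject idea is essentially Winsorization. A minimal single-pixel sketch, with invented values and a simple MAD-based threshold standing in for the real algorithm's iteration:

```python
import numpy as np

# One pixel's value across 5 subs; 640 is a trail hit.
pixels = np.array([98.0, 101.0, 99.0, 100.0, 640.0])

med = np.median(pixels)                            # 100.0
sigma = 1.4826 * np.median(np.abs(pixels - med))   # robust sigma, ~1.48
hi_limit = med + 3.0 * sigma

# Winsorize: pull high outliers down to the largest non-rejected value,
# instead of discarding them outright.
inliers = pixels[pixels <= hi_limit]
clamped = np.where(pixels > hi_limit, inliers.max(), pixels)
# clamped.mean() is ~99.8, versus a naive mean of ~207.6
```

The clamped pixel still contributes to the average, which is why Winsorization loses less information than outright rejection.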

Then yes, if something is so faint that it can't really be identified as an outlier regardless, then it should just get averaged out in the stack.