Combining OSC with Mono [Deep Sky] Processing techniques · Coolhandjo

jrista 8.59
Not so bad for OSC in the SNR department. However, the ugly green is a pain and hard to remove, whereas the mono RGB is naturally much more neutral and vibrant. Sure, we could use some type of correction for the CFA of the OSC, as @Jon Rista alluded to earlier. The problem is that, last I checked, neither Sony nor ZWO provides this correction, so we are left to build our own calibration manually, which is extra complexity that shouldn't be needed. I want to point out here that SPCC, even with all the correct camera and CFA filter settings, DID NOT REMOVE THE GREEN BIAS.


Maybe I forgot to mention this before, but a color checker card can be used with any camera, along with various forms of free software you can find online, to create the necessary CCM (color correction matrix) for any camera. A lot of this software is command line, but pretty easy to use. You can then apply the CCM to the image. This should readily correct the color balance. Not only will the green be gone, the colors will usually be significantly more saturated compared to the uncalibrated baseline.
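For illustration, applying a CCM is just a per-pixel 3x3 matrix multiply. Here is a minimal sketch with NumPy; the matrix values are made up for illustration (a real CCM comes from measuring the color checker with your specific camera), with each row normalized to sum to 1 so neutral gray stays neutral:

```python
import numpy as np

# Hypothetical 3x3 color correction matrix (CCM). Real values come from
# color checker measurements; these are placeholders whose rows sum to 1.
ccm = np.array([
    [ 1.60, -0.40, -0.20],
    [-0.25,  1.45, -0.20],
    [ 0.05, -0.55,  1.50],
])

def apply_ccm(rgb, matrix):
    """Apply a CCM to an H x W x 3 linear RGB image.

    Each output channel is a weighted sum of the input channels,
    i.e. out = matrix @ [R, G, B] per pixel, which is what the
    equivalent PixelMath expressions compute channel by channel.
    """
    out = rgb @ matrix.T           # broadcast per-pixel matrix multiply
    return np.clip(out, 0.0, 1.0)  # keep values in the displayable range

# Tiny 1x1 "image": a green-biased near-gray pixel
pixel = np.array([[[0.40, 0.55, 0.40]]])
corrected = apply_ccm(pixel, ccm)
```

The off-diagonal negative terms are what pull the channel crosstalk back out, which is also why a calibrated image usually looks more saturated than the raw one.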

I did this with my DSLR in the past. Just used the raw transformation matrix math with PixelMath, IIRC. This is the raw data out of the DSLR without any color correction:



This is with the CCM applied:



I would expect similar results with any OSC camera. They are the same kinds of sensors, and since astro cameras usually use Sony sensors, the difference in saturation might be even more extreme than this. So if SPCC isn't working (wonder why...I would have thought that would have corrected things...) then a CCM should.


EDIT: Apologies...the first image did have one correction in PI: SCNR to remove the green... It doesn't look like I have a version without that correction.
rockstarbill 11.02
Here is my take:

if dust:
    Lum
else:
    RGB

Bill
andreatax 7.72
So, who invented the Luminance Layering technique?   Dr. Kunihiko Okano and Robert Dalby, independently. If memory serves me right, just before the turn of the millennium.
HegAstro 11.91
My subjective observations have been that:
  • I really like the color I can get from OSCs. I recently reprocessed an M33 image and was happy with the color and detail I got without the addition of mono luminance which I had.
  • When I add luminance (LRGB), I find that if I overprocess the luminance, the resulting image gets washed out. So clearly, the supporting RGB data needs to be there. I'd be interested in how people judge at what point to do the combination, since we certainly do a number of adjustments after the combination.
  • When I process a pure RGB (whether from mono or OSC), there seems to be little reason to process the extracted luminance separately. I used to do this in the past, but with BXT, there is no longer a reason to do this. Seeing how the stretch I make affects both contrast and color in combination is far more useful than doing it separately and then readjusting after the combination.


As Bill said, if I am going after extended dust, mono would be my tool of choice. OTOH, the only image of the IFN (not a very good one) I have was taken with an OSC, so without a doubt, it is possible. I think a good test would be a widefield of the Iris or M81/82 region with Mono and OSC having the same imaging time and same scope. I suspect the OSC would do better than many people think.
vercastro 4.06
Arun H:
My subjective observations have been that:
  • I really like the color I can get from OSCs. I recently reprocessed an M33 image and was happy with the color and detail I got without the addition of mono luminance which I had.
  • When I add luminance (LRGB), I find that if I overprocess the luminance, the resulting image gets washed out. So clearly, the supporting RGB data needs to be there. I'd be interested in how people judge at what point to do the combination, since we certainly do a number of adjustments after the combination.
  • When I process a pure RGB (whether from mono or OSC), there seems to be little reason to process the extracted luminance separately. I used to do this in the past, but with BXT, there is no longer a reason to do this. Seeing how the stretch I make affects both contrast and color in combination is far more useful than doing it separately and then readjusting after the combination.


As Bill said, if I am going after extended dust, mono would be my tool of choice. OTOH, the only image of the IFN (not a very good one) I have was taken with an OSC, so without a doubt, it is possible. I think a good test would be a widefield of the Iris or M81/82 region with Mono and OSC having the same imaging time and same scope. I suspect the OSC would do better than many people think.

You must have special hands if you can consistently get really good colour from OSC.

With regards to the step of adding luminance to RGB, the reason it may appear "washed out" is that the stretches of the RGB and the L do not match. It takes some practice to get this right, but when you do, you'll notice a significant improvement. As Jon alluded to before, there are "tricks" to increase the SNR of the RGB which allow it to more easily match the L. The controversial method is to blast it with DeepSNR at 100%. If you carefully handled the L, the result will be excellent.

Generally speaking I do agree with the idea that processing an extracted L when you only have RGB is a waste of time.
HegAstro 11.91
You must have special hands if you can consistently get really good colour from OSC


Not good hands, no, but I have been doing this for many years and I do have a fair number of images under my belt and a fair bit of integration time. I will not use OSC from a light polluted site, but I will certainly use it from a dark site. I guess I don't have the issues with the green cast that seems to bedevil people. Perhaps that is from use of CFA drizzle and SPCC? I cannot say.

I am, of course, aware of the heavy noise reduction techniques you mention, but not a fan of them. I used Deep SNR in my last published image.  I guess I don't really like images whose color basis doesn't support the luminance. Part of this is subjective. There are those who will clearly overstretch things like faint nebulosity (IFN etc), oversaturate things etc. I try not to. The image needs to look good to me at magnification, not cell phone size. But others are free to make their judgements as they see fit. It is their image.
vercastro 4.06
You bring up an interesting point about CFA drizzle. The example I provided was not processed that way but I did use SPCC.

I am increasingly on the side of CFA drizzle being necessary to get the most out of OSC. I'll have to do some testing.
HegAstro 11.91
Incidentally - today's IOTD has an L:R:G:B ratio of 1:1:1:1, so roughly equal total signal between luminance and color. The overall image is excellent and the colors deep and vibrant.


Edit: I've been thinking about what it means to do an LRGB combination for an acquisition with a 1:1:1:1 LRGB ratio. The luminosity of the RGB image should, in this case, have a similar SNR to the actual L separately acquired. So extracting the L* from the RGB image and replacing it with the L cannot improve the SNR significantly. You are just as well off using the RGB data alone. So I suspect the technique here is to construct a superluminance from the combination of the L and RGB masters and use that in the LRGB combination. That certainly would result in a sqrt(2) increase in SNR while still having excellent color basis.
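The sqrt(2) figure is easy to check with a toy noise model, assuming (as above) equal SNR in the acquired L master and in the luminosity of the RGB master, and independent noise between them:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0   # arbitrary flux level
noise = 10.0     # per-master noise sigma, assumed equal for both
n = 200_000      # samples used to estimate the noise

# The L master and the luminosity of the RGB master, modelled as two
# independent measurements of the same signal with equal noise.
L     = signal + rng.normal(0, noise, n)
L_rgb = signal + rng.normal(0, noise, n)

snr_L = signal / L.std()

# Superluminance: combine the two masters
superlum = 0.5 * (L + L_rgb)
snr_super = signal / superlum.std()
# snr_super / snr_L comes out ≈ sqrt(2) ≈ 1.41
```

With unequal SNRs you would use noise-weighted rather than equal weights, and the gain would be smaller than sqrt(2).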
andreatax 7.72
The best way to use SPCC with OSC images is to have them via CFA drizzle.
morefield 11.07
Has anyone talked about using a super-luminance here?  This is my standard procedure with LRGB.  Knowing that I'm not throwing away the Luminance information from the RGB, I usually get to something like 2:1:1:1.  But I really let the data dictate the final amounts I capture of each.
HegAstro 11.91
Kevin Morefield:
Has anyone talked about using a super-luminance here?


Yes, that was the substance of my last post.
jrista 8.59
Kevin Morefield:
Has anyone talked about using a super-luminance here?  This is my standard procedure with LRGB.  Knowing that I'm not throwing away the Luminance information from the RGB, I usually get to something like 2:1:1:1.  But I really let the data dictate the final amounts I capture of each.

I did not mention super-luminance, however you can create a synthetic luminance by integrating your R, G and B channels together, to create an L channel that is effectively like having done a 1:1:1:1 exposure ratio. With OSC, you would probably want to separate the RGB channels, then integrate them together, rather than extract a luminance. It's not a "super" lum, just a "synthetic" lum...but it would give you that stronger-SNR monochrome channel to work out the contrasts with.
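As a sketch of why integrating the three channel masters together boosts SNR: with equal, independent noise in each channel, averaging them behaves like stacking three more frames. (This is the idealized case; an actual ImageIntegration run would add weighting and pixel rejection.)

```python
import numpy as np

rng = np.random.default_rng(1)
truth, sigma = 50.0, 8.0   # per-master signal and noise, arbitrary units

# Three equally deep channel masters of the same gray scene
r, g, b = (truth + rng.normal(0, sigma, 200_000) for _ in range(3))

# Integrating the masters together, as if they were three more frames
# of one stack, averages independent noise down by sqrt(3).
syn_lum = (r + g + b) / 3.0

snr_single = truth / sigma
snr_syn = truth / syn_lum.std()
# snr_syn / snr_single ≈ sqrt(3) ≈ 1.73
```

Extracting L* instead applies fixed perceptual channel weights, which is one reason the two approaches give different synthetic luminances.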
vercastro 4.06
Jon Rista:
I did not mention super-luminance, however you can create a synthetic luminance by integrating your R, G and B channels together, to create an L channel that is effectively like having done a 1:1:1:1 exposure ratio. With OSC, you would probably want to separate the RGB channels, then integrate them together, rather than extract a luminance. It's not a "super" lum, just a "synthetic" lum...but it would give you that stronger-SNR monochrome channel to work out the contrasts with.


No, it's not necessarily equivalent to 1:1:1:1. The majority of luminance filters have a broader bandpass than the RGB filters in the set.

Edit:
I wanted to take the extra effort to make abundantly clear why Luminance on mono captures more signal than just using the RGB filters.

I have created a spreadsheet which calculates the amount of "data units" captured by a selection of filter combinations. It assumes that each colour filter captures 1/3 the amount of data for a given unit of exposure time when compared to luminance. This is approximately based on reality.

All mono: (spreadsheet screenshot)
All RGB: (spreadsheet screenshot)
1:1:1:1: (spreadsheet screenshot)
3:1:1:1: (spreadsheet screenshot)

The real world is more complicated of course, so I don't expect this to be scientific. This doesn't apply to OSC; the math for that is more complicated. I chose to ignore read noise etc. because its effect will be minimal in reality at several-minute exposures per subframe on modern CMOS cameras with TEC cooling.
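The spreadsheet's model, under the stated assumption (each colour filter captures 1/3 of L's signal per unit time), reduces to a few lines. The per-scenario time splits below are my reconstruction of the screenshots, all normalized to 60 minutes total:

```python
# "Data capture rate" per filter: L = 3, R = G = B = 1 (the 3:1:1:1
# assumption from the spreadsheet)
rate = {"L": 3.0, "R": 1.0, "G": 1.0, "B": 1.0}

def data_units(minutes_per_filter):
    """Total 'data units' for a dict of {filter: total minutes}."""
    return sum(rate[f] * t for f, t in minutes_per_filter.items())

# 60 minutes total integration, split per scenario (reconstructed splits)
scenarios = {
    "all L":   {"L": 60},
    "all RGB": {"R": 20, "G": 20, "B": 20},
    "1:1:1:1": {"L": 15, "R": 15, "G": 15, "B": 15},
    "3:1:1:1": {"L": 30, "R": 10, "G": 10, "B": 10},
}
units = {name: data_units(t) for name, t in scenarios.items()}
# all L: 180, all RGB: 60, 1:1:1:1: 90, 3:1:1:1: 120
```

Under this model, any time moved from colour filters to L raises the total, which is the whole argument in one line.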

Edit2:
My analysis of the spreadsheets is that if colour gamut is really important, 1:1:1:1 is a good tradeoff ratio.
andreatax 7.72
Eh? Where the hell are those numbers coming from?
vercastro 4.06
andrea tasselli:
Eh? Where the hell are those numbers coming from?

What specific question do you have?
andreatax 7.72
Where did you pluck those numbers from, because the "where" is perfectly unclear.
vercastro 4.06
andrea tasselli:
Where did you pluck those numbers from, because the "where" is perfectly unclear.



The numbers in green are what I change for different combos.

I set the exposure length to 3 minutes for every filter because that's my standard, and read noise is swamped across the board under my level of LP. The specific exposure length doesn't matter much for this exercise since signal captured over time is linear.

I then adjust the sub count for each filter so that the total exposure time (in the red section) is about 60 minutes for all scenarios. I picked 60 minutes total because it's simple.

The "data capture rate" and related "data units" are arbitrary numbers to which represent total captured signal for comparison. Each filter is assigned a rate, which is 3111 LRGB respectively.
andreatax 7.72
I really don't get what you're driving at, I'm afraid. I thought you would somehow demonstrate that R+G+B < L, but I can't see anything close to such a demonstration. To be perfectly clear, that would entail integrating the convolution of each RGB filter's transmission curve with the QE curve of the camera, divided by the spectral interval, and comparing the result with the same integration using the luminance filter's transmission curve and the QE curve of the same camera (etc...). I don't see anything like that. Besides, one would beg the question: which filters are we talking about?
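The comparison described here can be sketched numerically. The curves below are made-up stand-ins (ideal gapless filters and a Gaussian QE curve), not real Chroma or camera data, but they show the computation in question:

```python
import numpy as np

wl = np.linspace(400, 700, 301)                  # nm, visible band
dwl = wl[1] - wl[0]
qe = 0.8 * np.exp(-((wl - 530) / 120.0) ** 2)    # toy QE peaking in green

# Ideal, gapless filters partitioning the band. Real filters have gaps
# and overlaps, which is exactly where R+G+B can differ from L.
filters = {
    "L": np.ones_like(wl),
    "B": (wl < 500).astype(float),
    "G": ((wl >= 500) & (wl < 600)).astype(float),
    "R": (wl >= 600).astype(float),
}

# Effective throughput: integrate transmission x QE over wavelength
throughput = {n: float((t * qe).sum() * dwl) for n, t in filters.items()}
rgb_sum = throughput["R"] + throughput["G"] + throughput["B"]
# With this ideal partition, rgb_sum equals throughput["L"] exactly;
# any real R+G+B < L deficit comes from filter gaps and roll-offs.
```

Swapping in measured transmission and QE curves for specific filters and cameras is all it would take to settle the question quantitatively.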
vercastro 4.06
I believe the spreadsheet very clearly shows that bigger number does indeed mean bigger number.

In which case it does demonstrate that foregoing luminance while using mono cameras is a mistake, which was my intention.

I noted in my pre- and post-amble that I'm cutting some variables for the sake of simplicity, because the point of the comparison was to show, again, why luminance is in fact very important for mono imaging and not just some "fad" or "crutch" as I have read some people claim. Factoring in the QE curves of the camera would only alter the "capture rate" of each RGB filter relative to the others.

As for what filters? Chroma, of course, since most in the industry agree they are the best quality money can buy.

So here's their transmission curve:
(transmission curve chart)
Please make note of how each of the RGB filters roughly covers a third of the entire spectrum captured under the Luminance, in line with my assumptions. In my scenarios above, where we compare against the amount of signal captured by just L, this is a perfectly sufficient line of thinking.

The reason I'm still insistent on these discussion points is the significant amount of misinformation I see floating around on the Internet regarding these topics. I want every astrophotographer to be able to capture the best possible images of space. I share what I've learned along the way from practical experience so that we may achieve that goal.
Freestar8n 1.51
Jon is summarizing a lot that is in the cloudynights thread referenced earlier - but I think the key point to realize is that the "Luminance" filter does not capture perceptual luminance - and is a poor proxy for it.  People think it is good as "Luminance" - but it departs in ways that are detrimental to color.  I prefer to call the "Luminance" filter signal T, so people are effectively doing TRGB rather than LRGB.  Calling the filter "Luminance" is misleading.

A true perceived luminance signal would be heavily weighted by green and very little by blue.
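For reference, one standard definition of relative luminance (the Rec. 709 coefficients, in linear light) makes this weighting concrete; `flat_T` here is my stand-in for a broadband "Luminance" filter that weights the channels roughly equally:

```python
# Rec. 709 relative luminance: dominated by green, very little blue
def rec709_luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def flat_T(r, g, b):
    """Stand-in for a broadband L filter weighting R, G, B equally."""
    return (r + g + b) / 3.0

# A pure blue patch: the flat "T" signal rates it far brighter than
# perceived luminance does, so blending T in *as* L brightens blue
# regions beyond their perceptual level, desaturating them.
t_blue = flat_T(0.0, 0.0, 1.0)            # ≈ 0.333
y_blue = rec709_luminance(0.0, 0.0, 1.0)  # = 0.0722
```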

This isn't just a purist or pedantic distinction - it results in desaturation of the very colors you are trying to capture deeply.  And it explains why so many people find LRGB just doesn't work well for them.

As for the efficiency of OSC vs. mono - I recently did some measurements with various filters and confirmed the wider, overlapping bandpass for OSC result in *more* signal than non-overlapping mono filters, for a broadband target.

And as for OSC being less efficient because it captures double green - that is in effect acting as a form of LRGB, since the green *is* a good proxy for L.  A broadband terrestrial image is no different from a broadband astro image - and the benefits of the RGGB Bayer format for terrestrial apply also to astro.  The extra green improves both color and perceived luminance.

Finally - since T is not L you know that a long RGB or OSC image will be different from LRGB - particularly with special processing applied.  So it is possible people may prefer one over the other since they are different and it is subjective.  But there is good reason the LRGB concept is inherently flawed - because the T signal is not a good proxy for L.

Based on signals alone and matching to color vision, OSC *should* be better than mono in the same time.  If it is found not to be I'm not sure what the explanation is - but the simple explanations given based on throughput, resolution and so forth don't seem to apply.

Frank
vercastro 4.06
Jon is summarizing a lot that is in the cloudynights thread referenced earlier - but I think the key point to realize is that the "Luminance" filter does not capture perceptual luminance - and is a poor proxy for it. People think it is good as "Luminance" - but it departs in ways that are detrimental to color. I prefer to call the "Luminance" filter signal T, so people are effectively doing TRGB rather than LRGB. Calling the filter "Luminance" is misleading.

A true perceived luminance signal would be heavily weighted by green and very little by blue.



With due respect, this lacks any logical backing.

Luminance is in fact weighted toward green because the typical QE curves of CMOS sensors peak in green. Therefore luminance filters are already truly capturing luminance (by your definition), which simply means the entire visual spectrum. It's no mistake that the QE peak is in green: green is the most prominent natural colour on earth, and these camera sensors were originally designed for terrestrial photography.

Is the confusion here the idea that more green spectrum means the image should somehow be more green, and that deep space objects are actually greener than NASA has captured with Hubble, or greener than a spectrophotometric colour correction says my colour cast should be?
Freestar8n 1.51
With due respect, this lacks any logical backing.

Luminance is in fact weighted toward green because the typical QE curves of CMOS sensors peak in green. Therefore luminance filters are already truly capturing luminance (by your definition), which simply means the entire visual spectrum. It's no mistake that the QE peak is in green: green is the most prominent natural colour on earth, and these camera sensors were originally designed for terrestrial photography.

Is the confusion here the idea that more green spectrum means the image should somehow be more green, and that deep space objects are actually greener than NASA has captured with Hubble, or greener than a spectrophotometric colour correction says my colour cast should be?

It's true it ends up a bit weighted to the green, but there is still a very strong amount of red and blue, particularly blue, that results in the T signal being higher for blue regions than it would be if it were true luminance.  If you then blend it in *as luminance*, it makes blue regions brighter than they should be, desaturating them.  And for the many people who blend in T and are disappointed by desaturation of colors, it is a factor that I don't think has been pointed out till now.

A true luminance filter would be fairly narrowly peaked in the green, much narrower than typical QE response curves.

Just because a filter gives you strong signal and an image that looks good and high SNR as grayscale - it doesn't mean it will work well as L in a final color image, because it may be locally too high or too low - and corrupt the colors as a result.  A true L signal would not do that.

As for "green and deep space objects" - green is an important component of most any color you see, particularly since it dominates what we perceive as luminance.  Objects don't need to have green hue in order for high SNR green to have benefit.  Any broadband scene, including gray regions, has the green component playing an important role.  That's why it is doubled for OSC - and its benefits apply in astro.  Except for narrowband, of course.

Frank
jrista 8.59
Jon Rista:
I did not mention super-luminance, however you can create a synthetic luminance by integrating your R, G and B channels together, to create an L channel that is effectively like having done a 1:1:1:1 exposure ratio. With OSC, you would probably want to separate the RGB channels, then integrate them together, rather than extract a luminance. It's not a "super" lum, just a "synthetic" lum...but it would give you that stronger-SNR monochrome channel to work out the contrasts with.


No, it's not necessarily equivalent to 1:1:1:1. The majority of luminance filters have a broader bandpass than the RGB filters in the set.

I try to choose my words carefully, as this is a technical hobby filled with very intelligent people, like yourself. So just to be clear and fair...I said "effectively like", not "equivalent to", which is similar, but not the same. 

The key difference being the nature of the RGB filters. Yes, some have LP gaps. Some blue filters cut off "early". There can be a little bit of additional spectrum integrated by an L filter. 

That to me is a minor point. The more important one, I think, is this: IF you create a synthetic luminance from your RGB, you could get 30 hours of RGB and ALSO, without having to expend any additional acquisition time, effectively have a ~30 hour synthetic L by integrating the RGB channels together. (FWIW, I'm not saying extract the L*, I'm saying integrate them together...I find the latter seems to produce a better synthetic L.)

Once you have that L channel (L is not really about SNR so much as about having a high-SNR, undiscriminating channel to process for contrast), you can actually DO that processing, and still have your deep RGB, without any additional acquisition time cost for L. 

Sort of a....have your cake, and eat it too, kinda deal. 

No, it's not a 100% perfect, exact replica of what separately exposing an L filter would get you. I don't think the differences will actually matter enough in the end, though, not once you have a super strong RGB image to work with AS WELL.
jrista 8.59
As for the efficiency of OSC vs. mono - I recently did some measurements with various filters and confirmed the wider, overlapping bandpass for OSC result in *more* signal than non-overlapping mono filters, for a broadband target.

Do you by chance have any results from this? It would be great to have a robust test that could be referenced to demonstrate that OSC can in fact be more efficient.
C.Sand 2.33
Hello again everyone, I'm back with more time on my hands. Now to get into the goods.

@Freestar8n Just to clarify - is T your naming convention for what I would call L in LRGB, or is this an established thing elsewhere?

Also,
A true luminance filter would be fairly narrowly peaked in the green, and much more narrow than typical QE response curves.

How would this account for objects that are very obviously not "green"? For example, M45, SH2-136 (the Ghost Nebula near the Iris), or anything with an emission line?
Arun H:
Incidentally - today's IOTD has a lum to RGB ratio of 1:1:1:1, so roughly equal total signal between luminance and color. The overall image is excellent and the colors deep and vibrant.

If we're talking about IOTDs, there are plenty that have 3:1 ratios as well. To take it to the extremes, 2/15/24's IOTD by zombi (https://www.astrobin.com/r16oac/B/) is roughly 10: ~1 : ~0.7 : ~0.7 (5hrs 20mins L, 33mins R, 22mins G&B). While I'm not recommending a 10:1 ratio, clearly it worked. [EDIT: Jon helped me fix my wording. This "worked" in the manner that it got an IOTD and produced a very good image, not that it is the perfect rendition of color.] This isn't exactly an isolated event either. In my brief search I found plenty that had >4:1 ratios. [EDIT: For more context on this as well, I found plenty of 4:1s, some of which had (what I would call) accurate colors, and others that could definitely use more RGB time.]
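For what it's worth, the quoted ratio falls straight out of those integration times:

```python
# Integration times for the cited IOTD, in minutes
times = {"L": 5 * 60 + 20, "R": 33, "G": 22, "B": 22}

# Normalize against R to express the L:R:G:B ratio
ratio = {f: round(t / times["R"], 2) for f, t in times.items()}
# {'L': 9.7, 'R': 1.0, 'G': 0.67, 'B': 0.67}, i.e. roughly 10 : 1 : 0.7 : 0.7
```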