Combining OSC with Mono [Deep Sky] Processing techniques

velociraptor1 2.71
I was thinking about something similar, but it needs sensors of exactly the same size and resolution in order to align the data/stars properly.
I found it difficult to align data from a ZWO 071MCP camera with a ZWO 1600MMP camera, as the two have different sensor sizes and resolutions.
velociraptor1 2.71
This is a bit of a rabbit hole, but generally speaking there are spatial resolution losses because of the CFA on OSC cameras. There are also quite noticeable losses in light gathering behind the CFA filters compared to the same colour filter on a mono camera (on a per-pixel level with the same colour filter).

Then there's the fact that mono can capture across the entire spectrum on all pixels simultaneously with the use of a luminance filter. This is impossible with OSC.

I have done some direct comparisons between 533MC and 533MM myself (on the exact same telescope) with the same target and roughly similar sky conditions. The MM beats the SNR of the MC in 1/3 the exposure time.

So based on my experience it's almost always easier, from both an imaging logistics perspective and a processing complexity perspective, to just sell the 533MC and buy LRGB filters. And I'm not even going to get into the difficulties of getting OSC colors to calibrate correctly without a green bias (it has consumed dozens of hours of my time attempting to get it to look as good as mono, without ever being completely satisfied).

Modern software (NINA and PixInsight) makes managing the extra filters trivial and completely automated.

The above pointers are TRUE. I've seen that processing LRGB data from my 1600MMP is much easier and cleaner compared to DSO data from my recently bought 585MC, which has a heavy green tint.
jrista 8.59
Abhijit Juvekar:
This is a bit of a rabbit hole, but generally speaking there are spatial resolution losses because of the CFA on OSC cameras. There are also quite noticeable losses in light gathering behind the CFA filters compared to the same colour filter on a mono camera (on a per-pixel level with the same colour filter).

Then there's the fact that mono can capture across the entire spectrum on all pixels simultaneously with the use of a luminance filter. This is impossible with OSC.

I have done some direct comparisons between 533MC and 533MM myself (on the exact same telescope) with the same target and roughly similar sky conditions. The MM beats the SNR of the MC in 1/3 the exposure time.

So based on my experience it's almost always easier, from both an imaging logistics perspective and a processing complexity perspective, to just sell the 533MC and buy LRGB filters. And I'm not even going to get into the difficulties of getting OSC colors to calibrate correctly without a green bias (it has consumed dozens of hours of my time attempting to get it to look as good as mono, without ever being completely satisfied).

Modern software (NINA and PixInsight) makes managing the extra filters trivial and completely automated.

The above pointers are TRUE. I've seen that processing LRGB data from my 1600MMP is much easier and cleaner compared to DSO data from my recently bought 585MC, which has a heavy green tint.

The green tint is not a problem with the camera. It is a problem with how the data is handled. Usually, these sensors are run through a CCM, or Color Correction Matrix. These are the same kinds of sensors used in digital cameras, smartphones, and security cameras. They all produce perfectly fine color, right?

They all apply a CCM. With astrophotography, we don't. We usually calibrate a different way. Thing is, you can generate your own CCM with a GretagMacbeth color checker card and some free software. You can then apply the necessary matrix to OSC camera data with PixelMath in PI.
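If you want to try this yourself, here is a minimal sketch of what applying a CCM amounts to. The matrix values are hypothetical placeholders; real ones have to be measured from your own camera and color checker shots:

```python
import numpy as np

# Hypothetical 3x3 color correction matrix -- real values must be derived
# from shots of your own color checker card with your own camera. Each row
# sums to 1.0 so that neutral gray stays neutral after correction.
CCM = np.array([
    [ 1.60, -0.45, -0.15],
    [-0.30,  1.45, -0.15],
    [-0.05, -0.55,  1.60],
])

def apply_ccm(rgb: np.ndarray, ccm: np.ndarray) -> np.ndarray:
    """Multiply every pixel of a linear HxWx3 image by the matrix -- the
    same math as three PixelMath expressions of the form
    R' = m00*R + m01*G + m02*B."""
    out = np.einsum('ij,...j->...i', ccm, rgb)
    return np.clip(out, 0.0, None)  # keep the linear data non-negative
```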

The green is not an inherent problem with the sensor. It's a missing step in processing the data. I had the same problem using my high-end Canon DSLRs in the past. Years ago, someone shared some info on how to find the proper CCM for each camera, and then apply it with PM in PI. It worked, and the green cast vanished right off (and wow, did the colors SAT-U-RATE!!)

We need to stop blaming OSC sensors for things that are not actually the sensor's fault. Further, we need to stop generalizing all OSC as the same. Every OSC sensor is different. SOME, yes SOME, have poor Q.E. and not the best CFA dyes and things like that. But most of the more recent OSC cameras for astrophotographers that have hit the market? They are usually very good, with very high sensitivity, even with the CFA. Testing often shows that the OSC cameras are actually capturing more light overall than mono sensors with RGB filters (going to purposely exclude L for now, as that is a controversial topic with regards to actually improving SNR).

If you don't like the green cast, then you need to calibrate the data properly. A properly crafted CCM should do it. Multi-spectral photometric calibration should also do it. Another thing that will help with either CCM or MSPCC is getting more data with the camera, rather than relying so much on AI to clean up weak data. A stronger REAL signal will calibrate better, and that will help eliminate the green cast when calibration is done properly.

FWIW, personally, I'm a mono fan. But, I'm not actually a fan of classic LRGB filter sets with their square cutoffs and general lack of any overlap. This inherently leads to problems with reproducing a lot of colors that actually require some overlap between filters. Further, non-overlapping filters can result in multiple colors being mapped to the same rendered color, metamerism, which can reduce the color fidelity of your images. I have had Astronomik Type-2c filters in my filter wheel for the last several years, and they will be joined by a set of J-C BVR filters soon. These filters all have overlaps, to resolve the metamerism problem of classic LRGB filters.

OSC is not bad. It is not inherently less sensitive than mono; that really depends entirely on the specific OSC and mono cameras you are comparing. The CFA will usually cost you a couple percent or so of transmission vs. an interference filter. Not enough to matter when the overlapping filters of the OSC CFA often have larger passband areas than your typical RGB filters from LRGB sets. OSC images are not green due to some kind of design flaw or anything like that...it's actually a processing flaw! Dither and bayer-drizzle with enough subs, and there is no inherent loss in color coverage due to the 25%/50%/25% color pixel coverage aspect.

OSC cameras are a fine way to get colorful data with a lot less hassle than a monochrome camera. They produce BETTER color straight out of the box than your average LRGB filter set with mono, because they can in fact reproduce a lot more colors in the visible spectrum than non-overlapping LRGB filters. Adding a mono version of the same camera so you can capture NB data is a perfectly fine and productive endeavor, and there is little chance the OP would regret doing so. He would also have the option of using both cameras in an SBS rig to capture color and NB data of the same object simultaneously, if he wanted to.
Drothgeb 0.00
Abhijit Juvekar:
I was thinking about something similar, but it needs sensors of exactly the same size and resolution in order to align the data/stars properly.
I found it difficult to align data from a ZWO 071MCP camera with a ZWO 1600MMP camera, as the two have different sensor sizes and resolutions.

Must be your software.

I sometimes get bored with conventional processing, and like to get creative. As a result, I routinely combine data from 2, 3, and occasionally 4 different cameras. Works very well for me, and I only use Siril for registering images (I don't have PI). For this image I used a 2600MC for the stars, then had a 294MM and 533MM collecting NB data (at the same time to take advantage of good skies). The 294MM images were cropped to match the 533MM and then combined with the 2600MC's stars.
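For anyone wondering about the cropping/matching step: it is mostly pixel-scale arithmetic. A minimal sketch, with assumed pixel sizes and focal length rather than the actual gear used here:

```python
# Matching image scales from two cameras before registration.
# Pixel sizes and focal length below are assumed, for illustration only.
def arcsec_per_px(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

scale_294 = arcsec_per_px(4.63, 530)   # a 294MM-class pixel
scale_533 = arcsec_per_px(3.76, 530)   # a 533MM-class pixel
resample = scale_294 / scale_533       # ~1.23x resample onto the finer grid
```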

I'm a very casual astrophotographer, and don't know about the technical aspects discussed in this thread. But for me adding NB and OSC is easy, works well, and I like the results.

Crescent HP 4;3.jpg
C.Sand 2.33
@Jon Rista Just to clear some things up: I don't believe in the 8:1 philosophy. I try to stay ~3:1 (though that couldn't be done, unfortunately, for my IC 342, and you can see the issues I had).

I didn't intend to suggest that one wouldn't be getting enough RGB data, in fact the opposite. LRGB shines once you have that RGB ratio established, for the reasons you said in your last reply to me (26th mag and all). Increasing the saturation works because you have the RGB data; obviously you can't just saturate an L image. The whole point of LRGB is to establish the RGB ratio, not ignore it.
HegAstro 11.91
Abhijit Juvekar:
I was thinking about something similar, but it needs sensors of exactly the same size and resolution in order to align the data/stars properly.


Not true. Here are two concrete examples: color taken with OSC, luminance with mono. Different pixel sizes, different sensor sizes. PixInsight had no issues. Use frame adaptation in StarAlignment.

https://astrob.in/araagr/C/

https://astrob.in/1gjusi/0/
jrista 8.59
@Jon Rista Just to clear some things up: I don't believe in the 8:1 philosophy. I try to stay ~3:1 (though that couldn't be done, unfortunately, for my IC 342, and you can see the issues I had).

I didn't intend to suggest that one wouldn't be getting enough RGB data, in fact the opposite. LRGB shines once you have that RGB ratio established, for the reasons you said in your last reply to me (26th mag and all). Increasing the saturation works because you have the RGB data; obviously you can't just saturate an L image. The whole point of LRGB is to establish the RGB ratio, not ignore it.

I think there are a lot of anecdotal sayings about LRGB, why we do it, etc.

But the original point of LRGB, and thus I would say the whole point, is as a crutch, in an era when imaging was much harder than it is today (although easier than it was before, with film!!) and people thought: How do I make prettier images despite the fact that my camera is monstrously noisy?

The answer was not to establish an RGB ratio. The answer was, well...L filters capture undifferentiated light across the spectrum. We could use that to our advantage, if we were willing to sacrifice color fidelity. Let's do that! Boom, the point of LRGB was born. It's a CRUTCH!

We have mind-blowing technology these days. We are literally starting to push the limits. Read noise of 1e-...once you get down into the sub-electron read noise level (which the newer quantum film system type sensors might do), and once you hit 100% Q.E....there really isn't much more room for improvement. We are just about there, already. The technology we have is A-MA-ZING!! Why are we crutching still? :scratchhead:
andreatax 7.72
Q.E. higher than 100% has already been tried and tested, so no news there (except it isn't in consumer cameras), and it was quite noisy, if I recall right, mostly working < 400nm. And read noise of around 1e- isn't breaking a sweat, but how many ADUs is that? Because I don't read e-, I read ADUs.
C.Sand 2.33
Jon Rista:
I think there are a lot of anecdotal sayings about LRGB, why we do it, etc.

But the original point of LRGB, and thus I would say the whole point, is as a crutch, in an era when imaging was much harder than it is today (although easier than it was before, with film!!) and people thought: How do I make prettier images despite the fact that my camera is monstrously noisy?

The answer was not to establish an RGB ratio.

Yes, the answer WAS something else. That's the point. Now, we use LRGB for that ratio. These aren't the ideas of the past; we have grown and learned new things! We can do this because of the technology and software and algorithms and all that stuff. There is only so much color out there, so how do you collect detail then? Well, obviously you could just do more RGB exposures to get more luminance (after all, RGB together does equate to a luminance) or you could just use one filter and get 3x the data (edit: sorta, I know it's not that simple)!

Now I'm not saying an image of 1hr int (30/10/10/10) is going to have that RGB ratio, but an image of 12hrs? Maybe. An image of 24hrs? Most likely. This is where luminance works. We don't need one set format for every picture.
jrista 8.59
Jon Rista:
I think there are a lot of anecdotal sayings about LRGB, why we do it, etc.

But the original point of LRGB, and thus I would say the whole point, is as a crutch, in an era when imaging was much harder than it is today (although easier than it was before, with film!!) and people thought: How do I make prettier images despite the fact that my camera is monstrously noisy?

The answer was not to establish an RGB ratio.

Yes, the answer WAS something else. That's the point. Now, we use LRGB for that ratio. These aren't the ideas of the past; we have grown and learned new things! We can do this because of the technology and software and algorithms and all that stuff. There is only so much color out there, so how do you collect detail then? Well, obviously you could just do more RGB exposures to get more luminance (after all, RGB together does equate to a luminance) or you could just use one filter and get 3x the data (edit: sorta, I know it's not that simple)!

Now I'm not saying an image of 1hr int (30/10/10/10) is going to have that RGB ratio, but an image of 12hrs? Maybe. An image of 24hrs? Most likely. This is where luminance works. We don't need one set format for every picture.

I think using LRGB in a ratio of 3:1:1:1 is fundamentally detrimental to the general level of IQ using the method. I don't think it has actually been an improvement over what people did back in the day. That's what I've been trying to say. Someone, somewhere, decided 3:1:1:1 was optimal, so everyone does that now. How was that decided? Where? When? It's just one of those "This is how we do it" things, but I don't think anyone ACTUALLY knows EXACTLY why. All anyone seems to know is "L better. RGB not." 

If you really dig into why people do LRGB these days, they will say they get better SNR, by having more L. (Not considering the question: What is SNR in an LRGB image? I mean, honestly...what is SNR in an LRGB image? Can anyone actually answer that question?)

But that's actually not really it, when you dig even deeper. When you really get to the bottom of it, people get lots of L, because a STRONGER SIGNAL....is a lot easier to process into a higher contrast image, thus extracting more details. DETAILS is really what people are after. LRGB, therefore, in the end, is about CONTRAST. That's what it was about before, back in 2001, really. That's still what it's about today. The difference is that people are generally working with weak RGB today. That wasn't quite the case back in 2001...they binned back then, so the RGB was in fact pretty high SNR. If they got 3x as much L, they were then binning the RGB 2x or 3x. That would mean their RGB SNR was either only slightly weaker or even on par with the L...just at a lower resolution. We don't do the same thing today...we've opted for the route of WEAK RGB. And it shows. Back in the day, LRGB combination still had complications with the fact that combining L will generally wash out the color, but the tactic was usually to blur the RGB, since it was already a lower resolution than the L, and then stretch it enough to maintain contrast with the L. We have this problem today as well.
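To make the old binning tradeoff concrete, here is a minimal 2x2 software-binning sketch (shot-noise-limited data assumed; the read-noise caveat in the comment is why CCD-era hardware binning bought more than software binning does on CMOS):

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of a 2D frame. For shot-noise-limited data the
    signal rises 4x while the noise rises ~2x (sqrt of 4), so per-pixel
    SNR roughly doubles at half the resolution. CCD hardware binning also
    paid read noise only once per binned pixel; software binning does not."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```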

Today, people are chasing contrast, at the cost of color, in some cases almost entirely. Because with CMOS cameras we really don't have a binning option (software binning doesn't offer any real benefits, and sometimes incurs losses). So we are acquiring weaker RGB data, AT NATIVE RESOLUTION. This is a major shift from the original LRGB process. Weak RGB. Increasing the time, 12 hours, 24 hours, is not actually going to change the underlying problem: the act of combining L is in fact one of the key factors that destroys the color (which is usually only compounded these days by people getting so little RGB with weak RGB exposures). L is not actually real luminance, not in the same way that the luminance you get intrinsically with pure RGB data is luminance. Swapping in an undifferentiated L channel, high contrast or not, is going to wash out colors, unless you do something to allow you to stretch the RGB the same amount you stretch the L (i.e. massive NR, or even blurring), and then perform some extensive saturation (which won't increase color accuracy or fidelity...it will artificially saturate colors, which has its own drawbacks).

It's all a tradeoff in the end, though. What's curious to me is why it seems to have become so much about L, and hardly at all about color... If you rebalance the equation. A 1:1:1:1 ratio. Your L is STILL going to have higher SNR for any given time. It's still guzzling 3x the photons. Undifferentiated photons, but more. If you get 12 hours with a 3:1:1:1 ratio? That would be 6h lum, 2h each RGB. If you get 24, that's 12/4/4/4. Four hours of RGB per channel is not strong in and of itself...and at 12 hours total it's still just 2 hours per channel, so quite weak. WORSE, usually, people match the RGB exposures to the L. So if you can only get 60 second L subs, then you are probably getting 60 second RGB subs. That is MOST likely going to be non-optimal for RGB. L generally covers about 3x the amount of spectrum that any RGB filter does. Which should generally mean you should be able to use at least 2x, if not 3x, in some cases even more than 3x, the L sub-exposure time on your RGB filters. Non-optimal exposures on RGB mean that 4 hours of RGB combined together is not actually the same as 4 hours of L! The RGB is still going to be weaker, shallower; it won't show as many faint details...because you have handicapped your RGB data. It's imbalanced. 4+4+4 RGB != 12 L then. That 24 hours of exposure isn't going to help the color as much as you think.

It would take even more rebalancing of the acquisition process to really improve color with LRGB imaging. You would, in particular, need to adjust the actual exposure TIMEs for each RGB subframe. Maybe your B channel can handle 3.2x as much exposure as L, so 192 seconds, and your R can handle 3x, so 180 seconds. The G band will often cover a lot of the broadband light from stars, so you might only be able to handle 2.5x the exposure, 150 seconds. There is another way to determine optimal RGB exposures that could lead to more optimal star halos: expose each channel so that the stars grow the same way and clip the same way (it's hard not to clip some of the brightest, if you want optimal background signal), so if you have star halo problems, this is an alternative approach. G is also the color our eyes are most sensitive to, and thus the color that will contribute most to the intrinsic luminance of the RGB data itself. So you might then want to acquire more G subs than R and B subs. An OPTIMAL LRGB acquisition, then, might be a bit more complex to define, and not just a simple matter of increasing the time of all channels equally. I don't think anyone generally asks these questions, and certainly doesn't think about it all that deeply most of the time. Perhaps a 1.5:1:2:1 ratio is, in fact, far more optimal, in a general sense, for the creation of highly detailed, more colorful images of space?
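The per-filter arithmetic above, spelled out (the multipliers are the illustrative ones from this paragraph, not measured values for any real rig):

```python
# Per-filter sub lengths scaled from a 60s L sub, using the illustrative
# bandpass multipliers from the paragraph above (assumed, not measured).
l_sub = 60  # seconds per L sub
bandpass_factor = {"R": 3.0, "G": 2.5, "B": 3.2}  # x the L exposure
rgb_subs = {f: round(l_sub * m) for f, m in bandpass_factor.items()}
# -> {'R': 180, 'G': 150, 'B': 192} seconds, matching the text
```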

Objectively, pure RGB produces superior quality color and, more specifically, more accurate color by a good deal. Accuracy is a key word here, as it usually requires a specific reference against which accuracy is measured. Such as, say, some distant fuzzy that's billions of light years away...most often, these are gray, or very, very weakly colored, and often show up yellowish, faintly. A highly distant redshifted quasar, however, should appear distinctly red, shouldn't it? (In Hubble images of popular galaxies, this is exactly how they appear!) That would be accuracy, which would be higher SNR. Interestingly...SNR is not actually synonymous with smooth or low noise...a high SNR signal implies accuracy, not low noise. The SNR argument for L is really actually more of an image smoothness argument...as no one really knows how to actually define SNR in an LRGB image. (SNR requires a reference point, a subject of interest...i.e. the faint quasar fuzzy, so you can actually determine high vs low, right color vs. wrong color. LRGB doesn't actually support such analysis.)

Optimal RGB acquisition (vs. say just using the weak RGB people tend to acquire alongside their L) has minimal drawbacks in comparison to LRGB images, and some significant benefits with regards to color richness, fidelity, details, SNR and accuracy in fainter objects. You might not end up with images quite as smooth, but a 1:2:1 RGB ratio initially, followed up by acquiring additional data in whatever channel actually ends up noisiest in your preliminary images, should deliver high quality intrinsic luminance (thanks to higher green integration), as well as rich, deep color, throughout the entire signal range. Optimal exposures (i.e. tuning exposure length for each filter, rather than just using a single short exposure for all channels) should support the goals of acquiring faint details (i.e. one of the common reasons given for why people use an L filter). Finally, if you are really struggling with pulling out faint signals, you can always combine the R+G+B data into a synthetic luminance (which in a program like PI, can often be done with correct weightings for each RGB channel, to ensure that it re-combines with the RGB data without washing out any color), and pull out very faint signals that way. Since your L came from your RGB...well, you would naturally have long, deep optimal RGB exposures anyway, so you get the best of both worlds: Rich, high SNR, accurate color, AND the ability to use a strong L channel to maximize detail extraction and contrasts.
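A minimal sketch of that synthetic-luminance step. The Rec.709 weights are one common choice and an assumption here; PI can derive weights from the actual RGB working space:

```python
import numpy as np

def synthetic_l(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Weighted sum of the stacked R, G and B masters. Weighting (rather
    than a plain average) keeps the synthetic L consistent with the RGB's
    own intrinsic luminance, so recombining it doesn't wash out the color."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 luma weights
```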


Everyone has their reasons for doing LRGB. It's about the ratio. It's about SNR. It's about details. It makes processing easier. I'm just trying to open the box a bit, and let people peek outside of it, to see that there may be other ways, ways that might be better. ;)
jrista 8.59
andrea tasselli:
Q.E. higher than 100% has already been tried and tested, so no news there (except it isn't in consumer cameras), and it was quite noisy, if I recall right, mostly working < 400nm. And read noise of around 1e- isn't breaking a sweat, but how many ADUs is that? Because I don't read e-, I read ADUs.

Higher Q.E. only works if you have very high energy photons, i.e. deeper UV, X-ray, gamma-ray, which aren't helpful to us. Or if you use electron multiplying, which is also not really useful to us.

You read e- into ADU. ADU is an arbitrary measure. It's different with every camera and gain setting. ADU ONLY has any meaning for a specific image read at a particular gain.

The only way we can discuss signals in a normalized manner is to discuss them in e-. These are the actual constituents of the signals we acquire. Most of the time, people misunderstand what ADU means and what noise actually is. I'm sure you have heard people say "Increasing gain increases noise", right? People say it all the time. Why? Because it is what they SEE! But what they see is an illusion, and they aren't understanding what the real signal, and the real noise, are.

Consider this:



The top row is ADU. The more you crank the gain, the noisier the results look! This completely deceives a lot of imagers, particularly the more novice. The top row is an absolute lie! It's just the same PI stretch applied to a bunch of dark frames taken at different gains.

Now look at the bottom row. Those same dark frames were re-scaled by the gain factor (i.e. e-/ADU), and the same stretch was applied again. The bottom row demonstrates the true benefit of increasing gain...by referencing the REAL signal level. The electrons. ADU are meaningless outside of a specific context. Electron counts, however, are normalized. We can compare them across any gain, even any camera, and demonstrate the TRUE differences between cameras/sensors, images, etc.

Read noise of 1e- is what matters. Read noise of 1.2e-, 1.5e-...these are incredible read noise levels. One camera with 1.5e- read noise might convert that to 15 ADU, another to 10. Doesn't really matter the ADU count in the end, especially if the quantization noise is low (as with both of these). What matters is the read noise is only 1.5e-, meaning you can swamp it a lot more easily than a camera with 5e-, 10e-, or 15e-, or more.
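Spelled out with the numbers from that example (the gain values are assumed, for illustration):

```python
# Why electrons normalize across cameras while ADU does not.
read_noise_e = 1.5           # physical read noise, electrons RMS
gain_cam_a = 0.10            # e-/ADU on camera A (assumed value)
gain_cam_b = 0.15            # e-/ADU on camera B (assumed value)

adu_a = read_noise_e / gain_cam_a   # 15 ADU
adu_b = read_noise_e / gain_cam_b   # 10 ADU
# Identical physical noise, different ADU numbers -- only the electron
# figure is comparable across cameras and gain settings.
```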
HegAstro 11.91
Jon Rista:
But that's actually not really it, when you dig even deeper. When you really get to the bottom of it, people get lots of L, because a STRONGER SIGNAL....is a lot easier to process into a higher contrast image, thus extracting more details. DETAILS is really what people are after. LRGB, therefore, in the end, is about CONTRAST.


I agree that a good portion of LRGB is about contrast. But it is not so easy to separate SNR and contrast - they are linked. When you stretch an image, you are increasing the separation between two points of signal that, in a linear image, are numerically close together. This is only meaningfully possible if the signal difference between them exceeds the noise threshold and does so significantly. Otherwise, you are just stretching statistical noise. In other words, the better the SNR at some point in the histogram you want to increase contrast in, the better you can drive increased contrast. The most ideal way of getting there is, I agree, through pure RGB imaging, since color information is better preserved. But yes, I cannot claim to know what the magic LRGB ratio is. For that matter, what the magic RGB ratio is probably isn't so easy to compute either, even if it is possible. It very likely is object dependent.
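A toy illustration of that point, using the midtones transfer function behind a typical screen stretch (all numbers invented):

```python
def mtf(x: float, m: float) -> float:
    """Midtones transfer function, as used in PixInsight-style stretches
    (maps 0 -> 0, 1 -> 1, and the midtones balance m -> 0.5)."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

# Two nearby linear levels and a noise sigma (illustrative numbers):
a, b, sigma = 0.010, 0.012, 0.004
sep_after = mtf(b, 0.05) - mtf(a, 0.05)  # the stretch widens the gap...
# ...but it amplifies the local noise by the same slope, so the widened
# "contrast" is still just amplified noise unless (b - a) clears sigma.
```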
C.Sand 2.33
Jon Rista:
I mean, honestly...what is SNR in an LRGB image? Can anyone actually answer that question?


An LRGB image SNR is essentially the same as any other form of SNR: magnitude of signal vs uncertainty of signal. What's different about an LRGB image is that SNR can be thought of as being broken up into two formats - RGB SNR and L SNR. RGB SNR is obvious; it's what you've been talking about and what you're familiar with, establishing what each pixel's ratio of R, G, and B is in relation to everything else. What I think you're missing is that LRGB imaging takes advantage of the L SNR as well. Specifically, L SNR (as I would define it) is the confidence we have that a pixel has a certain brightness value. We can think of this in terms of RGB if you like, (0,0,0) to (255,255,255), or just as a scalar from 0 to 255 (which I will be doing from here on out, since it's easier and I'm sure you can extrapolate if necessary). The importance of L in an LRGB image is to tell us how bright any given pixel is. If we only have the RGB data we could look at 2 pixels and see that they are P1=(13,100,13) and P2=(20,25,20); we know the ratios of RGB. The issue here is if P2 is actually significantly brighter than P1. From just looking at the ratios of RGB in the pixels we would think that P1 is brighter, though still a nice dark green (in fact P2 looks essentially black). But if we then take our luminance data and see that our luminance value for P1 is 2, but for P2 it's 10, then we can realize that P1 is actually (26,200,26) and P2 is (200,250,200). What was just black is now a pale minty green. And that's why LRGB SNR is hard to define; it's not just one thing. You have to look at the relationship between your data.

To give you a one-sentence answer: LRGB SNR is the combination of the ratio between the RGB values, and the scaling factor that the luminance data adds to this.
Jon Rista:
I think using LRGB in a ratio of 3:1:1:1 is fundamentally detrimental to the general level of IQ using the method. I don't think it has actually been an improvement over what people did back in the day. That's what I've been trying to say. Someone, somewhere, decided 3:1:1:1 was optimal, so everyone does that now. How was that decided? Where? When? It's just one of those "This is how we do it" things, but I don't think anyone ACTUALLY knows EXACTLY why. All anyone seems to know is "L better. RGB not."

I didn't mean to say 3:1 is the end-all, be-all. I'm still trying to figure that out myself. For now, I do 3:1 because I don't have the time to mess with it, and I know that my goal with any image is to saturate my RGB data so I can establish that ratio I was describing above. At that point, I will have a lot of luminance data, yes, maybe too much. This also relates to my acquisition. I use NINA and filter offsets, so in my imaging runs it is a loop of 3xL, 1xR, 1xG, 1xB. This is to ensure that if I have funky data from a night - high clouds, weird gradients, etc. - I don't suddenly have 8 hours of R and G, but 2 hours of B. But, once again, the goal is to establish that good RGB ratio. Essentially this means that if I were to take my RGB data alone and process it, it would be a good image, which is what it sounds like non-L imaging is (to an extent). Once I know my RGB data is good, it's easy to just take my 3xL data and throw it in there. I'm not attempting to maximize my L data; I'm concerned about the RGB. (Now, this does lead into the matter of how getting more L data later can be beneficial (later meaning after RGB is well established), but I'm going to get into that later on.)

I don't know if 3:1 is the best strategy. Statistically, it's probably not. I think it is a changing ratio like you were saying here:
Jon Rista:
Perhaps a 1.5:1:2:1 ratio is, in fact, far more optimal, in a general sense, for the creation of highly detailed, more colorful images of space?

But for now, I don't have the time to figure it out, and it has been working well enough. And I imagine it changes from target to target, too, so at some point it's probably more worthwhile to just accept that you won't be able to optimize everything and just deal with it. After all, I shoot from B8 most of the time anyway, so there's not much optimization going on. 

And as for exposure length, I have been testing (for example) 60s L images and 120s RGB images - some form of non-1:1 exposure time. Again, I don't have the time to figure out what's perfect, and it probably changes.
Jon Rista:
If they got 3x as much L, they were then binning the RGB 2x or 3x. That would mean their RGB SNR was either only slightly weaker or even on par with the L...just at a lower resolution. We don't do the same thing today...we've opted for the route of WEAK RGB.

Jon, what I've been trying to say is that LRGB imaging shouldn't be sacrificing RGB signal for L. The whole point of LRGB imaging in my eyes is to enhance a well established RGB signal, not shortcut it. It is a similar idea to what you're saying was happening back in the day, the difference being that now we can afford to not bin our RGB.
Jon Rista:
Objectively, pure RGB produces superior quality color and, more specifically, more accurate color by a good deal.

What I keep trying to say is that I believe you're misunderstanding LRGB. Yes, a lot of people are getting it wrong, but the point is to have the accurate color. You're not trying to work with the color in the L data; that's purely for the detail and ratio.

In my opinion it would be similar to saying that RGB imaging doesn't produce an accurate image because it has less data than LRGB. What does "less data" mean? Does it mean fewer images? Less integration time? What we're ignoring here is that the images could have 1 minute and 4 minutes total integration! Of course it's going to be inaccurate! I'm not trying to say a 6hr LRGB image = 6hr RGB image; I'm looking at the effectiveness of adding more data. If I only have 2hrs of RGB each, it's almost always going to be more effective to add more RGB data. If I have 20hrs of RGB data each, it's probably going to be more effective to add L data.


Jon Rista:
Interestingly...SNR is not actually synonymous with smooth or low noise...a high SNR signal implies accuracy, not low noise.

That's the point!!!! You want the high accuracy!!!! Your noise can be absolutely terrible! But what matters is that the noise is the right color (or more specifically, the correct ratio of colors). This is why L data matters! If you have an atrocious smattering of what seems to be random colors across the field in your RGB image, and a pristine DSO in your L image, you can map that RGB data onto the L, creating that quality image.

Jon Rista:
Finally, if you are really struggling with pulling out faint signals, you can always combine the R+G+B data into a synthetic luminance (which in a program like PI, can often be done with correct weightings for each RGB channel, to ensure that it re-combines with the RGB data without washing out any color), and pull out very faint signals that way. Since your L came from your RGB...well, you would naturally have long, deep optimal RGB exposures anyway, so you get the best of both worlds: Rich, high SNR, accurate color, AND the ability to use a strong L channel to maximize detail extraction and contrasts.

Here's where I see the biggest benefit of LRGB. If you need that faint signal, if you actually need that luminance, sure, you could get it from your RGB image. But at some point you're hitting diminishing returns. I'm sure you've seen a logarithmic graph in reference to astrophotography; I'm sure you know that with increasing exposure time you get smaller and smaller returns. This is the point of LRGB. Once you have hit some point, you have an (almost) accurate RGB ratio. If you were to add another 30 hours of RGB to improve that ratio ever so slightly, improve that detail just a tad, you could. BUT, if you had a luminance filter you could do the same thing in one third the time (of course assuming a 1:1:1 RGB ratio for simplicity's sake). And what's more is that because of the shorter exposures you can take with L, that extra 10 hours is even more effective.




This is going to be my last reply in this topic on this train of thought, because in my eyes you are ignoring the point of LRGB. It's not to ignore RGB; LRGB cares about RGB just as much. You need that color data in order to make an image. There are things to figure out on what's most accurate and what ratio is best and whatnot, but at some point you will benefit more from luminance data than color data, and that's why LRGB is effective. Best of luck.
jrista 8.59
Jon Rista:
I mean, honestly...what is SNR in an LRGB image? Can anyone actually answer that question?


An LRGB image SNR is essentially the same as any other form of SNR: magnitude of signal vs uncertainty of signal. What's different about an LRGB image is that SNR can be thought of as being broken up into two formats - RGB SNR and L SNR. RGB SNR is obvious; it's what you've been talking about and what you're familiar with, establishing what each pixel's ratio of R, G, and B is in relation to everything else. What I think you're missing is that LRGB imaging takes advantage of the L SNR as well. Specifically, L SNR (as I would define it) is the confidence we have that a pixel has a certain brightness value. We can think of this in terms of RGB if you like, (0,0,0) to (255,255,255), or just as a scalar from 0 to 255 (which I will be doing from here on out, since it's easier and I'm sure you can extrapolate if necessary). The importance of L in an LRGB image is to tell us how bright any given pixel is. If we only have the RGB data we could look at 2 pixels and see that they are P1=(13,100,13) and P2=(20,25,20); we know the ratios of RGB. The issue here is if P2 is actually significantly brighter than P1. From just looking at the ratios of RGB in the pixels we would think that P1 is brighter, though still a nice dark green (in fact P2 looks essentially black). But if we then take our luminance data and see that our luminance value for P1 is 2, but for P2 it's 10, then we can realize that P1 is actually (26,200,26) and P2 is (200,250,200). What was just black is now a pale minty green. And that's why LRGB SNR is hard to define; it's not just one thing. You have to look at the relationship between your data.

To give you a one-sentence answer: LRGB SNR is the combination of the ratio between the RGB values, and the scaling factor that the luminance data adds to this.


This last sentence right here is not actually true, though. I'll stop here, because until this is cleared up, I don't know where we go. If you combine an L channel with noisy RGB data...you won't really see much improvement in the noisiness of the image. This is easily demonstrable. The L channel doesn't improve chroma noise.

Another thing is that combining L is not just a simple multiplication with the RGB values. That isn't what L combination does... It uses L*a*b* space to replace the luminance channel, and the mathematics involved are quite complex! The RGB image is actually transformed through CIE XYZ space, into CIE L*a*b* space. The a* and b* represent chromaticity (which is not color); L* is the true luminance. You then use your artificial L channel in place of L*, and recombine with the chrominance channels. That result is then converted back through CIE XYZ to RGB. It's a complex process, and not just a simple multiplication problem.
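A minimal sketch of that L*-replacement path, with scikit-image's sRGB-based conversions standing in for PI's internals (which differ in working-space details):

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(rgb: np.ndarray, lum: np.ndarray) -> np.ndarray:
    """Replace the RGB image's intrinsic L* with a separately captured
    (already stretched) luminance channel. Both inputs in [0, 1]."""
    lab = rgb2lab(rgb)               # RGB -> XYZ -> CIE L*a*b*
    lab[..., 0] = lum * 100.0        # swap the L filter data in for L* (0..100)
    return np.clip(lab2rgb(lab), 0.0, 1.0)  # back through XYZ to RGB
```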

You are basically saying that combining L scales the RGB. That's all you did, by multiplying 13, 100, 13 by 2, and 20, 25, 20 by 10...that is simple scaling. The thing you are missing is, if you did that, you would in fact have ZERO change in actual SNR. Any multiplication of a signal (i.e. the RGB signal) ALSO MUST multiply the noise in that signal. If you had, say, 4.9e- read noise in your 20e- starting signal, and multiplied the signal by a factor of 10, the SNR has to stay the same, because the noise is intrinsic to the signal. You don't just multiply a 20e- signal by 10x and magically gain 10x the SNR. If you actually did that, a simple scaling, and measured the SNR, you would find that nothing actually changed. A 20e- signal over 4.9e- noise has an SNR of 4.08:1. A 200e- signal, then, with correspondingly scaled noise of 49e-, would also have an SNR of 4.08:1.

A neat little check for this is that the ratios of your scaled RGB values also don't change. Consider 25/20 = 1.25. And 250/200 = 1.25. Only the absolute difference between the B and G values increased. There couldn't be an improvement to SNR there...
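The same check in code form, with the numbers from above:

```python
signal_e, noise_e = 20.0, 4.9               # the example values above
k = 10.0                                    # any pure scaling factor
snr_before = signal_e / noise_e             # ~4.08:1
snr_after = (k * signal_e) / (k * noise_e)  # still ~4.08:1
# Multiplying a signal multiplies its noise: scaling cannot create SNR.
```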

L does not scale the RGB. L replaces L*. If your chromaticity is noisy, so is the final result after you replace L.

Further, combining the L channel does NOT preserve the chromaticity; in fact, without other actions taken, it usually washes it out. Intrinsic L* is weighted to the channels; an L filter luminance image is not. The blues suffer particularly, then the reds. The weaker the RGB data, the more extreme this effect tends to be.

Usually, the way these issues are addressed is to heavily denoise or even blur the RGB, and then stretch the RGB a lot more to match the signal profile of the L. Then, to artificially saturate after combining.

This can certainly create a pleasant picture. I'm not saying it can't. But it is a tradeoff, and there are consequences of doing it. If you do use weak RGB, which would generally be the case with a 3:1:1:1 ratio and constant exposure length across filters (the most common approach I've seen, although the L ratio seems to have climbed a lot more in recent years! Sometimes L is almost all the exposure time, and maybe as little as 20 minutes on the RGB), then that is going to limit the color you have in your image even more.
C.Sand 2.33
Jon Rista:
If your chromaticity is noisy, so is the final result after you replace L.

So I told a minor lie. This is my last post in here for real this time.

The whole point is to not have noisy chrominance; I don't know how else to say it.

Yes, the scaling isn't accurate; I couldn't think of another way to explain it to you, since you continued to ignore the point I was trying to make about having a strong RGB signal.

"If there is one form of noise reduction that I recommend above all others: integration. Stacking more light frames is, in my opinion, the single best way to reduce noise that we have at our disposal."
And so we get enough RGB, so that we can stack frames, to reduce our chrominance noise...

Goodbye
jrista 8.59
Jon Rista:
If they got 3x as much L, they were then binning the RGB 2x or 3x. That would mean their RGB SNR was either only slightly weaker or even on par with the L...just at a lower resolution. We don't do the same thing today...we've opted for the route of WEAK RGB.

Jon, what I've been trying to say is that LRGB imaging shouldn't be sacrificing RGB signal for L. The whole point of LRGB imaging in my eyes is to enhance a well established RGB signal, not shortcut it. It is a similar idea to what you're saying was happening back in the day, the difference being that now we can afford to not bin our RGB.

I do understand that, although given X available clear nights to image Y, if you spend any time on L, then you ARE sacrificing time for RGB.

Based on the above reply, I am not sure if you fully understand how LRGB combination, the actual act of combining the two, works. It sounds like you think there is an inherent and natural and simple win to combining L...but it's actually a lot more complex than that. Even if you have a well established RGB signal (I am going to take that to mean high SNR, at least), the process of combining L is not without consequences. Even with a good RGB signal, combined with L in one of the common and easy manners (i.e. just swap L* for L after decomposing your RGB image into L*, a* and b*), you'll find that the colors change.

If combining L was ONLY going to enhance your RGB, you would see no change in the colors...and yet you do. Exactly how they change is going to be dependent on exactly how L differs from L*...and that is going to depend on exactly what your L filter captured, and what you do with it when you process the L channel. CORRECTING the RGB color issues that result from combining L is also complex, and that usually leads to people developing the often complex LRGB combination processing workflows we frequently see. It's not an easy problem, and it often overcomplicates what could be a pretty straightforward process (i.e. combine RGB, stretch, curves to taste!)
jrista 8.59
Arun H:
Jon Rista:
But that's actually not really it, when you dig even deeper. When you really get to the bottom of it, people get lots of L, because a STRONGER SIGNAL....is a lot easier to process into a higher contrast image, thus extracting more details. DETAILS is really what people are after. LRGB, therefore, in the end, is about CONTRAST.


I agree that a good portion of LRGB is about contrast. But it is not so easy to separate SNR and contrast - they are linked. When you stretch an image, you are increasing the separation between two points of signal that, in a linear image, are numerically close together. This is only meaningfully possible if the signal difference between them exceeds the noise threshold and does so significantly. Otherwise, you are just stretching statistical noise. In other words, the better the SNR at some point in the histogram you want to increase contrast in, the better you can drive increased contrast. The most ideal way of getting there is, I agree, through pure RGB imaging, since color information is better preserved. But yes, I cannot claim to know what the magic LRGB ratio is. For that matter, what the magic RGB ratio is probably isn't so easy to compute either, even if it is possible. It very likely is object dependent.

I agree with this, for L. You need a strong signal in order to process to bring out the contrast. That is what I was trying to say previously. That is why people invest time in L: because it's a fast way to get lots of signal, so they CAN do that processing.

I am a believer that pure RGB, with modern camera technology, is going to deliver better quality data. I also actually believe that if you invest all your time in RGB only, no L, that doesn't mean you can't HAVE a separate L channel. You can always integrate the separate RGB channels together to produce an L that would basically be what a separate L filter would have acquired.

Sometimes when I mention that, I'll get responses like "The L channel only works if it's separate signal, you can't improve SNR with L made from RGB", and in a sense that is correct. However, at the same time, I don't consider L a good means of improving SNR (maybe improving smoothness, to some degree, although usually that means being destructive with NR or blurring to your RGB data, which I don't care for). What L is really good for, IF it itself IS higher SNR, is bringing out the contrasts for detail. ;)

You can bring out those contrasts with a synthetic L as easily as with an acquired L. So if you DO just acquire RGB, that doesn't mean you are throwing away the option to do LRGB as well, and gain the same real world benefit as spending time on separate L acquisition. 

I also agree, what the optimal RGB ratio is isn't necessarily easy to compute...hence the reason it's probably easiest to start with 1:2:1 or even just 1:1:1, do some preliminary processing and see which of your channels exhibits the most noise, then reweight your continued acquisitions accordingly.
vercastro 4.06
As I said, this is a rabbit hole. I believe @Jon Rista has offered some great insight.

Back to basics. To reiterate a quick and easy answer for @Coolhandjo based on some scenarios:
  • If you intend to use 2 telescopes simultaneously, then a 533MC (RGB) on one and a 533MM (Lum) on the other is a viable and relatively efficient method of imaging.
  • If you intend to use one telescope, then just using the 533MM with the full gamut of filters is the most efficient option. It's also the cheaper option, since a set of decent LRGB filters is cheaper than the used market value of the 533MC.


There are a few reasons that mono is more efficient from a practical time/imaging perspective compared to OSC with one telescope:
  • When you swap cameras you have to spend time with the physical act of removing one and installing the other.
  • Then you have to orient the camera to match and confirm under stars with a plate solve.
  • Then you have to take brand new flats.
  • As controversial as Luminance seems to be around here (more so than I could have imagined), for the vast majority of targets it is practically a better use of time to image LRGB with a ratio of at the very least 1:1:1:1. I won't dig more into determining the magic holy grail ratio at this time; I base my recommendations on thousands of hours of Mono LRGB imaging experience over the past couple of years.


Now, you may say "but @vercastro , mono takes more time to get a full set of LRGB than OSC, especially if you have bad weather!" And you would be right...as of several years ago.

Back in those days, you could only practically image one filter a night. Or at best a big batch of each, with an autofocus routine in between. Those days are LONG gone. For a while now it has been possible with NINA to use all filters in a rotating sequence with perfect focus between all, the only time wasted between each exposure being the second or two it takes for the filter wheel to spin to the next filter on the list. This has been made possible by two innovations: filter offsets and advanced sequencer programming. I have been using these methods for a couple of years, and I end every imaging session with a perfect balance of LRGB and a well rested body. I am willing to talk about this topic in more detail if anyone is interested. For now, please enjoy a screenshot excerpt from my standard NINA sequence template:

image.png
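The offset arithmetic behind that sequence is simple. A sketch with invented offset values (real ones come from per-filter autofocus runs; NINA applies the equivalent correction automatically once offsets are configured):

```python
# Filter focus offsets in focuser steps, relative to L (invented values).
offsets = {"L": 0, "R": -35, "G": -20, "B": 50}

def next_focus_position(current_pos: int, current_filter: str,
                        new_filter: str) -> int:
    """Shift by the difference of the two filters' offsets instead of
    running a full autofocus routine after every filter change."""
    return current_pos + offsets[new_filter] - offsets[current_filter]
```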

So now time for the moment you've all been waiting for...some data.

On the left you have Mono RGB with 533MM, about 5 hours. On the right you have 533MC, about 9 hours. Both about SQM 20.6 according to my measurements. On each I have run DBE, SPCC, and STF. Please ignore bad flat donuts.
image.png
It would appear that OSC has a little more signal, but the noise profile of mono is much smoother and would fare better with de-noising.

And now for mono's specialty, let's add luminance. That brings the total mono integration up to approximately 10 hours.
image.png
Not so bad for OSC in the SNR department. However, the ugly green is a pain and hard to remove. Yet the mono RGB is naturally much more neutral and vibrant. Sure we could use some type of correction for the CFA of the OSC like @Jon Rista alluded to earlier. The problem is that last I checked neither Sony nor ZWO provides this correction. So we are left on our own to manually create our own calibration, which is extra complexity that shouldn't be needed. I want to point out here that SPCC even with all the correct camera and CFA filter settings DID NOT REMOVE THE GREEN BIAS.

Here's my quick attempt to fix the colours by eye:
image.png

And here's SCNR for fun (I don't use SCNR for BB because of how it destroys the colours and makes the entire image appear two-toned):
image.png

So some closing words.

I have imaged thousands of hours of mono, and to my (and many other friends') eyes the image from mono is always more pleasing to look at and easier to process. The imaging logistics of mono are easy-peasy these days, unlike in the past.

Based on my experience I believe there is a practical point of diminishing returns for most targets with regards to RGB data. Beyond that point it's more efficient to capture luminance. Some of the issues with images lacking colour that @Jon Rista has observed are, in my opinion, down to inadequate processing methods. Luminance detail is widely considered to be vastly more important to human perception anyway. After all, our eyes contain two orders of magnitude more rod cells than cone cells. That's why most image and video compression methods throw away almost all the colour data as compression rates increase.

I will probably do more tests in the future; pure-RGB vs LRGB in particular would be interesting. But my conclusion remains unchanged, and that is that mono RGB or LRGB is the best option, period. The devil is in the details. And mono will always be the more expensive option.

I personally believe the results on my page prove out my methods. I welcome constructive criticism to that effect.
coolhandjo 1.91
vercastro:
As I said, this is a rabbit hole. I believe @Jon Rista has offered some great insight.
...

*** Thanks. Very clear ***
jrista 8.59
vercastro:
So some closing words.

I have imaged thousands of hours of mono, and to my (and many other friends') eyes the image from mono is always more pleasing to look at and easier to process. The imaging logistics of mono are easy-peasy these days, unlike in the past.

Based on my experience I believe there is a practical point of diminishing returns for most targets with regards to RGB data. Beyond that point it's more efficient to capture luminance. Some of the issues with images lacking colour that @Jon Rista has observed are, in my opinion, down to inadequate processing methods. Luminance detail is widely considered to be vastly more important to human perception anyway. After all, our eyes contain two orders of magnitude more rod cells than cone cells. That's why most image and video compression methods throw away almost all the colour data as compression rates increase.

I will probably do more tests in the future; pure-RGB vs LRGB in particular would be interesting. But my conclusion remains unchanged, and that is that mono RGB or LRGB is the best option, period. The devil is in the details. And mono will always be the more expensive option.

I personally believe the results on my page prove out my methods. I welcome constructive criticism to that effect.

So, just a small fact: it is actually a mistaken notion that our detail vision involves rods. ;) There is some newer research; I think the main paper I found is from 2020. Detail vision occurs within the foveal spot, and the most detailed vision comes from the fovea centralis, or more specifically its central depression, the foveal pit (foveola). The foveal pit contains no rods at all, only M and L cones:

https://www.ncbi.nlm.nih.gov/books/NBK554706/

Rods don't play a role in detail vision. The foveal pit contains a high density of cones, and the fovea itself contains some S cones (albeit at lower acuity than M and L). Detail vision relies purely on color-sensing cells; rods play no role at all. CONTRAST is really where we get our detail vision from. A monochrome L channel certainly seems to have more contrast, but contrast is not limited to a monochrome L. Deep RGB data would allow for more contrast-related processing, allowing detail to be extracted from just RGB data. Further, there are microcontrast factors that are often only really distinct within color differences (i.e. when the luminance is the same and only the chrominance differs), and these can sometimes be lost in an L channel and diminished with weaker RGB data.

Video uses an L channel and throws away chrominance information (which is not actually color; it's a difference map, which allows it to be compressed a little more easily) because of efficiency, and because of the nature of chrominance channels (which are not actually RGB color data). This was originally devised in the early days of TV, when bandwidths were very limited. As compression rates increase, the quality of the resulting video definitely decreases, notably so in the color. Throwing away color information is not a free operation, or one devoid of consequences. (I don't know why so many people seem to think it is... it seems to be the "common knowledge" of LRGB processing, but it's not entirely true.) I'm a highly detail-oriented person, and I hate highly compressed video, mainly because color fidelity suffers (and ironically, it seems to me, more in the blues first... which really bugs me). O_o
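To make the "difference map" point concrete, this is essentially all a Y'CbCr conversion does (BT.601 coefficients shown here; modern video mostly uses BT.709 or BT.2020 weights, but the structure is identical):

```python
# Y'CbCr (BT.601): luma is a weighted sum of the gamma-encoded R'G'B'
# channels, and the two chroma channels are literally difference maps
# against that luma. Subsampling (e.g. 4:2:0) then stores Cb and Cr at
# a fraction of the luma resolution.
import numpy as np

def rgb_to_ycbcr_601(rgb):
    """rgb: float array (..., 3), gamma-encoded, values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    cb = (b - y) / 1.772                     # blue-difference chroma
    cr = (r - y) / 1.402                     # red-difference chroma
    return np.stack([y, cb, cr], axis=-1)
```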

Further, the notion that RGB data does not contain luminance information is also incorrect. RGB does contain intrinsic luminance (it has to; it's the third dimension of color). We are REPLACING the luminance inherent to RGB, which is properly weighted for R, G and B at each pixel, with a different monochrome channel that is NOT properly weighted for R, G and B at each pixel. How much this matters is going to depend on the individual... however, since the vast majority of broadband imaging these days involves an L channel, I am not sure how many astrophotographers really have an idea of what pure RGB is like and how it compares (even I don't have much knowledge there, although my anecdotal sense is that I prefer pure RGB, at least when the exposures reach a certain depth).
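For anyone who wants to see what that replacement looks like mechanically, a typical LRGB combination converts the RGB image into a luminance/chrominance space, swaps in the filter-L, and converts back. A rough sketch using CIE L*a*b* via scikit-image (real tools differ in the color space used and in how the two luminances are matched beforehand, and that matching step is where a lot of the fallout comes from):

```python
# Rough LRGB-combination sketch: swap the intrinsic L* of an RGB image
# for a separately captured luminance channel, via CIE L*a*b*. Real
# tools differ in color space and in how the two luminances are
# matched beforehand; the inputs here are assumed already stretched.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(rgb, lum):
    """rgb: float (H, W, 3) in [0, 1]; lum: float (H, W) in [0, 1]."""
    lab = rgb2lab(rgb)
    lab[..., 0] = lum * 100.0   # L* runs 0..100; replace intrinsic luminance
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```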
Edited ...
Like
rockstarbill 11.02
...
· 
So I told a minor lie. This is my last post in here for real this time.

The whole point is to not have noisy chrominance; I don't know how else to say it.

Yes, the scaling isn't accurate. I couldn't think of another way to explain it to you, since you continued to ignore the point I was trying to make about having strong RGB signal.

"If there is one form of noise reduction that I recommend above all others: integration. Stacking more light frames is, in my opinion, the single best way to reduce noise that we have at our disposal."
And so we get enough RGB so that we can stack frames to reduce our chrominance noise...

Goodbye
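(The stacking point in that quote is just the statistics of averaging: the noise standard deviation of a mean of N frames falls as 1/√N while the signal stays put. A quick simulation shows it:)

```python
# Averaging N frames: the signal is unchanged, while the noise standard
# deviation of the mean falls as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
signal, noise_sigma, n_frames = 100.0, 10.0, 36
frames = signal + noise_sigma * rng.standard_normal((n_frames, 100_000))

ratio = frames[0].std() / frames.mean(axis=0).std()
print(ratio)   # ~ sqrt(36) = 6
```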

Not sure why all the animosity? This is a great discussion with a lot of great inputs. To me, it feels like you desire to be right no matter what, and when faced with other points of view, which have validity and weight behind them, you get combative. 

I love to always remind people, this is a hobby (even if it is a business for some) and it is intended to be fun. 

-Bill
Like
C.Sand 2.33
...
· 
Bill Long - Dark Matters Astrophotography:
Not sure why all the animosity? This is a great discussion with a lot of great inputs. To me, it feels like you desire to be right no matter what, and when faced with other points of view, which have validity and weight behind them, you get combative. 

I love to always remind people, this is a hobby (even if it is a business for some) and it is intended to be fun. 

-Bill

Here I am responding once again, lol.

I wouldn't describe it as animosity. I'd say frustration.

It's because, to me, it felt like my comments on how to solve the issues presented were being ignored. It's even more frustrating because 90%+ of what Jon says is accurate and reasonable. But again, it feels like I'm trying to make the same point over and over. I'm perfectly fine with being wrong, but it felt like I was being told about the history of LRGB and how LRGB is bad because it washes things out. All true, but avoidable with proper techniques.

Additionally, it felt as if I was being lumped in with those "lrgb'ers". If you read my messages, I think it would be fair to say that I never explicitly bashed RGB imaging, nor do I think it should be bashed. I just think LRGB is better for an "optimal" image (given our constraints of the real world, blah blah blah).

I'm a college student and at the end of the day I don't have time to try and make the same point a different way. This is a hobby, and while the discussion is fun, talking to a wall when I'm worried about a test is not. 

Like
rockstarbill 11.02
...
· 
·  1 like
Bill Long - Dark Matters Astrophotography:
Not sure why all the animosity? This is a great discussion with a lot of great inputs. To me, it feels like you desire to be right no matter what, and when faced with other points of view, which have validity and weight behind them, you get combative. 

I love to always remind people, this is a hobby (even if it is a business for some) and it is intended to be fun. 

-Bill

Here I am responding once again, lol.

I wouldn't describe it as animosity. I'd say frustration.

It's because, to me, it felt like my comments on how to solve the issues presented were being ignored. It's even more frustrating because 90%+ of what Jon says is accurate and reasonable. But again, it feels like I'm trying to make the same point over and over. I'm perfectly fine with being wrong, but it felt like I was being told about the history of LRGB and how LRGB is bad because it washes things out. All true, but avoidable with proper techniques.

Additionally, it felt as if I was being lumped in with those "lrgb'ers". If you read my messages, I think it would be fair to say that I never explicitly bashed RGB imaging, nor do I think it should be bashed. I just think LRGB is better for an "optimal" image (given our constraints of the real world, blah blah blah).

I'm a college student and at the end of the day I don't have time to try and make the same point a different way. This is a hobby, and while the discussion is fun, talking to a wall when I'm worried about a test is not. 

Fair enough. Jon is a wordy one, like me at times as well. He is very knowledgeable, though. I doubt he was talking through you; he just likes to explain his points fully so he is very clear about his perspective. I think as an undergraduate you should be open to the thoughts of some of the older, experienced folks like Jon, who have been on the front lines of the recent growth in astrophotography and understand the finer details fairly well.

It would be a knee-jerk reaction to assume you are being lumped into some box you do not desire to be in. Exercise some patience, and if you have pressing matters, you should probably attend to those instead of arguing with people you agree with on the internet. ;)

To close, I would offer a final bit of advice: assume the people on the other side of chats in forums like this may be far superior in terms of their knowledge -- or might not be. That should matter less, though, than having positive exchanges.

Bill
Like
jrista 8.59
...
· 
·  3 likes
Bill Long - Dark Matters Astrophotography:
Not sure why all the animosity? This is a great discussion with a lot of great inputs. To me, it feels like you desire to be right no matter what, and when faced with other points of view, which have validity and weight behind them, you get combative. 

I love to always remind people, this is a hobby (even if it is a business for some) and it is intended to be fun. 

-Bill

Here I am responding once again, lol.

I wouldn't describe it as animosity. I'd say frustration.

It's because, to me, it felt like my comments on how to solve the issues presented were being ignored. It's even more frustrating because 90%+ of what Jon says is accurate and reasonable. But again, it feels like I'm trying to make the same point over and over. I'm perfectly fine with being wrong, but it felt like I was being told about the history of LRGB and how LRGB is bad because it washes things out. All true, but avoidable with proper techniques.

Additionally, it felt as if I was being lumped in with those "lrgb'ers". If you read my messages, I think it would be fair to say that I never explicitly bashed RGB imaging, nor do I think it should be bashed. I just think LRGB is better for an "optimal" image (given our constraints of the real world, blah blah blah).

I'm a college student and at the end of the day I don't have time to try and make the same point a different way. This is a hobby, and while the discussion is fun, talking to a wall when I'm worried about a test is not. 


Sorry, wasn't trying to box you in. I guess I debate enthusiastically. 

I know you weren't bashing RGB. I guess it was the nature of the support for LRGB. There are some notable issues with LRGB combination, evidenced by the sheer volume of posts asking for help on astro forums all over the net.

When the discussion shifted to LRGB, vs. the OP's original request about using OSC for color and mono for NB so he could combine the two, I guess that's when I went down the rabbit hole of why LRGB isn't necessarily the best. To be quite honest, I think OSC under dark skies (and he's borderline between Bortle 4 and 5, which is EXCELLENT sky for imaging) is actually going to give him BETTER color than LRGB (reasons already stated in the thread... those overlapping filters don't have the metamerism problems of RGB filters on mono, and the OSC bandpasses usually pass more light than RGB filters on mono). If he then used a mono camera to do NB imaging, especially if it's the same camera (same registration distance, same sensor size... so the only thing he would really have to sort out is orientation, which wouldn't have to be 100% absolutely perfect either), I think the combination would be awesome.

He doesn't necessarily need to move entirely to the mono camera, do LRGB (which I think would indeed complicate things from a color acquisition standpoint... OSC is a lot simpler), and sell the OSC camera. LRGB processing overall is more complex, both pre- and post-. OSC processing is pretty straightforward. The main reason people complain about OSC color vs. mono color is not really that the OSC itself is inferior... it's usually because debayering gives the pixels of the resulting image a different characteristic. However, debayering can be completely avoided if you use bayer drizzling instead, which gives you very mono-like data that would combine better with his mono NB data anyway.

So, OSC w/ bayer drizzling... fast and easy color acquisition, and quality color output data (i.e. it will have that nice per-pixel noise profile characteristic, rather than the smudgy, smeared noise profile characteristic of debayered images). Then mono+NB for getting high detail on the emission nebula. I think it's a match made in heaven. Processing bayer-drizzled OSC and NB is pretty much the same. OSC is braindead simple, actually, since there isn't even an RGB combination step! You integrate and you've got your broadband. The NB then just needs to be blended into that broadband image to taste.
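To illustrate why bayer-drizzled data keeps that per-pixel character: the mosaic is never interpolated, each frame only contributes its actual R, G, or B samples, and the dither offsets between frames fill the gaps over the stack. A heavily simplified sketch of the channel-separation half of that (the drizzle kernel and sub-pixel registration are omitted entirely):

```python
# Heavily simplified: split an RGGB mosaic into sparse per-channel maps
# with no interpolation at all. Real bayer drizzle accumulates many such
# dithered frames onto a common grid, which fills the holes; the drizzle
# kernel and sub-pixel registration are omitted here.
import numpy as np

def split_rggb(cfa):
    """cfa: 2D array of raw sensor values laid out in an RGGB pattern."""
    r = np.full(cfa.shape, np.nan)
    g = np.full(cfa.shape, np.nan)
    b = np.full(cfa.shape, np.nan)
    r[0::2, 0::2] = cfa[0::2, 0::2]   # red photosites
    g[0::2, 1::2] = cfa[0::2, 1::2]   # green photosites, even rows
    g[1::2, 0::2] = cfa[1::2, 0::2]   # green photosites, odd rows
    b[1::2, 1::2] = cfa[1::2, 1::2]   # blue photosites
    return r, g, b   # sparse maps; stacked dithered frames fill the NaNs
```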

LRGB, in contrast, requires both a more complex pre-processing routine and, on a per-channel basis, all that extra post-processing: massage the L, prepare the RGB for combination, do the combination, deal with all the fallout of the combination, THEN finally NB blending. I dunno, I don't really agree that LRGB is the easy win for "simpler" here. I think OSC is the dead-easy win for simplicity... with the only real added complexity being swapping cameras and realigning the frame.
Edited ...
Like
C.Sand 2.33
...
· 
·  2 likes
Jon Rista:
Sorry, wasn't trying to box you in. I guess I debate enthusiastically. 

I know you weren't bashing RGB. I guess it was the nature of the support for LRGB. There are some notable issues with LRGB combination, evidenced by the sheer volume of posts asking for help on astro forums all over the net.

I know you weren't, Jon; it's the frustration talking. You're all good. I'm sorry if I came off rude; I wanted to be clear that I wasn't going to debate this more at this time, in this thread, so I was a bit blunt. Good luck.

Edit: also, Vercastro has said it all better than I can.
Edited ...
Like