Combining OSC with Mono [Deep Sky] Processing techniques · Coolhandjo

jrista 8.59
Reg Pratt:
Coolhandjo:
Thanks! So when you say low efficiency, was it degrading the overall image?

What I meant was the fact that color cameras can't collect as much light in a given amount of time as mono. The Bayer matrix reduces incoming light, for one. Second, the pixels of a color camera are grouped into 2x2 blocks of 1 red, 2 green, and 1 blue (RGGB is the most common configuration). So when collecting light, only 25% of the pixels are collecting red photons, 50% green, and 25% blue.

With a mono camera there is no Bayer matrix, and the entire chip is being used to collect light with a given filter. This means in a given amount of time a mono camera with a red filter collects 75% more light than an OSC. Mono with a green filter collects 50% more light. Mono with a blue filter collects 75% more light. So a mono camera can collect more photons in less time than a color camera, making them much more efficient.
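For what it's worth, the 25/50/25 fractions fall straight out of the RGGB tile. A quick toy count in Python (hypothetical 8x8 sensor; real sensors just repeat the same 2x2 pattern):

```python
# Count what fraction of pixels in a simulated RGGB Bayer mosaic sample
# each color. The sensor size is hypothetical; any even size gives the
# same 25/50/25 split because the 2x2 tile just repeats.
def bayer_fractions(height, width):
    counts = {"R": 0, "G": 0, "B": 0}
    pattern = [["R", "G"], ["G", "B"]]  # the RGGB 2x2 tile
    for y in range(height):
        for x in range(width):
            counts[pattern[y % 2][x % 2]] += 1
    total = height * width
    return {color: n / total for color, n in counts.items()}

print(bayer_fractions(8, 8))  # {'R': 0.25, 'G': 0.5, 'B': 0.25}
```

The same count holds at any sensor size, which is why the per-channel pixel coverage is fixed by the CFA layout rather than by the sensor.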

This is not to say that OSC is bad. Just that it is less efficient. Which is best will depend on the individual, their goals, and their preferences.

Hmm, this is not NECESSARILY true, at least not for all OSC cameras. It entirely depends on the camera, and on the nature and design of the CFA. CFA filters are dye based, and they can either be strong, delivering purer color but lower sensitivity, or a bit weaker, delivering more overlapped color and higher sensitivity. A weaker CFA dye, since it usually results in more overlap (a wider, bell-shaped bandpass), will also pass more light from a broader band than your average RGB interference filter. Testing has been done indicating that on some of the most popular CMOS cameras, while OSC pixels do transmit a few percent less than the native mono Q.E. curve, because of the overlaps they often transmit more light than the mono camera using square-cutoff filters. Depending on the exact nature of the Bayer matrix design, some OSC cameras will pass even more light through the green pixels. Some Bayer matrices have a green and a white pixel, or a medium green and a weak green, where the weak one passes more light. Even further, because the bandpasses overlap, you are not capturing parts of the spectrum in just red, just green, or just blue; you are capturing parts of the spectrum with both sets of pixels. So it is FAR from as simple as collecting 25% of the light in red and blue, or 50% in green. It's a lot more complex than that.

Between all of these factors, while some OSC might be a little less sensitive than mono, some are actually more sensitive. Further, overlapping bandpasses allow for more accurate color reproduction, as they don't suffer from the metamerism that RGB interference filters do. There is absolutely no guarantee that mono with an interference filter will capture 75% more light than OSC. In fact, depending on the particular LRGB filter set being used, OSC may very well pass more signal than the mono+interference filter. The RGB filters for mono cameras, though they may reach nearly 100% transmission, are often fairly narrow. It is not, in fact, strictly the peak transmission that matters. It is the integration of the entire filter area (i.e. the area under the curve) that matters. Compare the areas UNDER the curve for common OSC cameras and common mono filter sets. You might be surprised which wins, despite lower peak transmission. Oh, and if you do that, don't forget to apply the CAMERA Q.E. curve to the RGB filter transmission rates... THEN take the area under that curve. You can't have a 100% transmission rate at the pixel if the pixel's efficiency is not 100%. The filter may pass 99%, but the pixel may only be sensitive to 80%, in which case you would really have around a 79% actual rate (this is, in fact, already accounted for in OSC sensitivity plots.)
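The area-under-the-curve comparison above can be sketched numerically. Everything in this snippet is invented for illustration (Gaussian-shaped stand-ins, not measured filter or QE curves), but it shows why a wide, lower-peak band can beat a narrow, high-peak one:

```python
import numpy as np

# Compare total signal = integral over wavelength of (transmission x QE)
# for a narrow high-peak interference filter vs a wide dye-based CFA band.
# All curves are made up for the example.
wl = np.linspace(400.0, 700.0, 301)   # wavelength grid, nm
dwl = wl[1] - wl[0]

def bell(center, width, peak):
    """Simple Gaussian-shaped bandpass, purely illustrative."""
    return peak * np.exp(-0.5 * ((wl - center) / width) ** 2)

qe = bell(550.0, 150.0, 0.85)                                # hypothetical sensor QE
mono_green = np.where(np.abs(wl - 530.0) < 35.0, 0.98, 0.0)  # square 70 nm filter
osc_green = bell(530.0, 60.0, 0.80)                          # wide CFA dye band

# Effective signal ~ sum of transmission * QE * d(lambda)
mono_signal = float(np.sum(mono_green * qe) * dwl)
osc_signal = float(np.sum(osc_green * qe) * dwl)
print(osc_signal > mono_signal)   # the wider band wins here despite lower peak
```

With real curves the winner depends on the specific camera and filter set, which is exactly the point: integrate, don't compare peaks.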

With regards to the Bayer matrix and the 25%/50%/25% argument: you can overcome the potential (potential!) limitations of the Bayer pattern with sufficient dithering during acquisition, enough frames, and Bayer drizzling to integrate, rather than any kind of debayering algorithm. Bayer drizzling will distribute all the source pixel data into EVERY output pixel, thus distributing the signal from all channels more evenly. This will eventually negate any impact from the fact that there are only 25% red and blue pixels. It requires more frames (frames, not necessarily more time, which is easy enough with modern CMOS sensors since they are so sensitive), but once you stack enough dithered frames with Bayer drizzling you can eliminate this particular handicap of OSC.
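A toy sketch of the Bayer-drizzle idea: each dithered frame's raw CFA samples go straight into per-channel accumulators, with no interpolation, and with enough random dithers every output pixel collects R, G, and B samples. Purely illustrative (integer dithers only; real drizzle also handles sub-pixel shifts and kernel weighting):

```python
import numpy as np

# Simulate many integer-dithered RGGB frames of a flat-color scene and
# accumulate raw CFA samples per channel, drizzle-style.
rng = np.random.default_rng(0)
H = W = 8
cfa = np.tile(np.array([[0, 1], [1, 2]]), (H // 2, W // 2))  # 0=R, 1=G, 2=B

scene = np.array([10.0, 20.0, 30.0])   # flat test scene: R=10, G=20, B=30
acc = np.zeros((3, H, W))              # accumulated signal per channel
cov = np.zeros((3, H, W))              # sample count per channel

for _ in range(200):                                # many dithered subs
    dy, dx = rng.integers(0, 8, size=2)             # random integer dither
    shifted_cfa = np.roll(cfa, (dy, dx), axis=(0, 1))
    frame = scene[shifted_cfa]                      # what the sensor recorded
    for c in range(3):
        mask = shifted_cfa == c
        acc[c][mask] += frame[mask]
        cov[c][mask] += 1

result = acc / np.maximum(cov, 1)
print(bool(cov.min() > 0))   # every output pixel sampled in every channel
```

After enough frames the per-channel coverage evens out, which is the sense in which drizzle negates the 25/50/25 handicap; the price, as noted above, is frame count.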

SOME OSC cameras may in fact be less efficient. Any of the cameras that use the Panasonic M sensor (i.e. ASI1600, QHY168) probably fall into the "less efficient" category. That is different than saying ALL OSC cameras are less efficient. The IMX183 sensor, for example, is actually highly efficient in general; the OSC versions lose only a couple percent of Q.E. to the CFA relative to the mono Q.E. curve, and the bandpasses of the CFA channels overlap fairly significantly as well. An OSC IMX183 is actually quite efficient. The IMX533 seems to peak at around 90% Q.E., which is even higher than the IMX183, and seems to sustain higher Q.E. across the visible spectrum. So I would expect any OSC version of that sensor to be quite efficient indeed.
jrista 8.59
Coolhandjo:
Screenshot_20240228_085708_Gallery.jpg
@Jon Rista (jrista) here is my sky showing the difference in Bortle

Ah, gotcha. I guess then, I'd probably call that Bortle 5. Sorry, I missed that you said Bortle 5 was the zenith before. Still, Bortle 5 (this would correspond roughly with a dark yellow zone) is very good skies. This is also just going off of very rough "visual observation" criteria. You might want to pick up an SQM-L meter. I always have one of these with me whenever I go to any dark site. It's a simple push-button sky darkness meter that reports in SQM, which is stellar magnitudes per square arcsecond. You just point it at the part of the sky you want to measure, hit the button, wait for the beep, and it will tell you exactly how dark the sky is. I regularly measured my dark site back in the day, and it averaged around 21.3 mag/sq", which is a dark green Bortle zone. Just two-tenths of a stellar mag darker, and you have a blue zone. Taking regular SQM-L measurements will give you some useful historical and statistical data, too... It can help you figure out whether your skies are darkening or brightening over time, the average sky brightness, what times of night are brighter vs. darker, etc.
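Since SQM is a logarithmic (magnitude) scale, comparing sites is easier on a linear scale. A small sketch using the commonly quoted approximation L [cd/m^2] ≈ 10.8e4 × 10^(−0.4 × SQM); treat the constant as approximate:

```python
# Convert SQM readings (mag/arcsec^2) to linear sky luminance and compare
# two sites. The 10.8e4 constant is the commonly quoted approximation.
def sqm_to_cd_m2(sqm):
    return 10.8e4 * 10 ** (-0.4 * sqm)

def brightness_ratio(sqm_brighter, sqm_darker):
    # Each magnitude of SQM difference is a ~2.512x change in sky brightness.
    return sqm_to_cd_m2(sqm_brighter) / sqm_to_cd_m2(sqm_darker)

# e.g. a 20.3 suburban reading vs the 21.3 dark site mentioned above:
print(round(brightness_ratio(20.3, 21.3), 2))  # -> 2.51 (about 2.5x brighter)
```

This is why even a few tenths of a magnitude on the meter is a meaningful difference in actual sky glow.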
C.Sand 2.33
@Coolhandjo are you planning on switching the 533mc and 533mm when you're getting narrowband data? As in, do you plan on having one rig (telescope, mount, etc.) and two cameras? If that's the case, imo going full mono is worth the extra $200 that the filters would be.

I agree with the mono>OSC crowd; however, it seems like that is being thoroughly explored elsewhere in this thread.
coolhandjo 1.91
@Coolhandjo are you planning on switching the 533mc and 533mm when you're getting narrowband data? As in, do you plan on having one rig (telescope, mount, etc.) and two cameras? If that's the case, imo going full mono is worth the extra $200 that the filters would be.

I agree with the mono>OSC crowd; however, it seems like that is being thoroughly explored elsewhere in this thread.

*** Thanks. I plan to do NB with mono and use OSC for stars. I also plan OSC for broadband targets like Orion and supplement with Ha and Luminance from mono ***
C.Sand 2.33
Coolhandjo:
@Coolhandjo are you planning on switching the 533mc and 533mm when you're getting narrowband data? As in, do you plan on having one rig (telescope, mount, etc.) and two cameras? If that's the case, imo going full mono is worth the extra $200 that the filters would be.

I agree with the mono>OSC crowd; however, it seems like that is being thoroughly explored elsewhere in this thread.

*** Thanks. I plan to do NB with mono and use OSC for stars. I also plan OSC for broadband targets like Orion and supplement with Ha and Luminance from mono ***

Well regardless of the osc vs mono quality argument, it's simply cheaper and less hassle to have one camera instead of switching cameras. So in my opinion I would say go full mono instead of messing with combining data (which isn't difficult if you do still decide to get two cameras)
HegAstro 11.91
Reg Pratt:
This is not to say that OSC is bad. Just that it is less efficient. Which is best will depend on the individual, their goals, and their preferences.


Andrea is correct. For RGB imaging, it is not at all the case that OSC is less efficient (neglecting, for simplicity, the effect of light pollution). When using a mono camera, remember that you are only using one filter at a time. The transmission curve of a dichroic filter needs to be multiplied by the sensor QE and integrated across the bandwidth of the filter, and you do this for each of the three RGB filters. You essentially do the same with the OSC, except accounting for the Bayer array. The calculations I have seen (once again, refer to Upton's work) show an advantage to OSC in pure photons collected given equal time, due mainly to the broader bandwidth of the dye-based color filters used in the Bayer array. The reason luminance imaging collects more information is simply because all pixels collect photons across the entire visible spectrum at each instant in time, at the complete expense of color data. That is not the case with either an OSC or mono RGB.
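The time-budget side of this argument can be made concrete with toy numbers. Everything here is invented; the point is only the structure of the comparison (mono gets the full sensor but a third of the time per filter, OSC gets all the time but fractional pixel coverage per channel):

```python
# Toy photon-budget comparison for a fixed session of length T.
# "Rate" = photons/hour per channel if the whole sensor saw that band
# continuously. All numbers are made up for illustration.
T = 3.0  # hours of total imaging time

mono_rate = {"R": 100.0, "G": 100.0, "B": 100.0}  # narrower interference filters
osc_rate = {"R": 130.0, "G": 130.0, "B": 130.0}   # wider overlapping dye bands
osc_fraction = {"R": 0.25, "G": 0.50, "B": 0.25}  # RGGB pixel coverage

# Mono: full sensor coverage, but only T/3 behind each filter.
mono_total = sum(r * (T / 3) for r in mono_rate.values())
# OSC: all of T on every channel at once, but fractional pixel coverage.
osc_total = sum(osc_rate[c] * T * osc_fraction[c] for c in osc_rate)

print(mono_total, osc_total)  # -> 300.0 390.0
```

Because the OSC pixel fractions sum to 1, the comparison collapses to the per-band collection rates, which is exactly why the broader dye bandpasses can tip the total in OSC's favor.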
C.Sand 2.33
Arun H:
Reg Pratt:
This is not to say that OSC is bad. Just that it is less efficient. Which is best will depend on the individual, their goals, and their preferences.


Andrea is correct. For RGB imaging, it is not at all the case that OSC is less efficient (neglecting for simplicity, the effect of light pollution). When using a mono camera, remember that you are only only one filter at a time. The transmission curve of a dichroic filter needs to be multiplied by the sensor QE and integrated across the bandwidth of the  filter, and do this for each of the three RGB filters. You essentially do the same with the OSC, except accounting for the Bayer array. The calculations I have seen (once again refer to Upton's work) show an advantage to OSC in pure photons collected given equal time due mainly to the broader bandwidth of the dye based color filters used in the Bayer array. The reason luminance imaging collects more information is simply because all pixels collect photons across the entire visible spectrum at each instant in time, at the complete expense of color data. That is not the case either with an OSC or with mono RGB.

What is "ignored" in these calculations (in my experience) is the use of a luminance filter. If we say mono RGB = OSC, assuming equal integration time evenly spread and all, there is a point where mono pulls ahead because of luminance. At some point the RGB data ratio is effectively set across the frame, and the only benefit you get from more integration time is luminance (for both mono and OSC). In this case mono obviously has the benefit, because it is simply collecting luminance across the whole sensor.

In my experience OSC is better/equal for short integrations (say, <30 mins or an hour, though this depends on target/light pollution/equipment/...). Then mono becomes more efficient, and increasingly so, until you hit that wall of diminishing returns. Of course OSC will also hit that point, but it will take longer than with mono.
andreatax 7.76
Well regardless of the osc vs mono quality argument, it's simply cheaper and less hassle to have one camera instead of switching cameras. So in my opinion I would say go full mono instead of messing with combining data (which isn't difficult if you do still decide to get two cameras)


Except for a couple of things:

1. If you already have an OSC, why go through the trouble of adding a FW and color filters?

2. You lose the simplicity and effectiveness of shooting RGB with an OSC, especially if your imaging time is at a premium (and when is it not?). The spatial information is well captured by the luminance layer, and the color information doesn't need to be too deep (in terms of SNR) to be combined in LRGB.
HegAstro 11.91
If we say mono RGB = OSC, assuming equal integration time evenly spread and all, there is a point where mono pulls ahead because of luminance. At some point the RGB data ratio is effectively set across the frame, and the only benefit you get from more integration time is luminance (for both mono and OSC). In this case mono obviously has the benefit, because it is simply collecting luminance across the whole sensor.


I believe we were extremely clear on this point. That is why Andrea, Jon, and I all compared RGB from mono to an OSC. A few posts ago, I specifically restricted my statement to RGB, and not LRGB. From my earlier post, the relevant point emphasized:

"at least some analyses show an efficiency advantage for OSC for RGB (note: not LRGB) imaging. "

Yes, luminance collects better SNR/time than RGB, mono or OSC. The calculations I recall show a 20-30% advantage to mono for a given imaging time split evenly between RGB and luminance versus the same time for an OSC. So it isn't the enormous difference people think it is. You can go deeper on fainter objects using mono in a given time, but OSC does have the advantage, to some, of making available a "complete" dataset at all times.
C.Sand 2.33
andrea tasselli:
Well regardless of the osc vs mono quality argument, it's simply cheaper and less hassle to have one camera instead of switching cameras. So in my opinion I would say go full mono instead of messing with combining data (which isn't difficult if you do still decide to get two cameras)


Except for a couple of things:

1. If you already have an OSC, why go through the trouble of adding a FW and color filters?

2. You lose the simplicity and effectiveness of shooting RGB with an OSC, especially if your imaging time is at a premium (and when is it not?). The spatial information is well captured by the luminance layer, and the color information doesn't need to be too deep (in terms of SNR) to be combined in LRGB.

If you're getting a filterwheel anyway for mono, I don't see how 1 is an issue. 

We're losing simplicity anyway by adding two rigs into this. Learning mono processing isn't difficult, and even then you can just immediately RGB combine, which does lose the benefits of mono (again, debated apparently, but that's not the point I'm trying to make). If your imaging time is at a premium I wouldn't recommend dealing with all the hoops of mono and OSC for RGB and narrowband and all that anyway.
C.Sand 2.33
Arun H:
If we say mono RGB = OSC, assuming equal integration time evenly spread and all, there is a point where mono pulls ahead because of luminance. At some point the RGB data ratio is effectively set across the frame, and the only benefit you get from more integration time is luminance (for both mono and OSC). In this case mono obviously has the benefit, because it is simply collecting luminance across the whole sensor.


I believe we were extremely clear on this point. That is why Andrea, Jon, and I all compared RGB from mono to an OSC. A few posts ago, I specifically restricted my statement to RGB, and not LRGB. From my earlier post, the relevant point emphasized:

"at least some analyses show an efficiency advantage for OSC for RGB (note: not LRGB) imaging. "

Yes, luminance collects better SNR/time than RGB, mono or OSC. The calculations I recall show a 20-30% advantage to mono for a given imaging time split evenly between RGB and luminance versus the same time for an OSC. So it isn't the enormous difference people think it is. You can go deeper on fainter objects using mono in a given time, but OSC does have the advantage, to some, of making available a "complete" dataset at all times.

I must have missed that. I'll go back and reread. 

I'm unsure why we're comparing RGB mono to osc? If the benefit of luminance is clear why is osc being promoted so much? If this is answered earlier please forgive me, I'll go back and read that.
andreatax 7.76
andrea tasselli:
Well regardless of the osc vs mono quality argument, it's simply cheaper and less hassle to have one camera instead of switching cameras. So in my opinion I would say go full mono instead of messing with combining data (which isn't difficult if you do still decide to get two cameras)


Except for a couple of things:

1. If you already have an OSC, why go through the trouble of adding a FW and color filters?

2. You lose the simplicity and effectiveness of shooting RGB with an OSC, especially if your imaging time is at a premium (and when is it not?). The spatial information is well captured by the luminance layer, and the color information doesn't need to be too deep (in terms of SNR) to be combined in LRGB.

If you're getting a filterwheel anyway for mono, I don't see how 1 is an issue. 

We're losing simplicity anyway by adding two rigs into this. Learning mono processing isn't difficult, and even then you can just immediately RGB combine, which does lose the benefits of mono (again, debated apparently, but that's not the point I'm trying to make). If your imaging time is at a premium I wouldn't recommend dealing with all the hoops of mono and OSC for RGB and narrowband and all that anyway.

Potentially you don't need one if you aren't planning to use RGB filters. And I was thinking of just one rig and switching sensors between sessions. In my time using monos I reserved the better nights (in terms of moonlight and transparency) to collect color information and all the other nights for L. Of course if seeing is good you're gunning for L.
andreatax 7.76
I'm unsure why we're comparing RGB mono to osc? If the benefit of luminance is clear why is osc being promoted so much? If this is answered earlier please forgive me, I'll go back and read that.


In an ideal world I'd never use LRGB and pros don't either. You get a cleaner image with just RGB (with monochrome cameras) if you can afford it (I mean the extra time).
C.Sand 2.33
andrea tasselli:
In an ideal world I'd never use LRGB and pros don't either. You get a cleaner image with just RGB (with monochrome cameras) if you can afford it (I mean the extra time).

Of course it's not an ideal world and all, and as you said: imaging time is at a premium. When you say "pros", who are you referring to? Space telescopes or the serial IOTD winners?
C.Sand 2.33
andrea tasselli:
Potentially you don't need one if you aren't planning to use RGB filters. And I was thinking of just one rig and switching sensors between sessions. In my time using monos I reserved the better nights (in terms of moonlight and transparency) to collect color information and all the other nights for L. Of course if seeing is good you're gunning for L.


I would rather leave filters in a filter wheel than swap them every time you want to shoot something different. Of course this also adds in the automation bonus.
andreatax 7.76
Of course it's not an ideal world and all, and as you said: imaging time is at a premium. When you say "pros", who are you referring to? Space telescopes or the serial IOTD winners?


Professional astronomers and, yes, IOTD serial winners. Bortle 1-2 skies essential.

I would rather leave filters in a filter wheel than swap them every time you want to shoot something different. Of course this also adds in the automation bonus.


I always do (with OSC) and see no downside to it. Obviously automation was never on the cards, because naturally my scenario doesn't apply.
C.Sand 2.33
andrea tasselli:
Professional astronomers and, yes, IOTD serial winners. Bortle 1-2 skies essential.

As far as I know, pretty pictures aren't quite the goal of professional astronomers; of course luminance isn't used.

I know I'm moving the goalposts a little here, but looking through the last two pages of (mono, broadband) IOTDs, I would say >80% used lum. I didn't keep a tally, but it was heavily in favor of lum.
andrea tasselli:
I always do (with OSC) and see no downside to it. Obviously automation was never in the charts because naturally my scenario doesn't apply.


The immediate downsides I see would be dust, risk of damage, and hassle. Though on an OSC cam I wouldn't recommend a filter wheel anyway.
jrista 8.59
Arun H:
Reg Pratt:
This is not to say that OSC is bad. Just that it is less efficient. Which is best will depend on the individual, their goals, and their preferences.


Andrea is correct. For RGB imaging, it is not at all the case that OSC is less efficient (neglecting, for simplicity, the effect of light pollution). When using a mono camera, remember that you are only using one filter at a time. The transmission curve of a dichroic filter needs to be multiplied by the sensor QE and integrated across the bandwidth of the filter, and you do this for each of the three RGB filters. You essentially do the same with the OSC, except accounting for the Bayer array. The calculations I have seen (once again, refer to Upton's work) show an advantage to OSC in pure photons collected given equal time, due mainly to the broader bandwidth of the dye-based color filters used in the Bayer array. The reason luminance imaging collects more information is simply because all pixels collect photons across the entire visible spectrum at each instant in time, at the complete expense of color data. That is not the case with either an OSC or mono RGB.

What is "ignored" in these calculations (in my experience) is the use of a luminance filter. If we say mono RGB = OSC, assuming equal integration time evenly spread and all, there is a point where mono pulls ahead because of luminance. At some point the RGB data ratio is effectively set across the frame, and the only benefit you get from more integration time is luminance (for both mono and OSC). In this case mono obviously has the benefit, because it is simply collecting luminance across the whole sensor.

In my experience OSC is better/equal for short integrations (say, <30 mins or an hour, though this depends on target/light pollution/equipment/...). Then mono becomes more efficient, and increasingly so, until you hit that wall of diminishing returns. Of course OSC will also hit that point, but it will take longer than with mono.

This is highly debatable, as an ongoing thread about whether to use or ditch the L filter demonstrates. 

https://www.cloudynights.com/topic/906827-anyone-else-completely-skipping-the-luminance-filter/

Use of an L filter does not necessarily improve SNR. It may make for a smoother image in the long run, usually at the GREAT cost of color accuracy and richness, but SNR requires a specific signal of interest in order to be measured. No one, as far as I know, has a model to determine the SNR of an LRGB image. What is it? The L channel you are swapping in makes NO distinction of color, which is not the same as the intrinsic luminance of the RGB that you are tossing when you unceremoniously do said swapping.

Smoother !== higher SNR

LRGB imaging usually comes at a notable cost: color. I've been scanning through TONS of galaxy images over the last several days. Go back five or ten years, when people were doing LRGB with CCDs and BINNING their RGB, at least 2x, often 3x. Because they were binning, there was little if any notable loss in SNR on the RGB data. There WAS a loss in terms of color resolution, but not SNR.
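A shot-noise-only sketch of why binned RGB held up (hypothetical electron counts; read noise ignored for simplicity):

```python
import math

# 2x2 binning sums four pixels: signal becomes 4x, but shot noise grows
# only as sqrt(4x) = 2x, so per-(super)pixel SNR doubles at the cost of
# spatial resolution. Numbers are hypothetical.
signal = 100.0                                         # e- per pixel per sub
snr_unbinned = signal / math.sqrt(signal)              # 100 / 10
snr_binned_2x2 = (4 * signal) / math.sqrt(4 * signal)  # 400 / 20
print(snr_unbinned, snr_binned_2x2)  # -> 10.0 20.0
```

That doubling is why CCD imagers could afford to spend so little time on RGB: the binned color SNR kept pace even with short exposures.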

Today, with CMOS sensors that don't have hardware binning support, LRGB combinations are not quite the same thing. The RGB is simply not that deep, signal-wise. So when you look at galaxy images today, they are very often mostly grayscale, with little smatterings of color here and there. There are some very vibrant galaxy images that you'll come across, demonstrating the effects of a saturation boost, but they lack the color diversity and extent that deeper RGB exposures get you. Rarely, you come across a galaxy image where the imager really did go deep on the RGB. The difference is incomparable, IMO, between a deep RGB image (even if still combined with L, although I find pure RGB to be best) and the very weak, shallow RGB data and super deep L data people generally aim for today.

I'm not very excited by a mostly-grayscale galaxy image with little over-saturated smatterings of color here and there... and not just galaxies, any image. L combination has a cost, it is not free, and that cost is the Whiz-Bang-Pop WOW impact of an incredibly rich, colorful AND detailed image of deep space.

You can find some deeply exposed OSC images of galaxies here on ABin. They also exhibit a broader range of color, thanks to the overlaps in the filters that allow for more accurate reproduction of a more complete spectrum of colors. If you want the most vibrant, colorful, eye-popping images of space, ditch the L... go OSC or mono+RGB (but with overlaps, such as Astronomik Type-2c or Johnson-Cousins BVR), and find some reasonably dark skies. L imaging is a crutch. It was always a crutch, but at least the people who invented it knew that, and at least the people who used it with CCDs understood it was a tradeoff, not some magical win that with a bit of pixie dust would improve every image. L is a tradeoff, and the tradeoff is color.

Color, IMO, is what makes space awesome. OSC is DARN GOOD at color (under dark skies)!
C.Sand 2.33
Jon Rista:
This is highly debatable, as an ongoing thread about whether to use or ditch the L filter demonstrates. 

[...]

I'm not very excited by a mostly-grayscale galaxy image with little over-saturated smatterings of color here and there...or not even just galaxies, any image. L combination has a cost, it is not free, and that cost is that Whiz-Bang-Pop WOW impact of an incredibly rich, colorful AND detailed image of deep space.


Crudely put: just increase the saturation. With proper processing techniques you should be able to isolate the galaxy from the background, thus not increasing any background color noise. If you're worried about color accuracy, you can compare your LRGB image to an RGB image created from just your RGB data, so as not to oversaturate.
Jon Rista:
You can find some deeply exposed OSC [...] not some magical win that with a bit of pixie dust would improve every image. L is a tradeoff, and the tradeoff is color.

Color, IMO, is what makes space awesome. OSC is DARN GOOD at color!


You can find plenty of LRGB galaxies (and plenty of other broadband targets; I'm not sure if we're excluding those for a purpose) that are wonderfully colorful as well. Mono RGB filters also overlap, and with something such as SPCC you have no issue accurately reproducing the complete spectrum.

I don't think pixie dust and magic is a fair representation of what has been said.

Do you have a source for who invented luminance imaging? I'd be interested to read more about that.

---------------------------------------------------------------------------
Edited down the quotes so my message wasn't as much of a wall of text.
HegAstro 11.91
I'm unsure why we're comparing RGB mono to osc?


Because that was the OP's question, and he already has a color camera! Here is the question again:

"I am wondering if anyone would recommend combining OSC data with Mono data. in particular OSC for RGB "

Regardless of the benefit of LRGB versus RGB imaging, for pure RGB imaging, it is not at all the case that mono is superior to OSC. The OP has the option of doing pure RGB imaging with his OSC and using it as such, or combining it with luminance data from his mono. So the question was whether he should invest in RGB filters for his mono cam or not.
andreatax 7.76
Crudely put: just increase the saturation. With proper processing techniques you should be able to isolate the galaxy from the background, thus not increasing any background color noise. If you're worried about color accuracy, you can compare your LRGB image to an RGB image created from just your RGB data, so as not to oversaturate.


I did on several of mine, and with equal integrated time the RGB always appears with more brilliant colors and a wider gamut than the LRGB. So much so that I posted pure RGB rather than LRGB for some of them.
C.Sand 2.33
Arun H:
Because that was the OP's question, and he already has a color camera! Here is the question again:

"I am wondering if anyone would recommend combining OSC data with Mono data. in particular OSC for RGB "

Regardless of the benefit of LRGB versus RGB imaging, for pure RGB imaging, it is not at all the case that mono is superior to OSC. The OP has the option of doing pure RGB imaging with his OSC and using it as such, or combining it with luminance data from his mono. So the question was whether he should invest in RGB filters for his mono cam or not.


Yes, and I answered that question with:
are you planning on switching the 533mc and 533mm when you're getting narrowband data? As in, do you plan on having one rig (telescope, mount, etc.) and two cameras? If that's the case, imo going full mono is worth the extra $200 that the filters would be.


C.Sand:
Well regardless of the osc vs mono quality argument, it's simply cheaper and less hassle to have one camera instead of switching cameras. So in my opinion I would say go full mono instead of messing with combining data (which isn't difficult if you do still decide to get two cameras)


I did misread OP's question a little bit, but in my opinion I would still recommend mono. OP wouldn't have to deal with messing with the imaging rig. Flats could be reused, and the potential for damage and all that avoided. Plus, if OP is committed to buying the narrowband filters already, a mono set goes for ~$300 at most? Selling the 533mc would net ~$300 back (assuming a $600 asking price). Money is always nice.
C.Sand 2.33
andrea tasselli:
I did on several of mine, and with equal integrated time the RGB always appears with more brilliant colors and a wider gamut than the LRGB. So much so that I posted pure RGB rather than LRGB for some of them.

Well, I can't say much to this without processing the data myself. Whatever works for you I suppose.
HegAstro 11.91
Yes, and I answered that question with:


He asked a specific question. We answered it. Other answers are possible, as with every other thing, depending on the constraints.

More importantly, down the thread, one of the other posters made a manifestly incorrect statement about mono RGB versus OSC:

"There's also quite noticeable losses in light gathering behind the CFA filters compared to the same colour filter on a mono camera (on a per pixel level with the same colour filter)."

That was the reason why we corrected him, and that was why the discussion went into OSC versus mono RGB, at least for myself. 

I have, I think, outlived my usefulness in contributing to this thread, both for the OP, as well as myself, so I will exit it.
jrista 8.59
Jon Rista:
This is highly debatable, as an ongoing thread about whether to use or ditch the L filter demonstrates. 

[...]

I'm not very excited by a mostly-grayscale galaxy image with little over-saturated smatterings of color here and there... and not just galaxies, any image. L combination has a cost, it is not free, and that cost is the Whiz-Bang-Pop WOW impact of an incredibly rich, colorful AND detailed image of deep space.


Crudely put: just increase the saturation. With proper processing techniques you should be able to isolate the galaxy from the background, and thus avoid amplifying any background color noise. If you're worried about color accuracy, you can compare your LRGB image to an RGB image created from just your RGB data, so as not to oversaturate.
Jon Rista:
You can find some deeply exposed OSC [...] not some magical win that with a bit if pixie dust would improve every image. L is a tradeoff, and the tradeoff is color.

Color, IMO, is what makes space awesome. OSC is DARN GOOD at color!


You can find plenty of LRGB galaxies (and plenty of other broadband targets; I'm not sure if we're excluding those for a reason) that are wonderfully colorful as well. Mono RGB filters also overlap, and with a tool such as SPCC you have no issue accurately reproducing the complete spectrum.

I don't think pixie dust and magic is a fair representation of what has been said.

Do you have a source for who invented luminance imaging? I'd be interested to read more about that.

---------------------------------------------------------------------------
Edited down the quotes so my message wasn't as much of a wall of text.

Just increasing saturation doesn't increase SNR, and it doesn't increase the accuracy of the color. Yes, you can boost saturation. That tends to show, as boosted saturation. The problem with that is, if you didn't pick up any color in the first place, you can't conjure it by increasing saturation. If you have a GRAYISH color for the bulk of your galaxy, or straight up gray for IFN, or a very weak brown for dark dusty nebula...then you don't really have color. When you boost saturation, that grayish color of your galaxy is going to take on whatever minimal color might be present. That is most often BLUE, sometimes more of a tan. Neither is really representative of the nature of the galaxy, which involves a whole wide range of color. But I see a lot of galaxy images that don't really have much more than blue and yellow, with some slight variations in their tones. They may be VIBRANT blues and BRIGHT yellows...but a bicolor galaxy isn't really all that realistic. It is a consequence, though...
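To illustrate the point, here's a minimal sketch (hypothetical pixel values, using Python's stdlib `colorsys`) of what a saturation boost does to a nearly gray pixel with a faint blue cast: it doesn't recover real color, it just amplifies whatever tiny hue bias (signal or noise) happens to be there.

```python
import colorsys

def boost_saturation(rgb, factor):
    """Scale the HSV saturation of an RGB triple by `factor`, clipped to 1."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, min(1.0, s * factor), v)

# Hypothetical nearly-gray galaxy pixel with a faint blue cast.
gray_blue = (0.50, 0.50, 0.52)
boosted = boost_saturation(gray_blue, 8.0)
print(boosted)  # the 0.02 blue bias becomes a strong, uniform blue tint
```

The pixel comes out distinctly blue, even though almost no blue signal was actually captured, which is the "bicolor galaxy" effect described above.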

I already said that there are some colorful galaxies posted here. I also said some are RGB only. I ALSO said there is a huge difference between weak RGB plus L, and strong RGB. If you have, say, 15 hours of L and 2 hours each of RGB, versus 20 hours of just RGB...the latter is going to be a beautiful image, while the former is going to be mostly gray. Or blue, and maybe some vibrant pink if someone saturated the little color they had.

This is a debate that has been ongoing over on the CN thread I linked, if you want more background. LRGB is not a guaranteed way to improve your images. It is a tradeoff. The way people seem to be doing it these days, it's maybe 1 hour each of RGB and then 5, 7, or maybe even more of L, or maybe 2-2.5 hours of each RGB channel and 15, 20 hours or more of L. That is WILDLY imbalanced. It used to be 3:1:1:1, but even that has problems (read the CN thread; it's been over-iterated over there, and it's kind of a lot, but there is a fundamental reason why blending an L channel causes color problems). It would probably be better, if you insist on doing LRGB, to do 1:1:1:1, equal integration time on each channel. Even that would still leave you with an L signal that is about 3x as strong as the RGB, right?
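As a rough back-of-the-envelope comparison, here is a sketch of the per-channel SNR of the two hypothetical 21-hour plans mentioned above (15h L + 2h each RGB versus 7h each RGB). It assumes a purely shot-noise-limited signal, and that the L passband collects roughly 3x the flux of a single color filter since it spans approximately R+G+B; both are simplifying assumptions, not measurements.

```python
import math

def snr(hours, relative_flux=1.0):
    # Shot-noise-limited: SNR grows as the square root of collected photons.
    return math.sqrt(hours * relative_flux)

# Plan A: heavy L (15h) with token color (2h per channel).
lrgb = {"L": snr(15, 3.0), "R": snr(2), "G": snr(2), "B": snr(2)}
# Plan B: same 21 hours, all spent on color (7h per channel).
rgb_only = {"R": snr(7), "G": snr(7), "B": snr(7)}

print(f"color SNR ratio (RGB-only / LRGB): {rgb_only['R'] / lrgb['R']:.2f}")
```

Under these assumptions the RGB-only plan ends up with noticeably stronger color channels for the same total time, which is the "3x as strong L" imbalance in numbers.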

As for who invented LRGB... Let me see if I can dig up the old pages. These go back to like 2001 or 2002... It's been a LONG time. You need to understand the context of the times back then. CCD cameras were incredibly expensive...and usually extremely noisy. Read noise levels were 20e- to 40e-, or worse in some cases. It was a wildly different time than today. Getting any good signal was tough, especially with RGB. Shifting the balance of exposure time toward L, and binning the RGB 2x or 3x, reduced the resolution of the color data. You would then compensate for that by getting more L data, to give it a higher SNR so it could HANDLE more aggressive processing. The key purpose of the aggressive L processing was to increase the contrast of all the details. CONTRAST is the key. The RGB was lower resolution, so you weren't going to pick up much in the way of color DETAILS (and there are color details). Instead, you were going to take monochromatic details and crudely, with a broad brush, paint them with heavily denoised or even straight up blurred RGB data that was upsampled from its 1/2 to 1/3 resolution. Once you had blurred the RGB, you could then stretch it heavily to roughly match the signal distribution of the L channel, and then combine them. Stretching both roughly the same avoided the color washout problem...but they had the "SNR" to do so, because they binned their RGB acquisitions.

The way LRGB is done today is a bit different. We don't bin, largely because CMOS cameras currently can't (they do software "binning", which doesn't give you the same SNR boost as CCD binning), so we generally have a hard time stretching our RGB data sufficiently to blend with the L without problems. Hence the long-standing "combining my L with RGB washes out the colors" issues, for which there are countless posts and threads all over the net spanning...well, at least 12-14 years (I recently found a bunch of threads on the topic on the PixInsight forums dating as far back as 2010!)
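The read-noise difference between the two kinds of binning can be sketched like this. It's a simplified model that ignores dark current and sensor-specific readout details: hardware (CCD) binning sums charge before readout, so a 2x2 superpixel pays one dose of read noise, while software binning sums four already-read pixels, so read noise adds in quadrature. The 20e- and 1.5e- figures are the ballpark values mentioned above.

```python
def binned_read_noise(read_noise_e, n=2, hardware=True):
    """Effective read noise of an n x n binned superpixel, in electrons.

    Hardware binning: one readout per superpixel, so one dose of read noise.
    Software binning: n*n readouts summed, noise adds in quadrature,
    i.e. sqrt(n*n) * read_noise = n * read_noise.
    """
    return read_noise_e if hardware else read_noise_e * n

print(binned_read_noise(20.0, hardware=True))   # old CCD, 2x2 hardware bin
print(binned_read_noise(20.0, hardware=False))  # same sum done in software
print(binned_read_noise(1.5, hardware=False))   # modern low-noise CMOS
```

With 20e- of read noise the software sum doubles the penalty, which is why hardware binning mattered so much in the CCD era; at 1.5e- the penalty is almost negligible either way.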

The general gist is, in order for LRGB to provide the benefit it was intended to (reducing the INSANE integration times people back in 2001 were facing with their extremely noisy cameras and slower systems), you pack the exposure time into L and make that high SNR, you blur the CRAP out of your RGB, and you make do...with a bad situation.

Why are we still doing this today? I mean, I get that on some occasions you might be chasing something so faint that you just can't get enough data in any sense of a "reasonable timeframe", and using an L filter with LRGB combination could help. You would still get lots of RGB data, but then use an L filter as well to dig deep, and maybe pull out something of 26th magnitude or thereabouts. But why is LRGB still STANDARD practice...in light of the AMAZING technology we have today? CMOS cameras with as little as 1-1.5e- read noise. Telescopes at f/2!! Off-the-shelf automation technology. Point, push button, shoot. That's how easy astrophotography has become, compared to the era in which LRGB was invented as a crutch, because a crutch was desperately needed...

At the very least, I think we've entered territory where the equation is wildly imbalanced. Too darn much L, not even close to enough RGB. The lack of color in images seems to be growing at an accelerating rate, and what color there is...yes, is often very oversaturated. I honestly think that scaling the L back and boosting the RGB, with more of a 1:1:1:1 integration time balance, might breathe some colored light back into astrophotography these days.