The need for REAL signal - Thoughts on true image quality · [Deep Sky] Processing techniques · Jon Rista

rockstarbill 11.02
Luka Poropat:
Bill Long - Dark Matters Astrophotography:
I have a PL16803 with a measured 8e of noise, which is ridiculously low.


Considering modern technology that is ridiculously high. Even the IMX455 at LCG at its highest has 3.5e of read noise, while reaching 1.5e of read noise at HCG.
As someone who is using both large CCD and CMOS sensors (above FF size), I can tell you the only upside of the CCD architecture in 2024 is the size of the pixels, which are a good match for longer focal length telescopes under average seeing conditions. Apart from that, CMOS sensors demolish them completely: read noise, dark current, higher QE, amongst other things. The future is now.



You entirely missed the per micron squared part of what I said. Measured in that manner, it has less read noise. That was my entire point in the statement. I also said it would kill it in broadband, which will swamp noise just fine. The exposures will be a little longer is all.
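For anyone who wants to check the per-area arithmetic both ways, here is a quick back-of-the-envelope sketch (the sensor figures are rough assumptions taken from this thread, not official specs):

```python
# Sketch: the "per micron squared" read noise comparison, two ways.
# Assumed figures: KAF-16803 ~8 e- at 9 um pixels; IMX455 ~1.5 e- (HCG)
# at 3.76 um pixels.
import math

ccd_rn, ccd_px = 8.0, 9.0
cmos_rn, cmos_px = 1.5, 3.76

# Naive linear density, e- per um^2 (the sense in which the CCD "wins"):
print(ccd_rn / ccd_px**2)    # ~0.099 e-/um^2
print(cmos_rn / cmos_px**2)  # ~0.106 e-/um^2

# But independent noise sources add in quadrature, so summing enough
# small pixels to cover one 9 um pixel gives sqrt(n) * rn, not n * rn:
n = (ccd_px / cmos_px) ** 2      # ~5.7 CMOS pixels per CCD-pixel area
print(cmos_rn * math.sqrt(n))    # ~3.6 e- over the same area
```

On a naive linear density the big CCD pixel does come out lower, but with quadrature summation the small pixels win over the same area; which counting applies depends on how the pixels are actually combined.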

I use a Sony IMX461 (44x33mm medium format sensor) so you do not need to tell me about the future being now. My 20" CDK in Chile will use the IMX455. I am all aboard the CMOS train.
jrista 8.59
Jon Rista:
Now, one thing I think is terrible these days is the utter obliteration of noise. That is, in a nutshell, one of the radical overreliance-on-AI issues that I'm alluding to. I think obliterating noise is terrible! Maybe it's a trend, maybe newer imagers like it...if you think it's producing a better quality image, that's one of the things I'm trying to call attention to. I would strongly disagree. This is one of the kind of sad and depressing trends I've seen...OBLITERATION. Not just of the noise, but the finer details as well. It seems the newer generation of imagers have...well, I guess not lost, maybe never developed...an eye for the NUANCES of their images. Nuances which seem to get obliterated rather readily these days, between excessive NR and what I believe, based on my newfound recognition of these particular artifacts, is star removal and star addition. I've rarely done star removal, as it always seemed to be destructive to the finer details and nuances of the image. Supposedly there are some more manual approaches these days that can preserve the details; I still have to try them out. But with things like SXT, there is a characteristic in the artifacts of the image that is destructive to the details. I see it all over the place. Obliteration. It's really disappointing, and in this case I am not in alignment with the majority...I don't think it looks good, I don't think it will ever look good. It looks damaged, to be perfectly honest.


As you mention later in your post, AI will never reveal what was not there in the first place. In that sense, when you leave granular noise in the image, are we sure our eyes are just not being fooled into thinking there is detail when it really is just noise? Part of the reason I leave noise in images is for this reason. When I flip back and forth between no noise reduction and full noise reduction, the only thing I am 'obliterating' is the noise, but to the untrained eye - which is everyone who does not see the raw data - that noise appears as if it is detail. 

If you were talking about images with low integration times I would be inclined to agree, but there are images out there where people have completely de-noised them while preserving a good amount of detail.

I am also not going to sit here and pretend that people don't nuke details with de-noise techniques. It comes down to the skill of the processor.
Jon Rista:
Regarding technology: some of the best images ever produced are still from CCDs. In particular the KAF-16803 CCD, which is still the king of the best images I've ever seen. SOME CMOS images are starting to get there, but I'm still not quite sure I've come across a CMOS image that really surpassed the best 16803 images I've ever encountered.


I think this would have been true before the rise of the IMX571 and friends. Respectfully, I have to say this is a very dated opinion that would hold if we were comparing images from the 1600MM with CCD cameras. The best images I see today are from CMOS users. Some of the big names are clinging onto their CCD cameras, to be sure, but to say that only some CMOS images are starting to approach CCD images is a bit of a smack in the face to a lot of the high quality work that has been completed in the past few years.

Since we are just two people arguing on the internet, I suppose this is very subjective. So I will say that when I look at competition winning images (not AB's IOTD), the only place where CCDs still have a lot of fight left in them is in images of galaxies, i.e. where their large pixels can still excel with large telescopes. However, even here they are losing ground to CMOS images.
Jon Rista:
FWIW, overall signal collection efficiency isn't just about Q.E. A lot of it is about aperture and image scale. An old noisy sensor, with big pixels and a big aperture, even if it had say 60% Q.E., could still be a more efficient system. Read noise is also a factor that can be negated, if you can expose long enough per frame. Q.E. differences can often be overcome with small adjustments to image scale. Newer CMOS cameras are amazing, can't wait to get a QHY600, but...there are images produced with CCDs paired with systems that were incredible light guzzlers, in their time and today.


Yes, I am aware, and I am glad you brought this up because I think more people need to realize this! Still, the only realm where CCDs can maintain any sort of competitiveness is with large, long focal length telescopes that can fully utilize a CCD's large pixel size. Even then, I think someone building a system from scratch is just better off buying a CMOS these days. It's not going to get any better for CCDs as the CMOS revolution marches on.
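As a back-of-the-envelope illustration of that efficiency argument (both systems below are invented for the example, not real rigs):

```python
# Sketch: for extended objects, per-pixel photon rate scales roughly as
# aperture area * (image scale)^2 * QE. Both systems are made up.
import math

def rel_rate(aperture_mm: float, focal_mm: float, pixel_um: float, qe: float) -> float:
    """Relative photons per pixel per second from an extended source."""
    scale = 206.265 * pixel_um / focal_mm        # arcsec/pixel
    area = math.pi * (aperture_mm / 2.0) ** 2    # collecting area, mm^2
    return area * scale ** 2 * qe

old_ccd = rel_rate(aperture_mm=500, focal_mm=3400, pixel_um=9.0, qe=0.60)
new_cmos = rel_rate(aperture_mm=130, focal_mm=910, pixel_um=3.76, qe=0.85)
print(f"old big-pixel system / new small-pixel system: {old_ccd / new_cmos:.1f}x")  # ~4x
```

Despite the lower Q.E., the big aperture and coarse image scale make the older system several times faster per pixel, which is exactly the point that efficiency is not just about Q.E.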
Jon Rista:
I actually am working with RC-Astro to try and improve NXT's recognition of fine, dark details, which is one of the kinds of details that is so readily destroyed by NXT. I'm feeding him carefully selected exemplars to try and help future NXT trainings identify and recognize fine, dark details, so that darker structures don't just get totally smoothed over. I have always found that the details you can find in images with bright backgrounds, and lots of dark foreground dust, are some of the most intriguing. These days, dark dust is almost always rendered very smooth, flat, largely structureless (utterly on a finer level, somewhat on coarser scales) and largely lacking in any interesting detail.


At any rate, it's good to see you getting back into things and I hope your work with RC-Astro pays off, because we all benefit!

Thanks for the welcome. I'm a detail guy, so I really hope that NXT can indeed be trained to recognize the fine details. I'm not sure how its training works, so mainly I'm just trying to find images that I know have lots of fine dark details that NXT's current version tends to nuke, and am sending them along. Hopefully, with enough of them, future versions of NXT will be able to recognize fine detail and preserve it accordingly.

Regarding CCD images...I need to see if I can find some of the old ones I really thought were beyond exceptional. There is a characteristic with them...notably in the backgrounds...that I don't think CMOS images have achieved yet...at least, not without using AI to effectively obliterate the noise, and, to me, I guess it shows when that's done (i.e. AI-derived characteristics...it's something I attune to, I guess, these kinds of details). I am not saying every imager achieved this, certainly not. But, when a CCD was paired with the right light bucket at the right image scale...omg...the results were beyond phenomenal. I've been looking for this characteristic in CMOS images...either I see AI noise obliteration, or...I see that CMOS background signal noise still hasn't quite achieved the same glassy-pond-like characteristic yet. I'll see if I can find some examples...it's going to take some digging though, as I was never good about remembering WHO made the images.

I would agree with people going with CMOS these days. This thread isn't really about technology per se...I'm not saying people should be getting CCDs. Just that...there is a level of quality that, outside of AI obliteration tactics, I haven't yet seen CMOS achieve. I think in part it is the image scale difference. For any given light bucket, the CMOS image scale for background signal SNR is actually a bit of a handicap. I think with careful downsampling with just the right resampling algorithms, the differences could probably be largely negated. Maybe? Still, it's hard to quantify and qualify all of this, given the advent and use of AI. I DO see these characteristics in the background signals of very smooth images that I now attribute to AI processing. The use of AI definitely hampers knowing for sure, though. Some of these CCD images though (and a lot are definitely galaxies, but there are some nebula images that are just sublime when a CCD is used right with the right light bucket) still exhibit a level of real signal quality that I don't think AI will ever touch, and maybe native-resolution CMOS won't either (due to the finer image scale).

Anyway, truly, not saying people should get CCDs, not at all. I like CMOS! ;) Big fan. Just wondering if AI is limiting IQ, due to certain usage patterns. Is it being overused? Is it being used too aggressively? It seems that way. Even when I come across those images with that initial "wow" impact, they look great in the small rectangle on the tech details page. But when you view them larger...even those "wow impact" images exhibit characteristics...and when you view them full size, those characteristics really start to show. Not 100% of the time; there are those images that are just amazing, period. But it surprised me how often I go "ooh, yikes" when I view a full size image, and wonder what happened.

Also wondering if there is an optimal scope to pair with these smaller-pixeled CMOS cameras, that might normalize aperture and image scale, allowing that same kind of light guzzling efficiency that some of the big pixel CCDs had back in the day. One thing I DO know is that a longer focal length is generally going to bend light less, and have fewer optical aberrations as a result. For galaxies, a 14", 20" scope paired with 9 micron, even 12 micron pixels...I don't think CMOS is ever going to touch that, unless they ALSO have the bigger pixels. You could use a shorter scope, but with the same aperture... That could normalize the image scale and aperture, but the FoV is going to change. The smaller pixels could negate the FoV change...but...the real kicker here is that the shorter focal length and the greater bending of light IS going to impact the optical IQ, and you'll have to deal with more optical aberrations.
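To put rough numbers on that trade (hypothetical systems; the 36.8 mm width matches a 16803-class sensor, and 206.265 converts µm/mm to arcseconds):

```python
# Sketch: same aperture, half the focal length, half the pixel pitch.
# Image scale can be held constant, but field of view follows
# sensor size / focal length. All systems here are hypothetical.

def scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcsec/pixel."""
    return 206.265 * pixel_um / focal_mm

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Field of view in degrees (small-angle approximation)."""
    return 57.2958 * sensor_mm / focal_mm

# A 20" scope at f/6.8 with 9 um pixels vs the same aperture at f/3.4
# with 4.5 um pixels, both on a 36.8 mm wide (16803-class) sensor:
print(scale_arcsec(9.0, 3454), fov_deg(36.8, 3454))  # ~0.54 "/px, ~0.61 deg
print(scale_arcsec(4.5, 1727), fov_deg(36.8, 1727))  # ~0.54 "/px, ~1.22 deg
```

Same aperture and same image scale means the same light per pixel; what changes is the field of view and, as noted above, the optical correction burden of the faster design.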

I really wonder if we might see a CMOS camera with small pixels, but with useful hardware binning. Even if it isn't charge binning, I would say voltage binning would be fine here, if it could work up to say 3x3 pixel groups. If a CMOS camera came along with, say, a 3 micron native pixel size and two voltage binning modes allowing 6 micron and 9 micron binned pixels, THAT could level the playing field once and for all. ;)
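Until such a camera exists, software binning shows the bookkeeping. A sketch (note that software summation is not the hardware charge or voltage binning described above, since every small pixel still contributes its own read noise):

```python
# Sketch: software 3x3 binning of a small-pixel CMOS frame, with a
# synthetic sky background and read noise. All numbers are assumptions.
import numpy as np

def bin_frame(img: np.ndarray, k: int) -> np.ndarray:
    """Sum k x k blocks; trims edges that don't divide evenly."""
    h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    return img[:h, :w].reshape(h // k, k, w // k, k).sum(axis=(1, 3))

rng = np.random.default_rng(0)
sky = rng.poisson(lam=5.0, size=(300, 300)).astype(float)   # 5 e-/px sky
frame = sky + rng.normal(0.0, 1.5, size=sky.shape)          # 1.5 e- read noise

binned = bin_frame(frame, 3)   # 3 um pixels -> effective 9 um pixels
# Signal grows 9x; shot noise grows 3x; read noise grows 3x (sqrt(9) in
# quadrature), so per-"pixel" SNR improves ~3x at the cost of sampling.
print(frame.mean(), binned.mean())
```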
rockstarbill 11.02


I love seeing a thread where Jon, the world's biggest fan of CMOS back in the day, gets called out in a thread with people telling him how much better CMOS is than CCD. 

Oh the irony! 

With my personal enjoyment of that over (mostly), I think AI is being overused. I have watched folks process modern CMOS data and take the noise slider in NXT and BXT's non-stellar detail slider and mash them to the far right. I have free data on my website, which I am not going to link to here as it's not super relevant. Anyhow, thousands of people have downloaded and processed that data. The exact same data, and some of the results were a melted Orion; it was literally hammered so hard with every Xterminator there is that it looked like a wax figurine of Orion. This experience, with many people processing the data and sharing their results with me, completely validates the premise of Jon's post here. The interesting part, though, is that no extra signal was needed. The data was way over the SNR mark, given the subject being Orion. 

Not to get too deep into this, but the number of likes, comments, and bookmarks these melted Orions received was substantial. So to some degree, that look is appreciated as a quality by some viewers. I did not quite understand it myself when reviewing them, and really -- I still don't.
jrista 8.59
@Bill Long - Dark Matters Astrophotography Well, the quote functionality seems to have broken. I can't quote any posts. 


I am still a fan of CMOS, and still prefer them. In fact, to be perfectly honest, I don't really believe that TECHNOLOGY is actually the differentiating factor here. I think the actual differentiating factor that I'm mentally referencing here is the mentality of imagers of different...eras. CCD is really just the dominant technology of the era from which most of my favorite images come. That said, not every single one is a CCD image, and in fact some are DSLR images! ;) 

I think what I'm really trying to get at with this thread is NOT a matter of technology, but a matter of mindset, approach, and technique. I do see some images with immense integration time, sometimes well over 100h when a group of people get together to work on a project. At the same time, some of even those ultra deep images exhibit quality detractors. I don't even know why those detracting characteristics exist...I'm not trying to make any assumptions that it is even a known factor. My suspicion, and this is based on some things people have written and in some cases told me directly, is that a non-trivial part of it is an over-reliance on AI, so that people can "achieve the same quality results with less."

I strongly dispute the notion that the same quality CAN be achieved with less because of AI. There may be some cases where more could be done with less, but I don't think the degree of "less" is nearly as much as it seems a lot of imagers try to get away with. Again, I'm not basing this primarily on what I see on the main ABin page. It's based on scanning through thousands of images on multiple sites, without any explicit attempt to find contest winners, or boards of the best, or anything like that. Just...average IQ across the average distribution of images that a simple search might turn up. On that basis, IQ seems to have dropped, a lot. 

I used to browse through the ABin great wall a lot. I often found some true gems that way, stuff from people who never even tried for IOTD, but were great imagers. It's...not quite the same these days. It's not a technology thing, not really. Are people relying too much on AI, to get them to some minimally acceptable finish line? I know there are some exceptional images out there, but when I dig into the detail on a lot of the top images on any site, the IQ factors I became accustomed to in the past don't seem to be nearly as evident these days. It's far rarer that I can zoom in to the 100% detail scale and not find some kind of detracting artifacts...and a large percentage of those, now that I've spent time with these new AI processing tools myself, have a now-familiar characteristic about them. 

I think AI tools can be extremely useful. At the same time, they can clearly be RADICALLY over-applied, and I think when they are, the results are costing a growing majority of images significant quality attributes. Hence the thread. I'm sure there are some who are happy doing what they are doing and have no interest in changing. No problem with that. I'm hoping there are some imagers out there who may not realize how much AI overprocessing is costing them in their images, who might inquire more and investigate more, and maybe find ways to use AI at "optimum" rather than "overdone" levels, maybe rely less on AI when it's not strictly necessary and learn some other processing techniques, etc.
aabosarah 6.96
The idea that a tool can be overapplied is valid for all tools in astrophotography, not just AI. You can overapply a stretch. You can overapply saturation. You can overapply colors; you can use too much drizzle on an already well-sampled image. Misusing a tool is not AI-centric. 

That being said, the application of AI like BlurX comes with many options that you can use to your liking. For example if you don't like the sharpening effects, you can lower the settings. In fact you don't need to do any sharpening at all. You can also do the same to your stars if you like nice big stars. You can just use the correct only function and that would allow you to just correct optical errors without doing much in the way of additional sharpening at all.  There is no shortage to how you can customize an AI tool and its application.

The optimal use of AI is the ultimate goal, just like the optimal use of any tool, like stretching linear data. But there is no question that AI has made AP processing significantly more pleasant, and in some instances, as demonstrated by Adam Block, can actually extract real details from the data that would otherwise have been completely destroyed by old deconvolution methods.
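For reference, the kind of classic iterative deconvolution typically being compared against here is Richardson-Lucy. A bare-bones sketch with a synthetic scene and PSF (illustrative only, not RC-Astro's or any package's actual code):

```python
# Sketch: classic Richardson-Lucy deconvolution on synthetic data.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed: np.ndarray, psf: np.ndarray, iters: int = 30) -> np.ndarray:
    """Standard RL iteration: estimate *= conv(observed / conv(estimate, psf), psf_flipped)."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iters):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy star field blurred by a Gaussian PSF:
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
scene = np.zeros((64, 64)); scene[20, 20] = scene[40, 44] = 100.0
observed = np.maximum(fftconvolve(scene, psf, mode="same"), 0.0) + 0.01
restored = richardson_lucy(observed, psf)
```

The classic failure mode, relevant to the comparison above, is that too many iterations amplify noise into ringing artifacts around faint detail.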
AlvaroMendez 2.39
@Jon Rista
”I actually use BXT in its full mode...but, I do greatly reduce the degree to which it affects stars. I like the overall shape corrective aspect of BXT, but I try not to let it overly reduce the stars. I'm still working on that...I'm still new to all of these tools, so I'm having to figure out how to use them to moderate effect on my own.”

Hi Jon,

In that case what you need to do is select “Correct Only” mode, with no star reduction and no non-stellar sharpening. That mode does an automatic PSF calculation and performs a very sophisticated deconvolution that will only restore the image to an optically corrected state. That includes, of course, a slight sharpening, but as a result of the correction of the optical deviations. There is no fake sharpening if you stick to that option. The great thing about this is that it detects variability in different parts of the image and adjusts the operation accordingly, which is what sets it apart from a traditional deconvolution.

About your recommendation to shoot also on non-perfectly clear nights: yes, I try to do that, but sometimes it is hard because from session to session I need to disassemble the optical train, and when you try to find the same rotation as last time in a gap between clouds, well… sometimes I’ve been lucky, but most of the time the clouds were back by the time I had finished readjusting. But yes, I try to do that, and it is the only way I have been able to complete the few pictures I have that go beyond the 15 hour mark. In the near future I might be able to install a shed, so that will help me enormously. Thanks for your reply.
jrista 8.59
Álvaro Méndez:
@Jon Rista
”I actually use BXT in its full mode...but, I do greatly reduce the degree to which it affects stars. I like the overall shape corrective aspect of BXT, but I try not to let it overly reduce the stars. I'm still working on that...I'm still new to all of these tools, so I'm having to figure out how to use them to moderate effect on my own.”

Hi Jon,

In that case what you need to do is select “Correct Only” mode, with no star reduction and no non-stellar sharpening. That mode does an automatic PSF calculation and performs a very sophisticated deconvolution that will only restore the image to an optically corrected state. That includes, of course, a slight sharpening, but as a result of the correction of the optical deviations. There is no fake sharpening if you stick to that option. The great thing about this is that it detects variability in different parts of the image and adjusts the operation accordingly, which is what sets it apart from a traditional deconvolution.

About your recommendation to shoot also on non-perfectly clear nights: yes, I try to do that, but sometimes it is hard because from session to session I need to disassemble the optical train, and when you try to find the same rotation as last time in a gap between clouds, well… sometimes I’ve been lucky, but most of the time the clouds were back by the time I had finished readjusting. But yes, I try to do that, and it is the only way I have been able to complete the few pictures I have that go beyond the 15 hour mark. In the near future I might be able to install a shed, so that will help me enormously. Thanks for your reply.

No, I want some sharpening. I am not worried about deconvolving the details, and BXT does a pretty good job here. I just don't want wild reduction of the stars. Correct Only barely does any detail enhancement, and I generally want more than that; I just don't want rogue detail enhancement or rogue star reduction.

Also curious...why do you need to disassemble the optical train? I never do. I take my scope assembly off the mount, but I NEVER disassemble the assembly unless I am purposely reconfiguring the scope (i.e. adding or removing a reducer). Otherwise, I keep the train exactly as-is. I even bought a big case and custom-cut some foam so that I could transport the scope fully assembled. IMO, this is one of the most important aspects of increasing our ability to take advantage of more clear sky time, even if it's just holes in the clouds. This, plus as much automation as you can get away with. Fast setup/no train deconstruction, and automation.
jrista 8.59
Ashraf AbuSara:
The idea that a tool can be overapplied is valid for all tools in astrophotography, not just AI. You can overapply a stretch. You can overapply saturation. You can overapply colors; you can use too much drizzle on an already well-sampled image. Misusing a tool is not AI-centric. 

That being said, the application of AI like BlurX comes with many options that you can use to your liking. For example if you don't like the sharpening effects, you can lower the settings. In fact you don't need to do any sharpening at all. You can also do the same to your stars if you like nice big stars. You can just use the correct only function and that would allow you to just correct optical errors without doing much in the way of additional sharpening at all.  There is no shortage to how you can customize an AI tool and its application.

The optimal use of AI is the ultimate goal, just like the optimal use of any tool, like stretching linear data. But there is no question that AI has made AP processing significantly more pleasant, and in some instances, as demonstrated by Adam Block, can actually extract real details from the data that would otherwise have been completely destroyed by old deconvolution methods.

You can over-apply many tools. However, it is far easier with AI tools. The AI is designed to do tasks we used to do manually and do them "better", and usually with FAR fewer dials to turn. All it takes is for someone to drag one of the two, maybe three sliders they have to the very end and hit apply...and an AI tool can utterly wipe out not just noise, but a significant amount of detail as well. Sure, you could do the same thing manually, but it's a lot more effort.

The other problem with AI...is the RECOMMENDATIONS are often to do just that: wildly over-apply. It is recommended to obliterate noise, then try and use some kind of detail enhancement AI to "restore the details"... This is the kind of thing that leads to what I'll call the "AI Look"....obliterated then artificially restored. 

I agree the optimal use of AI is the goal. Reason I started this thread is I see wild overapplication of AI all over the place.
aabosarah 6.96
Jon Rista:
Ashraf AbuSara:
The idea that a tool can be overapplied is valid for all tools in astrophotography, not just AI. You can overapply a stretch. You can overapply saturation. You can overapply colors; you can use too much drizzle on an already well-sampled image. Misusing a tool is not AI-centric. 

That being said, the application of AI like BlurX comes with many options that you can use to your liking. For example if you don't like the sharpening effects, you can lower the settings. In fact you don't need to do any sharpening at all. You can also do the same to your stars if you like nice big stars. You can just use the correct only function and that would allow you to just correct optical errors without doing much in the way of additional sharpening at all.  There is no shortage to how you can customize an AI tool and its application.

The optimal use of AI is the ultimate goal, just like the optimal use of any tool, like stretching linear data. But there is no question that AI has made AP processing significantly more pleasant, and in some instances, as demonstrated by Adam Block, can actually extract real details from the data that would otherwise have been completely destroyed by old deconvolution methods.

You can over-apply many tools. However, it is far easier with AI tools. The AI is designed to do tasks we used to do manually and do them "better", and usually with FAR fewer dials to turn. All it takes is for someone to drag one of the two, maybe three sliders they have to the very end and hit apply...and an AI tool can utterly wipe out not just noise, but a significant amount of detail as well. Sure, you could do the same thing manually, but it's a lot more effort.

The other problem with AI...is the RECOMMENDATIONS are often to do just that: wildly over-apply. It is recommended to obliterate noise, then try and use some kind of detail enhancement AI to "restore the details"... This is the kind of thing that leads to what I'll call the "AI Look"....obliterated then artificially restored. 

I agree the optimal use of AI is the goal. Reason I started this thread is I see wild overapplication of AI all over the place.

Star sharpening is up to personal taste. Technically stars should always look like points of light in reality and their size is largely limited by our optical and seeing limitations. Bloated stars are no more "real" than sharpened ones. If you prefer a more diffuse, larger original look, you can simply leave the star sharpening slider at zero, while still using non-stellar sharpening. It will still correct for optical aberrations.

I guess what you are really arguing is that your taste when it comes to stars differs from that of most people you see on AB, which is fine, but that is not really an issue with the AI tool per se.
jrista 8.59
Ashraf AbuSara:
Star sharpening is up to personal taste. Technically stars should always look like points of light in reality and their size is largely limited by our optical and seeing limitations. Bloated stars are no more "real" than sharpened ones. If you prefer a more diffuse, larger original look, you can simply leave the star sharpening slider at zero, while still using non-stellar sharpening. It will still correct for optical aberrations.

I guess what you are really arguing is that your taste when it comes to stars differs from that of most people you see on AB, which is fine, but that is not really an issue with the AI tool per se.

It's not that I want them to be bloated. I just want them to maintain a natural distribution of characteristics that demonstrate differences in each. 

It's excessive vs. measured application of the tool. The issue with AI tools is that it is too darned easy for everyone to drag the slider to the maximum, and do that on every image. Nuke the noise. Nuke the stars. There is no personality in anyone's images anymore...they are all the same, all...AI made. 

THAT'S the problem I see with AI tools. Again, I'm not against using them. Just trying to get people to think more about how they use them. Don't just nuke the noise (and all the fine details). Don't just nuke the stars. There is a balance point, and, ideally, each individual would find the balance point that fits each of their own images... There is another thread here where I've been trying to help an imager process a galaxy. They made great strides...then boom... Nuked. They learned a lot, then obliterated all their progress with one slider and the click of a button.
neverfox 2.97
Matthew Proulx:
I consider 30 hours to be my acceptable minimum now.


Integration time has to be interpreted in the context of the speed of the system being used (including the sky conditions). 30 hours in Bortle 2 with an f/2 scope means something completely different than 30 hours in Bortle 9 with an f/7. It's also a rather flawed measure when doing things like mono where the convention is just to add the time across all the filters e.g. 30 hours each of R/G/B is listed as 90 hours, when the resulting image SNR is the same as a 30-hour OSC image, all else being equal. Ideally, we'd talk about images in terms of their measured linear SNR rather than time.
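A toy background-limited model makes that concrete (all rates below are invented for illustration):

```python
# Sketch: background-limited SNR ~ S*t / sqrt(B*t), with per-pixel object
# rate S and sky rate B in e-/s. The rates are made-up examples.
import math

def snr(obj_rate: float, sky_rate: float, hours: float) -> float:
    t = hours * 3600.0
    return obj_rate * t / math.sqrt(sky_rate * t)

print(snr(obj_rate=0.50, sky_rate=0.2, hours=30))   # fast scope, Bortle 2: ~367
print(snr(obj_rate=0.04, sky_rate=20.0, hours=30))  # slow scope, Bortle 9: ~3
```

Two "30 hour" images can differ in SNR by two orders of magnitude, and since SNR only grows as the square root of time, summing hours across filters inflates the headline number without changing the per-channel SNR.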
neverfox 2.97
Luka Poropat:
the only upside of the CCD architecture in 2024 is the size of the pixels, which are a good match for longer focal length telescopes under average seeing conditions


And sampling to larger effective pixels is as easy an operation as it gets. Additive read noise isn't really an issue if you're taking long enough subs to make read noise practically irrelevant.
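The usual rule-of-thumb arithmetic, sketched with an assumed sky rate and swamping factor (conventions for the factor vary):

```python
# Sketch: pick sub length so sky shot noise swamps read noise, e.g.
# sky electrons per pixel >= factor * RN^2. Sky rate is an assumption.
def min_sub_seconds(read_noise_e: float, sky_rate_e_per_s: float, factor: float = 10.0) -> float:
    return factor * read_noise_e ** 2 / sky_rate_e_per_s

print(min_sub_seconds(1.5, 0.5))  # ~45 s for a low-read-noise CMOS
print(min_sub_seconds(8.0, 0.5))  # ~1280 s for an 8 e- CCD, same sky
```

Which is why higher CCD read noise mostly translates into longer subs rather than a worse final image, as noted earlier in the thread.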
jrista 8.59
Roman Pearah:
Matthew Proulx:
I consider 30 hours to be my acceptable minimum now.


Integration time has to be interpreted in the context of the speed of the system being used (including the sky conditions). 30 hours in Bortle 2 with an f/2 scope means something completely different than 30 hours in Bortle 9 with an f/7. It's also a rather flawed measure when doing things like mono where the convention is just to add the time across all the filters e.g. 30 hours each of R/G/B is listed as 90 hours, when the resulting image SNR is the same as a 30-hour OSC image, all else being equal. Ideally, we'd talk about images in terms of their measured linear SNR rather than time.

I agree, it would be better to measure background SNR. The challenge there is how to measure that SNR. There are different ways to do so, and they are inconsistent. Even within a single program, such as PI, its MRS noise estimates (according to Juan) are not necessarily directly comparable from one data set to the next. 

Comparisons on an SNR basis are quite challenging. Different programs won't necessarily compute comparable SNR measures either. Noise evaluation is a challenging problem.

Calling equivalence between different sensor technologies also has its challenges. Sometimes the OSC might in fact acquire more signal in a given time than mono, thanks to the overlapping filters. Other times OSC Q.E. might be low enough that mono+RGB filters does better. The comparison gets even more janky when you include a separately acquired L channel, etc. etc. 

I do think, though, that referencing the length of each individual channel is probably better than the total. There ARE SNR factors to consider in a combined image. The appearance of noise and signal smoothness in a combined RGB image will often be better than the appearance of the individual channels. Even though there are different colors, our eyes see them all blended together, so you are seeing the combined signal from each in the RGB color image, vs. the individual channels... So there is that complexity as well. Even so, individual channel exposure time is probably the least divergent measure, and more comparable than the others.

Sky differences would be the next most significant difference. The f-ratio might not matter at all, depending on the pixel scale. Aperture, pixel scale, and maybe Q.E. would give you more comparable results, as a large enough pixel can entirely negate differences in f-ratio. You can also downsample your image, to create an effectively coarser pixel scale, and negate differences in f-ratio. This is where etendue normally comes in: it is relative to the sensor size and aperture, rather than pixel size. For a given sensor size and aperture, you would gather the same amount of light overall... Thus, the key difference is the brightness of the sky itself. Again, another complexity in comparing. Brightness differences in the sky, though, often vastly outweigh differences in optics and technology. 
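A sketch of the downsampling point (example systems are assumptions): once two systems are brought to the same pixel scale, the f-ratio drops out and aperture carries the comparison.

```python
# Sketch: per-pixel light ~ aperture^2 * (pixel scale)^2 once QE and
# transmission are held equal; focal length cancels at a fixed scale.

def image_scale(pixel_um: float, focal_mm: float) -> float:
    return 206.265 * pixel_um / focal_mm  # arcsec/pixel

def rel_rate(aperture_mm: float, scale_arcsec: float) -> float:
    return aperture_mm ** 2 * scale_arcsec ** 2

# 200 mm f/7 with 3.76 um pixels, software-downsampled 2x, vs
# 200 mm f/3.5 with the same pixels at native scale:
scale_a = 2 * image_scale(3.76, 1400)   # ~1.11 "/px after 2x2 downsample
scale_b = image_scale(3.76, 700)        # ~1.11 "/px natively
print(rel_rate(200, scale_a), rel_rate(200, scale_b))  # identical
```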

I should have mentioned before, light pollution really throws a wrench in the mix here, and it makes it tough to evaluate anything. The impacts of LP are often devastating, negating anything else that might be on the table. I only really image either at a dark site, or with very narrow NB filters if I must image under polluted skies. It's been hard for me to see any value in imaging broadband under polluted skies for many, many years now. The impact polluted skies have on imaging is so tremendously dramatic, I don't know how to evaluate it in the context of this thread.
AlvaroMendez 2.39
@Jon Rista
”Also curious...why do you need to disassemble the optical train? I never do.”

I wish I didn’t have to do that! The problem is my scope is a 19 kg Ritchey-Chrétien (12 inches) and I just can’t handle it. It needs to stay mounted in my garden the whole year with a rain cover, and I need to take the camera out and pull the optical train in to be able to fit the cover each time. I could buy a bigger rain cover, but I fear the camera would stick out too much, and each time the gardener passes around the scope it might get damaged, so the sole idea terrifies me. All this would of course cease to be a problem with a shed. I’m thinking maybe one of those StarBubble shelters. I don’t know, I’ll have to think about it.
aabosarah 6.96
Jon Rista:
Ashraf AbuSara:
Star sharpening is up to personal taste. Technically stars should always look like points of light in reality and their size is largely limited by our optical and seeing limitations. Bloated stars are no more "real" than sharpened ones. If you prefer a more diffuse, larger original look, you can simply leave the star sharpening slider at zero, while still using non-stellar sharpening. It will still correct for optical aberrations.

I guess what you are really arguing is that your taste when it comes to stars differs from that of most people you see on AB, which is fine, but that is not really an issue with the AI tool per se.

It's not that I want them to be bloated. I just want them to maintain a natural distribution of characteristics that demonstrate differences in each.

I am not sure what you mean by this. Star sharpening doesn't make all stars of equal size and color. It simply sharpens each star in relation to its original size. Also, with the most recent update, it has done a great job of maintaining the original color. So the result should maintain the original characteristics, in a sharpened form that preserves their ratios.

While some users are "nuking details", I feel that the top imagers on Astrobin are doing a great job of not doing that.

I feel you are discussing more your taste in how images should look versus what others are doing, rather than an objective problem with AI tools.

At the end of the day, AI tools are a significant net positive over older tools.
jrista 8.59
Ashraf AbuSara:
Jon Rista:
Ashraf AbuSara:
Star sharpening is up to personal taste. Technically stars should always look like points of light in reality and their size is largely limited by our optical and seeing limitations. Bloated stars are no more "real" than sharpened ones. If you prefer a more diffuse, larger original look, you can simply leave the star sharpening slider at zero, while still using non-stellar sharpening. It will still correct for optical aberrations.

I guess what you are really arguing is that your taste when it comes to stars differs from that of most people you see on AB, which is fine, but that is not really an issue with the AI tool per se.

It's not that I want them to be bloated. I just want them to maintain a natural distribution of characteristics that demonstrate differences in each.

I am not sure what you mean by this. Star sharpening doesn't make all stars of equal size and color. It simply sharpens each star in relation to its original size. Also, with the most recent update, it has done a great job of maintaining the original color. So the result should maintain the original characteristics, in a sharpened form that preserves their ratios.

While some users are "nuking details", I feel that the top imagers on Astrobin are doing a great job of not doing that.

I feel you are discussing more your taste in how images should look versus what others are doing, rather than an objective problem with AI tools.

At the end of the day, AI tools are a significant net positive over older tools.

Well, I just wrote a fairly long reply, and it somehow vanished... I don't want to re-type everything. (For some reason, the ABin editor doesn't seem to support CTRL+Z, CTRL+SHIFT+Z or CTRL+Y to undo or redo.) 

Anyway, in a truncated form of what I wrote before. One, I am not really concerned with top imagers. They know what they are doing, obviously. I'm not concerned about IOTD winners or other contest winners (or for that matter even participants, as if you participate you are more likely to know what you are doing). 

I am not basing what I'm seeing on, say, the ABin front page, or anything like that. I'm basing it off of general deep searches of astrophotography in general, across ABin and numerous other places on the web. So it is more the average, general astrophotographer, not the top imagers, whose work I am seeing. FWIW, it's also not just images that clearly have been processed with AI tools; there also appears to be a segment of imagers who just seem to stretch, without any real processing at all. I'm not assuming that these imagers are doing what they are doing by conscious, educated and intentional choice. On the contrary, I am wondering...DOES the average imager really understand what they are doing? I would say, with regards to the subclass of imagers that seem to just do a stretch, without any other apparent processing, leaving their images riddled with noise, gradients, blotchiness, often rogue star halos, etc., that they most likely do not really know what they are doing. When it comes to the images that have that...AI Look...over-smoothed, nuked details, stars reduced to about a pixel and lacking a natural distribution in size...DO those imagers really know that the way they are processing is destructive, to one degree or another?

My assumption is basically that a lot of the standard distribution of astrophotographers these days are processing in very similar, often overly aggressive ways. This doesn't necessarily JUST mean AI processing, although AI tools are most often specified if any processing is described. The overarching trend seems to be either...no processing and just a stretch, or destructive overprocessing. DO imagers know that they are being destructive like this? If not, why, and could they use better assistance in understanding how to process...optimally?

One of the things I DO have against AI is that it seems to normalize the results. You can flip through a hundred images, and in the average case, they are all generally going to look roughly the same....overly smoothed backgrounds, medium to larger scale mottling due to over-smoothing, lack of any fine details in darker structures in particular, often over-enhanced and artificial details (and frequently strange looking, because they stand out SO much once they are the only details left) in only the brightest of structures, galaxy arms that have a thread of sharp detail running through them while the outer parts of the arms are super smooth and soft lacking pretty much any details, etc. etc. There is a THEME, and one image after another is starting to look the same. 

Are people in general still creating astrophotography? (Obviously, there are of course some people who are, top imagers, contest participants, a lot of the people here on ABin, there ARE people who have a deep dedication to the craft, for sure!! Not saying otherwise.) Or...has AI already taken over a large percentage of it, producing this common, standard, canned look that...well, is starting to look less and less like real space, and more and more like an impressionist painting?
aabosarah 6.96
Jon Rista:
Anyway, in a truncated form of what I wrote before. One, I am not really concerned with top imagers. They know what they are doing, obviously. I'm not concerned about IOTD winners or other contest winners (or for that matter even participants, as if you participate you are more likely to know what you are doing). 

I am not basing what I'm seeing on, say, the ABin front page, or anything like that. I'm basing it off of general deep searches of astrophotography in general, across ABin and numerous other places on the web. So it is more the average, general astrophotographer, not the top imagers, whose work I am seeing. FWIW, it's also not just images that clearly have been processed with AI tools; there also appears to be a segment of imagers who just seem to stretch, without any real processing at all. I'm not assuming that these imagers are doing what they are doing by conscious, educated and intentional choice. On the contrary, I am wondering...DOES the average imager really understand what they are doing? I would say, with regards to the subclass of imagers that seem to just do a stretch, without any other apparent processing, leaving their images riddled with noise, gradients, blotchiness, often rogue star halos, etc., that they most likely do not really know what they are doing. When it comes to the images that have that...AI Look...over-smoothed, nuked details, stars reduced to about a pixel and lacking a natural distribution in size...DO those imagers really know that the way they are processing is destructive, to one degree or another?

My assumption is basically that a lot of the standard distribution of astrophotographers these days are processing in very similar, often overly aggressive ways. This doesn't necessarily JUST mean AI processing, although AI tools are most often specified if any processing is described. The overarching trend seems to be either...no processing and just a stretch, or destructive overprocessing. DO imagers know that they are being destructive like this? If not, why, and could they use better assistance in understanding how to process...optimally?

I think astrophotography in general is getting significantly more popular than it was a few years ago, before COVID. There are folks of all levels in terms of imaging and post-processing. Some people are just content with a basic stretch and application of an AI tool after acquiring a few hours of data. Others go deep, acquiring hundreds of hours, and spend days post-processing and reprocessing to get the perfect image. Could the beginner/intermediate imagers improve their post-processing? Absolutely. We all have significant things to improve. 

The thing is, if you do a search for a target imaged 5 years ago vs today, the vast majority of images, from all levels of astrophotographers, look significantly better today than they did 5 years ago. AI tools have let beginner/intermediate imagers get significantly better results with less effort than they could have 5 years ago, despite any artifacts you may notice. For example, issues with back focus, minor guiding star elongations, and poor color calibration seriously plagued older images and are readily correctable now compared to the past; frankly, those flaws are far worse than over-sharpened stars or a mottled/smooth background.
Jon Rista:
One of the things I DO have against AI, is that it seems to normalize the results. You can flip through a hundred images, an in the average case, they are all generally going to look roughly the same....overly smoothed backgrounds, medium to larger scale mottling due to over-smoothing, lack of any fine details in darker structures in particular, often over-enhanced and artificial details (and frequently strange looking, because they stand out SO much once they are the only details left) in only the brightest of structures, galaxy arms that have a thread of sharp detail running through them while the outer parts of the arms are super smooth and soft lacking pretty much any details, etc. etc. There is a THEME, and one image after another, are starting to look the same. 

Are people in general still creating astrophotography? (Obviously, there are of course some people who are, top imagers, contest participants, a lot of the people here on ABin, there ARE people who have a deep dedication to the craft, for sure!! Not saying otherwise.) Or...has AI already taken over a large percentage of it, producing this common, standard, canned look that...well, is starting to look less and less like real space, and more and more like an impressionist painting?

Again, your criticisms seem to have more to do with how people are using those tools than with an issue in the actual tools. I really don't understand why you are faulting the AI tools for how they are actually being used. And yes, people that are using AI tools like BlurX are still producing astrophotography, not impressionist painting. The fact that many people imaging the same target with the same SHO filters, cameras, OTAs, and using the same palette come up with similar results is rather expected, since we are actually imaging the same target. The variations that you had 5 years ago are more likely due to significant alterations introduced by the significantly more complex/manual tools that were used in the past. 

Adam Block has shown that what he used to produce with meticulous effort that took hours and maybe days 5 years ago can now be produced to the same effect with far less effort, with AI tools and the click of a button. As far as BlurX is concerned, I have not seen any "fake data" being introduced.  

It is no more "impressionist painting" today than it was 5 years ago before AI tools.
jrista 8.59
@Ashraf AbuSara It is true, my criticisms of AI tools are more about the way they are used than the tools themselves. I think it's the ease with which they can be abused, and how often they seem to be, that I'm targeting. 

You and I must be searching different subsets of astrophotography, as I'm not seeing quite the same thing as you are with regards to most AP being better. Stuff today is often different, but I wouldn't necessarily say better. When people read this thread, I suspect their primary reference point is the ABin front page, which is not really representative of what I'm talking about. The front page here is "popular" astrophotography, which means it is being filtered through the inherent, intrinsic "filtration" of collective human preference, which is naturally going to promote a subset of astrophotography ABOVE some certain bar for quality, and displace/hide the rest below that bar. But that's not really representative of the state of the community at large. I keep arguing this point, but it's clearly not getting through. :shrug: 

Five years ago was not actually all that long ago. The first AI tools, like Topaz's AI denoise tools, hit the market some time in 2020 or early 2021. I've been thinking of astrophotography more like 10-15 years ago, some even older than that. Some as far back as the early 2000s.
jrista 8.59
Álvaro Méndez:
@Jon Rista
”Also curious...why do you need to disassemble the optical train? I never do.”

I wish I didn’t have to do that! The problem is my scope is a 19 kg Ritchey-Chrétien (12 inches) and I just can’t handle it. It needs to stay mounted in my garden the whole year with a rain cover, and I need to take the camera out and pull the optical train in to be able to fit the cover each time. I could buy a bigger rain cover, but I fear the camera would stick out too much, and each time the gardener passes around the scope it might get damaged, so the sole idea terrifies me. All this would of course cease to be a problem with a shed. I’m thinking maybe one of those StarBubble shelters. I don’t know, I’ll have to think about it.

Ah, yeah...if the scope is that large, then I can understand. I had an RC for a little while, and I had trouble dealing with it as well. Mine was one of those RCs that had the focuser attached to the primary mirror cell, and getting the scope off the mount pretty much always screwed up collimation. Dealing with that scope drove me to just using refractors.

If you can change things...keeping the scope set up so it's just ready to go, and automating as much as you can, will let you take advantage of far more clear sky time. Once you do that, getting tens of hours across all filters becomes a lot easier. Automation is the only reason I'm able to get 10-20 hours per filter myself. Without the automation, and without being able to keep the scope assembled (I do take it off, but if I could leave it on the mount I'd prefer that!!), I would probably have less than 10 per channel.
neverfox 2.97
Jon Rista:
a large enough pixel can entirely negate differences in f-ratio. You can also downsample your image, to create an effectively coarser pixel scale, and negate differences in f-ratio. This is where etendue normally comes in: it is relative to the sensor size and aperture, rather than pixel size. For a given sensor size and aperture, you would gather the same amount of light overall...


Yep, I'm aware that it's f-ratio and pixel size (or aperture and pixel scale) that determine basic system speed (along with transmission efficiency and QE), before considering effective sky flux and read noise. One point that a lot of people fail to realize is that speed at a given pixel scale is related to aperture, not f-ratio. I was just making a general point that time is abstract without the particulars. I agree with the rest of what you said as well.
jhayes_tucson 22.40
Hi Jon,
Long time, no talk. Welcome to AB...where I've been since abandoning CN a long time ago. There is way too much in this discussion for me to want to jump into all of the various topics, but I'll just say a couple of things in general:

1) AB users span a very wide range of experience, from folks who are just beginning to figure out how AP works with simple equipment in their driveway under polluted skies, to folks with large, very expensive remote systems who are quite expert at processing data. Some folks want to get better at it, and others are content just to be able to say that they imaged the moon...or even a galaxy.

2) I have a very lightly used FLI 16803 camera, one of the last made, that I'll give you a good price on if you want it. I also have a more heavily used FLI 16803 that still has a lot of useful life, for a lot less. I'm sold on CMOS for a lot of reasons, and I believe that both the IMX455 and the IMX461 sensors are very worthy replacements for the 16803. They are sufficiently large sensors that have lower read noise, they are less sensitive to cosmic rays, they are more sensitive, and the PRNU is lower. They are also a better potential match for sampling many faster systems.

The one last thing that I want to say here is that I see a lot of potentially incendiary statements flying around this discussion that flow from some of your original premise. I always enjoyed our technical discussions, but I have enjoyed the relative lack of food fights here on AB compared to what I used to see on CN. I just hope that we can have civil discussions here without the incendiary stuff that usually flows from assumptions stated about other people's motivations and goals. I know that you are a reasonable guy, so I make that suggestion gently, hoping that you will take it in the gentle spirit in which it's offered. Let's focus on technical comments and questions without saying why imagers are doing what they are doing. Operating old-school or using AI is fine with me, and although I may occasionally scratch my head over a method or result, it's all okay with me. I'm happy to talk about the technical aspects of this hobby, but I'm more focused on trying to improve my own images...with every single one that I produce.

John
Siriusdwarf77 1.51
Very interesting thoughts all round. However, the first reply to the original post, made by Matthew, makes me laugh. 30 hours exposure at a minimum!! In my area, you will be lucky to get 30 hours in 6 months. It would be near impossible to image more than perhaps 2 objects in a year, based on that minimum. I think many imagers, especially in England, are generally happy to get 3 or 4 hours on an image before moving on to a different object. The obsession (hobby) would get very frustrating otherwise. So personally, while I agree signal is everything, AI certainly is a fantastic breakthrough and very useful for those not able to get 30 hours plus on every object.

Kes.
jrista 8.59
Kerry Bloor:
Very interesting thoughts all round. However, the first reply to the original post, made by Matthew, makes me laugh. 30 hours exposure at a minimum!! In my area, you will be lucky to get 30 hours in 6 months. It would be near impossible to image more than perhaps 2 objects in a year, based on that minimum. I think many imagers, especially in England, are generally happy to get 3 or 4 hours on an image before moving on to a different object. The obsession (hobby) would get very frustrating otherwise. So personally, while I agree signal is everything, AI certainly is a fantastic breakthrough and very useful for those not able to get 30 hours plus on every object.

Kes.

I think he was just saying for himself, that's his minimum.

That said, I am curious what most people consider a "viable" night to image on. I throw away a decent number of subs on most of my images...however, I image on a lot of nights that are not entirely clear. Sometimes I even set an alarm for the early a.m. to go out and put my scope away, if rain or snow is due later in the morning. 

I used to get less than 10 hours per image (when using a DSLR/OSC) in my first year or two. Once I started imaging more regularly from my yard and was able to automate more, I would put the scope out every single chance I had, even when the skies weren't totally clear the whole night. Sometimes I'd only get a few keepers, and most of the night was a bust. But...I would get a few keepers (and with NB, that was usually 10 minutes a sub, so maybe 10, 20, 30 minutes of data or so). I might throw away several hours' worth, but over say a 3 month period (which was often how long I would chase a given target), those 30 minutes would add up to hours of additional data.
Siriusdwarf77 1.51
No, you are incorrect. I try to get as much exposure as possible. In my area, clear skies are like gold, not easily found. Obviously, you certainly seem to have a lot of time on your hands if you can do what you say you do. That is totally impractical for a lot of people. I am sure I am not talking just about myself. I see many images on here taken with far shorter exposure times than 2 or 3 hours, and many of those images are excellent.

Kes.
AstroDan500 4.67
Kerry Bloor:
No, you are incorrect. I try to get as much exposure as possible. In my area, clear skies are like gold, not easily found. Obviously, you certainly seem to have a lot of time on your hands if you can do what you say you do. That is totally impractical for a lot of people. I am sure I am not talking just about myself. I see many images on here taken with far shorter exposure times than 2 or 3 hours, and many of those images are excellent.

Kes.

Exactly. This may be my last time responding on an Astrobin forum, which I have not participated in much, since I am a novice astrophotographer, though I have years and years in photography in general.
This whole post just seems to be an exercise in establishing that if you have not been photographing space for 20 years or some such number, you don't really "get" astrophotography.
2024 processing tools allow too many people to "intrude" on the hallowed turf of the old school.
Of course more time spent on a target is what everyone knows is the best way to get the best images.
I am not sure why it took 3 pages to keep repeating that point, but we all get it by now.
I enjoy shooting and processing images. I live, like a lot of people, where I don't get a lot of clear nights, so I make do with short hours and use every PP tool I can to make my images look as nice as possible.
Sorry about that.