Michael Broyles: Definitely did, thank you Michael. Kinda what I was thinking, but I wasn't sure, never having done this procedure before. I've stacked in WBPP, but I didn't like the amount of rejection I was getting, and when it totally failed on my M51 shot and Siril processed it fine, I said enough was enough. But I'm watching a video on PI's YouTube channel saying that you need to do this if you shoot with a DSLR, which I do, so I thought maybe I was doing it all wrong. I mean, it was from the horse's mouth. And it totally worked (lots of finished files), and zooming in, the noise did seem to be a little better handled, so I'm not bashing it; I just didn't know how to figure out what I ended up with. It's just that when you're used to Siril writing the frames used and the total time into the FITS header, I kinda thought an app as sophisticated as PI would do the same. I did see those rejections, and I did save the log file, so I'll check there. I usually delete the calibrated files after processing to free up hard drive space, so those are gone. So, totally got it now.
Anthony Johnson: Michael Broyles: That isn't how it works, in my way of thinking. See, you used a program that reported a value to you. How do you know the meaning of that value? On a more general note, just because a program outputs something (and another does not), how do you know the output is optimal? It just isn't that simple. So let's play this game. If you have a program like PixInsight that applies weights to images (using various different schemes), how would you determine a total exposure time? Say you stack 20 images with 300-second exposures and 15 of them contribute little in terms of significance to the integrated result. Would you (or some other program) still say this is a 100-minute exposure? So the other program reports a value *even though* it is weighting images? Do you see how what you think is a benefit could actually be misleading? And let me continue: weighting images by FWHM is specious in many cases. "Sophisticated" programs often use a combination of metrics to assign an image quality score, in order to offset the downfalls of any particular one. I happen to prefer SNR measures obtained through photometry. In my opinion, the resolution doesn't matter if there isn't any signal there to begin with (I am not talking about lucky imaging). -adam
Adam Block: Anthony Johnson: Michael Broyles: I think I see what you are saying, Adam. Then, truthfully, what you're saying is that the time of integration is of no consequence. Say you have two images, each a 40-second exposure, but image 1 is only half as good as image 2. Then image 1 is actually bringing down the quality of the combined result, so it would not be an 80-second integration but maybe only a 60-second integration, because of the quality issue with image 1. So integration time means nothing unless you know how much each image actually contributed to the whole, if I follow your reasoning correctly. I'm not sure that knowledge comforts me, though. How can we ever know how long our integration time really is if we don't know the contribution of each individual sub? Nobody is going to look through the processing log to see the weights and values of each individual frame, and even then I don't see how you would ever know the percentage each sub contributed. So it seems to be an arbitrary number, if I understand your train of thought. To borrow a phrase from Scrooge in A Christmas Carol, speak comfort to me, Adam. Lol
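The 60-second figure in this post is consistent with a simple weighted model. Again, this is a sketch under an assumed convention (each sub counts in proportion to its weight relative to the best sub), not any program's actual bookkeeping:

```python
# Two 40 s subs; image 1 is "half as good" as image 2.
exposure_s = 40.0
weights = [0.5, 1.0]

effective = exposure_s * sum(weights) / max(weights)
print(effective)  # 60.0 -- not the 80 s of shutter-open time
```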
Anthony Johnson: Adam Block: Anthony Johnson: Michael Broyles: What I am saying is that the total time you leave your shutter open is not always representative of the total number of photons you observed (or used). You are attributing more importance to one over the other (or not quite recognizing the difference). So, the total integration time of the frames used is a good estimate of total exposure time in terms of counting photons, but it is not a true accounting of the amount of light you detected or used (weighting). This is why PixInsight isn't going to give you a number (as other software might). What you originally characterized as a lack in a "sophisticated software" is actually the result of a deeper understanding of this distinction. What is customary is to state the total shutter-open time of the integrated images, with the understanding that there is still a wide range of variability in one person's results compared to another's (even with the same equipment and shutter-open time). -adam
Adam Block: Anthony Johnson: Adam Block: Anthony Johnson: Michael Broyles: Anthony, I only caught this thread late; it appears you may have your answer. I am curious about your comment several posts earlier stating that for DSLR images one must do the integration with a color channel separation/recombination method. I do not use a DSLR, but I do use OSC cameras exclusively, and I wonder how that recommendation differs for OSC. I have never processed that way, and it is my understanding and experience that the debayering algorithms are actually quite good at dealing with color, etc. Nor have I felt that my images would improve by moving to that method. I also struggle to understand how, why, or if PI would drop certain frames from a specific channel but not others for the very same single image (original sub), though I can imagine that such a sub-channel may have been on the ragged edge of the acceptance criteria because of S/N, or if there was some color fringing from poor optics, etc. Still, it would seem to be a real outlier case, likely to cause little impact in terms of lost frames. Which gets to my final point: unless this is a detail that will substantially affect your understanding or practice of this process, does the precision of the data you're seeking really matter? I will assume that you are in this for the art, not trying to get infallible photometric data from your images. A forest-from-the-trees point...
Alan Brunelle: Adam Block: Anthony Johnson: Adam Block: Anthony Johnson: Michael Broyles: My original question has actually gotten lost in the thread. My only question was how to figure total integration time when it seems WBPP was rejecting frames from one channel of an image and not the others. Also, the technique I was mentioning was from a video I watched on PixInsight's YouTube channel about WBPP. The narrator mentioned that with DSLR data you should be splitting the channels and then doing a final integration, also with drizzle. I know absolutely nothing about this process beyond what I've watched on YouTube. Adam Block is the main guy I listen to because I feel he gives it to me straight, but since this new info was coming from PixInsight itself I thought it was legit, and to some degree it is, but not for the reasons I was thinking. Like I said, I put my camera on the back of my scope, pop the shutter, and hope for the best. I'm a total newbie even after a year. I was just trying to get a number for the total frames used when WBPP was splitting channels but rejecting different numbers of frames for different channels, then recombining those frames into a single image. What Adam said in his last post makes perfect sense to me: there are so many factors that figure into your final exposure, it's difficult to put a number on it. And no, I'm far from looking for pristine data. I'm just trying my best to understand a hobby where the deeper I get, the more complicated it becomes. That said, I do try to understand how this works on all levels, so that when something goes wrong I have a basic understanding of what might have caused it. Like I said, my original question is in the title of my post: how do I find out how many frames PI stacked, and what's the total integration time of the final photo? I only got into a long description of what I did because, to answer my question, you needed to know what I did. I guess I was too descriptive.
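For the practical half of the question (how many frames, how much shutter-open time), one software-agnostic approach is to sum the exposure keyword over the calibrated subs that survived rejection, since that information lives in each sub's FITS header. This is a sketch using `astropy.io.fits`; the folder name is hypothetical, and some cameras write `EXPOSURE` rather than `EXPTIME`, so both are checked.

```python
# Sum per-frame exposure times from FITS headers to get a frame count
# and total shutter-open time. Not PixInsight-specific: works on any
# folder of FITS subs you kept after rejection.
from pathlib import Path

from astropy.io import fits


def total_integration(folder):
    """Return (frame_count, total_seconds) for all FITS files in `folder`."""
    files = sorted(Path(folder).glob("*.fit*"))
    total = 0.0
    for path in files:
        hdr = fits.getheader(path)
        # EXPTIME is the common keyword; fall back to EXPOSURE.
        total += float(hdr.get("EXPTIME", hdr.get("EXPOSURE", 0.0)))
    return len(files), total


if __name__ == "__main__":
    # "approved_subs" is a hypothetical folder of the frames WBPP kept.
    n, secs = total_integration("approved_subs")
    print(f"{n} frames, {secs / 3600:.2f} h total integration")
```

This only recovers shutter-open time, which, per Adam's point above, is the customary number to report even though it says nothing about per-frame weighting.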
Anthony, thanks for the clear reply. I'll certainly look into this splitting of channels during preprocessing. As an OSC person, I have tried, with some success, to create separate luminance data from a select set of subs, toward the end of generating sharper images with better resolution. But it is not clear to me what the benefit of splitting color channels is, so I'll have to look into that. That said, I have seen a fair amount of bad advice on the internet, but you alluded to that in trusting good sources such as Adam. As you learn from various sources, be critical as you do so. Ask why, and whether something is really necessary, and test your hypothesis; prove to yourself whether any action is worth it. I find that the internet is full of bragging contests: cooling a camera excessively to gain only that extra percent of shot-noise reduction, yet complaining about why the camera sensor keeps frosting over; taking 50 hours of subs on a subject yet not spending the time in post to make that time worth it. Etc., etc.
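For readers wondering what "splitting the channels" of DSLR/OSC data actually means: before debayering, each color channel of a Bayer (CFA) mosaic is just a strided slice of the raw frame, at half the resolution in each axis (which is why this workflow is usually paired with drizzle). A minimal sketch, assuming an RGGB pattern; the actual pattern depends on the camera.

```python
# Split a 2-D RGGB Bayer mosaic into its four color sub-images by
# strided slicing. Each sub-image is half the width and height of the
# original mosaic.
import numpy as np


def split_rggb(cfa):
    """Split a 2-D RGGB mosaic into (R, G1, G2, B) sub-images."""
    r  = cfa[0::2, 0::2]   # red photosites
    g1 = cfa[0::2, 1::2]   # first green
    g2 = cfa[1::2, 0::2]   # second green
    b  = cfa[1::2, 1::2]   # blue photosites
    return r, g1, g2, b


# Tiny 4x4 example so the slicing is easy to follow.
mosaic = np.arange(16).reshape(4, 4)
r, g1, g2, b = split_rggb(mosaic)
print(r.shape)  # (2, 2)
```

Each channel can then be registered, rejected, and integrated on its own (so, as noted earlier in the thread, different channels can end up keeping different numbers of frames) before being recombined into a color image.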