PI for preprocessing
Posted: Sun Jan 08, 2023 8:22 am
I'm a couple weeks into my PI trial. Yikes, what these guys subject themselves to.
Still, it's a vast product, and I intended to buy it even before starting the trial, which I still will. Star alignment is really good, there are quite a few tools for inspecting files, the Blink tool is excellent, the plate solving and annotation are pretty cool, and after just a few tries I'm thinking WBPP will be my stacker of choice going forward. For post-processing, I've been playing (fighting?) with it, but just to make myself somewhat literate. The translation table should help me with that too.
Of course there's a learning curve, because...PI, so you have to use a lot of Google and YouTube before you can get anything stacked. But once you get past that hurdle, the features are nice. I was able to use directory load to easily toss in two separate nights of 5 filters, each with their own flats per night, just the way NINA saves them, along with my library darks (two different exposures) and bias, and everything was stacked to the single best reference. ASTAP, which I've been using for a year, cannot do it quite that way, requiring 5 separate stacks which are then registered, and while DSS can stack to a chosen reference, it's more hands-on and I think you have to do it 5 separate times.
The question becomes, what PI/WBPP bells and whistles should/can/shouldn't be used for post in ST, or perhaps just in general if one is concerned with data fidelity. I see we don't have a PI or WBPP preferred stacking workflow.
There are two techniques of concern, I think. First is the namesake weighting of Weighted Batch Preprocessing. At first blush, that seems a little cheaty. I'm not sure how strongly the weighting of any particular sub can be increased or decreased, but is it legitimate to do such a thing based on sub quality, for inclusion in the stack rather than full culling? That said, I suppose much the same thing could happen if you see an iffy sub come into the computer during acquisition and just tell NINA to take an extra one (or two) at the end. That's weighting, in a way. But then those would be real subs. Still, I'm not sure this one is too egregious.
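For what it's worth, here's my mental model of what quality weighting amounts to. This is purely a toy sketch, nothing to do with PI's actual algorithm or its quality metrics; the subs, weights, and noise levels are all made up:

```python
import numpy as np

# Toy sketch of quality-weighted stacking (NOT PixInsight's actual
# algorithm). Each "sub" is a calibrated frame; each weight stands in
# for some per-frame quality score (e.g. from noise/FWHM estimates).
rng = np.random.default_rng(0)
subs = np.stack([100 + rng.normal(0, s, (4, 4)) for s in (1.0, 1.5, 3.0)])
weights = np.array([1.0, 0.7, 0.2])  # cleaner subs count for more

# A plain average treats every sub equally...
plain = subs.mean(axis=0)

# ...while a weighted average tilts toward the better subs, a middle
# ground between including a mediocre sub fully and culling it outright.
weighted = np.average(subs, axis=0, weights=weights)
```

So a down-weighted sub still contributes real photons, just fewer "votes" -- which is part of why I'm not sure it's egregious.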
Second is Local Normalization, which apparently has undergone great improvement and is now considered fit for standard procedure, rather than the use-only-in-emergencies tool it may have been in the beginning.
I know the ST mantra is no background normalizing in stacking, although I am having second thoughts on that a bit. I ran a blink on an hour of 60s L subs from my M78 last month. It goes like a little movie, and one option is to select the same STF stretch for all subs. It was amazing how much difference in brightness occurred in just an hour. I have to think that would turn sigma rejection and averaging into a complete mess.
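To convince myself why that brightness drift would upset rejection, I cooked up a toy example. Every number here is invented, and this is not any stacker's actual implementation -- just the concept of kappa-sigma rejection with and without a per-sub background subtraction:

```python
import numpy as np

# Toy demo of why sky-background drift confuses sigma rejection.
# (All numbers invented; not any stacker's actual implementation.)
sky = np.linspace(100.0, 130.0, 6)        # sky brightens over the hour
signal = 20.0                              # true object flux at this pixel
dither = np.array([0.2, -0.2, 0.2, -0.2, 0.2, -0.2])  # stand-in for noise
pix = sky + signal + dither                # what each sub records

def kappa_sigma_keep(x, kappa=1.3):
    # one-pass kappa-sigma rejection mask (aggressive kappa for the demo)
    return np.abs(x - x.mean()) <= kappa * x.std()

# Un-normalized: the drift dominates the spread, so the first and last
# subs get rejected even though they're perfectly good frames.
raw_keep = kappa_sigma_keep(pix)

# Globally normalized: subtract each sub's own sky level first, and
# rejection only sees the real pixel-to-pixel scatter.
norm_keep = kappa_sigma_keep(pix - sky)
```

In this toy case the un-normalized pass throws away the first and last subs purely because the sky moved, while the normalized pass keeps everything -- which is roughly what I'm worried about with my hour of L subs.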
As far as I know it's only DSS that can turn off all normalizing, at least if using rejection. In ASTAP it's just done and you have no say in the matter.
But that's global normalizing. LN apparently does what it says, normalizing to one of the reference frames in a...spatially variant?...manner. Supposedly it's good to pick your best/darkest sub as the reference, likely one taken at meridian. Depending on your acquisition, there could be no difference at all from global normalizing. The stated purpose, though, is for multiple or moving gradients and/or multi-session acquisition, which of course exacerbates those even more. I have all of those in my backyard, of course.
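My rough mental picture of "spatially variant," as a toy sketch of the concept only -- I have no idea what LN actually does under the hood, and the per-tile median trick here is just the simplest thing I could think of:

```python
import numpy as np

# Toy sketch of the *concept* of local normalization: instead of one
# global offset, estimate a per-region offset that matches a frame's
# background to the reference. (Purely illustrative; this is not
# PixInsight's LN algorithm.)
def local_normalize(frame, reference, tile=4):
    out = frame.copy()
    h, w = frame.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            f = frame[y:y+tile, x:x+tile]
            r = reference[y:y+tile, x:x+tile]
            # median difference per tile ~ local background mismatch
            out[y:y+tile, x:x+tile] -= np.median(f) - np.median(r)
    return out

# Reference with flat sky, and a frame with a left-to-right gradient:
ref = np.full((8, 8), 100.0)
grad = np.tile(np.linspace(0, 10, 8), (8, 1))
frame = ref + grad
fixed = local_normalize(frame, ref)
```

A single global offset would leave the gradient fully intact; the tile-by-tile version knocks it down, region by region. Presumably the real thing uses a smooth model rather than crude tiles, which is where the artifacting question comes in.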
I'm still trying to think my way through this. Again, at first it seems like a red flag for unwanted pre-processing manipulation, but then perhaps not, at least if it's doing it right and not artifacting. The complex mix of gradients should be simplified, and either way it's going to be hit with Wipe -- possibly with very high, or too high, aggression if the gradients are too scrambled?
Ponder ponder.
So, wondering what the official ST thoughts and guidelines on this stuff would be. As well as what's being used out there by anyone else doing PI stacking with ST post, especially in lots of LP.