To be more precise, I did combine them first before stretching and then extracting luminance, though I did try the other way too and have in the past. Yes, I fully admit this is a bastardized workflow that goes against best practices. I put that aside for the moment anyway and went with what I was familiar with (learned from other people who, in my limited experience, were probably also teaching poor practices), just to compare.

Non-linear stretching and then combining makes no sense... Is it likely you just stretched until you saw the detail "you liked" (which is mostly visible in the Ha) and then decided it was time to combine?
That's what it is supposed to do and what its purpose is, but squashing and stretching the dynamic range, the detail, and the per-pixel SNR prior to combining doesn't make a whole lot of sense.
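As a toy illustration of why (a minimal numpy sketch with made-up values, using a generic asinh stretch as a stand-in for whatever non-linear stretch is actually applied):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear frames (made-up values): same read-noise floor, but the
# "Ha" signal is 10x stronger than the "SII" signal, as is typical.
true_flux = np.full((512, 512), 0.02)
ha  = true_flux * 10 + rng.normal(0.0, 0.001, true_flux.shape)  # high SNR
sii = true_flux * 1  + rng.normal(0.0, 0.001, true_flux.shape)  # low SNR

def stretch(img, k=500.0):
    # Generic asinh stretch, standing in for any non-linear stretch.
    return np.arcsinh(k * img) / np.arcsinh(k)

# Combine while linear, then stretch: noise adds predictably.
combine_then_stretch = stretch(0.5 * (ha + sii))

# Stretch first, then combine: each band's noise has been multiplied by
# a different local slope of its curve, so the result's noise floor
# depends on where each band happened to sit on its stretch.
stretch_then_combine = 0.5 * (stretch(ha) + stretch(sii))

# The scene is flat, so std() measures noise only.
print("noise, combine-then-stretch:", combine_then_stretch.std())
print("noise, stretch-then-combine:", stretch_then_combine.std())
```

Running this, the stretch-then-combine result carries a noticeably higher noise floor, dominated by whichever band sat on the steepest part of its curve; the per-pixel SNR is no longer something you can reason about after the fact.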
Yeah, I'm sorry if this is confusing; it is kind of hard to describe and be precise about what is actually jumping out at me. A lot of this is just a big-picture initial reaction, and it's hard to pinpoint what I'm seeing since many different things are going on that all contribute to the overall look and feel of an image. Looking at it zoomed out gives a different initial impression than zooming in on the particular features, where they look better. While zoomed out, I think I am seeing some of the effects of the HDR module creating a certain texture/look over parts of the image, which is what I am having a reaction to and am not a big fan of.

Then I am frankly not sure what you are after any more... It shows all the detail you were after, that you indicated in this post, and it is better(!) visible than in your PI/ST hybrid on all my screens. I am really wondering if there is something amiss with your screen calibration. Do you have, say, a phone or tablet handy? What do the images look like on those?
Zooming in, it does look like there is decent detail in those specific areas, and it looks better. Then there is the overall depth and contrast of the nebula against the background, and around the entire "rim". I thought my hybrid one appeared the least flat overall, for whatever reasons and whatever specific factors are contributing to my perception of that.
Maybe my perception would change if I saw it in color, as that gives another form of contrast.
My monitor is not calibrated, but I have looked on a few others. Time is also a factor: I notice different things when viewing at different times, or after I have been staring at any of them for too long.
Now I'm looking at yours again, larger on its own screen, and I'm starting to like it more; zoomed out and smaller in a post, less so. Visual perception is strange.
OK, that's interesting; I think I am starting to understand that.

It does not add noise depending on the band (unless a fixed amount of noise is present: thermal noise, shot noise from skyglow, etc.). This is the main reason why you would make a linear synthetic luminance frame, weighted according to exposure times. It is also the reason why stretching channels individually and then mixing them into a luminance frame yields wildly varying noise signatures.
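To make "weighted according to exposure times" concrete, a minimal sketch of the idea as I read it (real software may also weight by measured per-channel noise rather than integration time alone):

```python
import numpy as np

def synthetic_luminance(frames, exposure_s):
    """Exposure-weighted average of linear, registered frames.

    frames:     list of 2-D arrays, still linear and on the same scale
    exposure_s: total integration time per band, in seconds
    """
    w = np.asarray(exposure_s, dtype=float)
    w /= w.sum()
    return sum(wi * f for wi, f in zip(w, frames))

# Hypothetical integration times: 6h Ha, 3h SII, 3h OIII.
# lum = synthetic_luminance([ha, sii, oiii], [6 * 3600, 3 * 3600, 3 * 3600])
```

Doing this while everything is still linear keeps each band's noise contribution predictable; the same mix performed after per-channel stretches would not.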
This was a while ago, and the memories are from prior processing methods (where I probably learned and was emulating bad techniques from others). I don't think I tried using just an Ha luminance, and I had noisy SII and OIII that were becoming problematic in terms of noise in the image and background.

Not enough in terms of detail? Color? Were the objects poor in Ha signal/detail?
I have not really tried it much with narrowband in StarTools yet.
I have noticed when processing LRGB images that it seemed like I had too much luminance and not enough data in the RGB channels to color it well. I would try to shoot 2-3x as much luminance as each individual RGB channel to get a cleaner luminance, and it often appeared that faint stuff like dust or the edges of galaxies wasn't colored well.
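For what it's worth, a crude sketch of the generic LRGB recombination logic (simplified; not any particular program's algorithm) shows why that happens: all of the color information comes from the RGB ratios, so however deep and clean L is, faint regions inherit the noise of the shallow RGB stack.

```python
import numpy as np

def lrgb_combine(L, r, g, b, eps=1e-6):
    """Take lightness from the clean L frame, color ratios from RGB.

    In faint areas (dust, galaxy edges) r, g, b are noise-dominated, so
    the per-pixel ratios r/y, g/y, b/y are essentially random; the color
    there reads as grey/mottled no matter how deep L is.
    """
    y = (r + g + b) / 3.0 + eps  # crude lightness of the RGB stack
    return np.stack([L * r / y, L * g / y, L * b / y], axis=-1)
```

Which would suggest the usual advice: extra luminance time improves detail, but only deeper (or more aggressively denoised) RGB can rescue the color in those faint regions.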