Mike in Rancho wrote: ↑Sun Aug 21, 2022 5:25 am
As of this week, I finally have a fully-complete 7x2 EFW.
This now has me thinking about NB mapping in ST again, since I will soon do it with my own data.
Congrats!!
I know ST is unique (and keeps us within guardrails) with the compose and color, split parallel luminance and chrominance, plus the pre-fab matrices, but I was still wondering if anyone knows a resource for a better understanding of what is going on with these techniques. Nuts and bolts sort of. I've looked about but haven't found much yet.
I believe in other solutions this is generally, though probably not identically, available via channel pixel math equations. I read them but I can't say I always understand them. Or perhaps more so - I am often skeptical when they are posted. But that could be because I don't understand them yet.
For example, in one case a greater than 1.0 multiplier of Ha was fed into the R channel, with the OIII subtracted from it. I'm baffled, but it is giving me a "red flag" feeling. Could be the obvious of creating a hole in the Ha to stick the OIII through?
This would give me a red flag feeling as well. Mostly from a signal processing point of view, as it opens up clipping scenarios (e.g. what if a Ha multiplier > 1.0 was fed into R and there was no O-III in that area to pull it back down below 1.0? Unless some sort of normalization is applied afterwards...). Plus it makes things rather unintuitive for someone trying to figure out what is going on in an area. Hue creation from tristimulus values doesn't really call for (or work with) subtraction, unless perhaps some sort of out-of-gamut color space is being targeted (which doesn't really make sense in this context IMO).
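For illustration, here's what that failure mode looks like in a minimal Python sketch. The actual equation from that post isn't known, so the 1.2 and 0.3 weights here are purely hypothetical:

```python
# Hypothetical mapping of the kind described above: R = 1.2*Ha - 0.3*OIII
# (coefficients invented for illustration, not taken from any real workflow)
ha, oiii = 1.0, 0.0          # strong Ha signal, but no O-III to subtract

r = 1.2 * ha - 0.3 * oiii
print(r)                     # 1.2 -> out of range; clips unless renormalized
```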
In any event, I notice in the matrix list that all the mapping terms (are they actual addition functions?) add up to 100% for each of the R, G, and B channels as mapped into by the S, H, and O components. This of course seems unlikely to be coincidental, and so must have some good cause. I was thinking perhaps clipping prevention, but then again if these are linear functions that probably isn't a concern (other than star cores?).
Anyway, since we can't go beyond the presets, I know we also probably can't go astray in ST with "wrong" pixel math, and that the mapping just dictates the three hues we will be working with for our relative emission concentrations. For those, I can look at the formulas and get a good sense of what the colors will be.
But I'm still curious and want to understand what I read out there in the wild (not a how to but more of a why, and what is and isn't ethical). If anyone knows any good tutorials...
It really is (almost) as simple as following the assignment laid out in the Matrix parameter options.
Let's take, for example, "SHO 40SII+60Ha,70Ha+30OIII,100OIII".
What is happening internally is:
- The assumption is made that your data was imported as SHO:RGB in the compose module (e.g. as it says on the buttons at the top in that module).
- Therefore, ST assumes that S-II currently purely lives in the red channel, Ha currently purely lives in the green channel, and O-III currently purely lives in the blue channel.
- The Color module then scales the R, G and B channels (aka the S-II, Ha and O-III signals respectively) by the multipliers defined by Red/Green/Blue Bias Reduce/Increase. In other words, by modifying those Red/Green/Blue Bias controls, you are directly throttling the pure S-II, Ha and O-III signals before any remapping is done. Note that the S-II, Ha and O-III signals are normalized after the multiplication so that no channel will ever exceed 1.0 (or in other words, if I, say, massively pull on the green slider, the other channels really just "shrink" by the weighted inverse). For example, RGB 0.6:12:3 would become 0.05:1.0:0.25 after normalization. This is now your input signal for the remapping (see the sketch after this list).
- Finally, the remapping is performed. In the case of our example "SHO 40SII+60Ha,70Ha+30OIII,100OIII", the remapping becomes:
- Rnew = 0.4*Rbiased + 0.6*Gbiased
- Gnew = 0.7*Gbiased + 0.3*Bbiased
- Bnew = 1.0*Bbiased
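If it helps to see those steps in one place, here is a minimal Python sketch of the bias/normalization/remapping sequence described above (illustrative only, not StarTools' actual code). It also answers the "adds up to 100%" observation from earlier: each row of the matrix sums to 1.0, so each output channel is a weighted average that can never push in-range inputs above 1.0.

```python
import numpy as np

# Per-pixel S-II, Ha, O-III values after the Bias multipliers (the 0.6:12:3
# example from above).
sho = np.array([0.6, 12.0, 3.0])
sho = sho / sho.max()                  # normalize -> [0.05, 1.0, 0.25]

# "SHO 40SII+60Ha,70Ha+30OIII,100OIII" as a matrix; rows yield Rnew, Gnew, Bnew.
matrix = np.array([
    [0.4, 0.6, 0.0],                   # Rnew = 0.4*SII + 0.6*Ha
    [0.0, 0.7, 0.3],                   # Gnew = 0.7*Ha  + 0.3*OIII
    [0.0, 0.0, 1.0],                   # Bnew = 1.0*OIII
])

rgb = matrix @ sho                     # -> [0.62, 0.775, 0.25]
print(rgb)
```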
In the case of StarTools that is not entirely the end of the story, however, as only the resultant coloring is used. The brightness component is unceremoniously thrown away, while the separate brightness/luminance from the (parallel-processed) detail/luminance dataset is adopted instead.
How exactly the luminance is integrated with the coloring depends on the chosen LRGB Method Emulation (see docs or in-app help item).
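The emulation methods themselves are documented in-app, but as a generic illustration of the idea, here is a simple luminance transplant via HSV. This is just one common way of doing it, not necessarily any of StarTools' actual methods:

```python
import colorsys

def adopt_luminance(color_rgb, lum):
    """Keep the hue/saturation of the color rendition, but replace its
    brightness with the separately processed luminance (generic sketch)."""
    h, s, _ = colorsys.rgb_to_hsv(*color_rgb)
    return colorsys.hsv_to_rgb(h, s, lum)

# E.g. the remapped color from above, paired with a luminance value of 0.9:
print(adopt_luminance((0.62, 0.775, 0.25), 0.9))
```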
The remapping, however, is quite transparent, and you can calculate from the mapping equations what hue a pure S-II, Ha or O-III signal will have. For example, a pure Ha signal (1.0) in our example would yield:
R = 0.4 * 0 + 0.6 * 1.0 = 0.6
G = 0.7 * 1.0 + 0.3 * 0 = 0.7
B = 0
You can plug that into, say, an online RGB to HSL converter (multiply by 255 if it expects an 8-bit value for each R, G and B channel). For the resultant RGB 153, 179, 0, you will get a hue of 69 degrees, which is a yellow-green.
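A couple of lines of Python with the standard colorsys module do the same check, if you'd rather not use a website:

```python
import colorsys

# Pure Ha (1.0) remapped to RGB (0.6, 0.7, 0.0) per the equations above.
h, l, s = colorsys.rgb_to_hls(0.6, 0.7, 0.0)   # note the H, L, S return order
print(round(h * 360))                           # -> 69 degrees
```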
In other applications (e.g. PixInsight), many (not all) people would now freak out because that's green! (ew!) They would then likely resort to tweaking the equation (in essence picking another hue for Ha), or to SCNR/green-killing tools (selectively modifying pixels above a threshold of green), rather than do the logical/"correct" thing and tweak the input signal. In StarTools this is as easy as changing the Green Bias Reduce/Increase to a point where that yucky green is overwhelmed by the other S-II and O-III signals.
So you get something nicely balanced like this, even though Ha is (of course) dominant:
[Attachment: StarTools_2838.jpg]
That smidgen of green dominance at 9 o'clock is enough to let the viewer know that that area is indeed strongly Ha dominant compared to other areas.
Crucially, the input signal is just attenuated (or boosted) in relation to the other two signals, but is not altered in any other way that is destructive, unpredictable or non-replicable across other datasets/objects/gear. The holy grail of color rendering for documentary photographic purposes is:
- to be able to get comparable coloring across different objects with comparable emissions
and
- to be able to replicate comparable coloring by different people with different gear, but with comparable filters
Narrowband imaging is really no exception. Sure, you have leeway in the hues, but the way these hues behave and the story they tell about the same object should be consistent.
To make a long story short, it's not so much about the pixel math / compositing equation - it just establishes the hue (which is best kept as simple and predictable as possible to help the viewer see the three different emissions). The important bit is the throttling of the input signal to the equation, to draw the viewer's attention to a specific feature/emission via color.
Hope this helps!