xonefs wrote: ↑Sun Oct 24, 2021 11:34 pm
Thanks for giving it a shot. Yeah, this is my challenge: I know my data is generally pretty good and should be capable of producing the level/quality of images I want, but I struggle to get there.
The attempt above looks better than mine from a color standpoint, but still not quite there. I think I am also having an issue earlier on in processing with getting the detail out. If you look at the individual channels in another program after a basic stretch, you can see there is a ton of depth and detail (look at S-II in particular, but also the others). When processing in ST I think I am bringing it out, but once I apply color I realize I didn't, and it is super flat; those details, and the depth and separation of the ridges, are not there. (I am noticing something similar in your attempt: how do you bring out the definition on all those ridges of nebulosity that seem to get muted in ST?) Compare to the individual channel data initially, or even while processing before the Color module, where those features seem more prominent. I had the same problem last time, trying to process Cygnus Wall data in comparison to a PixInsight workflow, which I think I talked about previously on here but never resolved either.
I wish I could see a start-to-finish workflow from anyone creating the kind of images I'm looking for, with typical decent amateur mono data like this, on the latest StarTools release. Most videos I've been able to find are for much older versions of ST, where the info is no longer relevant because the modules have changed, and they use either Hubble data or more problematic DSLR/OSC data.
Quite a few things to unpack here.
1. Color processing is entirely separate from luminance processing. If you see detail "disappear" (or appear, for that matter), your screen's calibration is off, you are pushing saturation hard enough to cause out-of-gamut colors, or you have chosen a compositing method (LRGB Emulation) that explicitly allows for this. Or there is a bug in StarTools (but I am not aware of any).
2. You obviously cannot color balance narrowband images; there is no concept of a white point. What you are doing in the Color module when processing narrowband is balancing the strength of the three narrowband signals.
xonefs wrote:
It seems very tedious and unintuitive trying to adjust each RGB color bias very incrementally.
3. On the contrary, things could not be more intuitive: the red, green and blue sliders directly and linearly control S-II, Ha and O-III, respectively.
4. The Rosette, as an H-II region, is not in the same evolutionary stage as M8. As a consequence, it will look somewhat different.
I think it would really help to take a step back and, rather than processing by trial and error, "have a plan", so to speak, when using a module. Pulling sliders without understanding what they govern can indeed become quite tedious.
Such a plan starts with a better understanding of what sort of signal you acquired, what is happening to your signal as you process it, and how StarTools treats it. For example, in StarTools there is virtually no difference between processing a DSLR dataset and a complex composite, due to the strict separation (and parallel processing) of luminance and chrominance. There are no special secrets, techniques or workflows needed to process complex composites. This is not PI; no hacks are needed, and signal flow is optimal and respected 100%.
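The luminance/chrominance separation described above can be sketched in a few lines of NumPy. This is an illustrative toy model of the principle, not StarTools' actual implementation: color-side operations rescale per-pixel channel ratios, while the luminance that carries the detail is never touched.

```python
import numpy as np

# Toy model (assumed, simplified): with luminance and chrominance kept
# separate, no color operation can alter detail.

rng = np.random.default_rng(0)
rgb = rng.random((4, 4, 3))                     # a tiny stand-in "dataset"

# Split into luminance (detail) and chrominance (per-pixel color ratios).
lum = rgb.mean(axis=2)                          # simple luminance proxy
chroma = rgb / rgb.sum(axis=2, keepdims=True)   # ratios sum to 1 per pixel

# A "color module" operation: rebalance the channel biases.
bias = np.array([1.4, 0.8, 1.0])                # e.g. more red, less green
new_chroma = chroma * bias
new_chroma /= new_chroma.sum(axis=2, keepdims=True)

# Recombine: the new chrominance recolors the SAME luminance.
recombined = new_chroma * 3 * lum[..., None]
print(np.allclose(recombined.mean(axis=2), lum))  # detail unchanged -> True
```

However the biases are pushed around, the recombined image always carries the exact luminance you started with; only the hues change.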
Using only defaults and presets, your dataset looks like this in the Color module with the SHO(HST) preset as a starting point:
[Attachment: StarTools_2787.jpg]
This is, incidentally, extremely close to all other SHO M8 renditions in StarTools when using the SHO(HST) preset. That is, of course, because ST respects the actual recorded signal (e.g. emission concentrations). It's the "science" part of astrophotography: repeat an experiment and you should be able to achieve the same results.
If the defaults are too pastel, you can bump up the saturation in the highlights or shadows:
[Attachment: StarTools_2788.jpg]
If you prefer S-II to poke through more (yielding a more copper tone), increase red, etc.:
[Attachment: StarTools_2789.jpg]
Be absolutely sure to calibrate your screen properly when processing (and viewing) color images, as many consumer-oriented screens are way off and are set to make colors "pop". Compared side-by-side, a well-calibrated screen will show much more muted colors until you really push the saturation.
SCNR (or Cap Green in StarTools) is not a color calibration tool. It is meant for removing green-dominant coloring that has been ascertained to be aberrant (e.g. chrominance noise). You should only use it as a last resort, in case of a data issue.
If you find any remnant green offensive, then you can, of course, reduce the strength of the responsible wavelength. Because you imported your dataset as SHO:RGB and you have chosen an SHO mapping matrix, green is mostly caused by Ha dominance in a region.
Because you imported SHO:RGB, modifying the green slider modifies the Ha contribution, regardless of how the matrix subsequently maps it. In other words, no matter how Ha ends up being used in your coloring, the green bias slider purely controls the Ha strength; no other band.
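A toy sketch of that idea (my own simplified model with a hypothetical mapping matrix, not StarTools code): the bias is applied to the raw band before any palette matrix, so the green bias scales every Ha contribution, wherever the matrix sends it.

```python
import numpy as np

# Assumed, simplified model: bias sliders scale the raw S-II/Ha/O-III
# bands BEFORE any palette matrix is applied.

s2, ha, o3 = 0.2, 0.9, 0.4         # example per-pixel band strengths

# Composite imported as SHO:RGB -> red bias = S-II, green = Ha, blue = O-III.
bands = np.array([s2, ha, o3])
bias = np.array([1.0, 0.6, 1.0])   # reduce the green (= Ha) bias
biased = bands * bias

# Any mapping matrix is applied afterwards; rows map bands to output RGB.
# (Hypothetical matrix, chosen only to show Ha feeding several channels.)
matrix = np.array([[0.5, 0.5, 0.0],   # output R = half S-II + half Ha
                   [0.0, 1.0, 0.0],   # output G = Ha
                   [0.0, 0.2, 0.8]])  # output B = some Ha + O-III

out = matrix @ biased
print(out)  # every Ha contribution is scaled by 0.6, wherever it ends up
```

Even though Ha contributes to all three output channels in this hypothetical matrix, one slider still controls it everywhere, because the bias acts on the band, not on the mapped color.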
E.g. you could reduce Ha (green) to arrive at this:
[Attachment: StarTools_2790.jpg]
Etc. etc.
There is just one important thing to remember: the coloring in your image should, just like detail/luminance, convey important information about reality. In this case it conveys, per pixel, which emissions are relatively dominant. Red pixel? S-II is dominant. Green pixel? Ha is dominant. Blue pixel? O-III is dominant. The same goes for mixes of the three primary colors. Yellow? S-II and Ha are roughly equally dominant (but O-III is less dominant in that area). Purple? S-II and O-III are equally dominant (but Ha is less dominant in that area). And so on, and so forth.
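That color "legend" can be written down as a small lookup. The thresholding scheme below is my own illustration of the dominance idea, not anything StarTools does internally:

```python
def dominant_hue(s2, ha, o3, tol=0.1):
    """Name the hue implied by relative S-II/Ha/O-III dominance
    under an SHO (HST) style mapping. Illustrative only."""
    bands = {"S-II": s2, "Ha": ha, "O-III": o3}
    top = max(bands.values())
    # Every band within `tol` of the maximum counts as co-dominant
    # (the tolerance value is an arbitrary choice for this sketch).
    order = ["S-II", "Ha", "O-III"]
    leaders = tuple(n for n in order if top - bands[n] <= tol)
    hue = {
        ("S-II",): "red",
        ("Ha",): "green",
        ("O-III",): "blue",
        ("S-II", "Ha"): "yellow",
        ("Ha", "O-III"): "cyan",
        ("S-II", "O-III"): "purple/magenta",
        ("S-II", "Ha", "O-III"): "white/grey",
    }
    return hue[leaders]

print(dominant_hue(0.9, 0.3, 0.2))   # S-II dominant  -> red
print(dominant_hue(0.8, 0.75, 0.2))  # S-II ~ Ha      -> yellow
print(dominant_hue(0.7, 0.2, 0.68))  # S-II ~ O-III   -> purple/magenta
```

Reading an SHO image is then just running this table in your head: the hue at a pixel tells you which emission lines dominate there.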
StarTools is not in the business of helping people "fudge" coloring into something "pretty" if that coloring encroaches on an accurate representation of reality. In astrophotography we are dealing with... well... photography, and documenting reality. You have lots of leeway in how you wish to do this, but the goal is always to document reality. As such, any operations that destroy (or invent) signal and meaning (whether detail/luminance or chrominance/color) are much harder to perform in ST.
Does this help?