Stefan B wrote: ↑Wed Apr 27, 2022 8:06 am
That's a huge issue IMHO. In my M81/M82 I unceremoniously masked M81's core and used the undo layering, since otherwise it would have looked ridiculous. Certainly not documentary...
I read about the process of continuum subtraction for cases like this. There, you subtract the red signal from the Ha signal so you aim to retain all those HII regions in the arms but get rid of the signal in the Ha line which comes from continuum emission instead of line emission. As far as I understood you use stretched data for this. But the amount of stretching isn't objective, especially when gamma etc. is manipulated (or some parts of DSOs are highlighted or tamed by HDR in (L)RGB but not in Ha).
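In code terms, that subtraction step boils down to something like this (a minimal numpy sketch; the scale factor k and the array names are placeholders rather than any particular package's implementation, and the whole difficulty is that there is no single objective k once the frames have been stretched differently):

```python
import numpy as np

def continuum_subtract(ha, red, k):
    """Estimate line-only emission from an Ha frame (illustrative sketch).

    ha  : narrowband Ha image (2D array)
    red : broadband red image, sampling mostly continuum (2D array)
    k   : hypothetical scale factor matching the red frame's flux to the
          Ha frame's continuum level (often estimated from star photometry)
    """
    # Line-only signal = total Ha signal minus the scaled continuum estimate.
    line_only = ha - k * red
    # Clip negatives caused by noise or an imperfect k.
    return np.clip(line_only, 0.0, None)
```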
@admin Ivo, is there a possibility in ST to do something similar to continuum subtraction? Or to avoid overly red galaxy cores?
Continuum subtraction is sort of what the NBAccent module does, though it is really more accurate to say that the NBAccent module performs conditional continuum synthesis and replacement.
The NBAccent's premise is based on two assumptions:
- the NB accent data represents a continuum whose RGB manifestation may be custom defined (for example, red channel only or a Balmer series spread across multiple channels in the case of Ha; see the illustrative weights below)
- the visual spectrum data contains a measure of the same signal already and has been color balanced correctly
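To make the first assumption concrete, such a custom RGB manifestation can be thought of as a simple set of per-channel weights. The values below are purely illustrative, not StarTools' actual presets:

```python
# Hypothetical per-channel weights describing how an Ha accent "manifests"
# in an RGB image; numbers are invented for illustration only.
HA_RED_ONLY = {"R": 1.0, "G": 0.0, "B": 0.0}    # red channel only
HA_BALMER   = {"R": 1.0, "G": 0.12, "B": 0.18}  # crude Balmer-series mix (some H-beta in green/blue)
```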
When evaluating a pixel:
- the NB accent data is stretched (AutoDev is chosen here, as it guarantees the best possible use of the dynamic range, maximizing detail). Visual spectrum accuracy takes a back seat to showing exaggerated detail; as you point out, the stretches are different, but the image would mostly not look much different if we didn't exaggerate the NB detail!
- the stretched signal is translated to the target continuum's RGB representation
- the new value for the pixel (per channel) is whichever value is larger: the original or the accented one (see the sketch below)
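Putting those steps together, a minimal sketch of the conditional replacement could look like the following. This is illustrative only; AutoDev-style stretching is stood in for by a placeholder, and the channel weights are hypothetical:

```python
import numpy as np

def nb_accent(rgb, nb, weights=(1.0, 0.12, 0.18), stretch=None):
    """Conditional continuum synthesis/replacement, per pixel and channel.

    rgb     : color-balanced visual spectrum image, shape (H, W, 3), values 0..1
    nb      : narrowband accent frame, shape (H, W), values 0..1
    weights : hypothetical per-channel manifestation of the accent (R, G, B)
    stretch : callable that non-linearly stretches the accent; a stand-in
              for an AutoDev-style stretch that maximizes dynamic range use
    """
    if stretch is None:
        stretch = np.sqrt                          # placeholder stretch
    accent = stretch(nb)                           # exaggerate NB detail
    # Translate the stretched accent into its RGB representation.
    accent_rgb = accent[..., None] * np.asarray(weights)
    # Per channel, keep whichever value is larger: original or accented.
    return np.maximum(rgb, accent_rgb)
```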
Further constraints (or boosts) may be imposed on the accent signal before evaluation. For example, the removal of signal beyond a specific structure size, specifying a noise floor, or artificially boosting the signal linearly or non-linearly.
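As a rough illustration of such pre-conditioning (again a sketch with made-up parameter names; the structure-size constraint is only approximated here by subtracting a blurred copy):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def condition_accent(nb, noise_floor=0.02, gamma=0.7, max_scale_px=None):
    """Pre-condition the accent signal before evaluation (illustrative only).

    noise_floor  : accent values at or below this level are treated as noise
    gamma        : < 1 boosts the signal non-linearly, > 1 tames it
    max_scale_px : if set, suppress structures larger than roughly this size
                   by removing a heavily blurred copy (a crude stand-in for
                   a proper structure-size filter)
    """
    accent = np.clip(nb - noise_floor, 0.0, None)      # impose a noise floor
    if max_scale_px:
        accent = np.clip(accent - gaussian_filter(accent, max_scale_px), 0.0, None)
    return np.clip(accent, 0.0, 1.0) ** gamma          # non-linear boost
```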
The fundamental workings of the continuum replacement outlined above tend to yield useful results, as long as the visual spectrum data truly already incorporates a measure of the NB accent. The emissions from other parts of the spectrum in the visual spectrum image should typically overwhelm any accent signal (even when moderately boosted), effectively protecting star cores and galaxy cores from being touched.
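As a made-up worked example: in a bright galaxy core, the stretched visual spectrum red channel might already sit at 0.92 while the stretched, mapped accent only reaches 0.35, so max(0.92, 0.35) = 0.92 and the core is left untouched. In a faint HII region, the red channel might sit at 0.10 against an accent of 0.30, so max(0.10, 0.30) = 0.30 and the accent shows through.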
Mike in Rancho wrote: ↑Wed Apr 27, 2022 6:56 am
I thought about running another SS, perhaps Isolate, afterwards, but just bumped the denoise some instead. I wonder if SS might be best off after NBAccent overall?
You can absolutely use SS with NB accents in place, but the main reason for not using it afterwards is that NB accents were added in a more arbitrary manner (for example, from the perspective of the Tracking code, such accents just "materialize" out of nothing). Having an algorithm build on something that has no "history" and was added in an arbitrary manner will make any derivatives arbitrary as well. Some modules are able to completely ignore (i.e. compensate for) the accents, but SS is not one of those modules (it's a very superficial / artistic / WYSIWYG type of module).
BrendanC wrote: ↑Wed Apr 27, 2022 8:39 am
@admin - great suggestion about vignetting in Wipe, but in that case, does this mean my subs are displaying excess vignetting?
Indeed - this is just an AutoDev of your luminance set, nothing else done to it, except a crop:
[Attachment: Selection_744.jpg]
...which is also the reason why a basic sample-setting algorithm as found in APP or Siril will not yield correct results in a technical sense; the bulk of the "gradients" here appear to be caused by an error in division (flats), rather than by subtraction/addition (light pollution). Wipe models these things separately, as they are separate issues at their core, requiring different signal processing and math to correct.
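The distinction matters because the two defects call for different inverse operations, roughly like this (a simplified sketch, not Wipe's actual model):

```python
import numpy as np

def correct_flat_error(image, flat_residual):
    # Flat-fielding errors are multiplicative, so they are undone by division.
    return image / flat_residual

def correct_light_pollution(image, gradient_model):
    # Light pollution adds signal, so it is undone by subtraction.
    return np.clip(image - gradient_model, 0.0, None)
```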
Manual sample-setting based algorithms may give you a quick fix if not scrutinised too severely, but they are much closer to arbitrary "doctoring" (using the user as the arbiter of what is "background" and what isn't, rather than an objective algorithm) and are particularly destructive to datasets with "important" faint signal (e.g. the IFN in the case of this dataset, though I don't think it is deep enough yet for good IFN signal to poke through).
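A quick invented example of why that is destructive: say the true sky pedestal is 0.010 and faint IFN sits at 0.013. If a "background" sample lands on the IFN, the fitted background model there becomes roughly 0.013, and subtracting it flattens the IFN to about zero; real signal is removed along with the gradient.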
EDIT: I should also say that the first 1.9 alpha will come with some changes/upgrades to the NBAccent module, which are already in place. Unfortunately, I can't give an ETA on this just yet, as there is a lot to be done still on other parts.