Hi Richard, and welcome to the forums!
riccdavis wrote: Wed Jan 26, 2022 10:23 am
My, a lot has changed
You're not wrong!
More on that below.
I’m currently imaging with a vintage Pentax SDHF 75 and an ‘old fashioned’ Moravian G3 11000 mono.
Fortunately, good glass doesn't age.
I have here some S H O data of IC 443 (The Jellyfish) which I have taken. It’s all stacked and calibrated and I’m trying to get something decent out of it using Startools, but I’m struggling a bit. My friend, who uses Pixinsight has already produced a very good image from my data using that software and I am hoping to achieve the same with Startools.
And this is where not all change, these past 17 years, has been good change.
It appears your friend hasn't been completely truthful (or at least has omitted some crucial details). This image does not appear to have been processed exclusively with PixInsight, but rather has been significantly altered ("deep faked") by an algorithm based on neural hallucination (the latter may sound ridiculous, but is actually part of AI research nomenclature). In fact, these images are among the more egregious examples I have seen. Most likely we're looking at the interventions of the Topaz AI suite (obviously not part of PI, as it has no place in astrophotography), which is trained for the purpose of "enhancing" images of people, buildings, vehicles, nature scenes and animals with plausible (but not real) detail.
As a consequence, sadly, much of the fine detail in the two images presented by your friend is entirely made up and does not originate from your actual dataset. A quick comparison with a high resolution, non-AI-tainted image (for example this image by Chris Heapy) should readily reveal the discrepancies in the fine details. The neural hallucination algorithms will have created - out of nowhere - many wavy patterns, "hair", "tendrils", "veins" and "textures" that do not truly occur in the object or, indeed, anywhere in outer space. AI-based detail augmentation tends to be most easily detectable around bright stars, where diffraction patterns thoroughly confuse such detail generators.
Yours truly, like the author of PixInsight, vehemently opposes such manipulations; they quite simply "lie" to your viewers (and yourself) about what is truly out there. By and large, astrophotographers aspire to practice documentary photography (i.e. documenting what is out there to the best of their abilities), and software like ST and PI exists to facilitate exactly this. Particularly in StarTools, everything is focused on making the most of your actual recorded signal, and on keeping artifacts at bay, for the purpose of conveying reality in the most truthful way.
With that out of the way, the color compositing of your friend's image is, as others pointed out, not according to the Hubble Palette (aka SHO palette). The SHO palette is so named because it maps S:H:O to R:G:B (e.g. S-II to red, H-alpha to green, O-III to blue). Depending on the prevalence of the three distinct emissions, this tends to yield the familiar/classic blue/golden coloring with hints of green in areas of particularly strong H-alpha emission.
While false color is a necessity to be able to show the three different emissions, contrary to popular-ish belief, it is not a case of "anything goes". The coloring is meant to convey to a viewer an accurate representation of emission concentrations in any one spot of your image, across your entire image, independent of location or pixel brightness. As such, compositing of the three bands - for the purpose of coloring - must be done in the linear domain; individually stretching the three bands is a big no-no; it completely skews hues and saturations, breaking the requirement that the coloring is an accurate representation of emission concentrations everywhere in your image (incidentally, there is a reason why non-linearly processing/stretching individual channels is similarly not done in terrestrial imaging).
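To make that concrete, here is a tiny numpy sketch (illustrative only - a single made-up pixel, not StarTools code) contrasting a per-channel stretch with a common-luminance stretch;

```python
import numpy as np

# One made-up pixel where H-alpha dominates (values are illustrative only).
s, h, o = 0.02, 0.10, 0.04

# SHO palette: S-II -> R, H-alpha -> G, O-III -> B (still linear).
rgb = np.array([s, h, o])
print(rgb / rgb.max())              # [0.2, 1.0, 0.4]; the ratios encode the hue

# Stretching each channel individually (e.g. a gamma stretch) skews the ratios...
gamma = 0.4
skewed = rgb ** gamma
print(skewed / skewed.max())        # ~[0.53, 1.0, 0.69]; hue drifts toward white

# ...whereas stretching one common luminance and re-applying the *linear*
# color ratios keeps the hue identical at any brightness.
lum = rgb.mean() ** gamma
preserved = lum * (rgb / rgb.mean())
print(preserved / preserved.max())  # [0.2, 1.0, 0.4] again; hue intact
```

Note how the per-channel gamma drags the ratios toward 1:1:1 (washing the hue out toward white), while the luminance-based stretch leaves the ratios - and therefore the hue - untouched at any brightness.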
In more practical terms: in properly processed (and calibrated) SHO images, green-ish tints correspond to a relative dominance of H-alpha. Orange corresponds to a relative dominance of H-alpha and S-II, but a relative paucity of O-III. Blue corresponds to a relative dominance of O-III and a relative lack of H-alpha and S-II. Teal points to a relative dominance of H-alpha and O-III, but a relative lack of S-II, and so on, and so forth. In other words, it is important that you and your viewers can trust the coloring; the colors are not just there to be "pretty".
To achieve such informative/documentary color retention, StarTools processes your detail and coloring separately (yet simultaneously) in one unified workflow. Coloring is kept perfectly linear, while you can process the detail (luminance) non-linearly, unencumbered.
A quick workflow looks as follows;
Compose; load S-II as red, load Ha as green, load O-III as blue. Set the exposure times for the three channels accordingly. StarTools will, behind the scenes, create a properly weighted synthetic luminance (detail) dataset and separate color dataset. You will be processing mostly the detail (in mono), with the coloring popping up here and there. The two aspects - detail and color - are composited once you hit the Color module.
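(For the curious, the weighting might conceptually look something like the sketch below. This is an assumption for illustration - exposure-time weighting with made-up numbers - not StarTools' actual internal code;)

```python
import numpy as np

# Stand-ins for your calibrated, stacked, still-linear band data.
rng = np.random.default_rng(0)
s, h, o = (rng.random((256, 256)) for _ in range(3))

# Hypothetical total exposure times per band, in seconds (yours will differ).
t_s, t_h, t_o = 3600.0, 7200.0, 3600.0

# One plausible weighting: each band contributes to the synthetic luminance
# in proportion to its share of the total exposure time.
total = t_s + t_h + t_o
synthetic_lum = (t_s * s + t_h * h + t_o * o) / total

# The color dataset keeps the three bands separate (and linear) for the
# compositing step in the Color module.
color = np.stack([s, h, o], axis=-1)
```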
AutoDev; to see what you are working with.
Crop; crop away the stacking artifacts.
Wipe; remove the gradient and bias levels in both luminance and color (I increased the Dark Anomaly Filter just a little bit, as the dataset is quite noisy).
AutoDev; We can now non-linearly stretch the detail in earnest. Pick a Region of Interest (click & drag) that includes the parts of the image that are most important and excludes "empty" background. I also increased the "Ignore Fine Details >" parameter to make AutoDev "blind" to the fine background noise.
Contrast; to taste, I used defaults.
HDR; to taste, I used defaults.
Sharp; to taste, I used defaults.
Decon; the only true way of recovering real detail from your dataset. Deconvolution is able to reverse some of the adverse effects of atmospheric turbulence, as well as imperfections in your optics. I sampled a few stars across your image. See the docs on how to operate this module. E.g. to give you an idea;
[Attachment: StarTools_decon.jpg - Decon module settings]
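(StarTools' Decon implementation is its own, but to illustrate the principle, here is a minimal, self-contained Richardson-Lucy sketch with an assumed circular Gaussian PSF; the FWHM value is a placeholder for what you would measure from your stars;)

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(fwhm_px, size=25):
    # Simple circular Gaussian as a stand-in for the true point spread function.
    sigma = fwhm_px / 2.355
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, iterations=30):
    # Classic RL: iteratively re-estimate the scene so that, when blurred
    # by the PSF, it best explains the observed image.
    estimate = np.full_like(image, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

# Stand-in for your stacked, linear luminance; normally loaded from your stack.
lum = np.random.default_rng(0).random((256, 256))
deconvolved = richardson_lucy(lum, gaussian_psf(fwhm_px=3.0))
```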
Color; color and detail are now composited. Start off with the SHO(HST) preset. Use the Bias Reduce sliders to throttle the relative contribution of the bands. To reiterate, this is done entirely in the linear domain, and what you are doing here is simply multiplying (or dividing in the case of Bias Reduce) the signal of an individual band by a specific factor.
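(In code terms, the operation is as trivial - and as hue-safe - as this sketch suggests; the factor shown is hypothetical;)

```python
import numpy as np

# Stand-ins for the linear band stacks from the Compose step.
rng = np.random.default_rng(1)
s, h, o = (rng.random((256, 256)) for _ in range(3))

# Bias Reduce on a band amounts to dividing its linear signal by a factor
# (Bias Increase would multiply); a purely linear re-weighting, so the hue
# relationships remain trustworthy everywhere in the image.
s_bias_reduce = 2.0
rgb = np.stack([s / s_bias_reduce, h, o], axis=-1)
```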
Shrink; to taste, I used the defaults + Unglow preset.
Super Structure; Isolate preset (with a smaller Airy Disc setting of ~13% to better match the field of view) and a dialled-back Gamma (0.75).
Super Structure; Saturate preset (again with a smaller Airy Disc setting of ~13% to better match the field of view) with the saturation amount dialled back to 100%.
Switch off Tracking and perform noise reduction. I used defaults.
You should end up with something like this;
It may not show the fantastic(al) detail of your friend's image, but it certainly approaches the actual detail (and emission concentrations) that I am familiar with from other SHO datasets.
Hope this helps!