Modern versions of StarTools more cleanly separate luminance and colour processing.
It is left here for archival purposes, though all steps are still reproducible in modern versions of StarTools.
This tutorial was originally posted to the CloudyNights forum. It goes into detail on how to process Ha, R, G and B data into a new synthetic master luminance frame, and then proceeds to combine it with the R, G and B data to produce a colour image.
The original thread can be found here
http://www.cloudynights.com/ubbthreads/ ... in/5411437
Hi All,
The trick is to process the luminance separately from the colour data.
To help you on your way, here is a quick description on how I processed the luminance part of the image. I'll post the remainder of the steps later when I can find some more time.
First I created a new synthetic luminance master by taking a weighted average of all the signal that we collected across the different bands (Ha: 110m, R: 30m, G: 30m, B: 30m). By doing this, we create a new master that makes optimal use of the captured photons to bring out detail and reduce noise.
Unfortunately the way to do this is currently not very elegant in StarTools (or anywhere else as far as I know). This will change in 1.4 with a revamp of the LRGB module.
Load R and indicate that the data is NOT linear ("This image has already been stretched"), even though it clearly is.
Then, in the Layer module, open G, set blend to 50%, Copy, Paste->Bg, open B, set blend to 33%, Copy, Paste->Bg, open Ha, set blend to 55%.
What we're doing here is progressively creating a luminance master; first we have a 50/50 mix of R and G, then we have a 66/33 mix of (R+G) and B (so we effectively end up with 33% R, 33% G and 33% B) and finally we have a mix of 45/55 of (R+G+B) and (Ha). Why use 45% of the RGB luminance and mix it with 55% of Ha? That's because the R+G+B signal constitutes 90 minutes of exposure time and the Ha constitutes 110m exposure time. So of the 200 minutes of total exposure time for this object, 90 minutes was allocated to R+G+B (90/200=0.45) and 110 minutes was allocated to Ha (110/200=0.55).
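For those who like to see the arithmetic spelled out, here is what those progressive blends boil down to, as a small NumPy sketch (not StarTools code; the file names are placeholders):

```python
import numpy as np
from astropy.io import fits

# Placeholder file names; substitute your own stacked masters.
r  = fits.getdata("R.fit").astype(np.float64)    # 30 min
g  = fits.getdata("G.fit").astype(np.float64)    # 30 min
b  = fits.getdata("B.fit").astype(np.float64)    # 30 min
ha = fits.getdata("Ha.fit").astype(np.float64)   # 110 min

# The same progressive blends as in the Layer module (using exact thirds):
rg  = 0.5 * r + 0.5 * g            # 50% G over R
rgb = (2 / 3) * rg + (1 / 3) * b   # 33% B -> 1/3 R + 1/3 G + 1/3 B
lum = 0.45 * rgb + 0.55 * ha       # 55% Ha (90 vs 110 of 200 minutes)

# Which is exactly an exposure-weighted average of all four masters:
lum_direct = (30 * r + 30 * g + 30 * b + 110 * ha) / 200.0
assert np.allclose(lum, lum_direct)
```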
Now that we have our new synthetic luminance signal, we can start processing it for maximum detail. We will introduce colour at a later stage.
First thing I do is let StarTools know that what we have now is 'trackable' linear data. With that knowledge it will be able to give us enhanced results. To let StarTools know, I click the Track button.
So let's see what we got! I launch the Autodev module which automatically does a global stretch, allocating dynamic range in the best possible way for the detail in your image. As long as 'tracking' is on, global stretches in StarTools are non-permanent and you can redo them whenever you like - even if you have already done other stuff to your image such as local stretching, sharpening, deconvolution, etc.
We can immediately see two things: the image needs cropping (due to stacking artefacts) and there is a faint gradient present. I 'Keep' the stretched image and launch the Crop module. I settle for some values that centre the nebula and get rid of the artefacts. I also make sure I keep track of the values, as I need them later on to crop the colour data in the exact same way (you can write them down, or you can just look them up in the StarTools.log processing log file).
Parameter [X1] set to [52 pixels]
Parameter [Y1] set to [173 pixels]
Parameter [X2] set to [3027 pixels (-321)]
Parameter [Y2] set to [2168 pixels (-328)]
Next, I get rid of the gradient by using the Wipe module.
I use the default values and bump up the dark anomaly setting a little, just in case there are some dead pixels (though I didn't see any).
Parameter [Dark Anomaly Filter] set to [2 pixels]
I do as the Wipe module suggested when I launched it and re-do the global stretch. I use the Autodev module for that (though if you wanted to, you could use the Develop module as well). Things look good. The gradient and artefacts are gone.
AutoDev's job is to find the histogram curve that gives the best possible trade-off between detail and brightness (i.e. 'dynamic range allocation'). It's exactly what you would be doing playing with curves and checking histograms in traditional software packages, only Autodev is infinitely faster and better at it than you are.
Our image has some fine background noise, which AutoDev actually tries to bring out, assuming it is real detail. To stop AutoDev bothering with the fine noise, we can tell it to ignore detail that is finer than a specific number of pixels. I select a value of 1.5.
Parameter [Ignore Detail <] set to [1.5 pixels]
We can immediately see the noise becomes less visible, as Autodev no longer allocates dynamic range for it.
I'm happy with this result, so I click 'Keep'.
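As a rough illustration of the idea behind 'Ignore Detail <' (this is emphatically not AutoDev's actual algorithm), you can think of it as deriving the stretch curve from a copy of the image in which detail finer than about 1.5 pixels has been smoothed away, then applying that curve to the real pixels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_aware_stretch(img, ignore_px=1.5, bins=65536):
    """Histogram-equalisation-style stretch whose curve ignores fine detail."""
    guide = gaussian_filter(img, sigma=ignore_px / 2.355)   # FWHM -> sigma
    hist, edges = np.histogram(guide, bins=bins)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    # Map every original pixel through the curve derived from the smoothed guide,
    # so single-pixel noise no longer attracts dynamic range.
    return np.interp(img, edges[:-1], cdf)

# 'lum' is the synthetic luminance from the earlier sketch.
stretched = detail_aware_stretch(lum, ignore_px=1.5)
```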
Next I launch the Deconvolution module, to see if it thinks our signal is good enough to make any enhancements. I define a preview area and play a bit with the 'Radius' parameter. I don't bother with a star mask just yet, though it's pretty obvious I'm going to need one. After a bit of playing around, I settle for a radius of 2.0 pixels and can't see any improvement beyond 10 iterations. You'll notice that deconvolution in StarTools is not only blazingly fast, it simply refuses to introduce noise and seems to magically 'know' which detail it can enhance and where to back off because it would only introduce noise and artefacts. To make a long story short - you don't need to create detail protection masks or elaborate luminance masks for the Decon module to do its job properly and effectively.
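For reference, the textbook algorithm behind this kind of module is Richardson-Lucy deconvolution. Here is a generic, hedged sketch (not StarTools' implementation, which works on the tracked linear data and adds its own noise and artefact suppression) using roughly the same radius and iteration count:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def gaussian_psf(radius_px=2.0, size=15):
    """Simple Gaussian PSF model with roughly the given radius in pixels."""
    psf = np.zeros((size, size))
    psf[size // 2, size // 2] = 1.0
    psf = gaussian_filter(psf, sigma=radius_px / 2.355)   # FWHM -> sigma
    return psf / psf.sum()

# Generic Richardson-Lucy: ~2 px PSF, 10 iterations, input scaled to 0..1.
decon = richardson_lucy(stretched / stretched.max(), gaussian_psf(2.0), 10)
```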
What we do need, however, is a star mask. Fortunately, that's not hard to do either. I click Mask, Auto, the 'Stars' preset and then 'Do'. Out comes a mask.
I click 'Shrink', 'Grow' and 'Grow' to make sure that small detail noise is not accidentally selected and that stars are well covered by green mask pixels. I'm happy so I click 'Keep'.
Oops! StarTools has picked up a mistake I made - we need an *inverse* star mask (i.e. one that has 'gaps' where the stars are, so that they *don't* get treated by the Decon module). I launch Mask again, and correct my mistake by clicking 'Invert', after which I 'Keep' the result and am returned to the Decon module.
It's a huge improvement - the ringing artefacts are gone. I can see, however, some spurious other pixels around some stars that don't seem to belong there.
They are caused by the de-ringing algorithm which, unlike those in other deconvolution implementations, still makes an attempt to coalesce singularities and sharpen up stars with an alternative algorithm. It's a typical case of 'Your Mileage May Vary'. You can use the Mask Fuzz parameter (which makes the mask we created appear 'fuzzy'/'blurry' to StarTools' algorithms) to control how much of the non-deconvolved image is blended in. In our case, I settle for 3 pixels, as opposed to the default 8 pixels.
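In effect, Mask Fuzz behaves as if the star mask were blurred and then used as a per-pixel blend factor between the deconvolved and the original image. A minimal sketch of that idea (my interpretation, not the module's code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_with_fuzzy_mask(processed, original, mask, fuzz_px=3.0):
    """Blend processed and original data using a blurred ('fuzzy') mask.

    mask: 1.0 where the processed (deconvolved) result may show through,
          0.0 on the star cores we excluded.
    """
    fuzzy = gaussian_filter(mask.astype(np.float64), sigma=fuzz_px)
    return fuzzy * processed + (1.0 - fuzzy) * original
```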
Next I click 'All' to apply the deconvolution settings to the whole image and not just the preview I defined earlier.
I'm happy with this, so I click Keep.
Next, I'm going to use some local dynamic range optimisation to bring out the nebula further by launching the HDR module.
I just use the default settings, which perform an equalisation of local dynamic range, effectively taming regions that are too bright (of which we have few) and enhancing detail in dark regions (of which we have lots). The only parameter I change is the 'Most Affected Detail Size' (to 178 pixels). This parameter specifies the rough size of 'things' in your image that should be affected most by the enhancement. It helps HDR find the right areas in your image that require enhancement and ensures the enhancements look natural. Don't be too concerned about noise being exacerbated - StarTools' 'tracking' feature is keeping track of it in the background and will deal with it later. I'm happy and, once again, 'Keep' the result.
We still have the mask we made earlier for the Decon module active, so we might as well put it to good use. This time I'm going to do some wavelet sharpening with it, making sure that stars are not affected as much. I launch the 'Sharp' module and click 'Next'.
I modify the mask slightly by clicking 'Shrink' twice, so that pixels in the neighbourhood of stars aren't affected as much by the sharpening. I set Mask Fuzz to 4 pixels so non-sharpened and sharpened areas transition smoothly. I also 'overdrive' the sharpening a little by setting Amount to 150%. Finally I set 'Small Detail Bias' to 90%. This unique feature acts as an arbiter when different scales try to enhance the same pixel. In such a case, 'Small Detail Bias' looks at what both proposed modifications bring to the table and decides which gets priority, based on how it will affect the image. Increasing 'Small Detail Bias' gives progressively more weight to decisions in favour of small detail enhancement. It is an important feature that makes wavelet sharpening much more useful and controllable than other implementations.
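As a very loose sketch of the general technique - multi-scale sharpening whose per-scale weights are biased toward fine detail - consider the following (StarTools' actual per-pixel arbiter is considerably more sophisticated than this):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_sharpen(img, amount=1.5, small_detail_bias=0.9, n_scales=5):
    """Split the image into detail layers and boost them, finest layers most."""
    base, layers = img.astype(np.float64), []
    for s in range(n_scales):
        blurred = gaussian_filter(base, sigma=2.0 ** s)
        layers.append(base - blurred)   # detail living at this scale
        base = blurred                  # residual for the next, coarser scale
    out = base
    for s, detail in enumerate(layers):
        # A higher bias gives progressively more weight to the finest scales.
        weight = amount * (small_detail_bias ** s + (1.0 - small_detail_bias))
        out = out + weight * detail
    return out
```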
Also notice that StarTools' wavelet sharpening implementation (as is the case with nearly every other module in StarTools) never clips your data, no matter how crazy the parameters you choose.
The added detail definitely makes a difference. I'm happy with that and I 'Keep' the result.
I'm a big fan of one particular module in StarTools, which is called the 'Life' module.
It was written as a way to manipulate super structures (such as whole nebulae and galaxies) and give the user control over their presentation. Especially in wide(r) field images, busy starfields can be terribly distracting from the object that is really the centrepiece of your image. Such objects seem 'buried' under stars. The Life module can push back the star fields and bring out the nebula, drawing the eye to what your image is really about.
Other uses for the Life module include the complete reconstruction and re-synthesis of super structures that were lost to heavy-handed HDR processing (the infamous 'flat' PixInsight look), or that are otherwise buried irrecoverably in background noise.
The choice is yours at this point whether to apply it or not.
Either way, we'll need a mask that is fully set instead of the partially set mask that we have now (Mask, Clear, Invert, Keep).
For this image, I chose to use it to push back the star field and 'Isolate' the nebula. For this I used the Isolate preset with default values, but used a strength of 50% to make the effect a little more subtle.
I perform one last step before final noise reduction and that is the 'Contrast' module. It helps even out contrast across large parts of the image. It's a cousin of the Wipe module, but created especially for psychovisual medium-to-large scale dynamic range optimisation. I use the default values, with 'Dark Anomaly Headroom' at 100% and 'Dark Anomaly Filter' at 2 pixels.
Now it's time to switch off tracking and let StarTools perform its unique, extremely targeted noise reduction. During the setup phase, I just use the default values and click 'Next'.
Straight away, StarTools does a great job of getting rid of all the background noise.
Really, it's up to taste how you would like to trade off the rest of the noise in the image against the retention of detail. I finally settled on:
Parameter [Scale 1] set to [100 %]
Parameter [Scale 2] set to [100 %]
Parameter [Scale 3] set to [85 %]
Parameter [Scale 4] set to [50 %]
Parameter [Scale 5] set to [50 %]
Parameter [Color Detail Loss] set to [50 %] (irrelevant as this is not a colour image)
Parameter [Brightness Detail Loss] set to [35 %]
Parameter [Structural Emphasis] set to [3 pixels]
Parameter [Edge Repair Strength] set to [10 %]
Parameter [Noise Tracking Influence] set to [150 %] (overdrive StarTools' decision to noise-reduce pixels, based on noise propagation levels it observed throughout our processing)
And that's our luminance frame!
Part 2: adding colour to our image.
This is fortunately a little bit easier. If you haven't done so, save the new luminance master first.
Next, launch the LRGB module which we'll use to create an RGB composite (though without luminance!).
Click 'Red', 'Green' and 'Blue' to load the respective channel data until you end up with a (linear) color image.
Note that I do not include Ha in the red channel - for a 'natural' color, I refrain from letting Ha touch the color balance. Remember, though, that it has had a huge influence on the luminance, so any Ha detail will definitely be visible.
I keep everything at the default settings and hit 'Keep'. Normally, you would input the filter ratios here (color filters don't all have the same permeability), but since we don't know them for this data we'll make corrections by eye later.
First I'm going to crop the image to exactly the same area as we did for the luminance.
In this case:
Parameter [X1] set to [52 pixels]
Parameter [Y1] set to [173 pixels]
Parameter [X2] set to [3027 pixels (-321)]
Parameter [Y2] set to [2168 pixels (-328)]
Next I hit Auto Develop to see what we have. I'm greeted with some sort of green bias signal. Fortunately this should be no match for Wipe - I cancel and run Wipe with the default settings.
I run AutoDev again and things look a lot better - I can see our nebula, but I can also spot some faint discolorations - some red, some green, some blue. These are the hallmarks of so-called 'dark anomalies' - pixels that have an anomalously dark value due to severe noise or CCD defects. The discolorations are caused by Wipe backing off upon seeing the dark anomalies and mistaking them for real background pixels, whereas really they are much darker than the real background.
I hit Cancel to exit AutoDev, click 'Undo' to undo Wipe and launch Wipe again. This time, I'm going to increase the Dark Anomaly Filter setting. This will filter out any (smallish) dark anomalies before presenting the data to Wipe for evaluation. I try 7 pixels and keep the rest of the settings at their defaults.
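The principle of a dark anomaly filter - suppressing small, abnormally dark pixels before the background model is evaluated - can be sketched with a greyscale closing. This is just an illustration of the concept, not Wipe's code, and 'rgb_linear' is a placeholder name for the linear colour stack:

```python
import numpy as np
from scipy.ndimage import grey_closing, gaussian_filter

def gradient_model(channel, dark_anomaly_px=7, smooth_sigma=200):
    """Crude background/gradient estimate that ignores small dark anomalies."""
    # Greyscale closing fills in dark features smaller than the filter size,
    # so dead pixels and severe noise can't drag the background model down.
    cleaned = grey_closing(channel, size=(dark_anomaly_px, dark_anomaly_px))
    # Stand-in for a proper background model: a very large-scale blur.
    return gaussian_filter(cleaned, sigma=smooth_sigma)

# 'rgb_linear' is a hypothetical (H, W, 3) array; process each channel separately.
wiped = np.stack([np.clip(rgb_linear[..., c] - gradient_model(rgb_linear[..., c]), 0, None)
                  for c in range(3)], axis=-1)
```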
I run AutoDev again to evaluate the results. That's better! I set Ignore Detail to 3 pixels to avoid AutoDev allocating dynamic range to noise.
Notice also that Wipe helped us get close to an ideal color balance here. It does that by assuming that the background is neutral.
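The neutral-background assumption itself is easy to sketch (again as an illustration, not Wipe's code): measure the per-channel background level and scale the channels so those levels agree.

```python
import numpy as np

def neutralise_background(rgb, bg_percentile=25):
    """Scale R, G and B so their background levels match (neutral grey sky)."""
    rgb = rgb.astype(np.float64)
    bg = np.percentile(rgb.reshape(-1, 3), bg_percentile, axis=0)  # per-channel background
    return rgb * (bg.mean() / bg)

# 'wiped' is the gradient-removed colour image from the previous sketch.
balanced = neutralise_background(wiped)
```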
Next I save the image to its own TIFF file.
It is now time to combine Luminance and (prepped) RGB - once more I launch the LRGB module and load the luminance file for 'Luminance' and load the RGB file we just created 3 times for R, G and B.
Our nebula is now colored, but some color noise was introduced. To reduce the color noise, I up RGB Blur a little (1.8) - a trick that relies on the fact that the human eye is much less sensitive to color blurring than it is to luminance blurring.
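A hedged sketch of the two ideas at play here - replacing the colour image's own brightness with the processed luminance, and blurring only the colour ratios - might look like this ('lum_final' and 'rgb_prepped' are placeholder names for the two saved files; the LRGB module does this, and more, internally):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lrgb_combine(lum, rgb, rgb_blur_px=1.8, eps=1e-7):
    """Marry a processed luminance with a (prepped) colour image."""
    brightness = rgb.mean(axis=2, keepdims=True)          # brightness of the colour data
    chroma = rgb / (brightness + eps)                     # colour ratios only
    chroma = np.stack([gaussian_filter(chroma[..., c], sigma=rgb_blur_px)
                       for c in range(3)], axis=-1)       # blur chroma, never luminance
    return np.clip(chroma * lum[..., None], 0.0, 1.0)

colour = lrgb_combine(lum_final, rgb_prepped, rgb_blur_px=1.8)
```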
Now it's time to get the color balance right. For that I launch the Color module. To start off with, I bump up the saturation by 300%. You don't have to go with such a high saturation if you don't want to, but it does help with gauging the color balance while working in the Color module.
The colour balance I first do by eye - we know that star colours should vary from red, via orange and yellow, to blue, and that their distribution is roughly equal. Therefore, I try to find values for the Red, Green and Blue ratios that give me that sort of look. I think the values of R 1.0, G 1.1 and B 1.1 give me a good colour balance. However, as I was processing on a non-colour-calibrated laptop, I cannot be sure that what I am seeing is what other people will be seeing. Luckily there is a tool in StarTools that provides a sanity check for those without a colour-calibrated screen. By clicking the 'Max RGB' button in the top right, the view changes to show which channels are dominant. If your image is too red, pixels that are supposed to be 'neutral' (such as the background) will show mostly red. If your image is too green, they will show mostly green. If, however, your image is well calibrated, these neutral pixels will alternate between red, green and blue. I find that I pushed the green too much and could use a little more blue, so I back off the green to 1.03 and up the blue to 1.12.
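The Max RGB view is easy to emulate if you want a feel for what it shows: per pixel, keep only the dominant channel (a rough stand-in, not the actual button):

```python
import numpy as np

def max_rgb_view(rgb):
    """Show only the dominant channel of every pixel (the others set to zero)."""
    dominant = rgb.argmax(axis=2)[..., None]              # 0=R, 1=G, 2=B per pixel
    view = np.zeros_like(rgb)
    np.put_along_axis(view, dominant,
                      np.take_along_axis(rgb, dominant, axis=2), axis=2)
    return view
```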
Lastly, I set Cap Green to 'To Yellow'. We're making use here of the fact that very few objects in space are predominantly green when imaged in RGB. If we do encounter a predominantly green pixel, and we are absolutely certain we got the color balance right, then we can assume that this pixel is predominantly green because of noise in our measurement. Therefore we can cap its green value at the maximum of red and blue, so that it is no longer predominantly green. Finally, I click 'Keep'.
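My reading of what this does, as a minimal sketch (not StarTools' code): cap each pixel's green value at the larger of its red and blue values, so that noise can no longer make a pixel predominantly green.

```python
import numpy as np

def cap_green_to_yellow(rgb):
    """Cap G at max(R, B) per pixel, assuming the colour balance is correct."""
    out = rgb.copy()
    cap = np.maximum(out[..., 0], out[..., 2])     # max(R, B) per pixel
    out[..., 1] = np.minimum(out[..., 1], cap)     # green may not exceed it
    return out
```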
As a double sanity check, I let Wipe be the final arbiter of my color calibration. I keep the default settings but set Dark Anomaly Filter to 7 (as we found out before). I also set 'Cap Green' to Yellow once more for good measure. I see Wipe make barely any difference (except for some final gradient removal), so I was close!
Now I'm going to sharpen the nebula one last time.
In order not to sharpen the stars any further, I'm going to make a star mask. I set 'Exclude Color' to 'Purple (Red + Blue)' and 'Filter Sensitivity' to a less sensitive 10.
When done I click 'Shrink' and 'Grow' 4x until stars are well selected. Finally I click 'Invert'.
In the Sharp module, I set Mask Fuzz to 6 pixels, Small Detail Bias to 96% and Scale 1 to 0 so as not to sharpen any fine noise. Finally, I let it sharpen both luminance (brightness) *and* colour by setting Channels to 'Brightness & Color'. I 'Keep' the result.
As a final step, I run the Isolate preset in the Life module to push back the stars and 'isolate' the nebula. This will not only enhance the nebula, but also its apparent color, as I increase the saturation (125%) of the super structure.
Season to taste!