Now that we know how to change the colour balance, how do we know what to actually set it to?
The goal of colour balancing in astrophotography is achieving an accurate representation of emissions, temperatures and processes. A visual spectrum dataset should show emissions where they occur, in the blend of colours they occur in. A narrowband dataset, equally, should be rendered as an accurate representation of the relative ratios of emissions (though not necessarily with the colours that correspond to the wavelengths at which those emissions appear in the visual spectrum). So, in all cases, whether your dataset is a visual spectrum dataset or a narrowband dataset, it should allow your viewers to compare different areas in your image and accurately determine which emissions are dominant, where.
There are a great number of tools and techniques in StarTools that let you home in on a good colour balance. Before delving into them, it is highly recommended to switch the 'Style' parameter to 'Scientific (Color Constancy)' during colour balancing, even if that is not your preferred style for rendering the colour of the end result. The Color Constancy feature makes it much easier to colour balance by eye, thanks to its ability to show continuous, constant colour throughout the image. Once a satisfactory colour balance is achieved, you should of course feel free to switch to any alternative style of colour rendering.
Upon launch, the Color module samples whatever mask is set (note that the mask also ensures the Color module only applies changes to the masked-in pixels!) and sets the 'Red, Green and Blue Increase/Reduce' parameters accordingly.
We can use this same behaviour to sample larger parts of the image that we know should be white. This method exploits the fact that stars come in all sorts of sizes and temperatures (and thus colours!) and that this distribution is usually completely random in a wide enough field. Indeed, the Milky Way is named as such because the average colour of all its stars is perceived as a milky white. Therefore, if we sample a large enough population of stars, we should find the average star colour to be, likewise, white.
We can accomplish that in two ways; we either sample all stars (but only stars!) in a wide enough field, or we sample a whole galaxy that happens to be in the image (note that the galaxy must be of a certain type to be a good candidate and be reasonably close - preferably a barred spiral galaxy much like our own Milky Way).
Whichever you choose, we need to create a mask, so we launch the Mask editor. Here we can use the Auto feature to select a suitable selection of stars, or we can use the Flood Fill Brighter or Lasso tool to select a galaxy. Once selected, return to the Color module and click Sample. StarTools will now determine the correct 'Red, Green and Blue Increase/Reduce' parameters so that the white reference pixels in the mask come out neutral.
To apply the new colour balance to the whole image, launch the Mask editor once more and click Clear, then click Invert to select the whole image. Upon return to the Color module, the whole image will now be balanced by the Red, Green and Blue bias values we determined earlier with just the white reference pixels selected.
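To make the mechanics of this white-reference sampling concrete, here is a minimal sketch in Python/NumPy. It assumes a linear float RGB image and a boolean star (or galaxy) mask; the function names and the choice to normalise against green are illustrative assumptions, not StarTools' actual implementation.

```python
import numpy as np

def white_reference_gains(image, mask):
    """Derive per-channel gains so that the masked-in pixels
    (our white reference, e.g. a wide star field) average to neutral.

    image: float array of shape (H, W, 3), linear RGB
    mask:  boolean array of shape (H, W), True = white reference pixel
    """
    # Mean signal of each channel over the white-reference pixels only
    means = image[mask].mean(axis=0)   # shape (3,): [R, G, B]
    # Scale each channel so its reference mean matches the green channel
    gains = means[1] / means
    return gains

def apply_gains(image, gains):
    # Applying the gains to the whole image balances it by the bias
    # determined from just the reference pixels
    return np.clip(image * gains, 0.0, 1.0)
```

Fixing one channel and expressing the other two as a relative increase or reduction loosely mirrors how per-channel 'Increase/Reduce' parameters are presented.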
StarTools comes with a unique colour balancing aid called MaxRGB. This mode is exceptionally useful when colour balancing by eye, particularly if the user suffers from colour blindness or uses a screen that is not very well colour calibrated. The mode can be switched on or off by clicking on the MaxRGB mode button in the top right corner.
The MaxRGB aid allows you to view which channel is dominant per pixel. If a pixel is mostly red, that pixel is shown as red; if a pixel is mostly green, it is shown as green; and if a pixel is mostly blue, it is shown as blue.
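The rendering behind such an aid is simple to reason about. The sketch below (plain Python/NumPy, purely illustrative and not StarTools code) shows one way to construct a per-pixel dominant-channel view:

```python
import numpy as np

def max_rgb_view(image):
    """Render a MaxRGB-style diagnostic: each pixel keeps only its
    dominant channel, so colour-cast patterns stand out.

    image: float array of shape (H, W, 3)
    """
    dominant = image.argmax(axis=2)   # 0=R, 1=G, 2=B per pixel
    view = np.zeros_like(image)
    for c in range(3):
        # Keep the pixel's brightness, but only in its dominant channel
        view[..., c] = np.where(dominant == c, image[..., c], 0.0)
    return view
```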
By cross-referencing the normal image with the MaxRGB image, it is possible to find deficiencies in the colour balance. For example, the colour green is very rarely dominant in space (with the exception of highly dominant OIII emission areas in, for example, the Trapezium in M42).
Therefore, if we see large areas of green, we know that we have too much green in our image and we should adjust the bias accordingly. Similarly, if we have too much red or blue in our image, the MaxRGB mode will show many more red than blue pixels (or vice versa) in areas that should show an even amount (for example the background). Again, we then know we should adjust red or blue accordingly.
A convenient way to eliminate green dominance is to simply click on an area. The Color module will adjust the 'Green Bias Reduce' or 'Green Bias Increase' parameter in response, so that any green dominance in that area is neutralised.
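One plausible implementation of such a click-to-neutralise aid is sketched below; the sampling box size and the choice to match green to the average of red and blue are assumptions for illustration only, not StarTools' actual logic.

```python
import numpy as np

def neutralise_green(image, x, y, radius=15):
    """Scale the green channel so that, around the clicked point,
    green matches the average of red and blue (no green dominance).

    (x, y): clicked pixel; radius: half-size of the sample box (assumed)
    """
    h, w, _ = image.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    patch = image[y0:y1, x0:x1]
    r, g, b = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
    # A gain < 1 acts like 'Green Bias Reduce', > 1 like 'Green Bias Increase'
    gain = (r + b) / (2.0 * max(g, 1e-6))
    balanced = image.copy()
    balanced[..., 1] = np.clip(balanced[..., 1] * gain, 0.0, 1.0)
    return balanced
```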
StarTools' Color Constancy feature makes it much easier to see colours and spot processes, interactions, emissions and chemical composition in objects. In fact, the Color Constancy feature makes colouring comparable between different exposure lengths and different gear. This allows the user to start spotting colours repeating in different features of comparable objects. Such features are, for example, the yellow cores of galaxies (due to the relative over-representation of older stars as a result of gas depletion), the bluer outer rims of galaxies (due to the relative over-representation of bright blue young stars as a result of the abundance of gas) and the pink/purplish HII area 'blobs' in their discs. Red/brown dust lanes (white light filtered by dust) complement a typical galaxy's rendering.
Similarly, HII areas in our own galaxy (e.g. most nebulae), when rendered in StarTools' Color Constancy style, display the exact same colour signature found in those galaxies; a pink/purple as a result of predominantly deep red Hydrogen-alpha emissions, mixed with much weaker blue/green Hydrogen-beta and Oxygen-III emissions and (more dominantly) reflected blue star light from the bright young blue giants that are often born in these areas and shape the gas around them.
Dusty areas where the bright blue giants have 'boiled away' the Hydrogen through radiation pressure (for example the Pleiades) reflect the blue star light of any surviving stars, becoming distinctly blue reflection nebulae. Sometimes gradients can be spotted where (gas-rich) purple gives way to (gas-poor) blue (for example the Rosette core), as this process is caught in the act.
Diffraction spikes, while artefacts, can also be of great help when calibrating colours; the "rainbow" patterns (though skewed by the dominant colour of the star whose light is being diffracted) should show a nice continuum of colouring.
Finally, star temperatures, in a wide enough field, should be evenly distributed; the numbers of red, orange, yellow, white and blue stars should be roughly equal. If any of these colours are missing or over-represented, we know the colour balance is off.
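A crude way to spot-check such a distribution is sketched below. It assumes star pixels have already been isolated (for example via a star mask) and classifies them by their red-to-blue ratio; the bin boundaries and labels are arbitrary illustrative choices, not calibrated stellar classes.

```python
import numpy as np

def star_colour_census(star_pixels):
    """Crude check of the star colour distribution.

    star_pixels: float array of shape (N, 3), one RGB triple per star.
    Classifies each star by its red-to-blue ratio; in a wide, well
    balanced field the bins should be roughly evenly populated.
    """
    r, b = star_pixels[:, 0], star_pixels[:, 2]
    ratio = r / np.maximum(b, 1e-6)
    # Arbitrary illustrative boundaries, from blue through white to red
    bins = [0.0, 0.8, 0.95, 1.05, 1.3, np.inf]
    labels = ['blue', 'blue-white', 'white', 'yellow/orange', 'red']
    counts = np.histogram(ratio, bins=bins)[0]
    return dict(zip(labels, counts))
```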
Colour balancing of data that was filtered by a light pollution filter is fundamentally impossible; narrow (or wider) bands of the spectrum are missing and no amount of colour balancing is going to bring them back and achieve proper colouring. A typical filtered dataset will show a distinct lack of yellow and some green when properly colour balanced. It's by no means the end of the world - it's just something to be mindful of.
Correct colouring may, however, be achieved by shooting deep luminance data with the light pollution filter in place, while shooting colour data without the filter, after which both are processed separately and finally combined. Colour data is much more forgiving in terms of signal quality and noise; the human eye is much more sensitive to noise in the luminance data than it is in the colour data. By making clever use of that fact and performing some trivial light pollution removal in Wipe, the best of both worlds can be achieved.
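A minimal sketch of that separate-then-combine idea is shown below, assuming an aligned, gradient-corrected luminance frame and a noisier unfiltered colour frame; this is a generic luminance/chrominance combine for illustration, not StarTools' actual compositing pipeline.

```python
import numpy as np

def combine_lum_colour(lum, colour, eps=1e-6):
    """Take detail/brightness from the luminance frame and
    colour ratios from the (unfiltered) colour frame.

    lum:    float array of shape (H, W), deep, filtered luminance
    colour: float array of shape (H, W, 3), noisier unfiltered RGB
    """
    # Per-pixel brightness of the colour frame
    colour_lum = colour.mean(axis=2, keepdims=True)
    # Colour ratios carry the chrominance; luminance carries the detail
    ratios = colour / np.maximum(colour_lum, eps)
    return np.clip(ratios * lum[..., None], 0.0, 1.0)
```

Because the eye forgives chrominance noise far more readily than luminance noise, the noisier unfiltered data only contributes the colour ratios here, while the deep filtered data carries all the detail.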
Many modern OSC cameras have a spectrum response that increases in sensitivity across all channels beyond the visual spectrum red cut-off (the human eye can detect red wavelengths up until around 700nm). This feature allows these cameras to pick up detail beyond the visual spectrum (for example for use with narrowband filters, or for recording infrared detail).
However, imaging with these instruments without a suitable IR/UV filter (also known as a "luminance filter") in place will cause these extra-visual-spectrum wavelengths to accumulate in the visual spectrum channels. This can significantly impact the "correct" (in terms of visual spectrum) colouring of your image. Just as a light pollution filter makes it fundamentally impossible to white-balance back the missing signal, so too does imaging with an extended spectrum response make it impossible to white-balance the superfluous signal away.
The hallmark of datasets acquired with such instruments without a suitable IR/UV filter in place is a distinct yellow cast that is hard (in fact, impossible) to get rid of, due to a strong green response coming back in, combined with the extended red channel tail.
The solution is to image with a suitable IR/UV filter in place that cuts off the extended spectrum response before those channels increase in sensitivity again. The needed IR/UV filter will vary per OSC. Consult the respective manufacturers' spectral graphs to find the correct match for your OSC.