The new Color module demystified
Posted: Sun Oct 20, 2013 8:56 am
So you've had a play with the latest beta, and you may have come across the new Color module.
If all is well, the results you've been able to achieve are a lot better, but there are also a good number of new features in there.
All of a sudden there are parameters like 'Style' and 'LRGB Method Emulation', and some of you have understandably cried out "I thought StarTools was supposed to make things simpler!".
This post is to demystify exactly what StarTools is doing to your colors, and how the 'Style' and 'LRGB Method Emulation' parameters let you wield awesome power, virtually without having to lift a finger.
First up is the 'Style' parameter. Ever since Tracking (and by extension, non-linear processing) was introduced in 1.3, a number of completely new and innovative abilities have been bestowed upon StarTools' modules.
We're all familiar with StarTools' detail-aware noise reduction, deconvolution after stretching, and lately the intelligent 'detail aware' wavelet sharpener. Around 1.3.2 the Color module was revamped with a new way of color compositing, separating luminance and color processing completely. At the time it seemed like the logical way to implement it, having the Tracking data at our disposal, but it turned out to have some more interesting attributes.
This algorithm, which is now embodied in the 'Scientific (Color Constancy)' setting of the 'Style' parameter, is a new, more scientifically accurate way of representing color in your images. Color perception is a difficult subject and there are no right answers to what constitutes 'the right way' to present color in deep space images. I'd love to elaborate on this, but we'll save that discussion for another time. What does exist, however, is a 'worse' way of presenting color in deep space images, and it is, unfortunately, surprisingly common. Take this image of the Orion nebula for example;
It's not the prettiest image, but there's nothing majorly wrong with it. Or so you'd think... Indeed, this is typical of the best that people using other software come up with in terms of color. But here is the rub;
This is the exact same image, with the same color calibration, just stretched less. The core now appears green (which, incidentally, is correct; OIII emissions are dominant). You might also notice that the stars that are visible are much bluer.
So how come, in the previous image, the core was nearly white?
The answer is that this is the result of (erroneously) stretching the color data along with the luminance (brightness) data, which is strangely common practice, even in other software that prides itself on scientific accuracy. Color is a product of discrepancies between the red, green and blue channels. Stretching these channels non-linearly deforms, stretches, squashes and skews these discrepancies based on their in-channel brightness. The result is that hues and saturation levels vary wildly as the user stretches the data to recover more luminance detail.
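To make this concrete, here is a minimal Python sketch. The pixel values are hypothetical (a made-up green-dominant nebula-core pixel, not taken from the actual image), but it shows how a per-channel non-linear stretch washes out the very channel discrepancies that carry hue:

```python
import numpy as np

# Hypothetical linear (unstretched) pixel from a green-dominant nebula core.
rgb_linear = np.array([0.010, 0.020, 0.012])  # R, G, B

def stretch(x, gamma=0.25):
    # A simple non-linear (power/gamma) stretch, applied per channel.
    return x ** gamma

rgb_stretched = stretch(rgb_linear)

# Hue lives in the *ratios* between the channels; compare before and after.
print(rgb_linear / rgb_linear.max())        # [0.5  1.   0.6 ] -> clearly green
print(rgb_stretched / rgb_stretched.max())  # [0.84 1.   0.88] -> pale, near white
```

The harder the stretch, the closer the channel ratios creep towards 1:1:1, i.e. towards white; exactly what happened to the core in image #1.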
Of course, the notion that things out there in space 'magically' change their color based on how we stretch our image is a fairly ludicrous proposition. Yet, sadly, here we are, with a majority of images out there suggesting that this is the case.
Now look at the two images again. The core is clearly green in the second (less stretched) image, while in the first image it is so pale that we might have missed that it is actually green (indeed, many astrophotographers throw their hands up in the air and just depict it as white). However, notice the area adjacent to the core at 3 o'clock in the first image (in the second image it is completely absent). It is decidedly more green. Could the two areas be of similar chemical makeup?
In fact they are! It is just nigh impossible to see, as stretching the luminosity component has (for no good reason at all) drained the core of its color compared to the area adjacent to it.
And here is where StarTools' Tracking-aided color compositing algorithm comes in;
Now let's ignore for a minute that the colors might appear a little oversaturated. In the interest of fairness I applied the same settings to image #1 and image #3, and I had to be a good bit more aggressive with the saturation to show any color in #1.
What we're seeing here is 'color constancy'. No matter how the image was stretched, the colors are directly comparable between areas that vary wildly in their dynamic range. The area adjacent to the core at 3 o'clock is the exact same green as the core. Also spare a moment to look at the stars. They now show color, no matter how dull or bright they are. The full visible black body emission spectrum is covered and, most importantly, temperatures are completely stable, independent of how dull or bright the stars were recorded and subsequently stretched. This is because StarTools stays true to the color data as initially recorded, undoing (thanks to Tracking!) all the stretching applied to the luminance data, to arrive at the 'virgin' colors as they were recorded.
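For the curious, the general idea can be sketched in a few lines of Python. This illustrates the principle only - it is not StarTools' actual algorithm, and the function name, the simple mean-based synthetic luminance and the clipping are my own simplifications:

```python
import numpy as np

def color_constant_composite(rgb_linear, lum_stretched):
    """Combine a stretched luminance with color ratios taken from the
    *linear* data, so hue and saturation are independent of the stretch.
    rgb_linear:    H x W x 3 array of unstretched, color-calibrated data.
    lum_stretched: H x W array, the luminance after non-linear stretching."""
    lum_linear = rgb_linear.mean(axis=-1, keepdims=True)  # synthetic luminance
    ratios = rgb_linear / np.maximum(lum_linear, 1e-12)   # 'virgin' color ratios
    return np.clip(lum_stretched[..., None] * ratios, 0.0, 1.0)
```

Because the ratios come from the unstretched data, two regions with the same chemical makeup end up the same color, however differently their brightness was stretched.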
This is how color in space should be presented from a scientific point of view: reproducible no matter the exposure time or who processed the data, with true separation from the way luminance was processed. Faint Andromedas should show the same colors as bright Andromedas. Short exposure star fields should show the same colors as long exposure star fields, and so on.
Here is another striking example;
vs
Notice how star colors are recovered well into the core of the globular cluster, while the viewer is able to objectively compare star temperatures throughout; colors are constant and comparable across the whole DSO.
Now, as of 1.3.5 there is actually a way to 'dumb down' color compositing to the level of other software (which, perversely, was a lot of work to implement and get right!). StarTools is all about enabling the user, not about pushing philosophies, so if you want to create images with the 'handicapped' traditional look, you now can, using the 'Artistic' setting for the 'Style' parameter.
As a StarTools user you may have felt that your colors were never quite like 'the other guys''; more vivid, just more colorful. And you would have been right. You may have secretly liked it, but you may have been uneasy being proud of the result, as it is 'different' from what a lot of other folks have been producing (your Andromeda may have had a 'strange' yellow core, your M42 may have had an 'odd' green core, your M16 may have had many more tints of red and pink, or your stars may have seemed almost too CGI-pretty to be true). Now you know why, and now you know you can be just as proud of your images - perhaps even more so, as they are more scientifically correct and valuable to look at than the other guy's.
Now, on to the enigmatic 'LRGB Method Emulation' setting.
Over the past two decades, astrophotographers have struggled with compositing color images (i.e. combining separate luminance and color information), leading to a number of different LRGB compositing methods. I won't go into them here, but all invariably attempt to 'marry' luminance and color information. The different techniques all produce different hues and saturations for the same data. And what is true for all of them is that they involve a lot of layering and pixel math to produce.
Because StarTools separates color from luminance until the very last moment (that moment being when you use the Color module), really any method of color compositing can be chosen for combining the data at that moment. The 'LRGB Method Emulation' simply lets you choose from a number of popular compositing techniques at the click of a button. You will notice that all these different settings have a subtle (or sometimes not-so-subtle) impact on the hues (and sometimes saturation and luminance) that are produced. It's just that easy.
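To give a feel for why the different emulated methods produce different results, here are two deliberately simple, hypothetical compositing formulas in Python (these are not the actual methods StarTools emulates); the same L and RGB data yield visibly different hues and saturations depending on how they are married:

```python
import numpy as np

def combine_replace_luminance(L, rgb):
    # Scale RGB so its own luminance matches L; hue ratios are preserved.
    lum = rgb.mean(axis=-1, keepdims=True)
    return np.clip(rgb * L[..., None] / np.maximum(lum, 1e-12), 0.0, 1.0)

def combine_geometric(L, rgb):
    # Blend L into each channel via a geometric mean; same data,
    # noticeably different (less saturated) result.
    return np.clip(np.sqrt(L[..., None] * rgb), 0.0, 1.0)
```

Each 'LRGB Method Emulation' setting is, in essence, a different recipe of this kind, applied for you at the click of a button.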
Even if your data is from a non-LRGB source, this still works, as StarTools will have separated color from luminance during processing by creating a synthetic luminance frame from your RGB-only data where needed. Conversely, as you may have seen in the latest YouTube video, processing LRGB data is now no different from processing DSLR or OSC data. Luminance is separated anyway! The only difference is that you load your data through the LRGB module. It's a big step towards a unified yet extremely powerful workflow for all data sources.
I hope this little write-up has been informative (and hopefully mildly thought provoking) and that the usage of the Color module is now a bit clearer. If you have any questions, do ask away!