ecuador wrote:
Interesting thread. I was under the impression that when using RAW files DSS does calibration before debayering, which, as I understand the process, is the right way to go and should make a difference. So, using DCRAW first would debayer the images, so they would not be white-balanced, but their calibration would not be as good. Am I correct in that?

It is true that calibration is performed before debayering, and that this can, in theory, affect the quality of the calibration of the light frames.
In order to quantify this effect (and to weigh up the pros and cons of this DSS-specific workaround), we'd have to look at how the debayering is performed (there are various kinds) and whether any data/measurements that were present in the calibration frames (such as hot pixels and gradients) can reasonably be assumed to survive the debayering stage.
When it comes to dark and bias frames, there may indeed be theoretical consequences; instead of, for example, subtracting a warm pixel's bias as a single pixel, we're now subtracting its debayered equivalent. What that equivalent looks like depends on the chosen debayering algorithm. With DSS's bilinear interpolation algorithm, the warm pixel's value will now be "smeared out" over the neighbouring pixels that partially used it for interpolation.
However, in the case of the light frame, the warm pixel's unwanted signal is "smeared out" in exactly the same fashion (bilinear interpolation is a linear operation) and can therefore still be subtracted.
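To make this concrete, here is a minimal numpy sketch (a toy model, not DSS's actual code) that treats bilinear interpolation of a single colour plane as a convolution. Because convolution is linear, the smeared-out warm pixel in the debayered light frame is removed exactly by subtracting the debayered dark frame:

```python
import numpy as np
from scipy.ndimage import convolve

# Classic bilinear weights used when interpolating a sparse Bayer colour
# plane; the exact kernel does not matter for the argument below.
kernel = np.array([[0.25, 0.5, 0.25],
                   [0.50, 1.0, 0.50],
                   [0.25, 0.5, 0.25]])

def debayer_plane(plane):
    """Toy stand-in for bilinear debayering of a single colour plane."""
    return convolve(plane, kernel, mode='constant')

rng = np.random.default_rng(0)
dark = np.zeros((16, 16))
dark[8, 8] = 1000.0                          # one warm/hot pixel
sky = rng.normal(100.0, 5.0, (16, 16))       # stand-in for celestial signal

light = sky + dark                           # the warm pixel contaminates the light
calibrated = debayer_plane(light) - debayer_plane(dark)

# Linearity means the smeared warm pixel cancels exactly:
assert np.allclose(calibrated, debayer_plane(sky))
```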
This, however, becomes a lot more complicated if we use an algorithm like AHD or VNG, which tries to "intelligently" infer detail by looking at the gradients of neighbouring pixels. Obviously these gradients differ between the dark/bias frames and the light frames, as the light frames have actual celestial detail mixed in. This is where things would start going wrong. However, given that DSS only supports AHD as an alternative to bilinear (and Luc does not recommend its use), this scenario does not come into play; bilinear interpolation does not take gradients into account.
In the case of flat frames, we're dealing with low-frequency (i.e. large-scale) gradients or out-of-focus dust specks and donuts. Given that debayering artifacts only occur on very small scales, the difference is negligible. Case in point: you can often significantly blur a master flat without consequences.
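As a quick illustration of that last point (synthetic numbers, not real calibration data), blurring a smooth master flat changes it by almost nothing, because a flat barely contains any high spatial frequencies:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic master flat: a smooth vignetting falloff, normalised to mean 1.
yy, xx = np.mgrid[0:512, 0:512]
vignette = 1.0 - 0.3 * (((xx - 256.0)**2 + (yy - 256.0)**2) / 256.0**2)
master_flat = vignette / vignette.mean()

# A Gaussian blur only touches high spatial frequencies.
blurred = gaussian_filter(master_flat, sigma=5)
print(np.abs(master_flat - blurred).max())   # a fraction of a percent
```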
ecuador wrote:
Also, the problem with DSS white-balancing has only to do with the colors of our image when processing in StarTools? I.e. if I can get the colors I want out of an image even though DSS has white-balanced it, would there be a benefit from trying to get non-white-balanced data?

Now for the benefits of not color balancing (yet):
1. Luminance noise levels are still "virgin" and not impacted by the (arbitrary) scaling of the red, green and blue balancing.
2. Given that gradient subtraction still needs to take place, more detail can potentially be extracted from the highlights by not yet scaling some channels up beyond clipping.
Explanation for 1. Before color balancing, StarTools knows the noise levels in the red, green and blue channels are 1:1:1. Give StarTools color-balanced data (for example 2.4:1:1.4) and all bets are off, since the color balancing has multiplied the signal AND the noise. Given that StarTools processes luminance and color data separately where applicable, precisely to keep noise propagation down as much as possible (both in real terms and psychovisually), you are giving yourself an unnecessary disadvantage by providing StarTools with color-balanced data; there is no way StarTools can reconstruct a 1:1:1 weighting for luminance purposes once the data has been color balanced. It will assign too much weight to (in most cases) the red and blue channels, adopting their noise in the process. It's actually worse than that (the Bayer matrix has two green samples for every red or blue one, making the green channel considerably more accurate, so we're missing out on even more), but that's the gist of it.
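A rough numerical sketch of this (the 2.4:1:1.4 multipliers are hypothetical, and the extra green samples are ignored entirely): with unbalanced data, an equal-weight luminance is the noise-optimal combination; apply the same equal weighting after colour balancing and the luminance inherits the amplified red and blue noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, signal, sigma = 1_000_000, 100.0, 10.0
# Three channels with identical (1:1:1) noise, as in unbalanced data.
r, g, b = (signal + rng.normal(0.0, sigma, n) for _ in range(3))

def snr(lum):
    return lum.mean() / lum.std()

print(snr((r + g + b) / 3))              # equal weights on 1:1:1 data: ~17.3
print(snr((2.4 * r + g + 1.4 * b) / 3))  # same weights after balancing: ~16.3
```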
Explanation for 2. As outlined above, color balancing scales up some channels, (necessarily) clipping their data. However, in almost all cases (sky glow, light pollution, etc.) we need to subtract a bias from the data. We can subtract this bias from non-clipped (non-color-balanced) data, but not from the parts of the data that were clipped by the scaling. Of course, where the data is clipped in both cases (true sensor saturation), subtracting the bias yields the same (undefined) result either way.
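The following toy example (illustrative numbers only) shows the mechanism: three bright but unclipped highlight values survive bias subtraction intact, while the same values pushed past the white point by a 2.4x channel scaling all collapse to one value first.

```python
import numpy as np

white_point = 1.0
bias = 0.2                                  # e.g. sky glow, subtracted later
highlights = np.array([0.55, 0.65, 0.75])   # bright but unclipped pixels

# Colour balancing first: the 2.4x scaling clips all three to the white point.
balanced = np.clip(highlights * 2.4, 0.0, white_point)
print(balanced - bias * 2.4)                # [0.52 0.52 0.52] -- detail destroyed
print(highlights - bias)                    # [0.35 0.45 0.55] -- detail preserved
```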
Obviously, not having DSS color balance the data in the first place would be highly preferable, but for some weird reason that's not an option.
This whole convoluted dcraw workflow is the next-best thing.
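For completeness, here's a hedged sketch of that dcraw step, wrapped in Python for batch conversion (the directory name and file extension are placeholders): it converts each raw frame to a linear, unity-white-balance, bilinear-debayered 16-bit TIFF before the frames go to DSS.

```python
import subprocess
from pathlib import Path

# dcraw flags used below:
#   -4          linear 16-bit output (no gamma, no auto-brightening)
#   -T          write TIFF instead of PPM
#   -o 0        keep the raw camera colour space (no profile conversion)
#   -r 1 1 1 1  unity white-balance multipliers, i.e. no colour balancing
#   -q 0        bilinear demosaicing
for raw in Path("session").glob("*.CR2"):
    subprocess.run(
        ["dcraw", "-4", "-T", "-o", "0",
         "-r", "1", "1", "1", "1", "-q", "0", str(raw)],
        check=True,
    )
```

Run this over lights, darks, flats and bias frames alike, then feed the resulting TIFFs to DSS so no white balancing is ever applied.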