OSC duoband HOO strategies
Just messing around with my Wizard nebula data while I wait for more clear nights. No broadband stars yet, just ~11 hours of L-Ultimate duoband using the AT130EDT refractor.
My normal process produces about what I expected going in:
Pushing back the red any more than that starts to get less and less attractive to my eye.
The 'playing around' part started with thinking about using Wipe separately on the red vs the green+blue. Why stop there? I processed the G+B channels completely separately as just b&w luminance, with a pointed effort to push the Oiii component as hard as I dared. Most of the push happened in Optidev, where I bumped the initial gamma way up (1.5) and squeezed the shadow dynamic range allocation way down (~5%). I squeezed shadow dynamic range again in Contrast. No HDR or Sharp on the Oiii, just SVD. Then a heavy-handed SS using Isolate, then default NR and saved out as a b&w tiff.
For the Ha, I processed as I normally would - Optidev, Contrast, HDR (toned way down), Sharp, SVD. A rather light touch of SS (dimsmall at ~70% strength) and a light touch with NR.
Then back to Compose with the Ha in red and the Oiii in green and blue. I told Compose to use the Ha as the primary luminance by setting the red time to quite a lot more than the g/b. The only module I used for the composite was Color:
I wouldn't use that version, but it seems promising? Has anyone else played around with separate Ha/Oiii processing for duoband? Any tips and tricks?
Re: OSC duoband HOO strategies
The issue with processing your different signals non-linearly like that is that you are now suggesting relative emissions in places are higher or lower than they are in reality. The whole point of being able to linearly throttle your relative emissions is that someone can look at one part of your image (say, a darker part) and compare its coloring to another part (say, a brighter part) and still draw the same conclusion about emission concentrations.
Processing your different channels non-linearly independently destroys that correlation, and an observer looking at your image can no longer do that.
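To make that argument concrete, here is a small sketch with purely hypothetical pixel values (not from anyone's actual data): a constant per-channel gain scales every Ha/OIII ratio by the same factor, so region-to-region comparisons survive, while independent gammas do not.

```python
# Sketch of the ratio-preservation argument using hypothetical pixel values.
# Linear throttling (a constant per-channel gain) multiplies every Ha/OIII
# ratio by the same factor; independent non-linear stretches do not.

bright = {"ha": 0.80, "oiii": 0.40}  # hypothetical bright region, ratio 2:1
dark   = {"ha": 0.20, "oiii": 0.10}  # hypothetical dark region, same 2:1 ratio

def ratio(px):
    return px["ha"] / px["oiii"]

# Linear throttle: gain 0.5 on Ha, 1.0 on OIII.
lin = lambda px: {"ha": 0.5 * px["ha"], "oiii": 1.0 * px["oiii"]}
# Both regions end up with the same (uniformly rescaled) ratio.
assert abs(ratio(lin(bright)) - ratio(lin(dark))) < 1e-9

# Independent non-linear stretches: gamma 0.5 on Ha, gamma 0.9 on OIII.
nonlin = lambda px: {"ha": px["ha"] ** 0.5, "oiii": px["oiii"] ** 0.9}
# The two regions now show different ratios, i.e. different hues,
# despite the same underlying 2:1 emission ratio.
assert ratio(nonlin(bright)) != ratio(nonlin(dark))
```

The gains and gammas are arbitrary illustration values; any unequal pair of non-linear curves produces the same effect.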
Ivo Jager
StarTools creator and astronomy enthusiast
- Posts: 1166
- Joined: Sun Jun 20, 2021 10:05 pm
- Location: Alta Loma, CA
Re: OSC duoband HOO strategies
All absolutely true! If relative concentrations of Ha vs OIII are something you want to show your audience (or yourself), that is. I mean, the data is there to do it if you want to, though possibly with some quibbling about bandpass widths, particularly in the case of a duo filter, oh and perhaps Bayer bleed...
The problem is perhaps that image2 is just much more aesthetically pleasing, IMHO. The spatial distribution of emissions is still there, though again likely with caveats; but we have plenty of those in everything we are doing.
Maybe with a bicolor there's a bit more need (?) to keep the two hues tied to each other relatively that way, see-saw like; though if Ron were to add another factor into the equation, say RGB stars, one wonders if that still holds as strongly.
Tricolor SHO is another case where one wonders if the three gases must always be kept relatively balanced, as sometimes the hue choices don't really pan out well no matter which matrix is tried, without independent color selection (at the very least as a starting point). And to be honest, a more aesthetic or gripping image gets more attention and would be more likely to lead to questions about the gases. It's kind of hard to explain that the white-grayish fog in image1 is actually oxygen (blue!).
I guess my analogy would be to NASA/ESA imagery. Yes those splashy pictures are meant for outreach, but they are fairly particular about their processing and the researchers do have input and veto power, as they want to ensure the image backs up the science they are doing. But, they frequently blend different bands, widths of bands (especially JWST), and sometimes even fully different spectrum areas, like radio or x-ray. They overlay this data, sometimes even with a visual set, all very much non-linearly of course, in order to reveal the spatial distribution of interesting (or pretty!) things that are going on way out yonder. Relative concentrations are discarded for the purposes of what they are trying to show.
We do similarly in ST with NB Accent, at least for a single NB band, where we are of course blending that against visual. So I guess the big question is: is there something that necessarily ties these particular slices of hydrogen and oxygen (other slices exist), or of hydrogen, oxygen, and sulfur, together such that they must be kept relative? Unless showing relative gas concentrations is your goal.
[Apologies for the rambling. I'm bored tonight. ]
Re: OSC duoband HOO strategies
Thanks, Ivo. I agree that I was willingly violating a fundamental assumption of ST processing. I totally abused the Oiii just to see what came out the other side. That said, I would be quite humbled if anyone used my properly processed version as some sort of scientific evidence for anything.
The idea of always keeping a goal of maintaining relative emission strengths raises (to me) a couple of different questions.
First - I think in the vast majority of emission nebulae the Oiii is completely a subset of the Ha structure. Are there examples of a nebula with particularly strong Oiii regions without correspondingly stronger Ha signal? (not counting PNs, where the ionizing radiation is so strong) Certainly the standard-processed Wizard is a more faithful rendition, with the Oiii serving mostly to just lighten the red regions.
Second - how is preservation of relative emission strengths impacted by using substantially increased exposure times for Oiii vs Ha? This is obviously not an issue for OSC-duoband, but it's pretty common practice in mono imaging.
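On that second question, an idealized, noise-free sketch (made-up photon rates, which is of course exactly the part real SNR complicates) suggests unequal integration time is still "linear" in the sense discussed above: exposure scales a channel by a constant, so every ratio in the image shifts by the same factor and can in principle be normalized back out.

```python
# Idealized, noise-free sketch: exposure time scales a channel linearly,
# so exposing OIII 5x longer multiplies every Ha/OIII ratio by the same
# constant. Cross-region comparisons survive; only the global balance
# shifts, and dividing by integration time undoes it.

ha_rate   = {"A": 8.0, "B": 2.0}   # hypothetical photon rates, regions A and B
oiii_rate = {"A": 4.0, "B": 1.0}   # both regions have a 2:1 Ha/OIII rate ratio

t_ha, t_oiii = 5.0, 25.0           # hours; OIII exposed 5x longer

ha_sig   = {k: v * t_ha   for k, v in ha_rate.items()}
oiii_sig = {k: v * t_oiii for k, v in oiii_rate.items()}

# Raw signal ratios in both regions are scaled by the same t_ha/t_oiii:
raw_A = ha_sig["A"] / oiii_sig["A"]   # (8*5)/(4*25) = 0.4
raw_B = ha_sig["B"] / oiii_sig["B"]   # (2*5)/(1*25) = 0.4
assert raw_A == raw_B                 # region-to-region comparison survives

# Normalizing by integration time recovers the original 2:1 rate ratio:
norm_A = (ha_sig["A"] / t_ha) / (oiii_sig["A"] / t_oiii)
assert norm_A == ha_rate["A"] / oiii_rate["A"]
```

Real data adds QE, bandwidth, and noise on top of this, so the normalization constant is only ever approximately known.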
Re: OSC duoband HOO strategies
That's an interesting point. I would guess it's all about the SNR, and you'd have to put that explanation in your description or the AstroBin stats: 5 hours of hydrogen and 50 hours of oxygen! But that means little unless you are astrophotography-knowledgeable, and even for those of us who are, you can only sort of ballpark it. 'Relative' kind of gets pushed off a cliff, no?
Much may depend on how ST sets the floors and strengths before stashing away color info, and then creating the synth L from that unbalanced SNR. But at minimum that's still altering the luminance to then be painted.
Re: OSC duoband HOO strategies
Apart from different integration times, it gets even more complicated when different filter bandwidths and different quantum efficiencies for the different emission lines are taken into account. That's why I question this concept for drawing conclusions about relative emission strengths, since I guess you'd have to consider so much more than only the pixel brightness of the different channels, even if equally stretched.
Regards
Stefan
Re: OSC duoband HOO strategies
I suppose this is a misunderstanding. Well, not so easy to explain … It’s _not_ about setting ‘absolute’ emission strengths in relation to each other. Following Ivo’s concept, it’s absolutely fine to set, for example, 100 hours of OIII in relation to only one hour of Ha. Or to throttle and balance the intensity of these two gases against each other in order to show what you would like to point out, or just to get a pleasing result. Or to use a high-efficiency sensor to gather OIII, but a lame duck to gather Ha. That’s all fine.
Instead, the point Ivo mentioned is about something else. Say a location A within your image shows an intensity of 70 % for Ha and 70 % for OIII as well. That’s a light gray (probably not exactly, but for the sake of this example). If you stretched your channels equally/dependently (but of course non-linearly), you may have another location B within your image which is darker, say 25 % Ha and 25 % OIII. That’s a dark gray. But: it’s still gray, the same hue. If you on the other hand stretched the two channels _independently_ (and non-linearly) of each other, location B may now show 25 % intensity for Ha, but only 15 % for OIII. This will result in a _different hue_. (Edit *) And this is what Ivo dislikes.
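Dietmar's A/B example can be put into a few lines of throwaway Python (the gamma values are arbitrary, chosen only to land near his 25 %/15 % numbers):

```python
# Dietmar's example in numbers: equal (dependent) stretching keeps equal
# channels equal everywhere, so both locations stay gray; independent
# stretching makes location B's channels diverge, giving it a hue.

A = {"ha": 0.70, "oiii": 0.70}   # location A: 70 % in both channels
B = {"ha": 0.25, "oiii": 0.25}   # location B: 25 % in both channels

stretch = lambda v, gamma: v ** gamma

# Same stretch on both channels: Ha == OIII is preserved at A and B.
for loc in (A, B):
    assert stretch(loc["ha"], 0.5) == stretch(loc["oiii"], 0.5)

# Independent stretches on B: the channels diverge, so B takes on a hue.
b_ha   = stretch(B["ha"], 0.5)    # 0.25 ** 0.5  = 0.50
b_oiii = stretch(B["oiii"], 1.4)  # 0.25 ** 1.4 ~= 0.14
assert b_ha != b_oiii
```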
Dietmar.
Edit: * Whereas location A may remain at 70 % / 70 % and stay gray.
Re: OSC duoband HOO strategies
That makes sense - thanks.
No question - if there's a line between documentary photography and astro-art, my initial post was about getting a running start and long-jumping across the line into astro-art.
Re: OSC duoband HOO strategies
Hi guys!
Dietmar, you're certainly right that I misunderstood Ivo's intention (although he expressed it pretty clearly); thanks for the clarification. I am nonetheless not convinced in terms of documentary value.
Let's talk about relative signal intensity / photon counts, since that's what's actually been measured: 'relative emissions' isn't what's detected, because detection is massively influenced by QE, filter bandwidth (together with focal ratio in fast systems), relative integration times, etc. So I guess we are talking about signal ratios, e.g. Ha / OIII, and that processing shouldn't change these ratios across the image, e.g. between darker and brighter parts, which would surely happen if both signals were stretched independently.
I looked at one of my images and determined the R(ed) / B(lue) ratio at one pixel in two parts of the image. For the bright part I had a ratio of 216/94 = 2.298. In a darker part of the image I got a ratio of 92/41 = 2.244. Pretty similar. I then adjusted the blackpoint (in a pretty coarse manner). For the bright part I then got 213/78 = 2.731, which is +19 %. For the dark part I got 75/19 = 3.947, which is +76 %.
This means that a simple adjustment of the blackpoint massively distorts the signal ratios across the image. I never considered an adjustment of the blackpoint non-documentary. Is it?
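The same effect shows up with a toy example (made-up 8-bit values, not Stefan's actual pixels, and a single shared blackpoint rather than his per-channel adjustment):

```python
# Toy illustration of the blackpoint observation, with made-up 8-bit
# values: subtracting a blackpoint changes R/B ratios, and the distortion
# is much larger in faint regions than in bright ones.

def rb_ratio(r, b, blackpoint=0):
    return (r - blackpoint) / (b - blackpoint)

bright = (200, 100)   # R, B -> underlying ratio 2.0
dark   = (40, 20)     # same underlying ratio 2.0

bp = 10
r_bright = rb_ratio(*bright, blackpoint=bp)   # 190/90 ~ 2.11  (~ +6 %)
r_dark   = rb_ratio(*dark,   blackpoint=bp)   # 30/10  = 3.0   (+50 %)
```

So even this very mild blackpoint of 10 counts shifts the dark region's ratio by roughly an order of magnitude more than the bright region's.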
I guess the same happens by application of sky glow in the FilmDev module.
The ratio is probably not affected if it's basically 1, which is Dietmar's example. What a blackpoint adjustment or something similar doesn't do is invert the ratio: if a ratio is smaller than 1 it stays that way, and if it's bigger it stays that way too. That's not true for independent non-linear stretching. But in the Color module you can certainly invert the ratio: what has been red (Ha / OIII > 1) can become blue (Ha / OIII < 1) by reducing red bias. Right?
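A quick numeric check of that last claim, again with hypothetical values:

```python
# Hypothetical numbers for the claim above: subtracting the same blackpoint
# from both channels distorts a ratio but never flips it past 1, while an
# independent per-channel stretch can invert it.

ha, oiii = 0.50, 0.40                # Ha/OIII = 1.25 > 1 (reddish pixel)

# Same blackpoint offset on both channels: the ratio grows but stays > 1,
# because the subtraction preserves the channels' ordering.
bp = 0.10
after_bp = (ha - bp) / (oiii - bp)   # 0.40/0.30 ~ 1.33
assert after_bp > 1.0

# Independent stretch that pushes OIII hard (gamma < 1 brightens it):
# the pixel's ratio flips below 1, i.e. red turns blue.
after_gamma = (ha ** 1.0) / (oiii ** 0.3)
assert after_gamma < 1.0
```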
So we have adjustments that affect ratios in a quantitative manner (blackpoint adjustment) or even in a qualitative manner (Color module). I guess both are considered documentary. What's the difference to independent non-linear stretching, which does the same?
Apart from the rather technical aspects: the throttling in the Color module is deemed okay, since the intention of throttling back Ha in order to show OIII in a certain area is considered valid. But this often enough doesn't work well, since you get a grey appearance of the nebula (see the frequent discussions on duoband images) which doesn't speak (grey = Ha AND OIII) to many people even in the AP community, let alone to non-AP people. Independent stretching does, and you can highlight OIII areas much better. I assume that the NB images from Hubble published by NASA are processed by stretching the channels independently to tell this story, aren't they? Not that I want to use an argument from authority, but...
I am not sure this isn't documentary. The distinction between documentary and art is probably not a defined line but rather a spectrum. I tend not to consider this independent stretching as crossing that line, or spectrum, too much.
Or am I again totally off here?
Regards
Stefan
Re: OSC duoband HOO strategies
Hi Stefan & all,
Good catch. I think you’re basically right. Setting the black point is used to define the darkest parts of an image. If you set the black point to already visible, brighter values, the data is clipped. Clipping is done by fixed/absolute values, and therefore relative ratios get distorted: there is high distortion in the darker parts and less distortion in the brighter parts. Well … I guess setting the black point in an intended/reasonable manner will not introduce too much distortion … ?
The same goes for setting sky glow in the FilmDev module, I would suppose?
Right. But relative ratios across the whole image are displayed in a consistent way: each relative ratio is identified by its hue, no matter if dark or bright.
The difference is that applying different non-linear stretches per channel destroys that fixed/consistent correlation between ratio and hue.
I’m absolutely with you, documentary processing vs. art is a spectrum – Mike started a thread some months ago in which we discussed quite some aspects of this topic. And/but for me (too), independent non-linear stretching is a substantial step towards ‘art’ within this spectrum.
_But_ - first – actually, I don’t like the term ‘art’ in this case. ‘Art’ sounds like painting or the like. That is really not the case here; everything is driven by the data itself. Second – StarTools already offers quite some features which seem to go towards art in a similar manner – for example, local contrasts applied by the Contrast or Sharp modules. Local contrast adjustments change brightness ratios. The observer can no longer draw a conclusion about the ‘real’ brightness at a specific location in relation to the brightness at another location. But we all consider local contrast to be accepted and documentary, don’t we?
I’m absolutely with you (and Mike) that applying independent non-linear stretches is common practice – for PI users, but even for NASA/ESA publications. And I too think that this is a justifiable way to show the things you want and to tell the story that you want to tell.
Now it’s getting a bit speculative: I don’t think that Ivo sees it much differently. But I could imagine that it is not easy to integrate this feature into StarTools. It might be necessary to reconstruct the current internal software architecture – maybe in a very expensive way. But of course, it’s up to Ivo to tell us about this. (I remember a thread you (Stefan) started some time ago, regarding different stretches for main and NBAccent data. If I remember correctly, the conclusion was that there’s no easy way to implement this in StarTools at this time.)
Now with regard to Jeff’s last post: _if_ it is the case that a _growing_ number of people prefer PI over StarTools, it would be helpful to find out the reason(s), in order to focus Ivo’s valuable and limited time on the most important missing or wanted features. And I too think that this missing possibility to apply independent non-linear stretches might be an important reason for many people to prefer PI.
Best regards, Dietmar.