Can masked stretch be done in ST?

General discussion about StarTools.
opestovsky
Posts: 14
Joined: Fri Oct 22, 2021 6:19 pm

Can masked stretch be done in ST?

Post by opestovsky »

Hi.

So here's a practical question:

Can this
https://www.cloudynights.com/topic/7920 ... ry11405886
be turned into this
https://www.cloudynights.com/topic/7920 ... ry11441699
in Startools, a fake HOO image from OSC with a dual narrowband filter?

Here's the explanation how this was done by the author of the second picture:
The main thing with this is that after you get an integrated RGB image from pre-processing, you then split the RGB channels out into separate R, G and B greyscale images. With a dual narrowband filter, the R channel is effectively an Ha channel. And a combination of the B and G channels is OIII. You can then work on these separately, and combine them back as Ha and OIII images, which I did using a HOO mix (Ha = R, OIII=G, OIII=B).
I use Pixinsight, so apologies if this explanation is too PI. Summary of steps would be:
1. calibrate, debayer, star align, integrate
2. Dynamic Background Extraction (reduce gradients), Colour Calibration (pixinsight offers a narrowband version of this)
3. Split R,G and B channels out using ChannelExtraction. Combine B and G using Pixelmath expression max(B,G)
4. process the two images as necessary - stretch, de-noise etc. With this Pacman, my OIII was too weak to really show any blue - just ended up with white really. So I followed some of the other advice on the thread, and tried using a mask of the OIII image on the Ha image, and actually reduced the intensity of the Ha a bit, which helped the OIII show through.
5. Combine using Pixelmath as above for HOO. For a pseudo SHO I have used R=Ha, G = (Ha*0.5)+(OIII*0.5), B = OIII.
6. adjust saturation, final stretch, denoise, whatever. I think I also did a light bit of HDR with this one too (HDRMultiscaleTransform)
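For those without PixInsight, steps 3 and 5 of that recipe boil down to simple per-pixel array math. A minimal NumPy sketch (toy random data standing in for the real stacked planes; the stretching/denoising of step 4 is omitted):

```python
import numpy as np

# Toy stand-ins for the debayered, background-extracted R, G, B planes
# (values in [0, 1]; real data would come from the stacked image).
rng = np.random.default_rng(0)
R = rng.random((4, 4))   # effectively Ha with a dual narrowband filter
G = rng.random((4, 4))
B = rng.random((4, 4))

# Step 3: R is treated as Ha; OIII is the per-pixel max of B and G,
# mirroring the PixelMath expression max(B, G).
Ha = R
OIII = np.maximum(B, G)

# Step 5: HOO mapping (R = Ha, G = OIII, B = OIII) ...
hoo = np.dstack([Ha, OIII, OIII])

# ... or the pseudo-SHO variant quoted above:
# R = Ha, G = (Ha*0.5)+(OIII*0.5), B = OIII.
sho = np.dstack([Ha, 0.5 * Ha + 0.5 * OIII, OIII])
```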
I think this is the key to this whole procedure:
"With this Pacman, my OIII was too weak to really show any blue - just ended up with white really. So I followed some of the other advice on the thread, and tried using a mask of the OIII image on the Ha image, and actually reduced the intensity of the Ha a bit, which helped the OIII show through."

I think this is called masked stretch in Pixinsight. I believe it is a histogram transformation that is done only to pixels defined by a mask.

If this is possible, could you show how to do this? That would be great.

I don't have his data, but here's mine: should be the same for the purpose, as we used similar filters.
This data was stacked in DSS with recommended parameters, except one - "Per channel background calibration" was on.
The stacking artifacts may extend up to +-110 pixels on each side. All of the other details about the data are in the first link above.
https://drive.google.com/file/d/1ClEtdm ... sp=sharing

Thanks.
Mike in Rancho
Posts: 1166
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Can masked stretch be done in ST?

Post by Mike in Rancho »

Same as acrh2?

No masks involved in ST's stretching (AutoDev, FilmDev) that I am aware of. Seems questionable to me anyway, though it also seems there sure is a lot of it going on. Far more than I thought when I started doing this. At least the linear fitting I can somewhat understand.

But as you captured a whole bunch of L-eNhance data, why not just do it as a real HOO? Unless you only consider it real and not fake when done with a mono cam?

ST is pretty much built for it. Use compose to load your same dataset into R, G, and B; choose bicolor from OSC/DSLR, which as you can see from the description splits out R for your H, and combines 2xGB for your OO; process normally. When you get to color, select bicolor preset. The default matrix when doing so is HOO, though there are several others. Style and LRGB will change, but you can alter them if desired. Then just a matter of working out your saturations and balance. B and G are the same O. In most cases you will bottom those sliders out, and use R (Ha) bias decrease, in order to balance out the far stronger Ha.

Here's a very quick run at your data, what they would call a hack rather than actually working it up nicely, just to show the Ha-OIII ratio that you captured. Pretty similar to some of the ones I posted of my data. No surprise as I used the L-eNhance also.

opestovsky pac HOO ST8 1B.jpg

Now, if instead you mean "art," which seems to be what all this masking and subtracting and manipulating stuff is, you could theoretically do that also, as well as any channel splitting and saving you wish to. The compose module can do all sorts of things - all in the descriptions, manual, and user notes by Guy here. Then it would just be a matter of processing them in different manners to get your base files, and then your own talent in blending them together selectively using the Layer module and Mask creation.
opestovsky
Posts: 14
Joined: Fri Oct 22, 2021 6:19 pm

Re: Can masked stretch be done in ST?

Post by opestovsky »

Mike in Rancho wrote: Sat Oct 23, 2021 11:53 pm
Thanks, Mike.
Same as acrh2?
Reporting for duty.
No masks involved in ST's stretching (AutoDev, FilmDev) that I am aware of. Seems questionable to me anyway, though it also seems there sure is a lot of it going on. Far more than I thought when I started doing this. At least the linear fitting I can somewhat understand.
I don't get the whole idea behind the "moral" objections Ivo has against this masked stretch business. It seems to me like it's a useful tool to make pretty nebula pictures. If it doesn't fit with the philosophy of ST's noise evolution and tracking, well, it could be one of those post-post-processing modules.
But I don't know much about it.
But as you captured a whole bunch of L-eNhance data, why not just do it as a real HOO? Unless you only consider it real and not fake when done with a mono cam?
Well. There's significant Ha signal leaking into the green channel of O3. So, yeah. It's kind of fake HOO because the Ha and O3 signals aren't properly separated.
ST is pretty much built for it. Use compose to load your same dataset into R, G, and B; choose bicolor from OSC/DSLR, which as you can see from the description splits out R for your H, and combines 2xGB for your OO; process normally. When you get to color, select bicolor preset. The default matrix when doing so is HOO, though there are several others. Style and LRGB will change, but you can alter them if desired. Then just a matter of working out your saturations and balance. B and G are the same O. In most cases you will bottom those sliders out, and use R (Ha) bias decrease, in order to balance out the far stronger Ha.
Thank you for explaining this. I played with Compose a little bit for the first time yesterday. I still don't quite understand how things work.
But let me ask you this - when you select the exposure lengths for the RGB channels, does that affect the proportion of Ha/O3/S2 data that is used in blending and creating the RGB image? So, suppose that I wanted to really make the O3 contribute more to the final image. Would I need to lower the exposure of the O3 dataset in Compose?
Here's a very quick run at your data, what they would call a hack rather than actually working it up nicely, just to show the Ha-OIII ratio that you captured. Pretty similar to some of the ones I posted of my data. No surprise as I used the L-eNhance also.
Thanks for taking a stab at this. Difficult to tell what is going on because the stars are out of control. I'll try this myself.
Still learning how to remove stars using ST. It seems a lot more involved than Starnet++.
Now, if instead you mean "art," which seems to be what all this masking and subtracting and manipulating stuff is, you could theoretically do that also, as well as any channel splitting and saving you wish to. The compose module can do all sorts of things - all in the descriptions, manual, and user notes by Guy here. Then it would just be a matter of processing them in different manners to get your base files, and then your own talent in blending them together selectively using the Layer module and Mask creation.
This grasshopper still has much to learn.
I wish Ivo would rewrite the manual to be more newbie friendly - instead of explaining what different sliders and options do to the data on programmatic level, it would be useful to know how they affect the final image, with examples.
admin
Site Admin
Posts: 3381
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne
Contact:

Re: Can masked stretch be done in ST?

Post by admin »

Hi,

By masked stretch, I assume you mean the PI operation?

If so, then I will just repeat here what I wrote in another thread;
admin wrote: Mon Aug 30, 2021 1:52 am Masked stretching is a very crude tool that really has no place in any modern image processing software.
It has the potential to introduce artifacts, but most of all, it is sub-optimal.

Non-linear stretching in StarTools by means of AutoDev (please note FilmDev should not be used unless you wish to emulate film!) serves a very specific purpose. From the AutoDev docs;
Keeping in mind AutoDev's purpose
The purpose of AutoDev is to give you the most optimal global starting point, ready for enhancement and refinement with modules on a more local level. Always keep in the back of your mind that you can use local detail restoration modules such as the Contrast, HDR and Sharp modules to locally bring out detail. Astrophotography deals with enormous differences in brightness; many objects are their own light source and can range from incredibly bright to incredibly dim. Most astrophotographers strive to show as many interesting astronomical details as possible. StarTools offers you various tools that put you in absolute, objective control over managing these enormous differences in brightness, to the benefit of your viewers.
E.g. StarTools' "do-it-once-do-it-right" mantra sees you gradually refine your image (the suggested workflow is no accident - it progresses from coarse to fine detail enhancement), much like a sculptor progresses from a coarse block of marble to an intricately detailed sculpture. The first order of business is to establish a stretch that is as neutral as possible and does not pick "winners" in the shadows, midtones, or highlights. This is your coarse block of marble. Masked Stretch (or levels and curves, etc.) is not concerned with this notion at all, and were you to start off with a Masked Stretch image, you would have a harder time wielding the tools that are supposed to build on that "base".

Or, taking a globular cluster as an example: you would find a stretch with AutoDev that is the best compromise between showing the entirety of the cluster (including its fainter stars in the periphery) and resolving (probably only just) the core. You then progress - to taste - resolving the core with the Contrast module (medium-to-large local dynamic range optimization) and - most useful for globs - the HDR module (medium-to-small local dynamic range optimization; vary Detail Size Range if needed). You finish off with deconvolution, which will gladly use the "helpful" dynamic range allocation to resolve the finest details.

All the steps that came after AutoDev will have achieved better results thanks to AutoDev's initial neutral dynamic range allocation. It means these tools don't have to work harder than absolutely needed to bring detail back from the brink. The latter is the crux of the matter and why tools like Masked Stretch or levels & curves etc. are considered sub-optimal and best avoided.

All that said, if you have a dataset where the use of MaskedStretch yielded a significantly better end result, please feel free to share the dataset and the result itself. I am always looking to improve StarTools or better cater to edge cases, in case these are not covered.
FWIW, the "mask" part of masked stretch refers to a luminance mask, which is not objectionable as it is not selective (e.g. it's not "drawn" by any sort of manual intervention). The concept is the same as, for example, the "Brightness Mask" in the Layer module; the brightness of the input pixel throttles the brightness of the output pixel. As long as this is done across the entire image, there are no discontinuities and the operation is algorithmically reversible without any sort of manual intervention.

Any sort of manipulations that are selective and/or taint the integrity of the data are - regardless of my (or anyone else's) personal preferences - problematic for further signal processing. This includes star removal (there is almost never a good reason to do this, if you think there is - please let us know what you're trying to do and we'll offer an alternative).

These sorts of manipulations are a little like saying 1+1 is 2, except in some cases where I'd like it to be 3. The whole house of cards collapses in terms of the mathematics and physics that are supposed to govern the signal in your entire image. The signal in every pixel is connected to every other pixel in your image. What you do to one pixel has consequences to other, sometimes even far-flung, pixels elsewhere. Whether it is noise stats or point spread functions.

Hope this helps!
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1166
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Can masked stretch be done in ST?

Post by Mike in Rancho »

Hmmm. There's been a lot of discussion of this lately in the past couple CN monthly targets, Tulip last month but particularly the Pacman this month. It's possible I'm just not understanding how PI works. :think:

Linear Fit of the channels was explained, and I do think I understand that. Raising all the channels to the same background, essentially, could boost something weak like OIII, though it seems to me that might bring a lot of noise along with it.
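If I have it right, a linear fit is just a least-squares scale-and-offset to match a reference channel. A hypothetical sketch (my own illustration, not PI's actual LinearFit code):

```python
import numpy as np

def linear_fit(channel, reference):
    """Find a, b minimizing ||a*channel + b - reference||^2, then apply them."""
    A = np.column_stack([channel.ravel(), np.ones(channel.size)])
    (a, b), *_ = np.linalg.lstsq(A, reference.ravel(), rcond=None)
    return a * channel + b

# Toy example: a weak OIII channel that is half the red channel plus a pedestal.
red = np.linspace(0.1, 0.9, 100).reshape(10, 10)
oiii = 0.5 * red + 0.02
fitted = linear_fit(oiii, red)   # now matches red's scale and background
```

Note the same scaling that lifts the weak channel's background also amplifies its noise, which is the concern above.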

The masking talk does leave me further befuddled, but you say not objectionable? Something akin to an automask or global selection parameters (threshold and so forth)? So when JB was trying out my data in his PI, he mentioned that he took the OIII channel, applied levels and curves to it, and then created a mask from it. That mask was then applied when doing color, thus boosting the blue/OIII.

Other discussions of techniques seemed even more suspect to me - such as using said OIII mask as a reduction upon the Ha channel, apparently therefore letting the OIII through when all is said and done.

Dunno...some of this stuff "sounds" selectively manipulative, but maybe it's just the way stuff has to be done in PI. :?:


OP, I hear you on the OSC, there's always going to be Bayer bleed over, or leakage, whatever you want to call it. A compromise, and imperfect, but not so much that I would say fake. And of course we both are dealing with the infamously-pesky Hb line in the B+G, which is probably a bigger hindrance than Bayer bleed. I didn't think it through well before ordering the L-eN, which is still a cool filter.

Your other question we just discussed here a few pages back, in a way. Ivo explained the bicolor balancing in a pretty useful way. But also, yes I have tried adjusting the exposure sliders in compose in this situation, as an experiment. It does work - on the synthetic luminance, which is all those sliders control. I believe having one channel set lower will boost it relative to the other (meaning I think it makes up for it, wanting them to all be evenly exposed). However, that may cause more problems than cures, and likely boost a bunch of OIII noise into the synthetic L.
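As an illustration only (this is my guess at the principle, not ST's actual formula): if the synthetic luminance normalizes each channel to a common exposure before averaging, then understating the O-III exposure boosts its contribution, along with its noise:

```python
import numpy as np

def synthetic_lum(channels, exposures):
    # Hypothetical: scale each channel as if it had been exposed as long as
    # the longest one, then average. A channel with a lower stated exposure
    # is therefore boosted relative to the others.
    t_max = max(exposures)
    scaled = [c * (t_max / t) for c, t in zip(channels, exposures)]
    return np.mean(scaled, axis=0)

ha = np.full((2, 2), 0.8)
oiii = np.full((2, 2), 0.2)

even = synthetic_lum([ha, oiii], [300, 300])     # Ha dominates the luminance
boosted = synthetic_lum([ha, oiii], [300, 100])  # O-III tripled relative to Ha
```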

Here's the link, it started off a little different but then the questions continued getting more details on bicolor in Color as well. viewtopic.php?f=10&t=2366
opestovsky
Posts: 14
Joined: Fri Oct 22, 2021 6:19 pm

Re: Can masked stretch be done in ST?

Post by opestovsky »

admin wrote: Mon Oct 25, 2021 2:34 am Any sort of manipulations that are selective and/or taint the integrity of the data are - regardless of my (or anyone else's) personal preferences - problematic for further signal processing. This includes star removal (there is almost never a good reason to do this, if you think there is - please let us know what you're trying to do and we'll offer an alternative).
Thank you for a detailed reply.
I read some threads here on making fake HOO pictures from OSC data taken with dual narrowband filters.

I followed the procedure from the thread titled "Crazy idea or not HOO approaches Soul neb."
Here's what I got.
I was going to call it "quick and dirty," but it certainly wasn't quick. And it definitely is dirty.
Image
But how do I deal with these crazy stars?
Look at Mike's image for example. Mine started out very much in a similar way.
I had to mask the stars out in the color module, which also removed most of their color and created cyan halos due to imperfect masking.
What could I do to make stars better?
Thanks.
admin
Site Admin
Posts: 3381
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne
Contact:

Re: Can masked stretch be done in ST?

Post by admin »

opestovsky wrote: Mon Oct 25, 2021 6:55 am I read some threads here on making fake HOO pictures from OSC data taken with dual narrowband filters.
There is (usually) nothing fake about them! When shooting with, for example, an Optolong L-Extreme with an OSC or DSLR, the result is true Ha and true O-III neatly separated into red (Ha), mostly green (O-III) and some blue (O-III).
how do I deal with these crazy stars?
There are many workarounds and your solution (to desaturate them) is one.
It depends on what you find "crazy" about the stars? Their color? Their profile? What would you like them to be (and what is your reasoning)?
Indeed, stars accumulate signal in Ha much slower than O-III. Therefore they always appear cyan. The same thing can be observed in SHO (HST) images where stronger S-II and O-III (red + blue) vs weak Ha (green) yields quite purple stars.
Mike in Rancho wrote: Mon Oct 25, 2021 5:11 am The masking talk does leave me further befuddled, but you say not objectionable? Something akin to an automask or global selection parameters (threshold and so forth?). So when JB was trying out my data in his PI, he mentioned that he used the OIII channel to levels-and-curves it and then create a mask. That mask was then applied doing color, thus boosting the blue/OIII.
It's just the word "masking" that may be throwing you off. :)
Think of a luminance mask as a greyscale image. Now imagine you have two other images (we'll call them image A and image B).
A luminance mask simply takes the value of the greyscale image (black, white, or somewhere in between) at each pixel and uses it to blend image A and image B at that same pixel location. It does that for every pixel. Black = use the pixel from image A, white = use the pixel from image B, grey = 50% from pixel A and 50% from pixel B. And so on, and so forth.

Often the luminance mask is derived from image A or B. Often the luminance mask is even image A or B itself.

You can, of course, create or manipulate the luminance mask manually (which is indeed a no-no), but often the luminance mask is created algorithmically.
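In code, that blend rule is a one-liner. A minimal sketch (mask and images normalized to [0, 1]):

```python
import numpy as np

def lum_mask_blend(mask, A, B):
    # Black (0) -> pixel from A, white (1) -> pixel from B,
    # grey -> proportional mix, applied across the entire image.
    return (1.0 - mask) * A + mask * B

A = np.zeros((2, 2))                         # all-black image
B = np.ones((2, 2))                          # all-white image
mask = np.array([[0.0, 0.5], [1.0, 0.25]])
out = lum_mask_blend(mask, A, B)
```

Because A is all black and B is all white here, the output simply reproduces the mask values, which makes the throttling behavior easy to see.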

PI in particular relies heavily on luminance masks. The reason for that is that it doesn't have signal evolution Tracking, so the best you can do in PI is to try to approximate how the signal changed by looking at its brightness; if something is bright it was probably stretched a lot, if something is dark it was probably not stretched a lot. This assumption, of course, falls apart very quickly once you do any sort of local dynamic range optimization, decon, or, really, most operations beyond a global stretch. Nevertheless, it is this very crude assumption that drives most operations in PI - it simply doesn't have the per-pixel history available that StarTools has, in order to figure out how a pixel changed from its linear state to the final image.
Other discussions of techniques seemed even more suspect to me - such as using said OIII mask as a reduction upon the Ha channel, apparently therefore letting the OIII through when all is said and done.
Hmmmm... yeah. That doesn't make sense to me... :think:
Ivo Jager
StarTools creator and astronomy enthusiast
Mike in Rancho
Posts: 1166
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Can masked stretch be done in ST?

Post by Mike in Rancho »

Thanks Ivo. I'll lay off the PI guys a little. Just a little. 8-)

Your explanation starts to make some sense - though I will still pay attention to what they are doing with these masks, if they even say, to see if it has non-objectionable logic.

After your description, I now vaguely remember doing workflows, back in the month or two I was using Gimp-only prior to ST, where these various layer masks were created from the image being worked on. Grayscale, probably brightness as you state, sometimes inverted, etc etc. No recollection what they were used for, but...ugh. It was awful!
opestovsky
Posts: 14
Joined: Fri Oct 22, 2021 6:19 pm

Re: Can masked stretch be done in ST?

Post by opestovsky »

admin wrote: Mon Oct 25, 2021 11:41 pm
Thank you again for a prompt reply. A few comments and a question...

There are many workarounds and your solution (to desaturate them) is one.
It depends on what you find "crazy" about the stars? Their color? Their profile? What would you like them to be (and what is your reasoning)?
Indeed, stars accumulate signal in Ha much slower than O-III. Therefore they always appear cyan. The same thing can be observed in SHO (HST) images where stronger S-II and O-III (red + blue) vs weak Ha (green) yields quite purple stars.
My solution is unfortunately extremely slow and laborious.
I would like to know if there are easier ways to get rid of the cyan colors and/or halos than painstakingly mask every pixel by hand.

My reasoning is to create a picture like this:
Image
As you can see, the stars have RGB appearance and I cannot see any cyan artifacts. The picture doesn't have to be accurate, just pretty. I am not sending these pictures to APOD, I am showing them to my friends. These friends have zero idea about astrophotography or astronomy, but they immediately see that the stars have cyan colors and/or halos.
PI in particular relies heavily on luminance masks. The reason for that is that it doesn't have signal evolution Tracking, so the best you can do in PI is to try to approximate how signal changed by looking at its brightness; if something is bright it was probably stretched a lot, if something is dark it was probably not stretched a lot. This assumption, of course, falls apart very quickly once you do any sort of local dynamic range optimization, decon, or, really, most operations beyond a global stretch. Nevertheless, it is this very crude assumption that drives most operations in PI - it simply doesn't have the per-pixel history available that StarTools has, in order to figure out how a pixel changed from its linear state to final image.
Other discussions of techniques seemed even more suspect to me - such as using said OIII mask as a reduction upon the Ha channel, apparently therefore letting the OIII through when all is said and done.
Hmmmm... yeah. That doesn't make sense to me... :think:
Perhaps the problem is that you are not thinking like a user.
I don't think a user, myself included, really cares much about signal evolution and error propagation, as long the picture looks pretty in the end. :D

The actual description by the user:
"So I followed some of the other advice on the thread, and tried using a mask of the OIII image on the Ha image, and actually reduced the intensity of the Ha a bit, which helped the OIII show through."

I asked him for the details, if the stars acquire cyan color/halos, and if so, how to deal with them.
I will follow up on that when I get a reply.


There is (usually) nothing fake about them! When shooting with, for example, an Optolong L-Extreme with an OSC or DSLR, the result is true Ha and true O-III neatly separated in red(Ha), mostly green (O-III) and some blue (O-III).
I don't believe that this is strictly true.
My camera, ASI533MC Pro for example, is sensitive to Ha signal in its green/blue pixels.
Image
So even if an L-Extreme filter were used, the O3 signal in the green/blue pixels would still contain a chunk of Ha.
From the graph above,
the response of green/blue pixels to O3 at 500nm is 0.92/0.51
the response of green/blue pixels to Ha at 656nm is 0.16/0.05.
So, green/blue pixels will contain (0.16+0.05)/(0.16+0.05+0.92+0.51) = 12.8% of Ha signal.
And since Ha is usually much stronger than O3... well, hence the fake HOO.
admin
Site Admin
Posts: 3381
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne
Contact:

Re: Can masked stretch be done in ST?

Post by admin »

opestovsky wrote: Tue Oct 26, 2021 3:12 am I would like to know if there are easier ways to get rid of the cyan colors and/or halos than painstakingly mask every pixel by hand.

My reasoning is to create a picture like this:
Image
As you can see, the stars have RGB appearance and I cannot see any cyan artifacts. The picture doesn't have to be accurate, just pretty. I am not sending these pictures to APOD, I am showing them to my friends. These friends have zero idea about astrophotography or astronomy, but they immediately see that the stars have cyan colors and/or halos.
One quick way, for example, is to use the Filter module's Fringe Killer mode (this tool is typically used for ameliorating chromatic aberration). Create a star mask, then Grow the mask until the discolored halos are all covered. Back in the Filter module, set Filter Mode to "Fringe Killer". Now click on the color that you wish to remove in one of the stars. Do that a few times. Every time you click, you should see stars of that same color become more and more neutral in color; they will act like little "chameleons", adopting the colors of their surroundings. Stars with other colors will not be affected.

This does not affect the stellar profiles, of course. For that you will want to use the Shrink module. Applying all this to Mike's image yields, for example, this:
Untitled.jpg
Perhaps the problem is that you are not thinking like a user.
I don't think a user, myself included, really cares much about signal evolution and error propagation, as long the picture looks pretty in the end. :D
I think you are gravely mistaken. In my estimation, ~50% of ST users are PI alumni who are looking to make better use of their signal. They care very much about achieving superior detail and clarity that can be corroborated by their fellow AP-ers (and NASA images, etc.). Many AP'ers who get into the hobby are so-called "object chasers". They get their thrills by capturing objects they have seen in magazines or online. It's why people spend massive amounts of money on bigger scopes, more sensitive cameras and higher precision tracking solutions. They could just paint in the detail (or colors) they like, but they don't.

If you are not interested in preserving the photographic integrity of objects, I would highly recommend using other software (for example Affinity Photo) - it will allow you to paint at will and achieve any coloring you like. You will get frustrated with ST rather quickly if you are trying to do things that are destructive to your signal, or contravene best practices, or go against the actual contents of your data - such things are much harder to do in ST than in other software. You have tons of leeway for expressing your personal vision for a dataset in StarTools, but the software aims to only facilitate expressions that are rooted in reality and physics. ST is chiefly meant for the field of astrophotography, not art.
The actual description by the user,
"So I followed some of the other advice on the thread, and tried using a mask of the OIII image on the Ha image, and actually reduced the intensity of the Ha a bit, which helped the OIII show through."
Indeed, that makes rather little sense, unfortunately. Neither in the luminance nor in the chrominance domain.
There is (usually) nothing fake about them! When shooting with, for example, an Optolong L-Extreme with an OSC or DSLR, the result is true Ha and true O-III neatly separated in red(Ha), mostly green (O-III) and some blue (O-III).
I don't believe that this is strictly true.
My camera, ASI533MC Pro for example, is sensitive to Ha signal in its green/blue pixels.
Image
So even if L-Extreme filter was used, the O3 signal in the green/blue pixels would still contain a chunk of Ha.
From the graph above,
the response of green/blue pixels to O3 at 500nm is 0.92/0.51
the response of green/blue pixels to Ha at 656nm is 0.16/0.05.
So, green/blue pixels will contain (0.16+0.05)/(0.16+0.05+0.92+0.51) = 12.8% of Ha signal.
And since Ha is usually much stronger than O3... well, hence the fake HOO.
That's not how that works though?

The response of the blue + green channels @500nm (the O-III line) is (0.5 + 0.9 = 1.4), with the red channel's response at that line being ~0.04.
This yields (0.04 / 1.4) * 100% = ~2.9% O-III "contamination" in the red (Ha) channel. That is virtually negligible.

The response of the blue + green channels @656nm (the Ha line) is (0.16 + 0.05 = 0.21), with the red channel's response at that line being ~0.8.
This yields (0.21 / 0.8) * 100% = 26.25% Ha "contamination" in the aggregate O-III signal. That is much more significant.

Given that we know what the (almost) pure Ha looks like (the red channel), we can subtract 26.25% of the Ha signal from the aggregate green + blue signal at every pixel and arrive at the original O-III signal. That's what the Wipe + Color module combo does, as color is determined by relative differences after subtraction of a constant (by Wipe).
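Numerically, that per-pixel subtraction is trivial. A sketch (my own illustration, not ST's internal code; it follows the leakage at the Ha line, where the green + blue pixels pick up a fraction of the red channel's Ha):

```python
import numpy as np

# Ha leakage factor into the aggregate green+blue signal,
# from the response figures quoted above: (0.16 + 0.05) / 0.8.
K = 0.21 / 0.8

def recover_oiii(red, green_plus_blue):
    # Treat red as a near-pure Ha measurement and remove its leakage
    # from the aggregate green+blue (O-III) signal.
    return np.clip(green_plus_blue - K * red, 0.0, None)

# Synthetic check: build contaminated data, then recover the clean O-III.
true_ha = np.array([[0.8, 0.4], [0.2, 0.6]])
true_oiii = np.array([[0.1, 0.3], [0.5, 0.2]])
gb = true_oiii + K * true_ha       # what the green+blue pixels would record
oiii = recover_oiii(true_ha, gb)   # matches true_oiii again
```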

For luminance, the contamination is likewise not a problem (unless you are trying to isolate purely the Ha of course, as is the case with the moon example cited above, in which case a measure of the O-III needs to be subtracted for this particular camera); you would add all the signal you have collected together anyway to achieve the best SNR.
Ivo Jager
StarTools creator and astronomy enthusiast