Can masked stretch be done in ST?
Re: Can masked stretch be done in ST?
There is something about the blue and green data that gets wiped in the Wipe module. Admittedly, I worked on the original JPG that was shared and, using Compose, first created a separate file for each colour channel.
I split into channels, used Wipe, and managed this. I also split the channels but went straight to Colour after Compose; that illustrates why Wipe can't be skipped, but it comes out blue.
Re: Can masked stretch be done in ST?
This just dropped.
Masked stretch in action:
https://www.youtube.com/watch?v=2QS2Pyhf7as
It looks effortless.
Also.
I think that using the green/blue channels on their own to enhance the blue color naturally in the final image introduces a ton of noise, because those channels are quite noisy on their own, coming from an OSC camera with a dual narrowband filter where the O3 signal is considerably weaker than Ha.
In that video, he makes a mask of the O3 signal, and he has a setting that adds a lot of fuzziness to make it smooth.
So the blue enhancement from the masked stretch carries almost none of the noise inherent in the green/blue channels.
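If I understood the video right, that smooth mask is essentially a thresholded selection of the O-III channel with heavy feathering. A rough sketch of the idea (the function name and parameter values are my guesses, not the video's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuzzy_oiii_mask(oiii, threshold=0.1, fuzziness=15.0):
    # Hypothetical feathered mask: select O-III above a threshold,
    # then blur the hard selection so its edges fall off smoothly.
    hard = (oiii > threshold).astype(float)
    return np.clip(gaussian_filter(hard, sigma=fuzziness), 0.0, 1.0)
```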
EDIT:
I ended up with this picture after, what, about 20 hours of banging my head against the wall? It's quite a bit noisier than the original here - and that was 15 hours of integration.
Re: Can masked stretch be done in ST?
Happy-kat, I'm not sure that is the correct thing to do with already-stretched images, particularly a JPG, or whether that actually tells us anything.
O, I like your new bicolor image! There are indeed differences from your original. Both fully in ST? We could perhaps look at the logs to see how you got from A to B in both, and try to replicate certain looks. Original has some very nice edge smoothing to it. New is a little chunkier. But some of the rest I believe is perceptual, due to there being two colors. The detail of New shows far better, both where the colors offset and emphasize it, but even in the lower portion red-only nebula where the cloud blobs have greater clarity.
I watched about half that video. I must say, between this and the other big PI-like vs ST thread going on here right now, I am starting to understand PI a lot better. And I am disappointed in how much of it is just art. Why did he add a portion of the Ha into his B and G? Here we are complaining about the L-eNhance as well as Bayer contamination, and he goes and contaminates it further on purpose! Then that mask based on the OIII, even if done with a brightness slider, seemed awfully selective to me, as was his control to then mega-smooth it out. Well, I now understand another way that PI is getting that big-time soft-smoothed look, anyway.
But then he went and changed the coloring/hues willy-nilly until he thought it looked the way he wanted it? Barf! I cut the video at that point, but I think after that he was just working on his vignetting problem.
I understand the pretty-picture appeal, but it does make me wonder how much I should trust all these PI images on CN, or whether I should be awed by what was done.
Re: Can masked stretch be done in ST?
That's not a solely ST-processed image, correct?
Detail is quite good, but the stars (that haven't gone missing?) are a little burnt out (swallowing nearby stars) and "stringy", while the Ha appears rather oversaturated on my calibrated screens.
Are you able to get something like this in StarTools OK? E.g. a textbook HOO bi-color where red perfectly controls Ha contribution and green and/or blue perfectly control O-III contribution?
Or you could process the dataset as a regular RGB dataset (e.g. not use Compose module), in which case, you can get some slightly different coloring (effectively you get control over the hue of the O-III via independent green and blue bias behavior).
Then you could use the Shrink module and/or the Super Structure module to push back the stars without completely mangling them. The halos on them are quite strong, which can be caused by adverse (or variable) atmospheric conditions (bad transparency, changing seeing, etc.).
Regardless, if done properly (e.g. weighted properly and stacked properly), noise in whichever channel is not an issue (unless there is some severe remnant shot noise from subtracted light pollution, etc.). It's all properly accounted for when added to the synthetic luminance signal. E.g. that's why you would use the stacker settings as recommended(!), and then import such a dataset in the Compose module using 'L + Synthetic L from R(2xG)B, R(GB)(GB) (Bi-Color from OSC/DSLR)'. Or, if you are not after a bi-color and would rather use the tristimulus response as recorded, you can indeed load the data without bi-color, for example like so;
Color noise is indeed barely visible to humans and can indeed be blurred to extremes without people noticing.
Also note that an OSC or DSLR samples twice as many pixels in the green channel compared to the red and blue channels (Bayer matrix). This yields a sqrt(2) ≈ 1.4x better signal-to-noise ratio to begin with.
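To put a number on that sqrt(2) claim, here's a toy simulation (plain NumPy with arbitrary made-up values; not StarTools code) showing that averaging two samples instead of one improves SNR by about 1.4x:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0      # arbitrary "true" flux per pixel
noise_sigma = 10.0  # made-up per-sample noise

def snr(n_samples, trials=100_000):
    # Average n_samples noisy measurements per trial, then measure SNR.
    samples = signal + rng.normal(0, noise_sigma, size=(trials, n_samples))
    means = samples.mean(axis=1)
    return means.mean() / means.std()

snr_rb = snr(1)  # red/blue: 1 Bayer sample per 2x2 block
snr_g  = snr(2)  # green: 2 Bayer samples per 2x2 block
print(snr_g / snr_rb)  # ~1.41, i.e. sqrt(2)
```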
The only issue with this dataset is really the stars. However, it is (or should be) trivial to bring out O-III dominant areas in a linear, documentary fashion.
With regards to the video
These sorts of videos make me a little angry to be totally honest.
Juxtaposing O-III vs Ha should be (is) trivial. There is no need for "rescuing" anything! The "method" in the video is nonsensical straight out of the gate; non-linear manipulation of individual channels is inappropriate in the chrominance domain. It just introduces hue artifacts, suggesting emission concentrations or dominance that do not exist. This is *bad*. Star removal is equally unnecessary (and obviously introduces neurally hallucinated artifacts), and the Range Selection stuff is pretty borderline as well (selectively modifying color in the image based on a mask). Background extraction from a stretched image? Big frown. To top it off, DenoiseAI's dog hair and cat fur generator. Just. No.
It's destructive nonsense that has rather little to do with photography. Sadly the author still thinks he's "not betraying the spirit of the data". That's just not true.
Instead, the "secret" is decoupling luminance from chrominance, and respecting these separate domains. Unless you're creating 90s album art, we don't do non-linear stretches on individual RGB channels for terrestrial imaging either; the coloring (RGB ratios) gets distorted per-pixel based on brightness, and the hues no longer describe anything remotely related to reality (they no longer convey anything about relative emission concentrations).
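To illustrate the distortion (a hypothetical sketch, not any particular program's stretch): apply the same non-linear curve per channel to two pixels that share an identical 2:1 emission ratio but differ in brightness, and the displayed ratio diverges:

```python
import numpy as np

# Two pixels with the same intrinsic 2:1 R:G ratio, at different brightnesses.
faint  = np.array([0.04, 0.02])   # linear R, G
bright = np.array([0.40, 0.20])

def stretch(x, softening=0.05):
    # A typical non-linear stretch (asinh), applied per channel.
    return np.arcsinh(x / softening)

for px in (faint, bright):
    out = stretch(px)
    print("R/G before:", px[0] / px[1], "-> after:", round(out[0] / out[1], 3))
# Both start at 2.0, but end at ~1.879 (faint) vs ~1.325 (bright);
# the displayed hue now depends on brightness, not on the emission ratio.
```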
AFAIK, the O-III in the dataset of the video is just fine and can even be added to the luminance without problems for a deeper signal (and better color vs detail correlation, so we don't have to rely on just the Ha detail for the O-III coloring to show).
This is the dataset from the video after a 2 minute, completely standard process in StarTools;
Workflow: imported in the Compose module as follows; Ha as red, O-III as green, O-III as blue. Exposure time for synthetic luminance generation is set to 0 for green and blue (e.g. O-III); the point of the video is to "rescue" coloring when your O-III is poor (it is not - it is faint, sure - but let's assume it is poor and we don't want to use it). Bin, Crop, Wipe. AutoDev, then straight into the Color module.
E.g. what we're doing here is using Ha for luminance (detail) and HOO for coloring (chrominance). It just works, gives you full control over the true linear Ha:O-III coloring in your image, and truly respects the data. Not a mask or curve in sight for this purpose, because they are inappropriate tools.
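Mechanically, that luminance/chrominance decoupling might look something like this minimal sketch (my own illustration; StarTools' actual processing is far more sophisticated): stretch only a luminance built from Ha, and carry the untouched linear RGB ratios as color:

```python
import numpy as np

def stretch(x, softening=0.05):
    # Any global non-linear stretch; asinh used here as an example.
    y = np.arcsinh(x / softening)
    return y / y.max()

def hoo_composite(ha, oiii):
    # Map Ha -> R and O-III -> G and B (a simple HOO bi-color).
    rgb = np.stack([ha, oiii, oiii], axis=-1)
    # Luminance from Ha only (assume the O-III is too faint/noisy to help).
    lum = stretch(ha)
    # Chrominance: per-pixel linear RGB ratios, preserved exactly.
    total = rgb.sum(axis=-1, keepdims=True) + 1e-12
    ratios = rgb / total
    # Recombine: stretched brightness carries detail, ratios carry hue.
    return ratios * lum[..., None] * 3.0

# ha and oiii would be linear, calibrated stacks of identical shape, e.g.:
ha   = np.random.rand(256, 256) ** 4   # stand-in data
oiii = np.random.rand(256, 256) ** 4
img = np.clip(hoo_composite(ha, oiii), 0, 1)
```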
Noise reduction after switching Tracking off (not a single tweak) yields this; PS: by now you should have noticed the utterly consistent color rendering of HOO bi-colors in StarTools (if using defaults, of course); you can trust blue/cyan to be truly indicative of O-III, red to be truly indicative of Ha, and white to be indicative of a mix.
Ivo Jager
StarTools creator and astronomy enthusiast
Re: Can masked stretch be done in ST?
Thank you for that great reply, Ivo. I will learn to look away and not assume that what I see elsewhere is right; I had begun to doubt my processing because I could not replicate the strong blues and golds regularly seen, and had not thought of them as enhanced. If I like what I've managed, then that is good enough for me.
Re: Can masked stretch be done in ST?
I played with this data for another hour and followed your recommendations from the previous post.
admin wrote: ↑Thu Oct 28, 2021 1:46 am
That's not a solely ST-processed image, correct?
Detail is quite good, but the stars (that haven't gone missing?) are a little burnt out (swallowing nearby stars) and "stringy", while the Ha appears rather oversaturated on my calibrated screens.
Are you able to get something like this in StarTools OK?
StarTools_2799.jpg
E.g. a textbook HOO bi-color where red perfectly controls Ha contribution and green and/or blue perfectly control O-III contribution?
Or you could process the dataset as a regular RGB dataset (e.g. not use Compose module), in which case, you can get some slightly different coloring (effectively you get control over the hue of the O-III via independent green and blue bias behavior).
Then you could use the Shrink Module;
StarTools_2800.jpg
and/or the Super Structure module to push back the stars without completely mangling them. The halos on them are quite strong, which can be caused by adverse (or variable) atmospheric conditions (bad transparency, changing seeing, etc.).
Regardless, if done properly (e.g. weighted properly and stacked properly), noise in whichever channel is not an issue (unless there is some severe remnant shot noise from subtracted light pollution, etc.). It's all properly accounted for when added to the synthetic luminance signal. E.g. that's why you would use the stacker settings as recommended(!), and then import such a dataset in the Compose module using 'L + Synthetic L from R(2xG)B, R(GB)(GB) (Bi-Color from OSC/DSLR)'. Or, if you are not after a bi-color and would rather use the tristimulus response as recorded, you can indeed load the data without bi-color, for example like so;
StarTools_2801.jpg
Color noise is indeed barely visible to humans and can indeed be blurred to extremes without people noticing.
Also note that an OSC or DSLR samples twice as many pixels in the green channel compared to the red and blue channels (Bayer matrix). This yields a sqrt(2) ≈ 1.4x better signal-to-noise ratio to begin with.
The only issue with this dataset is really the stars. However, it is (or should be) trivial to bring out O-III dominant areas in a linear, documentary fashion.
With regards to the video
These sorts of videos make me a little angry to be totally honest.
Juxtaposing O-III vs Ha should be (is) trivial. There is no need for "rescuing" anything! The "method" in the video is nonsensical straight out of the gate; non-linear manipulation of individual channels is inappropriate in the chrominance domain. It just introduces hue artifacts, suggesting emission concentrations or dominance that do not exist. This is *bad*. Star removal is equally unnecessary (and obviously introduces neurally hallucinated artifacts), and the Range Selection stuff is pretty borderline as well (selectively modifying color in the image based on a mask). Background extraction from a stretched image? Big frown. To top it off, DenoiseAI's dog hair and cat fur generator. Just. No.
It's destructive nonsense that has rather little to do with photography. Sadly the author still thinks he's "not betraying the spirit of the data". That's just not true.
Instead, the "secret" is decoupling luminance from chrominance, and respecting these separate domains. Unless you're creating 90s album art, we don't do non-linear stretches on individual RGB channels for terrestrial imaging either; the coloring (RGB ratios) gets distorted per-pixel based on brightness, and the hues no longer describe anything remotely related to reality (they no longer convey anything about relative emission concentrations).
AFAIK, the O-III in the dataset of the video is just fine and can even be added to the luminance without problems for a deeper signal (and better color vs detail correlation, so we don't have to rely on just the Ha detail for the O-III coloring to show).
This is the dataset from the video after a 2 minute, completely standard process in StarTools;
StarTools_2803.jpg
Workflow: imported in the Compose module as follows; Ha as red, O-III as green, O-III as blue. Exposure time for synthetic luminance generation is set to 0 for green and blue (e.g. O-III); the point of the video is to "rescue" coloring when your O-III is poor (it is not - it is faint, sure - but let's assume it is poor and we don't want to use it). Bin, Crop, Wipe. AutoDev, then straight into the Color module.
E.g. what we're doing here is using Ha for luminance (detail) and HOO for coloring (chrominance). It just works, gives you full control over the true linear Ha:O-III coloring in your image, and truly respects the data. Not a mask or curve in sight for this purpose, because they are inappropriate tools.
Noise reduction after switching Tracking off (not a single tweak) yields this;
NewComposite.jpg
PS: by now you should have noticed the utterly consistent color rendering of HOO bi-colors in StarTools (if using defaults, of course); you can trust blue/cyan to be truly indicative of O-III, red to be truly indicative of Ha, and white to be indicative of a mix.
This is the best I could get. This was done in StarTools and then, in GIMP, I cranked up the saturation and a bit of contrast.
I absolutely hate the blue stars.
I love the fact that there isn't as much noise as in the previous picture.
I am going to show this to my friends and see their reaction.
BTW, you should check out Chuck's Astrophotography channel on Youtube.
This guy does a lot of post-processing of images with all kinds of masks that would turn your stomach.
And he gets his images published on NASA APOD and Astrobin Top Picks, if I am not mistaken.
Which makes me think that there are a lot more people out there who would prefer prettier pictures to more accurate ones.
Also, you said that my stars were burnt out.
Should I lower my exposure times?
Thanks.
Re: Can masked stretch be done in ST?
opestovsky wrote: ↑Fri Oct 29, 2021 5:06 am
I played with this data for another hour and followed your recommendations from the previous post.
This is the best I could get. This was done in Startools, then in Gimp, I cranked up saturation and a bit of contrast.
I absolutely hate the blue stars.
I love the fact that there isn't as much noise as in the previous picture.
A couple of things you can do about the blue stars:
- Use the Super Structure module's Saturate preset (tweak to taste); this avoids saturating the entire image (e.g. it avoids stars) and only saturates super structures.
- When using the Shrink module, take note of the Color Taming parameter.
- When using the Color module, take note of the Highlight Repair parameter.
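Conceptually (a hypothetical illustration only, not the Super Structure module's actual algorithm), "saturating only super structures" amounts to weighting a saturation boost by a large-scale luminance mask, something like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saturate_superstructures(rgb, boost=1.5, scale=50):
    # rgb: float array (H, W, 3) in [0, 1].
    lum = rgb.mean(axis=-1)
    # Large-scale structure mask: heavily blurred luminance, normalized.
    mask = gaussian_filter(lum, sigma=scale)
    mask = mask / (mask.max() + 1e-12)
    # Push each channel away from the pixel's gray value, weighted by the
    # mask, so small bright stars and the background are mostly left alone.
    gray = lum[..., None]
    factor = 1.0 + (boost - 1.0) * mask[..., None]
    return np.clip(gray + (rgb - gray) * factor, 0, 1)
```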
I am going to show this to my friends and see their reaction.
BTW, you should check out Chuck's Astrophotography channel on Youtube.
This guy does a lot of post-processing of images with all kinds of masks that would turn your stomach.
And he gets his images published on NASA APOD and Astrobin Top Picks, if I am not mistaken.
Which makes me think that there are a lot more people out there who would prefer to have prettier than more accurate pictures.
This is actually a prime example of self-selection bias; the people that seek out recognition are - obviously - the ones you see most on YT and AstroBin.
However, that does not mean they are representative of the actual population of astrophotographers, many of whom never publish a thing, or have nothing to prove to friends and family (any AP photo can look majestic without fudging things!), or work in academia where fudging is a deadly sin, or are "shadow" processors who work solely with the latest <insert your favourite probe or telescope here> data for fun, etc.
Also, you said that my stars were burnt out.
Should I lower my exposure times?
Not usually (unless your sensor starts blooming, or when dealing with extremely high dynamic range objects such as M42, maybe the Tarantula Nebula, or maybe the Seven Sisters). It's just a matter of constructing a non-linear stretch that doesn't clip your white point or stretch the highlights too much. E.g. something like AutoDev;
Notice how the star core remains visible and doesn't "bleed" (doesn't "bloat") into the neighbouring pixels?
Masked Stretch in PI (e.g. the module/script, not the act of using a stretch and a selective mask!) also avoids star bloat quite well (but has issues in the shadows).
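To see why a highlight-respecting stretch avoids the bloated, white-clipped look, here's a toy 1-D comparison (illustrative only - this is not AutoDev's or Masked Stretch's actual algorithm) between a naive levels-style stretch and a highlight-compressing asinh stretch on a Gaussian stellar profile:

```python
import numpy as np

# A 1-D Gaussian stellar profile on a faint background (linear data).
x = np.linspace(-5, 5, 101)
star = 0.02 + 0.9 * np.exp(-x**2 / 0.5)

def levels_clip(v, black=0.0, white=0.3):
    # Naive stretch: anything above `white` clips to 1.0.
    return np.clip((v - black) / (white - black), 0, 1)

def asinh_stretch(v, softening=0.02):
    y = np.arcsinh(v / softening)
    return y / np.arcsinh(1.0 / softening)

clipped = levels_clip(star)
soft = asinh_stretch(star)
# Pixels at (near) full white as a crude "apparent star size" measure:
print("pixels at >=0.99 (levels):", (clipped >= 0.99).sum())  # ~15: flat, bloated top
print("pixels at >=0.99 (asinh): ", (soft >= 0.99).sum())     # 0: core compressed, not clipped
```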
Hope that helps!
Ivo Jager
StarTools creator and astronomy enthusiast
Re: Can masked stretch be done in ST?
Ok, I'll bite.
I have used Startools almost exclusively for a couple of years. I completely understand Ivo's scientific approach when it comes to preserving the integrity of the data. However, I also (like Opestovsky) want my pictures to look the way *I* want them to look.
I am of the opinion that Astrophotography is a mix between art and science, and the creator of the picture is the one who decides where on this spectrum your picture will end up. And I don't have an Instagram or Facebook account where I show them...
Now, the good news: Opestovsky, I think I know what you want, and you could get it with Startools (and some other software).
Here is how:
1) Use the normal procedure in Startools to get your image to where you want it (disregarding stars). This may include using FilmDev in some cases, as AutoDev sometimes bloats stars. (I know that it's accurate to the signal; it's just that later in the procedure AutoDev's processing may introduce artefacts when removing stars.)
2) Use your preferred software to remove stars from said picture. (You could use Startools, but Starnet++ is free (and better), and there is an even better new PS plugin which I have not tried.)
3) Process the starless picture in Gimp, PS or Startools. This might even include AI software that Ivo hates, such as the Topaz suite. (Sorry)
4) Make an "only stars" image of your data. If you have used FilmDev you can often just use the processed data from (1) minus your starless image; see the sketch after this list. Another way, which will yield even better stars, is to use the Compose module and the Legacy RGB option (Ivo won't be happy about this, I think). I tend to stretch this image a bit less so I don't get as many stars as in the full stretch in (1).
5) With this "star-stretch", repeat (2) and then remove the nebulosity as in (4).
6) Add the stars to your starless picture.
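A minimal sketch of the arithmetic behind steps 4-6 (assuming aligned float images in [0, 1]; the subtraction and screen blend are common conventions, not the only option):

```python
import numpy as np

def star_layer(stretched, starless):
    # Steps 4/5: stars-only layer = stretched image minus its starless version.
    return np.clip(stretched - starless, 0, 1)

def screen_blend(starless, stars):
    # Step 6: "screen" recombination avoids clipping where stars sit on nebulosity.
    return 1.0 - (1.0 - starless) * (1.0 - stars)

# stretched = your output from step (1); starless = output from step (2), e.g.:
# stars = star_layer(stretched, starless)
# final = screen_blend(processed_starless, stars)
```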
I followed this procedure with a *very* small amount of data that I captured in my Bortle 8 backyard (3*1.5h total of SHO) and ended up with this:
Now, remember, this is nowhere near as much data as it should be, but the stars show as points of light the way I want them. (You could also add RGB stars to mix colours in even more blasphemous ways.)
My philosophy when it comes to a picture of space is that the stars are enormously important. A picture with haloed stars will never have the comme-il-faut that pictures with great stars have. Therefore I use whatever means there are to get this result. As I said earlier, I totally get that some people want absolute scientific accuracy, but if I follow their procedure, I end up with something that I don't want.
Just my opinion -
/Ulf
Re: Can masked stretch be done in ST?
Ulf,
Certainly from an artistic point of view your image is very appealing.
Eric
Re: Can masked stretch be done in ST?
BainthaBrakk wrote: ↑Fri Oct 29, 2021 8:01 am
I completely understand Ivo's scientific approach when it comes to preserving the integrity of the data. However, I also (like Opestovsky) want my pictures to look the way *I* want them to look.
I am of the opinion that Astrophotography is a mix between art and science
And you're absolutely right. Nothing precludes you from making your pictures the way you want; science and art are not necessarily mutually exclusive. You have a massive amount of leeway to achieve both.
So many steps during processing are subjective and necessarily open to interpretation. But some definitely aren't.
Autodev sometimes bloats stars. (I know that it's accurate to the signal; it's just that later in the procedure AutoDev's processing may introduce artefacts when removing stars.)
AutoDev does not bloat stars (see the animated GIF in the previous post demonstrating this). It does, however, reveal stellar profiles (where other stretching algorithms, like FilmDev, bloat them). Is that what you mean?
Apart from the benefits for stellar profiles, the main purpose of AutoDev is to achieve a neutral dynamic range allocation for the entire image (which, yes, includes the stars), so that further processing yields better/easier results and detail is maximized.
The Hubble image in the animated GIF demonstrates why this is important - even in that tiny area, a great number of background galaxies are resolved (some even look like "fuzzier" stars). Whiting them out would be... a shame.
2) Use your preferred software to remove stars from said picture. (You could use Startools, but Starnet++ is free (and better), and there is an even better new PS plugin which I have not tried.)
Why? What do you think you gain by removing stars?
I followed this procedure with a *very* small amount of data that I captured in my Bortle 8 backyard (3*1.5h total of SHO) and ended up with this:
Now, remember, this is nowhere near as much data as it should be, but the stars show as points of light the way I want them. (You could also add RGB stars to mix colours in even more blasphemous ways.)
Nothing wrong with replacing NB stars with RGB stars. Of course, letting your audience know what they are looking at would be greatly appreciated. I, for one, think it adds something very useful to otherwise hard-to-interpret HST stars.
My philosophy when it comes to a picture of space is that the stars are enormously important. A picture with haloed stars will never have the comme-il-faut that pictures with great stars have. Therefore I use whatever means there are to get this result. As I said earlier, I totally get that some people want absolute scientific accuracy, but if I follow their procedure, I end up with something that I don't want.
Stars are indeed enormously important. They convey an immense amount of information to the trained eye (but also sub-consciously).
For example, if fine detail does not match the point spread function of the stars in your image, it triggers the "uncanny valley" response in a layperson, and can trigger outright distrust in a more seasoned astrophotographer. Intentionally bloating and white-clipping your stars may be a clever way of taking away much of that information (e.g. making the point spread function much harder to detect by intentionally clipping it). However, not all stars can be white-clipped this way, and the fainter stars will still convey the point spread function if you care to look for them. That makes this trained eye rather uneasy about some of the too-good-to-be-true detail.
My spidey senses are tingling (to be honest they are ringing like deafening fire alarms), but if this is really what you want, meets your personal standards, and isn't "sold" to anyone as a photograph portraying real detail, then power to you.
Of course, comparing the NGC281 image to an APOD from some years back, it appears some of the fine detail aspects of structures are altered; either missing or - in other places - made up (particularly visible in the dark areas, and when juxtaposing darker and brighter areas in the same image).
https://apod.nasa.gov/apod/image/1411/N ... OXPugh.jpg
(APOD blurb here, but acquisition details here)
That's 30 hours worth of data by Martin Pugh on a PlaneWave 17 from 2014 (e.g. pre AI-fakery), versus 4.5h but from a workflow that incorporated neural hallucination in its workflow. Which image should I trust? They can't both be photos of the same area/object. One portrays reality, one... "takes some liberties" shall we say.
Ivo Jager
StarTools creator and astronomy enthusiast