Astro to Astro-art Continuum, 2024 Edition

General discussion about StarTools.
dx_ron
Posts: 288
Joined: Fri Apr 16, 2021 3:55 pm

Re: Astro to Astro-art Continuum, 2024 Edition

Post by dx_ron »

Mike in Rancho wrote: Tue Jan 30, 2024 8:45 am I don't know, Jeff. I guess that's what we are here to discuss. :D Via the low/high pass filtering, and/or layer operations, however you want to look at it, "edges" are being detected and then hardened by having their transition zones narrowed. Can that be thought of as a correction? Well, it's an enhancement, and I imagine it can "seem" to correct errors due to being out of focus or having motion blur (I used UM on a flying red tail once and it helped).

A true correction would be deconvolution, at least as to those matters the synthetic modeling is designed to undo, or that the PSF sampling has actually measured.

:confusion-shrug:
Hopefully some day Ivo's boss will let him out of the sweatshop long enough for him to visit the forums again.

I think you're starting to converge on Frank's (Freestar8n on CN) way of thinking here, where you only want to allow yourself global 'curves' adjustments. That's basically where Siril stands now, I think. You can get pretty fancy with the shape of the stretch curve using GHS, but the same curve is applied across the full image. You can get a pretty nice image that way. I assume you can do the same in PI as long as you are intentional about it. Of course, most Siril GHS stretchers are using starnet to create a starless facsimile - but you can use GHS without doing that.
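As a rough illustration of what "the same curve is applied across the full image" means, here is a minimal numpy sketch (the `asinh` curve is just a stand-in for GHS or any other global stretch; nothing here is ST- or Siril-specific):

```python
import numpy as np

def global_stretch(img, curve):
    """Apply one tone curve identically to every pixel (a 'global' stretch)."""
    return curve(img)

# asinh stands in for any global curve (GHS, FilmDev, plain gamma, ...)
def asinh_curve(x, k=500.0):
    return np.arcsinh(k * x) / np.arcsinh(k)

img = np.array([[0.001, 0.01],
                [0.1,   1.0]])          # linear data in 0..1
out = global_stretch(img, asinh_curve)

# Key property: each output pixel depends only on that pixel's own value,
# so two equal input pixels stay equal no matter where they sit in the image.
```

That per-pixel property is exactly what any local tool gives up.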

Having read some of Dietmar's links (thanks!), I have a better understanding of things like sharpening. My impression is that Ivo got into this business because he has a deep interest in image processing, and he clearly views Contrast, Sharp and HDR as part of normal image processing. If you go back to his "Welcome Pixinsight users!" post viewtopic.php?f=7&t=447, the equivalence table is pretty clear about ST's approach
In StarTools histogram transformations for global dynamic range assignment are considered obsolete and sub-optimal tools. Use AutoDev and Develop/FilmDev instead and optimize local dynamic range subsequently with Contrast, HDR and Sharp.
Basically, I realize that I knew the answer to my "is it global or is it local?" question all along...
Mike in Rancho
Posts: 1166
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Astro to Astro-art Continuum, 2024 Edition

Post by Mike in Rancho »

dx_ron wrote: Tue Jan 30, 2024 4:58 pm Hopefully some day Ivo's boss will let him out of the sweatshop long enough for him to visit the forums again.
Unless Ivo's boss is Ivo, all entrepreneur-like? :D

dx_ron wrote: Tue Jan 30, 2024 4:58 pm I think you're starting to converge on Frank's (Freestar8n on CN) way of thinking here, where you only want to allow yourself global 'curves' adjustments.
Frank has interesting philosophies, and he's a smart guy, so his point of view is worth at least understanding.

But no it isn't global vs local for me. And hey I'm already using LN in WBPP. ;)

I'm just at the point where I think I need a better nuts-and-bolts understanding of what I am doing. And well if I tend to look sideways at masked stretching, BXT, or whatever else, I should at least know what is happening when I use my own tools and modules.

So I'm okay with locality, and I think I have a proper comprehension that such processing will alter true relative intensities. I want to know if I am fudging more than just relative intensities, though.

While Frank does have an overall (global? ;) ) philosophy on local vs global tools, he also takes that a step further on his global stretch and argues against curves. Meaning, just set a black point and a white point and leave it otherwise "linear." That seems a bit too hard core for my taste, and is it even realistic? I'd say we have to manipulate this faint, compressed, stacked data quite a bit, in order to make it fit for human consumption. Due to high dynamic range, curves may almost be mandatory in many cases? I guess if you wanted to put your foot down and show everyone the massive dynamic range of M42, you could do it by way of a triptych. Or dodecatych? One for the Trapezium, and the rest of the image would be black. All the way to one just for the outer dust, and 95% of center image will be pure white. :lol:
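For what it's worth, Frank's "black point, white point, otherwise linear" approach is easy to sketch, and the sketch also shows why it struggles with high dynamic range (the numbers below are made up):

```python
import numpy as np

def linear_stretch(img, black, white):
    """Set a black and a white point, rescale, and stay 'linear' in between."""
    return np.clip((img - black) / (white - black), 0.0, 1.0)

data = np.array([0.02, 0.05, 0.10, 0.80])   # stacked linear intensities
lin = linear_stretch(data, black=0.02, white=0.80)

# Relative intensities between the end points are untouched, but the faint
# signal at 0.05 lands at ~0.04 - essentially invisible on screen, which is
# why some non-linear curve is usually unavoidable for astro data.
```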

Anyway, I've known that OptiDev/AutoDev asserts it is an optimal-but-compromise global curve, as adjusted by the settings. Though I do wonder what optimal means from a range-allocation standpoint. If GHS acts in a similar way, I suppose I'd better knuckle down and finally learn it; perhaps that will help my overall understanding of things (though I thought GHS was still iterative levels/curves).

dx_ron wrote: Tue Jan 30, 2024 4:58 pm Having read some of Dietmar's links (thanks!), I have a better understanding of things like sharpening. My impression is that Ivo got into this business because he has a deep interest in image processing, and he clearly views Contrast, Sharp and HDR as part of normal image processing.
This is the part I still need to work on. Not that I am intending to alter my workflow absent some deep revelation, just that I want to better know what the modules I use are doing, and why and how, and what kinds of things may end up as altered reality. And if so, whether that altered reality is still a logical presentation of what I want to show from the data, and thus acceptable. Easy peasy! :?
decay
Posts: 497
Joined: Sat Apr 10, 2021 12:28 pm
Location: Germany, NRW

Re: Astro to Astro-art Continuum, 2024 Edition

Post by decay »

dx_ron wrote: Tue Jan 30, 2024 4:58 pm Hopefully some day Ivo's boss will let him out of the sweatshop long enough for him to visit the forums again.
Mike in Rancho wrote: Tue Jan 30, 2024 6:06 pm Unless Ivo's boss is Ivo, all entrepreneur-like? :D
I guess that's true, and in that case the customers often take on the role of the boss, and we all know that can be far worse :cry:

Dietmar.
hixx
Posts: 254
Joined: Mon Sep 02, 2019 3:36 pm

Re: Astro to Astro-art Continuum, 2024 Edition

Post by hixx »

these are _local_ contrast enhancements and for me _local_ contrast enhancements are always a ‘ manipulation of reality’
Hi,
I'd say in this regard, all post-processing would be a manipulation of reality. Think of an arcsinh stretch, white balancing or whatever.

The key point is that the conditions determining how local enhancements are processed are derived from the data itself, controlled by parameters, rather than from an arbitrary "human touch-up mask". This is the case in ST, I believe, so "manipulation" would not be a valid description in my understanding.

Manipulation would start where a process performed on the data can no longer be driven from the data itself via parameters, leading to non-linear effects, "made-up" details, etc.
That would be "the border line" for me.

Regards,
Jochen
decay
Posts: 497
Joined: Sat Apr 10, 2021 12:28 pm
Location: Germany, NRW

Re: Astro to Astro-art Continuum, 2024 Edition

Post by decay »

Hi Jochen!

Thanks for joining :)

I think, the main problem is the term ‘manipulation of reality’ we’ve used in this discussion.

Photography produces a representation of reality. For me, this representation should look as much as possible like the 'reality'. This of course is difficult in AP, as we cannot see these objects with our own eyes, and even if we could, the dynamic range is often much too vast.

So yes, every non-linear stretch already reduces the fidelity of this depiction. But we need to do this in order to see anything at all. White balancing, the setting of black and white points, and (linear, global) contrast changes are more of a technical necessity for me. And even our eye, and the subsequent processing in the visual cortex or wherever, does all this constantly. So, that's all fine ;-)

For me, local contrast enhancements have a much more serious impact on the representation of reality, or fidelity, or whatever the right term may be here. Moderately used (for example on images of terrestrial objects), they can enhance the appearance, but stronger use quickly leads to unacceptable results in our perception of terrestrial objects. So we have to be cautious when processing astronomical objects, as it is much more difficult to see and say what is 'too much'.

Yes, ‘manipulation’ probably was the wrong term in this case. Everything is derived from the data itself, and from this point of view everything is fine. But we should be aware that local contrast enhancements may change our perception in a significant and unexpected way. Of course, this can be intentional as well, and that's completely fine!

Talking of (personal) border lines: I don't want to miss a single one of these local-contrast-enhancing tools! ;-)

Best regards, Dietmar.
decay
Posts: 497
Joined: Sat Apr 10, 2021 12:28 pm
Location: Germany, NRW

Re: Astro to Astro-art Continuum, 2024 Edition

Post by decay »

We talked about local contrast enhancements, ST’s HDR, Sharp and Contrast modules, how all this works and the impact on our images.

Explanations on the internet often use graphs of intensity profiles like here on Wikipedia:
[Attachment: 2024-02-01 16_40_28-Window.jpg]
Mike wrote that he tried things out with Gimp, and I wonder if we might do that 1) with StarTools itself and 2) with synthetic images created specifically to assess the impact of image processing on them.

In this case it would be helpful to have a simple way to watch the intensity profiles of the images before and after processing. So I decided to write a small tool for this:
[Attachment: 2024-02-01 16_13_43-Window.jpg]
The background shows an image created for testing purposes. It shows a soft transition on the left side and a hard one on the right side of the bright vertical bar. The tool draws the corresponding intensity graph. R, G and B are distinguished with different lines, if applicable. But I would recommend using greyscale images only.

It may be downloaded here. No viruses in there, I promise! It should run on current Windows 10 and 11 systems.
https://c.web.de/@334960167135216273/Wk ... ftD_439MzQ

Yeah, dunno. Is it helpful? Comments welcome!

Nonetheless, it may be used for the assessment of astronomical images as well, as I found out. :D
[Attachment: 2024-02-01 16_28_20-Window.jpg]
Simply move the tool across the image and it shows the RGB levels, the level of the background, whether there's a colour cast, whether the stars or the core of M31 are saturated, etc ...
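For anyone curious how little code the core of such a profile tool needs, here is a hypothetical numpy re-implementation of the row-sampling idea (this is not Dietmar's actual code; the test pattern merely mimics his soft/hard transition image):

```python
import numpy as np

def row_profile(img, row):
    """Return per-channel intensities along one horizontal pixel row.

    img: HxW (greyscale) or HxWx3 (RGB) array.
    """
    line = img[row]
    if line.ndim == 1:                           # greyscale image
        return {"grey": line}
    return {"R": line[:, 0], "G": line[:, 1], "B": line[:, 2]}

# Synthetic test pattern: soft ramp into a bright bar, then a hard edge.
img = np.zeros((10, 100))
img[:, 20:50] = np.linspace(0.0, 1.0, 30)    # soft transition on the left
img[:, 50:70] = 1.0                          # bright vertical bar
# columns 70+ stay at 0: the hard transition on the right

profile = row_profile(img, row=5)["grey"]    # feed this to any plotting tool
```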

Best regards, Dietmar.
Mike in Rancho
Posts: 1166
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Astro to Astro-art Continuum, 2024 Edition

Post by Mike in Rancho »

hixx wrote: Thu Feb 01, 2024 11:18 am Hi,
I'd say in this regard, all post-processing would be a manipulation of reality. Think of an arcsinh stretch, white balancing or whatever.

The key point is that the conditions determining how local enhancements are processed are derived from the data itself, controlled by parameters, rather than from an arbitrary "human touch-up mask". This is the case in ST, I believe, so "manipulation" would not be a valid description in my understanding.

Manipulation would start where a process performed on the data can no longer be driven from the data itself via parameters, leading to non-linear effects, "made-up" details, etc.
That would be "the border line" for me.

Regards,
Jochen
Hi Jochen!

Thanks for joining in. I agree as a base concept that we don't want to misrepresent "reality" (more on that later), but as I believe Dietmar also points out, there can be some snags in striving for that goal even if we only derive from the data. And hence, the discussion of just what these tools are doing.

We know that even automasking sometimes has hiccups, perhaps more than we'd like especially in SVD or anything (Sharp?) using the new starfishy algorithms.

Also, something like Foraxx is fully automated pixel math, derived fully from the data (fairly cleverly, I must admit), yet the result, due to subtraction, is an alteration of reality for prettiness purposes. And that's true whether our base reality is to maintain relative emissions intensities global to each channel, or even if we toss relative global aside and accept non-linear stretching per wavelength type (as we do in NB Accent). Foraxx takes a big step beyond that.

Even when we acknowledge assumptions and compromises along the way -- and there are lots of them, from our sensor QE, to the filters or CFA array used, to stacking parameters, and even to light pollution, seeing, and guiding -- we can still try to aim for a certain display of ground reality - our captured data - which does require manipulation for human consumption.

To me, that some processing is inherently a manipulation doesn't kick the door wide open to enter the art zone for what could be considered, oh, maybe a deepfake? Nor would I think things are just a matter of degree. Thus, is Topaz okay as long as you only use a little bit?

I need to go back to the chart but I think it may already address your border line. Wouldn't this be the intrinsic vs extrinsic row? :think:
Mike in Rancho
Posts: 1166
Joined: Sun Jun 20, 2021 10:05 pm
Location: Alta Loma, CA

Re: Astro to Astro-art Continuum, 2024 Edition

Post by Mike in Rancho »

decay wrote: Thu Feb 01, 2024 3:42 pm The background shows an image created for testing purposes. It shows a soft transition on the left side and a hard one on the right side of the bright vertical bar. The tool draws the corresponding intensity graph. R, G, B is distinguished with different lines, if applicable. But I would recommend to use grey scaled images only.

It may be downloaded here. No viruses in there, I promise! It should run on current Windows 10 and 11 systems.
https://c.web.de/@334960167135216273/Wk ... ftD_439MzQ

Yeah, dunno. Is it helpful? Comments welcome!
Neat, Dietmar! :bow-yellow:

I will try this out later or on the weekend. :D

Does the graph represent a single horizontal pixel row of the target image?

I do wonder if the one sample gradient image (soft, hard transition) is realistic. It may show us some useful things especially as to UM and maybe contrast too, but may not have the inherent scaled details for wavelets to work? :confusion-shrug:

The other night on my tablet I tried to do a bit more reading, but quickly ended up mired in the muck. :lol:

While likely broad categories compared to how ST handles things in a more complex and astro-specific manner, there were a few Wikipedia entries that cover stuff like the scaling involved in wavelets.

https://en.wikipedia.org/wiki/Pyramid_( ... rocessing)
https://en.wikipedia.org/wiki/Scale_space
https://en.wikipedia.org/wiki/Wavelet
https://en.wikipedia.org/wiki/Wavelet_transform

They sure use the word Gaussian a lot. I need to figure out what that is. :lol:

But there are interesting tidbits about the need to break things down for enhancement when global scales wouldn't be proper (think tree trunk vs all the leaves), and also the need to not introduce "new" structure. That's the part where I am still stuck on just how that can be accomplished when you are doing these blur/subtract/add edge detections. :confusion-shrug:
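One thing that helped me with the "no new structure" question: the multi-scale decomposition itself is lossless. Here is a rough 1-D sketch of the pyramid/wavelet idea (repeated Gaussian blurs, with the differences kept as per-scale detail bands; this is the generic textbook construction, not ST's actual algorithm):

```python
import numpy as np

def gauss_blur_1d(sig, sigma):
    """Blur a 1-D signal with a truncated Gaussian kernel (reflect padding)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(sig, radius, mode="reflect")
    return np.convolve(padded, kernel, mode="valid")

def scale_bands(sig, sigmas=(1, 2, 4, 8)):
    """Split a signal into detail bands of increasing scale plus a residual.

    Summing all bands reproduces the input exactly (telescoping sum)."""
    bands, current = [], sig.astype(float)
    for s in sigmas:
        blurred = gauss_blur_1d(current, s)
        bands.append(current - blurred)   # detail living at roughly scale s
        current = blurred
    bands.append(current)                 # low-frequency residual
    return bands

sig = np.where(np.arange(128) < 64, 0.2, 0.8)   # a step 'edge'
bands = scale_bands(sig)
recon = np.sum(bands, axis=0)
# recon equals sig to machine precision: decomposing invents nothing.
# 'New' structure only appears once individual bands are boosted or cut.
```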

Interesting thoughts here too: https://www.ianmorison.com/deconvolution-sharpening/

I know it's his commentary, but it does seem to simplify some things in a decent way. Though he does reference Clark too. :think:

Also, terminology is all over the place everywhere. Some seem to call everything sharpening, with deconvolution (as we know it, meaning R-L data recovery) as a subset. Others including research papers call everything deconvolution, with UM and wavelets in the "deblurring" category of it, and R-L (plus some others) in the data restorative subcategory.
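Since R-L keeps coming up: the algorithm itself is short enough to sketch in 1-D. This is the generic textbook Richardson-Lucy update on noiseless toy data, nothing ST-specific, and the PSF is made up:

```python
import numpy as np

def convolve_same(sig, psf):
    """Same-size convolution with edge padding."""
    r = len(psf) // 2
    return np.convolve(np.pad(sig, r, mode="edge"), psf, mode="valid")

def richardson_lucy_1d(observed, psf, iterations=50):
    """Classic multiplicative R-L update on non-negative 1-D data."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        predicted = convolve_same(estimate, psf)
        ratio = observed / np.maximum(predicted, 1e-12)
        estimate = estimate * convolve_same(ratio, psf_flipped)
    return estimate

# Blur a sharp 'star' (a spike) with a known PSF, then try to undo it.
truth = np.zeros(64)
truth[32] = 1.0
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])   # made-up symmetric PSF
observed = convolve_same(truth, psf)
restored = richardson_lucy_1d(observed, psf)
# restored re-concentrates the flux back toward index 32 - 'data recovery'
# in the sense above, because the PSF is known or measured rather than guessed.
```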
admin
Site Admin
Posts: 3382
Joined: Thu Dec 02, 2010 10:51 pm
Location: Melbourne
Contact:

Re: Astro to Astro-art Continuum, 2024 Edition

Post by admin »

If it helps, almost every processing algorithm/technique has an equivalent in "1D" such as in audio/music;

For example;
  • Wavelet sharpening is about isolating and manipulating specific frequencies (like a graphic equalizer).
  • Deconvolution can be thought of as undoing the effects of a reverb/reflection (see also convolution reverbs as audio effects).
  • Local dynamic range manipulation can be thought of as using a limiter/compressor (the things they use to make commercials sound so obnoxiously loud).
In all of this - whether audio or image - masking should ideally only be used to help isolate anomalies or passively sample signal of interest, for example
  • spikes in the signal where linearity breaks down or where clipping occurs (e.g. bright stars, loud noises causing microphone saturation) or where signal was disrupted (dead/hot pixels, audio interference or cut/outs)
  • candidates for PSF estimation (stars, audio pulses)
If you're wondering what an algorithm or processing step does to a 2D signal, it can sometimes be very helpful to find out (or imagine) what it does to a simpler 1D signal (just amplitude changing over time).
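To make that concrete, here is about the simplest possible 1-D version of the unsharp-mask "blur, subtract, add back" step discussed above, applied to a step signal (a box blur is used for brevity; a real implementation would typically use a Gaussian):

```python
import numpy as np

def unsharp_1d(sig, radius=3, amount=1.0):
    """Unsharp mask on a 1-D signal: blur, subtract, add scaled detail back."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)   # box blur
    padded = np.pad(sig, radius, mode="edge")
    blurred = np.convolve(padded, kernel, mode="valid")
    return sig + amount * (sig - blurred)

step = np.where(np.arange(40) < 20, 0.2, 0.8)   # an 'edge' over time
sharp = unsharp_1d(step)

# Flat regions are untouched (the signal equals its own blur there), but
# next to the edge the output overshoots above 0.8 and undershoots below
# 0.2 - the 1-D equivalent of the halo/ringing around over-sharpened stars.
```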
Ivo Jager
StarTools creator and astronomy enthusiast
decay
Posts: 497
Joined: Sat Apr 10, 2021 12:28 pm
Location: Germany, NRW

Re: Astro to Astro-art Continuum, 2024 Edition

Post by decay »

Mike in Rancho wrote: Thu Feb 01, 2024 6:46 pm Does the graph represent a single horizontal pixel row of the target image?
Correct! Just play around a bit and you will see ...
Mike in Rancho wrote: Thu Feb 01, 2024 6:46 pm I do wonder if the one sample gradient image (soft, hard transition) is realistic. It may show us some useful things especially as to UM and maybe contrast too, but may not have the inherent scaled details for wavelets to work?
This really was only to demonstrate the functionality of the tool. (What is UM?) We have to think about what the input images should look like. So which specific ST module/function do we want to evaluate, and what might be an appropriate input pattern in that case? My intention was to simplify things by using simple input patterns and reducing to one dimension (like Ivo wrote), as it is easier to see what happens using such intensity graphs.

BTW: a square pulse (hard transition) as an input signal contains frequency components (harmonics) all the way up to infinity. You can feed such pulses into any unknown transmission system (an ST module, for example), and evaluating the output signal lets you figure out the characteristics and parameters of the transmission path.
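That broadband-probe property is easy to verify numerically (a generic numpy/FFT check, using a periodic square wave as the pulse train):

```python
import numpy as np

n = 1024
t = np.arange(n)
square = np.where((t // 64) % 2 == 0, 1.0, -1.0)   # square wave, period 128

spectrum = np.abs(np.fft.rfft(square)) / n
# Energy sits at the fundamental (bin 8 here) and every odd harmonic
# (bins 24, 40, ...), with amplitudes falling off as 1/f, while the even
# harmonics vanish - so one pulse train probes many frequencies at once.
```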

For example I played around with Contrast module a bit:
[Attachment: 2024-02-02 13_21_38-Window.jpg]
I could imagine creating synthetic input images containing sine wave patterns of different frequencies. I could write a tool for this task ... :mrgreen:

But of course, I'm not sure how to go on. We should think about what we want to know ... and what to do first ... I'm confused :lol: