I'd like to share some additional thoughts on the discussion of Unsharp Mask and (Wavelet) Sharpening. (Yeah, late to the party again. That's me.)
As Mike wrote, these techniques do not add any information; they just change the acutance (I just learned that word on Wikipedia), that is, our visual perception of sharpness. And yes, this is done by contrast enhancement, in this case _local_ contrast changes. To put it very simply, edges in the image are detected and stressed by local contrast enhancement.
I think these articles on Wikipedia are worth a look, as it is quite easy to follow how it works:
https://en.wikipedia.org/wiki/Unsharp_masking
https://en.wikipedia.org/wiki/Edge_enhancement
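To make the basic recipe from those articles concrete, here is a tiny sketch in Python of the textbook unsharp mask (just my own illustration using SciPy's Gaussian blur, not how Gimp or StarTools actually implement it; the radius and amount names are mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=0.5):
    """Textbook unsharp mask on a float image in [0, 1]."""
    blurred = gaussian_filter(image, sigma=radius)  # low-pass copy
    detail = image - blurred                        # high frequencies: edges, fine structure
    sharpened = image + amount * detail             # stress the edges (local contrast)
    return np.clip(sharpened, 0.0, 1.0)             # naive clipping; ST promises to avoid this
```

The subtraction only isolates the edges; the result you actually see is the untouched original plus a scaled copy of those edges, which is exactly the local contrast boost described above.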
As far as I understand, Wavelet Sharpening is mainly a more sophisticated and advanced implementation of this idea. The image is disassembled into multiple components representing different spatial frequencies. For example, the above-mentioned Edge Enhancement uses a high-pass filter (high frequencies) which – well – extracts the edges in the image. By using other filters with lower frequencies it is possible to extract and stress larger structures. These are the sliders we know from Registax or the Contrast Equalizer of Darktable. And of course from the Sharp module of StarTools. And the Contrast module's 'Locality' slider works the same way, I guess.
It seems to be much more difficult to find decent information on this, at least something that is understandable to ordinary mortal human beings. Maybe this helps at least to get an idea:
https://photo.stackexchange.com/questio ... n-registax
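If my understanding is right, the multi-scale idea can be sketched the same way by stacking several blurs (a difference-of-Gaussians toy version, not the actual wavelet transform Registax or ST's Sharp use; the sigmas and gains are made-up values standing in for the per-scale sliders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_sharpen(image, sigmas=(1, 2, 4, 8), gains=(1.5, 1.3, 1.1, 1.0)):
    """Split the image into detail layers of increasing size and re-weight them."""
    layers = []
    previous = image
    for sigma in sigmas:
        blurred = gaussian_filter(image, sigma=sigma)
        layers.append(previous - blurred)  # structures roughly of this scale
        previous = blurred
    residual = previous                    # everything larger than the coarsest scale
    # With all gains at 1.0 this reassembles the original image exactly.
    result = residual + sum(gain * layer for gain, layer in zip(gains, layers))
    return np.clip(result, 0.0, 1.0)
```

Each gain is one 'slider': values above 1 stress structures of that size, values below 1 soften them.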
I too think that all this should preferably be used with caution and in a subtle way. Using strong enhancement on large(r) structures can deform the whole object. This may not be too obvious at first glance while processing deep-sky objects, as they are quite abstract to our eye. But it may be useful to first try this on terrestrial objects to get a better feeling, like Mike did. I can recommend the Contrast Equalizer of Darktable for this; it's intuitive to use and quite powerful.
Having said all this: I wonder if it is possible to use ST's Contrast and/or Sharp modules to get what Freddy did using Unsharp Mask in Gimp. I reread both modules' documentation, and if I understand correctly, their implementations should be more sophisticated than, and superior to, others. This would have the benefit of taking full advantage of ST's great Tracking feature (taking SNR into account) and Ivo's promise that data never gets clipped. Unfortunately I don't have much experience with either module. Often I omit them, and other times it's more trial and error.
But maybe it’s worth a try?! Any thoughts?
Best regards, Dietmar.
Re: OK, so a Tadpoles Startools version - SHO color palet -IC 410
Nice post Dietmar, good additional thoughts.
I wonder, though, how close in theory the (ST) implementation of contrast change is to what is happening in sharpening routines, and thus whether it is correct to use the term "contrast" in that regard.
I've actually been thinking about stuff like this, when free time permits, and how modules like OptiDev, Contrast, and HDR all work. I get the sense there are similarities, though with OptiDev you are going global, with Contrast it could be global or local, and HDR of course is local. Amirite on that? And in basic theory they are all just altering dynamic range. That loses some information (the true relative intensities, which is almost a necessity given the data we are dealing with) but gains the human perception of structural information that would otherwise be impossible to view (buried in the shadows or lost in bright highlights).
There are likely safeguards (anti-clipping, re-normalizing) built into all of those. Now, if that is the case, are sharpening algorithms therefore different in nature? As described in the Wikipedia entries, and as I found by doing sort of a simple "unsharp mask" via manual operations, a subtraction is involved - specifically, of a blurred rendition of the same data. Well, I am always wary when I see a subtraction (think Foraxx or hole-cutting with masks to "reveal" OIII), at least outside of something like initial gradient extraction. Does the fact that the blur is Gaussian somehow make things more legitimate? I am unsure.
Not that I am necessarily against some usage of sharpening, though we ought to be knowledgeable about what exactly it is doing. I'm not sure I am that knowledgeable yet. But based on my little experiment it seems somewhat more manipulative than the other modules discussed.
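If I write out what I did, the whole thing collapses to one line (my own notation): sharpened = original + k × (original − blur(original)) = (1 + k) × original − k × blur(original). So nothing from the real data is actually thrown away; the blurred copy is only used as an estimate of the smooth, low-frequency part, and subtracting it just tells the algorithm where the edges are before that edge signal gets added back on top of the untouched original. Whether that makes it more or less legitimate than a masked subtraction, I still don't know.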
Somewhere in these forums Ivo once made a little table that ranked various techniques as to their level of data fidelity vs, I suppose, artistry. But probably only Dietmar could find it again.
I wonder if it would be useful to augment that table (I believe it only had three columns) to include more techniques, and split them up into finer gradations of where they fall on the documentary-versus-artsy continuum, and further of course try to discuss why each such module would be so ranked. Maybe split it off from Freddy's Tadpoles though.
Or maybe it wouldn't be useful. Likely I would just confuse myself.
Re: OK, so a Tadpoles Startools version - SHO color palet -IC 410
Hi Mike,
From my understanding, the sharpening algorithm got its "unsharp" name from analog photography, where two negatives of an image were placed on top of one another, slightly apart, and photographed, which made the result slightly blurry, or "unsharp". That blurred "unsharp mask" image was then placed on top of the original and photographed, with the result that light became lighter and dark became darker at the contrast edges, improving sharpness.
The digital version's algorithm goes through some kind of rigamarole to accomplish the same result, maybe involving some kind of blur at the pixel level. As I understand StarTools' Sharp, it works to sharpen certain of those "edges" at various scales and at various strengths by making the bright side of the edge brighter, or the dark side darker, or a little bit of both. Anyway, that is what I get from reading what the parameters do, and I may be totally wrong. If there was some sort of "subtraction", I assume it was added back in beneficially.
Jeff
Re: OK, so a Tadpoles Startools version - SHO color palet -IC 410
Sure, Mike. Always at your service.
Mike in Rancho wrote: ↑Thu Jan 25, 2024 5:54 pm
Somewhere in these forums Ivo once made a little table that ranked various techniques as to their level of data fidelity vs, I suppose, artistry. But probably only Dietmar could find it again.
viewtopic.php?p=11393#p11393
I will respond to all your points later. Too much at once for me.
And yes, we should probably decouple at least the theoretical discussions into another thread and stop distracting from Freddy's fine tadpoles.
One of my intentions was to see if it's possible to do in ST what Freddy did with Gimp (or PI). But we might come back here when we find the solution. Or enlightenment.
Best regards, Dietmar
Re: OK, so a Tadpoles Startools version - SHO color palet -IC 410
If you see that electric glow around objects, then it is just applied too much, Mike. You can see that if you apply the Flux module as well...
Mike in Rancho wrote: ↑Fri Jan 19, 2024 6:41 am
For scientific purposes, I opened a picture of my backyard in Gimp. I copied it and applied a Gaussian blur, then subtracted that off the original, and then took that result and added it to the original. Yep, things looked "sharper," though the power wires ended up with an electrified appearance around them.
I know there's also a scaling that can be done, but I didn't think up a way to implement that (in retrospect, maybe the opacity % would have worked for that). In any event, why does blurring something, then subtracting and adding back, create a "sharpening"? Maybe subtracting off the blur leaves just the strongest elements of the image for pasting back onto the image?
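For what it is worth, the same experiment can be done on a toy one-dimensional "edge" in a few lines of Python (my own sketch, using SciPy's Gaussian blur instead of Gimp's), and it shows both why it sharpens and where that electric glow comes from:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

edge = np.concatenate([np.zeros(10), np.ones(10)])  # a hard dark-to-bright edge
blurred = gaussian_filter1d(edge, sigma=2.0)
amount = 1.0                                         # the "scaling" (like layer opacity in Gimp)
sharpened = edge + amount * (edge - blurred)

print(sharpened.min(), sharpened.max())  # dips below 0 and overshoots above 1 near the edge
```

Next to the edge, the difference (original minus blur) is positive on the bright side and negative on the dark side, so adding it back pushes the two sides further apart; with too much amount it overshoots, and you get exactly that halo around the power wires.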
Unsharp mask can do great things, but it is always a balancing act. I think that goes for just about every processing step, though. And sometimes it does not add much either; it all very much depends. Unsharp mask might (mind the word might) accentuate things in the picture, well, if they are there. E.g. I am now working on the Christmas Tree, taken in bad moonlight, and really, unsharp mask just does not do much there, for whatever reason.
It is the same with stretching... it is a hard thing to do. Often one thinks it is pretty OK, but after finishing the image it turns out it was applied either too much or not enough... it is a tricky thing IMHO. I can't really do it well in PI in most cases... StarTools is way better IMHO. I could not make an M42 with a decent core; it always came out way overblown in PI. Strangely, in StarTools it is not hard to do... or at least, let's say, far easier.
Processing is a bit like Pandora's box. Often it goes bad, but the hope remains that on the next occasion you will do it better and eventually get a decent image.