Mike in Rancho wrote: Mon Jan 23, 2023 8:04 am
Thanks, Ivo.
Yes, further study needed. And who knew I was unwittingly being taught these things just by using ST?
Sadly I do have a math minor, though that's ancient history, but this seems more of a specialty and has a bit of its own language. In fact until I started AP a few years ago, I knew squat about Photoshop, image processing, layers, etc. etc. When I first downloaded Gimp my eyes bugged out. What is all this stuff!?
So when I see things like "kernel," I think - isn't that the low level inside my CPU?
And when the guy split up his charts into "time" on one side and something else on the other, and then talked about "convolving," I just threw up my hands.
While advanced math is useful in some cases, it's indeed more of a question of getting to know the terms/jargon and basic thinking/processes.
Convolving in the spatial domain (e.g. of an X by Y pixel image, as you're used to) is as simple as this:
Given a filter kernel that looks, for example, like this:
1 2 1
2 4 2
1 2 1
Then the "new" target pixel in the image at location X, Y will become;
pixel value at (X-1, Y-1) x 1
+ pixel value at (X, Y-1) x 2
+ pixel value at (X+1, Y-1) x 1
(we just processed the top row of the filter kernel)
+ pixel value at (X-1, Y) x 2
+ pixel value at (X, Y) x 4 <--- yup, that's our original pixel going into the mix as well
+ pixel value at (X+1, Y) x 1
(we just processed the middle row of the filter kernel)
+ pixel value at (X-1, Y+1) x 1
+ pixel value at (X, Y+1) x 2
+ pixel value at (X+1, Y+1) x 1
(we just processed the bottom row of the filter kernel)
The new value will need to be corrected ("normalized") by simply dividing it by the total of the filter values, which in this case is 16 (1+2+1+2+4+2+1+2+1). (We could also just use fractions in the filter, of course, so that everything adds up to 1.0.)
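In code, the above boils down to just a few lines. Here's a minimal Python sketch of exactly that arithmetic (the tiny image and the variable names are just made up for illustration):

import numpy as np

# A tiny made-up grayscale image (arbitrary values, just for illustration).
img = np.array([[10, 20, 30, 40],
                [20, 50, 60, 40],
                [30, 60, 90, 50],
                [40, 40, 50, 30]], dtype=float)

# The 3x3 Gaussian kernel from above.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float)

# Convolve the single pixel at column x, row y.
x, y = 1, 1
total = 0.0
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        total += img[y + dy, x + dx] * kernel[dy + 1, dx + 1]

# Normalize by the sum of the kernel weights (16 here).
new_value = total / kernel.sum()

(Strictly speaking, convolution flips the kernel first, but this kernel is symmetric so it makes no difference here.)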
Congratulations, you just convolved a pixel by a 3x3 filter kernel! This way of convolving is really just taking a weighted average of the immediate vicinity of a pixel. This particular filter is (due to its specific weights/numbers) a Gaussian kernel, aka a "Gaussian blur". There is a massive number of other kernels (different numbers, different sizes besides 3x3) with useful properties. And of course, the by now familiar "PSF" is also a kernel just like the others; a point light at a location in our image is "filtered" by this kernel. Deconvolution, then, is attempting to reverse this filtering.
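To see the PSF idea in action, here's a rough sketch (Python again; scipy's ndimage.convolve does the looping over all pixels for you, and the 7x7 size and edge mode are just choices I made for the demo). Convolving a "point light" - a black image with one bright pixel - simply stamps the kernel's shape onto it, which is exactly what a PSF does to a star:

import numpy as np
from scipy import ndimage

# Normalized 3x3 Gaussian kernel (weights add up to 1.0).
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

# A "point light": an otherwise black image with a single bright pixel.
point = np.zeros((7, 7))
point[3, 3] = 1.0

# The point gets smeared out into the kernel's shape.
smeared = ndimage.convolve(point, kernel, mode='constant')
print(smeared[2:5, 2:5] * 16)   # prints the 1-2-1 / 2-4-2 / 1-2-1 pattern back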
As you can see, a lot of the "scariness"/math tends to melt away into a puddle of basic addition, subtraction, multiplication and division. It does get a little more esoteric when you start venturing into the "frequency domain" and Fourier transforms. Super interesting and useful(!) in their own right, but it's possible to understand the bulk of what's going on without delving into that.
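For the curious, the frequency domain connection is simply this: convolving in the spatial domain is the same as a point-by-point multiplication of the Fourier transforms, which is one reason FFTs show up so often in this context. A rough sketch, purely to show the equivalence (the random image, the wrap-around edge handling and the padding/rolling of the kernel are just choices to make the two routes comparable):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((32, 32))                       # stand-in for a real image

kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

# Spatial domain (wrap-around edges, to match the FFT's circular behaviour).
spatial = ndimage.convolve(img, kernel, mode='wrap')

# Frequency domain: pad the kernel to image size, centre it on (0, 0),
# FFT both, multiply point-by-point, transform back.
padded = np.zeros_like(img)
padded[:3, :3] = kernel
padded = np.roll(padded, (-1, -1), axis=(0, 1))
freq = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))

print(np.allclose(spatial, freq))                # True - same result either way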
Mike in Rancho wrote:
In any event...without realizing I had jumped in way over my head, the genesis of it all was a (I presumed false) equivalency between artifacting in BXT and true deconvolution. As in, hey, deconvolution can cause artifacts too! So, my going theory was that true deconvolution artifacts, primarily ringing, to my knowledge, have to be pretty well understood - at least by the experts. And that does seem to be the case.
Your hunch is definitely correct. It is indeed a false equivalency. Ringing is well understood, as are the "artefacts" it creates. Those artefacts - if any - are distinct and easily identifiable, as opposed to something that was - by design - made to look like plausible detail.