Mike in Rancho wrote: ↑Tue Jun 27, 2023 6:54 pm
Thanks Ivo. I've been following along. I notice you mentioned that SVD is not really meant for "fixing" star shapes, and was wondering if you could expound on that for us.
I know we do have a spatial error slider (I'm not very practiced with it), and you've also mentioned before that adding iterations will bring things ever more towards a point source...
Also, granted we should all, first things first, attempt to get our business under control with respect to things like tilt and corrector backspacing...
Is it more that SVD is trying to clarify the non-star detail, and the stars' PSFs are just the (variable) roadmap? Thus, if a corner star is a bit of an egg, ST will understand that the proper deconvolution of a nearby Bok globule (or whatever) is reverse-egg?
That's correct! In principle, however, all pixels are treated equally (star, Bok globule or otherwise). It's all about the signal quality of the area, and how much noise and inaccuracy you'd be "scooping up" trying to gather all that spread-out signal and re-concentrate it.
Whereas maybe at center FOV the deconvolution is more reverse-circle.
Exactly.
Of course, while we all want that, it's probably also true that star shapes can make or break an image. The eye is just drawn to any such defects. Hence probably the popularity of something like BXT, regardless of where it might fall on the deconvolution continuum (which seems more on the de-blur and warp side than the data recovery side, if one defines expansively, or somewhere in the middle if it is trying to do both).
How much power is there in true R-L deconvolution to undo things like eggy or coma stars, or is that really off the table? Maybe I haven't been thinking about SVD's purpose properly.
Theoretically, deconvolution can reverse any distortion. The problem is always in the precision of the data and calculations, the data being:
1. the PSF model
2. the source data (the "blurred image")
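To make the "forward" direction concrete, here is a minimal Python sketch of how those two ingredients combine into what your sensor records (all values and the Gaussian PSF are made up for illustration); deconvolution is the attempt to run this backwards:

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# The "cause": an ideal point light (a star) on an empty patch of sky
scene = np.zeros((64, 64))
scene[32, 32] = 1.0

# A made-up Gaussian PSF standing in for optics + seeing (ingredient 1)
x = np.arange(-7, 8)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

# The "effect" (ingredient 2): convolution spreads the star's signal
# over its neighbours, and noise then corrupts every pixel
observed = fftconvolve(scene, psf, mode="same") + rng.normal(0, 1e-3, (64, 64))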
The key to understanding why we can't perfectly undo blurs is this "deconvolution is an ill-posed problem" thing that gets thrown around.
Deconvolution is a prime example of an inverse problem. Wikipedia defines it well:
An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects.
Scrolling down on that Wikipedia page, we get to the crux of the issue:
Mathematical and computational aspects
Inverse problems are typically ill-posed, as opposed to the well-posed problems usually met in mathematical modeling. Of the three conditions for a well-posed problem suggested by Jacques Hadamard (existence, uniqueness, and stability of the solution or solutions) the condition of stability is most often violated. In the sense of functional analysis, the inverse problem is represented by a mapping between metric spaces. While inverse problems are often formulated in infinite dimensional spaces, limitations to a finite number of measurements, and the practical consideration of recovering only a finite number of unknown parameters, may lead to the problems being recast in discrete form. In this case the inverse problem will typically be ill-conditioned. In these cases, regularization may be used to introduce mild assumptions on the solution and prevent over-fitting. Many instances of regularized inverse problems can be interpreted as special cases of Bayesian inference.
What this means practically for deconvolution, as implemented in computer algorithms, is that even the tiniest variation in your PSF model, source data or calculations (rounding errors) has a *massive* effect on the "solution".
Any tiny error will quickly destabilize the solution. You can only imagine how seriously catastrophic any sort of noise or non-linearity is if even rounding errors can destabilize the solution under perfect conditions.
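You can see this destabilization for yourself with a toy experiment (to be clear: a deliberately naive sketch, not how any real decon software operates). Invert the blur in the Fourier domain by dividing by the PSF's transform: with zero noise this recovers the point light almost exactly, but add noise at just 0.1% of the star's peak and the "solution" is swamped:

import numpy as np

rng = np.random.default_rng(0)
n = 64
scene = np.zeros((n, n))
scene[n // 2, n // 2] = 1.0

# Gaussian PSF laid out on the full grid, centred at (0, 0), so that
# multiplying FFTs below corresponds to (circular) convolution
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
d2 = np.minimum(yy, n - yy) ** 2 + np.minimum(xx, n - xx) ** 2
psf = np.exp(-d2 / (2 * 2.0**2))
psf /= psf.sum()
otf = np.fft.fft2(psf)  # the PSF's Fourier transform

observed = np.fft.ifft2(np.fft.fft2(scene) * otf).real  # the blur...
observed += rng.normal(0, 1e-3, (n, n))                 # ...plus tiny noise

# Naive deconvolution: divide the spectrum by the OTF. Wherever the OTF
# is nearly zero, the division amplifies the noise at those frequencies
# by many orders of magnitude, and the result is garbage.
naive = np.fft.ifft2(np.fft.fft2(observed) / otf).real
print(np.abs(naive).max())  # enormous: amplified noise, not a clean star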
It should also be noted that non-blind deconvolution (i.e. where you know the PSF), as for example implemented in SV Decon or the paper cited on CN, can be proven to converge on a - for all intents and purposes - unique solution. In other words, the uniqueness aspect is not so much a problem for the deconvolution implemented in astronomical image processing software. Note, by the way, that the same cannot be said or proven for an opaque neural hallucination algorithm.
Now that we know that stability is the major issue, we can delve into this a bit deeper. The one trick we have up our sleeve to keep a solution from destabilising is called regularization. Regularization makes sure that the solution after applying deconvolution doesn't veer too far off course from the input data/image. What "too far" means is entirely down to the regularization algorithm; it's usually where the analytical smarts of a decon algorithm reside.
At the core of it, regularization tries to statistically (hence me bolding the Bayesian inference part) quantify the uncertainty (errors) in the source and PSF, and minimize their destabilizing effect on the solution. In other words, regularization tries to determine how probable it is that something is real recovered signal versus artefacting noise, and then weighs the recovered signal accordingly.
A regularization algorithm can be as unsophisticated as re-blurring the "new" image so that artefacting is blurred away again (= spreading uncertainty over neighboring pixels, alas along with the recovered signal). Or it can be as sophisticated as ST's algorithm, which introduces complex statistics from the entire processing history of a pixel to much more accurately estimate the veracity of the recovered signal, versus it just being the result of noise, non-linearities or other bad influences.
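For a feel of what a middle-of-the-road regularizer looks like, here is the textbook Tikhonov/Wiener-style filter (a generic illustration only, not ST's algorithm). A single knob, lam, encodes our uncertainty: frequencies where the PSF retains real signal get confidently inverted, while frequencies that would mostly amplify noise are pulled back towards zero:

import numpy as np

def wiener_deconvolve(observed, otf, lam):
    # otf: FFT of the PSF, padded to the image size.
    # lam: noise-to-signal estimate; larger lam = more uncertainty
    #      = a gentler, safer (but less sharpened) solution.
    filt = np.conj(otf) / (np.abs(otf) ** 2 + lam)
    # filt ~ 1/otf where |otf| is large (confident inversion), and
    # filt -> 0 where |otf| is tiny (would only amplify noise)
    return np.fft.ifft2(np.fft.fft2(observed) * filt).real

With lam = 0 this degenerates into the naive inversion from the earlier sketch and blows up; with lam set around the noise level, you trade a little sharpness for a stable solution.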
As an aside, R&L proposed to repeat this process a few times (e.g. over a few iterations): deconvolve, pull back (regularize), then deconvolve that, pull back, then deconvolve that, pull back again, etc. As a result, an ideal deconvolution algorithm "converges" on a solution beyond which no improvements can be made. Uncertainty and improvements achieve a sort of equilibrium, where the successive deconvolving and pulling back start to roughly cancel each other out.
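That iterate-and-pull-back scheme is simple enough to sketch in a few lines. This is a bare-bones R-L loop with none of the extra regularization smarts discussed above (scikit-image ships a production version in skimage.restoration, for the curious):

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    # Start from a flat, featureless estimate
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        # Forward step: what would our current estimate look like, blurred?
        reblurred = fftconvolve(estimate, psf, mode="same")
        # Compare with what we actually observed...
        ratio = observed / np.maximum(reblurred, 1e-12)  # guard against /0
        # ...and redistribute the mismatch back through the PSF
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

Each pass sharpens a little less than the one before, which is the "equilibrium" described above; push a plain loop like this far beyond that point on noisy data and the artefacts win instead.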
Now that we know what regularization does (quantifying uncertainty and using that knowledge to stabilize the solution), we can better appreciate what happens in the case of a severely deformed star, where the ideal solution (a nice point light) is very far away from what we start off with (a highly defocused or elongated mess). Unless our dataset and PSF models are pristine and highly accurate, you will find that the equilibrium lies somewhere halfway between deformed and corrected. Pushing the deconvolution further would destabilize the solution too much (causing ringing, artefacts, etc.).
In ST's SVD module, you will find, for example, that severely deformed stars tend to have their high-SNR cores corrected, but not always their low-SNR "halos".
In summary, in true signal processing there is no free lunch; the signal has to come from somewhere. If tiny bits of that signal are spread (convolved) amongst the neighbouring pixels, then you can attempt to recover said signal (deconvolve). But whatever you recover will be subject to the noise in all those neighbouring pixels' signal. And it will further be subject to the accuracy of your model of how much to take from those neighbouring pixels (the PSF).
Of course, if you use neural hallucination, you can just sidestep all of this and make up (hallucinate) some nice, new, clean substitute signal that is plausible for that area, based on the way it looks (the input). It has absolutely nothing to do with deconvolution, nor with the signal that you originally captured. The original signal is re-interpreted and replaced with something "nice" in one go. None of the procedures, considerations, pitfalls, etc. of true deconvolution apply. Yay for "progress".
Hope that helps?