Mathematically, this is a very simple affair, though, as most of you know, deconvolution in the real world is much harder: because the signal is imperfect, the solution tends to destabilise. This necessitates some intervention in the otherwise very basic process, hence iterative algorithms such as Richardson-Lucy deconvolution.
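To make this concrete, here is a minimal 1-D sketch of the Richardson-Lucy update (my own toy implementation, not production code): the estimate is repeatedly corrected by the ratio of the observation to the re-blurred estimate, so every change is driven by the data and the stated PSF.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Iteratively refine an estimate so that, blurred by the PSF,
    it reproduces the observation (1-D Richardson-Lucy update)."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)  # guard against divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy data: two point sources blurred by a Gaussian PSF
x = np.linspace(-3, 3, 13)
psf = np.exp(-x**2)
psf /= psf.sum()
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.5
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, iterations=200)
```

Note that the update rule contains nothing but the observation and the PSF: there is no external prior from which invented detail could leak in.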
The important thing here is that you can articulate what is happening to the signal and "how you got there" - a prerequisite in science so the result can be scrutinized and replicated.
In the case of real deconvolution:
- you are responsible for (and in control of) providing the PSF(s)
- the algorithm then reverse-applies this PSF (or these PSFs) in a well-understood, well-documented, and well-accepted manner
- no new information is introduced, and all information comes from the observation/dataset itself
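This scrutability can be sketched in a few lines (function names are my own, purely illustrative): given a claimed restoration and the stated PSF, anyone can re-apply the forward model and measure how well the result actually explains the observation.

```python
import numpy as np

def forward_model(estimate, psf):
    # Re-apply the stated PSF: the documented, reproducible step
    return np.convolve(estimate, psf, mode="same")

def residual_rms(observed, estimate, psf):
    # RMS mismatch between the observation and the re-blurred estimate
    return np.sqrt(np.mean((observed - forward_model(estimate, psf)) ** 2))

# Example: scrutinize two candidate "restorations" of a blurred signal
x = np.linspace(-3, 3, 13)
psf = np.exp(-x**2)
psf /= psf.sum()
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.5
observed = np.convolve(truth, psf, mode="same")

good = residual_rms(observed, truth, psf)        # re-blurs back to the data
bad = residual_rms(observed, np.zeros(64), psf)  # does not explain the data
```

A candidate that re-blurs back to the observation is consistent with the data; one that does not can be rejected. No such check exists for a black-box re-interpretation, which is exactly the contrast drawn below.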
In the case of a neural net, by contrast:
- deconvolution is not explicitly implemented; there is no method or algorithm, let alone a PSF, that can be articulated, provided, or extracted
- the neural net re-interprets the image in a black-box manner, unable to articulate how and why some pixels were changed or how they relate to the input data
- new information is introduced, and the information does not come from the observation/dataset itself
In the case of neural net hallucination, the result is a re-interpretation of the dataset that may or may not look like a restoration, and that carries no intrinsic evidence supporting its validity, method, or origins. As such, it destroys documentary and scientific value.