Mike in Rancho wrote: ↑Sun Dec 18, 2022 6:05 am
The RC webpage for BXT seems like he claims there's data fidelity in what he is doing - possibly trying to distance himself from the Topaz-like issues? I wouldn't know how to otherwise evaluate.
A couple of concerns I had were the way he discussed the PSF, and that there was supposedly no need for iterations. I'm unsure if that means truly no iterations are run, or whether the module determines how many are needed and just does it with no user intervention.
It reads to me like a typical one-pass impulse->response consultation of a neural net, without any actual traditional deconvolution step happening. That is, it is not a case of determining the PSF and then helping a traditional deconvolution algorithm reverse-apply that PSF while keeping noise propagation in check.
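To make the distinction concrete, here's a rough sketch - purely illustrative, BXT's internals are not public and the names below are made up - of what a classical, PSF-driven iterative deconvolution looks like, versus a one-pass neural-net consultation:

```python
# Purely illustrative - not anyone's actual implementation. Classical deconvolution
# needs an explicit PSF and iterates towards a solution; that is what ties it to a
# physical model of the blur.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30, eps=1e-7):
    """Classical iterative deconvolution with a known/estimated PSF."""
    psf_mirror = psf[::-1, ::-1]
    estimate = blurred.astype(float).copy()
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# A one-pass neural-net "deconvolution", by contrast, is a single learned mapping
# with no explicit PSF and no iterations ('model' here is hypothetical):
#   restored = model(blurred)
```

The point is not the specific algorithm, but that the classical route is fully specified by a PSF and an iteration count, whereas the one-pass route has neither.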
It may be entirely by accident, but the description on BXT's webpage to me definitely reads like a "this-is-totally-not-Topaz-AI" piece.
Particularly the - paraphrasing my impression - "deconvolution is an ill-posed problem" therefore "anything goes" allusion is worrying (and would be quite wrong if meant that way). The claim that it "understands" what astronomical structures actually look like at finer scales than can be resolved using amateur equipment is setting off big bat-symbol-in-the-sky alarm bells for me. There is nothing special or different about structures that cannot be resolved by amateur equipment. It would be a monumental astronomical discovery if there were! There is no magical threshold beyond which things need to be treated differently.
The Centaurus A image on that page definitely displays the "this-is-what-you-want-to-see, right?" magically thinning structures (which cannot be corroborated in closer-up images), so reminiscent of the Topaz "dog hair" filter. The choice of a galaxy to demonstrate the module's prowess is somewhat unfortunate, as galaxies are objects where detail is much easier to resolve further in higher-quality images (rather than, say, a nebula in our own galaxy, which becomes more diffuse at larger scales).
E.g. structures that should "fall apart" into smaller structures of diffuse dust when better resolved instead become solid, well-defined, thinning, interconnected strands with BXT.
The non-resolving of the stars (which by and large remain fuzzy), versus other parts of the image becoming better resolved than those stars (which are supposed to be the smallest resolvable detail), is also quite jarring and atypical for deconvolution.
I have three CN friends I regularly PM who are trying BXT out right now. So far I've just seen a couple of crops of zoomed side-by-sides, one as a before/after and the other, I believe, comparing PI's own deconvolution (i.e. don't use this) to BXT. There was improvement, no doubt. I thought it worked better on the smallest stars, though all were resolved inward some, even the big ones. I'll have to look again - yes, I didn't check to see if there was brightening along with the "shrinking." I also haven't yet seen a result on non-stellar detail.
We have not yet swapped any of these files for comparison versus ST. And yes...I mentioned that we should do so.
That'd be interesting to see.
With PI being PI, and deconvolution, as well as BXT, being operations performed on linear data, I'm not sure if they have a screen stretch preview to see what they are doing? Regardless, after running BXT they will still be using PI for their global stretch - maybe with masks or possibly GHS - and thus they are likely to have what we would probably call FilmDev stars for the bigger ones.
There is indeed a non-permanent "screen stretch" you can apply on top while you're processing in the linear domain. For comparison purposes, you can keep this stretch constant and also apply it permanently if you wish.
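Conceptually - and this is just an illustration of the idea, not how any particular program implements its screen transfer function - a display-only stretch with fixed parameters, applied identically to the before and after data, is what makes such linear-domain comparisons fair:

```python
import numpy as np

def screen_stretch(linear, black_point=0.001, softness=500.0):
    """Display-only stretch; the linear data itself is left untouched."""
    x = np.clip(linear - black_point, 0.0, None)
    return np.arcsinh(softness * x) / np.arcsinh(softness * (1.0 - black_point))

# Same parameters applied to both images = a fair before/after comparison:
# before_view = screen_stretch(before_linear)
# after_view  = screen_stretch(after_linear)
```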
But I'm not sure there's necessarily a problem with AI in and of itself; rather, it's what it actually does that's important.
That's entirely my stance as well. I think AI is amazing and it will always be my first love (I studied it for a while in University), but I don't think people understand what is happening when they use it.
In AP, when AI is used in this unsophisticated way, you are no longer the sole contributor of the signal. It's that simple. Whether that's a problem (and to whom) is a different story. It's ethically no different to airbrushing. Airbrushing is not a problem in itself. But it is a problem if you are trying to deceive people.
Assuming you are engaging in documentary photography, I don't regard your image well if I detect its use. At the very least, I will think less of your skills as an astrophotographer, because you have proven to be a poor custodian of your signal. I may even think less of you as a person if I know that you fully understand what you did to your signal, did it anyway, and still claim it as a documentary photograph. Now you're actively trying to deceive me and trying to make yourself and your skills look better than they are.
It would be different if an AI were to establish the right PSF(s) to reverse-apply. No new signal or information would be added, and you could fully articulate why you transformed your signal in that way and how (e.g. using a generally accepted and mathematically reversible algorithm). The only thing the AI would do in that case is act as a tool to model the distortion. It would not engage in any interpretative black-box signal modification.
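As a sketch of that scenario - entirely hypothetical, no existing tool is claimed to work this way, and psf_estimator is a made-up stand-in for such an AI - the network would only output PSF parameters, and the reverse-application would be done by a conventional, documented algorithm like the Richardson-Lucy loop sketched earlier:

```python
import numpy as np

def gaussian_psf(fwhm, size=25):
    """Build an explicit, inspectable PSF from an estimated FWHM (in pixels)."""
    sigma = fwhm / 2.3548  # FWHM -> sigma for a Gaussian profile
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# fwhm = psf_estimator(blurred)             # hypothetical AI: outputs parameters only
# psf = gaussian_psf(fwhm)                  # the model of the distortion, out in the open
# restored = richardson_lucy(blurred, psf)  # classical, documented reverse-application
```

Here the AI's entire contribution is the model of the distortion; every pixel of the result is still accounted for by the standard algorithm.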