BXT begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
[use Austrian accent]
No snark; it's a real thing, and the term has gained a lot of popularity as a way to call out this sort of AI behavior. In fact, it has now become so widespread that some people in the field wish to rein in its use.

Mike in Rancho wrote: ↑Mon Jan 09, 2023 5:24 pm
Wow.
I've seen some headlines about ChatGPT but haven't read through them to see what it's all about.
Interesting answers. I had the feeling "neural hallucination" was a bit of Ivo snark. Is that term really "a thing"?
When an X-ray or MRI is undertaken, there is nowadays a likely chance that some kind of AI will be used to clean up the images on a reconstruction basis or otherwise analyze the imagery. Researchers caution that this can introduce AI hallucinations into the mix: “The potential lack of generalization of deep learning-based reconstruction methods as well as their innate unstable nature may cause false structures to appear in the reconstructed image that are absent in the object being imaged. These false structures may arise due to the reconstruction method incorrectly estimating parts of the object that either did not contribute to the observed measurement data or cannot be recovered in a stable manner, a phenomenon that can be termed as hallucination” (as stated in “On Hallucinations in Tomographic Image Reconstruction” by co-authors Sayantan Bhadra, Varun A. Kelkar, Frank J. Brooks, and Mark A. Anastasio, IEEE Transactions on Medical Imaging, November 2021).

Ring a bell?
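To make that "false structures" failure mode concrete, here is a toy numpy sketch (my own illustration, not the paper's method). The forward model keeps only the lowest Fourier coefficients of a 1-D "object"; everything else lives in the null space of the measurement, so a reconstructor can put arbitrary structure there without ever contradicting the data.

```python
# Toy illustration of "hallucination" in an ill-posed reconstruction.
# This is a hypothetical sketch, not the method from the cited paper.
import numpy as np

n = 256
x_true = np.zeros(n)
x_true[100:110] = 1.0            # the real object: a single flat plateau

keep = 16                        # the measurement: 16 low frequencies only
measured = np.fft.rfft(x_true)[:keep]

# A reconstruction that is perfectly consistent with the measurements...
X_rec = np.zeros(n // 2 + 1, dtype=complex)
X_rec[:keep] = measured
# ...plus an invented high-frequency component, standing in for whatever
# a learned prior might "fill in". No measurement can contradict it.
X_rec[40] = 30.0
x_rec = np.fft.irfft(X_rec, n)

# Data consistency holds exactly (up to float error)...
assert np.allclose(np.fft.rfft(x_rec)[:keep], measured)
# ...yet the reconstruction contains structure absent from the object:
print("max deviation from the true object:", np.abs(x_rec - x_true).max())
```

The paper's point is the general version of this: a deep network's fill-in is a learned guess, and when the guess is wrong, you get structure that was never actually measured.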
I understand I may have appeared that way (the thing is *really* good!), but they are real, un-trained (by me, that is) answers to unloaded questions. You should be able to get substantially similar answers by posing the same questions (some noise is injected on purpose to make the model vary its answers slightly and appear more natural, so the wording may differ a little).

Thus, as posed, the question could be a bit loaded. And could it have any pre-training from prior discussions with, oh, maybe Ivo, or did it spit this out entirely on its own? The topic here seems a bit arcane for it to be so authoritative.
The devious thing is that not all intentional hallucination is dream-like; witness inpainting, face generation, the amazing Midjourney and Stable Diffusion projects, and indeed the <x>XTerminator and Topaz AI suites. The resulting hallucinations can be quite plausible indeed, but they nevertheless remain hallucinations, whether in whole or in part.

Mike in Rancho wrote: ↑Tue Jan 10, 2023 4:28 am
I changed that to AI hallucination and came up with more relevant results. That in itself seems to be split into two - intentional (AI "dream-like" results), and of course the unintentional.
Interestingly, ChatGPT itself sometimes denies the term exists:

"The term 'neural hallucination algorithm' is not a widely used or well-established term in the field of AI or machine learning."

...and at other times explains in detail:

"A neural hallucination algorithm is a type of machine learning algorithm that is used to generate new data based on patterns learned from a training set of data. These algorithms are typically based on artificial neural networks, which are modeled after the structure and function of the human brain. They are trained on a dataset and can generate new, similar data that is often difficult to distinguish from real data. Additionally, a neural hallucination algorithm is a subtype of generative models, which are focused on mimicking the data generation process. They're able to create new data instances that are similar to the training set."

I guess the terminology is still very fluid in this fast-moving field, and GPT-3's dataset is a snapshot of the Internet figuring it all out.
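For what it's worth, here is a minimal sketch of the "generative model" idea ChatGPT describes: learn the statistics of a training set, then sample brand-new instances from the learned distribution. Real systems use deep networks; a simple Gaussian fit (entirely my own toy choice) keeps the principle visible.

```python
# Hypothetical, minimal generative model: fit a distribution to training
# data, then sample new instances from it. Not any real product's method.
import numpy as np

rng = np.random.default_rng(42)

# "Training set": 500 two-dimensional samples from some unknown process.
train = rng.multivariate_normal(mean=[2.0, -1.0],
                                cov=[[1.0, 0.6], [0.6, 0.5]],
                                size=500)

# "Training": estimate the distribution's parameters from the data.
mu = train.mean(axis=0)
sigma = np.cov(train, rowvar=False)

# "Generation": draw new instances that resemble the training set but
# correspond to no actual observation -- plausible, yet invented.
generated = rng.multivariate_normal(mu, sigma, size=5)
print(generated)
```

The generated points are "difficult to distinguish from real data" in exactly the sense the quoted answer describes; they are also, in the same sense, hallucinated.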
It's a great example of wishful-thinking "astrophotography" and how neural hallucination algorithms play into it; in this image, everything that BXT (randomly) latches on to is deemed "a star" and is replaced by a neat point light (even galaxies, which, of course, have entirely different shapes).

dx_ron wrote: ↑Thu Feb 16, 2023 11:34 pm
Interesting example posted on CN of what BX can "do" with an image: https://www.cloudynights.com/topic/8642 ... p=12501590
I'm having a hard time seeing that result as "an approximation of deconvolution", but what I know about image processing could fit on the head of a pin - so - maybe?
That's exactly the thing; with real deconvolution, the math treats all data/pixels equally. A distant galaxy's pixels will be spread over a larger area (because it is a bigger, more diffuse object), but its constituent detail will be blurred, and recovered, exactly the same way as stars (point lights). If noise is too high or the PSF was chosen incorrectly, any artefacts created by an R&L (Richardson-Lucy) decon routine will affect everything equally and be easy to detect as aberrant.

Now, I know we've discussed how additional iterations in real deconvolution/SVD will start bringing the stars more inward towards a point, and also thus rounder if shapes were a bit flawed.
There must be something about the nature of the math and the passes made by actual deconvolution which prevents point-sourcing a small galaxy? Do those pixels react differently to that reversal, even if tiny in size? Whereas this, perhaps, is assuming "star" and treating it as such, when it isn't.
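To see the "all pixels are treated equally" point in code, here is a minimal Richardson-Lucy sketch (a toy with a known Gaussian PSF; not BXT's or any product's actual routine). The same multiplicative update is applied to every pixel; there is no classification step that could "assume star".

```python
# Minimal Richardson-Lucy deconvolution (toy example, known Gaussian PSF).
# The update rule is identical for every pixel; nothing here knows or
# cares whether a blob is "a star" or "a galaxy".
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())  # flat positive start
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Scene: one point source ("star") and one extended disc ("tiny galaxy").
scene = np.zeros((64, 64))
scene[16, 16] = 100.0                               # star: a single pixel
yy, xx = np.mgrid[0:64, 0:64]
scene[(yy - 44) ** 2 + (xx - 44) ** 2 <= 25] = 1.0  # galaxy: 5-px-radius disc

# Blur both with the same PSF (Gaussian, sigma = 2 px).
g = np.exp(-((np.arange(15) - 7) ** 2) / (2 * 2.0 ** 2))
psf = np.outer(g, g)
psf /= psf.sum()
observed = np.clip(fftconvolve(scene, psf, mode="same"), 0, None)

restored = richardson_lucy(observed, psf)
# The star contracts back toward a point, because the data say it was a
# point. The disc stays a disc: its true light distribution is genuinely
# wider than the PSF, so the measurements do not support collapsing it.
print("star peak  :", restored[14:19, 14:19].max())
print("disc spread:", (restored[38:51, 38:51] > 0.3).sum(), "pixels > 0.3")
```

In other words, classic deconvolution can only undo the blur the PSF accounts for; an extended source deblurs back to an extended source. It has no "recognize and replace" step that could mistake a tiny galaxy for a star, which is exactly where a neural approach can go wrong.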