Personally, I hope AI doesn't ever get to the point where you can use its output without checking it. Hallucinations are one easy way I can tell when someone is regurgitating AI output.
allears•1h ago
So-called "hallucinations" aren't a threat to AI, and they're not hallucinations. That's a marketing term meant to anthropomorphize a statistical machine. They're a mathematical certainty. AI software is simply picking the most statistically likely words and phrases as a response to your prompt, compared to all the other training data it's been fed. There's no agency, no concept of "truth" or "facts."
leakycap•1h ago