Most people frame LLM errors as “hallucinations.” But that metaphor misses the point. Models don’t see; they predict. The real issue is fidelity decay: words drift, nuance flattens, context erodes, and outputs stay factually accurate but become hollow. This paper argues we should measure meaning collapse, not just factual mistakes.
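The post doesn't spell out a metric, but here is a minimal sketch of one way fidelity decay could be operationalized: embed the original text and each successive rewrite, then track how similarity to the original falls. The `embed()` below is a deliberately crude bag-of-words stand-in, an assumption for illustration rather than anything the paper proposes; a real pipeline would swap in an actual sentence-embedding model.

```python
# Toy sketch of measuring "fidelity decay": compare each successive rewrite
# of a text against the ORIGINAL and watch cosine similarity drop.
# embed() is a bag-of-words stand-in, NOT a real sentence encoder.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: lowercase bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fidelity_curve(original: str, rewrites: list[str]) -> list[float]:
    # Score each rewrite against the original, not its immediate
    # predecessor, so gradual drift isn't hidden by small per-step changes.
    ref = embed(original)
    return [cosine(ref, embed(r)) for r in rewrites]

if __name__ == "__main__":
    original = "The study found a modest but statistically significant effect."
    rewrites = [
        "The study found a modest, statistically significant effect.",
        "The study found a significant effect.",
        "Research shows the effect is significant.",
        "Experts agree the effect matters.",
    ]
    for step, score in enumerate(fidelity_curve(original, rewrites), 1):
        print(f"rewrite {step}: similarity to original = {score:.2f}")
```

The one design choice worth noting: similarity is always measured back to the original rather than to the previous rewrite, since step-to-step comparisons can look fine even while cumulative drift erases the point, which is exactly the "accurate but hollow" failure the post describes.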