Semantic drift: when AI gets the facts right but loses the meaning
realitydrift•5mo ago
Most LLM benchmarks measure accuracy and coherence, but not whether the intended meaning survives. I’ve been calling this gap fidelity: the preservation of purpose and nuance. Has anyone else seen drift-like effects in recursive generations or eval setups?
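For concreteness, here is a minimal sketch of the kind of probe I mean (assuming sentence-transformers for embeddings; `paraphrase` is a stub for whatever model you want to test, not a real API):

```python
# Minimal drift probe: re-paraphrase a sentence N times and track how far
# each generation's embedding moves from the original anchor.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def paraphrase(text: str) -> str:
    # Stand-in for an LLM call, e.g. "rewrite this in your own words".
    raise NotImplementedError("plug in the model under test here")

def drift_curve(seed: str, steps: int = 5) -> list[float]:
    anchor = model.encode(seed, convert_to_tensor=True)
    current, scores = seed, []
    for _ in range(steps):
        current = paraphrase(current)
        emb = model.encode(current, convert_to_tensor=True)
        scores.append(util.cos_sim(anchor, emb).item())
    return scores  # steady decay across steps suggests cumulative drift
```

Embedding similarity is a blunt instrument here, which is sort of the point: it tracks surface meaning, not whether the purpose of the sentence survived.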
Comments
docsorb•5mo ago
You're touching on a very nuanced point: identifying the true "intent" behind the words. Do you think these models should be trained differently to correctly map the potential intent versus the literal meaning?
Like your example: "the meeting is at 3pm" followed by _we got enough time_ intends one thing, while "the meeting is at 3pm" followed by _where the hell are you?_ intends something else entirely. It is not obvious how to recover that intent without a lot of context (time, environment, emotion, etc.).
realitydrift•5mo ago
Exactly. That’s the hard part. Meaning is often carried less by the literal words and more by context (time, environment, emotion, shared knowledge). My point with fidelity is that current benchmarks don’t check whether outputs preserve that function in context. An AI can echo surface words but miss the intended role: coordination, reassurance, accountability. And that’s where drift shows up.
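As a sketch of what a fidelity check could look like (not an established eval; `classify_function` is a hypothetical LLM-as-judge call), you'd compare the pragmatic function of input and output rather than their surface overlap:

```python
# Sketch of a fidelity check: does the output serve the same pragmatic
# function as the source, given the context? Coarse labels only.
FUNCTIONS = ("coordination", "reassurance", "accountability", "information")

def classify_function(text: str, context: str) -> str:
    # Hypothetical judge call, e.g. prompting a model: "Given this context,
    # what is the speaker trying to accomplish? Answer with one of FUNCTIONS."
    raise NotImplementedError("plug in a judge model here")

def fidelity_preserved(source: str, output: str, context: str) -> bool:
    # Pass if the intended role survives, even when the wording changes.
    return classify_function(source, context) == classify_function(output, context)
```

Crude, but it at least asks the right question: same words can pass a similarity metric while the function has drifted completely.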