Axiom 3 (the absence of a stable global reference frame) seems the most practically actionable. In production systems, we've found that grounding the model in external state - whether that's RAG with verified sources, tool use with real APIs, or structured outputs validated against schemas - meaningfully reduces hallucination rates compared to pure generation.
This suggests the "drift" you describe isn't purely geometric but can be partially constrained by anchoring to external reference points. Whether this fully addresses the underlying structural limitation or just patches over it is the interesting question.
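To make the last of those anchors concrete, here's roughly the pattern we mean by "structured outputs validated against schemas". This is a minimal sketch, not our production code: `generate()` stands in for whatever model client you use (hypothetical), and the schema itself is illustrative.

```python
# Sketch of "structured outputs validated against schemas".
# generate() is a stand-in for the model call (assumed signature: prompt -> str).
import json
from jsonschema import ValidationError, validate

ANSWER_SCHEMA = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "source_ids": {"type": "array", "items": {"type": "string"}, "minItems": 1},
    },
    "required": ["answer", "source_ids"],
    "additionalProperties": False,
}

def grounded_answer(prompt: str, generate, max_retries: int = 3) -> dict:
    """Reject any generation that isn't structurally valid and source-cited."""
    for _ in range(max_retries):
        raw = generate(prompt)  # model call, signature assumed
        try:
            obj = json.loads(raw)
            validate(instance=obj, schema=ANSWER_SCHEMA)
            return obj  # anchored: parses, conforms, and cites at least one source
        except (json.JSONDecodeError, ValidationError):
            continue  # off the anchor: re-sample
    raise RuntimeError("no schema-valid output within retry budget")
```

The schema can't check factual truth, but it pins the output to verifiable fields, which is what we mean by anchoring to external reference points.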
The counterargument to "structurally unavoidable": we've seen hallucination rates drop substantially between model generations (GPT-3 to GPT-4, Claude 2 to Claude 3, etc.) without fundamental architectural changes. That could mean either (a) the problem isn't structural and can be trained away, or (b) the improvements are approaching an asymptotic limit we just haven't hit yet.
Would be curious if your framework predicts specific failure modes we should expect to persist regardless of scale or training improvements.
On Axiom 3: you’re right that grounding (RAG, APIs, schema-validated outputs) functions as an external anchor. In the RCC framing, these are not global reference frames but local stabilizers inserted into the manifold. They reduce drift in the anchored subspace, but they don’t give the system visibility into the shape of the container itself.
Put differently: grounding constrains where the model can step, but it doesn’t reveal the map it is stepping in.
This is why drift shows up again between anchors, or when the external structure is sparse, contradictory, or time-varying.
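A toy sketch of that intuition (not from any RCC formalism, just an illustration): treat generation as a noisy walk away from a reference state, with anchors that pull the state back only partially and only at the steps where grounding is available.

```python
# Toy illustration only: drift as a noisy walk, grounding as periodic anchors.
# Anchors partially correct the deviation, but between anchors it grows again.
import random

def simulate(steps=60, anchor_every=15, noise=1.0, pull=0.8, seed=0):
    random.seed(seed)
    deviation = 0.0
    trace = []
    for t in range(1, steps + 1):
        deviation += random.gauss(0.0, noise)  # local, unconstrained update
        if t % anchor_every == 0:
            deviation *= (1.0 - pull)          # anchor: partial correction only
        trace.append(abs(deviation))
    return trace

trace = simulate()
print("deviation just after anchors:", [round(trace[i], 2) for i in (14, 29, 44, 59)])
print("peak deviation between anchors:", round(max(trace), 2))
```

The anchored steps stay close to the reference; the stretches between them are where the drift reappears.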
On model improvements (GPT-3 → GPT-4, Claude 2 → 3): RCC doesn't claim that hallucination rates are fixed constants, only that there is a geometric ceiling beyond which improvements cannot generalize globally. Larger models can push the boundary outward, but they cannot remove the boundary, because they still satisfy Axioms 1–4:

• partial self-visibility
• partial container visibility
• absence of a global reference frame
• forced local optimization
Unless an architecture violates one of these axioms, the constraint holds.
What RCC predicts will persist regardless of scale:
1. Cross-frame inconsistency: even with strong grounding, coherence will fail when generation spans contexts that are not simultaneously visible.
2. Long-horizon decay: chain-of-thought reliability degrades after a fixed window because the model cannot maintain a stable global state across recursive updates (see the sketch below this list).
3. Self-repair failure: corrections do not propagate globally; the model "fixes" a region of its inference surface, but the global manifold remains unknown, so inconsistencies re-emerge.
These aren’t artifacts of current models; they fall out of incomplete observability.
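On (2), here is the back-of-the-envelope version of why the decay looks structural rather than like a tuning problem. It assumes independent per-step reliability purely for illustration; real chains won't satisfy that exactly.

```python
# Illustrative arithmetic only (assumes independent per-step reliability):
# even a high per-step success rate compounds into low end-to-end
# reliability over long horizons.
def chain_reliability(per_step: float, steps: int) -> float:
    return per_step ** steps

for steps in (10, 50, 200):
    print(steps, round(chain_reliability(0.99, steps), 3))
# 10 -> 0.904, 50 -> 0.605, 200 -> 0.134
```

Scale and training raise the per-step number; they don't change the exponent, which is the "geometric ceiling" point above.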
Grounding, tools, and scale are all powerful ways to shift the failure point — but in the RCC view they can’t eliminate the underlying geometry that produces the failures.
Happy to go deeper if you’re curious which architectural modifications would actually violate the axioms (and thus escape the constraint). That’s where things get fun.
noncentral•1h ago
The claim is simple: if a system lacks (1) full introspective access, (2) visibility into its container manifold, and (3) a stable global reference frame, then hallucination and drift become mathematically natural outcomes.
I’m posting this to ask a narrow question: if these axioms are wrong, which one — and why?
Not trying to make a grand prediction; just testing whether a boundary-theoretic framing is useful to ML researchers.
verdverm•1h ago