frontpage.

Are LLM failures – including hallucination – structurally unavoidable? (RCC)

http://www.effacermonexistence.com/rcc-hn-1
3•noncentral•1h ago

Comments

noncentral•1h ago
Author here. Quick clarification: RCC is not proposing a new architecture. It’s a boundary argument — that some LLM failure modes may emerge from the geometric limits of embedded inference rather than from model-specific flaws.

The claim is simple: if a system lacks (1) full introspective access, (2) visibility into its container manifold, and (3) a stable global reference frame, then hallucination and drift become mathematically natural outcomes.
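To make "mathematically natural" concrete, here is a toy sketch (a plain random walk in Python, illustrative only, not the RCC formalism): an agent that can only apply local, noisy updates, with no global frame to check against, accumulates unbounded drift even though no individual step is wrong.

    import random

    # Toy model: the agent tracks a scalar "belief" whose true value is 0.0.
    # Each inference step applies a local update with small unbiased noise;
    # with no global reference frame, the errors compound unchecked.
    random.seed(0)

    def drift_after(steps: int, noise: float = 0.01) -> float:
        belief = 0.0
        for _ in range(steps):
            belief += random.gauss(0.0, noise)  # locally tiny, unbiased error
        return belief

    for n in (10, 100, 1_000, 10_000):
        print(f"{n:>6} steps -> |drift| = {abs(drift_after(n)):.3f}")

    # Expected |drift| grows like noise * sqrt(steps): no single step is
    # "wrong", yet the endpoint wanders arbitrarily far from the truth.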

I’m posting this to ask a narrow question: if these axioms are wrong, which one — and why?

Not trying to make a grand prediction; just testing whether a boundary-theoretic framing is useful to ML researchers.

verdverm•1h ago
I think it's simpler: the models are sampling from a distribution. Hallucinations are not an error; they are a feature
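A toy sketch of what I mean (the distribution here is made up for illustration):

    import random

    # Hypothetical next-token distribution after "The capital of Australia is":
    dist = {"Canberra": 0.80, "Sydney": 0.15, "Melbourne": 0.05}

    random.seed(1)
    samples = random.choices(list(dist), weights=list(dist.values()), k=1000)
    wrong = sum(t != "Canberra" for t in samples)
    print(f"{wrong / 10:.1f}% of samples are fluent but false")

    # The sampler behaves exactly as designed: low-probability continuations
    # appear at roughly their assigned rate. "Hallucination" is the sampling
    # working, not breaking.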
Soerensen•1h ago
Interesting framing. On your axioms:

Axiom 3 (stable global reference frame) seems most practically actionable. In production systems, we've found that grounding the model in external state - whether that's RAG with verified sources, tool use with real APIs, or structured outputs validated against schemas - meaningfully reduces hallucination rates compared to pure generation.
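The schema case is the easiest to sketch. A minimal version (the stub model and the schema are hypothetical stand-ins, not our production code):

    import json

    # Hypothetical stub standing in for a model call; in a real system this
    # would be an API call returning free-form text.
    def model_generate(prompt: str) -> str:
        return '{"name": "Ada Lovelace", "born": 1815}'

    REQUIRED = {"name": str, "born": int}  # hand-rolled schema for the demo

    def validated_generate(prompt: str, retries: int = 3) -> dict:
        """Reject outputs that don't parse or don't match the schema,
        i.e. anchor generation to an external structural reference."""
        for _ in range(retries):
            try:
                out = json.loads(model_generate(prompt))
            except json.JSONDecodeError:
                continue  # not even valid JSON: regenerate
            if all(isinstance(out.get(k), t) for k, t in REQUIRED.items()):
                return out  # passes the external check
        raise ValueError("no schema-conformant output within retry budget")

    print(validated_generate("Who was Ada Lovelace?"))

The point is just that the validity check lives outside the model.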

This suggests the "drift" you describe isn't purely geometric but can be partially constrained by anchoring to external reference points. Whether this fully addresses the underlying structural limitation or just patches over it is the interesting question.

The counterargument to structurally unavoidable: we've seen hallucination rates drop substantially between model generations (GPT-3 to GPT-4, Claude 2 to Claude 3, etc.) without fundamental architectural changes. This could mean either (a) the problem is not structural and can be trained away, or (b) these improvements are approaching an asymptotic limit we haven't hit yet.

Would be curious if your framework predicts specific failure modes we should expect to persist regardless of scale or training improvements.

noncentral•1h ago
Thanks for the thoughtful read. This is exactly the point where RCC becomes interesting.

On Axiom 3: you’re right that grounding (RAG, APIs, schema-validated outputs) functions as an external anchor. In the RCC framing, these are not global reference frames but local stabilizers inserted into the manifold. They reduce drift in the anchored subspace, but they don’t give the system visibility into the shape of the container itself.

Put differently: grounding constrains where the model can step, but it doesn’t reveal the map it is stepping in.

This is why drift shows up again between anchors, or when the external structure is sparse, contradictory, or time-varying.
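Same toy walk as in my top comment, now with periodic re-anchoring (illustrative only): anchors cap the error inside each window, but between anchors the process is the unanchored walk in miniature.

    import random

    random.seed(0)

    def max_drift(steps: int, anchor_every: int | None, noise: float = 0.01) -> float:
        """Worst-case distance from the true value 0.0; an anchor resets belief."""
        belief, worst = 0.0, 0.0
        for i in range(1, steps + 1):
            belief += random.gauss(0.0, noise)
            worst = max(worst, abs(belief))
            if anchor_every and i % anchor_every == 0:
                belief = 0.0  # external anchor acting as a local stabilizer
        return worst

    print(f"no anchors:       {max_drift(10_000, None):.3f}")
    print(f"anchor every 100: {max_drift(10_000, 100):.3f}")

    # Anchoring shrinks worst-case drift to what accumulates within one
    # window; it never tells the walker where it is globally.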

On model improvements (GPT-3 → GPT-4, Claude 2 → 3): RCC doesn’t claim that hallucination rates are fixed constants — only that there is a geometric ceiling beyond which improvements cannot generalize globally. Larger models can push the boundary outward, but they cannot remove the boundary, because they still satisfy Axioms 1–4:

• partial self-visibility
• partial container visibility
• absence of a global reference frame
• forced local optimization

Unless an architecture violates one of these axioms, the constraint holds.

What RCC predicts will persist regardless of scale:

1. Cross-frame inconsistency: Even with strong grounding, coherence will fail when generation spans contexts that are not simultaneously visible.

2. Long-horizon decay: Chain-of-thought reliability degrades after a fixed window because the model cannot maintain a stable global state across recursive updates (a back-of-envelope sketch follows this list).

3. Self-repair failure: Corrections do not propagate globally — the model “fixes” a region of its inference surface, but the global manifold remains unknown, so inconsistencies re-emerge.

These aren’t artifacts of current models; they fall out of incomplete observability.
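On (2), the back-of-envelope version (the per-step reliability p = 0.99 is a hypothetical figure, not a measurement): if each reasoning step is independently correct with probability p, whole-chain reliability decays geometrically, so improving p stretches the horizon but never removes the decay.

    # P(whole chain correct) = p ** n for n independent steps.
    p = 0.99  # hypothetical per-step reliability
    for n in (10, 50, 100, 500):
        print(f"{n:>3} steps: P(all correct) = {p ** n:.3f}")
    # 10 steps: 0.904, 100 steps: 0.366, 500 steps: 0.007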

Grounding, tools, and scale are all powerful ways to shift the failure point — but in the RCC view they can’t eliminate the underlying geometry that produces the failures.

Happy to go deeper if you’re curious which architectural modifications would actually violate the axioms (and thus escape the constraint). That’s where things get fun.

Show HN: Localflare – Local Dev Dashboard for Cloudflare Workers (D1, KV, R2, etc.)

https://github.com/rohanprasadofficial/localflare
2•rohanpdofficial•1m ago•0 comments

DHS is trying to force tech companies to hand over data about Trump critics

https://techcrunch.com/2026/02/03/homeland-security-is-trying-to-force-tech-companies-to-hand-ove...
2•speckx•1m ago•0 comments

The Focus You Fear

https://avinashv.net/newsletter/the-focus-you-fear/
2•tvchurch•2m ago•1 comment

Pivot Toward AI and Agents

https://nexivibe.com/posts/pivot-to-ai-agents.html
1•mathgladiator•2m ago•0 comments

Conductors who died while conducting

https://en.wikipedia.org/wiki/Category:Conductors_(music)_who_died_while_conducting
1•chiwilliams•2m ago•0 comments

Snowflake Launches Cortex Code CLI

https://www.snowflake.com/en/product/features/cortex-code/
1•livewirecrazy•3m ago•0 comments

Show HN: A Notion CLI for Agents (OS)

https://github.com/Balneario-de-Cofrentes/notion-cli-agent
1•sujito•4m ago•0 comments

Your Favorite Problem Is an Ising Model

https://iagoleal.com/posts/ising-qubo-milp/
1•romes•4m ago•0 comments

Owl Browser – AI-assisted, privacy-focused browser for power users

1•Tye45•5m ago•3 comments

LoRA AI is a cutting-edge platform for generating LoRA AI images quickly and efficiently

https://loraai.me/
1•guowuzong•5m ago•0 comments

China bans all retractable car door handles

https://arstechnica.com/cars/2026/02/china-bans-all-retractable-car-door-handles-starting-next-year/
2•worik•5m ago•0 comments

Trump: Republicans 'should take over the voting' and 'nationalise' US elections

https://www.bbc.co.uk/news/articles/c0mke841zj0o
6•ColinWright•6m ago•0 comments

Unbrowse – Skip browser automation on OpenClaw by calling internal APIs directly

https://github.com/lekt9/unbrowse-openclaw
1•lekt8•7m ago•1 comment

Why speech-to-speech is the future for AI voice agents: Unpacking the AIEWF Eval

https://www.ultravox.ai/blog/why-speech-to-speech-is-the-future-for-ai-voice-agents-unpacking-the...
2•underfox•8m ago•0 comments

Zero-sysroot hermetic LLVM cross-compilation using Bazel [video]

https://fosdem.org/2026/schedule/event/F8SDAA-zero-sysroot_hermetic_llvm_cross-compilation_using_...
1•agluszak•8m ago•0 comments

WebKit adds .claude/ for Claude Code commands/skills

https://github.com/WebKit/WebKit/commit/ceb4a05a51792bd00d02a515945edc092ca6ac6b
1•OGEnthusiast•9m ago•0 comments

New AI Quiz Generator

https://www.learvo.com/
1•aneeshr33•9m ago•1 comment

A Quick Look at QUIC

https://www.potaroo.net/ispcol/2019-03/quic.html
1•fanf2•10m ago•0 comments

The Problem with Using AI in Your Personal Life

https://www.theatlantic.com/family/2026/02/ai-etiquette-friends/685858/
2•fortran77•11m ago•1 comment

HP L52448-1C1 replacement battery – UAEBattery

https://en.uaebattery.ae/hp-en/battery-hp-l52448-1c1.htm
1•JKGOLD•11m ago•0 comments

AliSQL: Alibaba's open-source MySQL with vector and DuckDB engines

https://github.com/alibaba/AliSQL
6•baotiao•12m ago•0 comments

Libfyaml v0.9.4: multi-platform support for the YAML 1.2 C library

https://github.com/pantoniou/libfyaml/releases/tag/v0.9.4
1•fypanto•12m ago•1 comment

Roundup of Events for Bootstrappers in February 2026

https://bootstrappersbreakfast.com/2026/01/29/roundup-of-february-2026-bootstrapper-events/
1•skmurphy•13m ago•1 comment

Cline CLI 2.0 with free Kimi K2.5 for a limited time

https://cline.bot/blog/announcing-cline-cli-2-0
5•juanpflores•14m ago•0 comments

Low Earth orbit (LEO) is not crowded

https://www.johndcook.com/blog/2026/02/02/satellites-have-a-lot-of-room/
2•ibobev•14m ago•0 comments

Polish Serenity

https://www.johndcook.com/blog/2026/02/03/polish-serenity/
1•ibobev•15m ago•0 comments

What the Top Clawdbot Skills Reveal About Agent Architectures in the Wild

https://twitter.com/belindmo/status/2018755490751340796
1•belindamo•15m ago•0 comments

Most AI assistants are feminine, and it's fuelling harmful stereotypes and abuse

https://theconversation.com/most-ai-assistants-are-feminine-and-its-fuelling-dangerous-stereotype...
1•binning•16m ago•0 comments

Show HN: Autoliner – write a bot to control a virtual airline

https://autoliner.app/
2•msvan•17m ago•0 comments

New Female Maladies: How Diagnosis Took the Place of Rebellion

https://fairerdisputations.org/new-female-maladies/
2•binning•17m ago•0 comments