frontpage.

RCC: Why LLMs Still Hallucinate Even at Frontier Scale (Axioms Included)

http://www.effacermonexistence.com/rcc-hn-1
1•noncentral•1h ago

Comments

noncentral•1h ago
I’ve been working on something I call Recursive Collapse Constraints, or RCC. It’s a boundary theory for any inference system that operates inside a larger manifold, including modern LLMs.

RCC is not an architecture and not a training trick. It’s a set of structural axioms that describe why hallucination, inference drift, and loss of long-horizon consistency appear even as models get larger.

Axiom 1: Partial Observability
An embedded system never has access to the full internal state of the manifold it operates in.

Axiom 2: Non-central Observer
The system cannot determine whether its viewpoint is central or peripheral.

Axiom 3: No Stable Global Reference Frame
Internal representations drift over time because there is no fixed frame that keeps them aligned.

Axiom 4: Irreversible Collapse
Each inference step collapses information in a way that cannot be fully reversed, pushing the system toward local rather than global consistency.
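A concrete, everyday instance of the Axiom 4 claim (my framing, not from the post) is greedy decoding: each step collapses a full probability distribution over tokens down to a single token, and distinct distributions collapse to the same token, so the step cannot be inverted.

```python
# Toy illustration (not RCC math): argmax decoding discards everything
# about the distribution except which token won, so the step is lossy.
def collapse_step(dist):
    """Pick the argmax token; the rest of `dist` is discarded."""
    return max(dist, key=dist.get)

# Two different "beliefs" collapse to the same output token, so the
# original distribution cannot be recovered from the result.
d1 = {"paris": 0.6, "lyon": 0.3, "nice": 0.1}
d2 = {"paris": 0.4, "lyon": 0.35, "nice": 0.25}
print(collapse_step(d1), collapse_step(d2))  # both collapse to "paris"
```

The same point holds for sampling: once a token is emitted, the alternatives and their weights are gone from the visible sequence.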

Several predictions follow from these axioms:

• Hallucination is structurally unavoidable, not just a training deficit.
• Planning failures after about 8 to 12 steps come directly from the collapse mechanism.
• RAG, tools, and schemas act as temporary external reference frames, but they do not eliminate the underlying boundary.
• Scaling helps, but only up to an asymptotic limit defined by RCC.
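One back-of-the-envelope way to see where an 8-to-12-step horizon could come from (illustrative numbers, not derived from RCC): if each step preserves only a fraction p of the task-relevant state, consistency compounds multiplicatively as p^t, and a per-step retention of 0.92 (an arbitrary assumption) drops below 50% at step 9.

```python
# Toy illustration (not RCC math): per-step retention p compounds
# multiplicatively, so long-horizon consistency decays geometrically.
# p = 0.92 is an arbitrary choice that lands the horizon in the 8-12 range.
def consistency(p: float, steps: int) -> float:
    """Probability that all `steps` inference steps stay consistent."""
    return p ** steps

p = 0.92
horizon = next(t for t in range(1, 100) if consistency(p, t) < 0.5)
print(horizon)  # -> 9: first step where consistency drops below 50%
```

This is only the geometric-decay story; the post's claim is stronger, tying the horizon to a specific collapse mechanism rather than to a constant per-step loss rate.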

I’m curious how people here interpret these constraints. Do they match what you see in real LLM systems? And do you think limits like this are fundamental, or just a temporary artifact of current model design?

Full text here: https://www.effacermonexistence.com/axioms

noncentral•1h ago
OP here. A few folks asked whether RCC has an actual mathematical backbone, so here's the compact version of the formal axioms. It's not meant to be a full derivation, just the minimal structure the argument depends on.

RCC can be written as a set of geometric / partial-information constraints:

A1. Internal State Inaccessibility
Let Ω denote the full internal state. The observer only ever sees a projection π(Ω), with π: Ω → Ω′ and |Ω′| < |Ω|. All inference happens over Ω′, not Ω.

A2. Container Opacity
Let M be the manifold containing the system. Visibility(M) = 0. Global properties like ∂M or curvature(M) are, by definition, not accessible from inside.

A3. No Global Reference Frame
There is no Γ such that Γ: Ω′ → globally consistent coordinates. Inference runs in local frames φᵢ, and the transition φᵢ → φⱼ is not invertible over long distances.

A4. Forced Local Optimization
At each step t, the system must produce x₍ₜ₊₁₎ = argmin L_local(φₜ, π(Ω)), even when ∂information/∂M = 0.

From these, the boundary condition is pretty direct:

No embedded inference system can maintain stable, non-drifting long-horizon reasoning when ∂Ω > 0, ∂M > 0, and no Γ exists.

This is the sense in which RCC treats hallucination, drift, and multi-step collapse as structural outcomes rather than training failures.
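A quick numerical sketch of how A3 and A4 can interact (my construction, not from the linked derivation): push a 2-D vector through a chain of slightly rotated local frames, quantizing after each transition as a stand-in for irreversible collapse. Because each quantization discards information, running the frame transitions backwards does not recover the original vector.

```python
import math

# Toy sketch (illustrative, not RCC's formalism): each step re-expresses
# a representation in a local frame rotated by eps, then snaps it to a
# grid (the irreversible "collapse"). eps and grid are arbitrary choices.
eps = 0.1          # per-step local-frame rotation, in radians
grid = 0.05        # quantization cell size
v = (1.0, 0.0)     # initial representation in frame phi_0

def rotate(p, a):
    x, y = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def collapse(p):
    # Snap both coordinates to the grid; distinct inputs map to one cell.
    return (round(p[0] / grid) * grid, round(p[1] / grid) * grid)

w = v
steps = 40
for _ in range(steps):
    w = collapse(rotate(w, eps))   # forward through local frames
for _ in range(steps):
    w = collapse(rotate(w, -eps))  # attempt to invert the transitions

err = math.dist(v, w)
print(f"round-trip error after {steps} steps each way: {err:.4f}")
```

Without the `collapse` call the rotations compose exactly and the round trip returns to `v` (up to float error); with it, the transitions are no longer invertible, which is the flavor of drift A3/A4 describe.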

If anyone wants the longer derivation or the empirical predictions (e.g., collapse curves tied to effective curvature), I’m happy to share.

Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models

https://arxiv.org/abs/2512.04124
1•toomuchtodo•50s ago•1 comments

"You're Not Going to Investigate a Federal Officer"

https://www.propublica.org/article/why-local-state-police-rarely-investigate-ice-cbp-fbi
1•hn_acker•3m ago•0 comments

Show HN: AgentVM – Safe, Sandboxed Linux VM for OpenClaw and AI Agents

https://agentvm.deepclause.ai/
1•phunterlau•4m ago•1 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•dxs•4m ago•0 comments

A small, shared skill library by builders, for builders. (human and agent)

https://github.com/PsiACE/skills
2•recrush•5m ago•0 comments

Show HN: PyDesigner – Visual GUI Builder for Tkinter, PyQt5, and CustomTkinter

https://pydesigner.qzz.io
1•harshitshah•5m ago•0 comments

Show HN: VoxPaste – Fast voice-to-text CLI for dictating to Claude/Cursor

https://github.com/felixbrock/voxpaste
1•brockmeier•7m ago•0 comments

Ultravox's Breakthrough Voice AI Benchmark [video]

https://www.youtube.com/watch?v=Z7l4w9RTQ_0
1•underfox•7m ago•0 comments

The Wyden Siren: Senator's Cryptic CIA Letter Pattern Has Never Been Wrong

https://www.techdirt.com/2026/02/05/the-wyden-siren-senators-cryptic-cia-letter-follows-a-pattern...
1•hn_acker•7m ago•2 comments

Drone-Enabled Non-Invasive Ultrasound Method for Rodent Deterrence

https://www.mdpi.com/2504-446X/10/2/84
1•PaulHoule•7m ago•0 comments

Show HN: LobsterLair – Managed hosting for OpenClaw AI agents

https://lobsterlair.xyz/
1•tobi_bsf•8m ago•0 comments

The list of best agentic browsers and extensions

1•pramatosh•9m ago•0 comments

Show HN: Why it's hard to know which deployment caused a production incident

https://github.com/BytePeaks/valiant
1•veinar_gh•10m ago•0 comments

Generative Modeling via Drifting

https://lambertae.github.io/projects/drifting/
1•E-Reverance•12m ago•0 comments

US job cuts surge to highest January total since 2009

https://www.ft.com/content/378fb2e9-6575-4da2-9769-13cf6bc499ad
2•hmmmmmmmmmmmmmm•13m ago•0 comments

Show HN: Haystack Review – Have a conversation with your pull request

https://tryhaystack.dev/
1•akshaysg•14m ago•0 comments

GPT-5.3-Codex

https://openai.com/index/introducing-gpt-5-3-codex/
68•meetpateltech•14m ago•14 comments

An Open Letter to Amazon / Audible

https://medium.com/@dbock/an-open-letter-to-amazon-audible-f75aad4c9d5e
1•loudouncodes•14m ago•1 comments

Eniac Day Celebration

https://www.helicoptermuseum.org/event-details/eniac-day-celebration
1•jwstarr•15m ago•0 comments

I work with AI for a living. This marketing ploy is repugnant

https://www.washingtonpost.com/opinions/2026/02/05/moltbook-anthropic-ai-consciousness-marketing/
3•martey•17m ago•2 comments

Show HN: Webhook Dispatcher – webhook infra in a Rust crate

https://github.com/webhook-labs/webhook-dispatcher
2•vinnu2608•18m ago•0 comments

Understanding the hazard potential of the Seattle fault zone

https://phys.org/news/2026-02-hazard-potential-seattle-fault-zone.html
1•samizdis•19m ago•0 comments

Dwarkesh: Collision and I interviewed Elon Musk

https://twitter.com/dwarkesh_sp/status/2019458363495456894
2•tosh•20m ago•0 comments

The Guinea Worm Principle

https://onyxclaw.substack.com/p/the-guinea-worm-principle
1•onyx_writes•21m ago•0 comments

Pinterest fired 2 engineers who built an internal layoff tracker

https://www.cbsnews.com/news/pinterest-ceo-fires-engineers-internal-tracker-layoffs/
1•achristmascarl•22m ago•1 comments

Unsealed Court Documents Show Teen Addiction Was Big Tech's "Top Priority"

https://techoversight.org/2026/01/25/top-report-mdl-jan-25/
7•Shamar•22m ago•1 comments

Show HN: KlongPy array language now supports autograd

http://www.klongpy.org/
1•eismcc•22m ago•0 comments

Meta thought we'd leave Reality, AI joined us instead

https://substack.productmind.co/p/meta-thought-wed-leave-reality-ai
1•okosisi•23m ago•0 comments

The Right to Be Forgotten

https://www.emptysetmag.com/articles/the-right-to-be-forgotten
2•speckx•26m ago•0 comments