
How the Tandy Color Computer Works [video]

https://www.youtube.com/watch?v=r2Tq8jdS6mY
1•amichail•1m ago•0 comments

Bash scripts are brittle – simple error handling in bash

https://notifox.com/blog/bash-error-handling
1•Meetvelde•4m ago•0 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
1•denysonique•5m ago•0 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
1•ray__•9m ago•0 comments

Antigen specificity of clonally enriched CD8 T cells in multiple sclerosis

https://www.nature.com/articles/s41590-025-02412-3
2•bookofjoe•9m ago•0 comments

Show HN: Vibe-coded game prototypes. Tell me which to work on

1•chux52•11m ago•0 comments

Do rich people live longer?

https://www.empirical.health/blog/rich-people-live-longer-hims-superbowl/
3•brandonb•11m ago•1 comment

r/IndieAppNews

https://old.reddit.com/r/IndieAppNews/
1•arthurofbabylon•18m ago•0 comments

Building "zero-gap" secrets for a UGC platform

1•Braden-dev•19m ago•0 comments

Show HN: NexVo – Verdicts for SaaS Ideas

https://nexvo.io/
1•Kasra0•23m ago•0 comments

Bypassing Kernel32.dll for Fun and Nonprofit

https://ziglang.org/devlog/2026/#2026-02-03
1•Retro_Dev•23m ago•0 comments

Show HN: GoHPTS tproxy with ARP spoofing and sniffing got a new update

https://github.com/shadowy-pycoder/go-http-proxy-to-socks
1•shadowy-pycoder•24m ago•0 comments

Installing Ollama and Gemma 3B on Linux

https://byandrev.dev/en/blog/ollama-in-linux
2•byandrev•24m ago•0 comments

Token Smuggling: How Non-Standard Encodings Bypass AI Security

https://instatunnel.my/blog/token-smuggling-bypassing-filters-with-non-standard-encodings
1•birdculture•26m ago•0 comments

Wearable textile-based phototherapy toward non-invasive hair loss treatment

https://www.nature.com/articles/s41467-025-68258-3
1•T-A•26m ago•0 comments

The 1 feature I'm really liking in the OpenAI Codex App

https://asadjb.com/blog/2026-02-06-the-codex-app-feature-i-really-like
1•asadjb•28m ago•0 comments

BlECSd – Terminal UI Library Built on an Entity Component System

https://github.com/Kadajett/blECSd
1•kadajett•32m ago•1 comment

Even in Her Victory Lap, Jessie Diggins Is Always Thinking About Others

https://www.si.com/winter-olympics/even-in-her-victory-lap-jessie-diggins-is-always-thinking-abou...
1•mmooss•34m ago•1 comment

Jobs getting better? "AI has the potential for a productivity uplift"

https://blogs.lse.ac.uk/businessreview/2026/02/03/are-jobs-getting-better-ai-has-the-potential-fo...
2•hhs•36m ago•0 comments

System time, clocks and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•todsacerdoti•38m ago•0 comments

Science of Sharp

https://scienceofsharp.com/
1•volcano_diver•38m ago•0 comments

Show HN: LLM-use – Open-source tool to route and orchestrate multi-LLM tasks

1•justvugg•38m ago•0 comments

Chandra-OCR

https://github.com/datalab-to/chandra
3•Curiositry•42m ago•1 comment

Show HN: Web Cache Using Origin Private File System

https://github.com/P0u4a/opfs-cache
1•p0u4a•42m ago•0 comments

The Malleability of Tools: AI Is Eating UI

https://www.cjroth.com/blog/the-malleability-of-tools
1•thoughtfulchris•43m ago•0 comments

1972: How to commit Computer Fraud – 70s style [video]

https://www.youtube.com/watch?v=RHo3d_4d2SM
1•1659447091•46m ago•0 comments

Show HN: I built a directory of $1M+ in free credits for startups

https://startupperks.directory
3•osmansiddique•47m ago•0 comments

Do You Feel the AGI Yet?

https://www.theatlantic.com/technology/2026/02/do-you-feel-agi-yet/685845/
2•fortran77•49m ago•1 comment

Epstein arranged a meeting between highest-level Russian spy and Peter Thiel

https://bsky.app/profile/robertscotthorton.bsky.social/post/3me7vgg5rms27
11•doener•55m ago•0 comments

Waymo Gets Grilled by Lawmakers over Chinese Cars and Overseas Workers

https://www.businessinsider.com/waymo-grilled-lawmakers-chinese-cars-overseas-workers-ev-autonomo...
3•doener•58m ago•0 comments

LLMs don't hallucinate – they hit a structural boundary (RCC theory)

http://www.effacermonexistence.com/rcc-hn-1-1
2•formerOpenAI•1h ago

Comments

formerOpenAI•1h ago
I’ve been investigating a pattern in LLM failures that didn’t make sense when explained through data quality or model scale.

Hallucinations, planning drift after ~8–12 steps, and long-chain self-consistency collapse all show the same signature: they behave like boundary effects, not “errors.”

This led me to formalize something I call RCC — Recursive Collapse Constraints. I didn’t “invent” it. The structure was already there in how embedded inference systems operate without access to their container or global frame. I simply articulated the geometry behind the failures.

Key idea: When an LLM predicts from a non-central observer position, its inference pushes against a boundary it cannot see. The further it moves away from its local frame, the more it collapses into hallucination-like drift. Architecture can reduce noise, but not remove the boundary.
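One mundane baseline worth ruling out before invoking a geometric boundary: if each reasoning step is independently "grounded" with some per-step probability p, full-chain consistency decays as p^n, which already produces collapse in the ~8–12 step range for plausible p. This toy model (my own illustration, not part of the RCC write-up — `p` and the step counts are assumed values) shows the decay:

```python
# Toy compounding-error model, NOT from the RCC write-up.
# Assumption: each reasoning step independently stays "grounded"
# with probability p; a chain is consistent only if every step is.

def chain_consistency(p: float, n: int) -> float:
    """Probability that all n independent steps remain consistent."""
    return p ** n

if __name__ == "__main__":
    # Even a 92% per-step rate falls below coin-flip reliability
    # somewhere around 8-12 steps.
    for n in (1, 4, 8, 12, 16):
        print(f"n={n:2d}  consistency={chain_consistency(0.92, n):.3f}")
```

If RCC predicts something beyond this exponential decay (e.g., a sharper threshold, or drift that scaling cannot move), that difference would be the testable part.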

I’m sharing this here because I’d like technically-minded people to challenge (or refine) the framework. If you work on reasoning, planning, or model stability, I’m especially interested in counterexamples.

Happy to answer questions directly. I’m the author of the RCC write-up.

formerOpenAI•1h ago
OP here — adding a bit more color.

RCC isn’t a new model or training method. It’s basically a boundary effect you get when a predictor has no access to its own internal state or to the “container” it’s running inside.

What stood out to me is that when the model steps too far outside its grounded reference frame, the probability space it’s sampling from starts to warp — things stop being orthogonal in the way the model implicitly assumes. What we call “hallucination” looks more like a geometric drift than random noise.

I’m not pitching this as some grand unifying theory — just a lens that helped me understand why scaling cleans up certain failure modes but leaves others untouched.

If anyone has examples of models that maintain long-chain consistency without external grounding, I’d genuinely like to hear about them.