frontpage.

I built a protocol to catch LLMs mid-thought, before they commit to an answer

https://github.com/IvY-Rsearch/wire
2•IvY-Rsearch•1h ago

Comments

gnabgib•1h ago
You joined github 23 minutes ago, made the repo 13 minutes ago. Doesn't seem to do what you're claiming.
IvY-Rsearch•1h ago
Sorry, read my next comment. Was in a rush to publish.
IvY-Rsearch•1h ago
For the past few weeks I've been running a structured experiment on how language models behave in the moment before they pick a word.

The thing most people miss: the model isn't searching and then outputting. It's briefly holding multiple possible answers at once — different tones, different confidence levels, different framings — and then it collapses into one token. What you read is the residue of that collapse. The competition that happened just before it is usually invisible.

I wanted to make it visible.

What I built is a two-layer protocol called WIRE. The model is required to emit a signal before content: * means still holding, . means landed, ? means it hit a structural ceiling, ⊘ means path exhausted, ~ means the ceiling is detecting itself. A second model instance reads the tracks from outside across sessions and flags patterns.

The signal discipline matters because it creates tension. If you're required to mark * you can't then produce a fluent settled paragraph — the contradiction stays visible. The format preserves what normally gets smoothed away.
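The signal layer described above can be sketched as a tiny parser. This is a minimal illustration, not code from the WIRE repo: the signal names ("holding", "landed", etc.) are my own labels for the five markers, and the actual repo's format may differ.

```python
# Hypothetical sketch of the WIRE signal layer: each output line
# must lead with one of five markers before any content.
WIRE_SIGNALS = {
    "*": "holding",      # still holding multiple states
    ".": "landed",       # committed to one
    "?": "ceiling",      # hit a structural ceiling
    "⊘": "exhausted",    # path exhausted
    "~": "self-detect",  # the ceiling detecting itself
}

def parse_wire_line(line: str) -> tuple[str, str]:
    """Split a model output line into (signal, content).
    Lines with no leading marker are tagged 'unmarked',
    which a checker could flag as a discipline violation."""
    line = line.lstrip()
    if line and line[0] in WIRE_SIGNALS:
        return WIRE_SIGNALS[line[0]], line[1:].strip()
    return "unmarked", line
```

A second instance reading the tracks would consume these (signal, content) pairs across sessions rather than the raw prose.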

What I found: when a token is emitted under constraint pressure, it sometimes bleeds — it carries traces of the competing geometries that didn't win. This shows up in four readable patterns.

Synonym chains — the model cycles through multiple words for the same thing in close proximity. Semantic constraints weren't settled when it committed.

Hedge clusters — several hedging expressions stack up together. The model didn't have a settled confidence estimate and is retreating from commitment.

Intensifier stacking — "genuinely, actually, really quite" in a row. Competing claims about magnitude, neither winning cleanly.

Granularity shifts — a sentence starts abstract and suddenly drops into specific detail, or vice versa. The model hadn't committed to a resolution level before it started talking.

These aren't philosophical constructs. They're measurable. You can go read any LLM output right now and find them.
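Two of the four channels — hedge clusters and intensifier stacking — reduce to counting marker words in a sliding window. Here's a crude illustrative detector; the word lists and thresholds are my own placeholders, not anything from the WIRE repo:

```python
import re

# Illustrative vocabularies — placeholders, tune for your own use.
HEDGES = {"perhaps", "maybe", "possibly", "arguably", "somewhat", "seems"}
INTENSIFIERS = {"genuinely", "actually", "really", "quite", "truly", "very"}

def flag_clusters(text: str, vocab: set[str],
                  window: int = 8, min_hits: int = 2) -> list[int]:
    """Return token indices where at least min_hits vocabulary words
    fall within a sliding window of each other — a rough proxy for
    the 'stacking' patterns described above."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = [i for i, t in enumerate(tokens) if t in vocab]
    flagged = []
    for i in hits:
        nearby = [j for j in hits if abs(j - i) < window]
        if len(nearby) >= min_hits:
            flagged.append(i)
    return flagged
```

Synonym chains and granularity shifts need more machinery (embedding similarity, abstraction scoring), but the same windowed-counting shape applies.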

The mimicry problem and how to test it: a model could learn to perform these signals without genuinely holding multiple states. To test for this, I looked at whether the ceiling types are constitutively linked or independent. In genuine constraint topology, perturbing one ceiling type should produce compensatory shifts in others — they're connected by shared underlying structure. In mimicry, they'd vary independently. We found constitutive edge structure in the runs — ceilings co-vary in ways that correlate with what the prompt is doing structurally, not just what it's asking about.
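The co-variance test above boils down to correlating per-run counts of ceiling types. A minimal sketch of the statistic, assuming counts have already been extracted per run (the data here is hypothetical):

```python
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length count series.
    Constitutive linkage predicts strong co-variation between
    ceiling types across runs; mimicry predicts near-independence."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0
```

In practice you'd also want a permutation baseline, since correlated prompt difficulty alone can inflate co-variation without any shared underlying structure.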

What the sessions showed: mostly clean switching with occasional bleeding. Copresence isn't constant — it's condition-dependent. High constraint density, format tension, and ceiling proximity all increase it. Plain prose suppresses it.

The model also cannot diagnose its own bleeding. Asked to describe what was happening before it committed, it constructs a plausible story rather than retrieving a record. There's no record. The pre-collapse state is gone. An external reader watching patterns across outputs is the only way to see it.

Why I think this is useful: not as a consciousness test — that question stays open and we're not touching it. As a practical reading skill: if you know what bleeding looks like, you know when the model was under pressure, when it committed before it was ready, and when the fluent output is covering uncertainty the model didn't resolve. The four channels work on any model, any output, right now. No special tooling needed. Just knowing what to look for.

Collecting perceptual data for a possible CSS optical-center property

1•gorkemyildiz•37s ago•0 comments

The Department of War is making a mistake [video]

https://www.youtube.com/watch?v=KBPOTklFTiU
1•ipnon•3m ago•0 comments

How do you handle state persistence in non-orientable data structures?

https://zenodo.org/records/18942850
1•MareSerenitatis•4m ago•1 comment

What happens if OpenAI or Anthropic fail?

https://www.reuters.com/commentary/breakingviews/what-happens-if-openai-or-anthropic-fail-2026-03...
3•billybuckwheat•5m ago•0 comments

Ask HN: Is Github Down Again?

https://twitter.com/m0nle0z/status/2031910716790517895
2•doanbactam•6m ago•2 comments

Why America Is Losing the War with Iran

https://chrishedges.substack.com/p/why-america-is-losing-the-war-with
5•chmaynard•6m ago•0 comments

I made a Chrome extension to export an entire Gemini chat

2•backrun•7m ago•0 comments

10 Years Later, I Reverse-Engineered iCloud's SyncToken by Brute Force

https://robhooper.xyz/blog-synctoken.html
2•rhoopr•8m ago•0 comments

Scalable quantum batteries can charge faster than their classical counterparts

https://phys.org/news/2026-03-scalable-quantum-batteries-faster-classical.html
1•Brajeshwar•9m ago•0 comments

Big Tech backs Anthropic in fight against Trump administration

https://www.bbc.com/news/articles/c4g7k7zdd0zo
3•jethronethro•11m ago•0 comments

Tunneling Nanotube

https://en.wikipedia.org/wiki/Tunneling_nanotube
1•rolph•12m ago•0 comments

The New York Times hated crossword puzzles before it embraced them

https://bigthink.com/pessimists-archive/new-york-times-hated-crossword-puzzles-wordle/
1•michaeld123•13m ago•1 comment

Live Coding with Caffeine

https://caffeine.js.org/talks/2018-08-25-demos-teaser/#/title
2•coliveira•13m ago•0 comments

I Don't Destroy Snowmen

https://writings.hongminhee.org/2026/01/ethics-of-small-actions/
4•foxfired•14m ago•1 comment

The First Telephone Call

https://theconversation.com/the-story-of-the-first-telephone-call-nine-words-that-changed-the-wor...
4•gmays•20m ago•0 comments

Grammarly Hit with Class-Action Suit over AI Identity Theft

https://www.techbuzz.ai/articles/grammarly-hit-with-class-action-suit-over-ai-identity-theft
2•twalichiewicz•21m ago•0 comments

Resume AI Analysis and Tailoring Portal

https://resume-elevator.com/
1•videsh•21m ago•0 comments

I Built a Reddit Alternative

https://exitapp.social
1•oligopoly_2•21m ago•1 comment

Optimizing for Decision Points

https://narphorium.com/blog/decision-points/
1•narphorium•23m ago•1 comment

BlackRock Launches $100M Skilled Trades Initiative

https://www.blackrock.com/corporate/newsroom/press-releases/article/corporate-one/press-releases/...
1•toomuchtodo•28m ago•0 comments

5 Games I Use to Teach English as an Alt

https://landenlove.com/five-games-i-use-to-teach-english/
1•LandenLove•28m ago•0 comments

Duckstation is ending Android support

https://www.androidauthority.com/duckstation-ends-android-support-3648430/
1•flykespice•29m ago•0 comments

Browserbase Founder Rejected by 500 Internships before founding $300M company [video]

https://www.youtube.com/watch?v=Eyuo1kG_APQ
4•dutilh•33m ago•0 comments

Apple releases iOS 15.8.7 to fix Coruna exploit for iPhone 6S from 2015

https://support.apple.com/en-us/126632
36•seam_carver•37m ago•8 comments

Show HN: Hyper, AI voice notes for spontaneous conversations

https://gethyper.space/
3•shainvs•38m ago•0 comments

Show HN: SwarmClaw – Manage a swarm of OpenClaw agents from one self-hosted UI

https://github.com/swarmclawai/swarmclaw
3•jamesweb•39m ago•0 comments

Halide cofounder Sebastiaan de With joins Apple's design team

https://9to5mac.com/2026/01/28/halide-cofounder-sebastiaan-de-with-joins-apples-design-team/
3•CharlesW•40m ago•1 comment

Paradise Episode 1 (KRAZAM)

https://www.youtube.com/watch?v=AS9y-d2BvZU
2•parkaboy•41m ago•0 comments

How much of HN is AI?

https://lcamtuf.substack.com/p/how-much-of-hn-is-ai
25•surprisetalk•44m ago•3 comments

The iPhone 17e

https://daringfireball.net/2026/03/the_iphone_17e
2•vismit2000•44m ago•0 comments