frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
479•klaussilveira•7h ago•120 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
818•xnx•12h ago•491 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
40•matheusalmeida•1d ago•3 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
161•isitcontent•7h ago•18 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
158•dmpetrov•8h ago•69 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
97•jnord•3d ago•14 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
53•quibono•4d ago•7 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
211•eljojo•10h ago•135 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
264•vecti•9h ago•125 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
332•aktau•14h ago•158 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
329•ostacke•13h ago•86 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
415•todsacerdoti•15h ago•220 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
27•kmm•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
344•lstoll•13h ago•245 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
5•romes•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
53•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
202•i5heu•10h ago•148 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
116•vmatsiiako•12h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
153•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
248•surprisetalk•3d ago•32 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
28•gfortaine•5h ago•4 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1004•cdrnsf•17h ago•421 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
49•rescrv•15h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
74•ray__•4h ago•36 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
38•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
32•betamark•14h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Claude Opus 4.6

https://www.anthropic.com/news/claude-opus-4-6
2275•HellsMaddy•1d ago•981 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
8•gmays•2h ago•2 comments

Deterministic Governance: mechanical exclusion / bit-identical

https://github.com/Rymley/Deterministic-Governance-Mechanism
5•verhash•1w ago
This repository implements a deterministic exclusion engine where governance decisions are treated as a mechanical process rather than a probabilistic one. Candidates exist as stateful objects that accumulate strain under a scheduled constraint pressure. Pressure is applied across explicit phases—nucleation, quenching, and crystallization—and exclusion occurs only when accumulated stress exceeds a fixed yield threshold. Once fractured, a candidate cannot re-enter; history matters.

There is no ranking, sampling, or temperature. Given identical inputs, configuration, and substrate, the system always produces bit-identical outputs, verified by repeated hash checks. The implementation explores different elastic modulus formulations that change how alignment and proximity contribute to stress, without changing the deterministic nature of the process. The intent is to examine what governance looks like when exclusion is causal, replayable, and mechanically explainable rather than statistical. Repository: https://github.com/Rymley/Deterministic-Governance-Mechanism
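
For readers who want the shape of the algorithm rather than the metaphor, here is a minimal sketch of the core loop in Python. The names, phase schedule, and strain function are illustrative stand-ins, not the repository's actual API:

    import hashlib
    import json
    from dataclasses import dataclass

    YIELD_THRESHOLD = 1.0  # fixed yield point; exceeding it fractures a candidate

    # Illustrative phase schedule: (phase, pressure per step, steps).
    PHASES = [("nucleation", 0.05, 4), ("quenching", 0.25, 4), ("crystallization", 0.15, 4)]

    @dataclass
    class Candidate:
        text: str
        stress: float = 0.0
        fractured: bool = False  # irreversible: once fractured, never re-admitted

    def strain(candidate, substrate):
        # Stand-in for the alignment/proximity terms: 0 if the candidate matches
        # the verified substrate, 1 otherwise. Deterministic by construction.
        return 0.0 if candidate.text in substrate else 1.0

    def run(candidates, substrate):
        for phase, pressure, steps in PHASES:
            for _ in range(steps):
                for c in candidates:
                    if c.fractured:
                        continue  # history matters: no re-entry
                    c.stress += pressure * strain(c, substrate)
                    if c.stress > YIELD_THRESHOLD:
                        c.fractured = True
        survivors = [c.text for c in candidates if not c.fractured]
        # Bit-identical runs hash identically.
        digest = hashlib.sha256(json.dumps(survivors).encode()).hexdigest()
        return survivors, digest

    facts = {"The sky is blue"}
    cands = [Candidate("The sky is blue"), Candidate("The sky is green")]
    print(run(cands, facts))  # same inputs, same survivors, same hash, every run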

Comments

foobarbecue•1w ago
I don't even understand what discipline we're talking about here. Can someone provide some background please?
Nevermark•1w ago
> Quenching is higher-frequency pressure application that amplifies contradictions and internal inconsistencies.

> At each step, stress increments are computed from measurable terms such as alignment and proximity to a verified substrate.

Well obviously it's ... uh, ...

It may not be, but the whole description reads as category error satire to me.

verhash•1w ago
Not satire, though I get why the terminology looks odd. The language comes from materials science because the math is the same: deterministic state updates with hard thresholds. In most AI systems, exclusion relies on probabilistic sampling (temperature, top-k, nucleus), which means you can’t replay decisions exactly. This explores whether exclusion can be implemented as a deterministic state machine instead—same input, same output, verifiable by hash.

“Mechanical” is literal here: like a beam fracturing when stress exceeds a yield point (σ > σᵧ), candidates fracture when accumulated constraint pressure crosses a threshold. No randomness, no ranking. If that framing is wrong, the easiest way to test it is to run the code or the HF Space and see whether identical parameters actually do produce identical hashes.

foobarbecue•1w ago
What do you mean by "exclusion"?
verhash•1w ago
Here “exclusion” just means a deterministic reject / abstain decision applied after a model has already produced candidates. Nothing is generated, ranked, or sampled here. Given a fixed set of candidate outputs and a fixed set of verified constraints, the mechanism decides which candidates are admissible and which are not, in a way that is replayable and binary. A candidate is either allowed to pass through unchanged, or it is excluded from consideration because it violates constraints beyond a fixed tolerance.

In practical terms: think of it as a circuit breaker, not a judge. The model speaks freely upstream; downstream, this mechanism checks whether each output remains within a bounded distance of verified facts under a fixed rule. If it crosses the threshold, it’s excluded. If none survive, the system abstains instead of guessing. The point isn’t semantic authority or “truth,” it’s that the decision process itself is deterministic, inspectable, and identical every time you run it with the same inputs.
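
In code terms, the circuit-breaker shape is roughly this; the distance metric and tolerance are made up for the sketch, not taken from the repo:

    # Deterministic reject/abstain applied downstream of the model.
    VERIFIED_FACTS = ["The sky is blue"]
    MAX_DISTANCE = 0.1  # fixed tolerance; crossing it means exclusion

    def distance(candidate, fact):
        # Stand-in metric: fraction of the candidate's words absent from the fact.
        words, ref = candidate.lower().split(), set(fact.lower().split())
        return sum(w not in ref for w in words) / len(words)

    def admissible(candidate):
        # Binary and replayable: within bounded distance of some verified fact.
        return min(distance(candidate, f) for f in VERIFIED_FACTS) <= MAX_DISTANCE

    candidates = ["The sky is blue", "The sky is green"]
    survivors = [c for c in candidates if admissible(c)]
    print(survivors or "abstain")  # identical decision on every run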

nextaccountic•1w ago
You really, really need to be upfront in the first paragraph of your docs that you are talking about the inner workings of LLMs and other machine learning stuff.

Failing that, at least mention it here

verhash•6d ago
LLMs are probabilistic by nature. They’re great at producing fluent, creative, context-aware responses because they operate on likelihood rather than certainty. That’s their strength—but it’s also why they’re risky in production when correctness actually matters. What I’m building is not a replacement for an LLM, and it doesn’t change how the model works internally. It’s a deterministic gate that runs after the model and evaluates what it produces.

You can use it in two ways. As a verification layer, the LLM generates answers normally and this system checks each one against known facts or hard rules. Each candidate either passes or fails—no scoring, no “close enough.” As a governance layer, the same mechanism enforces safety, compliance, or consistency boundaries. The model can say anything upstream; this gate decides what is allowed to reach the user. Nothing is generated here, nothing inside the LLM is modified, and the same inputs always produce the same decision. For example, if the model outputs “Paris is the capital of France” and “London is the capital of France,” and the known fact is Paris, the first passes and the second is rejected—every time. If nothing matches, the system refuses to answer instead of guessing.
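
A toy version of that verification use, with the same Paris/London example (hypothetical names, not the actual interface):

    # Pass/fail only: a claim must exactly match the known fact for its topic.
    KNOWN_FACTS = {"capital of France": "Paris"}

    def admissible(topic, claim):
        return KNOWN_FACTS.get(topic) == claim

    candidates = [("capital of France", "Paris"), ("capital of France", "London")]
    survivors = [claim for topic, claim in candidates if admissible(topic, claim)]
    # "Paris" passes and "London" is rejected on every run; if nothing had
    # survived, the system would abstain instead of guessing.
    print(survivors if survivors else "abstain")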

Nevermark•6d ago
You are going so deep with abstract terms that your text becomes a special shorthand you think is clear but is anything but clear.

Stop talking about “exclusion” and “pressure” etc and use direct words about what is happening in the model.

Otherwise, even your attempts at explaining what you have said need more explanation.

And as the sibling comment points out, start by stating what you are actually doing, in concrete not “the math is the same so I assume you can guess how it applies if you happen to know the same math and the same models” terms. Which is asking everyone else, most anyone, to read your mind, not your text.

There is a tremendous difference between connections you see that help you understand, vs. assuming others can somehow infer connections and knowledge they don’t already have. The difference between an explanation and incoherence.

nextaccountic•1w ago
The thing that lets LLMs select the next token is probabilistic. This proposes a deterministic procedure instead.

Problem is, we sometimes want LLMs to be probabilistic. We want to be able to try again if the first answer was deemed unsuccessful

foobarbecue•1w ago
Ah, LLMs. I should have guessed.
gwern•1w ago
OK, this is AI slop ("fracture" alone gives it away). But maybe there's still something of value here? Can you explain it in actual human terms, give a real example, and explain what you did to test this and why I shouldn't flag this like I did https://news.ycombinator.com/item?id=46701114 ?
verhash•1w ago
Verified facts:

“The sky is blue”

“Water is wet”

Candidate outputs:

“The sky is blue”

“The sky is green”

Each sentence is embedded deterministically (in the demo, via a hash-based mock embedder so results are reproducible). For each candidate, I compute:

similarity to the closest verified fact

distance from that fact

a penalty function based on those values

Penalty accumulates over a fixed number of steps. If it exceeds a fixed threshold, the candidate is rejected. In this example, “The sky is blue” stays below the threshold; “The sky is green” crosses it and is excluded.
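
The arithmetic, roughly, in Python; the embedder, constants, and penalty function here are stand-ins rather than the Space's exact code, so which side of the threshold "The sky is green" lands on depends on those choices. What is guaranteed is that an exact match to a fact always passes and that every run prints identical numbers:

    import hashlib
    import math

    FACTS = ["The sky is blue", "Water is wet"]
    THRESHOLD = 1.0   # fixed rejection threshold (illustrative)
    STEPS = 5         # fixed number of accumulation steps

    def mock_embed(text, dim=8):
        # Hash-based mock embedder: fully deterministic, so results reproduce.
        h = hashlib.sha256(text.encode("utf-8")).digest()
        v = [b / 255.0 for b in h[:dim]]
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]

    def cosine(a, b):
        return sum(x * y for x, y in zip(a, b))

    def evaluate(candidate):
        emb = mock_embed(candidate)
        sim = max(cosine(emb, mock_embed(f)) for f in FACTS)  # closest verified fact
        dist = 1.0 - sim                                      # distance from that fact
        penalty = sum(dist for _ in range(STEPS))             # accumulates, never resets
        return penalty, ("excluded" if penalty > THRESHOLD else "admitted")

    for c in ["The sky is blue", "The sky is green"]:
        print(c, evaluate(c))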

What I tested:

Identical inputs + identical config always produce identical outputs (verified by hashing a canonical JSON of inputs + outputs).

Re-running the same scenario repeatedly produces the same decision and the same hash.

Changing a single parameter (distance, threshold, steps) predictably changes the outcome.
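
The hash check itself is small enough to show inline (a sketch, not the repo's actual test harness):

    import hashlib
    import json

    def run_hash(inputs, config, outputs):
        # Canonical JSON: sorted keys and fixed separators make the bytes, and
        # therefore the hash, identical across runs and machines.
        blob = json.dumps({"inputs": inputs, "config": config, "outputs": outputs},
                          sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(blob.encode("utf-8")).hexdigest()

    h1 = run_hash(["The sky is green"], {"threshold": 1.0, "steps": 5}, ["excluded"])
    h2 = run_hash(["The sky is green"], {"threshold": 1.0, "steps": 5}, ["excluded"])
    assert h1 == h2  # same scenario, same hash, every time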

Why this isn’t “AI slop”:

There’s no generative model here at all.

The terminology is unfortunate but the code is explicit arithmetic.

The entire point is removing non-determinism, not adding hand-wavy intelligence.

If you think the framing obscures that rather than clarifies it, that’s useful feedback—I’m actively dialing the language back. But the underlying claim is narrow: you can build governance filters that are deterministic, replayable, and auditable, which most current AI pipelines are not.

If that’s still uninteresting, fair enough—but it’s not trying to be mystical or persuasive, just mechanically verifiable.

You can test it here if you like: https://huggingface.co/spaces/RumleyRum/Deterministic-Govern...

gwern•6d ago
I don't get it. Embeddings don't prioritize, or even necessarily encode, truth value as a dimension. And even if they did, if you simply accept based on some hyperparameter of distance, it sounds like this procedure just leaves you vulnerable to problems like salami-slicing where you reach 'the sky is green' (which after all, it is sometimes) by multiple steps just below the tolerance.
verhash•6d ago
That’s a fair critique, but it slightly misidentifies what’s being claimed. The system does not assume embeddings encode truth, nor does it attempt to extract truth from latent space. It measures proximity to a substrate that has already been declared authoritative. In that sense it’s a conditional gate, not a semantic oracle. If the substrate is wrong, incomplete, or absurd, the mechanism will enforce that wrongness consistently. That is not a failure mode; it is the boundary of responsibility. The engine is not discovering truth, it is enforcing consistency relative to an explicit reference set.

On salami-slicing toward a contradiction: that concern applies to memoryless, single-pass filters. This mechanism is explicitly stateful. Deviations accumulate stress over time and do not reset, so a sequence of “almost acceptable” steps still fractures under sustained pressure. You cannot asymptotically walk toward a contradiction unless the configuration allows it, in which case that permissiveness is deliberate and inspectable. The trade being made here is not correctness for convenience, but opacity for causality. Instead of stochastic acceptance that can’t be replayed or audited, you get a deterministic enforcement layer whose failure modes live upstream in substrate and configuration choices, where they can be examined rather than guessed at.