
Can Europe get kids off social media?

https://www.ft.com/content/cf465c21-4789-490b-b328-41f6383567d7
1•thm•1m ago•0 comments

I Built a NAS (Buildlog)

https://arne.me/blog/buildlog-nas
1•abahlo•1m ago•0 comments

Making Software: How do computers store data?

https://www.makingsoftware.com/chapters/how-is-data-stored
1•Garbage•3m ago•0 comments

A timeline of claims about AI/LLMs

https://blog.nethuml.xyz/posts/2026/02/timeline-of-claims-about-ai-llms/
1•nethuml•5m ago•0 comments

Freeciv 3D with hex map tiles and WebGPU renderer

https://freecivworld.net/
1•roschdal•7m ago•0 comments

SpaceX-xAI Merger: Nobody's Talking About the von Neumann Elephant in the Room

1•juanpabloaj•10m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
4•aarghh•15m ago•0 comments

Ask HN: Would you use an ESLint-like tool for SEO that fails your CI/CD build?

1•YannBuilds•15m ago•0 comments

Praise for Price Gouging

https://www.grumpy-economist.com/p/praise-for-price-gouging
1•mhb•18m ago•0 comments

Open source infra orchestrator agent clanker CLI

https://github.com/bgdnvk/clanker
1•tekbog•20m ago•0 comments

Lance table format explained simply, stupid (Animated)

https://tontinton.com/posts/lance/
1•tontinton•21m ago•0 comments

Solving Soma

https://anekstein.com/posts/2026-02-01-blocker
1•davidanekstein•21m ago•0 comments

We built a cloud platform for agentic software (our virtualization, etc.)

https://agentuity.com/
1•rblalock•22m ago•2 comments

Show HN: WLM-SLP – A 0D-27D Structural Language for Multi-Agent Alignment

https://github.com/gavingu2255-ai/WLM-Open-Source/blob/main/README.md
1•WujieGuGavin•22m ago•0 comments

Former Tumblr Head Jeff D'Onofrio Steps in as Acting CEO at the Washington Post

https://www.theverge.com/tech/875433/tumblr-jeff-donofrio-ceo-washington-post-layoffs
2•bookofjoe•25m ago•0 comments

Bounded Flexible Arrays in C

https://people.kernel.org/kees/bounded-flexible-arrays-in-c
1•fanf2•25m ago•0 comments

The Invisible Labor Force Powering AI

https://cacm.acm.org/news/the-invisible-labor-force-powering-ai/
1•pseudolus•27m ago•0 comments

Reading Recursion via Pascal

https://journal.paoloamoroso.com/reading-recursion-via-pascal
1•AlexeyBrin•28m ago•0 comments

Show HN: I made a website that finds patterns on your spreadsheet

https://analyzetable.com
1•kouhxp•29m ago•0 comments

Jokes on You AI: Turning the Tables – LLMs for Learning

https://www.dev-log.me/jokes_on_you_ai_llms_for_learning/
1•wazHFsRy•29m ago•0 comments

You don't need RAG in 2026

https://ryanlineng.substack.com/p/you-dont-need-rag-in-2026
1•kareninoverseas•30m ago•0 comments

WatchLLM – Cost kill switch for AI agents (with loop detection)

https://www.watchllm.dev/
1•Kaadz•33m ago•2 comments

I turned myself into an AI-generated deathbot – here's what I found

https://www.bbc.com/news/articles/c93wjywz5p5o
1•cmsefton•44m ago•1 comment

Management style doesn't predict survival

https://orchidfiles.com/management-style-doesnt-predict-survival/
1•theorchid•45m ago•0 comments

One Generation Runs the Country. The Next Cashed in on Crypto

https://www.wsj.com/finance/currencies/trump-sons-crypto-billions-1e7f1414
1•impish9208•46m ago•1 comment

"I Was Wrong": Why the Civil War Is Running Late [video][2h21m]

https://www.youtube.com/watch?v=RDmkKZ7vAkI
1•Bender•47m ago•0 comments

Show HN: A sandboxed execution environment for AI agents via WASM

https://github.com/Parassharmaa/agent-sandbox
1•paraaz•50m ago•0 comments

Wine-Staging 11.2 Brings More Patches to Help Adobe Photoshop on Linux

https://www.phoronix.com/news/Wine-Staging-11.2
2•doener•50m ago•0 comments

The Nature of the Beast

https://cinemasojourns.com/2026/02/07/the-nature-of-the-beast/
1•jjgreen•50m ago•0 comments

From Prediction to Compilation: A Manifesto for Intrinsically Reliable AI

1•JanusPater•50m ago•0 comments

A High-Level View of TLA+

https://lamport.azurewebsites.net/tla/high-level-view.html
92•blobcode•8mo ago

Comments

tomthecreator•8mo ago
I’m curious about the practical workflow after you’ve written and verified a TLA+ spec. How do you go from the TLA+ proof to actual code? Is there any established process or best practices for translating the spec into implementation while ensuring you’re not introducing bugs during that translation? And once you’ve written the code, how do you maintain the connection between spec and implementation as things evolve?
SchwKatze•8mo ago
I've never done that before, but this recent post from the MongoDB team has an interesting view on this.

https://www.mongodb.com/blog/post/engineering/conformance-ch...

romac•8mo ago
You can also go the other way around and generate traces [1] from the TLA+ or Quint [2] specs using the Apalache model checker [3], map each action in the trace to an action on your system under test, and check that the abstract and concrete states match at each step.

[1] https://apalache-mc.org/docs/adr/015adr-trace.html

[2] https://quint-lang.org

[3] https://apalache-mc.org
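
To make the shape of that check concrete, here is a minimal, hypothetical sketch: a hard-coded trace stands in for one exported from the system under test (the real pipeline would use Apalache's trace format), and the model checker verifies that each recorded step is also a legal step of the abstract spec.

    ---- MODULE TraceCheck ----
    EXTENDS Naturals, Sequences

    \* Hypothetical recorded trace: one record per observed step
    \* (here, successive values of a counter x).
    Trace == <<[x |-> 0], [x |-> 1], [x |-> 2]>>

    VARIABLES i, x

    Init == i = 1 /\ x = Trace[1].x

    \* Replay the trace, requiring each concrete step to also be a
    \* legal step of the abstract spec (x increments by exactly 1).
    \* If the trace diverges from the spec, Next is disabled and TLC
    \* reports a deadlock at the offending step.
    Next == /\ i < Len(Trace)
            /\ i' = i + 1
            /\ x' = Trace[i'].x
            /\ x' = x + 1

    Spec == Init /\ [][Next]_<<i, x>>
    ====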

mjb•8mo ago
Runtime monitoring of system behavior against the spec is one way to close the gap. We wrote about our experiences with one tool for that (PObserve) here: https://cacm.acm.org/practice/systems-correctness-practices-...
ams92•8mo ago
I’m wondering the same. I’ve read multiple articles about formal methods and how they’ve been used to find obscure bugs in distributed systems, but usually they just show a proof and talk about formal methods without concrete examples.
pron•8mo ago
You can think about a TLA+ spec as the level of design between the "natural language design" you have in your head or on paper and the code. It makes it easier to go from the idea to the code, and allows you to explore ideas, at any level of abstraction, with full rigour.

The question of how you maintain the spec as the code changes could be the same as how you maintain the natural language spec as the code changes. Sometimes you just don't, and you may want to only when there is a very substantial change that you'd also like to explore with rigour.

However, there are several approaches for creating a more formal (i.e. mechanical) connection between the code and the TLA+ spec, such as generating tests or checking production logs against the high-level spec. But I would say that you already get so much benefit even without such a mechanical connection that having it isn't needed; it can be the cherry on top in situations that really require it.

I think that the greatest hurdle in getting the most out of TLA+ is internalising that the spec isn't code, and isn't supposed to be. For example, if you're making use of some hashing or sorting function and the subject of your design isn't the hashing or sorting algorithm itself, in TLA+ you wouldn't write a hashing/sorting spec or reuse one from a library; rather, you'd write something like "assume there exists a hashing/sorting function".
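
For illustration, a tiny sketch of that style (module and names made up): declare the sorting function as an uninterpreted constant and assume only the properties the rest of the design relies on, rather than specifying the algorithm.

    ---- MODULE SortAbstraction ----
    EXTENDS Naturals, Sequences

    \* Uninterpreted operator: we assume *some* sorting function
    \* exists, without saying how it works.
    CONSTANT Sort(_)

    IsSorted(s) == \A m, n \in 1..Len(s) : m < n => s[m] <= s[n]

    \* The only facts the rest of the spec may rely on:
    ASSUME \A s \in Seq(Nat) :
               IsSorted(Sort(s)) /\ Len(Sort(s)) = Len(s)
    ====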

That's why you may end up writing different specs for the same application, each focusing on a particular aspect of the system at the appropriate level of detail for that aspect. A single line of TLA+ spec could correspond to anywhere between 1 and 100K lines of code. The use is more similar to how a physicist would describe a system than to code, which is (part of) the system. For example, if the physicist is interested only in the orbit of a planet around its star, she may represent it as a point mass; if she's interested in weather patterns on the planet, then there's an entirely different description. It's a map, not the territory.

hwayne•8mo ago
Right now the best practice is generating test suites from the TLA+ spec, though it's bespoke for each company that does it and there are no production-ready universal tools that do it. LLMs help.
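
One common trick for getting a test out of a spec (an illustration, not necessarily what any particular company does) is a "trap property": assert that the scenario you want a test for never happens, and let the model checker's counterexample become the skeleton of a test case.

    \* Hypothetical trap property: claim the protocol never commits.
    \* TLC refutes this and emits a concrete counterexample trace,
    \* which can be replayed step-by-step against the implementation.
    NeverCommits == [](state # "committed")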
threatofrain•8mo ago
How is that different from the many algorithms whose proofs of performance characteristics are purely in math?
Pet_Ant•8mo ago
I've thought that one thing that could help would be annotations on your code that index into the spec. That way you at least have a one-way binding. An if-you-are-changing-this-you-should-at-least-review-that kind of thing.
lenkite•8mo ago
Asking from ignorance: Can newer languages like Lean do the job of model checkers like TLA+? There are so many formal model checkers/languages that it is difficult to know which one is the "best".
mjb•8mo ago
Doing distributed systems work in Lean is possible, but right now is much harder than something like TLA+ or P. It's possible that a richer library of systems primitives in Lean ('mathlib for systems') could make it easier. Lean is a very useful tool, but right now isn't where I'd start for systems work (unless I was doing something specific, like trying to formalize FLP for a paper).
igornotarobot•8mo ago
It should be possible to write protocol specifications in Lean; e.g., here is a recent case study on specifying two-phase commit in Lean [1] and proving its safety [2].

However, there are no model checkers for Lean. Currently, you either have to write a full proof by hand, with some assistance from LLMs, or rely on random simulation, similar to property-based testing.

[1] https://protocols-made-fun.com/lean/2025/04/25/lean-two-phas... [2] https://protocols-made-fun.com/lean/2025/05/10/lean-two-phas...

romac•8mo ago
If you are interested in TLA+, you might want to check out Quint (https://quint-lang.org), a modern take on a specification language which shares the same underlying temporal logic of actions, but with a syntax more familiar to programmers.
y-curious•8mo ago
I am very interested in TLA+. I work on an existing application, say, for serving graphs to customers on the frontend. I want to, for example, add a new button to the frontend to turn on dark mode.

What I want to know from experienced formal methods folks: how do I go about scoping my new feature in TLA+? Do I have to model my entire system in the language, or just a subset of features? Is my example even a good use case for TLA+?

johnbender•8mo ago
Formal methods like TLA provide the highest value when you have a property of the system that is subtle but should be comprehensive, which is to say you need to know it’s true for all the behaviors of the system. (Aside: this is true even if you never model check the system because modeling is a good forcing function for clarity in understanding of the system)

With that in mind, you don't have to model your whole system, as long as you're comfortable with the boundaries as assumptions in any property/theorem you prove about it! For example, unconstrained variables in a TLA+ spec do a reasonable job of modeling an overapproximation of inputs from the outside world, so that's one boundary you could potentially stop at, supposing your proof can go through in that chaotic context.
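
As a hypothetical version of the dark-mode example above: model only the toggle, and let an unconstrained environment action stand in for everything else the user and the rest of the app might do.

    ---- MODULE DarkMode ----
    VARIABLES mode, request

    Init == mode = "light" /\ request \in {"toggle", "none"}

    \* The environment is unconstrained: a toggle request (or nothing)
    \* may arrive at any time, modeling arbitrary outside input.
    Env == request' \in {"toggle", "none"} /\ UNCHANGED mode

    Toggle == /\ request = "toggle"
              /\ mode' = IF mode = "light" THEN "dark" ELSE "light"
              /\ request' = "none"

    Next == Env \/ Toggle

    \* The invariant we'd ask TLC to check:
    TypeOK == mode \in {"light", "dark"}

    Spec == Init /\ [][Next]_<<mode, request>>
    ====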

saigovardhan•8mo ago
TLA+ has shown good promise in verifying cache-coherence protocols in multicore systems (a good alternative to CMurphi). An intern at Microsoft once helped uncover a bug in the Xbox 360 right before its 2005 launch, by formally verifying the protocol.

A few serious RISC-V startups use it today as well.

senkora•8mo ago
Also see the TLA+ video course from the same website: https://lamport.azurewebsites.net/video/videos.html

Leslie Lamport's a funny guy, and it really comes across in the video series. I think it's a great way to get started with TLA+.

saigovardhan•8mo ago
He does have a great collection of hoodies :)
Cyphase•8mo ago
I've had multiple occasions to link to this video recently, and here's another.

https://www.youtube.com/watch?v=tsSDvflzJbc

It's Leslie Lamport giving the closing keynote at SCaLE a few months ago. Its title was "Coding Isn't Programming".

I really enjoyed it. And as someone mentioned in another comment, he's got a fun sense of humor. I was there in person and got to meet him and exchange a few words after, which was cool.

Watch the first 2m10s and see if it hooks you.