
GenAI-Accelerated TLA+ Challenge

https://foundation.tlapl.us/challenge/index.html
35•lemmster•10mo ago

Comments

Taikonerd•10mo ago
Using LLMs for formal specs / formal modeling makes a lot of sense to me. If an LLM can do the work of going from informal English-language specs to TLA+ / Dafny / etc, then it can hook into a very mature ecosystem of automated proof tools.

I'm picturing something like this:

1. Human developer says, "if a user isn't authenticated, they shouldn't be able to place an order."

2. LLM takes this, and its knowledge of the codebase, and turns it into a formal spec -- like, "there is no code path where User.is_authenticated is false and Orders.place() is called."

3. Existing code analysis tools can confirm or find a counterexample.
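
To make step 2 concrete, here is a minimal sketch of the kind of spec an LLM might emit for that property. All names here (OrderAuth, PlaceOrder, and so on) are invented for illustration, not taken from any real codebase:

    ---- MODULE OrderAuth ----
    EXTENDS Naturals

    VARIABLES authenticated, orders
    vars == <<authenticated, orders>>

    Init == authenticated = FALSE /\ orders = 0

    LogIn == authenticated' = TRUE /\ UNCHANGED orders

    \* Orders can only be placed while authenticated.
    PlaceOrder == /\ authenticated = TRUE
                  /\ orders' = orders + 1
                  /\ UNCHANGED authenticated

    Next == LogIn \/ PlaceOrder

    Spec == Init /\ [][Next]_vars

    \* Invariant for TLC: an order never exists unless the user authenticated.
    NoUnauthenticatedOrder == orders > 0 => authenticated
    ====

Running TLC on Spec with NoUnauthenticatedOrder as an invariant either verifies it over all reachable states or produces a counterexample trace, which is exactly step 3.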

omneity•10mo ago
A fascinating thought. But then who verifies that the TLA+ specification does indeed match the human specification?

I’m guessing using an LLM as a translator narrows the gap, and better LLMs will make it narrower eventually, but is there a way to quantify this? For example, how would it compare to a human translating the spec into TLA+?

justanotheratom•10mo ago
maybe run it through a few other LLMs, depending on how much confidence you need: o3 pro, gemini 2.5 pro, claude 3.7, grok 3, etc.
svieira•10mo ago
Then you need to be able to formally prove the equivalence of various TLA+ programs (maybe that's a solved problem?)
omneity•10mo ago
No idea about SOTA but naively it doesn't seem like a very difficult problem:

- Ensure all TLA+ specs produced have the same inputs/outputs (domains; mostly a prompting problem, and can be solved with retries)

- Ensure all TLA+ specs produce the same outputs for the same inputs (making them functionally equivalent in practice; this might be computationally intensive)

Of course that assumes your input domains are countable, but it's probably okay to sample from large ranges for a certain "level" of equivalence.

EDIT: Not sure how that will work with non-determinism though.
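
For what it's worth, refinement checking might sidestep both issues: TLC can check that every behavior of one spec is allowed by another, non-determinism included. A rough sketch, assuming two candidate modules (hypothetically named CandidateA and CandidateB) that declare the same variables:

    ---- MODULE EquivCheck ----
    EXTENDS CandidateA        \* brings CandidateA's Spec and variables into scope
    B == INSTANCE CandidateB  \* CandidateB's variables are identified with CandidateA's

    \* Check Spec as the specification and BSpec as a temporal property in TLC,
    \* then rerun with the roles swapped for full behavioral equivalence.
    BSpec == B!Spec
    ====

This avoids output sampling entirely, though it inherits TLC's usual state-space limits.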

justanotheratom•10mo ago
I didn't mean generating separate TLA+ programs. Rather, having other LLMs review and comment on whether this TLA+ program satisfies the user's specification.
Taikonerd•10mo ago
A fair question! I'd say it's not that different from using an LLM to write regular code: who verifies that the code the LLM wrote is indeed what you meant?
fmap•10mo ago
The usual way to check whether a definition is correct is to prove properties about it that you think should hold. TLA+ has good support for this, with both model checking and simple proofs.
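
For instance, picking up the hypothetical OrderAuth sketch from upthread, a TLAPS proof of its invariant follows the standard inductive pattern:

    THEOREM Spec => []NoUnauthenticatedOrder
    <1>1. Init => NoUnauthenticatedOrder
      BY DEF Init, NoUnauthenticatedOrder
    <1>2. NoUnauthenticatedOrder /\ [Next]_vars => NoUnauthenticatedOrder'
      BY DEF NoUnauthenticatedOrder, Next, LogIn, PlaceOrder, vars
    <1>3. QED BY <1>1, <1>2, PTL DEF Spec

If the definition is wrong, the failing proof obligations point at the mismatch.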
frogmeister57•10mo ago
It makes a lot of sense only for graphics-card salespeople. For everyone else with a working neuron, the very idea is utter nonsense.
max_•10mo ago
Leslie Lamport said that he invented TLA+ so people could "think above the code".

It was meant as a tool for people to improve their thinking and description of systems.

LLM generation of TLA+ code is just intellectual masturbation.

It may get the work done for your boss, but your intellect will still remain barren, in which case you are better off not writing TLA+ at all.

warkdarrior•10mo ago
> [TLA+] was meant as a tool for people to improve their thinking and description of systems.

Why the speciesism? Why couldn't LLMs use TLA+ by translating a natural-language request into a TLA+ model and then checking it in TLA+?

jjmarr•10mo ago
Not the OP, but I would rather give a formal specification of my system to an AI and have it generate the code.

I believe the point is that it's easier for a human to verify a system's correctness as expressed in TLA+, and then verify that the code correctly matches the system, than it is to verify the entire codebase as a system all at once.

Then, if my model of the system is flawed, TLA+ will tell me.

I'm an AI bull, so if I give the LLM a natural-language description, I'd like the LLM to explain the model instead of just writing the TLA+ code.

max_•10mo ago
TLA+ was invented in the first place because Leslie Lamport thought natural language was a dubious tool for "specifying systems".

Yes, an LLM may even generate the TLA+ code correctly, but model checking is not the end goal of TLA+.

TLA+ is written to fully understand how a system works at an abstract level.

Anyway, I guess you could just read the LLM-generated TLA+ code. That would help you understand the abstraction of the system, but is the LLM's abstraction equal to your abstraction?

But vibe-coded TLA+ sounds extremely dangerous, especially in mission-critical domains where it's required, like smart contracts, pacemakers, aircraft software, etc.

frogmeister57•10mo ago
Using generative chatbots to write a formal spec is the most stupid idea ever. Specs are all about reasoning. You need to do the thinking to model the system in a very simplified manner. Formal methods and the generative BS are at the antipodes of reliability. This is an insult to reason. Please keep this nonsense away from the serious parts of CS.
siscia•10mo ago
Anyone who has tried to write formal verification will tell you that there is a WIDE gap between thinking and writing the specs.

Any tool that makes formal verification more accessible, should be welcome.

I believe the valuable part is how accessible we make thinking together with machines.

We humans are great at creating innovative solutions, not so great at checking and verifying every single thing that can go wrong. Machines help with that.

kelseyfrog•10mo ago
Interesting. I've always wanted to formalize the US Constitution into TLA+ in order to find loopholes.