frontpage.
Tell HN: Bugbrain.app is spamming HN profile email addresses

1•jacquesm•36s ago•0 comments

John C. Dvorak – heart attack – in hospital

https://www.noagendashow.net/listen/1848?t=1:24
2•https443•4m ago•0 comments

Improving Antibot Biometric Protections: A Harsh Lesson from Akamai (2024)

https://www.mimic.sbs/antibot/Improving-Antibot-Biometric-Protections-Through-Threat-Intelligence...
2•mmarian•8m ago•0 comments

Sumo: Ukrainian ozeki Aonishiki never forgets where he started

https://mainichi.jp/english/articles/20260306/p2g/00m/0sp/020000c
1•rawgabbit•8m ago•1 comment

MariaDB innovation: vector index performance

http://smalldatum.blogspot.com/2026/02/mariadb-innovation-vector-index.html
1•gslin•10m ago•0 comments

Autoresearch: Agents researching on single-GPU nanochat training automatically

https://github.com/karpathy/autoresearch
1•simonpure•10m ago•0 comments

Show HN: Turn an audio recording into a LinkedIn video – no signup, no server

https://ohmstone.github.io/audiogram/
1•tonelord•10m ago•0 comments

The Elect

https://tomasbjartur.substack.com/p/the-elect
1•paulpauper•10m ago•0 comments

Completing Claude's Cycles [pdf]

https://github.com/no-way-labs/residue/blob/main/paper/completing_claudes_cycles.pdf
1•fs123•12m ago•0 comments

The Cost of Hard-to-Fire Labor Laws: Why European Firms Don't Take Risks

https://marginalrevolution.com/marginalrevolution/2026/03/the-hidden-cost-of-hard-to-fire-labor-l...
1•paulpauper•13m ago•0 comments

Evalien – Node.js event loop agent harness

https://github.com/agentbellnorm/evalien
1•agentbellnorm•13m ago•0 comments

Whale Fall

https://nesbitt.io/2026/02/21/whale-fall.html
1•cratermoon•13m ago•0 comments

Nippon Life Sues OpenAI over Legal Advice to Ex-Beneficiary

https://www.nippon.com/en/news/yjj2026030600630/
4•powera•17m ago•0 comments

Show HN: U-Boot Fw_env.config Bruteforcer

https://github.com/nstarke/U-Boot-fw_env_scan
1•bootbloopers•22m ago•0 comments

War Prediction Markets Are a National-Security Threat

https://www.theatlantic.com/technology/2026/03/polymarket-insider-trading-going-get-people-killed...
17•fortran77•23m ago•3 comments

How do teams prevent duplicate LLM API calls and token waste?

1•cachelogic•26m ago•0 comments

Agentic open-source local news comedian (Pydantic, Llama 3.1)

https://github.com/jeffjbowie/Local-News-Comedian-Agent
1•Veritaco•30m ago•0 comments

The First Multi-Behavior Brain Upload

https://theinnermostloop.substack.com/p/the-first-multi-behavior-brain-upload
1•bwjx•32m ago•0 comments

The Art of Dailiness, by Michael Bierut

https://www.itsnicethat.com/features/michael-bierut-the-art-of-dailiness-advice-education-creativ...
1•jruohonen•32m ago•0 comments

Fine I'll Try Linux One More Time [video]

https://www.youtube.com/watch?v=kluoZ9RhmVo
1•eldaisfish•33m ago•0 comments

Coworker Isn't the Enemy–Why Compete with Them?

1•01-_-•33m ago•0 comments

Sometimes the simple ideas are the most effective

https://www.trustle.online
2•cdotkay•34m ago•1 comment

Spirals – Or Visualizing Cycles and Patterns

https://uditsaxena.bearblog.dev/spirals-cycles-patterns/
2•wavelander•34m ago•0 comments

Show HN: I wrote a script to customise Hacker News with bigger fonts and skins

https://github.com/susam/hnskins
1•susam•34m ago•0 comments

I used pulsar detection techniques to turn a phone into a watch timegrapher

https://www.chronolog.watch/timegrapher
3•tylerjaywood•38m ago•1 comment

AI-Powered F1 Predictions

https://danielfinch.co.uk/words/2026/03/06/ai-f1-predictions/
3•danielsamuels•38m ago•1 comment

The MacBook Neo drops: What I'm doing to get it

https://www.sonka.io/wishlist/survival-funds-376a71
1•chiswanjo•40m ago•0 comments

Google PM open-sources Always On Memory Agent, ditching vector databases

https://venturebeat.com/orchestration/google-pm-open-sources-always-on-memory-agent-ditching-vect...
1•antigrav_kids•41m ago•2 comments

Full Circle: How AI Agents Are Bringing Back the Age of the Designer

http://bodgerwashere.blogspot.com/2026/03/full-circle-how-ai-agents-are-bringing.html
1•jlbprof•42m ago•0 comments

Sendbuilds: Build and deploy any GitHub repo with one command

https://github.com/Sendara/sendbuilds
1•notsliver•43m ago•0 comments

GenAI-Accelerated TLA+ Challenge

https://foundation.tlapl.us/challenge/index.html
35•lemmster•10mo ago

Comments

Taikonerd•10mo ago
Using LLMs for formal specs / formal modeling makes a lot of sense to me. If an LLM can do the work of going from informal English-language specs to TLA+ / Dafny / etc, then it can hook into a very mature ecosystem of automated proof tools.

I'm picturing it something like this:

1. Human developer says, "if a user isn't authenticated, they shouldn't be able to place an order."

2. LLM takes this, and its knowledge of the codebase, and turns it into a formal spec -- like, "there is no code path where User.is_authenticated is false and Orders.place() is called."

3. Existing code analysis tools can confirm it or find a counterexample (a TLA+ sketch of what this might look like follows below).
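
A minimal sketch of what step 2 might produce, assuming a toy model with one user and an order counter; the module, variable, and action names here are illustrative, not taken from any real codebase:

    ---- MODULE AuthOrders ----
    EXTENDS Naturals

    VARIABLES isAuthenticated, ordersPlaced
    vars == <<isAuthenticated, ordersPlaced>>

    Init == /\ isAuthenticated = FALSE
            /\ ordersPlaced = 0

    LogIn == /\ isAuthenticated' = TRUE
             /\ UNCHANGED ordersPlaced

    \* The guard encodes the informal requirement:
    \* an order can only be placed while authenticated.
    PlaceOrder == /\ isAuthenticated
                  /\ ordersPlaced' = ordersPlaced + 1
                  /\ UNCHANGED isAuthenticated

    Next == LogIn \/ PlaceOrder
    Spec == Init /\ [][Next]_vars

    \* Property for TLC (step 3): no step ever places an order
    \* from an unauthenticated state.
    NoUnauthenticatedOrder ==
        [][ordersPlaced' > ordersPlaced => isAuthenticated]_vars
    ====

If an LLM dropped the guard in PlaceOrder, TLC would report a counterexample trace violating NoUnauthenticatedOrder.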

omneity•10mo ago
A fascinating thought. But then who verifies that the TLA+ specification does indeed match the human specification?

I’m guessing using an LLM as a translator narrows the gap, and better LLMs will make it narrower eventually, but is there a way to quantify this? For example how would it compare to a human translating the spec into TLA+?

justanotheratom•10mo ago
Maybe run it through a few other LLMs, depending on how much confidence you need: o3 pro, gemini 2.5 pro, claude 3.7, grok 3, etc.
svieira•10mo ago
Then you need to be able to formally prove the equivalence of various TLA+ programs (maybe that's a solved problem?)
omneity•10mo ago
No idea about SOTA, but naively it doesn't seem like a very difficult problem:

- Ensure all TLA+ specs produced have the same inputs/outputs (domains; mostly a prompting problem and can be solved with retries)

- Ensure all the TLA+ specs produce the same outputs for the same inputs (making them functionally equivalent in practice; might be computationally intensive)

Of course that assumes your input domains are countable, but it's probably okay to sample from large ranges for a certain "level" of equivalence.

EDIT: Not sure how that will work with non-determinism though.
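
For the equivalence question above, an alternative to sampling is TLC's refinement checking: if the two candidate specs declare the same variables, each can be checked as a property of the other. A rough sketch, with SpecA and SpecB as hypothetical names for two LLM-generated modules that both declare x and define Spec:

    ---- MODULE CheckEquivalence ----
    \* SpecA and SpecB are hypothetical LLM-generated modules
    \* declaring the same variable x and defining Spec.
    VARIABLES x

    A == INSTANCE SpecA
    B == INSTANCE SpecB

    ASpec == A!Spec
    BSpec == B!Spec

    \* Run TLC twice on this module:
    \*   1. SPECIFICATION ASpec  with  PROPERTY BSpec
    \*   2. SPECIFICATION BSpec  with  PROPERTY ASpec
    \* If both runs pass, each spec refines the other, so they allow
    \* the same behaviours (up to stuttering). This avoids sampling,
    \* though TLC still needs a finite model to explore.
    ====
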

justanotheratom•10mo ago
I didn't mean generating separate TLA+ programs. Rather, other LLMs would review and comment on whether this TLA+ program satisfies the user's specification.
Taikonerd•10mo ago
A fair question! I'd say it's not that different from using an LLM to write regular code: who verifies that the code the LLM wrote is indeed what you meant?
fmap•10mo ago
The usual way to check whether a definition is correct is to prove properties about it that you think should hold. TLA+ has good support for this, both with model checking and with simple proofs.
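
To illustrate both routes, reusing the hypothetical AuthOrders sketch from above: a type invariant can be handed to TLC via the .cfg, and the same obligation can be stated as a theorem for the TLAPS proof system (proof steps abbreviated):

    \* Invariant form, checked by TLC with "INVARIANT TypeOK" in the .cfg.
    TypeOK == /\ isAuthenticated \in BOOLEAN
              /\ ordersPlaced \in Nat

    \* The same obligation as a TLAPS theorem (standard inductive shape).
    THEOREM TypeInvariance == Spec => []TypeOK
    <1>1. Init => TypeOK
      BY DEF Init, TypeOK
    <1>2. TypeOK /\ [Next]_vars => TypeOK'
      BY DEF TypeOK, Next, LogIn, PlaceOrder, vars
    <1>3. QED
      BY <1>1, <1>2, PTL DEF Spec
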
frogmeister57•10mo ago
It makes a lot of sense only for graphics-card salespeople. For everyone else with a working neuron, the very idea is utter nonsense.
max_•10mo ago
Leslie Lamport said that he invented TLA+ so people could "think above the code".

It was meant as a tool for people to improve their thinking and description of systems.

LLM generation of TLA+ code is just intellectual masturbation.

It may get the work done for your boss. But your intellect will still remain bald, in which case you are better off not writing TLA+ at all.

warkdarrior•10mo ago
> [TLA+] was meant as a tool for people to improve their thinking and description of systems.

Why the speciesism? Why couldn't LLMs use TLA+ by translating a natural-language request into a TLA+ model and then checking it in TLA+?

jjmarr•10mo ago
Not the OP, but I would rather give a formal specification of my system to an AI and have it generate the code.

I believe the point is that it's easier for a human to verify a system's correctness as expressed in TLA+, and then verify that the code correctly matches that system, than it is to verify the entire codebase as a system all at once.

Then, if my model of the system is flawed, TLA+ will tell me.

I'm an AI bull so if I give the LLM a natural language description, I'd like the LLM to explain the model instead of just writing the TLA+ code.

max_•10mo ago
TLA+ was invented in the first place because Leslie Lamport thought natural language was a dubious tool for "specifying systems".

Yes, an LLM may generate the TLA+ code, even correctly, but model checking is not the end goal of TLA+.

TLA+ is written to fully understand how a system works at an abstract level.

Anyways, I guess you could just read the LLM-generated TLA+ code. That would help you understand the abstraction of the system, but is the LLM's abstraction equal to your abstraction?

But vibe-coded TLA+ sounds extremely dangerous, especially in mission-critical stuff where it is required, like smart contracts, pacemakers, aircraft software, etc.

frogmeister57•10mo ago
Using generative chatbots to write a formal spec is the most stupid idea ever. Specs are all about reasoning. You need to do the thinking to model the system in a very simplified manner. Formal methods and the generative BS are at the antipodes of reliability. This is an insult to reason. Please keep this nonsense away from the serious parts of CS.
siscia•10mo ago
Anyone who has tried to write formal verification will tell you that there is a WIDE gap between thinking and writing the specs.

Any tool that makes formal verification more accessible should be welcome.

I believe the valuable part is how accessible we make thinking together with machines.

We humans are great at creating innovative solutions, not so great at checking and verifying every single thing that can go wrong. Machines help with that.

kelseyfrog•10mo ago
Interesting. I've always wanted to formalize the US Constitution into TLA+ in order to find loopholes.