
Leanstral: Open-Source foundation for trustworthy vibe-coding

https://mistral.ai/news/leanstral
229•Poudlardo•3h ago

Comments

blurbleblurble•2h ago
Truly exciting
andai•2h ago
Trustworthy vibe coding. Much better than the other kind!

Not sure I really understand the comparisons though. They emphasize the cost savings relative to Haiku, but Haiku kinda sucks at this task, and Leanstral is worse? If you're optimizing for correctness, why would "yeah it sucks but it's 10 times cheaper" be relevant? Or am I misunderstanding something?

On the promising side, Opus doesn't look great at this benchmark either — maybe we can get better than Opus results by scaling this up. I guess that's the takeaway here.

DrewADesign•2h ago
It’s really not hard — just explicitly ask for trustworthy outputs only in your prompt, and Bob’s your uncle.
miacycle•52m ago
Assuming that what you're dealing with is assertable. I guess what I mean to say is that in some situations it's difficult to articulate what is correct and what isn't, depending upon the situation in which the software executes.
flowerbreeze•2h ago
They haven't made the chart very clear, but it seems the benchmark has configurable passes: at 2 passes it's better than Haiku and Sonnet, and at 16 passes it starts closing in on Opus (though it's not quite there), while consistently being less expensive than Sonnet.
andai•1h ago
Oh my bad. I'm not sure how that works in practice. Do you just keep running it until the tests pass? I guess with formal verification you can run it as many times as you need, right?
lefrenchy•2h ago
Does Mistral come close to Opus 4.6 with any of their models?
DarkNova6•1h ago
Not at the moment, but a Mistral 4 release seems close, which will likely bridge the gap.
re-thc•1h ago
Mistral Small 4 is already announced.
androiddrew•12m ago
MoE, but in the 120B range. Man, I wish it was an 80B. I have 2 GPUs with 62 GiB of usable VRAM. A 4-bit 80B gives me some context window, but 120B pushes me into system RAM.
chucky_z•1h ago
I use mistral-medium-3.1 for a lot of random daily tasks, along with the vibe cli. Personally, I'd say mistral is my preferred 'model vendor' by far at this point. They're extremely consistent between releases, while each one just feels better. I also have a strong personal preference for the output.

I actively use gemini-3.1-pro-preview, claude-4.6-opus-high, and gpt-5.3-codex as well. I prefer them all for different reasons, however I usually _start_ with mistral if it's an option.

sa-code•1h ago
Why not Large 3? It's larger and cheaper
tjwebbnorfolk•1h ago
Mistral hasn't been in the running for SOTA for quite a while now.
patall•2h ago
Maybe a naive question: given that they see better performance with more passes, but the effect plateaus after a few, would performance increase if they used a different model per pass, e.g. leanstral, kimi, qwen, and leanstral again instead of 4x leanstral?
andai•1h ago
This is called an "LLM alloy"; you can even do it in agentic workflows, where you simply swap the model on each LLM invocation.

It does actually significantly boost performance. There was an article on here about it recently, I'll see if I can find it.

Edit: https://news.ycombinator.com/item?id=44630724

They found the more different the models were (the less overlap in correctly solved problems), the more it boosted the score.
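The rotation idea can be sketched in a few lines. This is a toy illustration, not the linked article's implementation; the model names are taken from the comment above and the selector would be wired into whatever client the agent uses:

```python
import itertools


def make_alloy(models):
    """Return a selector that cycles through model names, one per LLM
    invocation, so consecutive agentic steps use different models."""
    cycle = itertools.cycle(models)
    return lambda: next(cycle)


# Hypothetical model ids; an agent would pass pick() as the model
# for each request instead of a fixed name.
pick = make_alloy(["leanstral", "kimi", "qwen"])
```

The intuition from the linked thread is that the less the models' correctly-solved sets overlap, the more the rotation helps, since a later pass gets a genuinely different attempt rather than the same failure mode again.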

patall•1h ago
That sounds quite interesting. Makes me wonder if sooner or later they will have to train multiple independent models that cover those different niches. Thanks for the link.
cyanydeez•1h ago
One would think that, with LoRAs being so successful in Stable Diffusion, more people would be focused on constructing framework-based LoRAs; but the economics of all this probably preclude trying to go niche in any direction, and everyone just keeps building the do-all models.
jasonjmcghee•2h ago
Curious if anyone else had the same reaction as me

This model is specifically trained on this task and significantly[1] underperforms opus.

Opus costs about 6x more.

Which seems... totally worth it based on the task at hand.

[1]: based on the total spread of tested models

DarkNova6•1h ago
I'm never sure how much faith one can put into such benchmarks but in any case the optics seem to shift once you have pass@2 and pass@3.

Still, the more interesting comparison would be against something such as Codex.

beernet•1h ago
Agreed. The idea is nice and honorable. At the same time, if AI has proven one thing, it's that quality usually wins out over control and trust (except in some sensitive sectors and applications). Of course it's less capital-intensive, so it makes sense for a comparably small EU startup to focus on that niche. It likely won't move the top-line needle much, though, for the reasons stated.
miohtama•1h ago
The alignment tax eats directly into model quality, by double-digit percentages.
hermanzegerman•54m ago
The EU could help them very much if it started enforcing its laws, so that no US company can process European data, given that the Americans are unwilling to budge on the CLOUD Act.

That would also help reduce our dependency on American hyperscalers, which is much needed given how untrustworthy the US is right now (and also hostile towards Europe, as their new security strategy lays out).

kittikitti•1h ago
This is great, congratulations to the Mistral team! I'm looking forward to the code arena benchmark results. Thanks for sharing.
Havoc•1h ago
What are these "passes" they reference here? Haven't seen that before in LLM evals

Could definitely be interesting for having another model run over the codebase when looking for improvements

rockinghigh•1h ago
It's the number of attempts at answering the question.
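For what it's worth, eval write-ups usually report this as pass@k: the probability that at least one of k sampled attempts is correct, estimated from n attempts of which c passed. A minimal sketch of the standard unbiased estimator (the function name is mine):

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n attempts (c of them correct),
    passes. Computed as 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect attempts: every k-subset contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 2 attempts of which 1 passed, pass@1 is 0.5, while pass@2 is 1.0, which is why raising the pass count moves a model up the chart.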
lsb•1h ago
The real world success they report reminds me of Simon Willison’s Red Green TDD: https://simonwillison.net/guides/agentic-engineering-pattern...

> Instead of taking a stab in the dark, Leanstral rolled up its sleeves. It successfully built test code to recreate the failing environment and diagnosed the underlying issue with definitional equality. The model correctly identified that because def creates a rigid definition requiring explicit unfolding, it was actively blocking the rw tactic from seeing the underlying structure it needed to match.
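The `def`-vs-`rw` issue the quote describes can be shown with a toy Lean 4 example (the definition and theorem here are mine, not from the Leanstral trace): a plain `def` is rigid, so a rewrite-style proof first has to unfold it explicitly before the underlying structure becomes visible to later tactics.

```lean
def double (n : Nat) : Nat := n + n

theorem double_add (a b : Nat) : double (a + b) = double a + double b := by
  unfold double  -- expose `n + n`; without this, `rw` can't see the structure
  omega          -- linear arithmetic closes the unfolded goal
```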

skanga•51m ago
TDD == Prompt Engineering, for Agentic coding tasks.
flakiness•1h ago
FYI The Lean 4 paper: https://dl.acm.org/doi/10.1007/978-3-030-79876-5_37
elAhmo•1h ago
I don’t know a single person using Mistral models.
pelagicAustral•1h ago
Me neither; they're not ready for prime time imo. I have a yearly sub and the product is just orders of magnitude behind Anthropic's offering. I use Code for real-world stuff and I am happy with the result; Mistral is just not something I can trust right now.
consumer451•46m ago
Isn't their latest speech to text model SOTA? When I tested it on jargon, it was amazing.

https://news.ycombinator.com/item?id=46886735

Adrig•28m ago
I used Ministral for data cleaning.

I was surprised: even though it was the cheapest option (against other small models from Anthropic), it performed the best in my benchmarks.

glinksss•56m ago
Oh, is this a new AI model?
miacycle•53m ago
The TDD foundation! We might need one of those. :)
JoshTriplett•53m ago
Pleasant surprise: someone saying "open source" and actually meaning Open Source. It looks like the weights are Apache-2.0 licensed.
esperent•44m ago
I absolutely called this a couple of weeks ago, nice to be vindicated!

> I'm interested to see what it is in the age of LLMs or similar future tools. I suspect a future phase change might be towards disregarding how easy it is for humans to work with the code and instead focus on provability, testing, perhaps combined with token efficiency.

> Maybe Lean combined with Rust shrunk down to something that is very compiler friendly. Imagine if you could specify what you need in high level language and instead of getting back "vibe code", you get back proven correct code, because that's the only kind of code that will successfully compile.

https://news.ycombinator.com/item?id=47192116

hnipps•32m ago
Here we go.
htrp•16m ago
Is the Haiku comparison because they've distilled from that model?
thoughtfulchris•9m ago
There have been a lot of conversations recently about how model alignment is relative and diversity of alignment is important - see the recent podcast episode between Jack Clark (co-founder of Anthropic) and Ezra Klein.

Many comments here point out that Mistral's models are not keeping up with other frontier models - this has been my personal experience as well. However, we need more diversity of model alignment techniques and companies training them - so any company taking this seriously is valuable.

Leanstral: Open-Source foundation for trustworthy vibe-coding

https://mistral.ai/news/leanstral
232•Poudlardo•3h ago•40 comments

Meta’s renewed commitment to jemalloc

https://engineering.fb.com/2026/03/02/data-infrastructure/investing-in-infrastructure-metas-renew...
311•hahahacorn•5h ago•128 comments

The “small web” is bigger than you might think

https://kevinboone.me/small_web_is_big.html
284•speckx•6h ago•127 comments

US commercial insurers pay 254% of Medicare for the same hospital procedures

https://github.com/rexrodeo/american-healthcare-conundrum
127•rexroad•6h ago•74 comments

My Journey to a reliable and enjoyable locally hosted voice assistant (2025)

https://community.home-assistant.io/t/my-journey-to-a-reliable-and-enjoyable-locally-hosted-voice...
303•Vaslo•10h ago•92 comments

Show HN: Oxyde – Pydantic-native async ORM with a Rust core

https://github.com/mr-fatalyst/oxyde
40•mr_Fatalyst•3d ago•21 comments

Show HN: Trackm, a personal finance web app

https://trackm.net
5•iccananea•23m ago•0 comments

Why I love FreeBSD

https://it-notes.dragas.net/2026/03/16/why-i-love-freebsd/
328•enz•12h ago•152 comments

Language Model Teams as Distributed Systems

https://arxiv.org/abs/2603.12229
63•jryio•6h ago•27 comments

Starlink Mini as a failover

https://www.jackpearce.co.uk/posts/starlink-failover/
168•jkpe•15h ago•147 comments

Show HN: Thermal Receipt Printers – Markdown and Web UI

https://github.com/sadreck/ThermalMarky
10•howlett•3d ago•3 comments

AnswerThis (YC F25) Is Hiring

https://www.ycombinator.com/companies/answerthis/jobs/CNdatw5-founding-engineering-lead
1•ayush4921•2h ago

Launch HN: Voygr (YC W26) – A better maps API for agents and AI apps

58•ymarkov•7h ago•37 comments

Polymarket gamblers threaten to kill me over Iran missile story

https://www.timesofisrael.com/gamblers-trying-to-win-a-bet-on-polymarket-are-vowing-to-kill-me-if...
1268•defly•11h ago•849 comments

Apideck CLI – An AI-agent interface with much lower context consumption than MCP

https://www.apideck.com/blog/mcp-server-eating-context-window-cli-alternative
113•gertjandewilde•8h ago•103 comments

Nvidia Launches Vera CPU, Purpose-Built for Agentic AI

https://nvidianews.nvidia.com/news/nvidia-launches-vera-cpu-purpose-built-for-agentic-ai
105•lewismenelaws•3h ago•68 comments

Show HN: Claude Code skills that build complete Godot games

https://github.com/htdt/godogen
132•htdt•7h ago•77 comments

On The Need For Understanding

https://blog.information-superhighway.net/on-the-need-for-understanding
70•zdw•4d ago•30 comments

In space, no one can hear you kernel panic

https://increment.com/software-architecture/in-space-no-one-can-hear-you-kernel-panic/
5•p0u4a•3d ago•1 comment

AirPods Max 2

https://www.apple.com/airpods-max/
174•ssijak•10h ago•341 comments

Home Assistant waters my plants

https://finnian.io/blog/home-assistant-waters-my-plants/
241•finniananderson•4d ago•128 comments

Corruption erodes social trust more in democracies than in autocracies

https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2026.1779810/full
628•PaulHoule•12h ago•323 comments

The bureaucracy blocking the chance at a cure

https://www.writingruxandrabio.com/p/the-bureaucracy-blocking-the-chance
79•item•1d ago•110 comments

Lies I was told about collaborative editing, Part 2: Why we don't use Yjs

https://www.moment.dev/blog/lies-i-was-told-pt-2
188•antics•3d ago•95 comments

Cert Authorities Check for DNSSEC from Today

https://www.grepular.com/Cert_Authorities_Check_for_DNSSEC_From_Today
80•zdw•1d ago•181 comments

Kona EV Hacking

http://techno-fandom.org/~hobbit/cars/ev/
112•AnnikaL•5d ago•64 comments

Lazycut: A simple terminal video trimmer using FFmpeg

https://github.com/emin-ozata/lazycut
139•masterpos•11h ago•46 comments

US Job Market Visualizer

https://karpathy.ai/jobs/
385•andygcook•8h ago•306 comments

MoD sources warn Palantir role at heart of government is threat to UK security

https://www.thenerve.news/p/palantir-technologies-uk-mod-sources-government-data-insights-securit...
573•vrganj•12h ago•232 comments

Comparing Python Type Checkers: Typing Spec Conformance

https://pyrefly.org/blog/typing-conformance-comparison/
90•ocamoss•11h ago•33 comments