Yes, tools like Miri, which this very post is about.
But I think their true strategy is to have AI produce "fixes" like these which will end up infecting the entire codebase: https://github.com/oven-sh/bun/pull/30728
It's been like a day since the merge, presumably such followups are coming.
And it's not like Bun was a beacon of stability when it was written in Zig, either. It has been segfaults all over the place.
The goal of a library is to provide the encapsulation such that the unsafety doesn't spread.
If undefined behavior occurs, the fault lies with whoever wrote `unsafe { ... }` in the body of a function. If I write `unsafe` in order to call an unsafe library function, and I don't meet the library function's prerequisites, then it's my fault. If the library internally writes `unsafe` while providing a supposedly safe wrapper, and I never actually wrote `unsafe { ... }`, then it's the library's fault. If neither I nor the library wrote `unsafe { ... }`, then it is the fault of the compiler.
Using "in safe Rust" means that `unsafe` doesn't occur either in the user code nor in the library. In this context, since we've heard how many uses of `unsafe { ... }` exist in the Bun rewrite, I'd read "in safe Rust" to mean "without calling any functions marked as unsafe".
Unsafe Rust allows you to tell the compiler “hold my beer”. It’s a concession to the reality that the normal restrictions of Rust disallow some semantically valid programs that you might otherwise want to write. The safeguards work great in most cases, but in some they’re overly restrictive.
In practice, the overwhelming majority of code can be written in safe Rust, and the compiler can have your back. The majority of the rest is for performance reasons, for interacting with external code like C libraries over FFI, or for expressing semantics that safe Rust struggles with (e.g., circular references).
Here are some links on this topic with some examples:
https://doc.rust-lang.org/nomicon/working-with-unsafe.html
https://www.ralfj.de/blog/2016/01/09/the-scope-of-unsafe.htm...
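And for the FFI case specifically, a minimal sketch of how the `unsafe` gets confined to a single audited wrapper; the extern function and its contract here are made up for illustration:

```rust
// Hypothetical binding, purely for illustration; not a real C API.
extern "C" {
    // Assumed contract: fills `buf` with up to `len` bytes and returns
    // the number of bytes written, or -1 on error.
    fn fill_random(buf: *mut u8, len: usize) -> isize;
}

/// Safe wrapper: the `unsafe` block is confined here, and the wrapper
/// itself upholds the callee's preconditions (valid pointer, correct length).
pub fn random_bytes(out: &mut [u8]) -> Result<usize, ()> {
    let written = unsafe { fill_random(out.as_mut_ptr(), out.len()) };
    if written < 0 {
        Err(())
    } else {
        Ok(written as usize)
    }
}
```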
In this context "UB" means something different than how you're using it. The UB being mentioned here is the "nasal demons" form, i.e., programs which contain undefined behavior have no defined meaning according to the language semantics.
What you're talking about is probably better described in this context as "unspecified behavior": behavior that the language standard does not pin down, but that does not render the program meaningless. For example, IIRC in C++ the order in which g(), h(), and i() are evaluated in f(g(), h(), i()) is unspecified - an implementation can pick any order, and the order doesn't have to be consistent, but no matter the order the program is valid (approximately speaking).
So this "unspecified behavior" might turn into the more nasal demon type when g(), h() and i() share mutable state and assume some particular sequential order of execution. No?
- It's a throw-things-at-the-wall-and-see-what-sticks situation
- LLMs will improve*
- Using LLMs in an agentic way will improve (git worktrees, sliced PRs, spec driven steps)
So what happened here is a mess, but you gotta break a few eggs to make a souffle.
It's a learning step and I am glad it happened, there will be so many things to debrief from this.
I don't use Bun or Rust but fair play to them having a punt.
<Shameless plug> I have been working with Claude Code to spec out and bring back to life a Spring Boot starter library for Apache Solr search
https://github.com/tomaytotomato/spring-data-solr-lazarus
There were a few points where I had to steer it, but the result has been a good implementation.
It’s just a standard Pavlovian response of a bootlicker, it can also be triggered if you mention “tax the rich” and “regulate AI hyperscalers”.
* Make a huge deal out of how "Claude Code enabled the Bun team to rewrite 1M+ lines of Zig to Rust" and write a blogpost, VCs are salivating
* Basic checks fail
* Let Mythos rip the codebase to shreds, spend God knows how much more
* Write a separate blogpost
* Charlatans and smooth brains clap and defend against “delusional anti-AI mob”
* VCs orgasm even harder
Clap, clap, clap. That’s how you make money, folks.
Thus far most of the buzz and marketing has been entirely negative from people who are against AI.
My take is that most of the buzz is also tied to recent negative opinions of Anthropic themselves due to some of their recent decisions.
Couldn't a case be made that it's better to get Bun to the language with the stronger type system first and, once there, use that stronger type system as leverage for these kinds of improvements as a follow-on effort? It seems preferable to requiring perfection on the very first step.
What is a bit disappointing is that the Rust code apparently has APIs that aren't marked unsafe but may cause UB anyway. When doing this kind of translation, I'd always err on the side of caution and start by marking all/most things unsafe. Or prompt the slopbots to do the same I guess.
Then you can go in and verify the safety of individual bits step by step.
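A sketch of what that cautious approach could look like, with hypothetical names: keep the direct port as an `unsafe fn` with its preconditions written down, and only add a checked safe entry point once that bit has been audited:

```rust
/// Direct port of the old function: kept `unsafe` until audited.
/// (`parse_header_raw` is a hypothetical name, not Bun's API.)
///
/// # Safety
/// `ptr` must be valid for reads of `len` bytes.
pub unsafe fn parse_header_raw(ptr: *const u8, len: usize) -> u8 {
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    bytes.first().copied().unwrap_or(0)
}

/// Added later, once the invariants are understood: a safe entry point
/// whose argument type upholds the precondition for us.
pub fn parse_header(data: &[u8]) -> u8 {
    unsafe { parse_header_raw(data.as_ptr(), data.len()) }
}
```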
I don't see what the big deal is here.
In fact using the word "rewrite" itself is pretty inaccurate.
As has been mentioned, the goal was a port so they "could" eventually rewrite most of it to be idiomatic Rust. The main benefit for now is the compiler, and being able to use these tools to fix issues that were already being hidden when it was in Zig.
If you go into this codebase expecting to see idiomatic rust and get angry when it's not there, you are going in with the entirely incorrect attitude.
It's understandable how people see it as AI slop or whatever given the division among developers at the moment. But please see it for what it is instead of just jumping to conclusions.
They may have said that, but quite clearly the value they actually get out of it is getting the headline "AI reimplements complex, broadly used software in 2 weeks, but makes it way better because it's rust now" in front of a million people's eyes, only 1% of whom will ever find out it was mostly fluff
This is entirely disingenuous. Jarred has already made it clear what value they get out of moving off of Zig. Yes they used AI heavily to attempt this goal but I don't see what the big issue is. They haven't even released it yet and Anthropic themselves have said 0 about this.
The "headlines" thus far are really just people completely uninvolved with Bun and with all to gain by perpetrating "AI BAD".
My honest take: the big issue isn't "what if it goes wrong", it's the fear that a migration of this size works out of the box while being done almost entirely by AI.
"Zig, let me Ai you"
"no"
*Ai's Zig fork, suffers from memory bugs*
"Well I'm moving!"
*Ai's code into Rust, suffers from memory bugs*
Bun's fork of Zig was just an unsound hack that at best would have produced a strictly inferior speedup compared to our current work with incremental compilation, which is already plenty usable:
- June 2025 core team starts using it with the zig compiler itself https://ziglang.org/devlog/2025/#2025-06-14
- April 2026 https://ziglang.org/devlog/2026/#2026-04-08
> Zig's AI stance is ridiculous & politically-motivated
Messing with our contributor pipeline is literally an issue for our business model; it can't get more concrete than this.
> An example of this is the changes to type resolution which happened in the 0.16.0 release cycle—these didn’t affect users too much, but had big implications for the compiler implementation. Before those changes, the compiler’s behavior was often highly dependent on the order in which types and declarations were semantically analyzed by the compiler. Some orders might result in successful compilation, while others give compile errors. Single-threaded semantic analysis prevented these bugs from causing user-facing non-determinism. The rewritten type resolution semantics were designed to avoid these issues, but Bun’s Zig fork does not incorporate the changes (and has not otherwise solved the design problems), which means their parallelized semantic analysis implementation will exhibit non-deterministic behavior. That’s pretty much a non-starter for most serious developers: you don’t want your compilation to randomly fail with a nonsense error 30% of the time.
There is a reason why: Zig is upholding quality, and they hate it.
In this case, I would trust the output even less than the input. The input was memory-unsafe but hand-written. The output is memory-unsafe but also vibe-coded and has had no eyeballs on it. What is the point of abusing agentic AI for this use-case?
AI disproportionately empowers the stupid and evil
Have you ever seen what comes out of c2rust? It's awful. It relies on a library of functions which emulate unsafe C pointer semantics with unsafe Rust.
A few years ago, when I was struggling with bugs in OpenJPEG (a JPEG 2000 decoder), someone tried running it through c2rust. The converted unsafe rust segfaulted at the same place the C code did. It's compatible, but not safe.
Main insight: don't do string manipulation in C or unsafe Rust. It's totally the wrong tool for the job.
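To illustrate (this is not actual c2rust output), here's roughly what C-style string handling looks like when carried literally into unsafe Rust: raw pointers, a manual scan for the NUL terminator, and none of the borrow checker's help:

```rust
// Illustration of the C-literal translation style, not real c2rust output.
unsafe fn my_strlen(mut s: *const u8) -> usize {
    let mut n = 0;
    unsafe {
        // Walks memory until a 0 byte; one missing terminator and this is
        // UB, exactly like the C it was translated from.
        while *s != 0 {
            s = s.add(1);
            n += 1;
        }
    }
    n
}

fn main() {
    let buf = b"hello\0";
    let len = unsafe { my_strlen(buf.as_ptr()) };
    assert_eq!(len, 5);
}
```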
That is indeed the point of c2rust. It gives you a baseline that is semantically identical to the original codebase, and with that passing the full test suite, bug-for-bug, you can then start gradually adopting rusty idioms to improve the memory safety of the codebase.
This is awful. They have some internal string format borrowed from a Zig library where the address of the item is in the low end of a pointer and the length is at the high end. Why are they doing that in 2026? It lets you save a few bytes at best. It doesn't enforce the Rust rule that strings must be strict UTF-8. It's totally alien to the safe way Rust handles strings.
[1] https://github.com/oven-sh/bun/blob/main/src/bun_core/string...
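For the curious, a rough sketch of what such a packed word looks like; the 48/16 bit split here is guessed for illustration and is not Bun's actual layout. Note that nothing in it enforces pointer validity or UTF-8, unlike `&str`:

```rust
// Sketch of a packed pointer+length word (bit layout guessed, not Bun's).
// Contrast with `&str`: a full (pointer, length) pair plus a guaranteed
// UTF-8 invariant, checked at the safe/unsafe boundary.
#[derive(Copy, Clone)]
struct PackedStr(u64);

impl PackedStr {
    const ADDR_BITS: u32 = 48;
    const ADDR_MASK: u64 = (1u64 << Self::ADDR_BITS) - 1;

    fn new(ptr: *const u8, len: usize) -> Self {
        debug_assert!(len < (1 << 16), "length must fit in the high bits");
        PackedStr((ptr as u64 & Self::ADDR_MASK) | ((len as u64) << Self::ADDR_BITS))
    }

    /// # Safety
    /// The packed address must still point at `len` live bytes; nothing
    /// here enforces validity, or UTF-8 the way `&str` would.
    unsafe fn as_bytes<'a>(self) -> &'a [u8] {
        let ptr = (self.0 & Self::ADDR_MASK) as usize as *const u8;
        let len = (self.0 >> Self::ADDR_BITS) as usize;
        unsafe { std::slice::from_raw_parts(ptr, len) }
    }
}
```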
This doesn’t work like you think it does. These things are full of errors and make the code very verbose and hard to reason about. It works with small apps, not entire rewrites.
They did ;) a highly dynamic one...
This would require experience, judgement, and most of all humility. Some vibecoders, in all their hubris, try to build a Tower of Babel with non-deterministic models and are shocked when the project inevitably fails.
The bar for matching tsc's behavior is really _really_ high. See:
https://github.com/type-challenges/type-challenges
I'm not against using LLMs to write a lot of code. But verification should be 100x more robust now that we can output code at this rate.
The issue isn't the existence of undefined behavior that miri would catch. The issue is exposing an API that allows undefined behavior from safe code - which miri only catches if you go write the test that proves it.
This isn't an altogether unreasonable thing to happen during an initial port of code from an unsafe language. You can go around later, as the Bun team seems to be doing, and make sure that the functions wrapping unsafe code do so correctly. Temporarily marking some unsafe functions as safe during a porting stage isn't a real issue. It's a bit strange to merge it into the main repo in this state, but not a wholly unreasonable thing to do if the team has decided that they're definitely doing this. The only real issue would be if they made an actual release with the code in this state.
It's also a bit unfortunate that they didn't immediately set up their tests to run under Miri, if only because LLMs respond so well to good tests. I know they didn't do this not because of this GitHub issue (which doesn't demonstrate it) but because there's another test [1] that absolutely does invoke undefined behavior that Miri would catch. Though the code it's testing doesn't actually appear to be used anywhere, so it's not much of a real issue. That said, it's obviously early in the porting process... maybe they'll get around to it (or just get rid of all this unsafe code that they don't actually need).
[1] https://github.com/oven-sh/bun/blob/4d443e54022ceeadc79adf54... - the pointers derived from the first mutable reference are invalidated by creating a new mutable reference to the same object. In C terms, think of a "mutable reference" as "a restrict pointer through which a trivial mutation is made". It's easy to do this properly (derive all the pointers from the same mutable reference); it just wasn't done properly here.
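For anyone who wants to see the pattern in isolation (a standalone reproduction, not the Bun code): a raw pointer derived from one mutable reference gets used after a second mutable reference to the same object is created. `cargo miri run` flags it even though a normal build appears to work:

```rust
fn main() {
    let mut value = [0u8; 4];

    // Derive a raw pointer from a first mutable reference...
    let first = &mut value;
    let p = first.as_mut_ptr();

    // ...then create a second mutable reference to the same object.
    // Under Miri's aliasing rules (Stacked/Tree Borrows) this invalidates
    // the pointer `p` that was derived from `first`.
    let second = &mut value;
    second[0] = 1;

    // Writing through the stale pointer is the kind of UB Miri reports,
    // even though a normal debug/release build appears to work fine.
    unsafe { *p = 2 };
}
```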
PS. Spamming github just makes people less likely to work in the open. Please don't. We can all judge this work just fine on third party sites.
PPS. And we might want to withhold judgement until it's in a published state. Judging intermediate working states doesn't seem terribly fair or interesting to me.
I'd be concerned that by jumping onboard with this sort of development process I'd lose touch with how to engineer software in a detail-oriented or remotely rigorous way.
It also makes me question what sort of value the entire Bun project ever had if a drop-in replacement can just be thrown in here like it's nothing. Why do we need all these JS runtimes again?
The AI bubble is so large that we've also forgotten how useless and dumb a lot of software engineering labor was even before LLMs came along. We were already in a bubble.
All that is to say, I think it's useful to reframe some conversations about AI as, "if AI can accomplish this task, was it ever actually valuable?" I think for some specific things, the answer will be yes, but the tech industry has been huffing its own farts for so long I really don't think anyone has sight anymore of what's economically valuable in a ground truth sense. Much like LLMs themselves, this confusion pollutes the entire well of discourse about their economic utility.
Step 2: Purchase an entire company for a product that, if you squint, might help paper over the entirely predictable problems that arise from using the wrong tools to implement the wrong architecture, because surely the solution isn’t reevaluating your original engineering choices.
Step 3: Perform a buggy, vibe-code rewrite of the tool you just bought. A tool you only need because — for whatever internal political reasons — sunk cost means you can only keep digging.
Step 4: ???
What would have been significantly better is just rewriting Claude Code in a language that's actually well suited to what it's doing in the first place (which could well be Rust; Codex is written in it, as prior art). It's funny how the vibe-coding promoters are keen on things like this, rewriting other codebases as fast as possible with little quality checking, but are still protective of their own code.
dangerlibrary•54m ago
This asymmetry is well understood by marketing and PR professionals, and actively exploited.
[0] https://en.wikipedia.org/wiki/Trust_Me,_I%27m_Lying
kibwen•41m ago
Did they even claim it was "memory-safe"? Every discussion of this topic has had dozens of comments noting that their vibed codebase is bursting at the seams with unaudited unsafe blocks, lightly reviewed by people who not only seem not to understand Rust, but who seem incensed at the idea of needing to understand any programming language in the first place.
veidr•26m ago
They did cite Rust's safety as a motivating factor for the port. That doesn't imply trying to achieve that simultaneously with the language change — which is good, because that would be insane. (Or, if you prefer, even more insane.)
You cannot faithfully port a codebase to a new language while also radically re-architecting it. You have to choose.
They want the safety benefits of Rust going forward; i.e., after it's finished, when they then write new code in Rust.