Rust, on the other hand, is pretty established by now and has fewer breaking changes. It also has more compile-time safety guarantees, which makes vibe-coding a bit more confident.
On top of that, Zig has rejected their upstream contributions. So they'd have to maintain their own compiler fork in the long run, which is probably just technical debt.
Yet here we are, with what looks like a massive undertaking for vibe coding.
Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.
Like maybe you get the LLM to try _really hard_ to churn through everything, but this feels like a big case of "perils of the lack of laziness".
Of course, if you already have a good idea of how to deal with allocations etc. "idiomatically", maybe that works out well. And to the credit of the port guide's author, Bun's allocations seem to be explicit already, which maps pretty well onto Rust.
My only experience with ports so far is Python to Go, and it's been near flawless (just enough stupid shit to make me feel justified to be in the loop).
Zig is a great language and I want to see it succeed, but this is a prudent move for Bun.
I think there are even longer-term plays that Anthropic should be looking at in this space, but it seems like they've decided Rust is the right thing, so fair play. I would be (am!) thinking about making an LLM-optimized high-level language that you can generate and train on intensively, because you control the language spec.
Claude struggling at Zig: the above + memory safety issues if you run “fast” mode.
It is generally true that Rust code tends to be written in a way that lets the compiler catch issues at compile time. The same is not as true for Zig, Python, or JS.
So the difference is not in writing new stuff but in maintaining the existing codebase. Rust's rigidity makes it potentially harder to break stuff compared to Zig's general flexibility. As a project grows and matures, different types of contributors naturally come in and it's unreasonable to expect everyone to learn about historical footguns that may have accumulated.
Something JS-adjacent could certainly be better known than an obscure language, but are that many people using drop-in Node replacements?
Sometimes it is worth it, but it may also kill projects. A risky move, and AI doesn't help its cause here: AI can save a lot of time when making ports, it's one of the things it does best, but it doesn't protect against regressions.
I am not using Bun in production, but if I were, I would consider it a risk. Not because of Rust vs Zig, but because of changing things that work.
Claude has absolutely no idea what it's doing with bleeding edge zig unless you feed it source and guide it closely (in which case it's useful for focused work) - I'm building a game engine & tcp/udp servers with it and it requires a hands-on approach and actually understanding what's being built.
I imagine these are not really concerns with rust at this point.
In my ideal world the team behind Bun would be putting in the work to keep up with modern Zig, but it's starting to look like they're running mostly on vibes, in which case Rust might be a better choice.
I think this is true regardless of what language you’re using.
I’ve built a lot in Zig and there’s no difference between vibing stuff in it versus TypeScript/React. Claude can “one-shot” them both, and will mimic existing code or grep the standard library to figure everything out.
Which isn't particularly difficult - the language docs and std source come with the installation, so all you need to do is tell Claude where those directories are in your skill/plugin/CLAUDE.md.
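For example, a couple of lines like these in a project's CLAUDE.md are usually enough (the install path here is illustrative):

```markdown
## Zig reference
- Language reference: ~/zig-0.14.0/doc/langref.html
- Standard library source: ~/zig-0.14.0/lib/std/
Grep the std source for the actual API before writing code against it.
```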
> and guide it closely (in which case it's useful for focused work)
It does struggle sometimes with writing code that compiles and uses the APIs correctly. My approach to that so far has been to write test blocks describing the desired interface + semantics, and asking Claude to (`zig test` -> fix errors) in a loop until all the tests pass.
It doesn’t look like that at all. Do you think that all use of AI is vibe coding?
"+27,939Lines changed: 27939 additions & 0 deletions"
of new rust code
https://github.com/oven-sh/bun/compare/claude/phase-a-port
This single commit is 65k lines of additions
https://github.com/oven-sh/bun/commit/ffa6ce211a0267161ae48b...
There's a decent article by Simon Willison that talks about this: https://simonwillison.net/2025/Mar/19/vibe-coding/
> I’m seeing people apply the term “vibe coding” to all forms of code written with the assistance of AI. I think that both dilutes the term and gives a false impression of what’s possible with responsible AI-assisted programming.
They recently proposed some of their internal tools to be the official Rust implementation[0] of Connect RPC[1]. As a protobuf based library set, this includes a new Rust-based protobuf compiler, Buffa[2].
[0]: https://github.com/orgs/connectrpc/discussions/7#discussionc...
fwiw, I suspect it's less of an undertaking than you may think. I've been playing with AI to rewrite Postgres in Rust over the past couple of weeks, and I've found the AI to be exceptional at doing rewrites. Having an existing codebase to reference prevents a lot of the problems you have with vibe coding: you have an existing architecture that works well and a test suite you can check against.
Over the course of a month I've gone from nothing to passing over 95% of the Postgres test suite. Given Jarred built Bun, I bet he'll be able to go much faster
That's because it's not vibe coding - stingraycharles doesn't seem to understand what vibe coding is. Vibe coding was defined here https://x.com/karpathy/status/1886192184808149383
> There's a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
This is very far from Anthropic's migration plans.
My benchmark is basically, "are you letting the AI drive."
In this case, an AI appears to have written the migration guide...
It's probably a bit of both.
I've had more success vibe coding Rust than I have in more dynamic languages. I suspect the strictness of the Rust compiler forces the AI agent to produce better code. Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.
I'm not a Rust dev, but even I've noticed that tokio seems kind of shunned in most projects like this. Why is that? Is it just bad or what?
Source: I worked on Deno, competed directly with Bun on HTTP performance (and won on some metrics).
You'd much rather have the runtime you're building manage task scheduling, allocation, and all that itself. It's the most natural design choice to make.
I would guess dealing with breaking changes is a big motivation for this.
[0]: https://github.com/oven-sh/bun/compare/claude/phase-a-port
The slides: https://go.dev/talks/2015/gogo.slide#3
An interesting similarity:
>We had our own C compiler just to compile the runtime.
The Bun team maintains their own fork of Zig too.
Problem is fanboys like YOU.
As a fan of the language, I hope it leads to some reflection on things that might need to change moving forward.
https://github.com/oven-sh/bun/compare/claude/phase-a-port#d...
That isn't particularly surprising, but the point is I'd expect getting things more stable than the Zig version to take a bit.
What's most interesting here for me is:
- a vibe-coding project with a big, clear outcome and acceptance criteria,
- on a public, working, high-performance, full-featured production codebase,
- by the leading LLM maker, known for the strongest coding ability.
A good example no matter whether it succeeds or not.
As an aside, I've been bitten by Zig's breaking changes on my own projects as well. It's taken the shine off of Zig and I'm looking at alternatives.