Wasm 3.0 especially (released just 2 months ago) is really gunning for a more general-purpose "assembly for everywhere" status (not just "compile to the web"), and it looks like it's accomplishing that.
I hope they add some POSIXy stuff to it so I can write cross-platform command-line TUIs that do useful things without needing to be recompiled for different OS/chip combos (at the cost of a 10-20% performance loss versus native compilation, not critical for all but the most demanding use cases), and that are likely to simply keep working on all future OS/chip combos (assuming you can run the wasm, of course).
Are you aware of WASI? WASI preview 1 provides a portable POSIXy interface, while WASI preview 2 is a more complex platform abstraction beast.
(Keeping the platform separate from the assembly is normal and good - but having a common denominator platform like POSIX is also useful).
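For what it's worth, the portable-CLI idea above already works today with plain std Rust compiled against WASI preview 1. A minimal sketch; the `wasm32-wasip1` target name and the `wasmtime` invocation below are assumptions about current toolchains, not something from this thread:

```rust
// A tiny command-line tool using only std, so the same source compiles
// natively or to a WASI module without any changes.
use std::io::{self, Read, Write};

// Number each input line, like `cat -n`; pulled out as a pure function
// so the logic is easy to test independently of stdin/stdout.
fn number_lines(input: &str) -> String {
    input
        .lines()
        .enumerate()
        .map(|(i, l)| format!("{:>4}  {}\n", i + 1, l))
        .collect()
}

fn main() -> io::Result<()> {
    let mut buf = String::new();
    io::stdin().read_to_string(&mut buf)?;
    io::stdout().write_all(number_lines(&buf).as_bytes())
}
```

Assuming a WASI-capable runtime is installed, something like `cargo build --target wasm32-wasip1` followed by `wasmtime run target/wasm32-wasip1/debug/app.wasm` should run the same module on any OS/chip combo the runtime supports.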
You can already create threads in Wasm environments (we even got fork working in WASIX!). However, there is an upcoming Wasm proposal that adds threads support natively to the spec: https://github.com/WebAssembly/shared-everything-threads
is this something that is expected to "one day" be part of WASM proper in some form?
If you want to compile threaded code, things should already work (without waiting for any proposal in the Wasm space). If you want to run it, there are a few options: use wasmer-js for the browser (Wasmer using the browser's Wasm engine + WASIX) or use normal Wasmer to run it server-side.
No need to wait for the Wasm "proper" implementation. Things should already be runnable with no major issues.
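To make the "compile threaded code" point concrete: the source you'd compile is just ordinary std::thread Rust. Whether it actually runs under a given Wasm runtime (WASIX, a threads-enabled toolchain) depends on that runtime and is assumed here, not guaranteed:

```rust
use std::thread;

// Plain std threading: sum each chunk on its own thread and combine.
// The same source targets native today, or a wasm threads build once
// the toolchain/runtime pair supports it.
fn parallel_sum(chunks: Vec<Vec<u64>>) -> u64 {
    let handles: Vec<_> = chunks
        .into_iter()
        .map(|chunk| thread::spawn(move || chunk.iter().sum::<u64>()))
        .collect();
    // Join every worker and add up the per-chunk partial sums.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}
```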
But the times I've used the collaboration tooling in Zed have been really excellent. It just sucks it's not getting much attention recently. In particular I'd really like to see some movement on something that works across multiple different editors on this front.
I'm glad to hear they're still thinking about these kinds of features.
Only issue is that some of the managed services are still pretty half-baked, and introduce insane latency into things that should not be slow. KV checks/DB queries through their services can be double-to-triple digit ms latencies depending on configs.
We need WASM-native interfaces to become common so we can get rid of JS.
Realistically for a low traffic app it's fine, but it really makes you question how badly you want to be writing Rust.
As far as I can tell, the problem stems from the fact that CF Workers is still V8 - it's just a web browser as a server. A Rust app in this environment has to compile the whole stdlib and include it in the payload, whereas a JS app is just the JS you wrote (and the libs you pulled in). Then the JS gets to use V8's data structures and JSON parsing which is faster than the wasm-compiled Rust equivalents.
At least this is what I ran into when I tried a serious project on CF Workers with Rust. I tried going full Cloudflare but eventually migrated to AWS Lambda where the Rust code boots fast and runs natively.
Regardless, not sure why a Rust engineer would choose this path. The whole point of writing a service in Rust is that you trade 10x build complexity and developer overhead for a service that can run in a low-memory, low-CPU VM. Seems like the wrong tool for the job.
Thanks for the confirmation. I was confused as well. I always thought that the real use of WASM is to run exotic native binaries in a browser, for example, running Tesseract (for OCR) in the browser.
BUT it's worth noting that WebAssembly still has some performance overhead compared to native. The article chooses convenience and portability over raw speed, which might be fine for an editor backend.
For me, it's a superior experience anyway. I also prefer it in editors that support both (like VS code).
You can run the REPL with a Jupyter kernel as well.
This implementation sounds fully dependent on a service that Zed has little say over.
It's gonna be hard to compete with the scaling Cloudflare offers if they migrate to their own dedicated infra, but of course it would become much cheaper than paying per request.
mariopt•2mo ago
What is the performance overhead when comparing Rust against WASM?
I also think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to only one vendor is problematic.
wmf•2mo ago
kevincox•2mo ago
Last time I compared (about 8 years ago) WASM was closer to double the runtime. So things have definitely improved. (I had to check a handful of times that I was compiling with the correct optimizations in both cases.)
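For anyone wanting to reproduce a comparison like this, the usual approach is to time the same kernel compiled natively and compiled to wasm under a runtime. A hedged sketch; the specific workload is made up for illustration:

```rust
use std::time::Instant;

// A branch-light numeric kernel (an LCG-style accumulator): the kind of
// tight loop where native vs. wasm gaps are most visible. Build it once
// natively and once for a wasm target, then compare elapsed times.
fn kernel(n: u64) -> u64 {
    let mut acc: u64 = 0;
    for i in 1..=n {
        acc = acc.wrapping_mul(6364136223846793005).wrapping_add(i);
    }
    acc
}

fn main() {
    let start = Instant::now();
    let result = kernel(10_000_000);
    // Print the result so the optimizer can't delete the loop entirely.
    println!("result = {result}, elapsed = {:?}", start.elapsed());
}
```

Remember to compile with optimizations on both sides (e.g. `--release`), since an unoptimized build can easily exaggerate the gap.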
pmarreck•2mo ago
It may get even closer with Wasm 3.0, released 2 months ago, since it has things like 64-bit address support, more flexible vector instructions, typed references (which remove some runtime safety checks), basic GC, etc. https://webassembly.org/news/2025-09-17-wasm-3.0/
jsheard•2mo ago
https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...
pmarreck•2mo ago
2) The bounds checking argument is a problem, I guess?
3) This article makes no mention of type-checking (typed references), also a new feature, which moves some checks that would normally run at runtime to a single check at validation time; this may include bounds-style checks.
laktek•2mo ago
Supabase Edge Functions runs on the same V8 isolate primitive as Cloudflare Workers and is fully open-source (https://github.com/supabase/edge-runtime). We use the Deno runtime, which supports Node built-in APIs, npm packages, and WebAssembly (WASM) modules. (disclaimer: I'm the lead for Supabase Edge Functions)
mariopt•2mo ago
Several years ago, I used MeteorJS; it uses Mongo and is somewhat comparable to Supabase. The main issue that burned me and several projects was that it was hard or even impossible to bring in different libraries. It was a full-stack solution that did not evolve well: great for prototyping until it became unsustainable, and even hard to onboard new devs due to "separation of concerns" issues, mostly from the big learning curve of one big framework.
Having learned from this, I only build apps where I can bring whatever library I want. I need tools/libraries/frameworks to be as agnostic as possible.
The thing I love about Cloudflare Workers is that you are not forced to use any other CF service. I have full control of the code, I combine it with HonoJS, and I can deploy it as a server or serverless.
About the runtimes: having to choose between Node, Deno, and Bun is something I do not want to do. I'm sticking with Node, and hopefully the runtimes will be compatible with standard JavaScript.
laktek•2mo ago
It's possible for you to self-host Edge Runtime on its own. Check the repo; it has Dockerfiles and an example setup.
> I have full control of the code, I combine it with HonoJs and I can deploy it as a server or serverless.
Even with Supabase's hosted option, you can choose to run Edge Functions and opt out of others. You can run Hono in Edge Functions, meaning you can easily switch between CF Workers and Supabase Edge Functions (and vice versa) https://supabase.com/docs/guides/functions/routing?queryGrou...
> Having to choose between Node, Deno and Bun is something that I do not want to do, I'm sticking with Node and hopefully the runtimes would be compatible with standard JavaScript.
Deno supports most of Node's built-in APIs and npm packages. If your app uses modern Node, it can be deployed on Edge Functions without having to worry about the runtime (that said, I agree there are quirks, and we are working on native Node support as well).
mariopt•2mo ago
tomComb•2mo ago
It is a terrific technology, and it is reasonably portable, but I think you would be better off using something like Supabase, where the whole platform is open source and portable, if those are your goals.
kentonv•2mo ago
People can and do use this to run Workers on hosting providers other than Cloudflare.
yencabulator•2mo ago
https://github.com/cloudflare/workerd#warning-workerd-is-not...
(I know you know this, but frankly you should add a disclaimer when you comment about CF or Capnp. It's too convenient for you to leave out the cons.)
kentonv•2mo ago
(Though if we assume no zero-days in V8, then workerd as-is actually does provide strong sandboxing, at least as strong as (arguably stronger than) any other JS runtime. Unfortunately, V8 does in fact have zero-days, quite often.)
What mariopt said above was: "being tied to only 1 vendor is problematic." My point here is that when you build on Workers, you are not tied to one provider, because you can run workerd anywhere. And we do, in fact, have former customers who have migrated off Cloudflare by running workerd on other providers.
> frankly you should add a disclaimer when you comment about CF or Capnp
I usually do. Sometimes I forget. But my name and affiliation are easily discovered by clicking my profile. I note that yours are not.
yencabulator•2mo ago
Meanwhile, somebody like Supabase is making the claim that what you see as open source is what they run, and Deno says their proprietary stuff is KV store and such, not the core offering.
Now, do these vendors have worse security, by trusting the V8 isolates more? Probably. But clearly Cloudflare Workers are a lot more integrated than just "run workerd and that's it" -- which is the base Supabase sales pitch, with Postgrest, their "Realtime" WAL follower, etc.
(I am not affiliated with any of the players in this space; I have burned a few fingers trying to use Cloudflare Workers, especially in any advanced setup or with Rust. You have open, valid, detailed, reproducible, bug reports from me.)
kentonv•2mo ago
https://docs.deno.com/runtime/fundamentals/security/#executi...
The intro blog post for Supabase edge functions appears to hint that, in production, they use Deno Deploy subhosting: https://supabase.com/blog/edge-runtime-self-hosted-deno-func...
Note that Deno Deploy is a hosting service run by Deno-the-company. My understanding is that they have proprietary components of their hosting infrastructure just like we do. But disclaimer: I haven't looked super-closely, maybe I'm wrong.
But yes, it's true that we don't use containers, instead we've optimized our hosting specifically for isolates as used in workerd, which allows us to run more efficiently and thus deploy every app globally with better pricing than competitors who only deploy to one region. Yes, how we do that is proprietary, just like the scheduling systems of most/all other cloud providers are also proprietary.
But how does that make anyone "tied to one vendor"?
yencabulator•2mo ago
Because you can't, in the general case, recreate the setup on a different platform? That's like the definition of that expression.
BTW here's Deno saying Deno Deploy is process-per-deployment with seccomp. No idea if that's always true, but I'd expect them to boast about it if they were doing something different. https://deno.com/blog/anatomy-isolate-cloud
Process-per-deployment is something you can reasonably recreate on top of K8S or whatever for self-hosting. And there's always KNative. Note that in that setting scheduling and tenant sandboxing are not the responsibility of the hosting provider.
Personally, I haven't really felt that cold starts are a major problem when I control my stack, don't compile Javascript at startup, can leave 1 instance idling, and so on. Which is why I'm pretty much ok with the "containers serving HTTP" stereotype for many things, when that lets me move them between providers with minimal trouble. Especially considering the pain I've felt with pretty much every "one vendor" stack, hitting every edge case branch on my way falling down the stack of abstractions. I've very very much tried to use Durable Objects over and over and keep coming back to serving HTTP with Rust or Typescript, using Postgres or SQLite.
Pretending you don't see the whole argument for why people want the option of self-hosting the whole real thing really comes across as the cliched "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"
imron•2mo ago
At that point it doesn’t really matter if it’s cold start or not.