The meat of the article is informative, but the headline and motivation are based on this statement. It doesn’t reflect my experience, but maybe I just don’t hang out in the same internet spots as the OP.
> We don’t yet see major websites entirely built with webassembly-based frameworks
I don’t know why this entered into the zeitgeist. I don’t think this was ever a stated goal of the WebAssembly project. I get the sense that some people assumed it and then keep wondering why this non-goal hasn’t been realized.
But what happened? Why am I not using it for all of my other random side projects? I posit that the JS ecosystem got so incredibly good that it's a no-brainer for a very large percentage of workflows. React + Vite + TypeScript is an incredibly productive stack. I can use it to build all but the most demanding apps productively. Additionally, JS is pretty fast these days, so the speed boost from WASM isn't actually that meaningful for most use cases. Only really heavy use cases like media editing or Figma-like apps really benefit from what WASM has to offer.
Meanwhile JavaScript will be much faster to download since it is smaller, and it can execute while it is downloading.
It will be a while before WASM GC comes close to any language's own GC.
If size is your top priority, you can produce very small binaries, for example with C. Project [0] emulates an x86 architecture, including hardware, BIOS, and DOS compatibility, and ends up with a WebAssembly size of 78 kB uncompressed and a 24 kB transfer size.
Not many people are going to want to be rolling their own libc like that author. Most people just compile their app and ship megabytes of webassembly at the expense of their users. To me webassembly is just a shortcut to ship faster because you don't have to port existing code.
Emscripten provides a libc implementation based on musl, and so does wasi-libc (https://github.com/WebAssembly/wasi-libc).
If you explicitly list which functions you want to export from your WebAssembly module, the linker will remove all the unused code, in the same way that "tree-shaking" works for JS bundlers.
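For instance, in Rust the export list is just the set of #[no_mangle] functions (a sketch; assumes a crate built with crate-type = ["cdylib"] for wasm32-unknown-unknown, and the function names are made up):

// src/lib.rs -- only `add` appears in the module's export section;
// anything not reachable from an export is dropped at link time,
// much like tree-shaking.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Never called from an exported function, so the linker discards it.
#[allow(dead_code)]
fn unused_helper() -> i32 {
    42
}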
In my experience, a WebAssembly module (even with all symbols exported) is smaller than the equivalent native library. The bytecode is denser.
WebAssembly modules tend to be larger than JavaScript because AOT-compiled languages don't care as much about code size--they assume you only download the program/library once. In particular, LLVM (which I believe is the only mainstream WebAssembly-emitting backend) loves inlining everything.
Judicious use of `-Oz`, stripping debug info, and other standard code size techniques really help here. The app developer does have to care about code size, of course.
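Concretely, the post-link half of that usually looks something like this (a sketch using Binaryen's wasm-opt and wabt's wasm-strip; file names are placeholders):

$ wasm-opt -Oz app.wasm -o app.min.wasm   # size-focused optimization pass
$ wasm-strip app.min.wasm                 # drop debug/custom sections in place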
• WebAssembly is not huge. Fundamentally it’s generally smaller than JavaScript, but JavaScript comes with more of a standard library and more of a runtime, which unbalances comparisons. If you use something like Rust, it’s not difficult to get the basic overhead down to something like 10 kB, or for a larger project still well under 100 kB, until you touch things that need Unicode or CLDR tables; and it will generally scale similarly to JavaScript, once you take transport compression into account. If you use something like Go or .NET, sure, then there’s a heavier runtime, maybe a megabyte, maybe two, also depends on whether Unicode/CLDR tables are needed, and then JS will probably win handily on bundle size and startup time.
• JavaScript can’t execute while it’s downloading. In theory speculative parsing and even limited speculative execution is possible, but I don’t think any engine has tried that seriously. As for WebAssembly, it can be compiled and instantiated while streaming, generally at a faster rate than you can download it. The end result is that in an apples-to-apples comparison WebAssembly is significantly faster to start than JavaScript.
I always feel like I'm downloading megabytes of it whenever someone uses it. In practice it is. Even a basic hello world in Rust will set you back a few megabytes compared to the tens of bytes it takes in JavaScript.
>JavaScript comes with more of a standard library and more of a runtime, which unbalances comparisons.
Being able to make programs in a few bytes is a legitimate strength. You can't discount it; it's an effective way JavaScript saves size.
Lies. It’s 35 kB:
$ cargo new x
…
$ cd x
$ cat src/main.rs
fn main() {
println!("Hello, world!");
}
$ cargo build --release --target=wasm32-unknown-unknown
…
$ ls -l target/wasm32-unknown-unknown/release/x.wasm
… 34597 …
And that’s with the default allocator and all the std formatting and panic machinery. Without too much effort, you can get it to under 1 kB, if I remember correctly.

For the rest: I mention comparisons being unbalanced because people often assume it will scale at the rate they’ve seen—twice as much code, twice as much size. Runtimes and heavy tables make for non-scaling overhead. That 35 kB you’ve paid for once, and now can use as much as you like without further growth.
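For a sense of what that "under 1 kB" looks like, here's a minimal sketch (assuming a cdylib crate built in release mode for wasm32-unknown-unknown):

// src/lib.rs -- no std formatting, no allocator, no panic machinery.
#![no_std]
use core::panic::PanicInfo;

#[panic_handler]
fn panic(_: &PanicInfo) -> ! {
    loop {}
}

#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}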
Meanwhile, an empty React project seems to be up to 190 kB now, 61 kB gzipped.
For startup performance, it’s fairly well understood that image bytes are cheap while JavaScript bytes are expensive. WebAssembly bytes cost similar to images.
That's definitely not true.
A debug build of a "hello wasm-bindgen" style Rust program indeed takes ~2MB, but most of that is debug info; disabling that and/or stripping gets it down to 45-80kB (depending how I did it). And a release build starts at 35kB, and after `wasm-opt -O` gets down to 25kB. AFAIK most of the remaining space is used by wasm-bindgen boilerplate, malloc and panic machinery.
...and then, running wasm-bindgen to generate JS bindings somehow strips most of that boilerplate too, down to 1.4kB.
Side note, I never understood how wasm-opt is able to squeeze so much on top of what LLVM already did (it's a relatively fast post-build step and somehow reduces our production binaries by 10-20% and gives measurable speedups).
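For what it's worth, the release/strip setup described above usually boils down to a few lines of Cargo.toml (standard Cargo profile keys; exact savings vary by project):

[profile.release]
opt-level = "z"   # optimize for size rather than speed
lto = true        # let LLVM inline and strip across crates
strip = true      # drop the debug info that dominates debug builds
panic = "abort"   # skip the unwinding machinery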
Lots of people pretend they don't need ICU or TZDB, but that means leaving non-English speakers or people outside the US in the cold without support, which isn't the case for JS applications.
I still think this is a major unsolved problem for WebAssembly and I've previously raised it. I understand why it's not solved though - specifying and freezing the bitstream for ICU databases is a big task, etc.
Plus, WASM game runtimes need to bundle redundant 2D or 3D stacks, audio, fonts, harfbuzz, etc. yet don't expose eg. text rendering capabilities on par with those that browsers already have natively.
The whole thing is prioritizing developer experience over user experience.
The native ecosystem never paid attention to binary size optimization, but the JS ecosystem paid attention to code size from the very beginning.
WebAssembly makes it possible to:
* Run x86 binaries in the browser via JIT-ting (https://webvm.io)
* Run Java applications in the browser, including Minecraft (https://browsercraft.cheerpj.com)
* Run node.js containers in the browser (https://browserpod.io)
It's an incredibly powerful tool, but very much a power-user one. Expecting your average front-end logic to be compiled in WebAssembly does not make much sense.
Why not? .NET Blazor and others already do that. In my eyes this was the whole hype of WASM. Replace JS. I don't give a crap about running node/java/whatever in the browser, why would I want that? I can run those outside the browser. I mean sure if you have some use case for it that's fine and I'm glad WASM lets you do it but I really don't see why most devs would care about that. We use the browser for browsing the web and displaying our websites.
To me the browser is for displaying websites and I make websites but I loathe JS. So being able to make websites without JS is awesome.
Not every language is a good source for targeting WASM, in the sense that you don't want to bring a whole standard library, custom runtime etc with you.
High-level languages may fare better if their GC is compatible with Wasm's GC model, though, as in that case the resulting binaries could be quite small. I believe Java-to-wasm binaries can be quite lean for that reason.
In C#'s case, it's probably mostly Blazor's implementation, but it's not a good fit in this form for every kind of website (but very nice for e.g. an internal admin site and the like).
Modern Blazor can do server side rendering for SEO/crawlers and fast first load similar to next.js, and seamlessly transition to client side rendering or interactive server side rendering afterwards.
Your info/opinion may be based on earlier iterations of Blazor.
> we are at 2MB compressed with https://minfx.ai
That's still pretty bloated. That's enough size to fit an entire Android application a few years ago (before AndroidX) and simple Windows/Linux applications. I'll agree that it's justified if you're optimizing for runtime performance rather than first-load, which seems to be appropriate for your product, right?!

What is this 2 MB for? It would be interesting to hear about your WebAssembly performance story!
Regarding the website homepage itself: it weighs around 767.32 kB uncompressed in my testing, most of which is an unoptimized 200+kB JPEG file and some insanely large web fonts (which honestly are unnecessary, the website looks _pretty good_ and could load much faster without them).
Additionally, Blazor is a bad fit for .NET CMS and commerce platforms; none of them supports it for rendering components.
It's pretty impressive how far along CheerpJ is right now. I kinda wish this existed about five or ten years ago with this level of performance, maybe it would've allowed some things in the web platform to pan out differently.
Consider dropping in our Discord for further help: https://discord.leaningtech.com
More so than anything technical though, there sure seems to be a lot of bad blood between the group of people behind AssemblyScript and the people behind WASI. This feels like a classic case of small initial technical disagreements spiraling out of control and turning into a larger conflict fueled by personalities and organizational politics. I agree that overall this doesn't add confidence to the WebAssembly ecosystem as a whole, but it's not clear to me that the obvious conclusion is "WASI is controversial" so much as "WebAssembly seems like it might have a problem with infighting".
Furthermore, there are now competing interest groups within the Wasm camp. Wasm originally launched as a web standard: an extension of the JavaScript environment. However, some now want to use Wasm as the basis for replacing containers: an extension of a POSIX environment.
There were several articles that promoted it heavily - aka the hype phase.
And then ... nothing really materialized. If you look at, for instance, ruby WASM, https://github.com/ruby/ruby.wasm - there is virtually zero real documentation. Granted, this is a specific problem of ruby, and Japanese devs not understanding English; but when you search for webassembly, contrast it to the numerous tutorials we have with regards to HTML, CSS, JavaScript. I get it, it is younger, it is harder than the other three tech stacks, but virtually nothing really improves here. It is like a stillborn technology that has only a tiny niche, e.g. Rust developers. That's about it. And I fear this is also not going to change anymore. After a while, if the hype fails to deliver, people will lose interest - and the technology will eventually subside. That also happened to e.g. XHTML and the heavy use of XML in general in, say, 2000. I also don't think WebAssembly can be brought back now that the hype stage has worn off.
If you pulled the plug on WASM, a lot would stop working and it would heavily impact much of the JS frontend world.
What hasn't caught on is modern UI frameworks that are native wasm. We have plenty of old ones that can be made to work via WASM but it's not the same thing. They are desktop UI toolkits running in a browser. The web is still stuck with CSS and DOM trees. And that's one of the areas where WASM is still a bit weak because it requires interfacing with the browser APIs via javascript. This is a fixable problem. But for now that's relatively slow and not very optimal.
Solutions are coming, but that's not going to happen overnight. Web frontend teams being able to swap JavaScript for something else is going to require more work. Mobile frontend developers cross-compiling to web is becoming a thing, though. JetBrains' Compose Multiplatform has native Android/iOS support now, with a canvas-rendered web frontend currently in Beta.
You can actually drive the DOM from WASM. There are some Rust frameworks. I've dabbled with using Kotlin's wasm support to talk to browser DOM APIs. It's not that hard. It's just that Rust is maybe not ideal (too low level/hard) for frontend work, and a lot of languages lack frameworks that target low-level browser APIs. That's going to take years to fix. But a lot compiles to wasm at this point. And you kind of have access to most of the browser APIs when you do, even if there is a little performance penalty.
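For a taste, driving the DOM from Rust with wasm-bindgen/web-sys looks roughly like this (a sketch; assumes the web-sys cargo features Window, Document, Element, HtmlElement, and Node are enabled):

use wasm_bindgen::prelude::*;

#[wasm_bindgen(start)]
pub fn run() -> Result<(), JsValue> {
    // Each of these calls crosses into generated JS glue today,
    // which is where the interop overhead mentioned above comes from.
    let document = web_sys::window().expect("no window").document().expect("no document");
    let p = document.create_element("p")?;
    p.set_text_content(Some("Hello from Wasm"));
    document.body().expect("no body").append_child(&p)?;
    Ok(())
}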
I'm pretty sure this is just plain false. Do you have an example?
Which almost no-one cares about.
WASM will live and die in the browser. I wish the folks behind it would acknowledge that fact and give it sufficient browser interop to finally render JS unnecessary.
Any language can target a combination of JS and Wasm, right now, to get a "fully-featured JS replacement". How would adding more features to Wasm improve that situation?
Adding browser interop to Wasm that obviated the need for JS would, obviously, achieve that goal. Hence the improvement.
If someone wanted to replace C, I would strongly suggest compiling something else to C, yes. That seems kind of obvious. It's how many programming languages that aim to replace C get their start.
To put it another way: I can see how [replacing JS as the interface that the human deals with] can be valuable goal, but why is [replacing JS so completely that it doesn't even exist as generated code] so valuable to you?
Where it has worked is as infrastructure: fast, sandboxed, portable code for the parts that actually need it. A lot of people are already using it indirectly without realizing. So it’s less "what happened to Wasm?" and more "it didn’t become the silver bullet people imagined."
As far as I know, we are the fastest on the market. The multithreaded support is a pain though.
I'm personally a big fan of Wasm; it has been one of my favorite technologies ever since the first time I called malloc from the JS console when experimenting with an early version of Emscripten. Modern JS engines can be almost miraculously fast, but Wasm still offers the best performance and much higher levels of control over what's actually running on the CPU. I've written about this in the past.
The only way it really fell short is in the way that a lot of people were predicting that it would become a sort of total replacement for JS+HTML+CSS for building web apps. In this regard, I'd have to agree. It could be the continued lack of DOM bindings that have been considered a key missing piece for several years now, or maybe something else or more fundamental.
I've tried out some of the Wasm-powered web frameworks like Yew and not found them to provide an improvement for me at all. It just feels like an awkwardly bolted-on layer on top of JS and CSS without adding any new patterns or capabilities. Like you still have to keep all of the underlying semantics of the way JS events work, you still have to keep the whole DOM and HTML element system, and you also have to deal with all the new stuff the framework introduces on top of that.
Things may be different with other frameworks like Blazor which I've not tried, but I just find myself wanting to write JS instead. I openly admit that it might just be my deep experience and comfort building web apps using React or Svelte though.
Anyway, I strongly feel that Wasm is a successful technology. It's probably in a lot more places than you think, silently doing its job behind the scenes. That, to me, is a hallmark of success for something like Wasm.
There's lots of good options that come with windows preinstalled.
I for one hope that doesn't happen anytime soon. YouTube or Spotify could theoretically switch to Wasm drawing to a canvas right now (with a lot of development effort), but that would make the things that are currently possible thanks to the DOM (scraping, ad blockers etc.) harder or impossible.
However ads still need to be delivered over the net so there is still some way to block them (without resorting to router/firewall level blocking).
Not gonna happen.
This is a cat-and-mouse fight, and Facebook already does some ultra-shady stuff like rendering a word as a list of randomly ordered divs, one per character, and only using CSS to display it in a readable way.
But it can't be made impossible, at the worst case we can always just capture the screen and use an AI to recognize ads, wasting a lot of energy. The same is true for cheating in video games and many forms of online integrity problems - I can just hire a good player who would play in my place, and no technology could recognize that.
Perhaps require monitoring of the arm muscle electrical signals, build a profile, match the readings to the game actions and check that the profile matches the advertised player
I wonder how much the developers writing that are being paid to be complete assholes.
I've personally resigned from positions for less and it hasn't cost me much comfort in life (maybe some career progression perhaps but, meh).
If someone else would like to make one, though, I'd be happy to read it.
Just like Elon does.
Elon Musk stands accused of pretending to be good at video games. The irony is delicious:
https://www.theguardian.com/games/2025/jan/20/elon-musk-stan...
>Musk desperately wants to appropriate gamer credibility, but he may be faking it – and doing exactly what toxic nerds have been accusing women of doing for decades
(To make scraping and automation harder, perhaps?)
lol, you can scrape anything visible on your screen.
I see it as an opportunity to do better.
The only real avenue for js-free web applications would be to completely abandon the browser rendering path and have everything render into a canvas. There are experiments to use UI toolkits designed for the desktop. But even that I see more of a niche solution and unlikely to become very widely used. HTML/css/js have become the lingua franca of UI development and they are taking over desktop applications as well. Why should that trend reverse?
Yeah, go ahead and trash the little bit of accessibility we still have. <canvas> by itself already asks webdevs to shit on people with visual disabilities. But getting rid of the DOM (for vague reasons) would really nail the coffin of these pesky blind users. After all, why should they be able to use anything on the internet?
This, and AI making webdevs consider obfuscating things for scraping reasons, and Microsoft Recall making devs play with the idea of obfuscating OS-level access to their (privacy-sensitive) apps, which in essence would also trash accessibility, are the new nightmares that will haunt me for the next few years.
That just means you personally like JS. In my opinion many languages are better than it.
One such example: audio time stretch in the browser based upon a C++ library [1]. There is no way that, if this were implemented in JS, it could deliver (a) similar performance or (b) source code portability to native apps.
[1] https://bungee.parabolaresearch.com/change-audio-speed-pitch
"Not yet"? It will never reach "bare-metal levels of performance and energy efficiency".
https://floooh.github.io/tiny8bit/
You can squeeze out a bit more by building with -march=native, but then there's no reason that a WASM engine couldn't do the same.
Still surprised about the 5% though- I’ve generally seen quite a bit more of a gap.
The initial order-of-magnitude jump in perf that JITs provided took us from the 2-5x overhead for managed runtimes down to some (1 + delta)x. That was driven by runtime type inference combined with a type-aware JIT compiler.
I expect that there's another significant, but smaller perf jump that we haven't really plumbed out - mostly to be gained from dynamic _value_ inference that's sensitive to _transient_ meta-stability in values flowing through the program.
Basically you can gather actual values flowing through code at runtime, look for patterns, and then inline / type-specialize those by deriving runtime types that are _tighter_ than the annotated types.
I think there's a reasonable amount of juice left in combining those techniques with partial specialization and JIT compilation, and that should get us over the hump from "slightly slower than native" to "slightly faster than native".
I get it's an outlier viewpoint though. Whenever I hear "managed jitcode will never be as fast as native", I interpret that as a friendly bet :)
The battlecry of Java developers riding their tortoises.
Don’t we have decades of real-world experience showing native code almost always performs better?
For most things it doesn’t matter, but it always rubs me the wrong way when people mention this about JIT since it almost never works that way in the real world (you can look at web framework benchmarks as an easy example)
AOT compilers without PGO data usually tend to perform worse when those conditions aren't met.
Which is why the best of both worlds is using JIT caches that survive execution runs.
What are the real world chances that a) one's compiled code benefits strongly from runtime data flow analysis AND b) no one did that analysis at the compilation stage?
Some sort of crazy off label use is the only situation I think qualifies and that's not enough.
There's a lot of lore you pick up with Javascript when you start getting into serious optimization with it; and one of the first things you learn in that area is to avoid changing the shapes of your objects because it invalidates JIT assumptions and results in your code running slower -- even though it's 100% valid Javascript.
The idea that an absurdly dynamic language like JS, where all objects are arbitrary property bags with prototypal delegation chains that are runtime mutable, would execute at under 2x the cost of raw native performance was just a matter-of-fact impossibility.
Until it wasn't. And the technology reason it ended up happening was research that was done in the 80s.
It's not surprising to me that it hasn't happened yet. This stuff is not easy to engineer and implement. Even the research isn't really there yet. Most of the modern dynamic language JIT ideas which came to the fore in the mid-2000s were directly adapting research work on Self from about two decades prior.
Dynamic runtime optimization isn't too hot in research right now, and it never was to be honest. Most of the language theory folks tend to lean more in the type theory direction.
The industry attention too has shifted away. Browsers were cutting edge a while back and there was a lot of investment in core research tech associated with that, but that's shifting more to the AI space now.
Overall the market value prop and the landscape for it just doesn't quite exist yet. Hard things are hard.
Vanessa Freudenberg [1], Craig Latta [2], Dave Ungar [3], Dan Ingalls, and Alan Kay had some great historical and fresh insights. Vanessa passed recently -- here's a thread where we discussed these exact issues:
https://news.ycombinator.com/item?id=40917424
Vanessa had this exactly right. I asked her what she thought of using WASM with its new GC support for her SqueakJS [1] Smalltalk VM.
Everyone keeps asking why we don't just target WebAssembly instead of JavaScript. Vanessa's answer -- backed by real systems, not thought experiments -- was: why would you throw away the best dynamic runtime ever built?
To understand why, you need to know where V8 came from -- and it's not where JavaScript came from.
David Ungar and Randall B. Smith created Self [3] in 1986. Self was radical, but the radicalism was in service of simplicity: no classes, just objects with slots. Objects delegate to parent objects -- multiple parents, dynamically added and removed at runtime. That's it.
The Self team -- Ungar, Craig Chambers, Urs Hoelzle, Lars Bak -- invented most of what makes dynamic languages fast: maps (hidden classes), polymorphic inline caches, adaptive optimization, dynamic deoptimization [4], on-stack replacement. Hoelzle's 1992 deoptimization paper blew my mind -- they delivered simplicity AND performance AND debugging.
That team built Strongtalk [5] (high-performance Smalltalk), got acquired by Sun and built HotSpot (why Java got fast), then Lars Bak went to Google and built V8 [6] (why JavaScript got fast). Same playbook: hidden classes, inline caching, tiered compilation. Self's legacy is inside every browser engine.
Brendan Eich claims JavaScript was inspired by Self. This is an exaggeration based on a deep misunderstanding that borders on insult. The whole point of Self was simplicity -- objects with slots, multiple parents, dynamic delegation, everything just another object.
JavaScript took "prototypes" and made them harder than classes: __proto__ vs .prototype (two different things that sound the same), constructor functions you must call with "new" (forget it and "this" binds wrong -- silent corruption), only one constructor per prototype, single inheritance only. And of course == -- type coercion so broken you need a separate === operator to get actual equality. Brendan has a pattern of not understanding equality.
The ES6 "class" syntax was basically an admission that the prototype model was too confusing for anyone to use correctly. They bolted classes back on top -- but it's just syntax sugar over the same broken constructor/prototype mess underneath. Twenty years to arrive back at what Smalltalk had in 1980, except worse.
Self's simplicity was the point. JavaScript's prototype system is more complicated than classes, not less. It's prototype theater. The engines are brilliant -- Self's legacy. The language design fumbled the thing it claimed to borrow.
Vanessa Freudenberg worked for over two decades on live, self-supporting systems [9]. She contributed to Squeak EToys, Scratch, and Lively. She was co-founder of Croquet Corp and principal engineer of the Teatime client/server architecture that makes Croquet's replicated computation work. She brought Alan Kay's vision of computing into browsers and multiplayer worlds.
SqueakJS [7] was her masterpiece -- a bit-compatible Squeak/Smalltalk VM written entirely in JavaScript. Not a port, not a subset -- the real thing, running in your browser, with the image, the debugger, the inspector, live all the way down. It received the Dynamic Languages Symposium Most Notable Paper Award in 2024, ten years after publication [1].
The genius of her approach was the garbage collection integration. It amazed me how she pulled a rabbit out of a hat -- representing Squeak objects as plain JavaScript objects and cooperating with the host GC instead of fighting it. Most VM implementations end up with two garbage collectors in a knife fight over the heap. She made them cooperate through a hybrid scheme that allowed Squeak object enumeration without a dedicated object table. No dueling collectors. Just leverage the machinery you've already paid for.
But it wasn't just technical cleverness -- it was philosophy. She wrote:
"I just love coding and debugging in a dynamic high-level language. The only thing we could potentially gain from WASM is speed, but we would lose a lot in readability, flexibility, and to be honest, fun."
"I'd much rather make the SqueakJS JIT produce code that the JavaScript JIT can optimize well. That would potentially give us more speed than even WASM."
Her guiding principle: do as little as necessary to leverage the enormous engineering achievements in modern JS runtimes [8]. Structure your generated code so the host JIT can optimize it. Don't fight the platform -- ride it.
She was clear-eyed about WASM: yes, it helps for tight inner loops like BitBlt. But for the VM as a whole? You gain some speed and lose readability, flexibility, debuggability, and joy. Bad trade.
This wasn't conservatism. It was confidence.
Vanessa understood that JS-the-engine isn't the enemy -- it's the substrate. Work with it instead of against it, and you can go faster than "native" while keeping the system alive and humane. Keep the debugger working. Keep the image snapshotable. Keep programming joyful. Vanessa knew that, and proved it!
[1] Freudenberg et al. SqueakJS paper (DLS 2014, Most Notable Paper Award 2024). https://freudenbergs.de/vanessa/publications/Freudenberg-201...
[2] Craig Latta, Caffeine. Smalltalk livecoding in the browser. https://thiscontext.com/
[3] Self programming language. Prototype-based OO with multiple inheritance. https://selflanguage.org/
[4] Hoelzle, Chambers & Ungar. Debugging Optimized Code with Dynamic Deoptimization (1992). https://bibliography.selflanguage.org/dynamic-deoptimization...
[5] Strongtalk. High-performance Smalltalk with optional types. http://strongtalk.org/
[6] Lars Bak. Architect of Self VM, Strongtalk, HotSpot, V8. https://en.wikipedia.org/wiki/Lars_Bak_(computer_programmer)
[7] SqueakJS. Bit-compatible Squeak/Smalltalk VM in pure JavaScript. https://squeak.js.org/
[8] SqueakJS JIT design notes. Leveraging the host JS JIT. https://squeak.js.org/docs/jit.md.html
[9] Vanessa Freudenberg. Profile and contributions. https://conf.researchr.org/profile/vanessafreudenberg
It's then just a matter of how your team values runtime performance vs other considerations such as workflow, binary portability, etc. Virtually all projects have an acceptable range of these competing values, which is where JIT shines, in giving you almost all of the performance with much better dev economics.
Obviously JITting means you'll have a compiler executing sometimes along with the program which implies a runtime by construction, and some notion of warmup to get to a steady state.
Where I think there's probably untapped opportunity is in identifying these meta-stable situations in program execution. My expectation is that there are execution "modes" that cluster together more finely than static typing would allow you to infer. This would apply to runtimes like wasm too - where the modes of execution would be characterized by the actual clusters of numeric values flowing to different code locations and influencing different code-paths to pick different control flows.
You're right that, on the balance of things, trying to, say, allocate registers at runtime will necessarily allow for less optimization scope than doing it ahead of time.
But, if you can be clever enough to identify, at runtime, preferred code-paths with higher resolution than what (generic) PGO allows (because now you can respond to temporal changes in those code-path profiles), then you can actually eliminate entire codepaths from the compiler's consideration. That tends to greatly affect the register pressure (for the better).
It might be interesting just to profile some wasm executions of common programs to see if there are transient clusterings of control-flow paths that manifest during execution. It'd be a fun exercise...
> It could be the continued lack of DOM bindings that have been considered a key missing piece for several years now, or maybe something else or more fundamental.
More fundamentally, every front end developer uses more or less the same JS language (TypeScript included) and every module is more or less interoperable. As WASM is a compilation target, every developer could be using a different language and different tools and libraries. One of them could have reached critical mass, but there is a huge incumbent (JS) that shadows everything else. So special-purpose parts of web apps can be written in one of those other languages, but there still is a JS front end between them and the user, and GUIs can be huge apps. It looks like a system targeted at optimizations.
And for the backend, if one writes Rust or any other compiled language that can target WASM, why compile to WASM and not to native code?
It's like...ah, yeah, I see how you might not hear about it, but uh... it's everywhere.
Agreed and I’m personally glad progress on that hasn’t moved quickly. My biggest fear with WASM is that even the simplest web site would end up needing to download a multi MB Python runtime just because the author didn’t want to use JS!
The sad reality is that the slowness very often comes from the DOM, not from JavaScript. Don’t get me wrong, there could be improvements, e.g. VDOM diffing would be a cinch with tuples and records, but ultimately you have to interact with the DOM at some point.
No, it is NOT "something else or more fundamental" - it is most certainly the lack of proper, performant access to the DOM without having to use crazy, slow hacks. Do that and frontend web-apps will throw JS into the gutter within a decade.
Why though? What's wrong with JS? I feel like it's gotten a lot better over the years. I don't really understand all the hate.
Let's not go into that for the millionth time, and instead perhaps ask yourself why TS is wildly successful and why, even before that, everyone was trying to use anything-but-JS.
Ok, that's fair. My goal with this question wasn't to open a can of worms. But whenever I see a strong averse reaction to JS, I assume that the person hasn't tried using _modern_ JS.
> why is TS wildly successful
From my perspective, it stops me from making stupid mistakes, improves autocomplete, and adds more explicitness to the code, which is incredibly beneficial for large teams and big projects. But I don't think that answers my original question, because if you strip away the types, it's JS.
> even before that everyone was trying to use anything-but-js
Because JS used to suck a lot more, but it sucks a lot less now.
so do C, Zig, C++, Go, Rust, Python, Ruby, PHP, Ada, ...
From my (outsider) perspective, I think the main roadblock atm is standardizing the component model, which would open the door to WIT translations for all the web APIs, which would then allow browsers to implement support for those worlds into browser engines directly, perhaps with some JS polyfill during the transition. Some people really don't like how slowly component model standardization has progressed, hence all the various glue solutions, but the component model is basically just the best glue solution and it's important to get it right for all the various languages and environments they want to support.
There's a lot of prerequisites for DOM access from WASM that need to be built first before there can be usable DOM access from within WASM, and those are steadily being built and added to the WASM specification. Things like value references, typed object support, and GC support.
Wasm with 'fast' DOM manipulation opens the door to every language compiling to wasm to be used to build a web app that renders HTML.
> Wasm with 'fast' DOM manipulation opens the door to every language compiling to wasm to be used to build a web app that renders HTML.
This was never the goal of Wasm. To quote this article [1]:
> What should be relevant for working software developers is not, "Can I write pure Wasm and have direct access to the DOM while avoiding touching any JavaScript ever?" Instead, the question should be, "Can I build my C#/Go/Python library/app into my website so it runs with good performance?"
Swap out "pure Wasm" with <your programming language> and the point still stands. If you really want to use one language to do everything, I'm pretty sure just about every popular programming language has a way of transpiling to JS.
Blazor WASM probably is among the best approaches to what is possible with WASM today, for better and worse. C# is a great language to write domain code in. A lot of companies like C# for their backends so you get same-language sharing between backend and frontend. The Razor syntax is among the better somewhat type-safe template languages in the wild, with reasonably good IDE support. C# was designed with FFI in mind (as compared to Java and some other languages) so JS imports and exports fit reasonably well in C#; the boundaries aren't too hairy.
That said, C# by itself isn't always that big of leap from Typescript. C# has better pattern matching today, but overall the languages feel like step-brothers and in general the overhead of shipping an entire .NET CLR, most of the BCL, and your C# code as web assembly is a lot more than just writing things more vanilla in Typescript. You can also push C# more functional with libraries like LanguageExt (though you also fight the reasons to pick C# by doing so as so many engineers don't think LanguageExt feels enough like C# to justify using C#).
I'm curious to try Bolero [0] as F# would be a more interesting jump/reason for WASM, but I don't think I could sell it to engineering teams at my day job. (Especially as it can't use Razor syntax, because Razor is pretty deeply tied to C# syntax, and has its own very different template languages.)
With WASM not having easy direct access to the DOM, Blazor's renderer is basically what you would expect it to be: it tosses simple objects over to a tiny Virtual DOM renderer on the JS side. It has most of the advantages and disadvantages of just using something like React or Preact directly, but obviously a smaller body of existing performance optimizations. Blazor's Virtual DOM has relatively great performance given the WASM to JS and back data management and overhead concerns, but it's still not going to out-compete hand written Vanilla JS any time soon.
> I figure most are under the impression that the advancement of this technology would have had a more visible impact on their work. That they would intentionally reach for and use Wasm tools.
> Many seem to think there is a path to Wasm replacing JavaScript within the browser—that they might not need to include a .js file at all. This is very unlikely.
This is because most of us are not writing fancy browser-based 3D game engines; we're writing boring enterprisey CRUD apps, and the only things we want from our frontend code are HTTP request-response handling and DOM manipulation. Consequently, WASM evangelism comes across as irrelevant and frankly boorish.
Also, wasm doesn’t solve enough real problems. JavaScript sucks but is plenty good enough for most things. Wasm unlocks a few things. But it makes no sense for, say, Steam games that are tens of gigabytes.
If wasm didn’t exist the internet and world would be… fine? Use JavaScript in a browser or go actual native. The space inbetween for wasm exists but is extremely small. Especially for anything other than cool visualization widgets.
The more telling question to me is:
Do we see real world websites that are not just tech demos coming out of WASM aficionados circles. Sites that are actually useful to a significant number of people, even if we wouldn't necessarily call them major websites.
comes to my mind, but there must be more.
Instead of building a true portable binary format with system access, we got a JavaScript VM from TEMU:
- Reference Types
- Exception Handling
- GC
Makes GC'd languages compile better, not systems programming
Meanwhile, the actually needed capabilities remain blocked forever:
- Memory: Still can't mmap, still can't allocate outside linear memory
- Networking: Still needs JS interop bullshit
- Device: Still need JS interop bullshit and still sandboxed behind browser security model
The Result: WASM isn't a serious systems target, it's a compilation artifact for managed languages that could've just targeted JS directly
Wasm and WASI are very promising. As stated in the article, it's safe/isolated by default, it can target different hardware, and almost any popular language (in theory) can be compiled to wasm. It sounds perfect on paper. It's quick to start (quicker than Docker). Maybe it will be a replacement/supplement for lambda-esque types of workloads.
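The quick-start story really is short; a minimal sketch with Rust and Wasmtime (assuming the wasm32-wasip1 target is installed via rustup; file names are hypothetical):

$ cargo build --release --target=wasm32-wasip1
$ wasmtime target/wasm32-wasip1/release/hello.wasm
Hello, world!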
I love it! It reads like, "you can put your snowman in an oven to obtain water and then put the water into a snow machine to get back to your initial material state with almost no information lost."
https://www.opencloudification.com/wp-content/uploads/2025/0...
Though I misremembered it: they transpile wasm back to C and compile that to a native binary.
WASM for frontend, at least, has been held back by missing fundamentals like bundle splitting, hot-reload, debugger symbols, asset integration, etc. We spent a lot of 2025 working on improving this. Vite and friends are really good!
I've been working on a big Dioxus project recently and am pretty happy with where WASM is now. The AI tools make working with Rust code much faster. I'm hopeful people gravitate towards WASM frameworks more now that the tools are better.
Even with the web technologies and frameworks that we have today and how we use them to create solutions, a lot of developers still rely on JavaScript. It may be an outdated language with a lot of issues and problems, but it is still the most popular programming language today. Platforms tend to fail because the workflow surrounding them doesn't offer the flexibility to make the most of the platform.
The most valuable lesson to learn is that potential alone doesn't drive widespread use of a technology. The flexible integration offered by JavaScript is the only thing that made widespread use of it possible. What is the most valuable thing that Web Assembly offered you? What is the missing element in Web Assembly that makes it hard to use? What does the new technology offer that makes it harder to use and does it repeat the same patterns as Web Assembly?
It starts with handcrafted bytecode for a minimal Wasm module in JavaScript, and then guides you through the creation of a simple compiler for a toy language.
It's a fantastic item for the browser toolbox, and I agree with Amea that the "hallmark of success" has been achieved by this technology.
Otherwise, WASM looks like a complete success.
I think this person would be very satisfying to work with, because decisions would be based on a discussion of tradeoffs, and an awareness of similar technologies and approaches throughout computing history.
But it has been "damn, that's a pretty good sandbox we all can compile to".
And of course, it means we can now have safe Python execution services for user input, thanks to stuff like Pyodide.
Just like with the JVM and other better options before and after it, it's politics, interests and momentum. The JVM in the browser was not killed by technology, it was killed by Microsoft. Similarly, we should look at who gains and who loses relative to others if Wasm becomes mainstream.
Easy portability and less platform dependence. Who wants it, who does not? Apple, Microsoft, Google, ...
Just like with JVM the Wasm can be killed with wrong embrace. Microsoft Java Virtual Machine (MSJVM) was named in the United States v. Microsoft Corp. antitrust civil actions, as an implementation of Microsoft's "Embrace, extend and extinguish" strategy. Adopt JVM, remove portability with extensions.
Unless it's something like Figma or a game, why the fuck would they be?
So that you get the joy of writing your website in some language that ports to WebAssembly (and is much more difficult to find frameworks and developers for) and not the native Javascript?
- The ecosystem evolved fast, then slow. This caused adoption problems, especially for things such as WASI and Component model, as a lot of folks did it their own way/using 3rd party, which now meant they had to rewrite to this new thing that still isn't fully properly supported everywhere.
- The way it's "developed" means a lot of things are distributed, unsynced and have different support levels based on the engine you're using. This causes confusion among developers, especially since you have to go from reading an article, to reading a spec, to reading a github issue, then you're 3 repositories deep reading random rust code at 2 AM trying to figure out if you can rely on this stranger's fork just to try something out that should have been dead simple.
- Both of these combined can lead to even greater confusion for our LLMs, as they are trained on varied data which is by now stale, so they can often misunderstand things or look for things that aren't there anymore, just like us humans would.
- And now let's focus on the biggest and most important one IMO: Javascript/Typescript support. That is the holy grail for any technology that wants to be a widely adopted intermediary. While it is possible, you are layering hacks on hacks and begging that the next user won't break it all. Until my users can bring whatever they're using with them, the transition isn't really worth it, and writing my own wiring for every possible combination/need is quite unnecessary. We got a step closer with Web Containers, but by that time a lot of folks already moved onto Bun.
People who want to write JavaScript for backend store functionality can, and then Shopify deploys that code into containers with small IO semantics
> Shopify CLI compiles your JavaScript code using Javy, our JavaScript-to-WebAssembly toolchain.
I've actually used Javy. Kind of interesting: https://github.com/bytecodealliance/javy.
It is very hard to debug WebAssembly applications; depending on the source language, we are still at a printf-debugging kind of experience.
Even the DWARF plugin for Chrome (only there, nowhere else) hasn't been updated since 2023.
Then there is the whole experience, again depending on the language, to produce a .wasm file, alongside the set of imports/exports for the functions, instead of a plain "-arch=wasm".
GC support is now available; however, it is a "yes, but", because it doesn't support all kinds of GC requirements, so some ecosystems, like .NET, still need to ship their own.
Finally we have WIT trying to be yet another go at COM/CORBA/gRPC.
Doesn't the "WASM Component Model" kind of solve this? I've been hacking on a WASM-app runner (in Rust) which basically loads tiny apps that are compiled into "Components", and it seems simple enough to me to use and produce those.
- The main toolchain for compiling existing C codebases to WebAssembly is Emscripten. It still hasn't escaped its tech-demo origins, and it's a rats' nest of compiler flags and janky polyfills. There are at least 3 half-finished implementations of everything. It doesn't follow semver, so every point release tends to have some breaking changes.
- The "modern" toolchain, wasi-sdk, is much more barebones. It's getting to the point of being usable, but I can't use it myself because it ships a precompiled libc and libc++ that use `-O3`, whereas Emscripten recompiles and caches the sysroot and uses `-Oz` if I tell it to. This increases the code size, which is already quite large.
- LLVM is still not very good at emitting optimized WebAssembly bytecode.
- Engines are still not very good at compiling WebAssembly bytecode to optimized machine code.
- Debug info, as you mentioned, is a total mess.
- Rust's WebAssembly tooling is on life support. The rustwasm GitHub organization was "sunset" in mid-2025 after years of inactivity.
- There is still no official way to import WebAssembly modules from JavaScript in a cross-platform manner, in the year of our lord 2026. If you're deploying to the browser and using Vite or raw ES modules, you can use `WebAssembly.instantiateStreaming(fetch(new URL('./foo.wasm', import.meta.url)))` and eat the top-level await. Vite recognizes the `new URL('...', import.meta.url)` pattern and will include the asset in the build output, but most other bundlers (e.g. Rollup and esbuild) do not. If you're on Node, you can't do this, because `fetch` does not work for local files. Most people just give up and embed the WebAssembly binary as a huge Base64 string, which increases the filesize by 33% and greatly reduces the compression ratio.
- If you want multithreaded WebAssembly, you need to set the COOP/COEP headers in order to gain access to `SharedArrayBuffer`. GitHub Pages still doesn't let you do this, although it's the third-most-upvoted feature request. There's a janky workaround that installs a service worker. All bets are off on how that workaround interacts with PWAs.
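For reference, the two headers in question are the standard cross-origin isolation values:

Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp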
If the tooling situation had advanced past "tech demo" in the past 8 years since WebAssembly first shipped, a lot more people would be using it.
Like I said in the other comment, I find it incredibly weird that wasm-opt can still squeeze like 10% better code (as in, both smaller binary and somehow faster code) on top of what LLVM does. And it hasn't changed much within the last 5 years.
And in general, the tooling ecosystem is doing... weirdly. Rust is doing badly yeah, but for example there was also a long stretch of time (I think it's solved now?) when you couldn't pass a .wasm with bulk-memory or other extensions to webpack, as its builtin wasm parser (why was it parsing the binary anyway?) didn't recognize new opcodes.
This article didn't even seriously entertain replacing JavaScript as an idea, saying nothing about why it's "very unlikely." But it's the #1 thing most devs are excited about in WASM: maybe they could ditch JS and use another language instead for browser UI, at least Rust, but maybe Go or even Python.
The reason that's unlikely is that browser UI is defined in standards as a JavaScript API; restandardizing an ABI for low-level languages would take years (perhaps decades). https://danfabulich.medium.com/webassembly-wont-get-direct-d...
Why are things not as heated? Simply because many of the big players are no longer making big bets on the technology, nor are they spending any marketing on making it successful. Mainly because most of their bets have been unsuccessful: WASI, the Component Model. Many of the small players that raised money in the space either died or ended up being acqui-hired by bigger players. The only ones that survive are the ones that truly understand that the tech doesn't matter a thing; it's the product (what you are enabling with WebAssembly) that does.
In my view, this happens because there's a great mismatch between technical capabilities and the go-to-market skills that bringing the tech to the masses requires.
The developers that tend to be technically great and understand the value of WebAssembly are usually not as good at Go To Market to make it successful. For example, WASI proponents wanted to completely break the POSIX model (because in their view, it is completely wrong... and they are partially right!). But they don't only want Wasm to succeed... they also want their mental model of new Operating System calls to go along with it (thus, you tie the success of one to the success of the other).
AI only amplifies the Go To Market skills even further by accelerating tech even more. When your MOAT is fully built around the tech but there's nothing that sustains it (a product), then you have an issue. The market is what sustains it, nothing else. People in the ecosystem cared way more about politics (creating a working group to control other companies) than they cared about creating something that many people could use tomorrow.
At Wasmer, it took us a bit of time to understand this, but over time we have been able to improve our skills to continue capturing value from it.
So, it's possible to create something successful with WebAssembly. You just need to make something people want (tl;dr: it's not the tech!).
Can you expand on that? I've only been using wasm for the web (and the current status quo of JS bindings to the DOM is working just fine for me) so I haven't been following that strongly, but for the last couple of months I was under the impression that people are still trying to push WASI.
I'd say that wasip1 has been successful, but not any future version. You can check which version is the most popular just by looking at the Rust WASI crate versions and how many downloads each one has:
wasip1:
https://crates.io/crates/wasi/0.11.1+wasi-snapshot-preview1
https://crates.io/crates/wasi/0.10.2+wasi-snapshot-preview1
https://crates.io/crates/wasi/0.10.0+wasi-snapshot-preview1
https://crates.io/crates/wasi/0.9.0+wasi-snapshot-preview1
wasip2, p3: https://crates.io/crates/wasi/0.14.7+wasi-0.2.4
https://crates.io/crates/wasi/0.13.3+wasi-0.2.2
https://crates.io/crates/wasi/0.12.1+wasi-0.2.0

If you're willing to risk some safety guarantees, then you can embed SQLite in Go without cgo by using WASM builds of SQLite. In particular, this package: https://github.com/ncruces/go-sqlite3
Note: the risk here is that it's unclear how well-tested SQLite WASM builds are compared to native builds for things like data integrity. With that said, in most of my personal projects using Go, I frequently reach for the WASM builds because it keeps builds fast and easy.
Also, I want to mention that I have seen web apps that are C# / Blazor programs compiled into WebAssembly. The accessibility is predictably terrible, but I have seen at least one such web app in the wild. I assume this is largely why one doesn't encounter WASM web frameworks often. In any case, WASM is surprisingly useful in many niches, and that's kind of the problem for WASM's visibility: the niches where I find WASM useful are almost completely disjoint from each other. But it's a solid technology nowadays. The only real gripe I have is the fact that only wasmtime seems to fully support wasm32-wasip2. You can actually compile quite a lot of Rust backend stuff into WASM and run that instead of a container. Not that this is particularly useful, but I've found it interesting as an exercise.
Most Wasm proposals are very elegantly designed and effective - meaning they provide lots of value for relatively minor specification bloat. Examples are tail-calls, multi-value, custom-page-sizes, memory64 and even gc.
However, simd and relaxed-simd increased spec bloat by a lot, are not future-proof, and caused more fragmentation due to non-determinism. In my opinion, work should have focused on flexible-vectors (SVE-like), which was more aligned with Wasm's original goals of near-native performance. The reason for this development was that simd was simpler to implement and thus users could reap benefits earlier. Unfortunately, it seems the existence of simd completely stalled development of the superior flexible-vectors proposal.
If flexible-vectors (or similar) will ever be stabilized eventually, we will end up in one of two (bad) scenarios:
1) People will have to decide between simd and flexible-vectors for their compilation, depending on their target hardware which is totally against Wasm's original goals.
2) The simd proposal will be mostly unused and deprecated. Dead weight.
simd128 fills a common need (most games use vector operations) and was a viable option with _broad hardware support_. Yes, it adds a ton of instructions and impacts a ton of places with regards to memory ops, but vec4 operations commonly use many of those instructions. Better useful than something that will never have a chance of standardization.
On the other end of the spectrum, things like custom-page-sizes seem like a simple, flexible solution but smell like an implementation nightmare if you already have a runtime, since that really impacts things on a far deeper level (64k pages were probably a mistake, but reading up on the issues of emulating x86 with 4k vs 16k pages on Macs kinda hints at how devious "small" things like that are). I'm not surprised if it never comes about as an official part (only 3 runtimes support it so far).
I can understand the need for tail-calls, but at the same time it's also an annoying can of worms to implement in compilers that weren't prepared (which could be a large part of why it took so long for Safari to support).
wasm-gc really hit a real-world need (bindings did really suck.. they're better but not perfect now) but also comes in a bit half-assed in some respects (languages like C# needing workarounds to use it), same with memory64 being a real-world need.
I can see different camps (popular/functional languages wanting gc, multi-value and tail-calls; games wanting simd128, multithreading, memory64; embedded wanting flexible pages; etc.) all competing and having focus on what they want, but all camps also need to understand that pushing _everything_ will push more risk onto the web (security), and in the end that's what wasm was for: providing a runtime to run non-JS code on the web.
WebAssembly MVP is a good example: it offered limited initial value but was exceptionally simple. Overall, I am happy with how the spec evolved with the exceptions of 128-bit simd and relaxed-simd.
The main issue I see with 128-bit simd is that it was always clear it would not be the final vector extension. Modern hardware already widely supports 256-bit vector widths, with 512-bit becoming more common. Thus, 128-bit simd increasingly delivers only a fraction of native performance rather than the often-cited "near-native" performance. A flexible-vectors design (similar to ARM SVE or the RISC-V vector extension) could have provided a single, future-proof SIMD model and preserved "near-native" performance for much longer.
From a long-term perspective, this feels like a trade-off of short-term value for a large portion of the spec's complexity budget. Though, I may be underestimating the real challenges for JIT implementers, and I am likely biased being the author of a Wasm interpreter where flexible-vectors would be far more beneficial than 128-bit simd.
Why do you think flexible-vectors might never have a realistic path to standardization?
According to the linked blog article, this is not what they are doing, but rather an option they explored. They use JavaScript Realm shims to isolate the execution.
WebAssembly still doesn't provide a way to release memory back to the browser (unless you're using Wasm GC). The linear memory can only grow (see the sketch below).
The Wasm GC limits memory layout and doesn't yet support multi-threading.
Wasm multithreading has many limitations, such as not being able to block on the main thread, not being able to share function tables, etc. And web workers have an "impedance mismatch" with native threads.
And the tooling is also immature (debugging mostly means print debugging).
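To make the point about linear memory concrete, here is the growth-only Memory API as seen from JS (a minimal sketch):

  const mem = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
  mem.grow(15);                                       // now 16 pages = 1 MiB
  // There is no mem.shrink(): once grown, the pages stay reserved for the
  // lifetime of the instance, so peak usage becomes permanent usage.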
I tried it with AG+G3: I prompted it for both the age 21+ screen AND the chat interface. It one-shotted a working version of both in less than a minute.
I was immediately free to start exploring my idea! Adding multiple personality Budtenders, a Stash box for frequent items; it would create mocks and tests. So liberating.
Then I had this other idea in my head for a while, that since DuckDB is broadly portable and can target WASM, I could play with the datasets in the browser and much of the app doesn't need an MCP-connected LLM or any backend services.
> next up, there is another mode where we will browse and visualize the cannabinoid contents. the dataset will be the data here https://github.com/AgentDank/dank-data we will use apache echarts for visualizations. we can probably embed duckdb in the browser and do it all the queries there. we can have some simple UI for exploring, as well as raw SQL query
And it one-shotted an entire DBA application interface with a custom UI and visualization to explore the data. Then I asked for some 3D WebGL charts with echarts and we got that working too. So WASM is gonna be as important as ever, because we have tons of software that can be compiled to WASM, the Web is the UX meeting point, and LLMs can help bring it all together.
If there was a rust frontend framework that compiles to JS, I'd use it for all my frontend code.
How is this true? Seems to me that WebAssembly looks kind of equivalent to the output you'd get from a disassembler for an x86 native program -- sure it's editable, but it's certainly not equivalent to the original source used to produce it.
To put it another way -- WebAssembly encourages theft exactly as much as any other kind of DRM-free publishing; and you can add anti-piracy measures to it in the same way you can with other software.
The second blunder was not allowing for any direct memory mapping. I know it's against the security model, but if you have to copy every pixel one by one to the host, that won't be efficient.
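For example, getting a frame rendered in Wasm onto a canvas typically goes through a copy like this (instance, frameOffset, width, height, and ctx are hypothetical names):

  // View the frame inside the module's linear memory, then copy it out.
  const pixels = new Uint8ClampedArray(
    instance.exports.memory.buffer, frameOffset, width * height * 4);
  ctx.putImageData(new ImageData(pixels, width, height), 0, 0); // full copy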
The third blunder was that when they finally added GC objects, they didn't make any of the objects' properties readable from the host.
Of course crazy things can be (were already!) done with WASM, but it's more like Rust in the beginning and is still advertised as Go ;)
1. to create web versions of traditionally desktop-only applications, to render things like Parquet, PSD, TIFF, SQLite, EPS, ZIP, TGZ, and many more, where C libraries are often the reference implementations. There are almost a hundred supported file formats, most of which are supported through WASM: https://github.com/mickael-kerjean/filestash?tab=readme-ov-f...
2. to create plugins that extend the core application. As of today, you can add your own endpoint or middleware in Filestash, package it with its own manifest, and run server-side code in a constrained environment. For example, there is a libreoffice wasm edition that can run from your browser but requires a couple of HTTP headers to be sent by the server to work, so the plugin has this bit that runs server-side to add those HTTP headers: https://github.com/mickael-kerjean/filestash/blob/master/ser...
3. in the workflow engine to enable people to run their own code in actions while ensuring they can't fuck everything up
Basically in some ways it was a superior idea: benefit from the optimizations we are already doing for JS, but define a subset that is a good compilation target and for which we know the JS VM already performs pretty optimally. So apart from defining the subset there is no extra work to do. On the other hand I'm sure there are JS limitations that you inherit. And probably your "binaries" are a bit larger than WASM. (But, I would guess, highly compressible.)
I guess the good news is that you can still use this approach. It's just that no one does, because WASM stole its thunder. Again, not sure if this is a good or bad thing, but interesting to think about... for instance, whether we could have gotten to the current state much faster by fully adopting asm.js instead of diverting resources into a new runtime.
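For reference, a minimal sketch of what that subset looks like - the "use asm" pragma and the |0 coercions are what let the engine treat this as statically typed int32 code:

  function AsmAdd(stdlib, foreign, heap) {
    "use asm";                 // opts this module into the asm.js subset
    function add(a, b) {
      a = a | 0;               // parameter annotation: a is an int32
      b = b | 0;
      return (a + b) | 0;      // result annotation: int32
    }
    return { add: add };
  }

Any engine runs this as plain JS; an asm.js-aware one could compile it ahead of time.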
WebKVM is what we need..
lol?
Plus the demo's computation happens client side so no data is sent to a server.
We can offer our full payment parsing libraries to the web as developer tools without any code changes. I don't have to care about the details of WASM because it "just works".
Through this lens there are actually two cults with two cult rallying cries:
1. The browser is, arguably, a terrible program execution environment. You have to use a stupid language and there's a ton of pretty standard things you can't do (e.g. have proper concurrency). Let's fix that by baking a proper program execution environment into the browser.
2. There are lots of places where someone builds an application (a real application that runs as a process on an OS) that then needs to support some sort of embedded programmability. Historically there have been many ways to do this: embed Lua, embed a Python interpreter, embed a JS interpreter, write the application in a language that inherently supports runtime dynamic binding (Java, Lisp, ...). Let's make a better version of that thing such that it supports all common languages.
My take is that while WASM was developed by people in the #1 cult, it has actually been adopted by people in the #2 cult. I see WASM used all over the place as a way to host user-provided code inside things. Blockchain nodes are a common use case, for example.
Then there's a third use case that I think motivates many of the comments here which is: back in the day we could make an application and distribute it to users who would run it on their computers. That pretty much isn't possible now for various reasons, but primarily because computers are locked down (particularly mobile). If only we could be allowed to run regular code inside the one execution environment that's not locked down (the browser), imagine what we could do then. Problem is that WASM doesn't have all the features necessary for this use case. Experience in the past with similar things (ActiveX, Java, ...) suggests that if it did, it would also become locked down.
Codec support: Built video and audio decoding in Wasm to bring codec support to browsers that didn't have it natively. Also helped with a custom video player to work around HLS latency issues on Safari.
Code sharing: We had business logic written in C that needed to run on both the frontend and backend. Compiled it to Wasm for the frontend, which guaranteed identical behaviour across environments.
Obfuscation: Currently exploring Wasm for "hiding" some JavaScript logic by rewriting critical parts in Rust and compiling to Wasm. We tried JS obfuscators (including paid ones), but they killed performance. Wasm gives us both obfuscation and better performance.
That modification could be as simple as opening the code file in question in your backend application as a large string and slicing out the parts you don't want. This will likely require some refactoring of the JavaScript code first, to ensure the parts you wish to remove are islands whose absence won't break other things.
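A minimal Node sketch of that idea (the file name and marker comments are hypothetical):

  const fs = require('fs');
  let src = fs.readFileSync('bundle.js', 'utf8');
  // Cut a feature "island" delimited by markers out of the served bundle.
  const START = '/* PREMIUM-START */', END = '/* PREMIUM-END */';
  const a = src.indexOf(START), b = src.indexOf(END);
  if (a !== -1 && b !== -1) {
    src = src.slice(0, a) + src.slice(b + END.length);
  }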
I've explained the security reality to the business many times - any JavaScript sent to the client can be read, executed, proxied, or tampered with. That's just how browsers work.
The current directive is - make it as difficult to understand as reasonably possible. We're not trying to stop determined adversaries (that's impossible), but we can raise the bar high enough to deter script kiddies and casual attackers from easily abusing it.
Some truly open/royalty-free codecs you could use - video: VP8, VP9, AV1. audio: Opus, Vorbis, FLAC.
That said, building a VLC in the browser gets complicated quickly because of licensing - even if the decoder implementation is open source, some codecs have patent licensing requirements depending on jurisdiction and use case. For example, H.264's basic patents have mostly expired, but I'd verify the specific profiles you need.
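If you want to check what a given browser can already decode natively before reaching for a Wasm decoder, a quick sketch (the codec strings are illustrative):

  const v = document.createElement('video');
  // Returns "", "maybe" or "probably" depending on native support.
  console.log(v.canPlayType('video/webm; codecs="vp9, opus"'));
  console.log(v.canPlayType('video/mp4; codecs="av01.0.05M.08"'));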
What does the author mean by this?
<!DOCTYPE html>
<p id=r>
<script>
WebAssembly.instantiateStreaming(fetch(
'data:application/wasm;base64,AGFzbQEAAAABBwFgAn9/AX8DAgEABwUBAWEAAAoJAQcAIAAgAWoL'))
.then(x=>r.append(x.instance.exports.a(51,4)))
</script>
And here is the wat code that we can turn into wasm with wat2wasm and then into base64 for a data URL:

  (module
    (func (export "a") (param i32 i32) (result i32)
      local.get 0
      local.get 1
      i32.add))

Photoshop Online:
https://www.adobe.com/products/photoshop/online.html
And just announced
Unity Online:
benrutter•14h ago
In theory, WASM could be a single cross-platform compile target, which is kind of a CS holy grail. It's easy to let your mind spin up a world where everything is WebAssembly: a desktop environment, a server, day-to-day software applications.
After I've imagined all of that, being told web assembly helps some parts of Figma run faster feels like a big let down. Of course that isn't fair, almost nothing could live up to the expectations we have for WASM.
Its development is also by committee, which is maybe the best option for our current landscape, but isn't famous for getting things going quickly.
vbezhenar•14h ago
merlindru•14h ago
vbezhenar•14h ago
So basically wasm is some optimisation. That's fine but it's not something groundbreaking.
And if we remove the web from the platform list, there were many portable bytecodes: P-code from the Pascal era, JVM bytecode from the modern era, and plenty of others.
IshKebab•14h ago
That's underselling it a bit IMO. There's a reason asm.js was abandoned.
gf000•13h ago
And AFAIK asm.js is the precursor to WASM, like the early implementations just built on top of asm.js's primitives.
creata•12h ago
IshKebab•10h ago
The perfect article: https://hacks.mozilla.org/2017/03/why-webassembly-is-faster-...
Honestly the differences are less than I would have expected, but that article is also nearly a decade old so I would imagine WASM engines have improved a lot since then.
Fundamentally I think asm.js was a fragile hack and WASM is a well-engineered solution.
gr4vityWall•9h ago
I agree 100% with the startup time arguments made by the article, though. No way around it if you're going through the typical JS pipeline in the browser.
The argument for better load/store addressing on WASM is solid, and I expect this to have higher impact today than in 2017, due to the huge caches modern CPUs have. But it's hard to know without measuring it, and I don't know how hard it would be to isolate that in a benchmark.
Thank you for linking it. It was a fun read. I hope my post didn't sound adversarial to any arguments you made. I wonder what asm.js could have been if it was formally specified, extended and optimized for, rather than abandoned in favor of WASM.
hnb2137•14h ago
lxgr•14h ago
creata•12h ago
I don't see how it'd be much different to compiling to JavaScript otherwise. Isn't it usually pretty clear where allocations are happening and how to avoid them?
lxgr•12h ago
Why reverse-engineer each JS implementation if you can just target a non-GC runtime instead?
yencabulator•18m ago
x3haloed•14h ago
The tooling is just not there yet. Everyone is just stuck on supporting Docker still.
IshKebab•14h ago
Also WASI is a way of running a single process. If your app needs to run subprocesses you'll need to do more work.
torginus•14h ago
I also rather like the idea of deploying programs rather than virtual machines.
Docker's cardinal sin imo is that it was designed as a monetizable SaaS product, and suffers from the inner-platform effect, reinventing stuff (package management, lifecycle management, etc.) that didn't need reinventing.
hosh•13h ago
Or this? https://podman-desktop.io/blog/wasm-workloads-on-macos-and-w...
creata•13h ago
The performance would be worse, and it would be harder to integrate with everything else. It might be more secure, I guess.
HendrikHensen•12h ago
mike_hearn•12h ago
- Building / moving file hierarchies around
- Compatibility with software that expects Linux APIs like /proc
- Port binding, DNS, service naming
- CLI / API tooling for service management
And about a gazillion other things. WASI, meanwhile, is just a very small subset of POSIX but with a bunch of stuff renamed so nothing works on it. It's not meaningfully portable in any way outside of UNIX so you might as well just write a real Linux app. WASI buys you nothing.
WASM is heavily overfit to the browser use case. I think a lot of the dissipated excitement is due to people not appreciating how true that is. The JVM is a much more general technology than WASM, which is why it was able to move between such different use cases successfully (starting on smart TV boxes, then applets, then desktop apps, then servers + smart cards, then Android), whereas WASM never made it outside the browser in any meaningful way.
WASM seems to exist mostly because Mozilla threw up over the original NaCl proposal (which IMO was quite elegant). They said it wasn't 'webby', a quality they never managed to define IMO. Before WASM, Google also had a less well known proposal to formally extend the web with JVM bytecode as a first-class citizen, which would have allowed fast DOM/JS bindings (Java has had an official DOM/JS bindings API for a long time due to the applet heritage). The bytecode wouldn't have had full access to the entire Java SE API like applets did, so the security surface area would have been much smaller, and it'd have run inside the renderer sandbox like V8. But Mozilla rejected that too.
So we have WASM. Ignoring the new GC extensions, it's basically just regular assembly language with masked memory access and some standardized ABI stuff, with the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense. A strange animal, not truly excellent at anything except pleasing the technical aesthetic tastes of the Mozillians. But if you don't have to care about what Mozilla think it's hard to come up with justifications for using it.
creata•11h ago
And a capability system and a brand new IDL, although I'm not sure who the target audience is...
> it's basically just regular assembly language
This doesn't affect your point at all, but it's much closer to a high-level language than to regular assembly language, isn't it? Nonaddressable, automatically managed stack, mandatorily structured control flow, local variables instead of registers, etc.
mike_hearn•10h ago
Findecanor•10h ago
WASI fixed well-known flaws in the POSIX API. That's not a bad thing.
> the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense.
WASM was designed to be JIT-compiled into its final form at the speed it is downloaded by a web browser. JS JIT compilers in modern web browsers are much more complex, often having multiple compilers in tiers so that they spend time optimising only the hottest functions.
Outside web browsers, I'd think there are few use-cases where WASM couldn't be AOT-compiled.
azakai•5h ago
No, Mozilla's concerns at the time were very concrete and clear:
- NaCl was not portable - it shipped native binaries for each architecture.
- PNaCl (Portable Native Client, which came later) fixed that, but it only ran out of process, making it depend on PPAPI, an entirely new set of APIs for browsers to implement.
Wasm was designed to be PNaCl - a portable bytecode designed to be efficiently compiled - but able to run in-process, calling existing Web APIs through JS.
mike_hearn•5h ago
And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.
So the idea of portability is not and never has been a requirement for something to be "the web". There have been non-portable web pages for the entire history of the web. The sky didn't fall.
The idea that everything must target an abstract machine whether the authors want that or not is clearly key to Mozilla's idea of "webbyness", but there's no historical precedent for this, which is why NaCl didn't insist on it.
azakai•4h ago
In the context of the web, portability means that you can, ideally at least, use any browser on any platform to access any website. Of course that isn't always possible, as you say. But adding a big new restriction, "these websites only run on x86" was very unpopular in the web ecosystem - we should at least aim to increase portability, not reduce it.
> And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.
Historically, yes, and Flash as well. But the web ecosystem moved away from those things for a reason. They brought not only portability issues but also security risks.
mike_hearn•4h ago
Security is similar. It sounds good, but is always in tension with other goals. In reality the web doesn't have a goal of ever increasing security. If it was, then they'd take features out, not keep adding new stuff. WebGPU expands the attack surface dramatically despite all the work done on Dawn and other sandboxing tech. It's optional, hardly any web pages need it. Security isn't the primary goal of the web, so it gets added anyway.
This is what I mean by saying it was vague and unclear. Portability and security are abstract qualities. Demanding them means sacrificing other things, usually innovation and progress. But the sort of people who make portability a red line never discuss that side of the equation.
azakai•3h ago
As far back as I can remember well (~20 years) it was an explicitly stated goal to keep the web open. "Open" including that no single vendor controls it, neither in terms of browser vendor nor CPU vendor nor OS vendor nor anything else.
You are right that there has been tension here: Flash was very useful, once, despite being single-vendor.
But the trend has been towards openness: Microsoft abandoned ActiveX and Silverlight, Google abandoned NaCl and PNaCl, Adobe abandoned Flash, etc.
mike_hearn•3h ago
Portability and openness are opposing goals. A truly open system allows or even encourages anyone to extend it, including vendors, and including with vendor specific extensions. Maximizing the number of devices that can run something necessarily requires a strong central authority to choose and then impose a lowest common denominator: to prevent people adding their own extensions.
That's why the modern web is the most closed it's ever been. There are no plugin APIs. Browser extension APIs are the lowest power they've ever been in the web's history. The only way to meaningfully extend browsers is to build your own and then convince everyone to use it. And Google uses various techniques to ensure that whilst you can technically fork Chromium, in practice hardly anyone does. It's open source but not designed to actually be forked. Ask anyone who has tried.
So: the modern web is portable for some undocumented definition of portable because Google acts as that central authority (albeit is willing to compromise to keep Mozilla happy). The result is that all innovation happens elsewhere on more open platforms like Android or Linux. That's why exotic devices like VR headsets or AI servers run Android or Linux, not ChromeOS or WebOS.
shevy-java•14h ago
frez1•14h ago
the fact we haven't heard much about wasm use is probably because it isn't as valuable as we think, or no one has played around with it yet to find out
matt_kantor•7h ago
TFA has many examples of big tech companies using Wasm in production. It's not exhaustive either, e.g. the article doesn't mention:
- Google using it as a backend for Flutter and to implement parts of Google Maps, Earth, Meet, Sheets, Keep, YouTube, etc
- Microsoft using it in Copilot Studio
- eBay using it in their mobile app
- MongoDB using it for Compass
- Amazon supporting it in EKS
- 1Password using it in their browser extension
- Unity having it as a build target
(And this was just what I found with some quick web searches; I'm sure there are many other examples.)
---
> the fact we haven't heard much about wasm use is probably because it isn't as valuable as we think
One of the conclusions of the article is that it's mostly used in ways that aren't very visible.
azakai•5h ago
Media, and wasm, are really important when you need them, but usually you don't.
daef•14h ago
[0] https://www.destroyallsoftware.com/talks/the-birth-and-death...
afandian•11h ago
It might be this one I'm thinking of, as it closely fits the bill. But something is telling me it's not, and that it was published earlier.
Any ideas?
torginus•14h ago
Theory and practice don't match in this case, and many people have remarked that the companies that sit on the WHATWG board have a vested interest in making sure their lucrative app stores are not threatened by a platform that can run any app just as well.
I remember when Native Client came on the scene and allowed people to compile complex native apps to the web that ran at like 95% of native speed. While it was in many ways an inelegant solution, it worked better than WebAssembly does today.
Another one of WebAssembly's killer features was supposed to be native web integration. The way JS engines work is that you have an IDL describing the interface of JS classes, which is then used to generate code that binds to the underlying C++ implementations. You could probably bind those to WebAssembly just as well.
I don't think cross-platform as in cross-CPU-arch matters that much; if you meant 'runs on everything' then I concur.
Also the dirty secret of WebAssembly is that it's not really faster than JS.
PunchyHamster•13h ago
> Also the dirty secret of WebAssembly is that it's not really faster than JS.
That is almost purely due to the amount of work it took to make that shitty language run fast. A naive WebAssembly implementation will beat interpreted JS many times over, but modern JIT implementations are a wonder.
moralestapia•13h ago
V8 is a modern engineering marvel.
rob74•13h ago
Findecanor•10h ago
There is no reason why WASM couldn't be as fast as, or faster than, JS, especially now with WASM 3.0. Before, every program in a managed language had to ship with its own GC and exception handling framework in WASM, which was probably crippled by size constraints.
pjmlp•10h ago
Any language with advanced GC algorithms, or interior pointers, will run poorly with current WASM GC.
It works as long as their GC model overlaps with JS GC requirements.
WorldMaker•6h ago
Some of the real GC tests will be string support (because of immutability/interning) and higher-level composite objects, which are all still in various draft/proposal states.
pjmlp•5h ago
gritzko•13h ago
moralestapia•13h ago
CryZe•12h ago
DonHopkins•7h ago
WorldMaker•5h ago
davidmurdoch•7h ago
torginus•11h ago
The WASM runtime went from something that ingests pseudo-assembly, validates it, and turns it into machine code, into a full-fledged multi-tiered JIT like what JS has, with crazy engineering complexity per browser and similar startup performance woes (alleviating the load-time issues of huge applications was one of the major goals of NaCl/Wasm).
201984•7h ago
torginus•7h ago
PunchyHamster•5h ago
Starting from a target that was not only single-threaded but also memory-limited was... a weird decision
WorldMaker•5h ago
I don't think you need conspiracy theories for that. The DOM involves complex JS objects, and you have to have an entirely working multi-language garbage collection model if you expect other languages to work with DOM objects; otherwise you run the risk of leaking some of the most expensive objects in a browser.
The path to that is long and slow, especially since the various committees' general interest is in not requiring non-JS languages to entirely conform to JS GC (either implementing themselves on top of JS GC alone or having to implement their own complex subset of JS GC to interop correctly), so the focus has been on very low-level tools over complex GC patterns. The first basics have only just been standardized. The next step (sharing strings) seems close but probably still has months to go. The steps after that (sharing simple structs) seem pretty complex, with a lot of heated debate still to happen, and DOM objects are a further complexity step past that (as they involve complex reference cycles and other such things).
avadodin•13h ago
EGreg•13h ago
Some joker who built Solana actually thought the Berkeley Packet Filter language would be better than WASM for their runtime. But besides that dude, everyone is discovering how great WASM can be for running deterministic code right in people’s browsers!
torginus•11h ago
circuit10•10h ago
undeveloper•10h ago
EGreg•7h ago
No, WASM is deterministic, JS is fundamentally not. Your dislike of all things blockchain makes you say silly things.
z3t4•9h ago
CuriouslyC•8h ago
torginus•7h ago
https://takahirox.github.io/WebAssembly-benchmark/
JS is not always faster, but in a good chunk of cases it is.
CuriouslyC•7h ago
torginus•7h ago
WebAssembly, which was supposed to replace it, needs to be at least as good; that was the promise. We're a decade in, and Wasm is still nowhere near that, while it has accumulated an insane amount of engineering complexity in its compilers. Its ability to run native apps without tons of constraints and modifications is still meh, as is the performance.
azakai•5h ago
Also, Native Client started up so fast because it shipped native binaries, which was not portable. To fix that, Portable Native Client shipped a bytecode, like wasm, which meant slower startup times - in fact, the last version of PNaCl had a fast baseline compiler to help there, just like wasm engines do today, so they are very similar.
And, a key issue with Native Client is that it was designed for out-of-process sandboxing. That is fine for some things, but not when you need synchronous access to Web APIs, which many applications do (NaCl avoided this problem by adding an entirely new set of APIs to the web, PPAPI, which most vendors were unhappy about). Avoiding this problem was a major principle behind wasm's design, by making it able to coexist with JS code (even interleaving stack frames) on the main thread.
torginus•2h ago
I don't see an issue with shipping uArch-specific assembly; nowadays you only have 2 really in heavy use, and I think managing that level of complexity is tenable, considering the monster the current Wasm implementation became, which is still lacking in key ways.
As for out-of-process sandboxing, I think for a lot of things it's fine - if you want to run a full-fat desktop app or game, you can cram it into an iframe, and the tab (renderer) process is isolated, so Chrome's approach was quite tenable from an IRL perspective.
But if seamless interaction with Web APIs is needed, that could be achieved as well, and I think quite similarly to how Wasm does it - you designate a 'slab' of native memory and make sure no pointer access goes outside by using base-relative addressing and masking the addresses.
For access to outside APIs, you permit jumps to validated entry points which can point to browser APIs. I also don't see why you couldn't interleave stack frames, by making a few safety and sanity checks, like making sure the asm code never accesses anything outside the current stack frame.
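A minimal JS sketch of that masking idea (sizes and names are illustrative):

  // Force every untrusted load/store into a power-of-two slab by AND-ing
  // the address with a mask, so no pointer can escape the region.
  const SLAB_SIZE = 1 << 20;          // 1 MiB, must be a power of two
  const MASK = SLAB_SIZE - 1;
  const slab = new Uint8Array(SLAB_SIZE);
  const load  = (addr)      => slab[addr & MASK];
  const store = (addr, val) => { slab[addr & MASK] = val; };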
Personally, I thought that WebAssembly was what its name suggested - an architecture-independent assembly language, heavily optimized, where only the register allocation passes and the machine instruction translation were missing - which is the end of the compiler pipeline and can be done fairly fast compared to a whole compile.
But it seems to me Wasm engines are more like LLVM, an entire compiler consuming IR, and doing fancy optimization for it - if we view it in this context, I think sticking to raw assembly would've been preferable.
azakai•1h ago
> I don't see an issue with shipping uArch specific assembly, nowadays you only have 2 really in heavy use today,
That is true today, but it would prevent other architectures from getting a fair shot. Or, if another architecture exploded in popularity despite this, it would mean fragmentation.
This is why the Portable version of NaCl was the final iteration, and the only one even Google considered shippable, back then.
I agree the other stuff is fixable - APIs etc. It's really portability that was the sticking point. No browser vendor was willing to give that up.
azakai•5h ago
But that is really only common in small computational kernels. If you take a large, complex application like Adobe Photoshop or a Unity game, wasm will be far closer to native speed, because its compilation and optimization approach is much closer to native builds (types known ahead of time, no heavy dependency on tiering and recompilation, etc.).
AlienRobot•5h ago
In practice, WASM codebases won't be simply running a single pure function in WASM from JS but instead will have several data structures being passed around from one WASM function to another, and that's going to be faster than doing the same in JS.
By the way, if I remember correctly, V8 can optimize function calls heuristically if every call always passes the same argument types, but because this is an implementation detail it's difficult to know which scenarios are actually optimized and which are not.
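A rough sketch of what that looks like in practice (engine behaviour varies, so treat this as illustrative):

  function add(a, b) { return a + b; }
  // This call site stays monomorphic - always (number, number) - so the
  // JIT can specialize it to a fast numeric path.
  for (let i = 0; i < 1e6; i++) add(i, 1);
  // Feeding a new type shape through the same function can force the
  // engine to deoptimize to a slower, generic path.
  add('a', 'b');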
throwaway314155•7h ago
This is an entirely unnecessary jab. There’s a whole generation dealing with stuff like this because of economic and other forces outside their control.
axus•3h ago
jiggawatts•13h ago
The JVM says "Hello!" from 1995.
benrutter•13h ago
The JVM is a great parallel example. Anyone listening to the hype in the early days about what the JVM could be would surely be disappointed now. It isn't faster than C, it doesn't see use everywhere due to practical constraints, etc.
But you'd be hard pushed to say the JVM is a total failure. It's used by lots of people all around the world and solves real problems, just not the ones we were hoping it would solve. I suspect the future of WASM looks something like that.
DonHopkins•7h ago
None of the technical arguments for JVM matter any more. It's just bait to trick you into sticking your hand under the lawnmower and helping Larry Ellison solve his problems.
pjmlp•6h ago
jiggawatts•1h ago
The two are so similar that Java bytecode to .NET bytecode translators exist. With some, it is possible to take a class defined in Java, subclass it with C#, call it from Java, etc...
pjmlp•1h ago
whywhywhywhy•12h ago
Not really when tools like Figma were not really possible before it
creata•12h ago
For developing brand new code, I don't think there's anything fundamentally impossible without Wasm, except SIMD.
azakai•5h ago
Also, the ability to recompile existing code to wasm is often important. Unity or Photoshop could, in theory, write a new codebase for the Web, but recompiling their existing applications is much more appealing, and it also reuses all their existing performance work there.
pjmlp•6h ago
xnx•11h ago
shear potential = likely to break apart
benrutter•11h ago
lionkor•9h ago