> A contraction of “WebAssembly”, not an acronym, hence not using all-caps.
There aren't any particular rules about contractions and intermediate capitalization, so we're free to choose. "WAsm" is more awkward than "Wasm", so the latter seems better.
If you want to run code written in other languages in the browser, you could just as well compile to JavaScript.
All Wasm brings to the table is a bit of a speed improvement.
Not having the JS GC pause your process unpredictably.
Sandboxing untrusted code, i.e. you sell a SaaS and you also want clients to be able to run untrusted plugins from a marketplace.
Without more details on their exact use case, their algorithms, and their data movement patterns, you have no way of knowing this. Doing stuff on the GPU isn't automatically faster than doing it on the CPU.
What types of operations are 10x faster in Wasm than in JS? Why can't the JIT compiler compile JS to the same native code as your Wasm gets compiled to?
Not a hater, and it's fun to run Doom in a browser tab... I just can't see any business value in 99% of its ecosystem, especially with the drift away from the web (the only niche where it made sense).
Without direct browser support for WASM with DOM access (and no need for a JavaScript "shim"), all this is futile.
As far as I know, "2.0" is just a marketing term batching several extensions standardized since 1.0 (and simplifying feature queries "are extensions X,Y,Z supported" to "is 2.0 supported"), not unlike what Vulkan does with their extensions.
The HN crowd has just always been terminally myopic about this because it has "web" in the name.
[0]https://learn-wasm.dev/tutorial/introduction/what-webassembl...
We already have lots of bytecode formats.
I am not going to lie, I thought the same because of the name, too.
If you're talking about WASI, well personally I'm not interested in it and we're just using plain wasm in the browser. However, nothing in this linked post is about WASI specifically.
Outside of the browser it's only VC-backed companies pretending that bytecode-based distribution isn't something that has existed since 1958, with wins and losses. Many of those formats were polyglot; supporting languages like C in bytecode was already done in 1989 with the Architecture Neutral Distribution Format, and there are many other examples.
Emscripten is very bloated, but it's the best option from what I can tell.
I lost a whole day of a 3 day game jam to a weird Emscripten bug. It ended up being that adding a member to a class blew up the whole thing.
The alternative (and the only option, if you want it to be as light as possible) is to do the bindings yourself, which is fun, depending on how much your concept of fun involves JavaScript, and having half your code in a different programming language.
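To give an idea of what "doing the bindings yourself" looks like, here's a minimal sketch of the JS side (the js_log import, game.wasm, and the main export are made-up names for illustration):

const decoder = new TextDecoder();
let memory;

const imports = {
  env: {
    // The Wasm side calls this with a pointer and length into linear memory.
    js_log(ptr, len) {
      console.log(decoder.decode(new Uint8Array(memory.buffer, ptr, len)));
    },
  },
};

WebAssembly.instantiateStreaming(fetch("game.wasm"), imports).then(({ instance }) => {
  memory = instance.exports.memory;
  instance.exports.main();
});

Every string, slice, or struct that crosses the boundary needs this kind of manual marshalling, which is where the "half your code in a different language" feeling comes from.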
I'm told the Rust situation is pretty nice, although my attempt didn't get anywhere — apparently I tried to use it in exactly the opposite way that it was intended.
I had a pretty nice time with Odin. Someone put raylib wasm bindings for Odin on GitHub, and it worked really well for me.
(Odin syntax is really nice, but you don't realize just how nice, until you port your game to another language!)
Zig was cool, but a bit pedantic for a jam, and a bit unstable (kept finding out of date docs). Didn't see much in the way of game libs, but I was able to ship a Zig game with WASM-4.
I ended up switching to TS, which I'm not happy with, but since you (usually) need JS anyway, the benefit of having a single language, and a reliable toolchain, is very high, especially under time pressure. The "my language is nice" benefits do not in my experience outweigh the rest of the pain.
https://janpfeifer.github.io/hiveGo/www/hive/
Probably everything JS and DOM is better supported from TS, but I have to say, I was never blocked on my small project.
https://www.youtube.com/@wasmio
According to those, it's likely to replace containers and likely to be integrated into more and more systems.
It seems like it's exploding in popularity and usage because it solves some very real problems.
"More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET. "
https://news.microsoft.com/source/2001/10/22/massive-industr...
Ah, it isn't portable, maybe 1989?
"The Architecture Neutral Distribution Format (ANDF) in computing is a technology allowing common "shrink wrapped" binary application programs to be distributed for use on conformant Unix systems, translated to run on different underlying hardware platforms. ANDF was defined by the Open Software Foundation and was expected to be a "truly revolutionary technology that will significantly advance the cause of portability and open systems",[1] but it was never widely adopted."
https://en.wikipedia.org/wiki/Architecture_Neutral_Distribut... or better 1980?
"The Amsterdam Compiler Kit (ACK) is a retargetable compiler suite and toolchain written by Andrew Tanenbaum and Ceriel Jacobs, since 2005 maintained by David Given.[1] It has frontends for the following programming languages: C, Pascal, Modula-2, Occam, and BASIC."
https://en.wikipedia.org/wiki/Amsterdam_Compiler_Kit
I have a problem with people selling WASM as something spectacularly new, never done before.
Nobody is doing this here, you're arguing against a strawman.
WebAssembly from the Ground Up (https://wasmgroundup.com/) an online book to learn Wasm by building a simple compiler in JavaScript. It starts with handcrafting bytecodes in JS, and then slowly builds up a simple programming language that compiles to Wasm.
There's a free sample available: https://wasmgroundup.com/book/contents-sample/
(Disclaimer: I'm one of the authors)
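To give a flavor of the handcrafted-bytecode starting point (this isn't code from the book, just a standalone illustration), here is a complete module that exports an add function, written out byte by byte and instantiated from JS:

const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // version: 1
  // type section: one function type (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // function section: one function, using type 0
  0x03, 0x02, 0x01, 0x00,
  // export section: export function 0 as "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});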
In the book, we don't cover source maps, but it would also be possible to generate source maps so that you can set breakpoints in (and step through) the original source code in your custom language, rather than debugging at the Wasm instruction level.
Does that answer your question?
re: "dynamically replace parts of the implementation as source code evolves" — there is a technique for this, I have a short write-up on it here: https://github.com/pdubroy/til/blob/main/wasm/2024-02-22-Run...
About the debugging and inspecting —
Inspecting Wasm memory is easy from JS, but to be able to do the debugging, you'd probably either need to rewrite the bytecode (e.g., inserting a call out to JS between every "real" instruction) or use a self-hosted interpreter like wasm3 (https://github.com/wasm3/wasm3).
(Or maybe there are better solutions that I'm not thinking of.)
Whamm can inject arbitrary instrumentation logic, so you could, e.g. inject calls to imports that are implemented in JS. You'll have some heavy lifting to do on the JS side.
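As a rough sketch of what that heavy lifting could look like on the JS side (the trace import, its arguments, and the breakpoint keys are all made up for illustration), the instrumented module would call out to something like this between instructions:

// Hypothetical import object for an instrumented module that calls
// env.trace(funcIdx, instrOffset) between the original instructions.
const breakpoints = new Set(["0:12"]); // "funcIdx:offset" keys, made up

const importObject = {
  env: {
    trace(funcIdx, instrOffset) {
      if (breakpoints.has(`${funcIdx}:${instrOffset}`)) {
        // A real tool would suspend here (e.g. Atomics.wait in a worker)
        // and let the UI inspect the instance's memory before resuming.
        console.log(`hit breakpoint at ${funcIdx}:${instrOffset}`);
      }
    },
  },
};

// WebAssembly.instantiate(instrumentedBytes, importObject) ...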
Granted, you're debugging in another window that isn't a browser; but overall the debugger gives you about 80% of what you get when debugging a normal .NET process outside the browser.
const MIN_U32 = 0;
const MAX_U32 = 2 ** 32 - 1;
function u32(v) {
  if (v < MIN_U32 || v > MAX_U32) {
    throw Error(`Value out of range for u32: ${v}`);
  }
  return leb128(v);
}
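(leb128 above is the unsigned LEB128 encoder the book builds earlier; a minimal version, which may differ from the book's own, looks roughly like this:)

function leb128(v) {
  const bytes = [];
  do {
    let byte = v & 0x7f;           // low 7 bits
    v = Math.floor(v / 128);       // avoid >>> so values up to 2**32 - 1 work
    if (v !== 0) byte |= 0x80;     // set the continuation bit
    bytes.push(byte);
  } while (v !== 0);
  return bytes;
}

leb128(624485); // [0xe5, 0x8e, 0x26]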
I love Ada, because you can do this: subtype U32 is Interfaces.Unsigned_64 range 0 .. 2 ** 32 - 1;
or alternatively: type U32 is mod 2 ** 32;
and then you can use attributes such as: First : constant U32 := U32'First; -- = 0
Last : constant U32 := U32'Last; -- = 2 ** 32 - 1
and U32'Range (not a value you can store in a constant, but it denotes 0 .. 2**32 - 1 in loops and membership tests): for I in U32'Range loop ...
> the Wasm Community and Working Groups had reached consensus and finished the specification in early 2022. All major implementations have been shipping 2.0 for even longer.
> In a future post we will take a look at Wasm 3.0, which is already around the corner at this point!
Features in 3.0 are presumably also mostly implemented already, with some maybe just kept behind feature flags.
Wasm 2.0 is complete in a handful of engines, whereas 3.0 is less well-supported.
Wizard is almost done with 3.0; only memory64 and relaxed-simd are incomplete.
New x86 processors don't execute 128-bit SIMD on dedicated narrow units; the vector ALUs are all wider now, and 128-bit and 256-bit instructions have the same throughput and latency.
Also, do you have an example for such "opportunistic" usages?
I suppose mainly things the SLP vectorizer can usually do already (in compiled languages; I'm not sure how good the JIT is at this these days).
I worry that we may now end up in a world where "hand-optimized SIMD" in WASM ends up slower than autovectorization, because you can't use the wider SIMD instructions and so leave 2x (Zen 4) to 4x (Zen 5) of the performance on the table.
The simplest example would be copying a small number of bytes (like copying structs). Vector instructions generally have a higher setup cost, like setting the vector length, so they can't really be used for this purpose. Maybe future vector instruction sets will have no such caveats and can be used just like SIMD, but AFAIK that's not yet the case even for RISC-V's V extension.
The provided specification tests allow implementers to be confident that their runtime conforms to the spec.
Overall I think it's an impressive specification and is worth studying.
With the component model's WIT you get higher-level types like enums, option, result, and generics: https://component-model.bytecodealliance.org/design/wit.html
But when you think about it, isn't that basically true for native languages?
If the host provides the guest Wasm module, via imports, with a function that creates and runs a module from an array of bytes, then it can be done today (if I understand you correctly).
Here's some related content: https://github.com/pdubroy/til/blob/main/wasm/2024-02-22-Run...
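A rough sketch of that host-side import (run_module, the main export, and the memory handling are assumptions for illustration):

let guestMemory; // set once the main module is instantiated

const importObject = {
  env: {
    // The guest passes a pointer and length to Wasm bytes it has produced
    // in its own linear memory; the host compiles and runs them.
    run_module(ptr, len) {
      const bytes = new Uint8Array(guestMemory.buffer, ptr, len).slice();
      WebAssembly.instantiate(bytes).then(({ instance }) => {
        instance.exports.main?.();
      });
    },
  },
};

// WebAssembly.instantiate(mainModuleBytes, importObject).then(({ instance }) => {
//   guestMemory = instance.exports.memory;
// });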
> WebAssembly provides no ambient access to the computing environment in which code is executed. Any interaction with the environment, such as I/O, access to resources, or operating system calls, can only be performed by invoking functions provided by the embedder and imported into a WebAssembly module.
more info here:
lioeters•7h ago
Wasm 2.0 Completed - https://webassembly.org/news/2025-03-20-wasm-2.0/
> ..here is the summary of the additions in version 2.0 of the language:
> Vector instructions: With a massive 236 new instructions — more than the total number Wasm had before — it now supports 128-bit wide SIMD (single instruction, multiple data) functionality of contemporary CPUs, like Intel’s SSE or ARM’s SVE. This helps speeding up certain classes of compute-intense applications like audio/video codecs, machine learning, and some cryptography.
> Bulk memory instructions: A set of new instructions allows faster copying and initialization of regions of memory or ranges of tables.
> Multi-value results: Instructions, blocks, and functions can now return more than one result value, sometimes supporting faster calling conventions and avoiding indirections. In addition, block instructions now also can have inputs, enabling new program transformations.
> Reference types: References to functions or pointers to external objects (e.g., JavaScript values) become available as opaque first-class values. Tables are repurposed as a general storage for such reference values, and new instructions allow accessing and mutating tables in Wasm code. In addition, modules now may define multiple tables of different types.
> Non-trapping conversions: Additional instructions allow the conversion from float to integer types without the risk of trapping unexpectedly.
> Sign extension instructions: A new group of instructions allows directly extending the width of signed integer value. Previously that was only possible when reading from memory.
flohofwoe•3h ago
Values are hardwired to 128 bits which can be i8x16/i16x8/i32x4/i64x2 or f32x4/f64x2, so that already limits the 'feature surface' drastically.
IMHO as long as it covers the most common use cases (e.g. vec4 / mat4x4 floating point math used in games and a couple of common ALU and bit-twiddling operations on integers) that's already quite a bit better than having to fall back to scalar math.
mdaniel•1h ago
Are you also on Firefox? I've been getting those 429s a lot over the past week or so. I haven't changed my configuration other than I'm religious about the "check for updates" button, but I cannot imagine a world in which my release-branch browser is a novelty. No proxies, yes I run UBO but it is disabled for GH
adrian17•4h ago
Unfortunately, despite being "enabled", Rust+LLVM don't take advantage of this because of the ABI compatibility mess. I don't know whether the story on Clang's side is similar.
adrian17•4h ago
"As a result there is no longer any possible method of writing a function in Rust that returns multiple values at the WebAssembly function type level."
And similar queries in Rust's zulip: https://rust-lang.zulipchat.com/#narrow/channel/122651-gener...
azakai•8m ago
Inside functions, there is perhaps a 1-3% code size opportunity at best (https://github.com/WebAssembly/binaryen?tab=readme-ov-file#b...), and no performance advantage.
Between functions there might be a performance advantage, but as wasm VMs do more things like runtime inlining (which becomes more and more important with wasm GC and the languages that compile to it), that benefit goes away.
singularity2001•12m ago
https://github.com/WebAssembly/flexible-vectors