
We rewrote our Rust WASM Parser in TypeScript – and it got 3x Faster

https://www.openui.com/blog/rust-wasm-parser
70•zahlekhan•2h ago

Comments

blundergoat•2h ago
The real win here isn't TS over Rust, it's the O(N²) -> O(N) streaming fix via statement-level caching. That's a 3.3x improvement on its own, independent of language choice. The WASM boundary elimination is 2-4x, but the algorithmic fix is what actually matters for user-perceived latency during streaming. Title undersells the more interesting engineering imo.
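The article's parser isn't reproduced here, but the O(N²)-vs-O(N) streaming pattern can be sketched in TypeScript (all names invented): re-parsing the whole buffer on every chunk costs quadratic total work over a stream, while caching already-completed statements keeps each chunk's work proportional to the new input only.

```typescript
type Stmt = { text: string };

// Naive streamer: every chunk triggers a re-parse of everything seen so far,
// so total work over a stream of N characters is O(N^2).
function parseAllNaive(buffer: string): Stmt[] {
  return buffer.split(";").filter(s => s.length > 0).map(text => ({ text }));
}

// Incremental streamer: completed statements are cached; only the trailing
// partial statement is re-parsed when the next chunk arrives -> O(N) total.
class IncrementalParser {
  private done: Stmt[] = [];
  private tail = "";

  push(chunk: string): Stmt[] {
    this.tail += chunk;
    const parts = this.tail.split(";");
    this.tail = parts.pop() ?? ""; // keep the incomplete remainder for later
    for (const p of parts) if (p.length > 0) this.done.push({ text: p });
    // Expose cached statements plus the current partial one, if any.
    return this.tail ? [...this.done, { text: this.tail }] : [...this.done];
  }
}
```

Both produce the same statements for the same input; only the amount of repeated work differs as chunks arrive.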
shmerl•2h ago
More like a misleading clickbait.
sroussey•1h ago
Yeah, though the n^2 is overstating things.

One thing I noticed was that they time each call and then use a median. Sigh. In a browser. :/ With timing-attack defenses built into the JS engine.
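For context on the measurement concern (a generic sketch, not the article's harness): browsers deliberately coarsen `performance.now()` as a side-channel mitigation, so timing a single fast call often yields 0 or one clock tick, and a median of such values is mostly quantization noise. Timing a batch and dividing amortizes the resolution limit.

```typescript
// Generic timing sketch; `work` stands in for whatever is being measured.
function timePerCall(work: () => void): number {
  const t0 = performance.now();
  work();
  // For sub-resolution work this is frequently 0 in a browser.
  return performance.now() - t0;
}

function timeBatched(work: () => void, iterations: number): number {
  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) work();
  // One pair of timer reads for the whole batch, then amortize.
  return (performance.now() - t0) / iterations;
}
```

Node's timer is finer-grained than a hardened browser's, which is one reason browser microbenchmarks of per-call latency are easy to get wrong.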

fn-mote•44m ago
For those of us not in the know, what are we expecting the results of the defenses to be here?
Aurornis•1h ago
> Title undersells the more interesting engineering imo.

Thanks for cutting through the clickbait. The post is interesting, but I'm so tired of being unnecessarily clickbaited into reading articles.

socalgal2•1h ago
same for uv but no one takes that message. They just think "rust rulez!" and ignore that all of uv's benefits are algo, not lang.
estebank•1h ago
Some architectures are made easier by the choice of implementation language.
crubier•8m ago
In my experience Rust typically makes it a little bit harder to write the most efficient algo actually.
rowanG077•47m ago
That's a pretty big claim. I don't doubt that a lot of uv's benefits are algo. But everything? Considering that running non IO-bound native code should be an order of magnitude faster than python.
thfuran•26m ago
More than one, I'd think.
jeremyjh•3m ago
It's a pretty well-supported claim. uv skips doing a number of things that generate file I/O. File I/O is far more costly than the difference in raw computation. pip can't drop those for compatibility reasons.

https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html

azakai•50m ago
O(N²) -> O(N) was 3.3x faster, but before that, eliminating the boundary (replacing wasm with JS) led to speedups of 2.2x, 4.6x, 3.0x (see one table back).

It looks like neither is the "real win". Both the language and the algorithm made a big difference, as you can see in the first column in the last table: moving off wasm was a big speedup, and improving the algorithm on top of that was another big speedup.

nulltrace•34m ago
Yeah the algorithmic fix is doing most of the work here. But call that parser hundreds of times on tiny streaming chunks and the WASM boundary cost per call adds up fast. Same thing would happen with C++ compiled to WASM.
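A back-of-envelope model of that effect (all numbers invented for illustration): with a fixed boundary overhead per call, many tiny calls spend most of their time crossing the boundary, and batching the same work into fewer calls amortizes it away.

```typescript
// Total cost = number of calls * (fixed boundary overhead + useful work per call).
function totalCostUs(calls: number, overheadUs: number, workUsPerCall: number): number {
  return calls * (overheadUs + workUsPerCall);
}

// 500 tiny streaming chunks at 5us boundary overhead + 1us of real work each:
const chunked = totalCostUs(500, 5, 1); // 3000us, ~83% of it overhead
// The same 500us of useful work done in 10 batched calls:
const batched = totalCostUs(10, 5, 50); // 550us
```

The same arithmetic applies whether the boundary is WASM/JS, FFI, or IPC; only the overhead constant changes.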
dmix•2h ago
That blog post design is very nice. I like the 'scrollspy' sidebar which highlights all visible headings.

Claude tells me this is https://www.fumadocs.dev/

sroussey•1h ago
Interesting, thanks. I need to make some good docs soon.
dmix•1h ago
Good documentation is always worth the effort. Markdown explaining your products is gold these days with LLMs.
nine_k•2h ago
"We rewrote this code from language L to language M, and the result is better!" No wonder: it was a chance to rectify everything that was tangled or crooked, avoid every known bad decision, and apply newly-invented better approaches.

So this holds even for L = M. The speedup is not in the language, but in the rewriting and rethinking.

MiddleEndian•2h ago
Now they just need a third party who's never seen the original to rewrite their TypeScript solution in Rust for even more gains.
nine_k•1h ago
Indeed! But only after a year or so of using it in production, so that the drawbacks would be discovered.
baranul•1h ago
Truth. You can see improvement, even rewriting code in the same language.
azakai•47m ago
You're generally right - rewrites let you improve the code - but they do have an actual reason the new language was better: avoiding copies on the boundary.

They say they measured that cost, and it was most of the runtime in the old version (though they don't give exact numbers). That cost does not exist at all in the new version, simply because of the language.

spankalee•2h ago
I was wondering why I hadn't heard of Open UI doing anything with WASM.

This new company chose a very confusing name that has been used by the Open UI W3C Community Group for over 5 years.

https://open-ui.org/

Open UI is the standards group responsible for HTML having popovers, customizable select, invoker commands, and accordions. They're doing great work.

caderosche•2h ago
What is the purpose of the Rust WASM parser? Didn't understand that easily from the article. Would love a better explanation.
joshuanapoli•1h ago
They use a bespoke language to define LLM-generated UI components. I think that this is supposed to prevent exfiltration if the LLM is prompt-injected. In any case, the parser compiles chunks streaming from the LLM to build a live UI. The WASM parser restarted from the beginning upon each chunk received. Fixing this algorithm to work more incrementally (while porting from Rust to TypeScript) improved performance a lot.
evmar•1h ago
By the way, I did a deeper dive on the problem of serializing objects across the Rust/JS boundary, noticed the approach used by serde wasn’t great for performance, and explored improving it here: https://neugierig.org/software/blog/2024/04/rust-wasm-to-js....
slopinthebag•13m ago
Did you try something like msgpack or bebop?
SCLeo•1h ago
They should rewrite it in rust again to get another 3x performance increase /s
slowhadoken•1h ago
Am I mistaken or isn’t TypeScript just Golang under the hood these days?
iainmerrick•1h ago
Hmm, there's an in-progress rewrite of the TypeScript compiler in Go; is that what you mean?

I don't think that's actually out yet, and more importantly, it doesn't change anything at runtime -- your code still runs in a JS engine (V8, JSC etc).

jeremyjh•25m ago
There is too much wrong here to call it a mistake.
szmarczak•1h ago
> Attempted Fix: Skip the JSON Round-Trip

> We integrated serde-wasm-bindgen

So you're reinventing JSON but binary? V8's JSON is highly optimized nowadays [1] and can process gigabytes per second [2]; I doubt it is a bottleneck here.

[1] https://v8.dev/blog/json-stringify [2] https://github.com/simdjson/simdjson

kam•25m ago
No, serde-wasm-bindgen implements the serde Serializer interface by calling into JS to directly construct the JS objects on the JS heap without an intermediate serialization/deserialization. You pay the cost of one or more FFI calls for every object though.

https://docs.rs/serde-wasm-bindgen/

neuropacabra•57m ago
This is a very unusual statement :-D
nallana•48m ago
Why not a shared buffer? Serializing into JSON on this hot path should be entirely avoidable
mavdol04•8m ago
I think a shared array just avoids the copy, not the serialization, which is the main problem, as they showed with the serde-wasm-bindgen test
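To illustrate the distinction (a standalone sketch, not the article's code): moving bytes through a shared or transferred buffer removes the copy of the serialized data, but the encode/decode work remains on each side of the boundary.

```typescript
// Illustrative payload; in the article's setting this would be an AST node.
const payload = { type: "node", children: [1, 2, 3] };

// The serialization cost: turning the object graph into bytes...
const bytes: Uint8Array = new TextEncoder().encode(JSON.stringify(payload));

// ...those bytes can cross the WASM/JS boundary without an extra copy via a
// shared buffer, but the receiving side still pays to deserialize them:
const decoded = JSON.parse(new TextDecoder().decode(bytes));
```

A zero-copy transport only helps with the middle step; the ends still do the real work.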
ivanjermakov•47m ago
Good software is usually written on 2nd+ try.
joaohaas•44m ago
God I hate AI writing.

That final summary benchmark means nothing. It mentions 'baseline' value for the 'Full-stream total' for the rust implementation, and then says the `serde-wasm-bindgen` is '+9-29% slower', but it never gives us the baseline value, because clearly the only benchmark it did against the Rust codebase was the per-call one.

Then it mentions: "End result: 2.2-4.6x faster per call and 2.6-3.3x lower total streaming cost."

But the "2.6-3.3x" is by their own definition a comparison against the naive TS implementation.

I really think the guy just prompted claude to "get this shit fast and then publish a blog post".

nssnsjsjsjs•41m ago
Rewrite bias. You want to also rewrite the Rust one in Rust for comparison.
jeremyjh•27m ago
It would be surprising if rewriting in Rust could change the WASM boundary tax that the article identified as the actual problem.
rented_mule•32m ago
Something not unlike this happened to me when moving some batch processing code from C++ to Python 1.4 (this was 1997). The batch started finishing about 10x faster. We refused to believe it at first and started looking to make sure the work was actually being done. It was.

The port had been done in a weekend just to see if we could use Python in production. The C++ code had taken a few months to write. The port was pretty direct, function for function. It was even line for line where language and library differences didn't offer an easier way.

A couple of us worked together for a day to find the reason for the speedup. Just looking at the code didn't give us any clues, so we started profiling both versions. We found out that the port had accidentally fixed a previously unknown bug in some code that built and compared cache keys. After identifying the small misbehaving function, we had to study the C++ code pretty hard to even understand what the problem was. I don't remember the exact nature of the bug, but I do remember thinking that particular type of bug would be hard to express in Python, and that's exactly why it was accidentally fixed.

We immediately started moving the rest of our back end to Python. Most things were slower, but not by much because most of our back end was i/o bound. We soon found out that we could make algorithmic improvements so much more quickly, so a lot of the slowest things got a lot faster than they had ever been. And, most importantly, we (the software developers) got quite a bit faster.

envguard•7m ago
Agreed — the headline buries the lede. Algorithmic complexity improvements compound across all future inputs regardless of implementation language, while the WASM boundary win is more of a one-time gain. Worth noting that the statement-level caching insight generalises well: many parser-adjacent hot paths suffer the same O(N²) trap when doing repeated prefix/suffix matching without memoisation.
asa400•5m ago
Fun story! Performance is often highly unintuitive, and even counterintuitive (e.g. going from C++ to Python). Very much an art as well as a science.

Crazy how many stories like this I’ve heard of how doing performance work helped people uncover bugs and/or hidden assumptions about their systems.

slopinthebag•19m ago
This article is obviously AI generated and besides being jarring to read, it makes me really doubt its validity. You can get substantially faster parsing versus `JSON.parse()` by parsing structured binary data, and it's also faster to pass a byte array compared to a JSON string from wasm to the browser. My guess is not only this article was AI generated, but also their benchmarks, and perhaps the implementation as well.
StilesCrisis•12m ago
It's vibe code all the way down!
jeremyjh•17m ago
> The openui-lang parser converts a custom DSL emitted by an LLM into a React component tree.

> converts internal AST into the public OutputNode format consumed by the React renderer

Why not just have the LLM emit the JSON for OutputNode? Why is a custom "language" and parser needed at all? And yes, there is a cost for marshaling data, so you should avoid doing it where possible, and do it in large chunks when it's not possible to avoid. This is not an unknown phenomenon.

envguard•6m ago
The WASM story is interesting from a security angle too. WASM modules inheriting the host's memory model means any parsing bugs that trigger buffer overreads in the Rust code could surface in ways that are harder to audit at the JS boundary. Moving to native TS at least keeps the attack surface in one runtime, even if the theoretical memory safety guarantees go down.
