That's a staggering accomplishment.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... .
https://webkit.org/blog/6240/ecmascript-6-proper-tail-calls-...
The V8 team decided that it's not worth it, since proper stack traces (such as Error.stack) are essential for some libraries, such as Sentry (!). Removing some stack trace info can break some code. Also, imagine you have missing info in the error stack trace in production code running on NodeJS; that's not good. If you need TCO, you can compile that code to WASM. V8 does TCO in WASM.
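A minimal sketch of the tradeoff, assuming an engine that actually implements ES2015 proper tail calls (e.g. JavaScriptCore; V8 doesn't ship it):

    "use strict"; // proper tail calls only apply in strict mode
    function countdown(n) {
      if (n === 0) throw new Error("done");
      return countdown(n - 1); // tail position: the caller's frame can be reused
    }
    try {
      countdown(1000);
    } catch (e) {
      // Without tail-call elimination this trace shows a run of countdown
      // frames; with proper tail calls most of them are gone, which is
      // exactly the information error-reporting tools like Sentry rely on.
      console.log(e.stack);
    }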
Bun users don’t care either.
At this point it is pure BS.
Most Bun users don't even know about this (unless they've been bitten by it). That doesn't mean absolutely no one cares, or wouldn't care, even if such complaints are uncommon.
It'll be interesting to see how much it will affect React Native apps as it gets more and more optimized for this use case
At one point I really thought that Flutter would outclass it but typical Google project stuff has really put a damper on it from all I can see.
It’s not better than native apps, but as far as cross-platform GUIs go it’s still very, very good.
Flutter is amazing, but they really shot themselves in the foot with Dart (and I say that as someone who doesn't mind Dart).
As a React Native developer for, what, 6 years, I don’t have much positivity left to offer. Bug reports to the core team that went nowhere, the Android crash on remote images without dimensions, all the work offloaded to Expo, etc.
Google couldn’t really have done better; maybe Flutter should’ve become independent after the initial release.
The amount of work just to aggregate and compare is admirable, let alone the effort behind the engines themselves.
How many of these engines are chasing benchmarks at the cost of increased memory usage?
A few years ago I started work on a kind of abstraction layer that would let you plug Rust code into multiple different engines. Got as far as a proof of concept for JavaScriptCore and QuickJS (at the time I had iOS and Android in mind as targets). I still think there’s some value in the idea, to avoid making too heavy a bet on one single JS engine.
Every time I look I find repos that look promising at first but are either unmaintained or have a team of just one or two maintainers running them as a side project.
I want my sandbox to be backed by a large, well funded security team working for a product with real money on the line if there are any holes!
(Playing with Cloudflare workerd this morning, which seems like it should cover my organizational requirements at least.)
Update: Classic, even Cloudflare workerd has "WARNING: workerd is not a hardened sandbox" in the README! https://github.com/cloudflare/workerd?tab=readme-ov-file#war...
You could also look at GraalJS. It's shipped as part of the Oracle Database, there's a security team, patching process etc. It's used in production by Amazon amongst others. It's got flexible sandbox features too.
https://www.graalvm.org/latest/reference-manual/embed-langua...
The way it's written is good for security as well:
https://medium.com/graalvm/writing-truly-memory-safe-jit-com...
Disclosure: I sit next to the GraalVM team.
I looked at GraalVM but was put off by the licensing situation: https://www.graalvm.org/22.3/reference-manual/embed-language...
> GraalVM Enterprise provides the experimental Sandbox Resource Limits feature that allows for the limiting of resources used by guest applications. These resource limits are not available in the Community Edition of GraalVM.
Part of my requirements for a sandbox are strong guarantees against memory or CPU exhaustion from poorly written or malicious code.
https://www.graalvm.org/latest/introduction/#licensing-and-s...
> Oracle GraalVM is licensed under GraalVM Free Terms and Conditions (GFTC) including License for Early Adopter Versions. Subject to the conditions in the license, including the License for Early Adopter Versions, the GFTC is intended to permit use by any user including commercial and production use.
It has all the sandboxing features you might want. I don't know if the disclaimers on the other engines change much; open source software always disclaims all liability. Nobody will stand behind something security sensitive unless it's commercial, because otherwise there's no way to pay for the security team it requires.
But generally, I think the best bet is to offload such things to, e.g., a Lambda per tenant.
Featured recently on HN.
A high budget is no guarantee of an absence of critical bugs in an engine; maybe even somewhat the opposite - on a big team the incentives are aligned with shipping more features (since nobody gets promoted for maintenance, especially at Google) -> increasing complexity -> increasing bug surface.
If speed is less important and you can live without a JIT, that expands your options dramatically and eliminates a large class of bugs. You could take a lightweight engine and compile it to a memory-safe runtime; that'd give you yet another security layer for peace of mind. Several projects have done such ports to Wasm/JS/Go - for example, your browser likely runs QuickJS to interpret the JavaScript inside .pdf files (https://github.com/mozilla/pdf.js.quickjs).
workerd does not include any sandboxing layers other than V8 itself. If someone has a V8 zero-day exploit, they can break out of the sandbox.
But putting aside zero-day exploits for a moment, workerd is designed to be a sandbox. That is, applications by default have access to nothing except what you give them. There is only one default-on type of access: public internet access (covering public IPs only). You can disable this by overriding `globalOutbound` in the config (with which you can either intercept internet requests, or just block them).
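Roughly, the idea is that you can point `globalOutbound` at another worker service, and every outbound fetch() from the sandboxed code then lands in that service's fetch handler first. A sketch (simplified, and the config wiring itself is left out here):

    // Hypothetical "outbound gate" worker, bound as the sandboxed worker's globalOutbound.
    export default {
      async fetch(request) {
        const url = new URL(request.url);
        // Illustrative policy: allow one approved API host, block everything else.
        if (url.hostname === "api.example.com") {
          return fetch(request); // forward via this worker's own outbound
        }
        return new Response("outbound access blocked", { status: 403 });
      }
    };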
This is pretty different from e.g. Node, which starts from the assumption that apps should have permission to run arbitrary native code, limited only by the permissions of the user account under which Node is running.
Some other runtimes advertise various forms of permissions, but workerd is the only one I know of where this is the core intended use case, and where all permissions (other than optionally public internet access, as mentioned) must be granted via capability-based security.
Unfortunately, JavaScript engines are complicated, which means they tend to have bugs, and these bugs are often exploitable to escape the sandbox. This is not just true of V8, it's true of all of them; any claim otherwise is naive. Cloudflare in production has a multi-layer security model to mitigate this, but our model involves a lot of, shall we say, active management which can't easily be packaged up into an open source product.
With all that said, not all threat models require you to worry about such zero-day exploits, and you need to think about risk/benefit tradeoffs. We obviously have to worry about zero-days at Cloudflare since anyone can just upload code to us and run it. But if you're not literally accepting code directly from anonymous internet users then the risk may be a lot lower, and the overall security benefit of fine-grained sandboxing may be worth the increased exposure to zero-days.
The problem I have is that I'm just one person and I don't want to be on call 24/7 ready to react to sandbox escapes, so I'm hoping I can find a solution that someone else built where they are willing to say "this is safe: you can feed in a string of untrusted JavaScript and we are confident it won't break out again".
I think I might be able to get there via WebAssembly (e.g. with QuickJS or MicroQuickJS compiled to WASM) because the whole point of WebAssembly is to solve this one problem.
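Something like this sketch is what I have in mind (assuming the quickjs-emscripten package; untested, so treat the API details as approximate):

    // Evaluate untrusted JS inside QuickJS compiled to WebAssembly, so the
    // guest code never touches the host engine directly.
    import { getQuickJS } from "quickjs-emscripten";

    const QuickJS = await getQuickJS();
    const vm = QuickJS.newContext();

    const result = vm.evalCode("[1, 2, 3].map(x => x * 2).join(',')");
    if (result.error) {
      console.log("guest threw:", vm.dump(result.error));
      result.error.dispose();
    } else {
      console.log("guest returned:", vm.dump(result.value));
      result.value.dispose();
    }
    vm.dispose();

That still leaves CPU and memory limits to enforce from the outside (a limit on the guest runtime, or just a watchdog on the host process).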
> But if you're not literally accepting code directly from anonymous internet users then the risk may be a lot lower
That's the problem: this is exactly what I want to be able to do!
I want to build extension systems for my own apps such that users can run their own code or paste in code written by other people and have it execute safely. Similar to Shopify Functions: https://shopify.dev/docs/apps/build/functions
I think the value unlocked by this kind of extension mechanism is ready to skyrocket, because users can use LLMs to help write that code for them.
For Wasm to be a secure sandbox, you have to assume a bug-free compiler/interpreter, which, alas, none of them really are. It's a somewhat easier problem than building a bug-free JavaScript runtime, but not by as much as you might expect, sadly.
> I want to build extension systems for my own apps such that users can run their own code or paste in code written by other people and have it execute safely. Similar to Shopify Functions: https://shopify.dev/docs/apps/build/functions
Ah, this is exactly the Workers for Platforms use case: https://developers.cloudflare.com/cloudflare-for-platforms/w...
And indeed, Shopify uses it: https://shopify.engineering/how-we-built-oxygen
(There's also the upcoming Dynamic Worker Loader API: https://developers.cloudflare.com/workers/runtime-apis/bindi...)
But it sounds like you really do want to self-host? I don't blame you, but that does make it tough. I'm not sure there's any such thing as a secure sandbox that doesn't require some level of monitoring and daily maintenance, sadly. (But admittedly I may be biased.)
I've been picking at this problem for a few years now!
On the one hand I get why it's so hard. But it really feels like it should be possible to solve this in 2026 - executing arbitrary code in a way that constrains its memory and CPU time usage is a problem our industry solves in browsers and hosting platforms and databases and all sorts of other places, and has done for decades.
The whole LLM-assisted end-user programming thing makes solving this with the right developer affordances so valuable!
If Simon's users choose to self-host the open source version of his service, they are probably using it to run their own code, and so the sandbox security matters less, and workerd may be fine. The sandbox only matters when Simon himself offers his software as a service, which he could do using Workers for Platforms.
(But this is a self-serving argument coming from me.)
It hasn't been updated in some time, but it should still be working, and can probably be brought up to date with some small effort: https://github.com/facebook/hermes/tree/static_h/API/hermes_...
EDIT: Reading some of your other comments, I should point out that this is more like a component of a possible solution. It does not attempt to prevent resource exhaustion or crashes due to corrupted internal state.
Even if you go with something backed by a full time team there is still going to be a chance you have to deal with a security issue in a hurry, maybe in the run up to Christmas. That is just going to come with the territory and if you don’t want to deal with that then you probably need to think about whether you really need a sandbox that can execute untrusted code.
Benchmark numbers for request-isolated JS hello world / React page rendering:

- JCO/wasmtime: 314µs / 13ms
- Bun process forking: 1.7ms / 8.2ms
- V8 isolate from snapshot: 0.7ms / 22ms
- TinyKVM: 52µs / 708µs
- Native with reuse: 14µs / 640µs
Numbers taken from our upcoming TinyKVM paper. Benchmark setup code for JCO/wasmtime is here: https://github.com/libriscv/kvmserver/tree/main/examples/was...

(I suspect that even if we are able to get TinyKVM into a state you'd feel comfortable with in the future, it would still be an awkward fit for Datasette, since nested virtualisation is not exposed on AWS EC2.)
How much are you ready to pay for a license?
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Just keep benchmark code limited to standard ECMAScript, don't expect any browser or Node APIs besides console.log() or print().
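For example, something as simple as this runs unchanged on Node/Bun and on the bare engine shells (d8, jsc, SpiderMonkey's js), assuming only print()/console.log:

    // Engine-agnostic micro-benchmark: standard ECMAScript only.
    // Bare shells expose print(); Node and Bun expose console.log.
    const log = typeof print === "function" ? print : console.log;

    function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    const start = Date.now();
    const result = fib(30);
    log("fib(30) = " + result + " in " + (Date.now() - start) + " ms");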
My n=1 as a long time Firefox user is that performance is a non-issue (for the sites I frequent). I’m much more likely to switch browsers because of annoying bugs, like crashes due to FF installed as a snap.
It honestly is pretty surprising, given that the JS runtime runs website code single-threaded.
The gap is not so big these days. JavaScriptCore, Spidermonkey, and V8 are all competent.
gunwd•23h ago
And SpiderMonkey seems... not up there compared to the other 2
DarkFuture•21h ago
I just ran the JetStream2 benchmark and got:
- Firefox: 159 score
- Chromium: 235 score
That's on latest Fedora Linux and Ryzen 3600 CPU.
p_ing•20h ago
- Firefox: 253.584
- Safari: 377.470
- Chrome: 408.332
- Edge: 412.005
tralarpa•19h ago
I'm curious to know what the problem of Firefox is. For example, the 3d-raytrace-SP benchmark is nearly three times faster on Edge than on Firefox on my i7 laptop. The code of that benchmark is very simple and mostly consists of basic math operations and array accesses. Maybe the canvas operations are particularly slow on Firefox? This seems to be an example that developers should take a look at.
nicoburns•14h ago
That seems likely. WebRender (Firefox's GPU-accelerated rendering backend) doesn't do vector rasterization. So Firefox rasterizes vectors using the CPU-only version of Skia and then uploads them to the GPU as textures. Apparently the upload process is often the bottleneck.
In contrast, Chrome uses (GPU-accelerated) Skia for everything. And Skia can render vector graphics directly into GPU memory (at least part of the rasterization pipeline is GPU accelerated). I would expect this to be quite a bit faster under load.
It's a known problem, but I hear that almost all of the Gecko graphics team's capacity beyond general maintenance is going towards implementing WebGPU.
---
SpiderMonkey is also now just quite a bit slower than V8, which may contribute.
gurgunday•20h ago
I believe long term, V8 will become the undisputed champ again as Google has a lot more incentive than Apple to make the fastest engine, but this is just a wild guess of mine, and I'm biased being a Node.js Collaborator
I've been hearing for a while that JSCore has a more elegant internal architecture than V8, and seeing the V8 team make big architectural changes as we speak seems to support it [1], but like I said, hopefully they will pay off long term
[1]
- https://v8.dev/blog/leaving-the-sea-of-nodes
- https://v8.dev/blog/maglev
lionkor•19h ago
What are those incentives? I see no incentive for Google to make something fast.
aurareturn•14h ago
Why would Google have more incentive than Apple to make the fastest engine? Safari being the fastest mobile browser is important to Apple.
If Google had a stronger incentive than Apple, we would have seen V8 being more performant by now.