That said, I wonder if it could ever go mainstream – JS is not a trivial language anymore. Matching all of its quirks to the point of being stable seems like a monstrous task. And then Node and all of its APIs are the 800-pound gorilla in the room too. Even Deno had to acquiesce and replicate those, bugs and all, and it's based on V8 too.
We're faster! (please disregard the fact that we're barely more than a demo)
Everyone knows about 80:20, the slowdowns will come after you start doing everything your competition does.
Look at Biome. "We're 15x as fast as ESLint" (but disregard the fact that we don't do type-aware linting). Then type-aware linting arrives and suddenly there are huge performance issues that kill the project (I'm unable to use Biome 2).
This happens over and over and over. The exceptions are very, very few (Bun is one example)
It’s great when those PRs do come, but most of the time there is just empty whining while a developer contributes nothing. This is because most JavaScript developers are deathly afraid to write original software, as that would be reinventing a wheel.
Most JavaScript developers are absolutely incapable of measuring things, so they have no idea when something else is actually faster until someone else runs the numbers for them. Let’s take your Bun example. Bun is great, but Bun is also written in Zig, which is faster than C++. Bun claims to be 3x faster at WebSockets than Node's popular ws package, because Bun can achieve a send rate of 700,000 messages per second (numbers from 5 years ago). Bun is good at measuring things. What they don’t say is that ws is just slow. I wrote my own WebSocket library for Node in TypeScript about 5 years ago that can achieve a send rate of just under 500,000 messages per second. What they also don’t tell you is that WebSockets are 11x faster to send than to receive due to frame header interpretation. I say this not to disparage Bun but to show your empty worship is misplaced if you aren’t part of the solution.
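Measuring this kind of thing doesn't require much. A minimal throughput harness in plain Node looks something like the sketch below; the workload here is a stand-in (JSON serialization), not a real WebSocket send, and the function name is illustrative:

```javascript
// Minimal throughput measurement: run a workload in a tight loop for a
// fixed wall-clock window and report operations per second.
// `performance` is a Node global (perf_hooks) since Node 16.
function measureOpsPerSec(fn, durationMs = 200) {
  const end = performance.now() + durationMs;
  let ops = 0;
  while (performance.now() < end) {
    fn();
    ops++;
  }
  return Math.round(ops / (durationMs / 1000));
}

// Stand-in workload; swap in whatever you actually want to measure.
const rate = measureOpsPerSec(() => JSON.stringify({ msg: 'hello' }));
console.log(`${rate} ops/sec`); // varies by machine: measure, don't guess
```

This is crude (no warmup, no statistical treatment), but even this beats guessing at performance targets.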
But I want someone else to make it faster without me having to do anything.
Did you know that moment.js used to be a microlibrary? It came out replacing "the huge Date.js library."
If you've been in the field long enough, you've seen this cycle repeat over and over.
moment is a library discontinued years ago. They had a couple of commits in 2023 and a couple in 2022, but work stopped on the library almost 5 years ago. It is an abstraction over the big scary native Date object.
Up until 2 years ago I was writing JS full time for 15 years. I just saw a lot of people bitch and cry about inventing wheels and whine at performance targets they just guessed at. It's because most of these people could not write code. During this time there were some amazing things being written in JavaScript, and later TypeScript, but these awesome things were almost exclusively personal hobby projects that the regular employed JS developer either bitched about or relied upon for career survival, because bitching and crying is what you do when you can't otherwise contribute in a meaningful way.
Edit: it's not 4s anymore, I just measured with the latest version and it takes ~900ms. Insane.
I have used eslint in very large projects (far more than 3000 files), and running multiple instances via a task runner makes it a breeze to keep runs under 30s, especially if you use the cache.
It would be amazing if they pull this off. Being able to compile JS to produce minimal binaries for CLIs or just to make slim containers would be nice.
And chain it with other stuff as well, which is where workflow engines like n8n or Unmeshed.io work better. You can mix lambdas in different languages as well.
esbuild src/handler.ts --bundle --external:@aws-lambda-powertools --external:@aws-sdk --minify --outfile=dist/handler.js --platform=node --sourcemap --target=es2022 --tree-shaking=true
Maybe I'm not doing as much as others in my functions, and I tend to stick within the AWS ecosystem, so I save some space and, I presume, cold-start time by not including the AWS SDK/Powertools in the output, but my functions tend to cold start and complete in ~100ms.
- Porffor can use typescript types to significantly improve the compilation. It's in many ways more exciting as a TS compiler.
- There's no GC yet, and it will likely be a while before it gets any. But you can get very far with no GC, particularly if you are doing something like serving web requests. You can fork a process per request and throw it away each time, reclaiming all memory, or have a very simple arena allocator that works at the request level. It would be incredibly performant and not have the overhead of a full GC implementation.
- many of the restrictions that people associate with JS are due to VMs being designed to run untrusted code. If you compile your trusted TS/JS to native you can do many new things, such as use traditional threads, fork, and have proper low level memory access. Separating the concept of TS/JS from the runtime is long overdue.
- using WASM as the IR (intermediate representation) is inspired. It is unlikely that many people would run something compiled with Porffor in a WASM runtime, but the portability it brings is very compelling.
This experiment from Oliver doesn't show that Porffor is ready for production, but it does validate that he is on the right track, and that the ideas he is exploring are correct. That's the important takeaway. Give it 12 months and exciting things will be happening.
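To make the no-GC idea concrete, here is a toy bump ("arena") allocator over a fixed buffer, along the lines of the request-level allocator described above. Everything here (the `Arena` class, its methods) is illustrative, not a Porffor API:

```javascript
// Toy arena (bump) allocator: allocation is a pointer bump, and
// "freeing" the whole request's memory is a single reset. No free
// lists, no per-object tracking, no GC pauses.
class Arena {
  constructor(size) {
    this.buf = new ArrayBuffer(size);
    this.offset = 0;
  }
  // Hand out a view into the buffer and bump the offset.
  alloc(bytes) {
    if (this.offset + bytes > this.buf.byteLength) {
      throw new Error('arena exhausted');
    }
    const view = new Uint8Array(this.buf, this.offset, bytes);
    this.offset += bytes;
    return view;
  }
  // End of request: reclaim everything at once.
  reset() {
    this.offset = 0;
  }
}

const arena = new Arena(1024);
arena.alloc(100);
arena.alloc(200);
console.log(arena.offset); // 300 bytes live
arena.reset();             // request over: all of it reclaimed in O(1)
console.log(arena.offset); // 0
```

The appeal for a compiled, per-request runtime is exactly this: deallocation cost is constant regardless of how many objects the request created.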
This is just outright wrong. JS limitations come from lots of things:
1. The language has almost zero undefined behavior by design. Code will essentially never behave differently on different platforms.
2. JS has traditional threads in the form of web workers. This interface exists not for untrusted code but because of thread safety. That's a language design, like channels in Go, rather than a sandboxing consideration.
3. Pretty much every non-browser JS runtime has the ability to fork.
4. JS is fully garbage collected, of course you don't get your own memory management. You can use buffers to manage your own memory if you really want to. WASM lets you manage your own memory and it can run "untrusted" code in the browser with the WASM runtime; your example just doesn't hold water. There's no way you could fiddle with the stack or heap in JS without making it not JS.
5. The language comes with thirty years of baggage, and the language spec almost never breaks backwards compatibility.
Ironically Porffor has no IO at the moment, which is present in literally every JS runtime. It really has nothing to do with untrusted code like you're suggesting.
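Point 2 above (workers pass copies, not shared references) is easy to demonstrate with `structuredClone`, the same copying algorithm used for worker messaging. A minimal sketch:

```javascript
// Worker messages are structured-cloned: the receiving "isolate" gets
// a copy, so mutations on one side are invisible to the other. This is
// a language/runtime design choice, not a sandboxing restriction.
const original = { count: 1 };
const copy = structuredClone(original); // what postMessage does to values

copy.count = 99;              // mutate the "other isolate's" copy
console.log(original.count);  // 1 -- the original is untouched
```

This is why web workers behave like isolates rather than shared-memory threads (SharedArrayBuffer being the deliberate, opt-in exception).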
> You can fork a process per request and throw it away each time reclaiming all memory, or have a very simple arena allocator that works at the request level. It would be incredibly performant and not have the overhead of a full GC implementation.
You also must admit that this would make Porffor incompatible with existing runtimes. Code today can modify the global state, and that state can and does persist across requests. It's a common pattern to keep in-memory caches or to lazily initialize libraries. If every request is fully isolated in the future but not now, you can end up with performance cliffs or a system where a series of requests on Node return different results than a series of requests on Porffor.
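The persistent-state pattern in question is ubiquitous in Node/Lambda code. A minimal sketch (names are illustrative) of the lazy-init cache that per-request heap disposal would break:

```javascript
// Common Node/Lambda pattern: module-level state initialized lazily on
// the first request and reused by every later request in the same
// process. Throwing away the heap per request silently breaks this.
let cache = null; // lives for the whole process, not one request

function handler(event) {
  if (cache === null) {
    cache = { hits: 0, config: { region: 'us-east-1' } }; // lazy init
  }
  cache.hits += 1; // request code mutating startup-lifetime memory
  return cache.hits;
}

console.log(handler({})); // 1
console.log(handler({})); // 2 -- state persisted across "requests"
```

Under fully isolated requests, `handler` would return 1 every time, which is exactly the behavioral divergence from Node described above.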
As for arena allocation, this makes it even less compatible with Node (if not intractable). It means you can't write (in JS) any code that mutates memory that was initialized during startup. If you store a reference to an object in an arena in an object initialized during startup, at the end of the request when the arena is freed you now have a pointer into uninitialized memory.
How do you tell the developer what they can and cannot mutate? You can't, because any existing variable might be a reference to memory initialized during startup. Your function might receive an object as an argument that was initialized during startup or one that wasn't, and there's no way to know whether it's safe to mutate it.
Long story short, JS must have a garbage collector to free memory, or it's not JS.
> It is unlikely that many people would run something compiled with Porffor in a WASM runtime, but the portability it brings is very compelling.
Node (via SEA in v20), bun, and deno all have built in tooling for generating a self-contained binary. Granted, the runtime needs to work for your OS and CPU, but the exact same thing could be said about a WASM runtime.
And of course there are hundreds of mature bundlers that can compile JS into a single file that runs in various runtimes without ever thinking about platform. It's weird to even consider portability of JS as a benefit because JS is already almost maximally portable.
> This experiment from Oliver doesn't show that Porffor is ready for production, but it does validate that he is on the right track, and that the ideas he is exploring are correct.
It validates that the approach to building a compiler is correct, but it says little about whether the project will eventually be usable and good. It's unlikely it'll get faster, because robust JS compatibility will require more edge cases to be handled than it currently does, and as Porffor's own README says, it's still slower than most JITted runtimes. A stable release might not yield much.
Almost none of your criticisms connect with anything that the other person wrote.
Nowhere did I say that full, or even any, compatibility with Node is needed - it isn't.
We need to stop conflating JS the language with the runtimes.
A JS runtime absolutely can get by without a GC, you just never dealloc and consume indefinitely. That doesn't change any semantics of the language, if a value/object is inaccessible, it's inaccessible...
An arena allocator provides a route to, say, embedding a js-to-native app in a single-threaded web server like Nginx; you don't need to share memory between what in effect become "isolates".
It doesn't protect end users any more than it protects servers. Node could easily expose raw threading, but they don't because nearly the whole language isn't thread safe and everything would break. It has almost nothing to do with protecting users, it's a language design decision that enforces other design constraints.
> We need to stop conflating JS the language with the runtimes
If you're just sharing syntax but the standard library is different and essentially none of the code is compatible, it's not the same language. ECMAScript specifies all of the things you're talking about, and that is JavaScript, irrespective of the runtime.
> A JS runtime absolutely can get by without a GC, you just never dealloc and consume indefinitely. That doesn't change any semantics of the language, if a value/object is inaccessible, it's inaccessible...
If you throw away the whole heap on every request, then every request is, definitionally, a "cold start". Which negates the singular benefit that this post is calling out. Porffor is still not faster than JITted engines at runtime, and initializing the code still has to happen.
> Nowhere did I say that full, or even any, compatibility with Node is needed - it isn't.
You have to square what you're saying with this statement. What you're describing is JavaScript in syntax only. You're talking about major departures from the formal language spec. Existing JavaScript code is likely to break. Why not just make a new language and call it something else, like Crystal is to Ruby? It works differently, you're saying it doesn't care about compatibility... Why even call it JS then?
I suggest you go and read the ECMAScript standard: https://ecma-international.org/publications-and-standards/st...
There is nothing in there about browser APIs, and in fact it explicitly states that the browser runtime, or any other runtime + API, are not ECMAScript.
There is no language I’m aware of where workers behave like “traditional threads”. They’re isolates. Not threads.
I would be curious which attack vectors change or become safe after compiling though.
I don't think anything changes with compile to native on the server.
> - Porffor can use typescript types to significantly improve the compilation. It's in many ways more exciting as a TS compiler.
Porffor could use types, but TypeScript's type system is very unsound and doing so could lead to serious bugs and security vulnerabilities. I haven't kept track of what Oliver's doing here lately, but I think the best and still safe thing you could do is compile an optimistic, optimized version of functions (and maybe basic blocks) based on the declared argument types, but you'd still need a type guard to fall back to the general version when the types aren't as expected.
This isn't far from what a multi-tier JIT does, and the JIT has a lot more flexibility to generate functions for the actual observed types, not just the declared types. This can be a big help when the declared types are interfaces, but in an execution you only see specific concrete types.
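The guard-then-fallback idea sketched above can be written out directly. This is an illustration of the general technique, not Porffor's actual strategy; the function names are made up:

```javascript
// Generic path: full JS semantics (coercion, string concatenation, ...).
function addGeneric(a, b) {
  return a + b;
}

// Optimistic path, "compiled" for the declared types (number, number).
// Because TS types are unsound, a runtime guard checks the actual
// values and bails out to the generic version when they don't match.
function addOptimizedForNumbers(a, b) {
  if (typeof a !== 'number' || typeof b !== 'number') {
    return addGeneric(a, b); // deopt: declared types were wrong
  }
  return a + b; // an AOT compiler could emit a raw f64 add here
}

console.log(addOptimizedForNumbers(2, 3));   // 5
console.log(addOptimizedForNumbers('2', 3)); // "23" -- guard kept JS semantics
```

A JIT does this with observed types and can re-specialize at runtime; an AOT compiler is stuck with the declared types, which is why the guard is non-negotiable.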
> or have a very simple arena allocator that works at the request level.
This isn't viable. JS semantics mean that the request handling path can generate objects that are held from outside the request's arena. You can't free them or you'd get use-after-free problems.
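The escape problem is easy to show in plain JS (names here are illustrative). Request code can stash a reference into long-lived state, so the object outlives the request's arena:

```javascript
// An object allocated during a request escapes into startup-lifetime
// state. With a GC this is fine; with a per-request arena, freeing the
// arena at end-of-request would leave longLived[0] dangling.
const longLived = []; // initialized at startup, outlives every request

function handleRequest(id) {
  const requestObject = { id, body: 'x'.repeat(10) }; // "arena"-allocated
  longLived.push(requestObject); // escapes the request's lifetime
}

handleRequest(1);
// Request over. An arena allocator would reclaim requestObject's memory
// here, but the reference below would still point at it.
console.log(longLived[0].id); // 1 -- only safe because the GC keeps it alive
```

Without escape analysis (hard in a language this dynamic), the arena can't know which objects are safe to free.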
> - many of the restrictions that people associate with JS are due to VMs being designed to run untrusted code
This is true to some extent, but most of the restrictions are baked into the language design. JS is a single-threaded non-shared memory language by design. The lack of threads has nothing to do with security. Other sandboxed languages, famously Java, have threads. Apple experimented with multithreaded JS and it hasn't moved forward not because of security but because it breaks JS semantics. Fork is possible in JS already, because it's a VM concept, not a language concept. Low-level memory access would completely break the memory model of JS and open up even trusted code to serious bugs and security vulnerabilities.
> It is unlikely that many people would run something compiled with Porffor in a WASM runtime
Running JS in WASM is actually the thing I'm most excited about from Porffor. There are more and more WASM runtimes, and JS is handicapped there compared to Rust. Being able to intermix JS, Rust, and Go in a single portable, secure runtime is a killer feature.
Please do go and check up what the state of using types to inform the compiler is (I'm not incorrect)
On the arena allocator, I wasn't clear enough; as stated elsewhere, this was in relation to having something similar to isolates - each having a memory space that's cleaned up on exit.
Python has almost identical semantics to JS, and has threads - there is nothing in the ECMAScript standard that would prevent them.
Dart had very similar issues and constraints and they couldn't do a proper AOT compiler that considered types until they made the type system sound. TypeScript can never do that and maintain compatibility with JS.
Isolates are already available as workers. The key thing is that you can't have shared memory, otherwise you can get cross-isolate references and have all the synchronization problems of threads.
And ECMAScript is simply just specified as a single-threaded language. You break it with shared-memory threads.
In JS, this always logs '4'. With threads that's not always the case.

    let x = 4;
    console.log(x);

Well... unsafe and impossible aren't quite the same thing. I guess this is possible if you throw out "safe" as a requirement?
It'd also be interesting to see comparisons to the Java and .NET runtimes on AWS Lambda.
Note: we don't support .NET or Java atm, but we support PHP and Python is about to be fully supported!
A previous job I worked at ran Java on AWS Lambda. We ran our busiest Java lambda in a docker layer as our whole build system was designed around docker and from a compute performance point of view it was just as fast.
The main issues were:
* Longer init times for the JRE (including before the JIT kicks in). Our metrics had a noticeable performance hit that lined up to the startup of freshly initialized lambdas. It was still well within a tolerable range for us, though.
* Garbage collection almost never ran cleanly due to the code suspension between invocations, which means we had to allocate more memory than we really needed.
AWS Lambda's native Java SnapStart would have helped, but the startup times were just not a big deal for our use case - we didn't bother with provisioned lambdas either. Despite the added memory costs, it was also still cheap enough that it was not really worth us investigating Java's parallel GC.
So as always, what language one should use depends on your use case. If you have an app that's sensitive to "random" delays, then you probably want something with less startup overhead.
>The Rust runtime client is an experimental package. It is subject to change and intended only for evaluation purposes.
https://developers.cloudflare.com/workers/runtime-apis/webas...
Use an experimental (as in, 60% of ECMA tests passing, "currently no good I/O or Node compat") AOT compiler for JS. You remove the cold start by removing the runtime, at the cost of your JavaScript maybe not working and not having a garbage collector.
But other than that it's impossible to assess performance with such a tiny toy.
That isn't how lambda works. A single lambda instance can run for 10s of minutes handling thousands of requests. So not cleaning up memory sounds like a problem to me, since you're billed by the GB-second.
When a lambda is invocated, that docker container isn’t immediately destroyed. It’s kept around for reuse.
So there isn’t a realistic scenario where you would get a million cold starts in a month.
This is not how lambda works. A lambda instance sticks around for 10s of minutes at most. So you'll have cold starts every day no matter how often you deploy.
If you have very little traffic you might actually have cold starts often as the lifetime of a lambda is determined based on its use.
Unfortunately AWS has always been rather unspecific about this behavior but here's a link to a pluralsight blog https://www.pluralsight.com/resources/blog/cloud/how-long-do...
Lambda is far more economic given the memory requirements, but we recently moved to Rust+Candle to shave off ~300ms on cold starts as the lag could be really jarring.
https://nodejs.org/api/single-executable-applications.html#s...
The project homepage is awesome, it's a mix between a throwback to retro documentation (with ascii charts) and a console out of godbolt: https://porffor.dev/
The hangup on lack of GC is probably unnecessarily overwrought, WasmGC is pretty much here and there will be an entire ecosystem of libraries providing JS GC semantics for WASM compilers that this compiler can tap into (actually implementing the backend/runtime GC support is fairly trivial for baseline support).
I still like Porffor's approach because by compiling to WASM we could have latency-optimized WASM runtimes (though I'm unsure what that might entail) that would benefit other languages as well.
- Cold starts are kinda rare. Sure, it sucks that your request takes 600ms, but that means you are the first user. If you'd been served by a container that was just scaled up, you'd have been waiting much longer.
- Microservices and AWS lambda are inherently stateless, and do a ton of things to make themselves useful - get credentials, establish db connections, query configuration endpoints, all of which take time, usually more than your runtime spends booting up.
As much as I like lambdas for their deployment and operational simplicity, if you want the best UX, they have inherent technical limitations which make them the wrong choice.