[1] https://github.com/jk-jeon/dragonbox?tab=readme-ov-file#perf...
I think it's less about side effects being common when serializing, just that their fast path avoids anything that could have side effects (like toJSON).
The article touches briefly on this.
People have exploited this sort of side effect to get bug bounties before via type confusion attacks, iirc.
It's a real example of "you can solve just about anything with a billion dollars" though :)
I'd prefer JavaScript kept evolving (think "strict", but "stricter", "stricter still", ...) to a simpler and easier to compile/JIT language.
[1] https://source.chromium.org/chromium/_/chromium/v8/v8/+/5cbc...
[2] https://github.com/facebook/folly/commit/2f0cabfb48b8a8df84f...
So array list instead of array?
hinkley•6h ago
Sooner or later it seems like everyone gets the idea of reducing event loop stalls in their NodeJS code by trying to offload work to another thread, only to discover they’ve tripled the CPU load in the main thread.
I’ve seen people stringify arrays one entry at a time. Sounds like maybe they are doing that internally now.
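The userland version of that trick usually looks something like this: serialize the array in batches and yield to the event loop between them. A minimal sketch (the function name and batch size are illustrative, not anything from V8 or Node internals):

```javascript
// Sketch: stringify a large array in slices, yielding to the event
// loop between batches so timers and I/O callbacks are not starved
// for the whole duration of serialization.
async function stringifyArrayChunked(arr, batch = 1000) {
  const parts = [];
  for (let i = 0; i < arr.length; i += batch) {
    // Serialize one slice synchronously...
    const end = Math.min(i + batch, arr.length);
    for (let j = i; j < end; j++) {
      parts.push(JSON.stringify(arr[j]));
    }
    // ...then give the event loop a turn before the next slice.
    await new Promise(resolve => setImmediate(resolve));
  }
  return '[' + parts.join(',') + ']';
}
```

The trade-off is exactly the one described above: total CPU cost goes up (many small stringify calls plus string joining), in exchange for shorter individual stalls.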
If anything I would encourage the V8 team to go farther with this. Can you avoid bailing out for subsets of data? What about the CString issue? Does this bring faststr back from the dead?
jcdavis•3h ago
As someone who has come from a JVM/go background, I was kinda shocked how amateur hour it felt tbh.
MehdiHK•2h ago
That's what I experienced too. But I think the deeper problem is Node's cooperative multitasking model. A preemptive multitasking model (like Go's) wouldn't block the whole event loop (i.e. all other concurrent tasks) while serializing a large response (often the case with GraphQL, but possible with any other API too). Yeah, it does kinda feel like amateur hour.
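The blocking is easy to demonstrate: a single synchronous JSON.stringify call holds the event loop for its entire duration, so even a zero-delay timer only fires afterward. A small sketch (the array size is illustrative):

```javascript
// Sketch: one big synchronous JSON.stringify call blocks every other
// task on Node's event loop. A timer scheduled for "0 ms" only runs
// once serialization finishes.
const big = Array.from({ length: 1_000_000 }, (_, i) => ({ i, s: 'x'.repeat(10) }));

const due = Date.now();
setTimeout(() => {
  // With cooperative scheduling, this delay is roughly the full
  // stringify time, not the 0 ms we asked for.
  console.log('timer delayed by', Date.now() - due, 'ms');
}, 0);

const json = JSON.stringify(big); // event loop is blocked for the duration
console.log('serialized', json.length, 'characters');
```

In a preemptive runtime like Go's, the scheduler could interleave other goroutines during the same work; on Node, nothing else runs until the call returns.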
hinkley•2h ago
Just so. It is, or at least can be, the plurality of the sequential part of any Amdahl's Law calculation for Nodejs.
I'm curious if any of the 'side effect free' commentary in this post is about moving parts of the JSON calculation off of the event loop. That would certainly be very interesting if true.
However for concurrency reasons I suspect it could never be fully off. The best you could likely do is have multiple threads converting the object while the event loop remains blocked. Not entirely unlike concurrent marking in the JVM.
dmit•1h ago
brundolf•2h ago
hinkley•2h ago
Most tasks take more memory in the middle than at the beginning and end. And if you're sharing memory between processes that can only communicate by setting bytes, then the memory at the beginning and end represents the communication overhead. The latency.
But this is also why things like p-limit work: they hold back an array of arbitrary tasks during the induction phase, before each task's data expands into a complex state that has to be retained in memory concurrently with all of its peers. By partially linearizing, you put a clamp on peak memory usage that Promise.all(arr.map(...)) does not, not just fixing the thundering herd.
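To make the contrast concrete, here is a toy reimplementation of the p-limit idea (this is not the real p-limit code, just the same shape): at most `n` tasks are in their expensive "middle" phase at once, whereas Promise.all(arr.map(...)) starts everything immediately.

```javascript
// Toy concurrency limiter in the style of p-limit: wrap task
// functions so that at most `n` run at once; the rest wait in a
// queue, keeping peak memory bounded by n tasks' working state.
function limit(n) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= n || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    fn().then(resolve, reject).finally(() => { active--; next(); });
  };
  // Returns a wrapper: call it with a () => Promise task function.
  return fn => new Promise((resolve, reject) => {
    queue.push({ fn, resolve, reject });
    next();
  });
}

// Usage: only 2 tasks hold their working memory at any moment,
// versus Promise.all(items.map(processItem)) which starts them all.
// const run = limit(2);
// await Promise.all(items.map(item => run(() => processItem(item))));
```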
dwattttt•2h ago
Or I guess you can do it without the WebAssembly step.