They didn't have this kind of compute back when the article was written. Which is the point of the article.
Should have prefixed my comment with "nowadays"
We're slowly getting back to similarly-sized systems. IBM now has POWER systems with more than 1,500 threads (although I assume those are SMT8 configurations). This is a bit annoying because too many programs assume that the CPU mask fits into 128 bytes, which limits the CPU (hardware thread) count to 1,024. We fixed a few of these bugs twenty years ago, but as these systems fell out of use, similar problems are back.
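The usual fix, sketched below for Linux/glibc (my own illustration, not something from the thread; which CPU gets pinned is an arbitrary choice), is to size the affinity mask at runtime with CPU_ALLOC instead of assuming the fixed 1,024-CPU cpu_set_t is big enough:

    /* Sketch: dynamically sized CPU affinity mask (Linux/glibc).
       A fixed cpu_set_t is 128 bytes = 1,024 CPUs; CPU_ALLOC removes that cap. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long ncpus = sysconf(_SC_NPROCESSORS_CONF);  /* may exceed 1,024 */
        cpu_set_t *set = CPU_ALLOC(ncpus);
        size_t size = CPU_ALLOC_SIZE(ncpus);

        CPU_ZERO_S(size, set);
        CPU_SET_S(ncpus - 1, size, set);             /* pin to the last hardware thread */

        if (sched_setaffinity(0, size, set) != 0)
            perror("sched_setaffinity");

        CPU_FREE(set);
        return 0;
    }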
This is equal to the combined single precision GPU and CPU horsepower of a modern MacBook [1]. Really makes you think about how resource-intensive even the simplest of modern software is...
[1]: https://www.amd.com/content/dam/amd/en/documents/products/et...
A lot of software time is spent making something scalable when, in 2025, I can probably run any site in the bottom 99% of the most visited sites on the internet on a couple of machines and < $40k of capital.
What % is the AWS console, and what counts as "running" it?
0%
Prior to the recent RAM insanity (a big caveat, I know), a 1U Supermicro machine with 768GB of RAM, some NVMe storage, and twin 32-core EPYC 9004s was ~$12K USD. You can get 3 of those and some redundant 10G network infra (people are literally throwing this out) for < $40k. Then you just have to find a rack/internet connection to put them in, which would be a few hundred a month.
The reality is most sites don't need multi-region setups; they have very predictable load, and 3 of those machines would be massive overkill for many. A lot of people like to think they will lose millions per second of downtime, and some sites certainly do, but most won't.
All of this of course would be using new hardware. If you wanted to go used, the most cost-effective option is the 5-year-old second-gen Xeon Scalables that are being dumped by cloud providers. Those are more than enough compute for most; they are just really thirsty, so you will pay for it in the power bill.
This of course is predicated on the assumption that you have the skill set to support these machines, which is becoming less common. Though as successful companies that started in the last 10 years start doing more "hybrid cloud", it is coming back around.
Otherwise Viaweb would be the shining star of 2025. Instead it's a forgotten footnote on a path to programming with money (VC).
A lot of analytic data is like that. If you captured it for 1% of users you'd find out what you needed to know at 1% of the cost.
This article describes the 10k-client connection problem; you should be handling 256K clients :)
When they say "most companies can run on a single server, but do backups", they usually mean the physical kind.
"libuv is a multi-platform C library that provides support for asynchronous I/O based on event loops. It supports epoll(4), kqueue(2)"
Picking the correct theoretical architecture can't save you if you bog down on every practical decision.
If you haven't had experience with actually performant code, JS can seem fast. But it is a Huffy bike compared to a Kawasaki H2. Sure, it is better than a kid's trike, but it is not a performance system by any stretch of the imagination. You use JS for convenience, not performance.
You really don't have to go far down the TechEmpower benchmarks to get to JS. Losing, let's say, 33% performance versus an extremely tuned, much more minimalist framework on what is practically a micro-benchmark is far from a savage fall from civilization & the decadence of man, and it deserves far less scorn and shame than what the cult of JS hate fuels its fires with.
I could go on about how JS has benefitted from having incredible work poured into it, because it is the most popular runtime on the planet, because it's everywhere, because there was a hot war between companies trying to beat each other on making their JS runtime good (it's one of the only languages with many runtimes, which is interesting). It's a bit excessive & maybe a more deserving language should have gotten that attention (we can debate endlessly), but man... it just doesn't matter. Stressing out, being so mean and nasty (Huffy vs Kawasaki, @hoppp's even worse top-voted half-sentence snark takedown), trying to impress upon people this image that it's all just so bad: I think there's a ridiculous, off-kilter tilting way too hard here, and far less of it is about good reasons and valid concerns; so, so much of it is a bandwagon of negative energy and radical overconcern.
Like most tools & languages, it's what you do with it. With JS, we have a problem that (mainstream) software hadn't faced before, which is client-server architecture where the client might be a cruddy cheap phone somewhere and/or on a dodgy link. We are trying to build experiences that work across systems, sometimes with significant user-perceived latency. And so data & systems architecture matters a lot. Trying to keep the client primed with the up-to-date data it needs to render, and doing client work without blocking while maintaining user responsiveness, are hard, fun multi-threaded (web worker) challenges for those folks that care.
And those challenges aren't unique to JS; other languages have similar ones. Trying to multithread a Win32 UI to avoid blocking, working off the main thread, was also a bit of a nightmare. Doing data sync is a nightmare. There are so many ways to get stuff wrong, and I think a lot of the JS code out there does get it wrong. But we experience hundreds or thousands of websites a week, and some of the crucial JS client-server tools we use are badly architected. I sympathize with why JS has such a bad rap. To me it usually comes down to architectural app-design issues: companies are too busy building features to really consider the core, to establish data architectures that don't block or lag. And that's not a specifically JS problem.
There are faster systems, yes, but man, the energy being poured into blaming the worst of the world on JS seems ridiculous to me, like such a sad drag, one that avoids any interest or fascination in what is so interesting & so worthy. The language is the least remarkable part of the equation; it practically doesn't matter, and the marginal performance differences are (with some exceptions for very specific cases) almost never a remotely critical factor. I'm just so tired, knowing such enormous, pointless, frivolous scorn and disdain is going to overwhelm all conversations, going to take over every thread forever, when it matters so little and is so very rarely a major determinant.
JS does not have to be the thought-terminating cliché of every thread ever (and, humbly, I'd assert it doesn't deserve that false conviction either. But either way!).
Care to summarize?
The people ripping into js suck up the interesting energies, and bring nothing of value.
We are discussing tech where having a custom userland TCP stack is not just a viable option but nearly a requirement, and you are talking about using a lighter JS framework. We are not having the same conversation. I highly recommend you get off Dunning-Kruger mountain by writing even a basic network server in a systems language to learn something new and interesting. It is more effort than flaming on a forum but much more instructive.
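For concreteness, the bare-bones version of that exercise is something like the following: a single-threaded, non-blocking epoll(7) loop, which is the shape of server the article is about. A Linux-only sketch; port 7000 is arbitrary and error handling is pared down.

    /* Sketch of the classic single-threaded epoll echo server:
       non-blocking sockets, one loop, level-triggered events. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void set_nonblocking(int fd) {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    }

    int main(void) {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(7000);
        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, SOMAXCONN);
        set_nonblocking(listener);

        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
        epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

        struct epoll_event events[1024];
        char buf[4096];

        for (;;) {
            int n = epoll_wait(ep, events, 1024, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == listener) {
                    /* Accept every pending connection and register it. */
                    int client;
                    while ((client = accept(listener, NULL, NULL)) >= 0) {
                        set_nonblocking(client);
                        struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                        epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
                    }
                } else {
                    /* Echo whatever arrives; close on EOF or real error. */
                    ssize_t r = read(fd, buf, sizeof buf);
                    if (r > 0)
                        write(fd, buf, (size_t)r);
                    else if (r == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
                        close(fd);  /* closing removes the fd from the epoll set */
                }
            }
        }
    }

From there you hit the actual C10K questions: edge vs. level triggering, write backpressure, and how many loops/threads to run.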
JavaScript engines are also JITted, which is better than a straight interpreter but, outside of microbenchmarks, worse than compiled code.
I use it for nearly all my projects. It is fine for most UI stuff and OK for some server stuff (though Python is superior in every way). But I would never want to replace something like nginx with a JavaScript-based web server.
https://youtu.be/hjjydz40rNI?si=F7aLOSkLqMzgh2-U
(From Wayne's World--how we knew the comedians had smart advisors)
Title: The C10K problem
Popular in:
2014 (112 points, 55 comments) https://news.ycombinator.com/item?id=7250432
2007 (13 points, 3 comments) https://news.ycombinator.com/item?id=45603