frontpage.

Growing up in “404 Not Found”: China's nuclear city in the Gobi Desert

https://substack.com/inbox/post/182743659
578•Vincent_Yan404•13h ago•248 comments

Remembering Lou Gerstner

https://newsroom.ibm.com/2025-12-28-Remembering-Lou-Gerstner
19•thm•1h ago•5 comments

Rust errors without dependencies

https://vincents.dev/blog/rust-errors-without-dependencies/
9•vsgherzi•16h ago•0 comments

Calendar

https://neatnik.net/calendar/?year=2026
845•twapi•15h ago•107 comments

No it's not a Battleship

https://www.navalgazing.net/No-its-not
5•hermitcrab•41m ago•1 comment

Building a macOS app to know when my Mac is thermal throttling

https://stanislas.blog/2025/12/macos-thermal-throttling-app/
152•angristan•8h ago•71 comments

Global Memory Shortage Crisis: Market Analysis

https://www.idc.com/resource-center/blog/global-memory-shortage-crisis-market-analysis-and-the-po...
34•naves•4h ago•13 comments

Replacing JavaScript with Just HTML

https://www.htmhell.dev/adventcalendar/2025/27/
625•soheilpro•19h ago•231 comments

Never Use Pixelation to Hide Sensitive Text (2014)

https://dheera.net/posts/20140725-why-you-should-never-use-pixelation/
81•basilikum•1w ago•25 comments

Learn computer graphics from scratch and for free

https://www.scratchapixel.com
82•theusus•9h ago•8 comments

Designing Predictable LLM-Verifier Systems for Formal Method Guarantee

https://arxiv.org/abs/2512.02080
32•PaulHoule•5h ago•5 comments

tc-ematch(8) extended matches for use with "basic", "cgroup" or "flow" filters

https://man7.org/linux/man-pages/man8/tc-ematch.8.html
20•hamonrye•3h ago•0 comments

One year of keeping a tada list

https://www.ducktyped.org/p/one-year-of-keeping-a-tada-list
170•egonschiele•6d ago•52 comments

Floor796

https://floor796.com/
937•krtkush•1d ago•111 comments

Vibration Isolation of Precision Objects (2005) [pdf]

http://www.sandv.com/downloads/0607rivi.pdf
6•nill0•6d ago•0 comments

We "solved" C10K years ago yet we keep reinventing it (2003)

https://www.kegel.com/c10k.html
72•birdculture•2d ago•42 comments

2D Signed Distance Functions

https://iquilezles.org/articles/distfunctions2d/
46•nickswalker•3d ago•3 comments

Last Year on My Mac: Look Back in Disbelief

https://eclecticlight.co/2025/12/28/last-year-on-my-mac-look-back-in-disbelief/
316•vitosartori•10h ago•223 comments

Rex is a safe kernel extension framework that allows Rust in the place of eBPF

https://github.com/rex-rs/rex
125•zdw•5d ago•56 comments

How we lost communication to entertainment

https://ploum.net/2025-12-15-communication-entertainment.html
626•8organicbits•1d ago•346 comments

Hungry Fat Cells Could Someday Starve Cancer

https://www.ucsf.edu/news/2025/01/429411/how-hungry-fat-cells-could-someday-starve-cancer-death
118•mrtnmrtn•10h ago•28 comments

Langfuse (YC W23) Is Hiring in Berlin, Germany

https://langfuse.com/careers
1•clemo_ra•8h ago

A "Prime" View of HN

https://dosaygo-studio.github.io/prime-news/index.html
37•keepamovin•3h ago•22 comments

Fathers’ choices may be packaged and passed down in sperm RNA

https://www.quantamagazine.org/how-dads-fitness-may-be-packaged-and-passed-down-in-sperm-rna-2025...
269•vismit2000•18h ago•165 comments

Deathbed Advice/Regret

https://hazn.com/deathbed-regret
31•paulpauper•3h ago•20 comments

Gpg.fail

https://gpg.fail
422•todsacerdoti•1d ago•260 comments

Dialtone – AOL 3.0 Server

https://dialtone.live/
99•rickcarlino•16h ago•47 comments

Rainbow Six Siege hacked as players get billions of credits and random bans

https://www.shanethegamer.com/esports-news/rainbow-six-siege-hacked-global-server-outage/
268•erhuve•1d ago•134 comments

Functional programming and reliability: ADTs, safety, critical infrastructure

https://blog.rastrian.dev/post/why-reliability-demands-functional-programming-adts-safety-and-cri...
132•rastrian•20h ago•133 comments

Liberating Bluetooth on the ESP32

https://exquisite.tube/w/mEzF442Q4hUXnhQ8HmfZuq
133•todsacerdoti•21h ago•26 comments

We "solved" C10K years ago yet we keep reinventing it (2003)

https://www.kegel.com/c10k.html
72•birdculture•2d ago

Comments

gnabgib•2d ago
(2011 / 2003)

Title: The C10K problem

Popular in:

2014 (112 points, 55 comments) https://news.ycombinator.com/item?id=7250432

2007 (13 points, 3 comments) https://news.ycombinator.com/item?id=45603

trueismywork•2d ago
With nginx and 256-core Epycs, most single servers can easily do 200k requests per second. Very few companies need more than that.
intothemild•2d ago
I can't tell if this is sarcasm or not.

They didn't have this kind of compute back when the article was written. Which is the point of the article.

trueismywork•1d ago
Half serious. I guess what I was saying is that it's the kind of science that is still very useful, but more to the nginx developers themselves. And most users now don't have to worry about this anymore.

Should have prefixed my comment with "nowadays"

hinkley•1d ago
In spring 2005 Azul introduced a 24 core machine tuned for Java. A couple years later they were at 48 and then jumped to an obscene 768 cores which seemed like such an imaginary number at the time that small companies didn’t really poke them to see what the prices were like. Like it was a typo.
fweimer•2h ago
Before clusters with fast interconnects were a thing, there were quite a few systems that had more than a thousand hardware threads: https://linuxdevices.org/worlds-largest-single-kernel-linux-...

We're slowly getting back to similarly-sized systems. IBM now has POWER systems with more than 1,500 threads (although I assume those are SMT8 configurations). This is a bit annoying because too many programs assume that the CPU mask fits into 128 bytes, which limits the CPU (hardware thread) count to 1,024. We fixed a few of these bugs twenty years ago, but as these systems fell out of use, similar problems are back.
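
For concreteness, a minimal sketch (not from the comment) of the bug class being described, assuming glibc's affinity API: a fixed cpu_set_t is 128 bytes, i.e. 1,024 bits, so sched_getaffinity() can fail with EINVAL on machines with more hardware threads unless the mask is grown dynamically with CPU_ALLOC().

    /* Grow the CPU mask until the kernel accepts it; a fixed cpu_set_t
     * (1,024 bits) is not enough on the largest machines. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        size_t ncpus = 1024;                /* the old assumption */

        for (;;) {
            cpu_set_t *set = CPU_ALLOC(ncpus);
            size_t setsize = CPU_ALLOC_SIZE(ncpus);
            if (set == NULL)
                return 1;

            CPU_ZERO_S(setsize, set);
            if (sched_getaffinity(0, setsize, set) == 0) {
                printf("runnable on %d of %zu possible CPUs\n",
                       CPU_COUNT_S(setsize, set), ncpus);
                CPU_FREE(set);
                return 0;
            }
            CPU_FREE(set);
            if (errno != EINVAL)            /* EINVAL: mask too small */
                return 1;
            ncpus *= 2;                     /* retry with a bigger mask */
        }
    }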

alexjplant•1h ago
> Driven by 1,024 Dual-Core Intel Itanium 2 processors, the new system will generate 13.1 TFLOPs (Teraflops, or trillions of calculations per second) of compute power.

This is equal to the combined single precision GPU and CPU horsepower of a modern MacBook [1]. Really makes you think about how resource-intensive even the simplest of modern software is...

[1] https://www.cpu-monkey.com/en/igpu-apple_m4_10_core

fweimer•1h ago
Note that those 13.1 TFLOPs are FP64, which isn't supported natively on the MacBook GPU. On the other hand, local/per-node memory bandwidth is significantly higher on the MacBook. (Apparently, SGI Altix only had 8.5 to 12.8 GB/s.) Total memory bandwidth on larger Altix systems was of course much higher due to the ridiculous node count. Access to remote memory on other nodes could be quite slow because it had to go through multiple router hops.
hinkley•1h ago
My Apple Watch can blow the doors off a Cray 1. It’s crazy.
marcosdumay•1h ago
The article was written exactly because they had machines capable enough at the time. But the software worked against it on every level.
api•2h ago
I’m shocked that a 256 core Epyc can’t do millions of requests per second at a minimum. Is it limited by the net connection or is there still this much inefficiency?
otterdude•2h ago
256 Processes x 10k clients (per the article) = 256K RPS
tempest_•2h ago
Like anything, it really depends on what they are doing. If you wanted to just open and close a connection you might run into bottlenecks in other parts of the stack before the CPU tops out, but the real point is that, yeah, a single machine is going to be enough.
zipy124•1h ago
It almost certainly can; even old dual-CPU, 16-core Intel systems could do four and a half million a second [1]. At a certain point network/kernel bottlenecks become apparent, though, rather than being compute limited.

[1]: https://www.amd.com/content/dam/amd/en/documents/products/et...

tempest_•2h ago
This is how I feel about this industry's fetishization of "scalability".

A lot of software time is spent making something scalable when, in 2025, I can probably run any site in the bottom 99% of the most-visited sites on the internet on a couple of machines and < $40k of capital.

tbrownaw•1h ago
> any site the bottom 99% of most visited sites on the internet

What % is the AWS console, and what counts as "running" it?

tempest_•31m ago
> What % is the AWS console

0%

Prior to the recent RAM insanity (a big caveat, I know), a 1U Supermicro machine with 768GB of RAM, some NVMe storage, and twin 32-core Epyc 9004s was ~$12K USD. You can get 3 of those and some redundant 10G network infra (people are literally throwing this out) for < $40k. Then you just have to find a rack/internet connection to put them in, which would be a few hundred a month.

The reality is most sites don't need multi-region setups; they have very predictable load, and 3 of those machines would be massive overkill for many. A lot of people like to think they will lose millions per second of downtime, and some sites certainly do, but most won't.

All of this of course would be using new stuff. If you wanted to go used, the most cost-effective option is the 5-year-old second-gen Xeon Scalables that are being dumped by cloud providers. Those are more than enough compute for most; they are just really thirsty, so you will pay for it on the power bill.

This of course is predicated on the assumption that you have the skill set to support these machines, and that is increasingly becoming less common. That said, as successful companies that started in the last 10 years do more "hybrid cloud", it is starting to come back around.

oblio•31m ago
Raw technical excellence doesn't rake in billions, despite what IT people keep saying.

Otherwise Viaweb would be the shining star of 2025. Instead it's a forgotten footnote on a path to programming with money (VC).

Animats•28m ago
The sites that think they need huge numbers of small network interactions are probably collecting too much detailed data about user interaction. Like capturing cursor movement. That might be worth doing for 1% of users to find hot spots, but capturing it for all of them is wasteful.

A lot of analytic data is like that. If you captured it for 1% of users you'd find out what you needed to know at 1% of the cost.
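
A tiny sketch of the 1% idea (purely illustrative; the hash and the bucket count are arbitrary choices, not anyone's production scheme): hash a stable user id and only collect detailed telemetry for users who land in one bucket out of 100, so the same ~1% of users are sampled across sessions.

    /* Deterministic ~1% sampling by user id (illustrative only).
     * FNV-1a is used here just because it is tiny and well known. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t fnv1a(const char *s)
    {
        uint64_t h = 14695981039346656037ULL;
        for (; *s; s++) {
            h ^= (unsigned char)*s;
            h *= 1099511628211ULL;
        }
        return h;
    }

    static bool in_telemetry_sample(const char *user_id)
    {
        return fnv1a(user_id) % 100 == 0;   /* ~1% of users */
    }

    int main(void)
    {
        const char *users[] = { "user-1", "user-2", "user-3" };
        for (int i = 0; i < 3; i++)
            printf("%s: %s\n", users[i],
                   in_telemetry_sample(users[i]) ? "sampled" : "skipped");
        return 0;
    }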

otterdude•2h ago
When people talk about a single server they're not talking about one hunk of metal, they're talking about 1 server process.

This article describes the 10k client connection problem; you should be handling 2.56M clients :)

marcosdumay•1h ago
When people talk about a single server they are pretty much talking about either a single physical box with a CPU inside or a VPS using a few processor threads.

When they say "most companies can run in a single server, but do backups" they usually mean the physical kind.

hinkley•1d ago
I don't think I'd even heard of C10K until around 2003.
hoppp•4h ago
We solved it 2 decades ago but then decided to use javascript on the server ...
wmf•2h ago
Node.js is actually pretty good at C10K but it failed at multicore and C10M.
mifreewil•2h ago
Node.js uses libuv, which implements strategy #2 mentioned on the linked webpage.

"libuv is a multi-platform C library that provides support for asynchronous I/O based on event loops. It supports epoll(4), kqueue(2)"

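The readiness-notification pattern the page describes, sketched very roughly below: one thread, nonblocking sockets, and epoll reporting which descriptors are ready. This is only an illustration of the idea, not how libuv is structured internally; the port number and buffer sizes are arbitrary.

    /* Rough single-threaded echo server using epoll: the C10K-style
     * "one thread, many nonblocking sockets" pattern. Error handling
     * is minimal on purpose. */
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void set_nonblocking(int fd)
    {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(8080),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        int one = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, SOMAXCONN);
        set_nonblocking(lfd);

        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
        epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

        struct epoll_event events[1024];
        for (;;) {
            int n = epoll_wait(ep, events, 1024, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == lfd) {                      /* new connection */
                    int cfd = accept(lfd, NULL, NULL);
                    if (cfd < 0)
                        continue;
                    set_nonblocking(cfd);
                    struct epoll_event cev = { .events = EPOLLIN,
                                               .data.fd = cfd };
                    epoll_ctl(ep, EPOLL_CTL_ADD, cfd, &cev);
                } else {                              /* readable client */
                    char buf[4096];
                    ssize_t r = read(fd, buf, sizeof buf);
                    if (r <= 0) {                     /* EOF or error */
                        epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                        close(fd);
                    } else {
                        write(fd, buf, (size_t)r);    /* naive echo */
                    }
                }
            }
        }
    }
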
marcosdumay•1h ago
Except that it wastes 2 or 3 orders of magnitude in performance and polls all the connections from a single OS thread, locking everything if it has to do extra work on any of them.

Picking the correct theoretical architecture can't save you if you bog down on every practical decision.

mifreewil•1h ago
I'm sure there are plenty of data/benchmarks out there and I'll let those speak for themselves, but I'll just point out that there are two built-in core modules in Node.js, worker_threads (threads) and cluster (processes), which are very easy to bolt on to an existing plain http app.
IgorPartola•1h ago
So think of it this way: you want to avoid calling malloc() to increase performance. JavaScript does not have the semantics to avoid this. You also want to avoid looping. JavaScript does not have the semantics to avoid it.

If you haven't had experience with actual performant code, JS can seem fast. But it is a Huffy bike compared to a Kawasaki H2. Sure, it is better than a kid's trike, but it is not a performance system by any stretch of the imagination. You use JS for convenience, not performance.
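
To make the malloc() point concrete, here is a hedged sketch of the kind of preallocated buffer pool a systems language lets you spell out explicitly; the struct, sizes, and function names are invented for the example. The hot path takes and returns buffers on a free list and never touches the allocator.

    /* Illustrative buffer pool: allocate everything once up front,
     * then acquire/release from a free list in O(1) with no malloc()
     * on the hot path. Sizes and names are made up for the sketch. */
    #include <stdio.h>
    #include <stdlib.h>

    #define POOL_SIZE 10000              /* one buffer per C10K connection */
    #define BUF_SIZE  4096

    struct conn_buf {
        struct conn_buf *next;           /* free-list link */
        char data[BUF_SIZE];
    };

    static struct conn_buf *free_list;

    static int pool_init(void)           /* all allocation happens here */
    {
        for (int i = 0; i < POOL_SIZE; i++) {
            struct conn_buf *b = malloc(sizeof *b);
            if (b == NULL)
                return -1;
            b->next = free_list;
            free_list = b;
        }
        return 0;
    }

    static struct conn_buf *pool_get(void)
    {
        struct conn_buf *b = free_list;
        if (b != NULL)
            free_list = b->next;
        return b;
    }

    static void pool_put(struct conn_buf *b)
    {
        b->next = free_list;
        free_list = b;
    }

    int main(void)
    {
        if (pool_init() != 0)
            return 1;
        struct conn_buf *b = pool_get();  /* no allocator call here */
        snprintf(b->data, BUF_SIZE, "hello");
        printf("%s\n", b->data);
        pool_put(b);
        return 0;
    }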

jauntywundrkind•1h ago
What a sad miserable cult it is, to be obsessed with performance, and to so desperately need to beat up on a language. I don't know if people spewing this shit are aware of how dull their radical extremism is?

You really don't have to go far down the TechEmpower benchmarks to get to JS. Losing, let's say, 33% performance versus an extremely tuned, much more minimalist framework on what is practically a micro-benchmark is far from a savage fall from civilization & the decadence of man, and deserves far less scorn and shame than what the cult of JS hate fuels its fires with.

I could go on about how JS has benefitted from having incredible work poured into it, because it is the most popular runtime on the planet, because it's everywhere, because there was a hot war for companies to try to beat each other on making their JS runtime good (one of the only langs with many runtimes, which is interesting). It's a bit excessive, & maybe it should have been a more deserving language (we can debate endlessly), but man... it just doesn't matter. Stressing out, being so mean and nasty (Huffy vs Kawasaki, @hoppp's even worse top-voted half-sentence snark takedown), trying to impress upon people this image that it's all just so bad: I think there's a ridiculous off-kilter tilting way too hard here, and far less of it is about good reasons and valid concerns; so so so much of it is this bandwagon of negative energy, of radical overconcern.

Like most tools & languages, it's what you do with it. With JS, we have a problem that (mainstream) software hadn't faced before, which is client-server architecture where the client might be a cruddy cheap phone somewhere and/or on a dodgy link. We are trying to build experiences that work across systems, sometimes with significant user-perceived latency. And so data & systems architecture matters a lot. Trying to keep the client primed with the up-to-date data it needs to render, and doing client work without blocking/while maintaining user responsiveness, are hard, fun, multi-threaded (webworker) challenges, for those folks that care.

And those challenges aren't unique to js. Other languages have similar challenges. Trying to multithread a win32 UI to avoid blocking also was a bit of a nightmare, working off main thread. Doing data sync is a nightmare. There's so many ways to get stuff wrong. And I think a lot of the js code out there does get it wrong. But we experience hundreds or thousands of websites a week, and crucial tools we use that are js client-server are badly architected. I sympathize with why js has such a bad rap. To me it usually seems like architectural app design issues, that companies are too busy building features to really consider the core, to establish data architectures that don't block or lag. And that's not a specific js problem.

There are faster systems, yes, but man, the energy being poured into blaming the worst of the world on JS seems ridiculous to me, like such a sad drag that avoids any interest or fascination in what is so interesting & so worthy. The language is the least remarkable part of the equation, practically doesn't matter, and the marginal performance levels are (with some exception for very specific cases) almost never a remotely critical factor. I'm just so tired, knowing such enormous, pointless, frivolous scorn and disdain is going to overwhelm all conversations and take over every thread forever, when it matters so little and is so very rarely a major determinant.

JS does not have to be the thought terminating cliche to every thread ever (and humbly, I'd assert it doesn't deserve that false conviction either. But either way!).

mystraline•29m ago
Tl;dr.

Care to summarize?

jauntywundrkind•17m ago
Architecture is far more important than runtime speed. (People are so easily swayed by "JS SUCKS LOL" because of experiences with terrible & careless client-server architectures, far more than js itself being "slow".)

The people ripping into js suck up the interesting energies, and bring nothing of value.

dilyevsky•11m ago
Nothing about the current JS ecosystem screams good architecture; it's hacks on hacks and a culture of totally ignoring everything outside of your own little bubble. Reminds me of the early-2000s JavaBeans scene.
IgorPartola•4m ago
If we are discussing C10K we are by definition discussing performance. JavaScript does not enter this conversation any more than BASIC. Yes of course architecture matters. Nobody has been arguing otherwise. But the point is that if you take the best architecture and implement it in the best available JS environment you are still nowhere close to the same architecture implemented in a systems language in terms of performance. You are welcome to your wishful thinking that you do not need to learn anything besides JavaScript when it comes to this conversation. But no matter how hard you argue it will continue being wishful thinking.

We are discussing tech where having a custom userland TCP stack is not just a viable option but nearly a requirement and you are talking about using a lighter JS framework. We are not having the same conversation. I highly recommend you get off Dunning-Kruger mountain by writing even a basic network server using a systems language to learn something new and interesting. It is more effort than flaming on a forum but much more instructive.

winrid•40m ago
(to be fair the memory manager reuses memory, so it's not calling out to malloc all the time, but yes a manually-managed impl. will be much more efficient)
IgorPartola•14m ago
Whichever way you manage memory, it is overhead. But the main problem is that the language does not have zero-copy semantics, so lots of things trigger a memcpy(). And if you also need to call malloc(), or even worse have to make syscalls, you are hosed. Syscalls aren't just functions; they require a whole lot of orchestration to make happen.

JavaScript engines are also JITted, which is better than a straight interpreter but, outside of microbenchmarks, worse than compiled code.

I use it for nearly all my projects. It is fine for most UI stuff and is OK for some server stuff (though Python is superior in every way). But I would never want to replace something like nginx with a JavaScript-based web server.
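
As one concrete illustration of the zero-copy point, not a claim about any particular server: on Linux, sendfile(2) moves file bytes to a socket inside the kernel, with no userspace buffer and no memcpy() in the application, versus the usual read()-then-write() loop that copies twice. The helper name below is made up.

    /* Hypothetical helper: send a whole file down an already-connected
     * socket with sendfile(2), so the data never enters userspace. */
    #include <fcntl.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int send_file_zero_copy(int sockfd, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }

        off_t offset = 0;
        while (offset < st.st_size) {
            ssize_t sent = sendfile(sockfd, fd, &offset,
                                    (size_t)(st.st_size - offset));
            if (sent <= 0)               /* real code would check errno */
                break;
        }

        close(fd);
        return offset == st.st_size ? 0 : -1;
    }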

gbuk2013•17m ago
IIRC V8 actually does some tricks under the hood to avoid mallocs, which is why Node.js can be unexpectedly fast (I saw some benchmarks where it was only 4x slower than equivalent C code) - for example, it recycles objects of the same shape (which is why it is beneficial not to modify object structure in hot code paths).
drogus•8m ago
Worker threads can't handle I/O, so a single-process Node.js app will still have a much lower connection limit than languages where you can handle I/O on multiple threads. Obviously, the second thing you mention, i.e. multiple processes, "solves" this problem, but at the cost of running more than one process. In the case of web apps it probably doesn't matter too much (although it can hurt performance, especially if you cache stuff in memory), but there are things where it just isn't a good trade-off.
_qua•2h ago
I personally think it's more of a https://c25k.com/ time of year.
alwa•1h ago
Apparently this refers to making a web server able to serve 10,000 clients simultaneously.
IgorPartola•1h ago
It has been long enough that C10K is not in common software engineer vernacular anymore. There was a time when people did not trust async anything. This was also a time when PHP was much more dominant on the web, async database drivers were rare and unreliable, and you had to roll your own thread pools.
amelius•1h ago
Yes. But it's easy to reinvent it, with modern OSes and tools.
readthenotes1•1h ago
The internationally famous Unix Network Programming book. An icon, a shibboleth, a cynosure

https://youtu.be/hjjydz40rNI?si=F7aLOSkLqMzgh2-U

(From Wayne's World--how we knew the comedians had smart advisors)