Also, as mentioned, nginx serving a blog site will certainly not be hitting the disk.
Broadly speaking in 2025 if a website is slow it is 100% the fault of the app-specific code being run in the web request. I've been HN'd before on a very small VPS but since my blog is now all static content it doesn't even notice... even when it was making 4 or 5 DB reads per page it didn't notice. This web server is basically fast not because "it's fast" but simply because there's no reason for it to be slow. That's how computers are nowadays; you really have to give them a reason to be slow for a task like this.
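Roughly what that looks like if you want the world's dumbest version of it (a sketch, not anyone's actual setup; the ./public directory and port are made up): a bare static file server is a handful of lines, and once the hot pages are in the OS page cache they get served straight out of memory.

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve pre-generated HTML/CSS/JS straight out of ./public.
	// Once the hot pages are in the OS page cache, repeated requests
	// never actually touch the disk.
	fs := http.FileServer(http.Dir("./public"))
	log.Fatal(http.ListenAndServe(":8080", fs))
}
```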
You'd think everyone would know this, but I fight a surprising number of rule-of-thumb estimates from coworkers based on 2000- or 2010-era system performance, even from developers who weren't developing then! It's really easy not to realize how much performance you're throwing away by using a scripting language, stacking fancy runtime features whose costs multiply, making bad use of databases with too many queries, and failing to do even basic optimizations on those databases, and to come away thinking that 50 queries per second is a lot, when in fact in 2025 you hardly need to consider the performance of the web requests themselves until you're into the many thousands per second per core [1]... and that's just when you need to start thinking about it.
Depending on what you are doing, of course, you may need to be considering how your code runs well before that, if your web requests are intrinsically expensive. But you don't need to worry about the web itself until at least that level of performance, and generally it'll be your code struggling to keep up, not the core web server or framework.
[1]: https://www.techempower.com/benchmarks/#section=data-r23
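If you want to sanity-check the "thousands per core" part on your own machine, a throwaway benchmark of a do-nothing handler is enough. Go here purely as an illustration; the file name and package are assumptions, and httptest skips the network entirely, so this only measures the framework's share of the per-request cost (roughly on the order of a microsecond on recent hardware), not a full end-to-end request.

```go
// Save as noop_test.go and run: go test -bench=NoopHandler
package bench

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// BenchmarkNoopHandler times a handler that does no app work at all,
// so whatever it reports is pure framework overhead. Everything on top
// of that -- templates, ORM calls, serialization -- is your code.
func BenchmarkNoopHandler(b *testing.B) {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	req := httptest.NewRequest(http.MethodGet, "/", nil)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		rec := httptest.NewRecorder()
		handler.ServeHTTP(rec, req)
	}
}
```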
Pretending this is not the case is the bread and butter of so many companies nowadays that saying it out loud is basically like screaming into the void.
You have no idea how many "cloud-native" applications I have seen throwing 10k a month at Databricks for things that could have been done just as efficiently by a small server in a cupboard with a proper architecture. The company’s architects did enjoy the conferences, though.
At that point, it’s probably better for you to keep pretending and enjoy the graft like everyone else. Unless you are the one paying, of course.
And even then you can have a default Cloudflare setup that will just cache most of the stuff.
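For what "cache most of the stuff" means on the origin side: as I understand it, Cloudflare's defaults cache static assets by file extension and leave HTML alone unless you opt in via a cache rule or a Cache-Control header the edge will honor. A hypothetical sketch of the origin sending that header (handler and page content are made up):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Ask browsers and any CDN/edge cache in front of the origin to
		// keep this response for five minutes; whether the edge actually
		// does depends on its configuration.
		w.Header().Set("Cache-Control", "public, max-age=300")
		w.Write([]byte("<html><body>mostly static page</body></html>"))
	})
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```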
I once had two articles hit the top spot on HN. Meh: https://x.com/dmitriid/status/1944765925162471619
:)
No mention of why it needs to go through a Linode server.