
Andrej Karpathy – It will take a decade to work through the issues with agents

https://www.dwarkesh.com/p/andrej-karpathy
552•ctoth•10h ago•572 comments

The Unix Executable as a Smalltalk Method (and Unix-Smalltalk Unification) [pdf]

https://programmingmadecomplicated.wordpress.com/wp-content/uploads/2025/10/onward25-jakubovic.pdf
36•pcfwik•3h ago•4 comments

New Work by Gary Larson

https://www.thefarside.com/new-stuff
110•jkestner•6h ago•12 comments

The pivot

https://www.antipope.org/charlie/blog-static/2025/10/the-pivot-1.html
222•AndrewDucker•8h ago•87 comments

Exploring PostgreSQL 18's new UUIDv7 support

https://aiven.io/blog/exploring-postgresql-18-new-uuidv7-support
185•s4i•2d ago•132 comments

PlayStation 3 Architecture (2021)

https://www.copetti.org/writings/consoles/playstation-3
104•adamwk•3d ago•17 comments

Live Stream from the Namib Desert

https://bookofjoe2.blogspot.com/2025/10/live-stream-from-namib-desert.html
417•surprisetalk•15h ago•82 comments

Claude Skills are awesome, maybe a bigger deal than MCP

https://simonwillison.net/2025/Oct/16/claude-skills/
445•weinzierl•10h ago•256 comments

Show HN: ServiceRadar – open-source Network Observability Platform

https://github.com/carverauto/serviceradar
10•carverauto•2h ago•0 comments

WebMCP

https://github.com/jasonjmcghee/WebMCP
52•sanj•6h ago•17 comments

Tahoe's Elephant

https://eclecticlight.co/2025/10/12/last-week-on-my-mac-tahoes-elephant/
21•GavinAnderegg•5d ago•7 comments

Claude Code vs. Codex: I built a sentiment dashboard from Reddit comments

https://www.aiengineering.report/p/claude-code-vs-codex-sentiment-analysis-reddit
72•waprin•1d ago•31 comments

EVs are depreciating faster than gas-powered cars

https://restofworld.org/2025/ev-depreciation-blusmart-collapse/
293•belter•17h ago•685 comments

The Rapper 50 Cent, Adjusted for Inflation

https://50centadjustedforinflation.com/
528•gaws•11h ago•146 comments

Asking AI to build scrapers should be easy right?

https://www.skyvern.com/blog/asking-ai-to-build-scrapers-should-be-easy-right/
84•suchintan•9h ago•41 comments

Career Asymptotes

https://molochinations.substack.com/p/career-asymptotes
33•neiljohnson•4d ago•34 comments

4Chan Lawyer publishes Ofcom correspondence

https://alecmuffett.com/article/117792
345•alecmuffett•20h ago•459 comments

NeXT Computer Offices

https://archive.org/details/NeXTComputerOffices
63•walterbell•3h ago•8 comments

Researchers Discover the Optimal Way to Optimize

https://www.quantamagazine.org/researchers-discover-the-optimal-way-to-optimize-20251013/
28•jnord•4d ago•4 comments

Intercellular communication in the brain through a dendritic nanotubular network

https://www.science.org/doi/10.1126/science.adr7403
263•marshfram•12h ago•209 comments

Every vibe-coded website is the same page with different words. So I made that

https://vibe-coded.lol/
86•todsacerdoti•5h ago•69 comments

If the Gumshoe Fits: The Thomas Pynchon Experience

https://www.bookforum.com/print/3202/if-the-gumshoe-fits-62416
4•prismatic•1w ago•0 comments

MIT physicists improve the precision of atomic clocks

https://news.mit.edu/2025/mit-physicists-improve-atomic-clocks-precision-1008
69•pykello•6d ago•28 comments

Amazon’s Ring to partner with Flock

https://techcrunch.com/2025/10/16/amazons-ring-to-partner-with-flock-a-network-of-ai-cameras-used...
471•gman83•18h ago•413 comments

Ruby core team takes ownership of RubyGems and Bundler

https://www.ruby-lang.org/en/news/2025/10/17/rubygems-repository-transition/
591•sebiw•15h ago•310 comments

Jeep Wrangler Owners Waiting for Answers Week After an Update Bricked Their Cars

https://www.thedrive.com/news/jeep-wrangler-4xe-owners-still-waiting-for-answers-a-week-after-an-...
47•pseudolus•4h ago•17 comments

Smithsonian Open Access Images

https://www.si.edu/openaccess
51•bookofjoe•3d ago•5 comments

How I bypassed Amazon's Kindle web DRM

https://blog.pixelmelt.dev/kindle-web-drm/
1564•pixelmelt•1d ago•479 comments

GOG Has Had to Hire Private Investigators to Track Down IP Rights Holders

https://www.thegamer.com/gog-private-investigators-off-the-grid-ip-rights-holders/
201•haunter•9h ago•90 comments

The Wi-Fi Revolution (2003)

https://www.wired.com/2003/05/wifirevolution/
66•Cieplak•5d ago•42 comments

Migrating from AWS to Hetzner

https://digitalsociety.coop/posts/migrating-to-hetzner-cloud/
1025•pingoo101010•18h ago

Comments

geenat•17h ago
Yup. It's very good for the ecosystem for AWS to have good competition.

Amazon gets far too greedy- particularly bad when you need egress.

Also an "amazon core" is like 1/8th of a physical cpu core.

CaptainOfCoit•17h ago
Using a dedicated server for the first time, after using VPSes or similar since learning programming and infrastructure, is like a whole new world. Suddenly you realize the application had been running in molasses all along, and the whole idea of "We need 10 VPS instances" seems so stupid...
vidarh•17h ago
My favorite Jeff Bezos quote is one that applies very much to AWS: "your margin is my opportunity".

Clearly, when Amazon realised the enormous potential in AWS, they scrapped that principle. But the idea behind it - that an organisation used to fat margins will not be able to adapt in the face of a competitor built from the ground up to live off razor-thin margins - still applies.

AWS is ripe for the picking. They "can't" drop prices much, because their big competitors have similar margins, and a price war with them would devastate the earnings of all of them no matter how much extra market share they were to win.

The challenge is the enormous mindshare they have, and how many people are emotionally invested even in believing AWS is actually cost effective.

master_crab•17h ago
"your margin is my opportunity"

Yup, that phrase was running through my head as I skimmed the comments.

To that, an interesting observation I’ve made is that the frequency of their service price cuts has dropped in the past several years. And instances of price increases have started to trickle in (like the public IP cost).

If core compute and network keep getting cheaper faster than inflation, and they never drop their prices (or drop them relatively less), the margins are growing.
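
As a back-of-the-envelope illustration of that effect (assumed numbers, not actual AWS figures): if the underlying cost deflates ~10%/year while the list price stays flat, the margin widens mechanically:

    # Illustrative only: assumed numbers, not actual AWS costs or prices.
    price = 100.0      # what the customer pays per unit, held flat
    cost = 60.0        # provider's underlying cost per unit today
    deflation = 0.10   # assumed yearly drop in hardware/network cost

    for year in range(6):
        margin = (price - cost) / price
        print(f"year {year}: cost={cost:5.1f} margin={margin:.0%}")
        cost *= 1 - deflation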

hylaride•13h ago
The worst aspect of AWS is that once you get to a certain size, you can negotiate bulk agreements, especially for things like bandwidth. At a previous job, we cut our bill down by quite a bit this way, but it was annoying to have to schmooze with sales people.
vidarh•13h ago
Great you're pointing it out, as this is also something a lot of organisations are entirely unaware of in my experience.

If you're paying more than a few hundred k/year (it's worth starting to try below that; success rates will vary greatly) and are still paying list prices, you might as well set fire to money.

CaptainOfCoit•17h ago
Best feature of (some of) the dedicated servers Hetzner offers is the unmetered bandwidth. I'm hosting a couple of image-heavy websites (mostly modding related), and since moving to Hetzner I sleep much better knowing I'll pay the same price every single month, and have for the ~3 years I've been a happy Hetzner customer.
AmazingTurtle•17h ago
https://news.ycombinator.com/item?id=44038591
CaptainOfCoit•17h ago
Less biased view of "Hetzner on HN": https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... (skipping all comments from this submission)

In the end, Hetzner is a provider of "cheap but not 100% uptime" infrastructure, which is probably why it's so cheap in the first place.

Like every other provider, if you want 100% uptime (or close to it), you really need at least N+1 instances of everything, as every hosting provider ends up fucking something up sooner or later.
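
The N+1 arithmetic is worth spelling out. With assumed (purely illustrative) per-host availabilities, two independent cheap hosts behind failover beat one premium host, ignoring correlated failures (shared switch, same DC), which do happen:

    # Assumed per-host availability figures, purely illustrative.
    cheap_host = 0.995    # e.g. a budget dedicated server
    fancy_host = 0.9995   # e.g. a premium cloud instance

    # Probability that at least one of two independent cheap hosts is up.
    pair = 1 - (1 - cheap_host) ** 2

    print(f"one cheap host: {cheap_host:.4%}")   # 99.5000%
    print(f"one fancy host: {fancy_host:.4%}")   # 99.9500%
    print(f"cheap N+1 pair: {pair:.4%}")         # 99.9975%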

croes•17h ago
Can you name a provider with 100% uptime? Or is it 100%¹?
CaptainOfCoit•17h ago
No, which is why I wrote "Like every other provider, if you want 100% uptime ..."
master_crab•17h ago
No one provides 100% uptime for core compute. That’s their point. Also, good luck extracting anything out of those companies that offer 99.99% and don’t meet it.

Sure, they’ll throw you some service credits. But it’ll always be orders of magnitude less than the cost of the disruption to you.

esskay•14h ago
You make it sound like they are in some way less reliable or you've got more downtime - neither of which is true. You've got just as much chance of having downtime there as you have with any other provider.
CaptainOfCoit•11h ago
Yeah, that's not true in my experience, and I say this as a very happy Hetzner customer of several years: they are a step below in reliability, and no matter how much I love them it's hard not to see that.

I've used Vultr for about the same amount of time, and I never got an email that some network switch had a hardware failure and it'll take a couple of hours to restore connectivity, but I've had that happen with Hetzner more than once in the same time-span. And again, I say this as a Hetzner-lover, and someone who prefers Hetzner over Vultr any day of the week.

CuriouslyC•17h ago
I use Hetzner for this reason, but there are caveats. They're great but their uptime isn't as good as AWS and they don't have great region coverage. I strongly advise people to pair them with Cloudflare. Use Hetzner for your core with K8s, and use R2/D1/KV with Container Durable Objects to add edge coprocessing. I also like to shard customer data to individual DOs, this takes a ton of scaling pressure off your data layer, while being more secure/private.
CaptainOfCoit•17h ago
I do this too. Hetzner dedicated servers for the "core" and data-storage basically, and thin/tiny edge-nodes hosted at OVH across the globe as my homebrew CDN.
BoredPositron•17h ago
That's exactly how we do it; we have Gcore in the mix for GPU compute though.
geenat•17h ago
AWS has certainly had some pretty public-facing downtime ;) I'd say it's been roughly the same in my experience - the only way to avoid it IMHO is multi-region.
likium•16h ago
If customer data is considered edge, then what’s core?
CuriouslyC•15h ago
Everything that's shared between customers, internal system state and customer metadata. I use Postgres with FDWs + Steampipe + Debezium to integrate everything, it's more like a control plane than a database. This model lets you go web scale with one decently sized database and a read replica, since you're only hitting PG for fairly static shared data, Cloudflare Hyperdrive gives insane performance.
LunaSea•17h ago
This is also what we did at my company.

We kept most smaller-scale, stateless services in AWS but migrated databases and high-scale / high-performance services to bare metal servers.

Backups are stored in S3 so we still benefit from their availability.

Performance is much higher thanks to physically attached SSDs and DDR5 on-die RAM.

Costs are drastically lower for much larger server sizes, which means we are not getting stressed about eventually needing to scale up our RDS / EC2 costs.

rs_rs_rs_rs_rs•17h ago
You got me with the title and I was curious at first, but then I got to the part where it shows the bill and realized this is just a toy project.
CaptainOfCoit•17h ago
You can tell if a project is a toy or not based on the bill? How about actually looking at what they do? https://digitalsociety.coop/

It's literally an agency doing professional development for others, among other services. Clearly not "toys".

HN dismissals are going down in quality; they at least used to be well researched some years ago. Now people just spew out the first thing that comes to mind, with zero validation before hitting that "reply" button.

endymion-light•17h ago
It's really dismissive and frankly quite ignorant to assume that just because a product doesn't have a massive AWS bill it's a toy project.

It's a rotten attitude, and judging a project's worth by its AWS bill is a very poor comparator. I could spin up a massive AWS bill doing some pointless machine learning workloads; is that suddenly a valid project in your eyes?

rs_rs_rs_rs_rs•15h ago
>I could spin up a massive AWS bill doing some pointless machine learning workloads; is that suddenly a valid project in your eyes?

Can you spin it up on an AWS competitor for a fraction of the cost? Absolutely yes, I would be interested in reading about it!

endymion-light•15h ago
I will do - but my latest ML model is attempting to create ley lines of different McDonald's across the country; I don't think it's worthy of being considered a product.
lisperforlife•17h ago
I think you can get much farther with dedicated servers. I run a couple of nodes on Hetzner. The performance you get from a dedicated machine, even a 3-year-old one from their server auction, is absolutely bonkers and cannot be compared to VMs. The thing is that most server hardware is geared towards high-core-count, low-clock-speed processors that optimize for I/O rather than compute, and it is overprovisioned by all cloud providers. Even the I/O side of the disk is crazy: all sorts of shenanigans are used to make a drive sitting on a NAS emulate a local disk. Most startups do not need the hyper-virtualized, NAS-backed drive. You can go much farther, and much more cost-effectively, with dedicated server rentals from Hetzner. I would love to know if there are any North American (particularly Canadian) companies that can compete with Hetzner's price and quality of service. I know of OVH but I would love to know others in the same space.
tonyhart7•17h ago
Yeah, this is what they do in their "high performance" servers: they just use gaming CPUs.
CaptainOfCoit•17h ago
> I know of OVH but I would love to know others in the same space.

When I've needed dedicated servers in the US I've used Vultr in the past; relatively nice pricing, only missing unmetered bandwidth for it to be my go-to. But in all those US-specific cases others were paying for it, so it hasn't bothered me, compared to the personal/community stuff I host at Hetzner and pay for myself.

rapind•13h ago
I've been eyeing Vultr for dedicated metal in Canada (Toronto datacenter). How do they measure up to Hetzner? I'm not looking to get the best possible deal, just better value than EC2 (which costs me a fair amount in egress).
CaptainOfCoit•11h ago
I'd go for Hetzner any day of the week, but if a client really screams "Servers MUST be in North America" I'd use Vultr before anything else, unless the client is bandwidth-sensitive.
Sanzig•8h ago
If you are a Canadian entity, I would go OVH rather than Vultr. OVH US is a completely distinct legal entity from their Canada and EU offerings, specifically so that the rest of OVH is immune to the CLOUD Act. Vultr is an American company, so if Uncle Sam asks for your data, even at a Canadian location, there's nothing Vultr nor you can do to stop it.

This wasn't a consideration a few years ago, but with how quickly things are devolving south of the border it's now much more of a risk. If I were operating a company in Canada, I would want to be able to assure my customers that their data won't get expropriated to the US without first going through Canadian courts.

OVH Canada now has two Canadian locations, by the way - the original location in Beauharnois and a new location in Cambridge - so you can even have two zones for redundancy.

rapind•7h ago
Yes I was also looking at OVH. I heard some horror stories about a fire several years ago and a lack of backups though...
michalsustr•17h ago
Interserver. But I don’t have personal experience (yet)
hshdhdhehd•17h ago
It can affect system design. Just chuck it all on one box! And it will be crazy fast.
bakugo•17h ago
Be warned though that, when renting dedicated servers, there are certain issues you might have to deal with that usually aren't a factor when renting a VPS.

For example, I got a dedicated server from Hetzner earlier this year with a consumer Ryzen CPU that had unstable SIMD (ZFS checksums would randomly fail, and mprime also reported errors). Opened a ticket about it and they basically told me it wasn't an issue because their diagnostics couldn't detect it.

CaptainOfCoit•17h ago
Yeah, their support, for better or worse, is really technical, and you need to send all the evidence of any faults to convince them. But when I've had random issues happen, I've sent them all the troubleshooting and evidence I came across, and a couple of hours later they had provisioned a new host for me with the same specs.

And based on our different experiences, the quality of care you receive could differ too :)

bakugo•17h ago
> and a couple of hours later they had provisioned a new host for me with the same specs.

To be fair, they probably would've done the same for me if I'd pushed the issue further, but after over a week of trying to diagnose the issue and convince them that it wasn't a problem with the hard drives (they said one of the drives was likely faulty and insisted on replacing it and having me resilver the zpool to see if it fixed the issue; spoiler: it didn't) I just gave up, disabled SIMD in ZFS and moved on.

CaptainOfCoit•17h ago
> but after over a week of trying to diagnose the issue and convince them that it wasn't a problem

That sucks big time :( In the most recent case I can recall, I successfully got access, noticed weirdness, gathered data and sent an email, and had a new instance within 2-3 hours.

Overall, based on comments here on HN and elsewhere, the quality and speed of support is really uneven.

vanviegen•15h ago
> based on comments here on HN and elsewhere, the quality and speed of support is really uneven.

Can you name one tech company that's scaled past the point where the founders are closely involved with support and still has consistently good tech support? I think this is just really hard to get right, as many customers are not as knowledgeable as they think they are.

CaptainOfCoit•15h ago
"Consistently" is hard, people's experiences tend to differ with every company out there, even by what country you're currently in. For example, I've always had quick and reasonable replies from Coinbase support, but I know friends who've had the complete opposite experience with Coinbase, so won't claim they're consistent. But their replies to me has been consistent at least.

Probably the company most people have had any sort of consistency from would be Stripe, I think. Of course, there are cases where they haven't been great, but if you ask me for the company with the best tech support, Stripe comes to mind first.

I'm not sure it's active anymore, but there used to be a somewhat hidden support channel in #stripe@freenode back in the day, where a bunch of Stripe developers hung out and helped users in an unofficial capacity. That channel was a godsend more than once.

trollied•3h ago
It's not hard to get right. It's expensive to get right. And that affects pricing and profitability. You have to have a threshold.
eahm•17h ago
I recently rediscovered this website that might help: https://vpspricetracker.com

Too cool not to share; most of the providers listed there have dedicated servers too.

CaptainOfCoit•17h ago
Great website, but what a blunder to display the results as "cards" rather than a good old table, so you could scan the results instead of having to actually read each one. Makes it really hard to quickly find what you're looking for...

Edit: Ironically, that website doesn't have Hetzner in their index.

dizhn•16h ago
That is weird indeed. But I bet you are getting Hetzner results indirectly through resellers :) (Yeah I checked one Frankfurt based datacenter named FS1 - probably for Falkenstein. They might be colo or another datacenter there of course)
ta12653421•17h ago
++1

excellent website, thanks.

chromehearts•16h ago
Amazing website, glad to know that I already have a super great offer! But will definitely share this
63stack•14h ago
This is an amazing site
aantix•13h ago
What a great site. Thanks for sharing!
shrubble•17h ago
I have used wholesaleinternet.net and they are centrally located in the USA.
yread•16h ago
ugh, $235 a month for a 4TB SSD?! You can buy one for that price and have money left over
zakki•17h ago
Try www.wowrack.com or www.serverstadium.com. (I work for them).
yread•16h ago
I used GTHost in the US. Performance is not bad, but you do end up paying more if you need a 1Gbit/s link.
jwr•16h ago
I actually benchmarked this and wrote an article several years back, still very much applicable: https://jan.rychter.com/enblog/cloud-server-cpu-performance-...
kees99•16h ago
Did you "preheat" during those tests? It is very common for cloud instances to have "burstable" vCPUs. That is - after boot (or long idle), you get decent performance for first few minutes, then performance gradually tanks to a mere fraction of the initial burst.
fakwandi_priv•15h ago
> The total wall clock time for the build was measured. The smaller the better. I always did one build to prime the caches and discarded the first result.

The article is worth the read.
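
A crude way to check the "preheat" effect yourself is to run a fixed CPU workload for a while and watch whether throughput decays; a minimal sketch (on burstable instances the first few samples are often much faster than steady state):

    import hashlib
    import time

    # Burst-throttling probe: hash a 1 MiB buffer in a tight loop and
    # report throughput once a minute; decay suggests a burstable vCPU.
    buf = b"x" * (1 << 20)

    for minute in range(15):
        deadline = time.monotonic() + 60
        mib = 0
        while time.monotonic() < deadline:
            hashlib.sha256(buf).digest()
            mib += 1
        print(f"minute {minute:2d}: {mib} MiB/min hashed")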

vicarrion•4h ago
I also did a benchmark between cloud providers recently and compared performance for price

https://dillonshook.com/postgres-cloud-benchmarks-for-indie-...

fireant•1h ago
That isn't the same as the parent's though: you are comparing VMs instead of dedicated servers.
codethief•16h ago
> I would love to know if there are any North American (particularly Canadian) companies that can compete with Hetzner's price and quality of service

FWIW, Hetzner has two data centers in the US, in case you're just looking for "Hetzner quality but in the US", not for "American/Canadian companies similar to Hetzner".

CaptainOfCoit•16h ago
IIRC, Hetzner's dedicated instances are only available in their German and Finnish data centers, not anywhere else sadly :/
joshstrange•16h ago
This is correct, they only offer VPS in the US.
atonse•14h ago
But are the VPSs also similarly much better performing than AWS?
joshstrange•8h ago
I don't know the answer to that; I believe they are a good bit cheaper (it's been a while since I compared apples to apples). My understanding is that they are a good deal, but I can't say that with 100% certainty.
ccakes•14h ago
latitude.sh do bare metal in the US well
GordonS•13h ago
Yes, but they are vastly more expensive than Hetzner (looks like pricing starts at just under $200/m for 6 cores).
matt-p•14h ago
Yeah, no dedicated servers in the US sadly. I'm not aware of anyone who can quite match Hetzner's pricing in the US (but if someone does, I'd love to know!). https://www.serversearcher.com throws up Clouvider and Latitude at good pricing, but... not Hetzner levels by any means.
MrPowerGamerBR•12h ago
I haven't checked Hetzner's prices in a while, but OVHcloud has dedicated servers in the US and in Canada (I've been using their dedicated servers for years already and they are pretty dang good)
matt-p•11h ago
Seems to be broadly the same sadly, but thanks - it's interesting to see they're all hovering quite close to each other.
g8oz•10h ago
Similarly OVH is French and has bare metal in their US and Canadian data centers.
shlomo_z•7h ago
I have been considering colocating at endoffice (I saw the suggestion once at codinghorror.com)
lossolo•16h ago
I've been using dedicated servers for 20 years. Here's my top list:

Hetzner, OVH, Leaseweb, and Scaleway (EU locations only).

I've used other providers as well, but I won't mention them because they were either too small or had issues.

citrin_ru•16h ago
VMs are a middle ground between AWS and dedicated hardware. With hardware you need to monitor it, report problems/failures to the provider, and make necessary configuration changes (add/remove a node to/from a cluster, etc.). If a team is coming from AWS, it may have no experience with monitoring/troubleshooting problems caused by imperfect hardware.
ta1243•16h ago
For self hosted / cohosting my own kit, I buy refurbed servers from https://www.etb-tech.com/ because I can spec exactly what I want and see how the cost varies, what the delivery time is, etc.

Years ago Broadberry had a similar thing with Supermicro, but not any more. You have to talk to a salesperson about how they can rip you off. Then they don't give you what you specced anyway -- I spec 8x8G sticks of RAM, they provide 2x32G, etc.

wongarsu•16h ago
> I would love to know if there are any North American (particularly Canadian) companies that can compete with Hetzner's price and quality of service.

In a thread two days ago, https://ioflood.com/ was recommended as a US-based alternative

amelius•16h ago
But I'm looking more for "compute flood" ...
micw•16h ago
I have seen Hetzner's cloud block storage be quite slow; it soon became a bottleneck on our Timescale databases. Now we're testing netcup.com's "root servers", which are VPSes with dedicated CPU cores and lots of very fast storage.
nik736•15h ago
They limit them to 7500 IOPS, as stated in their docs. It also doesn't scale with size; the limit is there for every volume of any size.
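
For intuition, a hard IOPS cap translates into a throughput ceiling that depends entirely on I/O size (taking the 7500 figure at face value):

    # Assuming the 7500 IOPS cap; block sizes are typical DB/filesystem values.
    iops = 7500
    for block_kib in (4, 8, 16, 64):
        mib_per_s = iops * block_kib / 1024
        print(f"{block_kib:3d} KiB blocks: ~{mib_per_s:6.1f} MiB/s ceiling")
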
deaux•16h ago
On a similar note, I'm looking for a "Hetzner, but in APAC, particularly East Asia". I've struggled to find good options for any of JP, TW or KR.
b0ner_t0ner•13h ago
LayerStack is very fast in APAC:

    https://www.layerstack.com/en/dedicated-cloud
deaux•8h ago
Going to try this out, looks very much like what I was looking for.
BiraIgnacio•15h ago
In Canada

https://www.hostpapa.ca/

https://www.cacloud.com/

https://www.keepsec.ca/

https://www.canspace.ca/

jamwil•5h ago
Thanks for this.
matt-p•15h ago
Quite a few options on https://serversearcher.com that sell in US/CA.

Clouvider is available in a lot of US DCs: 4GB RAM / 2 CPU / 80GB NVMe and a 10Gb port for like $6 a month.

ozim•15h ago
As mentioned multiple times in other comments and places, people think that doing what Google or FB does should be what everyone else is doing.

We are running modest operations on a European VPS provider where I work, and whenever we get a new hire (business or technical, doesn't matter) it is like Groundhog Day — I have to explain: WE ALREADY ARE IN THE CLOUD, NO YOU WILL NOT START A "MIGRATING TO CLOUD" PROJECT ON MY WATCH SO YOU CAN PAD YOUR CV AND MOVE TO ANOTHER COMPANY TO RUIN THEIR INFRA — or something along those lines, but after asking ChatGPT to make the tone more friendly.

PeterStuer•14h ago
The number of times I have seen fresh "architects" come in with an architectural proposal for a 10-user internal LoB app that they got from a Meta- or Microsoft-worldscale B2C service blueprint ...
kccqzy•13h ago
> doing what Google or FB is doing

Google doesn't even deploy most of its own code to run on VMs. Containers yes but not VMs.

dijit•13h ago
Yeah, the irony being Google runs VMs in Containers but not the other way around.
ozim•11h ago
Well, I think that’s the point: people think that if we run VPSes rather than containers, or some app fabric / serverless / PaaS, we are "not using real cloud". But we use IaaS and that is also proper cloud.
torginus•13h ago
Virtualization has crazy overhead - when we moved to metal instances in AWS, we gained like 20-25% performance. I thought that since AWS has the smartest folks in the business, and Intel & co. have been at this for decades, it'd be a couple percent overhead at most, but no.
mrinterweb•9h ago
One thing that frustrates me with estimating performance on AWS is that I have to dramatically estimate down from the performance of my dev laptop (M2 MBP). I've noticed performance around 15x slower when deployed to AWS. I realize that's an anecdotal number, but it's a fairly consistent trend I've seen at different companies running on cloud hosting services. One of the biggest performance hits is latency between servers: if your server is making hundreds or thousands of DB queries per second, you're going to feel that pain on the cloud even if your cloud DB server's CPU is relatively bored. It is network latency. I look at the costs of AWS and it is easy to spend >$100,000/month.

I have run services on bare metal and VPSes, and I always got far better performance than I can get from AWS or GCP, for a small fraction of the cost. To me "cloud" means vendor lock-in, terrible performance, and wild costs.
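
The latency point is easy to quantify: a chatty request is dominated by round trips, so the same code behaves very differently at LAN versus cross-AZ latencies (illustrative RTTs; real numbers depend on the path):

    # Illustrative round-trip times; real numbers depend on the network path.
    queries_per_request = 500  # a chatty ORM can easily issue this many
    for name, rtt_ms in [("same host", 0.05),
                         ("same rack", 0.2),
                         ("cross-AZ cloud", 1.0)]:
        total_ms = queries_per_request * rtt_ms
        print(f"{name:15s}: {total_ms:7.1f} ms in round trips alone")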

benjiro•4h ago
> It is network latency.

People do not realize that the fancy infinite storage scaling means AWS etc. run network-based storage. And that, like on a DB, can be a 10x performance hit.

piokoch•17h ago
Well, from what I see, the authors exchanged an AWS-managed Kubernetes cluster for a self-hosted Kubernetes cluster on Talos Linux. The question is whether the $449.50/month previously paid to AWS will cover the additional work needed for self-hosting.
mystifyingpoi•17h ago
All the effort that previously wasn't required to operate EKS, but now is required to operate self-hosted Kubernetes, will be pushed onto existing engineers as a bit of extra work, with no extra pay.

That's the best-case scenario. In the worst case, some cluster f-up will eat 10x that in engineering time.

vidarh•16h ago
As someone who has done consulting in this space for a decade: clients usually end up needing more help to run their AWS setup than to self-host.
jokethrowaway•17h ago
Having deployed servers well before AWS was a thing, AWS always felt incredibly overpriced.

The only benefit you get is reliability; temporary network issues on AWS are not a thing.

On DigitalOcean they are fairly bad (I lose thousands of requests almost every month and get pennies in credit back when I complain, while my churning users cost way more); on Hetzner I've heard mixed reviews: some people complain, some say it's extremely reliable.

I'm looking forward to trying Hetzner out!

CaptainOfCoit•17h ago
> Having deployed servers well before AWS was a thing, AWS always felt incredibly overpriced.

Yeah, I remember when AWS first appeared, and the value proposition was basically "It's expensive, but you can press a button and a minute later you have a new instance, so we can scale really quickly". Companies that more or less know their weekly workload don't really get any benefits, just more expensive monthly bills.

But somewhere along the line, people started thinking it was easier to use AWS than the alternatives, and I even heard people saying it's cheaper...

vidarh•16h ago
You'll even see people in a lot of the HN threads on the subject refusing to believe AWS is expensive, even in the face of a lot of us with expensive (EDIT: meant "extensive", but will leave it there as the typo is also apt...) experience running systems on both AWS and alternatives.

The biggest innovation AWS delivered was to convince engineers they are cheap, while wresting control of provisioning away from the people with actual visibility into the costs.

breadislove•17h ago
Hetzner is really great until you try to scale with them. We started building our service on top of Hetzner and had a few hundred VMs running, and during peak times we had to scale to over 1000 VMs. And here a couple of problems started: you pretty often get IPs which are blacklisted, so if you try to connect to services hosted by Google or AWS (like S3) you can't reach them. Also at one point there were no VMs available anymore in our region, which caused a lot of issues.

But in general, if you don't need to scale like crazy, Hetzner is amazing; we still have a lot of stuff running on Hetzner but fan out to other services when we need to scale.

croes•17h ago
Blacklisted by whom?
Hikikomori•17h ago
AWS at least maintains IP lists of bots, active exploiters, DDoS attackers, etc., that you can use to filter/rate-limit traffic in WAF. It's not so much AWS that blocks you but customers that decide to use these lists.
drcongo•16h ago
Ironic, given how often the attacks I spend time fending off are coming from AWS.
croes•16h ago
So AWS could list some IPs of competitors, just enough to make them look unreliable.
IshKebab•15h ago
Nice IPs you've got there Hetzner, shame if...
netdevphoenix•16h ago
The big cloud providers, I am assuming.
CaptainOfCoit•17h ago
Worth noting that this seems to be about Hetzner's cloud product, not the dedicated servers. The cloud product is relatively new, and most of the people who move to Hetzner do so because of the dedicated instances, not to use their cloud.
drcongo•17h ago
Hetzner's cloud offering is probably a decade old by now - I've been a very happy customer for 8 years.
CaptainOfCoit•17h ago
You're right! I seem to have mixed it up with some other dedi provider that added "cloud" recently. Thanks for the correction!

My point that people move to Hetzner for the dedicated instances rather than the cloud still stands though, at least in my bubble.

drcongo•17h ago
No problem (I'm just glad you didn't read it as snark)! I mean, even 8 years is relatively new compared to their dedicated box offering so technically you were still correct.
watermelon0•17h ago
Hetzner was founded in '97, so cloud offering could technically still be considered relatively new. :D
jakewins•17h ago
> Also at one point there were no VMs available anymore in our region, which caused a lot of issues.

I'm not sure this is a difference from other clouds; at least a few years ago this was a weekly or even daily problem in GCP. My experience is that if you request hundreds of VMs rapidly during peak hours, all the clouds struggle.
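
A common mitigation, whichever provider you use, is to treat "no capacity" as a retryable error and sweep across locations with backoff; a sketch where create_vm() is a hypothetical stand-in for your provider's SDK call:

    import time

    class CapacityError(Exception):
        """Raised by the (hypothetical) provider SDK when a location is full."""

    def create_vm(location):
        # Placeholder for a real SDK call (server create / instance launch).
        raise CapacityError(location)

    def create_with_fallback(locations, retries=3, backoff_s=5):
        # Try each location in order; back off and sweep again on failure.
        for attempt in range(retries):
            for loc in locations:
                try:
                    return create_vm(loc)
                except CapacityError:
                    continue
            time.sleep(backoff_s * (attempt + 1))
        raise RuntimeError("no capacity in any location")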

antonvs•14h ago
We launch 30k+ VMs a day on GCP, regularly launching hundreds at a time when scheduled jobs are running. That’s one of the most stable aspects of our operation - in the last 5 years I’ve never seen GCP “struggle” with that except during major outages.

At the scale of providers like AWS and even the smaller GCP, “hundreds of VMs” is not a large amount.

Macha•14h ago
If you’re deploying something like 100 m5.4xlarge in us-east-1, sure, AWS’s capacity seems infinite. Once you get into high-memory instances, GPU instances, less popular regions, etc., it drops off.

Now maybe things have improved after the AI demand and the waves of purchases of systems appropriate for that, but it definitely wasn’t the case at the large-scale employer I worked at in 2023 (my current employer is much smaller, so doesn’t have those needs, so I can’t comment)

dinvlad•11h ago
It hasn't been possible to request a single VM on Azure us-east for over a month now though :-(
GordonS•12h ago
I don't use Azure much anymore, but I used to see this problem regularly on Azure too, especially in the more "niche" regions like UK South.
dinvlad•11h ago
Right now, we can’t request even a single (1) non-beefy non-GPU VM in us-east on Azure. That’s been going on for over a month now, and that’s after being a customer for 2 years :(
jwr•16h ago
Note that we might be talking about two different things here: some of us use physical servers from Hetzner, which are crazy fast, and a great value. And some of us prefer virtual servers, which (IMHO) are not that revolutionary, even though still much less expensive than the competition.
jgalt212•16h ago
1000 VMs?

So you have approx 1MM concurrent customers? That's a big number. You should definitely be able to get preferred pricing from AWS at that scale.

breadislove•15h ago
We have extremely processing-heavy jobs where users upload large collections of files (PDFs, audios, videos, etc.) and expect fast processing.
FBISurveillance•15h ago
We scaled to ~1100 bare metal servers with them and it worked perfectly.
atonse•14h ago
Username checks out.
matt-p•14h ago
I think they're great, but it's unfortunate they don't have more locations, which would at least enable you to spin VMs up in different locations during a shortage. If you rely on them it might be wise to have a second cloud provider that you can use in a pinch; there are many options.
V__•14h ago
This sounds really intriguing, and I am really curious: what kind of service do you run where you need 100s of VMs? Was there a reason for not going dedicated? Looking at their offering, their biggest VM is 48 CPU, 192 GB RAM, 960 GB SSD. I can't even imagine using that much. Again, I'm really curious.
breadislove•13h ago
We have extremely processing-heavy jobs where users upload large collections of files (audios, PDFs, videos, etc.) and expect fast processing. It's just that we sometimes need to fan out, since a lot of our users are sensitive to processing times.
jamesblonde•14h ago
The blocking of services on Hetzner and Scaleway by Microsoft is well known -

https://www.linkedin.com/posts/jeroen-jacobs-8209391_somethi...

I didn't know AWS and GCP also did it. Not surprised.

The problem is that European regulators do nothing about such anti-competitive dirty tricks. The big clouds hide behind "lots of spam coming from them", which is not true.

lossyalgo•13h ago
The first comment on that post claims that, according to Mimecast, 37% of EU-based spam originates from Hetzner and DigitalOcean. People have been asking for 3 days for a link to the source (I can't find it either).

On the other hand, someone linked a report from last year[0]:

> 72% of BEC attacks in Q2 2024 used free webmail domains; within those, 72.4% used Gmail. Roughly ~52% of all BEC messages were sent from Gmail accounts that quarter.

[0] https://docs.apwg.org/reports/apwg_trends_report_q2_2024.pdf

GordonS•12h ago
I've run into the IP deny-list problem too, but for Windows VMs - you spin them up, only to realise that you can't get Windows Updates, can't reach the PowerShell Gallery, etc.

And just deleting it and starting again is just going to give you the exact same IP again!

I ended up having to buy a dozen or so IPs until I found one that wasn't blocked, and then I could delete all the blocked ones.
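
You can at least screen a candidate IP against public DNS blocklists before building on it (this won't catch vendors' private lists like the ones above, and some DNSBLs refuse queries from public resolvers, but it's a cheap first filter). The convention: reverse the IPv4 octets, prepend them to the zone, and an A-record answer means "listed":

    import socket

    def dnsbl_listed(ip, zone="zen.spamhaus.org"):
        # DNSBL convention: query <reversed-octets>.<zone>;
        # an A-record answer means listed, NXDOMAIN means clean.
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False

    print(dnsbl_listed("127.0.0.2"))  # DNSBL test address, should be listed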

spinningslate•17h ago
Related: Michael Kennedy moved TalkPython [0] hosting to Hetzner in 2024. There's a blog post about the move here [1] and a follow-up after Hetzner changed some pricing policy [2].

He's also just released a book on hosting scaled production Python apps [3]. I haven't read it yet, though I'd assume the move gets covered there in more detail too.

--

[0] https://talkpython.fm/

[1] https://talkpython.fm/blog/posts/we-have-moved-to-hetzner/

[2] https://talkpython.fm/blog/posts/update-on-hetzner-changes-p...

[3] https://talkpython.fm/books/python-in-production

mikeckennedy•12h ago
Thanks for the shoutout @spinningslate. :)
eyk19•17h ago
We've experienced something similar: for compute-heavy rendering tasks, AWS just wasn't good enough. EC2 machines with the same spec perform much worse than Hetzner machines
CaptainOfCoit•17h ago
> EC2 machines with the same spec perform much worse than Hetzner machines

Yeah, even when you move to "EC2 Dedicated Instances" you end up sharing the hardware with other instances unless you go for "EC2 Dedicated Hosts", and even then the performance seems worse than other providers'.

Not sure how they managed that even for the dedicated stuff; it would require some dedicated effort.

andersmurphy•16h ago
Yeah shared vCPU can be really bad.
drcongo•17h ago
Hetzner's ARM servers are the best kept secret in tech. Unbelievably capable and mindbogglingly cheap.
dizhn•16h ago
Have you encountered any software that wasn't compatible?
drcongo•15h ago
Only a couple of times, but nothing that I use on production servers - only things that were very much hobby projects. For the sort of things we build, an 8-core Hetzner ARM server outperforms an 8-core Digital Ocean x86 server by 10-20%, for about a tenth of the cost.
GordonS•12h ago
They really are great, I just wish they'd make them available in the US regions too.
axus•17h ago
I'm going to be that guy and ask which service is the cheapest for AI to bring up new infrastructure and deploy to it?
cisophrene•17h ago
Dedicated servers at a host like Hetzner or OVH surely beat any virtualization-based cloud offering on price and performance. The tradeoff is availability. It's a great choice for entities that are optimizing for cost, but not a great choice if your business cannot tolerate downtime.

A good example is the big Lichess outage from last year [1]. Lichess is a non-profit that must serve a huge user base. Given their financials, they have to go the cheap dedicated-server route (they host on OVH). They publish a spreadsheet somewhere with every resource they use to run the service, and last year I had fun calculating how much it would cost them if they were using a hyperscaler cloud offering instead. I don't remember exactly, but it was 5 or 6x the price they currently pay OVH.

The downside is that when you have an outage, your stuff is tied to physical servers that can't easily be migrated, whereas a cloud provider can easily move your workload around. In the case of the Lichess outage, it was some network device they had no control over that went bad, and Lichess was down until OVH could fix it - that is, many hours.

So yes, you get a great deal, but for a lot of businesses uptime is more important than cost optimization, and the physicality of dedicated servers is actually a serious liability.

[1]: https://lichess.org/@/Lichess/blog/post-mortem-of-our-longes...

CaptainOfCoit•17h ago
> It's a great choice for entities that are optimizing for cost, but not a great choice if your business cannot tolerate downtime.

Even hosting double of everything on dedicated servers will give you cheaper monthly bills than the same performance/$ would cost on AWS or whatever.

But Hetzner does seem a bit worse than other providers in that they have random failures in their own infrastructure, so you do need to take care if you wanna avoid downtime. I'm guessing that's how they can keep the prices so low.

> when you have an outage, your stuff is tied to physical servers that can't easily be migrated

I think that's a problem in your design/architecture if you don't have backups that live outside the actual servers you want to migrate away from, or at least replicate the data to some network drive you can easily attach to a new instance in an instant.

yomismoaqui•17h ago
You can have reliability with physical servers.

When you pay 1/4 for 3X the performance you can duplicate your servers and then be paying 1/2 for 3X the performance.

I find it baffling that people forget how things were done before the cloud.

CodesInChaos•14h ago
Hetzner only has one Datacenter/AZ per region. So you either risk a single region failure taking you down, or you lose performance from transferring data to another location.
abujazar•13h ago
The physics here are exactly the same with AWS et al.
PeterStuer•17h ago
"5 or 6x the price they currently pay OVH"

So they could have had 100% redundant systems at OVH and still be under half the cost of a traditional "cloud" provider?

I would look at architecture and operations first. Their "main" node went down, and they did not have a way to just bring another instance of it online fast on a fresh OVH machine (typically provisioned in a few minutes), assuming they had no hot standby. If the same happened to their "main" VM at a "hyperscaler", I would guess they would have been up the same creek. It is not the difference between 120 and 600 seconds to provision a new machine that caused their 10 hrs of downtime.

wolfi1•16h ago
Is it really redundant when you host at the same provider?
CaptainOfCoit•16h ago
If you're doing VPSes, then maybe, as long as they're not on the same node. If it's dedicated servers, then probably.

But I think "redundancy" is more of a spectrum than a binary thing. You can be more or less redundant, even within the same VPS if you'd like, but that would of course be less redundant than hosting things across multiple data centers.

vidarh•16h ago
And it's cheap enough that you can have a replicated setup across two different providers and still be cheaper than one expensive cloud provider.

While AWS is probably towards the safer end if you want to put all your eggs in one basket, people who have everything at AWS are still putting all their eggs in one basket as well...

namibj•16h ago
So host one on OVH and one on Hetzner?
PeterStuer•16h ago
But that question remains the same whether you are renting bare metal or VMs. You can rent OVH servers located at different datacentres all over the globe, and their Cloud SLA has higher uptime guarantees than AWS's (what that is worth depends on the value you place on an SLA, of course).
jwr•16h ago
> when you have an outage, your stuff is tied to physical servers and they can't easily be migrated

I don't see how that follows? Could you please explain?

I run my stuff on Hetzner physical servers. It's deployed/managed through ansible. I can deploy the same configuration on another Hetzner cluster (say, in a different country, which I actually do use for my staging cluster). I can also terraform a fully virtual cloud configuration and run the same ansible setup on that. Given that user data gets backed up regularly across locations, I don't see the problem you are describing?

lossolo•16h ago
> The tradeoff is availability.

This is a myth, created so cloud providers can sell more, and so those who overpay can feel better. I've been using dedicated servers since 2005, so for 20 years across different providers. I have machines at these providers with 1000-1300 days of uptime.

dizhn•16h ago
In fairness they might have been inaccessible during that time. :)
debian3•16h ago
Same here; I've been running dedicated servers with OVH since 2009, and if anything bare metal servers are more stable than before. I just replaced a set of servers from 2018, and I didn't have any hardware problems during their 8 years of working under significant load. During that time I had 2 or 3 power outages and a few more network outages; usually problems come in a cluster. Some years I had nothing to report: 100% uptime. Dedicated servers are nice, but I guess they scare people. Hetzner uses lower-quality hardware than OVH on some of their offerings, so your experience may vary. One of the most important things is to check that your server uses datacenter SSDs/HDDs with ECC RAM; it saves you a lot of problems.
petit_robert•13h ago
> I have machines at these providers with 1000-1300 days of uptime

You did not say what system you use on them, but don't you need to reboot them to apply kernel upgrades, for instance?

lossolo•12h ago
Most of them run Debian (some have Windows VMs running on those Debian machines), while a minority use Ubuntu. I reboot them once every few years when I upgrade the OS or kernel, or migrate to newer machine types.

I run most of the workloads in containers, but there are also some VMs (mostly Windows), and some workloads use Firecracker micro-VMs in containers. A small number of machines are rebooted more often because they occasionally need new kernel features, and their workloads aren't VM-friendly, so they run on bare metal.

LaurensBER•13h ago
This is a very good point, but even with dedicated servers it's doable to build a resilient HA architecture.

OVH offers a managed Kubernetes solution which, for a team experienced with Kubernetes and/or already using containers, would be a fairly straightforward way to get a solid HA setup up and running. Kubernetes has its downsides and complexity, but in general it does handle hardware failures very well.

abujazar•13h ago
My experience is exactly the opposite. None of the cloud vendors are actually resilient; every single one of them has had major global outages. And when it happens you've got no influence on how fast it gets fixed. The only way of building truly resilient infrastructure with cloud vendors is mirroring across vendors. But it happens to be easier to mirror a private cloud between e.g. Hetzner and OVH than to maintain parallel setups in Azure and AWS.
roschdal•17h ago
Hetzner is the best.
eric_khun•17h ago
AWS won't raise the limits on our new account (we're stuck at 1GB RAM in Lightsail after 2 months, even though we need to launch this month).

Looking at Hetzner or Vultr as alternatives. A few folks mentioned to me that Infomaniak has great service and uptime, but I haven't heard much about them otherwise.

Anyone used Infomaniak in production? How do they compare to Hetzner/Vultr?

CaptainOfCoit•16h ago
Just curious: what are you building/launching that requires more than 1GB of RAM at launch? 1GB is a lot of memory for most use cases; I'm guessing something involving graphics or maybe simulations? In those cases, dedicated instances with proper hardware will give you enormous performance benefits, FYI.

Both Vultr and Hetzner are solid options. I'd go for Hetzner if I knew the users were around Europe or close to it and I wanted to run tiny CDN-like nodes myself across the globe, and also if you don't wanna worry about bandwidth costs. Otherwise go for Vultr; they have a lot more locations.

eric_khun•16h ago
Appreciate the advice! Launching a 2D game generator with an editor, and expecting those people to share the games. Not multiplayer yet.

The Lightsail instance sometimes just hangs and we have to reboot it when people perform simple actions like logging in or querying the API (we have a simple Express / Next.js app)

Macha•14h ago
A WordPress install that makes it to the top of HN can use 1GB of RAM
slyall•16h ago
Are you using Lightsail rather than normal EC2 and other AWS services?

Just wondering if your limits apply only to Lightsail or to normal stuff too.

Terretta•14h ago
I haven't checked recently, but previously a Lightsail account was a full AWS account. Tie in Route 53, App or API Gateway, and some instances.

That said, for your use case, you might want the predictability and guarantee of having no "noisy neighbors" on an instance. While most VM providers don't offer that (you have to go to a fully dedicated machine), AWS does, so keep that in mind as well.

For BYOL (bring your own hosting labor), Vultr is a lesser-known but great choice.

artdigital•6h ago
I have all my small stuff on a Vultr managed k8s with the cheapest nodes.

Big fan of Vultr, I like them a lot, but for bare metal stuff Hetzner is going to be cheaper

buyucu•17h ago
aws and azure are massively overpriced. there is no reason to use them.
OutOfHere•17h ago
The part they never tell you is that Hetzner has a substantially higher risk of unfair account termination without warning. If you are okay with your account being terminated like that, with zero notice or reason, then Hetzner is cheap.
PeterStuer•16h ago
Do you have a substantiated source for the "12x the unfair risk of account termination without warning"? I tried looking for it but all I could find were unsubstantiated grapevine posts (I heard they ...) with lots of people stating the opposite.
OutOfHere•16h ago
Edited. Not only have I read numerous reports of it, both on this site and on Reddit, but it personally happened to me around 2022. The number of such reports I have read is easily 12x that of AWS. In contrast, neither AWS nor any other cloud ever did anything like this to me.

This is a risk if the CPU or another resource is running close to 100% for a couple of months. Hetzner likes customers that pay for what they don't use.

gdulli•13h ago
Reddit anecdotes are not a good way to sell a given argument lol.
OutOfHere•8h ago
Reddit is a prime platform for users to report problems with pretty much any service. Reports from moderate-to-high-karma users who aren't newly registered carry more weight. If you haven't used it for this purpose, then you're absolutely in the minority. Typically people don't pay attention until they themselves become a victim, and then they learn the hard way.

To label it an "anecdote" is to gaslight it. It is lived experience.

vidarh•16h ago
They're cheap, so I'd expect they get a substantially higher proportion of users who might think their account termination was unfair but were actually flouting the rules, so I wouldn't be surprised if there were a higher proportion of claims of unfair account termination... I've recommended Hetzner to people since 2008-2009, and know lots of people who use it, and I've never heard first-hand accounts of termination of any kind. But anecdotes vs. data and all that.
OutOfHere•16h ago
If you haven't read first-hand reports, then you haven't read all that much. If you search on Reddit, you will see many reports. It happened to me personally after I started using their CPU at nearly 100% for about two months. That's a report for you. They're cheap because they don't actually want people using what they're paying for. This is a theme I have seen with German services.

Speaking of their rules, those are a bit insane too. Speaking of "flouting rules", any prospective user should think about whether it's okay for a cloud vendor to keep spying on which processes the user is running, even without a court order; it is not okay.

If you keep moving the goalposts, then you will understand nothing. You might as well be an employee of Hetzner.

vidarh•15h ago
I've maxed out lots of CPUs at Hetzner over many years and across multiple companies, and had clients do the same, so I find your claim dubious unless you're talking about shared-CPU cloud instances, in which case I wouldn't be surprised but also wouldn't consider it unfair.

So let me revise that to say I haven't seen any reports I can 1) verify are first hand, and 2) know accurately reflect an actual unfair termination. That is also why I don't bother going around reading accounts on Reddit.

OutOfHere•8h ago
Maxing out a CPU for a day or a week doesn't count. It has to stay maxed out for close to a month, maybe more.

There are no "fair" terminations except ones backed by a court order. You will understand when it happens to you. Also, there is no way for you to determine whether a report of a termination is "unfair". In this way, you will continue reveling in your limited worldview.

I have seen this multiple times with German providers. They promise to serve, then when the user really, genuinely exercises the service, they cancel the user.

vidarh•6h ago
I notice you've avoided addressing the issue of whether you were on a shared instance, where the point very much is that they're not meant for workloads that will pin the resources on an ongoing basis. On the dedicated servers they won't know whether you max it out or not.

That you're being evasive makes it very much sound like you used them in ways you should have expected would be treated accordingly.

If you've run into this multiple times, it very much sounds like a "you problem".

OutOfHere•4h ago
I have not run into this multiple times. You said that, not me; I said something different. Hetzner is the only cloud provider that cut me off. The other provider was not a cloud vendor.

Even if it was a shared instance, people don't rent a 48-core server just to use 1 or 2 cores. It makes no sense to rent out a big shared server and then expect users not to use it. Someone would rent it only if they had exhausted smaller instances.

Something tells me that your idea of computing is communist computing, where someone shouldn't use too much even when paying for it. That's a mental roadblock for which there is no fix.

Someone with your communist mental model would be okay with a cloud provider spying on their activities very closely, but most people are not.

PeterStuer•15h ago
Just curious: Was this a colo, dedicated server, managed server or VPS? And since you mention "CPU at nearly 100% for about two months", was this potentially crypto mining?
OutOfHere•8h ago
There was no crypto mining involved whatsoever. There was some crypto analysis involved, among other unrelated analyses, but no mining.

If you think about it, Hetzner had to be spying on my activities in very close detail to see what I was doing. Such unnecessary spying (without a court order) alone should deter anyone from using them. Assuming they copied my disk image and subjected it to a scan, it's very possible that they retained my confidential data without my permission. Is this the kind of cloud provider anyone should use?

As for the type of server, it really shouldn't matter. The service exists to be used. People don't rent, say, 24-core or 48-core servers just to pass the time and pay money for nothing.

pwmtr•17h ago
We’ve been seeing the same trend. Lots of teams moving to Hetzner for the price/performance, but then realizing they have to rebuild all the Postgres ops pieces (backups, failover, monitoring, etc.).

We ended up building a managed Postgres that runs directly on Hetzner. Same setup, but with HA, backups, and PITR handled for you. It’s open-source, runs close to the metal, and avoids the egress/I/O gotchas you get on AWS.

If anyone’s curious, I’ve added some notes about our take here [1], [2]. Always happy to talk about it if you have any questions.

[1] https://www.ubicloud.com/blog/difference-between-running-pos... [2] https://www.ubicloud.com/use-cases/postgresql

normie3000•16h ago
This is one key draw to Big Cloud and especially PaaS and managed SQL for me (and dev teams I advise).

Not having an ops background I am nervous about:

* database backup+restore
* applying security patches on time (at OS and runtime levels)
* other security issues like making sure access to prod machines is restricted correctly, access is logged, ports are locked down, abnormal access patterns are detected
* DoS and similar protections are not my responsibility

It feels like picking a popular cloud provider gives a lot of cover for these things - sometimes technically, and otherwise at least politically...

ksajadi•16h ago
I can attest to that. At Cloud 66 a lot of customers tell us that while the PaaS experience on Hetzner is great, they benefit from our managed DBs the most.
gizzlon•12h ago
What's the "the PaaS experience on Hetzner" ? Link?
ozim•15h ago
Applying security patches on time is not much of a problem. The ones you need to apply ASAP are rare, and you never put a DB engine on public access; most of the time the exploit is not disclosed publicly, and PoC code is not available for a patched RCE right on the day of the patch release.

Most of the time you are good if you follow version updates for major releases as they come, do regression testing, and put them on prod in your planned time.

Most problems come from not updating at all and having 2 or 3 year old versions, because that's what automated scanners will be looking for, and after that much time someone has much more likely written exploit code and shared it.
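
For the routine part, something like Debian/Ubuntu's unattended-upgrades covers the security-only case with almost no setup. A minimal sketch, assuming a Debian-family box:

    # install and enable security-only automatic updates
    sudo apt-get install unattended-upgrades
    # writes /etc/apt/apt.conf.d/20auto-upgrades to enable the periodic run
    sudo dpkg-reconfigure -plow unattended-upgrades
    # show what would be upgraded, without applying anything
    sudo unattended-upgrade --dry-run --debug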

DanielHB•15h ago
There must be SaaS services offering managed databases on different providers, where you buy the servers and they install the software and host backups for you. Anyone got any tips?
swiftcoder•15h ago
to be fair, AWS' database restore support is generally only a small part of the picture - the only option available is to spin up an entirely new DB cluster from the backup, so if your data recovery strategy isn't "roll back all data to before the incident", you have to build out all your own functionality for merging the backup and live data...
matt-p•15h ago
I think the "strategy" for most people is to do it manually, or make the decision to just revert wholesale to a particular time.
swiftcoder•15h ago
Yeah, and that default strategy tends to become very, very painful the first time you encounter non-trivial database corruption.

For example, one of my employers routinely tested DB restore by wiping an entire table in stage, and then having the on call restore from backup. This is trivial because you know it happened recently, you have low traffic in this instance, and you can cleanly copy over the missing table.

But the last actual production DB incident they had was a subtle data corruption bug that went unnoticed for several weeks - at which point restoring meant a painful merge of 10s of thousands of records, involving several related tables.

matt-p•15h ago
Yeah, but automating a solution for all possible "one off subtle data corruption bugs" is a lot of energy and effort to be honest.
swiftcoder•14h ago
For sure. It's more about having a pipeline for pulling data from multiple sources - rather than spin up a whole new DB cluster, you usually want to pull the data into new tables in your existing DB, so that you can run queries across old & new data simultaneously
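
As a concrete sketch of that pipeline, postgres_fdw is the standard building block for pulling the restored data into new tables (server, schema, and table names below are made up):

    # pull one table from the restored cluster into the live DB for comparison
    psql "$LIVE_DB_URL" <<'SQL'
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;
    CREATE SERVER restored FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (host 'restored-cluster.internal', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER restored
      OPTIONS (user 'readonly', password 'secret');
    CREATE SCHEMA IF NOT EXISTS snapshot;
    IMPORT FOREIGN SCHEMA public LIMIT TO (orders)
      FROM SERVER restored INTO snapshot;
    -- rows that exist in the snapshot but are missing from live data
    SELECT s.id FROM snapshot.orders s
      LEFT JOIN public.orders l USING (id)
      WHERE l.id IS NULL;
    SQL
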
recroad•12h ago
Exactly this. For a small team that's focused on feature development and customer retention, I gladly outsource this stuff and sleep easy at night. It's not even a cost or performance issue for me. It's about this: if I start focusing on this stuff, what part of my actual business am I neglecting? It's a tradeoff.
baobun•16h ago
In the adjacent category of self-managed omakase postgres: https://www.elephant-shed.io/
bdcravens•15h ago
While I'm sure it's a great project, a few issues in the README gave me pause about how well it's kept up to date. Around half of the links in the list of dependencies are either out of date or just plain don't work, and it references Vagrant with no mention of Docker.
baobun•15h ago
It's indeed undermaintained, so it's not a case of plug-and-play and automated pulls for production. Still, it's a solid base to build from when setting up on VMs or dedicated hardware, and I'm yet to find something better short of DIYing everything.
slig•12h ago
Also, Pigsty [1]. Feels too bloated for my taste, but I'd love to hear any experience from fellow HNers.

[1] https://pigsty.io/

ed_mercer•17h ago
Great! Now if you go full homelab, you can get 1/30th of the price ;)
Havoc•17h ago
Makes sense. If you don't need the redundancy, certification/legals, or the big clouds' hundreds of integrated other lego blocks, then big cloud VPS prices are just a rip-off.
Havoc•17h ago
Their Storage Box offerings are great too. Think of it as a big FTP drive, except it supports lots of transfer protocols.
nodesocket•16h ago
Saving $426/mo for a business seems like a waste of time and resources. The excessively frugal developer complex. How many hours did it take to do the migration?
CaptainOfCoit•16h ago
> Saving $426/mo for a business seems like a waste of time and resources

How come? The baseline for that comparison will also stay static, regardless of how many TPS or whatever is going on, meanwhile the AWS price they're comparing to would only increase the more people use whatever they deploy.

Propelloni•16h ago
A quick background check on digitalsociety.coop reveals that US$5,000/year was a significant sum for them in 2024 and that opportunity costs were probably negligible. Not everybody has money to burn.
fauigerzigerk•16h ago
Self-funded startups need to be frugal. And self-funded startups serving the not-for-profit sector in Europe need to be extra frugal.

The hours they put into not wasting money on AWS today could pay off many times if it makes their SaaS economically viable for their target audience.

gdulli•13h ago
As the business grows so will that inefficiency and migrating now is much less work than migrating later.
jwr•16h ago
I've been running my SaaS on Hetzner servers for over 10 years now. Dedicated hardware, clusters in DE and FI, managed through ansible. I use vpncloud to set up a private VPN between the servers (excellent software, btw).

My hosting bill is a fraction of what people pay at AWS or other similar providers, and my servers are much faster. This lets me use a simpler architecture and fewer servers.

When I need to scale, I can always add servers. The only difference is that with physical servers you don't scale up/down on demand within minutes, you have to plan for hours/days. But that's perfectly fine.

I use a distributed database (RethinkDB, switching to FoundationDB) for fault tolerance.

withinboredom•15h ago
Similar setup to me (including rethinkdb). Why choose FoundationDB? RethinkDb is still maintained and features added occasionally (I'm on the rethinkdb slack and maintain an async php driver). It just is one guy though, working on it part time.
jwr•12h ago
RethinkDB is somewhat maintained, and while it is a very good database and works quite well, it is not future-proof. But the bigger reason is that I need better performance, and by now (after 10 years) I know my data access patterns well, so I can make really good use of FoundationDB.

The reason for FoundationDB specifically is mostly correctness: it is pretty much the only distributed database out there that gives you strict serializability and delivers on that promise. Performance is #2 on the list.

vjerancrnjak•6h ago
How will you deal with the lack of 3 AZs, or with the FI-to-DE latency?
da02•14h ago
You use vpncloud to connect across different Hetzner data centers (DE + FI)? I thought/assumed Hetzner provided services to do this at little-to-no cost.
jwr•13h ago
No, I use vpncloud for a local (within a datacenter) VPN. This lets me move more configuration into ansible (out of the provider's web interfaces), avoid additional fees, and have the same setup usable for any hosting provider, including virtual clouds. Very flexible.
GordonS•13h ago
Not the GP, but I also use Hetzner, but use Tailscale to connect securely across different Hetzner regions (and indeed other VPS providers).

Hetzner does provide free Private Networks, but they only work within a single region - I'm not aware of them providing anything (yet) to securely connect between regions.
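
For anyone curious, the per-node Tailscale setup is tiny (a sketch; the hostname is just an example):

    # join a node to the tailnet; repeat on each server
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up --hostname=hel1-db1
    # nodes can then reach each other by MagicDNS name across regions
    tailscale status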

boobsbr•14h ago
Nice to see someone still using RethinkDB.
DoctorOW•16h ago
I use Hetzner for personal projects and love it, the one thing stopping me from pushing it up the chain at work, is that we're based out of the US exclusively and AZs are pretty sparse.
naiv•16h ago
Just yesterday they released a new 'Shared regular performance' offering https://www.hetzner.com/cloud/
littlecranky67•16h ago
Thanks for that link. It seems with that introduction, they also lowered prices on the dedicated-core on their vservers - at least I was paying 15€/month and now they seem to offer it for 12€/month. I will try to see if shared performance is an option for the future.
kachapopopow•16h ago
This is still pretty bad; with colocation you can get costs down to 1/100th with good deals at datacenters, especially ones that are struggling to attract customers. Most of your bill is power, so if you rack efficiency-optimized servers you can have a lot of compute for very cheap.

In terms of networking many offer no-headache solutions with some kind of transit blend.

<rant>I recently had to switch away from Hetzner due to random dhclient failures causing connectivity loss once IPs expired, and a complete failure of the load balancer - it stopped forwarding traffic for around 6 hours. The worst part is that there was no acknowledgement from Hetzner about any of these issues, so at some point I was going insane trying to find the problem when in the end it was Hetzner. (US VA region)</rant>

tom1337•16h ago
But when you are colocating you have higher upfront costs, as you need to acquire hardware and also need to have somebody near the datacenter for hardware swaps in case of a failure, no?
hyperionplays•15h ago
There are tonnes of companies out there with smart remote-hands staff in the major cities who can respond to an outage at your chosen DC in under an hour.

Refurb servers will still blast AWS on price, and spares are easy to source.

I know HE.net does a rack for like $500/mo intro price and that comes with a 1G internet feed as well.

dboreham•15h ago
You need to buy the hardware. However, you don't really need a dude on-hand to swap stuff on a daily basis, unless you're trying to host backblaze. The approach we take (with our data center 1000 miles away) is to provision excess machines. So if we need 6 machines we'll provision, say 8. Failure modes are always assumed to be "the whole machine" -- so a machine is either in service or not. Over time (years) one or two machines might fail in one way or another. Every couple of years we mount a rescue mission to repair/replace the bad machines, do some upgrades etc. We have redundant switches and routers and would make a special trip to replace one of those if there were a failure. The entire deployment has a "scaled to zero" cloud hot standby in place for the eventuality that the whole setup gets nuked somehow.
scjon•14h ago
There's higher upfront costs, but typically we find that we are breakeven for the cost of hardware in 12 months or less. In my experience, colos will have techs available for hardware swaps / remote hands troubleshooting if needed. It's not free for that but it solves that problem. I think it really just depends on your company's needs and skillset. For our company it makes sense to colocate. We are a VOIP service provider, so we also have multiple IP transit providers and our own /22 subnet. We use BGP to change / pull routes quickly when there's outages with an ISP or cloud provider. I know AWS supports a setup like that, but you're relying on them for announcing route changes.
kachapopopow•13h ago
Using decommissioned hardware saves you 90% of the costs, and you usually colocate where you live or just have one of your tech friends help out :)

Most datacenters do offer remote hands, which is a bit pricey, but since they're only needed in emergencies, in a redundant setup it's just not required.

9cb14c1ec0•15h ago
Cogent just offered me this colo deal:

Full Rack = $100/month* with $500 install, Power (20A) = $350/month with $500 install, DIA (1Gbps) = $300/month

Total = $750/month plus $1,000 Install on 12 month term

karterk•11h ago
Where is this, if I may ask?
jedisct1•16h ago
Of course.

A dedicated server or VPS from OVH, Hetzner, Scaleway, etc., or even Docker containers on Koyeb, will give you way more bang for your buck.

Call me a dinosaur, but I’ve never used any of the big cloud providers like AWS. They’re super expensive, and it’s hard to know what you’ll actually end up paying at the end of the month.

e12e•16h ago
Very interesting and detailed article!

I'd love to hear more about how you use terraform and helm together.

Currently our major friction in ops is using tofu (terraform) to manage K8s resources. Avoiding yaml is great - but with both terraform and K8s maintaining state, deploying helm from terraform feels fragile; and vice versa, depending on helm directly in a mostly-terraform setup also feels fragile.

baobun•15h ago
Not OP but I've lived through this too and my conclusion from that is that if you're doing tofu/terraform you're better off not introducing helm at all. Just tf the k8s.
e12e•10h ago
Yes, this is what we do, for example, for the tailscale operator - but it's tedious to convert yaml to tf, and more importantly, it's error-prone to correctly adapt upstream changes to update deployments as upstream refines their helm/k8s yaml files.
sergioisidoro•16h ago
I really liked Hetzner, but I got burned by one issue. I had some personal projects running there and the payment method failed. The automated email notifications also got lost among all the spam and notifications I receive, and by the time I noticed the problem they had wiped all my data without any possibility of recovery.

It was a wake-up moment for me about keeping billing in shape, but it also made me understand that a cloud provider is only as good as their support and communications when things go south. An automated SMS before they destroy your entire work would be great, for example. But because they are so cheap, they probably can't do that for every $100/month account.

I've had similar issues with AWS, but they will have much friendlier grace periods.

roflmaostc•16h ago
Sorry to hear that.

But if you do not pay and you do not check your e-mails, it's basically your fault. Who even uses SMS these days?

sergioisidoro•16h ago
Yes, absolutely my fault. But these problems happen. Credit cards expire, people change companies or go on leave, offboarding processes are not always perfect, spam filters exist.

Add to that the declining experience of email with so much marketing and trash landing in the inbox (and sometimes Gmail categorizing important emails as "Updates")

That's why grace periods for these situations are important.

Who uses SMS? This might be a cultural difference, but in Europe it is still used a lot. And would you be OK if your utility company cut off your electricity with just an email warning? Or if you were summoned to court by email?

amelius•16h ago
How long after shutting you down did they delete your data?

That period should definitely be longer than a few days.

debazel•15h ago
Hetzner will almost immediately nuke your data if you miss a payment and often outright ban you and your business from ever using them again.

Hetzner is great for cheap personal sites but I would never use them for any serious business use-cases. Other than failed payments, Hetzner also has very strict content policies and they use user reports to find offenders. This means that if just a few users report your website, everything is deleted and you're banned with zero warning or support, whether the reports are actually true or not. (This also means you can never use Hetzner for anything that has user uploaded content, it doesn't matter if you actively remove offending material because if it ever reaches their servers you're already SOL.)

amelius•15h ago
That sounds really bad.
patapong•15h ago
Hmm, this sounds scary, even though I've had a very positive experience with them. Any alternatives with similarly priced offerings that do not have this issue?
account42•14h ago
> Add to that the declining experience of email with so much marketing and trash landing in the inbox (and sometimes Gmail categorizing important emails as "Updates")

This is also something under your control - you don't have to use Gmail as your email provider for important accounts and you can whitelist the domains of those service providers if you don't rely on a subpar email service.

oefrha•14h ago
I had payment issues with Hetzner too; that was back in 2018, and I haven't used them since. At least back then, and at least for me, they were unlike any other provider I've used, which would send you plenty of warnings if they failed to bill you. The very first email I got from them that smelt of trouble was "Cancellation of Contract", at which point my account was closed and I could only pay by international bank wire. (Yes, I just checked all my correspondence with them to make sure I'm not smearing them.) Amusingly, they did send a payment warning after account closure. Why not before? No effing clue. That was some crazy shit.
dotancohen•16h ago

  > It was a wake up moment for me about keeping billing in shape
It should be a wake up moment about keeping backups as well.
sergioisidoro•14h ago
Yep. And importantly - backups on different cloud providers, with different payment methods.
matdehaast•15h ago
I've had billing issues, and they let me resolve them a couple of weeks later.
futurecat•16h ago
Questions for people who migrate off-cloud:

1. How many nodes do you have?

2. Did you install anything to monitor your node(s) and the app deployed on these nodes? If so, which software?

CaptainOfCoit•16h ago
1. In total, maybe 10-15, but managing a lot of it for others, my own stuff is hosted across two.

2. Yes, TLDR: Prometheus + Grafana + AlertManager + ELK. I think it's a fairly common setup.
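
For reference, the Prometheus side is only a few lines (a sketch; hostnames are hypothetical, and node_exporter is assumed to be running on :9100 on each node):

    cat > prometheus.yml <<'EOF'
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['node1.internal:9100', 'node2.internal:9100']
    EOF
    ./prometheus --config.file=prometheus.yml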

GordonS•12h ago
1. Around 30, a mix of both x64 and ARM. But planning on switching heavy workloads to physical machines at some point, which would take total node count down to around 16.

2. OpenTelemetry Collector installed on all nodes, sending data to a self-hosted OpenObserve instance. UI is a little clunky, but it's been an invaluable tool, and it handles everything in one place - logs, traces, metrics, alerts.
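
Roughly, the per-node collector config looks like this (a sketch; the endpoint is a placeholder, and it assumes the contrib build, which ships the hostmetrics receiver):

    cat > otel-config.yaml <<'EOF'
    receivers:
      hostmetrics:
        collection_interval: 30s
        scrapers:
          cpu: {}
          memory: {}
          filesystem: {}
    exporters:
      otlphttp:
        endpoint: https://openobserve.internal/api/default  # placeholder
    service:
      pipelines:
        metrics:
          receivers: [hostmetrics]
          exporters: [otlphttp]
    EOF
    ./otelcol-contrib --config otel-config.yaml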

tmdetect•16h ago
+1 to running services on physical servers, OVH in my case. I'm really enjoying CI pushing to servers and having a managed database provided by a 3rd party like Mongo Atlas.
cyberpunk•16h ago
Isn’t there quite a significant latency problem if you’re going across the internet for db instead of say, the same switch?
baobun•15h ago
No experience with Mongo Atlas, but other managed DB providers will IME be transparent about where they host, and you can often request resources in an appropriate DC, sometimes even the same one. There are businesses providing this on Hetzner, OVH, etc. too. If you plan accordingly, you can eat your cake and have it too.
dvfjsdhgfv•16h ago
3x seems quite low, I routinely get 7-11x higher performance on Hetzner when compared to AWS. Also the conclusion of this old benchmark is still partially true: https://jan.rychter.com/enblog/cloud-server-cpu-performance-...
krowek•16h ago
I tried my best to start using Hetzner, but they wouldn't let me.

I got my account validation rejected despite having everything "in norm". I tried 3 times, and they wouldn't give me a reason why it was rejected.

I think it's better that way; I wouldn't like the surprise of my account being terminated at some point down the road.

silexia•16h ago
At my home, I have PUD fiber Internet with Starlink as the backup WAN. I had two non-mission-critical servers on AWS, and I set up two old laptops locally to run them instead. Saving $1,000 a month now. I am looking at my $4,000 a month of mission-critical servers next.
dboreham•15h ago
I began years ago hosting at home (because the business grew unexpectedly and initially the test server had been in my home). I wouldn't recommend it. Better to rent colocation space. The problem apart from having to provision reliable internet is you also need to provision reliable power, and cooling. And it gets noisy.
webprofusion•16h ago
Context is important for stuff like this. I've served 350M requests (with db interaction) for $49 via Cloudflare before, but it all depends on what you're trying to do.

Abstracted infrastructure like Kubernetes is expensive by default, so design has an impact.

Qasaur•16h ago
Hetzner is great for dedicated servers, but for those of us who need smaller-scale secure/confidential VMs I'm afraid that there isn't really any other choice than hyperscalers.

Does anyone know if there is a VM vendor that sits somewhere in between a dedicated server host like Hetzner in terms of performance + cost-effectiveness and AWS/GCP in terms of security?

Basically TPM/vTPM + AMD SEV/SEV-SNP + UEFI Secure Boot support. I've scoured the internet and can't seem to find anyone who provides virtualised trusted computing other than AWS/GCP. Hetzner does not provide a TPM for their VMs, they do not mention any data-in-use encryption, and they explicitly state that they do not support UEFI secure boot - all of these are critical requirements for high-assurance use cases.

dboreham•15h ago
Interested to hear more about your use case and threat model, if you are willing to share. I ask because although I've looked into (and done some prototyping with) secure cloud hosting, I/we came to the conclusion that there's no current technology that is "actually secure" and so abandoned the approach. Curious if things have improved now, or if you're operating in some security-theater context where it's OK.
manawyrm•15h ago
+1, if your threat model is actually this severe, use physical hardware with physical interlocks and physical security mechanisms.

Software/virtualization is just helpless against such a threat model.

Qasaur•14h ago
The basic principle is to ensure that any machine/workload which joins the network (and processes customer data, in this case extremely sensitive PII) has a cryptographically verified chain of trust from boot to the application-layer to guarantee workload integrity.

NixOS is used for declarative and, more importantly, deterministic OS state and runtime environment, layered with dm-verity to prevent tampering with the Nix store. The root partition, aside from whatever is explicitly configured in the nix store, is wiped on every reboot. The ephemerality prevents persistence by any potential attacker, and the state of the machine is completely identical to whatever you have configured in your NixOS configuration, which is great for auditability. This OS image + boot loader is signed with organisation-private keys and deployed to machines preloaded with UEFI keys, guaranteeing boot integrity and preventing firmware-level attacks (UEFI secure boot).

At this point you need to trust the cloud provider not to tamper with the UEFI keys or otherwise compromise memory confidentiality through a malicious or insecure hypervisor, unless the provider supports memory encryption through something like AMD SEV-SNP. The processor gives the guest OS an AMD-signed attestation that states "Yes, this guest is running in a trusted execution environment, and here are the TPM measurements for the boot", and you can use this attestation to determine whether the machine should join your network and whether it is running the firmware, kernel, and initramfs that you expect, AND on hardware that you expect.

I think I'll put together a write-up on this architecture once I launch the service. There is no such thing as perfect security, of course, but I think this security architecture prevents many classes of attacks. Bootkits and firmware-level attacks are exceedingly difficult or even impossible with this model, combine this with an ephemeral root filesystem and any attacker would be effectively unable to gain persistence in the system.
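
In the meantime, the ephemeral-root piece is essentially the well-known tmpfs-root pattern in NixOS. A minimal sketch (not my exact config; the disk label is hypothetical):

    # fragment of configuration.nix: root on tmpfs, /nix persisted on disk
    cat > /etc/nixos/ephemeral-root.nix <<'EOF'
    {
      fileSystems."/" = {
        device = "none";
        fsType = "tmpfs";
        options = [ "defaults" "size=2G" "mode=755" ];
      };
      fileSystems."/nix" = {
        device = "/dev/disk/by-label/nix";
        fsType = "ext4";
      };
    }
    EOF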

Tepix•14h ago
Have you looked at colocation?
majodev•8h ago
Oracle Cloud Infrastructure tries to fill exactly this sweet spot. Cheaper compute than the other hyperscalers, while still offering similar security features (TPM, Shielded Instances, Measured Boot) and a bare-metal-first focus.

Disclaimer, just joined Oracle a few months ago. I'm using both Hetzner and OCI for my private stuff and my open-source services right now. I still personally think they've identified a clever market fit there.

slig•16h ago
Anyone here using

https://github.com/vitobotta/hetzner-k3s

Or

https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne...

For a K3S cluster? Would love to hear any experience. Thanks!

ctm92•15h ago
We are using hetzner-k3s; it's super solid and easy to use. I love that it also uses the Hetzner Cloud Controller to integrate the native Hetzner load balancers, volumes, and networks.
slig•15h ago
Great, thanks!
JCM9•15h ago
As competition heats up, the relative "enshittification" of AWS is real.

There just isn’t a compelling story to go “all in on AWS” anymore. For anything beyond raw storage and compute the experience elsewhere is consistently better, faster, cheaper.

It seems AWS leadership got caught up trying to have an answer for every possible computing use case and broadly ended up with a bloated mess of expensive below-bar products. The recent panicked flood of meh AI slop products as AWS tries to make up for its big miss on AI is one such example.

Would like to see AWS just focus on doing core infrastructure and doing it well. Others are simply better at everything that then layers on top of that.

YetAnotherNick•15h ago
You are doing calculations all wrong if you think saving $500/month is 75% of your cost.

Also, the first three lines of the new stack are a surefire way to get PTSD. You shouldn't manage a database in your own plane unless you really know the internals of the tools you are using. Once you get off AWS, you really start to see the value of things like documentation.

adamcharnock•15h ago
I cannot overstate the performance improvement of deploying onto bare metal. We typically see a doubling of performance, as well as extremely predictable baseline performance.

This is down to several things:

- Latency - having your own local network, rather than sharing some larger datacenter network fabric, gives around an order of magnitude lower latency

- Caches – right-sizing a deployment for the underlying hardware, and so actually allowing a modern CPU to do its job, makes a huge difference

- Disk IO – Dedicated NVMe access is _fast_.

And with it comes a whole bunch of other benefits:

- Auto-scalers become less important, partly because you have 10x the hardware for the same price, partly because everything runs at 2x the speed anyway, and partly because you have a fixed pool of hardware. This makes the whole system more stable and easier to reason about.

- No more sweating the S3 costs. Put a 15TB NVMe drive in each server and run your own MinIO/Garage cluster (alongside your other workloads). We're doing about 20GiB/s sustained on a 10 node cluster, 50k API calls per second (on S3 that is $20-$250 _per second_ on API calls!).

- You get the same bill every month.

- UPDATE: more benefits - cheap fast storage, running huge PostgreSQL instances at minimal cost, less engineering time spent working around hardware limitations and cloud vagaries.

And, if you choose to invest in the above, it all costs 10x less than AWS.

Pitch: If you don't want to do this yourself, then we'll do it for you for half the price of AWS (and we'll be your DevOps team too):

https://lithus.eu

Email: adam@ above domain

rightbyte•15h ago
What is old is new again.

My employer is so conservative and slow that they are forerunning this Local Cloud Edge Our Basement thing by just not doing anything.

Aissen•15h ago
As an infrastructure engineer (amongst other things), hard disagree here. I realize you might be joking, but a bit of context: a big chunk of the success of cloud in more traditional organizations is the agility that comes with it: (almost) no need to ask permission from anyone, ownership of your resources, etc. There is no reason that bare metal shouldn't provide the same customer-oriented service, at least for the low-level IaaS, give-me-a-VM-now needs. I'd even argue this type of self-service (and accounting!) should be done by any team providing internal software services.
infogulch•15h ago
Like https://oxide.computer/ ?
abujazar•15h ago
The permissions and ownership part has little to do with the infrastructure – in fact I've often found it more difficult to get permissions and access to resources in cloud-heavy orgs.
joshuaissac•11h ago
This could be due to the bureaucratic parts of the company being too slow initially to gain influence over cloud administration, which results in teams and projects that use the cloud being less hindered by bureaucracy. As cloud is more widely adopted, this advantage starts to disappear. However, there are still certain things like automatic scaling where it still holds the advantage (compared to requesting the deployment of additional hardware resources on premises).
alexchantavy•14h ago
> no need to ask permission to anyone, ownership of your resources, etc

In a large enough org that experience doesn’t happen though - you have to go through and understand how the org’s infra-as-code repo works, where to make your change, and get approval for that.

misiek08•14h ago
You also need to get budget a few months earlier, sometimes even legal approval. Then you have security rules, "preferred" services, and the list goes on...
michaelt•14h ago
"No need to ask permission" and "You get the same bill every month" kinda work against one another here.
Aissen•14h ago
I should have been more precise… Many sub-orgs have budget freedom to do their job, and not having to go through a central authority to get hardware is often a feature. Hence why Cloud works so well in non-regulatory heavy traditional orgs: budget owner can just accept the risks and let the people do the work. My comment was more of a warning to would-be infrastructure people: they absolutely need to be customer-focused, and build automation from the start.
rightbyte•14h ago
Well, yeah - it is more that I frame it as a joke, but I do mean it.

I don't argue there aren't special cases for using fancy cloud vendors, though. But classical datacentre rentals almost always get you there for less.

Personally I like being able to touch and hear the computers I use.

blibble•14h ago
don't underestimate the ability of traditional organisations to build that process around cloud

you keep the usual BS to get hardware, plus now it's 10x more expensive and requires 5x the engineering!

datadrivenangel•11h ago
This is my experience, though the lead time for 'new hardware' on cloud is only 6-12 weeks of political knife fighting instead of 6-18 months of that plus waiting.
ambicapter•14h ago
I'm at a startup and I don't have access to the terraform repo :( and console is locked down ofc.
rcxdude•14h ago
I think this was also only a temporary situation caused by the IT departments in these organisations being essentially bypassed. Once it became a big important thing, they basically started to take control of it and you get the same problems (in fact potentially more so, because the expense means there's more pressure to cut down resources).
kccqzy•13h ago
That's a cultural issue. Initially at my workplace, people needed to ask permission to deploy their code. The team approving the deployments got sick of it and built a self-service deployment tool with security controls built in, and now deployment is easy. All that matters is a culture of trusting other fellow employees, a culture of automating, and a culture of valuing internal users.
Aissen•13h ago
Agreed, that's exactly what I was aiming at. I'm not saying that it's the only advantage of Cloud, but that orgs with a dysfunctional resource-access culture were a fertile ground for cloud deployments.

Basically: some manager gets fed up with weeks/months of delays for bare-metal or VM access -> takes risks and gets cloud services -> successful projects in less time -> gets promoted -> more cloud in the org.

HPsquared•15h ago
"Do nothing, Win"
radu_floricica•15h ago
> What is old is new again.

Over the years I tried occasionally to look into cloud, but it never made sense. A lot of complexity and significantly higher cost, for very low performance and a promise of "scalability". You virtually never need scalability so fast that you don't have time to add another server - and at baremetal costs, you're usually about a year ahead of the curve anyways.

ep103•14h ago
The benefit of cloud has always been that it allows the company to trade capex for opex. From an engineering perspective, it trades scalability for complexity, but this is a secondary effect compared to the former tradeoff.
et1337•14h ago
I’ve heard this a lot, but… doesn’t Hetzner do the same?
PeterStuer•14h ago
"trade capex for opex"

This has nothing to do with cloud. Businesses have forever turned IT expenses from capex to opex. We called this "operating leases".

radiator•14h ago
Hetzner is also a cloud. You avoid buying hardware, you rent it instead. You can rent either VMs or dedicated servers, but in both cases you own nothing.
kitd•14h ago
People are usually the biggest cost in any organisation. If you can run all your systems without the sysadmins & netadmins required to keep it all upright (especially at expensive times like weekends or run up to Black Friday/Xmas), you can save yourself a lot more than the extra it'll cost to get a cloud provider to do it all for you.
HPsquared•14h ago
That's how they can get away with such seemingly high prices.
chatmasta•14h ago
I can’t believe this cloud propaganda remains so pervasive. You’re just paying DevOps and “cloud architects” instead.
codegeek•14h ago
Exactly. It's sad that we have been brainwashed by the cloud propaganda long enough now. Everyone and their mother thinks that to set up anything in production you need cloud, otherwise it is amateurish. Sad.
ecshafer•14h ago
Every large organization that is all in on cloud I have worked at has several teams doing cloud work exclusively (CICD, Devops, SRE, etc), but every individual team is spending significant amounts of their time doing cloud development on top of that work.
rcxdude•14h ago
This. There's a lot of talk of "oh, you will spend so much time managing your own hardware", when I've found in practice it's much less time than wrangling the cloud infrastructure. (Especially since the alternative is usually still a hosting provider, which means you don't have to physically touch the hardware at all - though frankly even that is often an overblown amount of time. The building/internet/cooling is what costs money, but there's already a wide array of co-location companies set up to provide exactly that.)
ecshafer•13h ago
For a large company that is past the colocation phase and runs its own data centers, I am not sure where those calculations come out. But yeah, in my experience, running even a fairly large number of bare metal *nix servers in colocation facilities is really not that time consuming.
epistasis•13h ago
I think you are very right, and to be specific: IAM roles, connecting security groups, terraform plan/apply cycles, running Atlantis through GitHub - all of that takes tremendous amounts of time and requires understanding a very large set of technologies on top of the basic networking/security/Postgres knowledge.
mjr00•14h ago
Yeah I always just kinda laugh at these comparisons, because it's usually coming from tech people who don't appreciate how much more valuable people's time is than raw opex. It's like saying, you know it's really dumb that we spend $4000 on Macbooks for everyone, we could just make everyone use Linux desktops and save a ton of money.
grim_io•13h ago
What is this?!

You are self-managing expensive dedicated hardware in the form of MacBooks, instead of renting Azure Windows VMs?!

Shame!

lozf•12h ago
Don't be silly - the MacBook Pros are just used to RDP to the Azure Windows VMs ;)
wredcoll•12h ago
If "cloud" took zero time, then sure.

It actually takes a lot of time.

mjr00•12h ago
"It's actually really easy to set up Postgres with high availability and multi-region backups and pump logs to a central log source (which is also self-hosted)" is more or less equivalent to "it's actually really easy to set up Linux and use it as a desktop"

In fact I'd wager a lot more people have used Linux than set up a proper redundant SQL database

grim_io•10h ago
Honestly, I don't see a big difference between learning the arcane non-standard, non-portable incantations needed to configure and use various forks of standard utilities running on the $CLOUD_PROVIDER, and learning to configure and run the actual service that is portable and completely standard.

Okay, I lied. The latter seems much more useful and sane.

KronisLV•8h ago
> It's like saying, you know it's really dumb that we spend $4000 on Macbooks for everyone, we could just make everyone use Linux desktops and save a ton of money.

Ohh idk if this is the best comparison, due to just how much nuance bubbles up.

If you have to manage those devices, Windows with Active Directory and especially Group Policy works well. If you just have to use the devices, then it depends on what you do - for some dev work, Linux distros are the best, hands down. Oftentimes, Windows will have the largest ecosystem and the widest software support (while also being a bit of a mess). In all the time I've had my MacBook, I really haven't found what it excels at, aside from great build quality and battery life. It feels like one of those Linux distros that do things differently just for the sake of it, even the keyboard layout; the mouse acceleration feels the most sluggish (Linux distros feel the best, Windows is okay) even if the trackpad is fine; and you need DiscreteScroll and Rectangle and some other stuff to make generic hardware feel okay (or even make multi-display work). Maybe creative software is great there.

It’s the kind of comparison that derails itself in the mind of your average nerd.

But I get the point, the correct tool for the job and all that.

Arch-TK•13h ago
What is more likely to fail? The hardware managed by Hetzner or your product?

I'm not saying that you won't experience hardware failures, I am just saying that you also need to remember that if you want your product to keep working over the weekend then you must have someone ready to fix it over the weekend.

grim_io•13h ago
Cloud providers and even cloudflare go down regularly. Relax.
fwip•12h ago
Sure - but when AWS goes down, Amazon fixes it, even on the weekends. If you self-host, you need to pay a person to be on call to fix it.
CursedSilicon•12h ago
AWS doesn't have to pay people (LOTS OF PEOPLE) to keep things running over the weekends?

And they aren't...just passing those costs on to their customers?

fwip•7h ago
They are of course, but it's amortized over many users. If you're a small company, it's hard to hire one-tenth of an SRE.
rypskar•12h ago
Not only that. When your self-hosted setup goes down, your customers complain that you are down. When AWS goes down, your customers complain that the internet is down.
grim_io•10h ago
Not every business needs that kind of uptime.

How often is GitHub down? We are all just fine without it for a while.

wredcoll•12h ago
I mean, yes, but also I get "3 nines" uptime by running a website on a box connected to my isp in my house. (it would easily be 4 or 5 nines if I also had a stable power grid...)

There are a lot, a lot of websites where downtime just... doesn't matter. Yes, it adds up eventually, but if you go to twitter and it's down again, you just come back later.

icedchai•5h ago
"3 nines" is around 8 hours of downtime a year. If you can get that without a UPS or generator, you already have a stable power grid.
exe34•13h ago
except you now have your developers chasing their own tails figuring out how to insert the square peg in the round hole without bankrupting the company. cloud didn't save time, it just replaced the wheels for the hamsters.
Ekaros•12h ago
Wouldn't you want someone watching over cloud infra at those times too? Maybe slightly fewer people, but you still need some people ready.
spatley•10h ago
Exactly, for the narrowly defined condition of running k8s on digital ocean with a managed control plane compared to Hetzner bare metal:

AWS and DigitalOcean = $559.36 monthly vs. Hetzner = $132.96. The cost of an engineer to set up and maintain a bare-metal k8s cluster is going to far exceed the roughly $400 monthly savings.

If you run things yourself and can invest sweat equity, this makes some sense. But for any company with a payroll this does not math out.

icedchai•7h ago
Right, because cloud providers take care of it all. /s Cloud engineers are more expensive than traditional sysadmins.
odie5533•14h ago
Complexity? I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I cannot imagine it's easier than doing it in AWS, which is only a few clicks, and where I don't have to worry about OS upgrades and patches. Or a highly available load balancer with infinite scale.
fun444555•14h ago
I have done many Postgres deploys on bare metal. The IOPS and storage space saved (zfs compression, because psql's is meh) are huge. I regularly use hosted DBs, but largely for toy DBs in the GBs, not TBs.

Anyway, it is not hard, and controlling upgrades saves so much time. Having a client's DB force-upgraded when there is no budget for it sucks.

Anyway, I encourage you to learn/try it when you have opportunity

codegeek•14h ago
This is how the cloud companies keep you hooked. I am not against them, of course, but the notion that no one can self-host in production because "it is too complex" is something we have been fed over the last 10-15 years. Deploying a production DB on a dedicated server is not that hard. But people now think that unless they use cloud, they are amateurs. It is sad.
speleding•13h ago
I agree that running servers onprem does not need to be hard in general, but I disagree when it comes to doing production databases.

I've done on-prem highly available MySQL for years, and getting the whole master/slave dance to go just right during server upgrades was really challenging. On AWS, upgrading MySQL server ("Aurora") really is just a few clicks. It can even do blue/green deployment for you, where you temporarily get the whole setup replicated and in sync so you can verify that everything went OK before switching over. Disaster recovery (regular backups off-site & the ability to restore quickly) is also hard to get right if you have to do it yourself.

fisf•12h ago
If you are running k8s on prem, the "easy" way is to use a mature operator that takes care of all of that.

https://github.com/percona/percona-xtradb-cluster-operator or https://github.com/mariadb-operator/mariadb-operator for MySQL/MariaDB needs, or CNPG for Postgres. They all work reasonably well and cover all the basics (HA, replication, backups, recovery, etc.).
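
For Postgres, a minimal CNPG cluster really is this small (a sketch; the name and sizes are arbitrary, and the operator handles replication, failover, and volume provisioning):

    kubectl apply -f - <<'EOF'
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: pg-main
    spec:
      instances: 3
      storage:
        size: 100Gi
    EOF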

klooney•12h ago
It's really hard to do blue/green on prem with giant expensive database servers. Maybe if you're super big and you can amortize them over multiple teams, but most shops aren't and can't. The cloud is great.
cameronh90•12h ago
Doing stuff on-prem or in a data centre _is_ hard though.

It's easy to look at a one-off deployment of a single server and remark on how much cheaper it is than RDS, and that's fine if that's all you need. But it completely skips past the reality of a real life resilient database server deployment: handling upgrades, disk failures, backups, hot standbys, encryption key management, keeping deployment scripts up to date, hardware support contracts and vendor management, the disaster recovery testing for the multi-site SAN fabric with fibre channel switches and redundant dedicated fibre, etc. Before the cloud, we actually had a staff member who was entirely dedicated to managing the database servers.

Plus as a bonus, not ever having to get up at 2AM and drive down to a data centre because there was a power failure due to a generator not kicking in, and it turns out the data centre hadn't adequately planned for the amount of remote hands techs they'd need in that scenario...

RDS is expensive on paper, but to get the same level of guarantees either yourself or through another provider always seems to end up costing about the same as RDS.

fridder•11h ago
I guess that is the kicker right? "same level of guarantees".
biql•10h ago
Database is one of those places where it's justified, I think. Application containers do not need the same level of care and hence are easy to run yourself.
matt-p•9h ago
I have done all of this too; today I outsource the DB server and do everything else myself, including a local read replica and pg_dump backups as a hail mary.

Essentially all that pain of yonder years was storage: it was a f***ing nightmare running HA network storage before the days of SSDs. It was slower than RAID, 5x more expensive than RAID, and generally involved an extreme amount of pain and/or expense (usually both). But these days you only actually need a SAN (or, as we call it today, block storage) for data you care about, and likewise you only have to care about backups for data you care about.

For absolutely all of us, the side effect of moving away from monolithic 'pets' is that we have made the app layer not require any long-term state itself. So today all you need is N x any random thing that might lose data or fail at any moment as your app servers, plus an external DB service (Neon, PlanetScale, RDS), and perhaps S3 for objects.

AtlasBarfed•11h ago
I'd much rather deploy cassandra, admittedly a complex but failure resistant database, on internal hardware than on AWS. So much less hassle with forced restarts of retired instances, noisy nonperformant networking and disk I/O, heavy neighbors, black box throttling, etc.

But with Postgres, even with HA, you can't do geographic/multi-DC distribution of data nearly as well as something like Cassandra.

trenchpilgrim•13h ago
If you were personally paying the bill, you'd probably choose to self-host on cost alone. Deploying a DB with HA and offsite backups is not hard at all.
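
For instance, offsite backups with pgBackRest are roughly this much config (a sketch; bucket, endpoint, and paths are made up, and credentials are omitted):

    cat > /etc/pgbackrest/pgbackrest.conf <<'EOF'
    [global]
    repo1-type=s3
    repo1-s3-bucket=db-backups
    repo1-s3-endpoint=s3.example.com
    repo1-s3-region=eu-central-1
    repo1-path=/pgbackrest
    repo1-retention-full=2

    [main]
    pg1-path=/var/lib/postgresql/16/main
    EOF
    sudo -u postgres pgbackrest --stanza=main stanza-create
    sudo -u postgres pgbackrest --stanza=main --type=full backup
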
naasking•13h ago
> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks and I don't have to worry about OS upgrades and patches

Last I checked, stack overflow and all of the stack exchange sites are hosted on a single server. The people who actually need to handle more traffic than that are in the 0.1% category, so I question your implicit assumption that you actually need a Postgres and Redis cluster, or that this represents any kind of typical need.

trenchpilgrim•13h ago
SO was hosted on a single rack last I checked, not a single box. At the time they had an MS SQL cluster.

Also, databases can easily see a ton of internal traffic. Think internal logistics/operations/analytics. Even a medium size company can have a huge amount of data, such as tracking every item purchased and sold for a retail chain.

naasking•11h ago
They use multiple servers for redundancy, but they are using only 5-10% capacity per [1], so they say they could run on a single server given these numbers. Seems like they've since moved to the cloud though [2].

[1] https://www.datacenterdynamics.com/en/news/stack-overflow-st...

[2] https://stackoverflow.blog/2025/08/28/moving-the-public-stac...

AznHisoka•13h ago
As a self-hosting fan, I can't even fathom how hard it would be to even get started running a Postgres or Redis cluster on AWS.

Like, where do I go? Do I search for Postgres? If so, where? Does the IP of my cluster change? If so, how do I make it static? Also, can non-AWS servers connect to it? No? Then how do I open up the firewall to allow it? And what happens if it uses too many resources? Does it shut down by itself? What if I wanna fine-tune a config parameter? Do I ssh into it? Can I edit it in the UI?

Meanwhile, in all that time spent finding out, I could ssh into a server, code, and run a simple bash script to download, compile, and run. Then another script to replicate. And I can check the logs, change any config parameter, restart, etc. No black box to debug if shit hits the fan.

trenchpilgrim•13h ago
A fun one in the cloud is "when I upgrade to a new version of Postgres, how long is the downtime and what happens to my indexes?"
mschuster91•12h ago
For AWS RDS, no big deal. Bare metal or Docker? Oh now THAT is a world of pain.

Seriously, I despise PostgreSQL in particular for how fucking annoying it is to upgrade.
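
For the curious, the bare-metal dance is roughly this (a sketch with Debian-style paths; versions and paths are examples, not a recipe - Debian also ships pg_upgradecluster, which wraps this):

    sudo systemctl stop postgresql
    sudo -u postgres /usr/lib/postgresql/16/bin/pg_upgrade \
      --old-datadir=/var/lib/postgresql/15/main \
      --new-datadir=/var/lib/postgresql/16/main \
      --old-bindir=/usr/lib/postgresql/15/bin \
      --new-bindir=/usr/lib/postgresql/16/bin \
      --link  # hard-link files instead of copying, much faster
    sudo systemctl start postgresql
    # rebuild planner statistics afterwards
    sudo -u postgres vacuumdb --all --analyze-in-stages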

icedchai•9h ago
Yep. I know folks running their own clusters on AWS EC2 instead of RDS. They're still 3 or 4 versions back because upgrading Postgres is a PITA.
wahnfrieden•13h ago
It is not as simple as you describe to set up HA multi-region Postgres

If you don't care about HA, then sure, everything becomes easy! Until you have a disaster to recover from and realize that maybe you do care about HA. Or until you have an enterprise customer or compliance requirement that needs to understand your DR and continuity plans.

Yugabyte is the closest I've seen to achieving that simplicity for self-hosted multi-region HA Postgres, and it is still quite a bit more involved than the steps you describe, and definitely more work than paying for their AWS service. (I mention it instead of Aurora because there's no self-hosted process to compare directly there, as Aurora is proprietary.)

nkozyra•13h ago
Having lived in both worlds, there are services wherein, yeah, host it yourself. But having done DB on-prem/on-metal, dedicated hosting, and cloud, databases are the one thing I'm happy to overpay for.

The things you describe involve a small learning curve, each different for each cloud environment, but then you never have to think about it again. You don't have to worry about downtime (if you set it up right), running a bash script ... literally nothing else has to be done.

Am I overpaying for Postgres compared to the alternatives? Hell yeah. Has it paid off? 100%, would never want to go back.

Volundr•13h ago
> Do i search for Postgres?

Yes. In your AWS console, right after logging in. And pretty much all of your other setup and config questions are answered by just filling out the web form right there. No sshing to change parameters; they are all available right there.

> And what happens if it uses too much resources?

It can't. You've chosen how much in resources (CPU/memory/disk) to give it. Runaway cloud costs come from bill-by-usage stuff like Redshift, S3, Lambda, etc.

I'm a strong advocate for self (for some value of self) hosting over cloud, but you're making cloud out to be far more difficult than it is.

infecto•13h ago
This smells like "Dropbox is just rsync". No skin in the game; I think there are pros and cons to each, but a Postgres cluster can be as easy as a couple of clicks or an entry in a provisioning script. I don't believe you would be able to architect the same setup with a simple SSH into a single server and a simple bash script - unless you already wrote a bash script that magically provisions the cluster across various machines.
pavel_lishin•13h ago
> As a self hosting fan, i cant even fathom how hard it would be to even get started running a Postgres or redis cluster on AWS. Like, where do I go? Do i search for Postgres? If so where?

Anything you don't know how to do - or haven't even searched for - either sounds incredibly complex, or incredibly simple.

mschuster91•12h ago
Actually... for Postgres specifically, it's less than 5 minutes to do so in AWS and you get replication, disaster recovery and basic monitoring all included.

I hated having to deal with PostgreSQL on bare metal.

To answer your questions, for anyone who has the same ones and wants answers:

> Does the IP of my cluster change? If so how to make it static?

Use the DNS entry that AWS gives you as the "endpoint", done. I think you can pin a stable Elastic IP to RDS as well if you wish to expose your RDS DB to the Internet although I have really no idea why one would want that given potential security issues.

> Also can non-aws servers connect to it? No?

You can expose it to the Internet in the creation web UI. I think the default the assistant uses is to open it to 0.0.0.0/0, but the last time I did that was many years ago, so I hope AWS asks you what you want these days.

>Then how to open up the firewall and allow it?

If the above does not, create a Security Group, assign the RDS server to that Security Group and create an Ingress rule that either only allows specific CIDRs or a blanket 0.0.0.0/0.

> And what happens if it uses too much resources? Does it shutdown by itself?

It just gets dog slow if your I/O quota is exhausted; it goes into an error state when the disk is full. Expand your disk quota and the RDS database becomes accessible again.

> What if i wanna fine tune a config parameter? Do I ssh into it? Can i edit it in the UI?

No SSH at all, not even for manually unfucking something; for that you need the assistance of AWS support - but in about six years I never had a database FUBAR itself.

As for config parameters, there's an UI for this called "parameter/option groups", you can set almost all config parameters there, and you can use these as templates for other servers you need as well.
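
And the firewall piece from the CLI, for completeness (IDs and CIDR are placeholders):

    # allow one CIDR to reach Postgres (5432) on the RDS security group
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 5432 \
      --cidr 203.0.113.0/24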

cortesoft•12h ago
Your comment seems much more in the vein of "I already learned how to do it this way, and I would have to learn something new to do it the other way."

Which is of course true, but it is true for all things. Provisioning a cluster in AWS takes a bit of research and learning, but so did learning how to set it up locally. I think most people who know how to do both will agree it is simpler to learn the AWS version than to learn how to self-host.

icedchai•9h ago
If you can self host postgres, you'll find "managing" RDS to be a walk in the park.
AtlasBarfed•8h ago
Did you try ChatGPT for step by step directions for an EC2 deployed database? It would be a great litmus test to see if it does proper security and lockdown in the process, and what options it suggests aside from the AWS-managed stuff.

It would be so useful to have an EC2/S3/etc.-compatible API that maps to a homelab. Again, something that Claude should allegedly be able to vibecode, given the breadth of documentation, examples, and discussions of the AWS API.

whstl•13h ago
If you are talking about RDS and ElastiCache, it's definitely NOT a few clicks if you want it secure and production-ready, according to AWS itself in their docs and training.

And before someone says Lightsail: it is not meant for high availability/infinite scale.

lelanthran•13h ago
> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks

It's "only a few clicks" after you have spent a significant amount of time learning AWS.

binary132•12h ago
If you don’t find AWS complicated you really haven’t used AWS.
benjiro•12h ago
> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks

I have never set up AWS Postgres and Redis, but I know it's more than a few clicks. There is simply basic information that you need to link between services, and it does not matter if it's cloud or hardware - you still need to do the same steps, be it from a CLI or a web interface.

And frankly, these days with LLMs, there's no excuse anymore. You can literally ask an LLM to walk you through the steps, have it explain them to you, and you're off to the races.

> I don't have to worry about OS upgrades and patches

Single command and reboot...

> Or a highly available load balancer with infinite scale.

Unless you're Google, that's overrated...

You can literally rent a load balancer from places like Hetzner for 10 bucks, and if you're old-fashioned, you can even do DNS balancing.

Or you simply rent a server 10x the performance what Amazon gives (for the same price or less), and you do not need a load balancer. I mean, for 200 bucks, you rent a 48 core 96 thread server at Hetzner... Who needs a load balancer again... You will do millions or requests on a single machine.

icedchai•7h ago
For anything "serious", you'll want a load balancer for high availability, even if there's no performance need. What happens when your large server needs an OS upgrade or the power supply melts down?
prmoustache•9h ago
Well you can have managed resources on premises.

It costs people and automation.

throwaway894345•13h ago
If you’re just running some CRUD web service, then you could certainly find significantly cheaper hosting in a data center or similar, but also if that’s the case your hosting bill is probably a very small cost either way (relative to other business expenses).

> You virtually never need scalability so fast that you don't have time to add another server

What do you mean by “time to add another server?” Are you thinking about a minute or two to spin up some on-demand server using an API? Or are you talking about multiple business days to physically procure and install another server?

The former is fine, but I don’t know of any provider that gives me bare metal machines with beefy GPUs in a matter of minutes for low cost.

Lalabadie•13h ago
I'm a designer with enough front-end knowledge to lead front-end dev when needed.

To someone like me, especially on solo projects, using infra that effectively isolates me from the concerns (and risks) of lower-level devops absolutely makes sense. But I welcome the choice because of my level of competence.

The trap is scaling an org by using that same shortcut until you're bound to it by built-up complexity or a persistent lack of skill/concern in the team. Then you're never really equipped to reevaluate the decision.

f1shy•13h ago
If everything is properly done, it should be next to trivial to add a server. When I was working on that, we had a written procedure; followed strictly, it would take less than an hour.
hibikir•13h ago
A nimble enough company doesn't need it, but I've had 6 months of lead time to request one extra server in an in-house data center, due to sheer organizational failure. The big selling point of the cloud really was that one didn't have to deal with the division lording over the data center, or have any and all access, even just logging in, gated by their priesthood, who knew less Unix than the programmers.

I've been in multiple cloud migrations, and they were always solving political problems that were completely self-inflicted. The decision was always reasonable if you looked just at the people in the org having to decide between the internal process and the cloud bill. But I have little doubt that if there had been any goal alignment between the people managing the servers and those using them, most of those migrations would not have happened.

AtlasBarfed•12h ago
Yeah, clouds are such a huge improvement over what was basically industry standard practice: oh, you want a server? Fill out this 20-page form and we'll get you your server in 6 to 12 months.

But we don't really need one-minute response times from the cloud. So something like Hetzner may be just fine: "we'll get it to you within an hour" is still light years ahead of where we used to be.

And if that simplifies the entire management and cost side, with bare-metal or near-bare-metal performance on the provider side, then that is all good.

And this doesn't even address the fact that, yeah, AWS has a lot of hidden costs, but a lot of those managed data center outsourcing contracts where you were subjected to those lead times for new servers... really weren't much cheaper than AWS back in the day.

bstsb•11h ago
in my experience i can rescale Hetzner servers and they'll be ready in a minute or two
AtlasBarfed•11h ago
Yes, sorry, I didn't mean to impugn Hetzner by saying they have an hour's delay; just that there could be cheaper providers that don't need to offer AWS-level scaling.

Like, a company should be able to offer 1-day service, or heck, 1-week, with their internal datacenters. Just keep a scheduled buffer of machines to power up, and adapt the next week's/month's supply order based on requests.

mgkimsal•12h ago
I've been in projects where they're 'on the cloud' to be 'scalable', but I had to estimate my CPU needs up front for a year to get that in the budget, and there wasn't any defined process for "hey, we're growing more than we assumed - we need a second server - or more space - or faster CPUs - etc". Everything that 'cloud' is supposed to allow for - but ... that's not budgeted for - we'll need to have days of meetings to determine where the money for this 'upgrade' is coming from. But our meetings are interrupted by notices from teams that "things are really slow/broken"...
0cf8612b2e1e•12h ago
The management overhead of requesting new cloud resources is now here: multiple rounds of discussion and TPS reports to spin up new services that could be a one-click deploy.

The bureaucracy will always find a way.

tracker1•11h ago
Worst is when one of those dysfunctional orgs that does the IT systems administration tries to create its own internal cloud offering instead of using a cloud provider. It's often worse than hosted clouds or bare metal.

But I definitely agree, it's usually a self-inflicted problem and a big gamble attempting to work around infrastructure teams. I've had similar issues with security teams when their out-of-the-box testing scripts show a fail, and they just don't comprehend that the test itself is invalid for the architecture of your system.

jiggawatts•5h ago
Running away from internal IT works until they inevitably catch up to the escapees. At $dayjob the time required to spin up a single cloud VM is now measured in years. I've seen projects take so long that the cloud vendor started sending deprecation notices for their tech stacks halfway through, but they forged ahead anyway because it's "too hard to steer that ship".

The current “runners” are heading towards SaaS platforms like Salesforce, which is like the cloud but with ten times worse lock in.

bluedino•4h ago
> At $dayjob the time required to spin up a single cloud VM is now measured in years.

We have a ServiceNow ticket you can fill out that spins the server up on completion. Kind of an easy way to do it.

jiggawatts•4h ago
Then you end up with too-large servers all over the place with no rhyme or reason, burning through your opex budget.

Also, what network does the VM land in? With what firewall rules? What software will it be running? Exposed to the Internet? Updated regularly? Backed up? Scanned for malware or vulnerabilities? Etc…

Do you expect every Tom, Dick, and Harry to know the answers to these questions when they “just” want a server?

This is why IT teams invariably have to insert themselves into these processes, because the alternative is an expensive chaos that gets the org hacked by nation states.

The problem is that when interests aren’t forced to align — a failure of senior management — then the IT teams become an untenable overhead instead of a necessary and tolerable one.

The cloud is a technology often misapplied to solve a “people problem”, which is why it won’t ever work when misused in this way.

bluedino•2h ago
Those are all checkboxes on the form

The first time you do it, you can do a consult with a cloud team member

And of course they get audited every quarter so usage is tracked

binary132•12h ago
It’s kinda good if your requirements might quadruple or disappear tonight or tomorrow, but you should always have a plan to port to reserved / purchased capacity.
kccqzy•14h ago
My employer also resisted using cloud compute and sent staff explanations why building our own data centers is a good thing.
Damogran6•14h ago
As a career security guy, I've lost count of the battles I've lost in the race to the cloud...now it's 'we have to up the budget $250k a year to cover costs' and you just shrug.

The cost for your first on-prem datacenter server is pretty steep...the cost for the second one? Not so much.

marcosdumay•13h ago
> What is old is new again.

It's not really. It just happens that when there is a huge bullshit hype out there, the people who fall for it regret it and come back to normal after a while.

Better things are still better. And this one was clearly only better for a few use cases that most people never needed to care about in the first place.

darkwater•12h ago
> What is old is new again.

I think there is a generational part as well. The ones of us that are now deep in our 40s or 50s grew up professionally in a self-hosted world, and some of us are now in decision-making positions, so we don't necessarily have to take the cloud pill anymore :)

Half-joking, half-serious.

olavgg•12h ago
I'm in my 40s and run my own company. We deliver a data platform; our customers can choose between our self-hosted solution or running it on AWS/Azure at 10x higher cost.
jnsaff2•15h ago
There is a graph database that does disk IO for database startup, backup, and restore as single-threaded, sequential, 8 KB operations.

On EBS it does at most 200 MB/s of disk IO, simply because EBS operation latency, even on io2, is about 0.5 ms. The disk itself can go much faster: disk benchmarks easily do multi-GB/s on nodes that have enough EBS throughput.

On instance-local SSD on the same EC2 instance it will happily saturate whatever the instance can do (~2 GB/s in my case).
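
To see why latency dominates here: at queue depth 1, throughput is roughly block size divided by per-operation latency, so 8 KiB every 0.5 ms is only ~16 MB/s before any read-ahead or batching kicks in. A rough sketch for measuring that access pattern (the path is hypothetical; the page cache will inflate the numbers, so a real test would use fio with O_DIRECT):

  import os
  import time

  # Emulate the pattern above: single-threaded, sequential,
  # 8 KiB reads at queue depth 1.
  PATH = "/var/lib/db/store"   # hypothetical file on the volume under test
  BLOCK = 8 * 1024

  fd = os.open(PATH, os.O_RDONLY)
  total = 0
  start = time.monotonic()
  while True:
      chunk = os.read(fd, BLOCK)
      if not chunk:
          break
      total += len(chunk)
  os.close(fd)

  secs = time.monotonic() - start
  print(f"{total / secs / 1e6:.0f} MB/s at queue depth 1")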

anonzzzies•15h ago
What graph db is that?
jnsaff2•14h ago
neo4j
the_arun•14h ago
What is the cost of running Neo4j on aws vs using aws Neptune? Related to disk I/o?
jnsaff2•11h ago
Tbh, I don't know. For us the switching cost alone would be pretty high. That said ongoing maintenance is pretty high as well.
zw17•10h ago
Just want to chime in: Zhenni here, cofounder of PuppyGraph. We created the first graph query engine that can sit on top of your relational databases (think Postgres, Iceberg, Delta Lake, etc.) and query your relational data as a graph using Cypher and Gremlin, without any ETL or a separate graph DB needed. It's much more lightweight and easy to spin up. Because we sit on top of column-based storage and our compute engine is distributed, we can achieve subsecond query speed across 1 billion nodes. Please check it out!
inkyoto•1h ago
It is not easy to directly compare Neo4j and AWS Neptune, as the former does not exist as a fully managed service in AWS. Neo4j is available through the AWS Marketplace, though, but it most assuredly runs on an EC2 instance operated by Neo4j (the company).

We run a modest graph workload (relatively small dataset-wise but intense graph-edge-wise) on Neptune that costs us slightly under USD 600 per month. That is before the enterprise discount, so in reality we pay USD 450-500 a month. But we use Neptune Serverless, which bursts out from time to time, so the monthly charges are averaged out across the spikes/bursts. The monthly charges are for a serverless configuration of 3-16 NPUs.

Disk I/O stats are not available for Neptune, even more so for serverless clusters, and they would not be insightful anyway. The transactions-per-second rate is what I look at.

Thicken2320•15h ago
Using the S3 API is like chopping onions, the more you do it, the faster you start crying.
scns•15h ago
Less to no crying when you use a sharp knife. Japanese chefs say: no wonder you are crying, you squash them.
Esophagus4•15h ago
Haha!

My only “yes, but…” is that this:

> 50k API calls per second (on S3 that is $20-$250 _per second_ on API calls!).

kind of smells like abuse of S3. Without knowing the use case, maybe a different AWS service is a better answer?

Not advocating for AWS, just saying that maybe this is the wrong comparison.

Though I do want to learn about Hetzner.

wiether•14h ago
They conveniently provide no detail about the use case, so it's hard to tell.

But, yeah, there's certainly a solution that provides better performance for cheaper, using other settings/services on AWS.

adamcharnock•13h ago
We're hoping to write a case study down the road that will give more detail. But the short version is that not all parts of the client's organisation have aligned skills/incentives. So sometimes code is deployed that makes, shall we say, 'atypical use' of the resources available.

In those cases, it is great to a) not get a shocking bill, and b) be able to somewhat support this atypical use until it can be remedied.

wiether•13h ago
Thank you for the reply

I'm honestly quite interested to learn more about the use case that required those 50k API calls!

I've seen a few cases of using S3 for things it was never intended for, but nothing close to this scale.

wredcoll•12h ago
You're (probably) not wrong about the abuse thing, but it sure is nice to just not have to care about that when you have fixed hardware. I find trying to guess which of the 200 AWS services is the cheapest kinda stressful.
mike_hearn•10h ago
Why would it be abuse? Serving e.g. map tiles on a busy site can get up to tens of thousands of qps, I'd have thought serving that from S3 would have made sense if it weren't so expensive.
Esophagus4•10h ago
I don’t know much about map tiles… but could that be done more effectively through a CDN or cache, and then have S3 behind it?

Then the CDN takes the beating. So this still sounds like S3 abuse to me.

But I leave room for being wrong here.

Edit: presumably if your site is big enough to serve 50k RPS it’s big enough for a cache?

belter•15h ago
> If you don't want to do this yourself, then we'll do it for you for half the price of AWS (and we'll be your DevOps team too

You might not realize it, but you are actually strengthening the business case for AWS :-) Also, those hardware savings will be eaten away by two days of your hourly bill. I like to look at my project costs across all verticals...

adamcharnock•15h ago
I understand the concern for sure. But we don't bill hourly in that way, as one thing our clients really appreciate is predictable costs. The fixed monthly price already includes engineering time to support your team.
dlisboa•14h ago
> Also those hardware savings will be eaten away by two days of your hourly bill

Doubt it. I've personally seen AWS bills in the tens of thousands, he's probably not that costly for a day.

whstl•13h ago
I don't think I have joined a startup that pays less than 20k/month to AWS or any cloud in almost a decade.

Biggest recent ones were ~200k and ~100k that we managed to lower to ~80k with a couple months of work (but it went back up again after I left).

I fondly remember lowering our Heroku bill from 5k to 2k back in 2016 after a day of work. Management was ecstatic.

theideaofcoffee•12h ago
Same, but in the hundreds of thousands monthly and growing at a steady clip, with AWS extending credits worth -millions- just to keep them there, because their margins are so fat and juicy they can afford that insane markup.

That's where the real value lies: not paying these usurious amounts.

realitysballs•14h ago
Yeah, but then you need to pay for a team to maintain the network and continually secure, monitor, update, and patch the server. The salaries of those professionals really only make sense for a certain-sized organization.

I still think small-to-midsized orgs may be better off in the cloud for security/operations cost optimization.

esskay•14h ago
You still need those same people even if you're running on a bunch of EC2 and RDS instances; they aren't magically 'safer'.
lnenad•13h ago
I mean, by definition, yes they are. RDS is locked down by default. And if you're using ECS/Fargate (so not EC2), as the person writing the article does, it's also pretty much locked down outside of your app manifest definitions. Plus, your infra management cost is minimal compared to running k8s on bare metal.
rightbyte•14h ago
Aren't most vulnerabilities in your own server software or configs anyway?
DisabledVeteran•14h ago
That used to be the case until recently. As much as neither I nor you want to admit it, the truth is ChatGPT can handle 99% of what you would pay for "a team to maintain network and continually secure and monitor the server and update/patch." In fact, ChatGPT surpasses them, as it is all-encompassing. Any company can now simply pay for OpenAI's services and save the majority of the money they would have spent on the "salaries of those professionals." BTW, ChatGPT Pro is only $200 a month... who do you think they would rather pay?
tayo42•13h ago
Do you have a link to some proof that ChatGPT is patching servers running databases with no downtime or data loss?
Yiin•13h ago
I think the argument is that a dev with some vibe coding can successfully set up servers that are already good enough, at 10x less cost and 95% reliability.
kikimora•10h ago
That is an extremely bold claim. Vibe coding by a non-expert is the best way to introduce hard-to-find security issues.
parliament32•10h ago
I would pay you 100x that amount monthly to perform those services, as long as you assume the risk. If you're convinced this is viable, you should start a business :)
abenga•14h ago
This implies cloud infrastructure experts are cheaper than bare-metal Linux/networking/etc. experts. In most smaller organizations, the people writing the code probably manage the infra, so it's an "invisible cost", but IME it's easy to outgrow this and need someone to keep cloud costs in check within a couple of years, assuming you are growing as fast as an average start-up.
ldoughty•11h ago
I think comparing these skill sets is comparing completely different ballparks...

It is cheaper and easier for me to hire cloud-infrastructure _capable_ people than a server _expert_. And a capable serverless-cloud person is MUCH cheaper and easier to find.

You don't need 15 years of Linux experience to read a JSON/YAML blob for setting up a secure static website, or to figure out how to set up an S3 bucket and upload files... and another bucket for logging... You now have to go out of your way NOT to be multi-AZ and to expose the bucket to public read... I find most people can do this with minimal supervision and experience, as long as they understand the syntax and can read the docs (see the sketch at the end of this comment).

The equivalent for setting up a safe and secure server is a MUCH higher bar. What operating system will they pick? Will it be sized correctly? How are application logs offloaded? What are the firewall rules? What is the authentication/SSH setup? Why did we not do LDAP integration? What malware defense was installed? In the event of compromise, do we have backups? Did you set up an instance to gather offloaded system logs? What will the company policy be if this machine goes down at 3am? Do we have a backup? Did we configure failover?

I'm not trying to bash bare metal. I came from that space. I lead a team in the middle of nowhere (by comparison to most folks here) that doesn't have a huge pool of people with bare-metal skills... but LOTS of people who can do competent serverless with just one highly technical supervisor.

This lets us hire competent coders, who are easier to find and can reasonably be expected to have or learn secure coding practices... When they need to interact with new serverless stuff, our technical person gets involved to do the necessary templating, and most minor changes are easy for the coders to make (e.g. a line of JSON/YAML to toggle a feature).
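
A minimal boto3 sketch of that secure-static-site setup (bucket names hypothetical; the logging target also needs log-delivery permissions, omitted here for brevity):

  import boto3

  s3 = boto3.client("s3")
  s3.create_bucket(Bucket="my-site-content")   # hypothetical
  s3.create_bucket(Bucket="my-site-logs")      # hypothetical

  # Keep the content bucket private; serve it via a CDN, not public read.
  s3.put_public_access_block(
      Bucket="my-site-content",
      PublicAccessBlockConfiguration={
          "BlockPublicAcls": True,
          "IgnorePublicAcls": True,
          "BlockPublicPolicy": True,
          "RestrictPublicBuckets": True,
      },
  )

  # Ship access logs to the second bucket.
  s3.put_bucket_logging(
      Bucket="my-site-content",
      BucketLoggingStatus={
          "LoggingEnabled": {
              "TargetBucket": "my-site-logs",
              "TargetPrefix": "access/",
          }
      },
  )

  s3.upload_file("index.html", "my-site-content", "index.html")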

gervwyk•7h ago
This comment pretty much sums up this argument. Well said.

As with everything, choose the right tool for the job.

If it feels expensive or risky, make a U-turn; you probably went off the rails somewhere, unless you're working on bleeding-edge stuff, and let's be honest, most of us are not.

dorkypunk•13h ago
Then you have to replace those professionals with even more specialized and expensive professionals in order to be able to deploy anything.
adamcharnock•13h ago
I very much understand this, and that is why we do what we do. Lots of companies feel exactly as you say. I.e. Sure it is cheaper and 'better', but we'll pay for it in salaries and additional incurred risk (what happens if we invest all this time and fail to successfully migrate?)

This is why we decided to bundle engineering time with the infrastructure. We'll maintain the cluster as you say, and with the time left over (the majority) we'll help you with all your other DevOps needs too (CI/CD pipelines, containerising software, deploying HA Valkey, etc). And even after all that, it still costs less than AWS.

Edit: We also take on risk with the migration – our billing cycle doesn't start until we complete the migration. This keeps our incentives aligned.

parliament32•10h ago
If you haven't had to fight network configuration, monitoring, and security at a cloud provider, you must have a very simple product. We deploy our product both in colos and on a cloud provider, and in our experience bare-metal network maintenance and network maintenance in a PaaS consume about the same number of hours.
chubot•14h ago
Does anyone have experience with say Linode and Digital Ocean performance versus AWS and GCE?

They still use VMs, but as far as I know they have simple reserved instances, not “cloud”-like weather?

Is the performance better and more predictable on large VPSes?

(edit: I guess a big difference is that a VPS can have local NVMe that is persistent, whereas EC2 local disk is ephemeral?)

inapis•14h ago
No. DO can be equally noisy, but I've only tried their regular instances, not their premium AMD/Intel ones.
pton_xd•14h ago
I can't speak to Linode but in my experience the Digital Ocean VM performance is quite bad compared to bare metal offerings like Hetzner, OVH, etc. It's basically comparable to AWS, only a bit cheaper.
matt-p•12h ago
It's essentially the same product, but you do get lower disk latency. Best performance is always going to be a dedicated server, which in the US seems to start around $80-100/month (just checking on serversearcher.com); DO and the others do provide a "dedicated CPU" product if that's too much.
cess11•14h ago
I've left a job because it was impossible to explain this to an ex-Googler on the board who just couldn't stop himself from trying to be a CTO and clownmaker at the company.

The rough part was that we had made hardware investments and spent almost a year setting up the system for HA and immediate (i.e. 'low-hanging fruit') performance tuning and should have turned to architectural and more subtle improvements. This was a huge achievement for a very small team that had neither the use nor the wish to go full clown.

exe34•13h ago
I love that you're not just preaching - you're offering the service at a lower cost. (I'm not affiliated and don't claim anything about their ability/reliability).
torginus•13h ago
Yup, I hope to god we are moving past the 'everything's fast if you have enough machines' and 'money is not real' era of software development.

I remember the point in my career when I moved from a cranky old .NET company, where we handled millions of users from a single cabinet's worth of beefy servers, to a cloud-based shop where we used every cloud buzzword tech under the sun (though mainly everything was containerized Node microservices).

I shudder thinking back to the eldritch horrors I saw on the cloud billing side, and the funny thing is, we were constantly fighting performance problems.

bombcar•12h ago
My conspiracy theory is that "cloud scaling" was entirely driven by people who grew up watching sites get slash dotted and thought it was the absolute most important thing in the world that you can quickly scale up to infinity billion requests/second.
colechristensen•11h ago
No, cloud adoption was driven by teams having to wait 2 years for capex for their hardware purchase and then getting a quarter of what they asked for. You couldn't get things; people hoarded servers they pretended to be using, because when they did need something they couldn't get it. Management just wouldn't approve budgets, so you were stuck using too little hardware.

On the cloud it takes five seconds to get a new machine I can ssh into and I don't have to ask anyone for the budget.

You can save a lot of money with scaling, but you have to actually do that, and very few places do.

dinvlad•11h ago
And now, on cloud it’s the same but much more expensive and worse performance. We’ve been struggling for over a month to get a single (1) non-beefy non-GPU VM allocated on Azure, since they’ve been having insane capacity issues, to the extent that even “provisioned” capacity cannot be fulfilled ;-(
lazide•10h ago
Sure, but that’s because Azure. I’m sorry someone made the decision to go there. AWS & GCP, stock outs at least used to be nearly unheard of.
dijit•10h ago
Until you hit a certain scale.

I totally agree about Azure being the worst of the three; they wanted us to commit to certain usage before they'd even buy hardware themselves. Crazy…

But I also had capacity issues with Google at large scales in many zones.

everforward•11h ago
I think this is partly because, for the past decade or two, on-prem was preferred mostly by very frugal companies.

One of the places I worked that was on-prem enforced a "standard hardware profile": the servers were all nearly the same, except for things that could be changed in-house (like RAM sticks). When they ordered hardware, they'd order 5% or 10% more than we thought we'd need, to keep an "emergency pool".

If you ended up needing more hardware than you thought and could justify why you needed it right now, they'd dip into that pool to give you more hardware on a pretty rapid schedule.

It cost slightly more, but was invaluable. Need double the hardware for a week to do a migration? No problem. Product more popular than you thought? No problem.

colechristensen•7h ago
>I think this is partially that for the past decade or two, on-prem was partially preferred by very frugal companies.

Sure this is made worse by frugality, but I experienced this problem when virtualization was in its infancy, much less cloud anything even existing much less being popular.

bombcar•9h ago
There's "cloud" as "server in the cloud I can use" which is what the majority of smaller players are using - it's just someone else's server.

There's also "cloud" as the API-driven world of managed services that drain your wallet faster than you can blink.

drob518•8h ago
This. Instant availability of compute and storage, purchasable with a corporate credit card, was the cloud killer app.
kaliszad•7h ago
Seems like a few negotiation skills would be of better use than doing extreme amounts of work because somebody takes months to approve new hardware. Well, guess what: the people who slowed down hardware procurement are now slowing down the deployment of cloud resources as well, because the fundamental problem wasn't addressed, and that's organizational misalignment and dysfunction.
colechristensen•7h ago
You're going to negotiate with the structure of an entire corporation?

Excuse me CEO your budgeting process is inconvenient to my work, please change it all to be more responsive.

This is not how things work and not how changes are made. Unless you get into the C-suite, or at least become a direct report of someone who is, nobody cares and you're told to just deal with it. You can't negotiate, because you're four levels of management away from being invited to the table.

An organization that can make agile capital expenditures in response to the needs of an individual contributor is either barely out of the founder's garage or a magical fairyland unicorn.

bluedino•4h ago
Even if you can get instant approval, Dell or HP won't FedEx you 50 servers next-day like they will 50 laptops.

And once you're a customer, you get to deal with sales channels full of quotes and useless meetings and specialists. You can't just order from the website.

neonsunset•10h ago
However old, .NET Framework was still using a decent JIT compiler with a statically typed language, a powerful GC, and a fully multi-threaded environment :)

Of course Node could not compete, and the cost had to be paid for each thinly sliced microservice carrying a heavy runtime alongside it.

dematz•10h ago
Tangential point, but why do so many of these leaving-the-cloud posts use the word "beefy" to describe the servers? It's always: you don't need cloud because beefy servers handle pretty much any bla bla bla.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

If anyone from oxide computer or similar is reading, maybe you should rebrand to BEEFY server inc...

fsckboy•9h ago
>Tangential point... rebrand to BEEFY server

idea for an ad campaign:

"mmm, beefy!" https://www.youtube.com/watch?v=j6ImwKMRq98&t=21s

i don't know how "worldwide" the distribution of Chef "Boyardee" canned spaghetti is (today it's not too good), but the founder's story is fairly interesting. He was a real Italian immigrant chef named Ettore Boiardi, and he gets a lot of credit for originally evangelizing and popularizing "spaghetti" and "spaghetti sauce" in America when most people had never tried it.

https://en.wikipedia.org/wiki/Ettore_Boiardi

you know, "spaghetti code..."?

gwking•9h ago
The servers are always beefy and the software is always blazingly fast. Blazingly beefy is my new joke trademark.
torginus•7h ago
Because that is a casual word in the English language for an object with substantial power?

If you could suggest a word that would make a better substitute, that might move the conversation forward, and perhaps improve the aesthetic quality of posts about leaving the cloud.

earthnail•7h ago
Because the server types you get for the price of a single Heroku dyno are incredibly beefy. And suddenly you need a lot fewer dynos, which is quite important once you start managing them yourself.
bcantrill•6h ago
Noted!
epistasis•13h ago
What do you recommend for configuration management? I've had a fairly good experience with Ansible, but that was a long time ago... anything new in that space?
dijit•13h ago
"new", I'm not sure, but I deployed 2,500 physical Windows machines with SaltStack and it worked pretty good.

it also handled some databases and webservers on FreeBSD and Windows, I considered it better than Ansible.

lazyfanatic42•13h ago
Haha, this reminds me of when I used to manage a Solaris system consisting of 2 servers: Sparc T7, 1 box in one state and 1 box in another. No load balancer.

Thousands and thousands of users depending on that hardware.

Extremely robust hardware.

api•13h ago
How do you deprogram your devs and ops people from the learned helplessness of cloud native ideology?

I've found that it's almost impossible to even hire people who aren't terrified of the idea of self-hosting. This is deeply bizarre for someone who installed Linux from floppy disks in 1994, but most modern devs have fully swallowed the idea that cloud handles things for them that mere mortals cannot handle.

This, in turn, is a big reason why companies use cloud in spite of the insane markup: it's hard to staff for anything else. Cloud has utterly dominated the developer and IT mindset.

awestroke•12h ago
So you'd rather self-host a database as well? How do you prevent data loss? Do you run a whole database cluster in multiple physical locations with automatic failover? Who will spend time monitoring replication lag? Where do you store backups? Who is responsible for tuning performance settings?
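
(For scale, the replication-lag part of that list is, at minimum, a periodic query against the primary. A sketch with psycopg2, hypothetical host/user, Postgres 10+ column names:)

  import psycopg2

  # Ask the primary how far each standby is behind, in bytes of WAL.
  conn = psycopg2.connect("host=db-primary.internal dbname=postgres user=monitor")
  cur = conn.cursor()
  cur.execute("""
      SELECT client_addr,
             pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
      FROM pg_stat_replication
  """)
  for addr, lag_bytes in cur.fetchall():
      print(f"standby {addr}: {lag_bytes} bytes behind")
  conn.close()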
7bit•12h ago
I really don't understand this comment. The cloud doesn't protect you from data loss or provide any of the things you named.
baby_souffle•12h ago
Yes, it does? For a fraction of a dollar per hour, AWS will give me a URI I can connect to. On the other end is a Postgres instance that already has authentication and backups handled for me. It's also backed by a storage layer far more robust than anything I could put together in my rented cage with my corporate budget.
theideaofcoffee•12h ago
Hosting a database is no different from self-hosting any other service. This viewpoint is what the cloud hath wrought: an atrophying of the most basic operational skills, as if running these magic services were only achievable by the hyperscalers who said they are the only ones capable.

The answer to all of your questions is a hard: it depends. What are your engineering objectives? What are your business requirements? Uptime? Performance? Cost constraints and considerations? The cloud doesn't take away the need to answer these questions; it's just that self-hosting actually requires you to know what you are doing, versus clicking a button and just hoping for the best.

xmcp123•7h ago
I would argue that correctly tuning a database is significantly more difficult than most services one would self-host.

That said, you can afford a lot more hardware if you're not using RDS, so the tuning doesn't need to be perfect.

theideaofcoffee•6h ago
Not... really? It's no more difficult than finding the correct buffer sizes for nginx, or the correct size for the eBPF connection-tracking map if you're using Cilium on k8s, or kernel TCP buffers, or any of the myriad other services one could run.

Being a bit obtuse to tune doesn't really justify going all-in on cloud. It's all there in the documentation.

CursedSilicon•12h ago
>I've found that it's almost impossible to even hire people who aren't terrified of the idea of self-hosting

Are y'all hiring? [1]

I did 15 months at AWS and consider it the worst career move of my life. I much prefer working with self-hosting where I can actually optimize the entire hardware stack I'm working with. Infrastructure is fun to tinker with. Cloud hosting feels like a miserable black box that you dump your software into and "hope"

[1] https://cursedsilicon.net/resume.pdf

fer•11h ago
>I've found that it's almost impossible to even hire people who aren't terrified of the idea of self-hosting

Funny, I couldn't find a new job for a while because I had no cloud experience; finally, and ironically, I got hired at AWS. Every now and then these days I get headhunters unsure about my actual AWS experience because of my lack of certifications.

nikodunk•12h ago
If you’re big, invest in this. If you’re small, slap Dokploy/Coolify on it.
rixed•12h ago
I do not disagree, but just for the record, that's not what the article is about: they migrated to Hetzner's cloud offering.

If they had migrated to a bare-metal solution they would certainly have enjoyed an even larger increase in performance and decrease in costs, but it makes sense that they opted for the cloud offering instead, given where they started from.

lazystar•11h ago
> In reality, there is significant latency between Hetzner locations that make running multi-location workloads challenging, and potentially harmful to performance as we discovered through our post-deployment monitoring.

the devil is in the details, as they say.

rgrieselhuber•11h ago
We moved DemandSphere from AWS to Hetzner for many of the same reasons back in 2011 and never looked back. We can do things that competitors can’t because of it.
dhruv_ahuja•11h ago
Can you please explain what some of those things are? Curious to know and learn.
dinvlad•11h ago
The cloud is also not fulfilling its end of the promise anymore - capacity on-demand. We’ve been struggling for over a month to get a single (1) non-beefy non-GPU instance on Azure, since they’ve been having just insane capacity issues, where even paying for “provisioned” capacity doesn’t make it available.
traceroute66•10h ago
> on S3 that is $20-$250 _per second_ on API calls!

It is worth pointing out that if you look beyond the nickel-and-diming US cloud providers, you will very quickly find many S3 providers who don't charge you for API calls, just for the actual data-shifting.

Ironically, I think one of them is Hetzner's very own S3 service. :)

Other names IIRC include UpCloud and Exoscale... but it's not hard to find with the help of Mr. Google; most results for "EU S3 provider" will likely have a similar pricing model.

P.S. Please play nicely and remove the spam from the end of your post.

ksec•8h ago
We will soon have 256 Zen 6c cores per socket, so at least 512 cores per server; multiple PCIe 5.0 SSDs at 14 GB/s, with up to half a petabyte of storage; and TBs of memory.

And now Nvidia is in the game for server CPUs, which means much faster time-to-market for new PCIe generations, and better x86 CPU implementations as well as ARM variants.

themafia•7h ago
> We typically see a doubling of performance

The AWS documentation clarifies this: when you get 1 vCPU in a Lambda, you're only going to get up to 50% of the cycles. It improves as you move up the RAM:CPU tree, but it's never the case that you get 100% of the vCPU cycles.

up2isomorphism•1h ago
Half a year later all the data gets wiped out, and what can your customer do?

And you are still charging half of AWS, in which case I'd just do the work myself if I really thought AWS was too expensive.

liendolucas•15h ago
What is an actual, solid reason to choose or stay on AWS these days?

The topic of paying hefty amounts of money to AWS when other options are available has been discussed many times before.

My view of AWS is that you have bazillions of things you might never use but still need to learn about, you are tied to a company across the Atlantic that can basically shut you down at any time for whatever reason, and finally there's the cost.

andybak•15h ago
I love how few comments on this and similar posts give much context along with their advice. Are you hosting a church newsletter in your spare time, or a resource-intensive web app with millions of paying enterprise customers and a dedicated devops team on 3 continents?

Any advice on price/performance/availability is meaningless unless you explain where you're coming from. The reason we see people overcomplicating everything to do with the web is that they follow advice from people with radically different requirements.

DarkNova6•15h ago
Tech industry in a nutshell
Terretta•14h ago
Strong agree. I hadn't seen your comment when I wrote this, below: https://news.ycombinator.com/item?id=45616366

TL;DR: Think of hosting providers like a pricing grid (DIY, Get Started, Pro, Team, Enterprise) and if YAGNI, don't choose it.

casparvitch•14h ago
IDK mate, my personal pastebin needs to run on bare metal or it can't keep up
Hasz•13h ago
Different requirements, different skillsets, different costs, different challenges. AWS is only topically the same product as Hetzner, coming from someone who has used both quite a bit.
cube00•13h ago
> The reason we see people overcomplicating everything to do with the web is that they follow advice from people with radically different requirements.

Or they've had cloud account managers sneaking into their C-suite's lunchtime meetings.

Other comments in this thread say they get directives to use AWS from the top.

Strangely, that directive often comes with AWS's own architects embedded into your team, and even more strangely, they seem to recommend the most expensive serverless options available.

What they don't tell you is that you'll be rebuilding and redeploying your containerised app daily with new Docker OS base images to keep up with the security scanners, just like patching the OS on a bare-metal server.

sergiotapia•11h ago
> a dedicated dev ops team in 3 continents

you don't need that in 99.9999% of cases.

faizshah•15h ago
Can anyone recommend a good cloud for GPU instances?

I was trying to find a good one for 30B quants, but there are so many now and the pricing is all over the place.

dboreham•15h ago
What kind of GPU are you looking for?
matt-p•15h ago
Depends on the gpu you need honestly. Maybe worth checking out https://www.serversearcher.com/servers/gpu and other comparison sites?
jedisct1•15h ago
Koyeb is really great and has affordable GPU instances https://www.koyeb.com
ants_everywhere•15h ago
The public cloud is priced as if availability, reliability, durability, and latency matter to your business. If they don't, you can get far with just a single machine.

A great deal of the work in cloud engineering is ensuring the abstractions meet the service guarantees. Similarly you can make a car much cheaper if you don't need to guarantee the driver will survive a collision. The cost of providing a safety guarantee is much higher than providing a hand-wavy "good enough" feeling.

If your business isn't critical then "good enough" vibes may be all you need, and you can save some money.

sealeck•15h ago
You do know you can have high availability without using cloud providers? E.g. you run a second server in a different datacenter as a standby that can take over, etc.
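
A minimal sketch of that failover idea (hypothetical hosts; Postgres 12+ pg_promote(); real setups use Patroni/repmgr plus fencing rather than a bare script like this):

  import psycopg2

  def primary_alive() -> bool:
      # Cheap liveness probe against the primary.
      try:
          psycopg2.connect(
              "host=db1.example.com dbname=postgres connect_timeout=3"
          ).close()
          return True
      except psycopg2.OperationalError:
          return False

  if not primary_alive():
      # Promote the standby so it starts accepting writes.
      standby = psycopg2.connect("host=db2.example.com dbname=postgres")
      standby.autocommit = True
      standby.cursor().execute("SELECT pg_promote()")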
Terretta•14h ago
just, and, and, and ...

IF you need it, soon you wish the lego blocks pulled IAM all the way through and worked with a common API

ants_everywhere•13h ago
I mean the (virtual) machine itself has these guarantees. You can set the entire rack on fire and your VM will continue to operate or else you're owed compensation for the SLA violation.

You can add redundant machines with a failover. You then need to calculate how likely the failover is to fail, how likely the machines are to fail, how likely the switch is to fail, etc. You need engineers with pager rotations to give 24-hour coverage, and so on.

What I'm saying is that the cloud providers give you strong guarantees and this is reflected in their pricing. The guarantees apply to every service you consume because with independent failures, the probability of not failing is multiplicative. If you want to build a reliable system out of N components then you need to have bounds on the reliability of each of the components.
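
The multiplication bites quickly; for example, five independent components at 99.9% availability each give only about 99.5% end-to-end:

  # Availability of a chain of independent components is multiplicative.
  parts = [0.999] * 5
  avail = 1.0
  for p in parts:
      avail *= p
  print(f"{avail:.4%}")   # ~99.5010%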

Your business may not need this. Your side project almost certainly doesn't. But the money you save isn't free, it's compensation for taking on the increased risk of downtime, losing customer data, etc.

I would be interested to see a comparison of the costs of running a service on Hetzner with the same reliability guarantees as a corresponding cloud service. On the one hand we expect some cloud service markup for convenience. On the other hand they have economies of scale. So it's not obvious to me which is cheaper.

marcosdumay•11h ago
> What I'm saying is that the cloud providers give you strong guarantees and this is reflected in their pricing.

and yet, they go offline all the time.

stego-tech•15h ago
Really digging these post-mortems on major public cloud migrations of late, either to smaller providers like Hetzner or privately-owned solutions in data centers. It gives me more ammo when an organization tasks me with saving them money, by showing that these are, in fact, perfectly viable options whose tradeoffs may be worthwhile for specific organizational needs.
dboreham•15h ago
Quick note that while I hear nothing but good things about Hetzner, the approach is generic -- applies to the many other bare metal "SmallCo" providers. For example we use Hivelocity and have been very happy. Perhaps also worth mentioning that you can literally buy computers and rent space to plug them in and hook up to the internet in a place we call a data center. That's even cheaper when you know your workload long term and have access to capital.
natnat•15h ago
Are there decent US based alternatives to Hetzner? I'd like to have my servers located in the US for a variety of reasons, but most of the alternatives I've seen to Hetzner seem to be pretty fly-by-night shops.
NoiseBert69•15h ago
Hetzner has 2 DCs in the US with solid peerings and transit.
CodesInChaos•14h ago
Which do not offer Hetzner's biggest selling point: Cheap dedicated servers
matt-p•15h ago
Clouvider and Latitude look like they have decent pricing on https://www.serversearcher.com/ and I know PhoenixNAP exists as well.
brikym•6h ago
Maybe DigitalOcean.
lunias•15h ago
The world is healing.
whstl•15h ago
After being immersed in cloud-native hell for a few years, I'll say it:

This setup is probably also easier to reason about and easier to make secure than the messy garbage pushed by Amazon and other cloud providers.

People see cloud providers through rose-colored glasses, but even something like RDS requires VPCs, subnets, route tables, security groups, internet/NAT gateways, lots of IAM roles, and CloudWatch to be usable. And to make it properly secure (meaning: not just sharing the main DB password with the team) you need even more, and it's hard to orchestrate; it's not just an option in a CloudFormation script.

Sure, securing a server is hard too, but people 1. actually share this info and 2. don't have illusions about it.
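
On the password-sharing point: the cloud-native fix is IAM database authentication, where each engineer gets a short-lived token instead of the shared master password. A boto3 sketch with a hypothetical endpoint and user:

  import boto3

  rds = boto3.client("rds", region_name="eu-central-1")
  token = rds.generate_db_auth_token(
      DBHostname="mydb.abc123.eu-central-1.rds.amazonaws.com",  # hypothetical
      Port=5432,
      DBUsername="alice",                                       # hypothetical
  )
  # Use `token` as the Postgres password, with SSL required;
  # it expires after 15 minutes.

Which, of course, only works once the IAM roles and grants are orchestrated too: exactly the sprawl described above.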

Terretta•14h ago
> This setup is probably also easier to reason about and easier to make secure than the messy garbage pushed by Amazon and other cloud providers.

Ability to do anything doesn't mean do everything.

It's straightforward to be simple on AWS, but if you have trouble denying yourself, consider Lightsail to start: https://aws.amazon.com/lightsail/

alberth•15h ago
There's a value curve for infrastructure, I'll use an analogy...

  Low Cost                                           High Cost
  ==============================================================
  FARM     WHOLESALER     GROCERY     RESTAURANT     DOORDASH    
  BUILD    CO-LOCATION    HETZNER     AWS            VERCEL           
While it's not a perfect analogy, in principle it holds true.

As such, it should come as no surprise that eating at a restaurant every day is going to be way more expensive.

whstl•15h ago
AWS is more of a high-scale cook-your-own-pizza restaurant where you can't see the bill until the end, and you often have to mop the floor and wash the latrines yourself too. And washing the latrine costs money too, of course, but you don't wanna get salmonella, right?
alberth•13h ago
That's where my analogy isn't perfect.

There are different tiers of restaurants.

There are the luxury premium restaurants (Michelin-starred, like AWS), but there are also local diners that arguably have phenomenal food too (maybe someone like DigitalOcean/Linode).

Terretta•14h ago
Exactly.

I hadn't seen your comment when I wrote this, below: https://news.ycombinator.com/item?id=45616366

I love your farm-to-table grid: works for everyone not just HN commenters. And putting DOORDASH on the right is truer from cost perspective than the metaphor I'd used.

For HN, I'd compared to a pricing grid (DIY, Get Started, Pro, Team, Enterprise) with the bottom line that if YAGNI, don't choose it.

Your grid emphasizes my other point, it's about your own labor.

hunvreus•3h ago
I love that, I may steal it.
1a527dd5•15h ago
Love it!

We are unfortunately moving away from self-hosted bare metal. I disagree with the transition to AWS, but that decision has been made several global layers above me.

It's funny: our previous AWS spend was $800 per month, and had been for almost 6 years.

We've just migrated some of our things to AWS and the spend is around $4,500 per month.

I've been told the company doesn't care until our monthly bill is in excess of five figures.

None of this makes sense to me.

The only thing that makes sense is our parent company is _huge_ and we have some really awesome TAMs and our entire AWS spend is probably in the region of a few million a month, so it really is pennies behind the sofa when global org is concerned.

Sammi•15h ago
I read so many stories like this, and every time I think of the "your margins are my opportunity" quote and think there must be so many inefficient enterprises ripe for disruption by a small, efficient team.
Terretta•14h ago
There are many other costs besides that AWS bill. Naming two that are hard to put a number on but get discussed at boardroom or senior-exec level:

- client confidence

- labor pool

aunty_helen•14h ago
And to add to that second one, ability to bring in a third party contractor to reduce headcount when needed.
cube00•13h ago
> None of this makes sense to me.

OpEx good, CapEx bad.

marcosdumay•11h ago
That could make sense if the OP were talking about a less than 30% difference.

What country applies a 400% income tax to companies?

(Well, seriously, it makes sense at a tax rate above 80%. Not impossibly high, but I doubt any country ever had it.)

1a527dd5•8h ago
Now you mention it, the other thing we are being forced to do is categorise our work (e.g. commits/PRs) as CapEx/OpEx. And then, once a year, a bunch of us are randomly selected by one of the Big Four auditing firms to talk about why a given piece of work was CapEx/OpEx.
mythz•15h ago
We moved most of our apps off AWS to Hetzner years ago by switching to SQLite/Litestream with Cloudflare R2 replication, avoiding the need for a managed RDBMS, and saved a bunch of $$$ [1].

Although for our latest app we've switched to local PostgreSQL (i.e. app/RDBMS on the same server) with R2 backups, for its better feature set, at the same cost: we only pay for the 1x Hetzner VM, and Cloudflare R2 storage is pretty cheap.

[1] https://docs.servicestack.net/ormlite/litestream
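
For anyone curious what the R2 side looks like: Litestream itself streams the WAL continuously, but even a plain snapshot-and-upload works against R2's S3-compatible API. A minimal sketch (endpoint, bucket, and credentials hypothetical):

  import boto3
  import sqlite3

  # Take a consistent online snapshot of the live database.
  src = sqlite3.connect("/data/app.db")
  dst = sqlite3.connect("/tmp/app-snapshot.db")
  src.backup(dst)
  dst.close()
  src.close()

  # Push it to Cloudflare R2 via the S3-compatible endpoint.
  s3 = boto3.client(
      "s3",
      endpoint_url="https://<account-id>.r2.cloudflarestorage.com",
      aws_access_key_id="R2_KEY_ID",          # hypothetical
      aws_secret_access_key="R2_SECRET",      # hypothetical
  )
  s3.upload_file("/tmp/app-snapshot.db", "backups", "app-snapshot.db")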

nibab•15h ago
It was never about price or performance. Price and performance may be things you care about as a hobbyist, but as a business you have a lot of other considerations.
abujazar•14h ago
In my case, availability is crucial as well, and AWS simply doesn't offer good enough SLAs.
drchaim•14h ago
I dream of the day when I can have my servers at home with solar power and batteries. I still have a ways to go, but it will come.
Tepix•14h ago
For many projects, this is already possible. 50MBit/s of upstream bandwidth is not that uncommon if you have something better than DSL and that's probably the bottleneck.
Terretta•14h ago
AWS and DigitalOcean*: $559.36, Hetzner: $132.96

Perspective: this difference is one hour of US fintech engineer time a month. If you have to self-build a single thing on Hetzner that you get as "built-in" on AWS, are you ahead?

If this is your price range, and you're spending time thinking about how to save that $400/month (three Starbucks a day) instead of driving revenue or delivering client joy, you likely shouldn't be on AWS in the first place.

AWS is for when you need the most battle tested business continuity through automations driving distributed resilience, or if you have external requirements for security built into all infra, identity and access controls built into all infra at all layers, compliance and governance controls across all infra at all layers, interop with others using AWS (private links, direct connects, sure, but also permission-based data sharing instead of data movement, etc.). If your plans have those in your future, you should start on AWS and learn as you grow so you never have a "digital transformation" in your future.

Whether you're building a SaaS for others or a platform for yourself, “Enterprise” means more than just SSO tax and a call us button. There can be real requirements that you are not going to be able to meet reasonably without AWS's foundational building blocks that have this built in at the lego brick level. Combine that with "cost of delay" to your product and "opportunity cost" for your engineering (devs, SREs, users spending time doing undifferentiated heavy lifting) and those lego blocks can quickly turn out less expensive. Any blog comparing pricing not mentioning these things means someone didn't align their infra with their business model and engineering patterns.

Put another way, think of the enterprise column in the longest pricing grid you've ever seen – the AWS blocks have everything on the right-most column built in. If you don't want those, don't pick that column. Google and Azure are in the Team column second from right. Digital Ocean, CloudFlare, the Pro column third from right. Various Heroku-likes in the Getting Started column at the left, and SuperMicro and Hetzner in the Self-Host column, as in, you're buying or leasing the hardware either way, it's just whose smart hands you're using. ALL of these have their place, with the Getting Started and Pro columns serving most folks on HN, Team best for most SMB, and Enterprise best for actual enterprise but also Pro and Team that need to serve enterprise or intend to grow to that.

Note that if you don't yet need an enterprise column on your own pricing grid, K8s on whoever is a great way to Get Started and go Pro yourself while learning things needed for continuous delivery and system resilience engineering. Those same patterns then can be shifted onto on the Team and Enterprise column offerings from the big three (Google, Azure, AWS).

Here's my TL;DR blog post distilling all this:

If YAGNI, don't choose it.

wltr•14h ago
I’d like to point that exaggerations like $430 an hr (isn’t some average salary) or three Starbucks a day being something everyone casually does, they weaken your point.

As the rest of your comment, personally, I see it more like a pitch to use AWS, rather than some conversion whether everyone really needs that enterprise tier. Me, I’d prefer to control as much of my infra as possible, rather than offloading it to others for an insane price tag.

Terretta•3h ago
OK, call it a half day (four hours) a month.

But really, if DIY, someone's got to actually have it meet SLOs and SLAs. So you need a person or two, which is when those hours add up.

These days housing and benefiting an employee can cost 50% to 100% overhead, depending on firm efficiency. So, $400/hr means $800k/yr (because 40 hrs x 50 weeks = rate x 2000) but half that can be considered overhead (recruiting, real estate, benefits, training, vacation, "management" when some number of headcount requires adding a lead or manager who is expensive overhead), so it's really 400k a year which is not out of line at firms with regulatory requirements.

Anyway, if your workload is critical, you can't have only one, so call it 2 at 200k. Point is, when all these things matter, GCP/Azure/AWS isn't the thing that stands out.

---

> As the rest of your comment, personally, I see it more like a pitch to use AWS

Re AWS, I thought I was clear:

If YAGNI, don't choose it.

mustaphah•14h ago
You've saved $426/mo but inherited a $10k/mo full-time DevOps job.
NDizzle•14h ago
Do you think Devops is not required when you use cloud providers or something? Of course it is...
mustaphah•14h ago
I meant +1 DevOps engineer dedicated to managing the added operational complexity.
NDizzle•14h ago
Why would it be +1? The Devops duties that were performed on AWS are no longer being performed... wouldn't it simply shift to the new stack?
mustaphah•6h ago
Self-managed Talos K8s, self-managed CloudNativePG, and the operational overhead of networking, DNS, etc. All of these used to be fully managed by AWS for them: zero operational cost.

I'd guesstimate a 2x increase in their operational complexity. So if they previously required 0.5 of a DevOps full-timer, they'll now need one more DevOps full-timer just to handle the added complexity.

Does that make sense to you?

matesz•14h ago
Second hand hardware for the win
sharpfuryz•14h ago
AWS/GCP/Azure cloud turns the audit beast into a house cat: one IAM rule, one log stream, one firewall, etc. Otherwise, you need to fill out a lot of documents to prove that your bare metal is safe to host, for example, cardholder data.
ripped_britches•14h ago
Who could have guessed that servicing compliance requirements at scale would create such a good business model
iamleppert•14h ago
$500 a month for two environments with 4 CPUs and 8 GB of memory is diabolical. The only thing more expensive and with worse performance than AWS is Azure.
objektif•14h ago
Startup founders and employees: are you really paying insane bills to AWS? How? We are paying peanuts compared to our other expenses.
ripped_britches•14h ago
Probably depends on what your business is and the LTV of your customers
objektif•12h ago
Maybe, but I'm not sure. How many Figma-type startups are out there paying $500k a day, as opposed to traditional SaaS/OpenAI wrappers? I just don't see it.
tstrimple•13h ago
In a lot of cases it's because they don't know how to build using cloud services effectively, so they just spin up a lot of VMs because that's the only tool they know how to use. Running VMs 24/7 is just about the most expensive thing you can do in the "cloud". But doing anything else is "too complicated".
ukd1•14h ago
My last co ran hundreds of servers with Hetzner for a semi-stateless workload. With AWS, the pricing, let alone the performance, wouldn't have been viable. At some point we also used Heroku for the application (more recently EKS); the combo drove folks nuts, as it was "weird".
dielll•14h ago
The only problem with Hetzner is that they don't seem to accept accounts created from African countries. I tried creating an account with them twice from Kenya, and in both instances my account got blocked five minutes after creation with zero explanation. I tried reaching out to support and got zero reply.

I get that it's their business and they can do as they please with it; however, maybe tell me before I create an account that you don't accept accounts from my continent.

esskay•14h ago
Hetzner is very cautious about who they accept as a customer these days, as the downside of being a low-cost hosting provider is that you get a ton of signups from people looking to use it for seedboxes and other illegal activities.

It sucks for legitimate customers, but you can sometimes plead your case directly as long as you are willing to provide ID and such. Ultimately, though, like you say, it's their business.

whalesalad•14h ago
The struggle is real. A lot of people think cloud lock-in is due to using cloud-specific services like SQS... but it's the data. Try to exfil 300TB out of S3 without paying an enormous transfer cost =(

I want to move our infra out of AWS but at the end of the day we have too much data there and it is a non starter.

mentalgear•14h ago
No mention of SST.dev running on Hetzner?
tinyspacewizard•14h ago
> Why two cloud providers? Initially we used only DigitalOcean, but a data intensive SaaS like tap needs a lot of cloud resources and AWS have a generous $1,000 credit package for self-funded startups.

So some Kubernetes experts migrated to AWS for $1k in credits. This is madness. That's weeks of migration work to save the equivalent of a day of contracting.

oneplane•14h ago
Gee, another "we did not need cloud, so by not using cloud, we stopped spending on something we did not need"-story. Duh. The real story is why someone who doesn't need cloud services starts using them anyway.

If you need it, use it, if you don't need it, don't use it. It's not the big revelation people seem to think it is.

vivzkestrel•14h ago
I'll immediately do the switch, but please tell me: how do I use CDK on Hetzner?
jerf•13h ago
Saving $400/month covers about 3-5 hours of engineering time per month. In a year, call it 30-50 hours. Did this project take more than 30-50 person-hours?

(The obvious argument that it might pay off more in the future depends on the startup surviving long enough for that future to arrive.)
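To make the break-even explicit, a sketch (the loaded hourly rate is an assumption, not a figure from the comment):

    # How many engineer-hours does a $400/month saving buy?
    monthly_saving = 400       # $ saved by the migration
    loaded_rate = 100          # $/hr, assumed fully loaded engineering cost
    per_month = monthly_saving / loaded_rate   # 4 hrs/month
    per_year = per_month * 12                  # 48 hrs/year, in the 30-50 hr range
    print(per_month, per_year)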

bearjaws•13h ago
I feel like this is left out of the story too often: people tend to compare the most optimistic "self-hosted" setup, usually just one or two servers at best, to a less-than-ideal cloud installation.

My parent company (healthcare) uses all on-prem solutions and has 3 data centers and 10 sysadmins just for the data centers. You still need DevOps too.

I don't know how much it would cost to migrate their infra to AWS, but ~$1.3M (salary) in annual spend buys you a ton of reserved compute.

$1.3M buys roughly 6,000 CPU cores and 10 TiB of RAM running 24/7, with 100 TB of storage.
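As a rough sanity check of that figure, a sketch (the unit prices are assumptions for illustration, not quotes from AWS pricing pages):

    # Back-of-envelope: what ~$1.3M/yr of reserved compute looks like.
    vcpus = 6000
    price_per_vcpu_hour = 0.025          # $ per vCPU-hour, assumed reserved rate
    compute = vcpus * price_per_vcpu_hour * 24 * 365   # ~$1.31M/yr
    storage = 100_000 * 0.08 * 12        # 100 TB at an assumed $0.08/GB-month
    print(compute, storage)              # compute alone lands near $1.3M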

I know for a fact, due to redundancy, that they have nowhere near that, AND they have to pay for Avamar, VMware (~$500k), etc.

There's no way it's cheaper than AWS, not even close.

So sure, someone's self-hosted phpBB forum doesn't need to be on AWS, but I challenge someone to run 99.99%-uptime infra significantly cheaper than the cloud.

jerf•13h ago
I didn't try to address that because it's harder to call, especially at a small startup. If what is probably "the guy doing all this" happens to be more comfortable with k8s than with the AWS stack, you can end up winning by going with a nominally more complicated k8s stack that doesn't force you to spend dozens of hours learning new things, and instead just use what you already know. For a small startup those training costs are proportionally huge compared to a more established, larger going concern already making money. Startups should generally go with whatever their engineers already know unless there is a damned good reason not to. (And "I just wanted to learn it" is not a good reason for the startup.)

But monetarily, even for a startup, $400/month savings is something you shouldn't be pouring the equivalent of $5000 (or more, just picking a reasonable concrete number to anchor the point) into. You really need to solve a $400/month problem by putting your time into something, anything that will promote revenue growth sooner and faster rather than optimizing that particular cost.

mbesto•12h ago
Exactly this. I know people don't like to use this term (because it comes from traditional IT), but this is effectively known as TCO (total cost of ownership). The whole "bare metal" versus the well-known hyperscalers debate often misses this with a hand-wavy "just get better devops people and it's cheaper".
bcrosby95•11h ago
If your parent company's sysadmins invest heavily in automation, each sysadmin could be managing thousands of servers.

Also, 6000 CPU "cores" on the cloud is more like 3000 physical cores, since a vCPU is typically a hyperthread. You can get that in just 20-50 servers, which is in the range of something that could be taken care of as a part-time job.

bearjaws•10h ago
Of course, you could also invest heavily in cloud automation, likely run their payloads with autoscaling, and save a fortune too.

My point is, when people compare cloud to on-prem, they compare a hypothetical on-prem installation to a realistic, actually-working cloud deployment.

We only see these blog posts for things that are just 1-2 servers.

Very few companies are fully on-prem and saving a lot of money; those that are typically have very specific use cases like high bandwidth or I/O usage.

pikelet•1h ago
Not everyone is on a US tech salary (no idea about the company in question – just saying that this doesn't apply universally).
createaccount99•13h ago
Frankly, the idea of managing the DB myself seems like a horrible idea and a lot of work. But perhaps I've been indoctrinated by the big cloud.
liampulles•13h ago
There are footguns to be found with a self-operated k8s cluster, and definitely (costly) ones with cloud-native database orchestration. Not to mention all the risks that come with migrating to "new things" in general. For me, when I moved away from those things to the managed AWS versions, I could definitely see the value of a cloud solution.

But that cost difference is huge...

It is an interesting tradeoff to consider, I think (I'm not criticizing Hetzner, AWS, or any team's decision, provided they've thought the tradeoffs through).

Pxtl•13h ago
I've always thought these services seemed overpriced, like users were subsidizing a ferocious amount of R&D and speculative expansion through their fees. I mean, it's just hosting web services; this feels like it should be a commodity at this point.
dwrowe•13h ago
In my experience, there has been an interesting ebb and flow between 'dedicated' hardware and the auto-scaling power of AWS/cloud instances. When first starting out, super cost-conscious, you're getting ideal performance from dedicated hardware. As soon as you start experiencing interesting traffic patterns, auto-scaling makes sense. Then your bill grows to the point where it makes economic sense to pull back to more powerful dedicated servers, then lean a little into auto-scaling again, and so on. A pendulum of underlying services and the people to support them; a balancing act of finding efficiencies during wild growth periods and reaching some sense of stability.
js4ever•13h ago
We've helped quite a few teams move from AWS to Hetzner (and Netcup) lately, and I think the biggest surprise for people isn't the cost or the raw performance; it's how simple things become when you remove 15 layers of managed abstractions.

You stop worrying about S3 vs EFS vs FSx, or Lambda cold starts, or EBS burst credits. You just deploy Docker stacks on a fast NVMe box and they fly. The trade-off is that you need a bit more DevOps discipline: monitoring, backups, patching, etc. But that's the kind of stuff that's easy to automate and doesn't really change week to week.
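For example, the nightly database backup piece can be a few lines, sketched here in Python (the database name, bucket, endpoint, and paths are hypothetical; run it from cron or a systemd timer, with credentials in the environment):

    import datetime, subprocess
    import boto3  # works against any S3-compatible object store

    stamp = datetime.date.today().isoformat()
    dump = f"/var/backups/app-{stamp}.sql.gz"
    # Dump and compress the database (database name is hypothetical).
    subprocess.run(f"pg_dump app_db | gzip > {dump}", shell=True, check=True)

    # Ship it off-box; endpoint_url points at whatever object store you use.
    s3 = boto3.client("s3", endpoint_url="https://objects.example.com")
    s3.upload_file(dump, "app-backups", f"postgres/{stamp}.sql.gz")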

At Elestio we leaned into that simplicity: we provide fully managed open-source stacks for nearly 400 pieces of software, and we also cover CI/CD (from Git push to production) on any provider, including Hetzner.

More info here if you're curious: https://elest.io

(Disclosure: I work at Elestio, where we run managed open-source services on any cloud provider including your own infra.)

pqdbr•9h ago
I'd like to know more about your Postgres offering: does it offer streaming replicas and streaming backup, or just dumps stored to S3?
js4ever•8h ago
Yes, we offer clusters with auto-failover, and replicas can be in multiple regions and even with multiple providers.

We support Postgres, but also MySQL, Redis, OpenSearch, ClickHouse, and many more.

For backups, we offer differential snapshots and regular dumps that you can send to your own S3 bucket.

https://docs.elest.io/books/databases/page/deploy-a-new-clus...

wahnfrieden•13h ago
Is anyone self-hosting Yugabyte? It looks like maybe the best solution these days for HA Postgres on bare metal or a VPS. (And it has a very friendly open-source license.)
redbell•13h ago
Nice writeup; I like it and will definitely bookmark it for future reference.

One weak take in the article that felt not quite right, however, is the cost-saving part. At "1/4 of the price", I was expecting to see an AWS bill in the range of $10k/month or more, but it turned out to be just around ~$550, for a total saving of about $420.

With the above said, it really makes me question whether the hassle of migration is worth it, because saving cost is probably one of the main reasons to move away from AWS.

Finally, let me conclude with this comment from /r/programminghumor:

    You're not a real engineer until you've accidentally sponsored Amazon's quarterly earnings
warrenmiller•13h ago
It's also very easy to run Windows on their Linux cloud VPSes... if you need to run Windows.
czhu12•13h ago
We discovered a similar cost saving, and I ended up building an internal PaaS, which I later open sourced, that works well on Hetzner.

The biggest downside to Hetzner-only is that it's really annoying to wrangle shell scripts and GitHub Actions to drive all the automation needed to deploy code.

The Portainer team recently started sponsoring the project, so I've been able to dedicate a lot more time to it, close to full time.

https://canine.sh

groestl•13h ago
Deploying to bare-metal Hetzner for years, very happy with the service. Smart-hands work is fast and reliable. Once you've got your stack worked out (which admittedly takes a while), maintaining bare metal is not much different from maintaining cloud, apart from swapping disks sometimes. And the cost is so low in comparison, it's ridiculous.
negendev•13h ago
Shhhh don't talk too loud about Hetzner!
jasonthorsness•13h ago
With AWS you are paying for the ancillary things: a pretty good network, a reliable portal, reliable management APIs, and nearly perfectly commoditized products (one VM like any other, though in practice I observe minor exceptions to this all the time, predominantly worse connectivity to other services or disk). But bare-metal performance is good (AWS offers bare metal too, but at an insane price point, because they only buy enormous servers).
bzmrgonz•13h ago
OP, can you expand a little on Kustomize? I think it could use a little more real estate in your article!
time4tea•13h ago
Hetzner are great.

Cloud was a reaction to overlong procurement timelines in company-managed DCs. This is still a thing; it still takes half a year to get a server into a DC!

However, probably 99% of use cases don't need servers in your own DC; they work just fine on a rented server.

One thing though: a rented server can still have a hardware failure, and it needs to be fixed, so deployment plans need to take that into account. Fargate will do that for you.

heavyset_go•13h ago
The cloud makes sense when you have someone else's money to burn and don't care about being held hostage by lock-in if you aren't careful.
999900000999•12h ago
Long long ago, at the start of my career I was at a great company. We were using a Postgres DB version not supported by RDS. So I had to manually set up postgres over and over again on EC2 instances. This was before Docker was reliable/standard.

I wasted hours on this, and the moment RDS started supporting the Postgres version we needed, everything became much easier.

I still remember staying up till 3:00 a.m. installing postgres, repeatedly.

While this article is nice, they only save a few hundred dollars a month. If a single engineer has to spend even an hour a month maintaining this, it's probably going to be a wash.

And that's assuming everything goes right, the moment something goes wrong you can easily wipe out a year saving in a single day ( if not an hour depending on your use case).

This is probably best for situations where your time just isn't worth a whole lot. For example, let's say you have a hobbyist project and, for some reason, you need a very high-capacity server.

This can easily cost hundreds of dollars a month on AWS, and since it's coming out of your own pocket it might be worth it to spend that extra time on bare metal.

But at a certain point you're going to ask how much your time is really worth. For example (and forgive me for mixing up terms and situations), a Ghost blog is about $10 a month via their hosted solution. You can probably run multiple Ghost blogs on a single Hetzner instance.

But, and maybe it was just my luck, eventually it's just going to stop working. Do you feel like spending two or three hours fixing something rather than just spending the $20 a month to host your two blogs?

lkrubner•12h ago
They went from $559.36 to $132 a month on Hetzner, and they seem happy about the performance. This matches my own experience as well, I have been stunned regarding Hetzner and how cheap it can be.
sytse•12h ago
The cost improvements are great. If you miss the automation AWS provides for database servers, consider something like https://www.ubicloud.com/, which is great for PostgreSQL servers. On bare metal these typically also support 5x the IOPS without paying through the nose.
moomoo11•12h ago
Cloud was never about performance. It was about offloading the costs of admin work like backups.
rixed•12h ago
The title should have been "...from AWS and DigitalOcean".

If it were only from AWS, they would probably also have mentioned a drastic reduction in API complexity.

jurschreuder•12h ago
Woooowww, we literally switched our prod object storage from AWS S3 to Hetzner today.

Now #1 on HN. Destiny.

lazystar•11h ago
> In reality, there is significant latency between Hetzner locations that make running multi-location workloads challenging, and potentially harmful to performance as we discovered through our post-deployment monitoring.

Found this little bit buried near the end. All that glitters is not gold, I guess.

desireco42•11h ago
This doesn't make sense to me. Businesses don't switch to go from $500 to $150 a month; it just doesn't add up. I think they did it for political reasons, but they should just say so.

I'm in the US. I would use Hetzner just the same, but not to save a few bucks here and there.

0xbadcafebee•11h ago
> We saved 76% on our cloud bills while tripling our capacity

The claim here is that their cloud bills went down. But the cost of engineering and support, which will increase their overall cost, is not mentioned anywhere.

When you are a startup, you don't have a lot of headcount. You should be using your headcount to focus on your product. The more things you do that eat up your headcount's time, the less time you have to develop your product.

You need a balance between dirt-cheap cost and conserving labor. This means it's a good idea to pay a bit of a premium to give yourself back time and headcount.

> How much does a 2x CPU, 4 GiB RAM container cost on AWS Fargate? Just over $70/month. We run two worker instances, which need these higher resources, along with smaller web instances and all the other infrastructure needed to host an application on AWS (load balancer, relational DB, NAT gateway, ...). All together, our total costs for two environments of tap grew to $449.50/month.

The classic AWS mistake: they're talking retail prices. AWS gives every customer multiple opportunities to lower costs below this retail price. You don't have to be an enterprise. In fact, you can ask them directly how to save money, and they'll tell you.

$70/month is the retail price of Fargate on-demand (for that configuration). Using a Compute Savings Plan[1], you can save 50% of this cost. Additionally, if you switch from x86 to ARM for your Fargate tasks, the cost lowers an additional 25%. So with Graviton and a Compute Savings Plan, their Fargate price would have been $26.25/month, a 62.5% savings.

And there's more to save. Spot instances are discounted at 70%. NAT Instances save you money over NAT Gateways. Scaling your tasks to zero when they're not needed cuts costs more. Choose the cheapest region to pay the least.

[1] https://repost.aws/questions/QUVKyRJcFPStiXYRVPq-KU9Q/saving...
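The discount stacking, spelled out (a sketch using the percentages above; not verified against current AWS pricing):

    retail = 70.00                             # $/month, on-demand Fargate task
    with_savings_plan = retail * 0.50          # Compute Savings Plan: -50%
    with_graviton = with_savings_plan * 0.75   # x86 -> ARM: another -25%
    print(with_graviton)                       # $26.25/month
    print(1 - with_graviton / retail)          # 0.625, i.e. 62.5% saved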

Lorin•11h ago
I'd like to thank everyone moving their hosting away from shared hosts to VPS.

I've been on the same shared-hosting platform for 15+ years, and the hardware's load average dropped to ~16% on a 64-core Epyc with 512 GB of RAM. It easily handles bursts of half a million uniques without breaking a sweat.

AntonCTO•11h ago
How do you get rid of AWS SES?
byebyetech•11h ago
If people stopped using AWS, Azure, and GCP, the economy would improve by 10%. Using cloud services is a massive waste of money.
abenga•8h ago
But people spending money is the economy.
fcpk•10h ago
In my opinion, this is not always true, because the size of the maintenance team you need can matter a lot. I find that for small-to-medium deployments, cloud can be a lot more efficient despite being more expensive, because it can:

- take away a lot of the expensive maintenance and management of things like databases, backups, and storage that would otherwise require at least a couple more people (so if your bill is <$10k/month, it certainly isn't worth hiring more people);
- let you spin up systems at almost zero cost (think Cloud Run / container systems) for very-low-usage workloads;
- give you global coverage and distributed reliability that can otherwise be hard to get.
qwertyuiop_•10h ago
The executives at <bigcorp> will not allow bare metal or other alternatives anytime soon, because either most of them have drunk the AWS/Azure Kool-Aid, or AWS/Azure has already bought out the Rolodex of most of these companies through 1) the promise of hiring senior tech leadership if they drink the Kool-Aid, and 2) the influence of previous executives who accepted roles at Amazon.

You've gotta hand it to Amazon for their strategy.

byyll•10h ago
I get the point, but those are incomparable. Hyperscalers don't compete on price; they compete on scale and services offered: the compliance, the security, the support, the integrations, the backups.

Can you host your own open-source object storage, key vault, VPN, queue service, container registry, logging, and your own Postgres/MySQL? Sure, but you will need to research what is best and what is supported, keep it all maintained, make sure to apply updates and that those updates don't break anything, wake up in the middle of the night when it breaks, and make sure it's secure. And you would still need to handle access control across those services. And you would still need a third-party service for DDoS protection, likely a CDN too. And you would likely need an identity provider.

itsthecourier•10h ago
Also, if you need spikes of GPUs, there's vast.ai.
ilaksh•10h ago
Maybe we should stop talking about Hetzner before they raise their prices, guys.
theturtlemoves•9h ago
Estimating the cost and performance up front is impossible unless you've deployed something similar before. I'm probably wrong, but it seems to me that AWS (or Azure / Google Cloud) is a good enough place to start: get a feel for what you need, then move to bare metal only if needed.
bradgessler•9h ago
I keep sending links like this to AWS when their sales reps reach out, then I ask for a discount, and they never get back to me.
lonelyprograMer•9h ago
I tried to register a Hetzner account multiple times but didn't get approval. I chose Netcup as a better alternative.
masterj•9h ago
> tap is a data-intensive SaaS that needs to be able to execute complex queries over gigabytes of data in seconds.

> minimum resource requirements for good performance to be around 2x CPUs and 4 GiB RAM

This is less compute than I regularly carry in my pocket, and significantly less than a Raspberry Pi. Why is Fargate that expensive?

grebc•8h ago
It's amusing that outages of the hyperscalers are posted here on HN regularly enough, yet there are plenty of turkeys in this thread claiming you have to use hyperscalers if you need 100% uptime.

Newsflash: no one has 100%, and your over-equitied startup is just burning other people's money with no clue, as per usual.

kakoni•7h ago
Any k8s self-hosters here? How is that going?
mlnj•7h ago
I've been running a couple of machines on Hetzner Cloud with https://github.com/vitobotta/hetzner-ks for about two years now.

I've been very happy with the setup. It also hosts a StackGres cluster that backs up to GCS. Plenty of compute for the price.

artdigital•6h ago
Your link 404s
domlebo70•7h ago
How do people deploy Docker containers on a machine like this? We currently use Cloud Run, which is very hands-off. How is a deploy done?
greenbeans12•6h ago
Is this a shocker? In the cloud, you pay for convenience. This is basically running your own Kubernetes cluster, which is right for the right teams (highly technical infra engineers).
artdigital•6h ago
I've been thinking about fully self-hosting my small stuff for a while, but a mini managed k8s cluster ($10) is just so convenient. I build a new Docker image for some thing, add a manifest file to my k8s repo, and apply it; my "cluster" spins up a new pod, I can easily hook it up to Tailscale or Cloudflare WARP to make it accessible, and it's available.

I've been using this approach for the past few years, and if something gets bigger, I move the container to Fly or a different k8s cluster in a couple of hours max.

On my bigger k8s clusters I can then easily add more nodes or scale pods up depending on need, and scale them back down when idle.

Still, the main issue with any setup I see is the database. No matter what I use, I'd either have a managed Postgres somewhere or something like Litestream, and if that's not in the same data center, it's going to add latency, sadly.

nine_k•6h ago
I see the allure of dedicated servers. All is great when everything works. What I miss in this picture is the failure scenarios.

How many dedicated servers do you need to run to afford losing one of them to a hardware failure? What is the cost and lead time for a replacement? How much overprovisioning do you need to do, and how far ahead, in anticipation of seasonal spikes or a wave of signups from a marketing campaign?

PeterZaitsev•6h ago
There is no one-size-fits-all. At small scale, a "managed" cloud that takes care of as many things as possible is effective. As you scale, though, the cost grows proportionally with environment size, minus some discount. With solutions like Hetzner you get a lot more hardware for the price but have to invest more labor into managing it; with automation, however, that labor does not grow nearly as fast as your environment size.
figers•45m ago
I love Azure App Service because I can focus on our code and code security, and leave server management, OS security, and physical security to Microsoft.

No servers, no VMs, no containers, just our code to focus on.