I've seen very simple services get bogged down in needing to be "scalable," so they're built to be spun up or torn down easily. Then a load balancer is needed. Then an orchestration layer is needed, so let's add Kubernetes. Then a shared-state cache is needed, so let's deploy Redis. Then we need some sort of networking layer, so let's add a VPC. That's hard to configure, though, so let's infra-as-code it with Terraform. Then wow, that's a lot of infrastructure, so let's hire an SRE team.
Now nobody is incentivized to remove said infrastructure, because jobs rely on it existing, so it ossifies in the organization.
And that's how you end up with a simple web server that has somehow exploded into costing millions a year.
When I looked into having this static page hosted on internal infra, it would also have needed a minimum of two dedicated oncalls, Terraform, a load balancer, containerization, security reviews, SLAs, etc.
I gave up after the second planning meeting and put it on my $5 VPS with a Let's Encrypt cert. That static page is still running today, having outlived not only the production line but also the entire company.
In my experience there are two kinds of infrastructure or platform teams:
1) The friendly team trying to help everyone get things done with reasonable tradeoffs appropriate for the situation
2) The team who thinks their job is to make it as hard as possible for anyone to launch anything unless it satisfies their 50-item checklist of requirements and survives months of planning meetings, where they try to flex their knowledge on your team by picking the project apart.
In my career it’s always been one or the other. I know it’s a spectrum and there must be a lot of room in the middle, yet I’ve only ever seen the extremes.
I currently have VPSes running on both low-end and big cloud providers that have been running for years with no downtime except when they restart for updates.
If the web browser is running on a glorified Chromebook like a 2025 MacBook Air, indeed there's a lot of breathing room. A lot of RAM. Processing power. Cores. It's nice. I get that.
And then you can do offline-first: meaning, use the cached local storage available to WASM apps.
Then, for whatever needs to go back to the mothership, call web APIs in the cloud.
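A rough sketch of that pattern (the endpoint URL, storage key, and record shape here are all made up for illustration, and a real WASM app would reach storage through JS glue or IndexedDB rather than raw localStorage):

```typescript
// Offline-first sketch: queue changes in local storage, flush them to a cloud
// API when the network is available. Endpoint, key, and shape are placeholders.

type PendingChange = { id: string; payload: unknown; queuedAt: number };

const QUEUE_KEY = "pending-changes";

// Read the queue of not-yet-synced changes from local storage.
function loadQueue(): PendingChange[] {
  return JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]") as PendingChange[];
}

// Persist a change locally first, so the app keeps working offline.
function queueChange(change: PendingChange): void {
  const queue = loadQueue();
  queue.push(change);
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

// When online, push queued changes to the mothership and drop the accepted ones.
async function syncToCloud(): Promise<void> {
  if (!navigator.onLine) return;
  const remaining: PendingChange[] = [];
  for (const change of loadQueue()) {
    try {
      const res = await fetch("https://api.example.com/changes", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(change),
      });
      if (!res.ok) remaining.push(change); // keep it for the next attempt
    } catch {
      remaining.push(change); // network hiccup: retry later
    }
  }
  localStorage.setItem(QUEUE_KEY, JSON.stringify(remaining));
}

// Try to sync whenever connectivity comes back.
window.addEventListener("online", () => void syncToCloud());
```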
That would, in theory, basically give power back from the "net PC" theory of things to the "fat client"--if you ask the grey-haired nerds among you. And you would gain something.
But outside of a glorified Chromebook like a 2025 MacBook Air, we have to remember that we're working with all kinds of web devices--everything from crap phones to satellite servers with terabytes of RAM--so the scalability story as we have it isn't entirely wrong.
I have been to U of Toronto; very smart people. But honestly this is a troll piece. It doesn't go into any depth and it's one-sided. Unhelpful. I think U of Toronto's reputation would be better served by something more sophisticated than this asinine blog entry.
There is a cost to the network synchronisation, so you definitely want to scale vertically until you really must scale horizontally.
Also, why are people submitting every single post from this blog recently? Does this person actually do any work at UToronto, or is he just paid to write? There are 8000 links to various pages under this domain. I hope it's just a collective pseudonym like Nicolas Bourbaki and that one person didn't write 8000 pages.
I'm desperate to use some of the insights from a navel-gazing university computing center in my infrastructure: IPv6 NAT (huh? what? What?!), custom config management driven by pathological NIH (I know precisely zilch about anything at utcc but I can already say with 100% confidence that your environment isn't special enough to do that), 'run more fibers', 'keep a list of important infrastructure contacts in case of outages', 'i just can't switch away from rc shell', and that's just in the last six months. On second thought, I'll just avoid all links to here in the future to save my sanity.
Or did I miss the sarcasm?