Someone needs to study these claims tbh; your point is contrary to my experience.
You can subsidize the cost of self hosting by discounting your time/skill in DIY - and you don’t have to interview yourself.
However, there are SREs and DevOps engineers who are seemingly required for cloud solutions, and they cost more than sysadmins.
So, someone is lying somewhere.
The profession grew out of observations made in catastrophic failures like Chernobyl.
They primarily focus on organization structures, communication structures, and responding to failures. At an engineering level they focus on metastability - decoupling local failures from overall system stability so they can't cascade into catastrophic ones - and on automated tooling to loop the right engineers into ongoing incidents.
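The decoupling idea above can be sketched as a minimal circuit breaker. This is illustrative only - class and variable names here are made up, not from any particular library - but it shows the core move: after enough consecutive failures, stop calling the struggling dependency and fail fast instead, so its trouble can't cascade into callers.

```javascript
// Minimal circuit-breaker sketch (names are illustrative).
// After `threshold` consecutive failures the breaker "opens" and
// rejects calls immediately instead of hammering a failing dependency.
class CircuitBreaker {
  constructor(threshold = 3) {
    this.threshold = threshold;
    this.failures = 0;
  }

  async call(fn) {
    if (this.failures >= this.threshold) {
      throw new Error("circuit open: failing fast");
    }
    try {
      const result = await fn();
      this.failures = 0; // any success resets the count
      return result;
    } catch (err) {
      this.failures += 1; // count consecutive failures
      throw err;
    }
  }
}
```

Real implementations add timeouts and a "half-open" probing state, but even this skeleton captures why local failure stops propagating.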
The skill set is a side effect of operating at scale, where even brief catastrophic failures can result in hundreds of millions of dollars of losses; directly in revenue and indirectly to reputation.
You can “buzzword” yourself into job security just as easily on a VM.
You need to do that for cloud as well. The assumption that you can just get the developers to spin up what they need doesn't work. You need a team of "cloud engineers" if you don't want your cloud bill to go crazy. I know a few people who do AWS for clients; they basically spend all their time maintaining, upgrading, and reworking things to reduce the cost.
In one example, when AWS changed their IPv4 pricing, one of them went in and managed to remove 90% of their public IPv4 usage. You need to stay on top of the cloud platforms; ensuring that costs stay low is a full-time job.
Your colo provider should be doing their part for physical access for ISO27001/SOC2 and PCI-DSS.
I quite literally know all about this because I was running payment card infrastructure in a colo, as part of one of the largest e-commerce software suites in the UK at the time.
Cloud gives you direct access to compute at the click of a button (provided you have quota).
The ISO/PCI cert requirements are not accelerated by cloud use, only the initial deployment speed of your solution. If anything, the latest I've seen is that cloud adds more checks in various areas where you can accidentally leak data through poor permissions, or because someone could enable those permissions.
For example, in Google Cloud there was an additional requirement that I monitor any admin user's OAuth2 application accesses for their Google accounts; this was not a requirement on-prem.
At least in my experience as a service provider.
With cloud I can easily plug in a compliance vendor like Drata, add connectors to my other vendors, and get a significant amount of evidence quickly.
Bespoke on-prem solutions bring plenty of their own challenges that are just N/A or handled by cloud vendors, like physical security.
Maybe as a payment processor things shake out differently. Though having read far more of PCI v4 than I'd like, I doubt it.
Do you trust your people to never slip up? To really deliver the reliability and security guarantees that your company depends on? Are competent technical hands available at all times to handle issues as they come up? This adds significantly to the org chart, and it requires constant care to keep it steered and staffed.
I'm a huge fan of self hosting, of buying your own hardware. Especially for very technical entities that have lots of overlap and intimacy with computers. But your company needs to have reason to believe it can do the job not just well, but to a degree where you're sure you won't lose the entire business. Needs to believe they really are on top of this huge domain! I see very clearly why folks go cloud, and I wish the world were in a better position, had more to say, to show why buying pizza boxes will not just save you money but be an enduring, acceptably safe choice.
Much safer not to wade in here - shutting the hell up is free, and there's plenty that can go wrong - but I do think Kubernetes presents the first maybe-acceptable opportunity for broadly acceptable DIY computing, delivering three key things: an essential integration of concerns that lets it be a comprehensive, cross-cutting platform for your business's compute; strong delivery, in a well-known way, on the security and dependability you absolutely must have; and escape from the historical trap of each company ending up married to its own specific boondoggle infrastructure, hand-cobbled together by whoever had such-and-such task at the time. There's a stable practice here, one that many companies and practitioners are honing. There's much difference between Kube clusters, sure, but these intricacies and elaborations support the creation of something that is mostly alike, if you look from one company's clusters to the next.
Would UniSuper engineers have trashed their entire infrastructure the way Google Cloud did? Granted, that's not a common thing to happen, but they got _very_ lucky that day that they had a backup elsewhere and didn't fully commit to trusting Google.
And while your company is a single petri dish with your own specific problems, these folks have to see a lot of shit. Sometimes, yes, it all goes bad even there anyways! The resiliency fails! But given how much they've thought about it, and how much they've hedged against one-in-a-million, one-in-a-billion cases that they see semi-regularly and that your company might never notice until it's too late, I think there's a ton of reason to trust these generally very well-exercised systems, with pretty sizable engineering teams and deployments behind them. More so than what most folks can bring to bear.
These matters are existential threats for your business. And they are existential threats for big providers, for hyperscalers, even more so. Google very publicly owned up to UniSuper, found out why it happened (a blank form on an internal tool led to the account having a one-year time limit with auto-deletion), and publicly announced their threefold plan to make sure at least this specific horrible threat would never happen again:
> We deprecated the internal tool that triggered this sequence of events… We scrubbed the system database and manually reviewed all GCVE Private Clouds… We corrected the system behavior that sets GCVE Private Clouds for deletion for such deployment workflows.
https://cloud.google.com/blog/products/infrastructure/detail...
The mere fact that this was such a notable event should itself be notable. In general, the hyperscalers have been stupendously reliable. And absurdly secure. They have small armies dedicated to ensuring both.
I look forward to this new era where finally we might have computing of our own good enough & broadly known & trusted enough to make doing our own - even with small teams - viable!! But heavens yes, keep off-site backups!!!
Individual AZs have significantly worse uptime than any DC I've ever worked with. They are very reliable, if you are multi-region - and you hire a small army of consultants.
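As a rough sketch of why multi-AZ or multi-region deployment changes the math, assuming failures are independent (a generous assumption, since correlated failures absolutely happen):

```javascript
// If one zone is available a fraction `a` of the time and failures are
// independent, n redundant zones are all down only (1-a)^n of the time.
function combinedAvailability(a, n) {
  return 1 - Math.pow(1 - a, n);
}

// A 99.9% zone alone gives ~8.7 hours of downtime a year;
// two independent such zones give 1 - (0.001)^2 = 99.9999%.
```

The catch, per the comment above, is that actually achieving that independence (replication, failover, data consistency) is exactly what the small army of consultants gets paid for.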
EDIT: (I can't reply for a bit) Quick glance at https://www.hetzner.com/sb/ seems to suggest 64GB RAM and a full Intel i7.
https://www.hetzner.com/dedicated-rootserver/ax42/
But 8/16 is $95+ on AWS.
That's where they get you.
People are starting to learn that cloud hosting isn't exactly cheap. It's great for scaling quickly, there are far better automation tools (e.g. Terraform), and a lot of the complicated stuff is done for you, but it's not cheap.
37signals is running their own infrastructure, and it's cheaper than cloud.
The biggest thing I hear otherwise is "but salaries!". Okay, but you're just replacing your sysadmins with AWS devops people, who I can't imagine are cheaper.
Cloud isn't simple, it's very complex. You need just as many people and just as much knowledge to make it work.
EDIT: a typical CRUD app will be fine with just PHP, HTML and CSS.
I've been struggling with how to write about and fix this paradox for a long time. We start with the heavyweight infrastructure because we don't know how large our application will end up. Small teams use microservices. Small teams build large continuous-integration pipelines and strain under the complexity.
That's partially because the large teams have built open-source ecosystems that make dealing with that complexity mostly invisible. That's partially because we all neglect YAGNI. And it's partially because we need better patterns for scaling our complexity just in time.
In fact, I would be surprised if AI were to generate code as rube-goldbergery as React would have you write. Shrug.
Gemini might default to a framework and Tailwind CSS, but I can tell it to use vanilla Javascript.
It’s Laravel with Alpine (or Vue in some cases).
It retains all the simplicity of classic web development with a lot more sensible tooling and significantly improved language — PHP of today is worlds better than the old days.
There is a lot of gold in the vein of simple web technologies, even if they’re not being propped up by the PR department at billion dollar companies.
Also, maintainability of dependencies in web projects that do use build chains seems much better today than even eight years ago, and it seems to be improving. esbuild, Vite, and other projects have simplified the toolchain; improvements in CSS have reduced the need for preprocessors, etc.
No one who argues for this seems to remember the mess of jQuery dependencies, in the form of plugins people would include in projects (before HTTP/2), just to have some simple functionality like a carousel, transitions, submitting forms with AJAX (XHR), and so on. Many of those things are a few lines of code now, where they were sometimes hundreds of lines of code in the past when people developed this way.
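For instance, the AJAX form submission that once required a jQuery form plugin is a few lines of standard `fetch` now. A minimal sketch (the function names `serialize` and `submitForm` are illustrative, not from any library):

```javascript
// Turn a plain object of field values into a form-encoded body.
// URLSearchParams does the actual encoding work.
function serialize(fields) {
  return new URLSearchParams(fields).toString();
}

// POST the fields as a classic form submission, no plugin required.
async function submitForm(url, fields) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: serialize(fields),
  });
  return res.ok;
}
```

In a real page you'd typically build the body from `new FormData(formElement)` instead of a plain object, but the shape is the same.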
The prototype pollution and general undefined behavior introduced by plugins was nearly impossible to track in projects where teams were plugin hungry.
Additionally, the spaghetti where people would interweave on-element event attributes (onclick, etc.) with scripts led to huge headaches debugging events, back before devtools in browsers were as mature as they are today.
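The modern alternative to inline `onclick` attributes keeps all the wiring in script, where devtools can see it. A small sketch (names are illustrative), using the standard `addEventListener` API:

```javascript
// Instead of onclick="..." scattered through the markup, attach the
// handler in one place and keep its state out of globals.
function wireCounter(target) {
  let clicks = 0;
  target.addEventListener("click", () => { clicks += 1; });
  return () => clicks; // accessor so callers can read the count
}
```

Multiple independent listeners can coexist on the same element this way, whereas an `onclick` attribute is a single slot that later code silently overwrites.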
Honestly, I could go on forever.
Walk into a large project that is 15 years old and still using jQuery UI, and you might change your mind on the merit of its elegance.
jQuery was one of the nicest things around at a time when it was needed, but pining for this version of the past boggles my mind.
var App = {}; // one global namespace object for the whole app
People are a lot more educated now. But you're right, we can't go backward. Still, there are many good things about jQuery that were mostly thrown out by Angular and React, and I think there's a middle ground here somewhere. Anyways, not like it matters, AI has arrived.
And the ops team has preconfigured systems for single page applications but not MPAs.
I just need to ship this tiny tool very quickly (initial server request to delivery is about 5 weeks, but I didn't know they were complicating things this much until about 3 weeks after the initial request), otherwise I wouldn't just give in here.
We didn't lose PHP, it's just no longer in the spotlight.
In fact, I applied to a Laravel role very recently, but they didn't like that most of my recent experience was with Go and Rust rather than PHP.
If you want a PHP comeback, hire that Rust engineer that wants to use Laravel!
jQuery isn’t going to make anything easier today. It’s definitely contemporary with this age of web development, but the expected standard library in a browser implementation of JavaScript (plus CSS animations triggered by JavaScript) already includes all of jQuery’s features. What you get instead is more complex stack traces and a rather large dependency (large in comparison to no dependency) that just unlocks a bespoke syntax for other JavaScript features.
Places where you used jQuery can now be just JavaScript. It’s lovely to use like that, actually!
Everything else, totally agree with though!
https://developer.mozilla.org/en-US/docs/Web/API/Web_Animati...
var _ = s => document.querySelectorAll(s);
_('.someclass').forEach(e => { e.classList.add('something', 'else'); });
Not crying any tears.
Vanilla JavaScript now is way better than the mystery methods in jQuery. Does this call return zero, one, or many things? And what type does it return? Who knows! We'll find out at runtime!
You don’t understand the tools we have now; you’re probably not the target audience, and you can still use PHP + jQuery.
And it’s false that there weren’t package managers: the de facto standard was Bower, used in combination with Grunt as a task runner, with a Ruby toolchain that most of the time included LESS to preprocess styles.
If the "industry" default has new JS/Node powered ecosystem, perhaps some people, including myself, would agree that PHP frameworks are still ahead of the overly modular JS frameworks, there isn't anything like Laravel or Django for the JS world.
But for simple cases, templates are still very common, including in static site generators, and even in PHP.
mattl•1d ago
I'm looking at my next project now and I think I'm going to build it like this again because if it has any hope of lasting a decade or more, it needs to be relatively easy to maintain.