Cue programmers blaming the product team for "always changing their mind" as they discover what users actually need, and the product team blaming developers for being hesitant to make changes; and when the programmers do agree, it takes a long time to undo the perfect architecture they've spent weeks fine-tuning against some imaginary future user base.
Why does that matter? My argument is: Engineer for what you know, leave the rest for when you know better, which isn't before you have lots of users.
That is not a lot. You can host that on a Raspberry Pi.
(16 if you need geo replication.)
I am worried by the talk of designing for 10k daily users and a peak of 1000 TPS being dismissed as premature optimisation. Those numbers are quite low. You should know your expected traffic patterns, add a margin of error, and stress test your system to make sure it can handle the traffic.
I disagree that self-inflicted architectural issues and personnel issues are different.
Instead, they celebrate "learning from running at scale" or some nonsense.
they couldn't redeploy to a high-spec VPS instead?
The result is that 100% of auth requests time out once the login queue depth gets above a hundred or so. At that point the users retry their login attempts, so you need to scale out fast. If you haven't tested scale-out, then it's time to implement a bcrypt thread pool, or reimplement your application.
But at least the architecture I described "scales".
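For reference, the thread-pool option really is small. A minimal sketch, assuming Python and the `bcrypt` package (which releases the GIL while hashing, so a thread pool genuinely helps); the queue cap and names are invented for illustration:

```python
# Minimal sketch: bound concurrent bcrypt work and shed load explicitly
# instead of letting every auth request time out under a retry storm.
import os
import threading
from concurrent.futures import ThreadPoolExecutor

import bcrypt

# One worker per core: bcrypt is CPU-bound, extra threads just add queueing.
_pool = ThreadPoolExecutor(max_workers=os.cpu_count() or 4)
# Hypothetical queue-depth cap, matching the "hundred or so" above.
_slots = threading.BoundedSemaphore(100)

def verify_password(password: bytes, stored_hash: bytes) -> bool:
    if not _slots.acquire(blocking=False):
        raise RuntimeError("auth overloaded; failing fast")  # shed load
    try:
        return _pool.submit(bcrypt.checkpw, password, stored_hash).result(timeout=5)
    finally:
        _slots.release()
```

Failing fast at the cap at least gives clients a clean signal to back off, rather than a storm of timeouts.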
You do, in fact, need to scale to trivial numbers of users. You may even need to scale to a small number of users in the near future.
I absolutely agree with your point, but I want to point out, like other commenters here, that the numbers should be much larger. We think that, because 10k daily users is a big deal for a product, they're also a big deal for a small server, but they really aren't.
It's fantastic that our servers nowadays can easily handle multiple tens of thousands of daily users on $100/mo.
1. scaling for a very specific use case, or because
2. it hasn't even found product-market fit
Blaming the failure on designing for scale seems misplaced; you can scale while remaining agile and open to change. The problem I see is much more about extremely vague notions of scalability, trends, best practices, clean code, and so on. For example, we need Kafka because Kafka is for the big boys like us, not because the alternatives couldn’t handle the actual numbers.
CV-driven development is a much bigger issue than people picking overly ambitious target numbers.
I basically agree with most of what the author is saying here, and my feeling is that most developers are at least aware that they should resist technical self-pleasure in pursuit of making sure the business/product they're attached to actually performs. Are there really people out there who still reach for Meta-scale by default? Who start with microservices?
Anecdotally, the last three greenfield projects I was a part of, the Architects (distinct people in every case) began the project along the lines of "let us define the microservices to handle our domains".
Every one of those projects failed, in my opinion not primarily owing to bad technical decisions - but they surely didn't help either by making things harder to pivot, extend and change.
Clean Code ruined a generation of engineers IMO.
I've been running a SaaS for 10 years now. Initially on a single server, after a couple of years moved to a distributed database (RethinkDB) and a 3-server setup, not for "scalability" but to get redundancy and prevent data loss. Haven't felt a need for more servers yet. No microservices, no Kubernetes, no AWS, just plain bare-metal servers managed through ansible.
I guess things look different if you're using somebody else's money.
Not disagreeing that you can do a lot on a lot less than in the old days, but your story would be much more impactful with that information. :)
(Most distributed systems problems are solvable, but only if the person that architected the system knows what they're doing. If they know what they're doing, they won't over-distribute stuff.)
It's just as much about storage and IO and memory and bandwidth.
Different types of sites have completely different resource profiles.
The teams don't talk, and always blame each other
and adds distributed-systems problems and additional organizational ones:
Each team implements one half of dozens of bespoke network protocols, but they still don't talk, and still always blame each other. Also, now they have access to weaponizable uptime and latency metrics, since each team "owns" the server half of one network endpoint, but not the client half.
Yes, but it's not difficult to do something silly without even noticing until too late. Implicitly (and unintentionally) calling something with the wrong big-O, for example.
That said, anyone know what's up with the slow deletion of Safari history? Clearly O(n), but as shown in this blog post still only deleted at a rate of 22 items in 10 seconds: https://benwheatley.github.io/blog/2025/06/19-15.56.44.html
On a non-scalable system you're going to notice that big-O problem and correct it quickly. On a scalable system you're not going to notice it until you get your AWS bill.
Of course, those people's weekly status reports would always be "we spent all week tracking down a dumb mistake, wrote one line of code and solved a scaling problem we'd hit at 100x our current scale".
That's equivalent to waving a "fire me" flag at the bean counters and any borderline engineering managers.
[1] https://www.thoughtworks.com/radar/techniques/high-performan...
[2] https://www.thoughtworks.com/radar/techniques/big-data-envy
Another perspective is that the de facto purpose of startups (and projects at random companies) may actually be work experience and rehearsal for the day the founders and friends get to interview at an actual FAANG.
I think the author's “dress for the job you want, not the job you have” nails it.
I was but a baby engineer then, and the leads would not countenance anything as pedestrian as MySQL/Postgres.
Anyway, fast forward a bit and we were tasked with building an in-house messaging service. And at that point Mongo's eventual consistency became a roaring problem. Users would get notifications that they had a new message, and then when they tried to read it it was... well... not yet consistent.
We ended up implementing all kinds of ugly UX hacks to work around this, but really we could've run the entire thing off of sqlite on a single box and users would've been able to read messages instantaneously, so...
I feel like that's kind of the other arm of this whole argument: on the one hand, you ain't gonna need that "scalable" thing. On the other hand, the "unscalable" thing scales waaaaaay higher than you are led to believe.
A single primary instance with a few read-only mirrors gets you a reaaaaaaally long way before you have to seriously think about doing something else.
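Concretely, the routing for that can be tiny. A sketch assuming Postgres streaming replicas and psycopg2, with hypothetical DSNs:

```python
# Sketch: writes go to the primary, reads are spread across the mirrors.
import random

import psycopg2

PRIMARY_DSN = "host=db-primary dbname=app"
REPLICA_DSNS = ["host=db-replica-1 dbname=app",
                "host=db-replica-2 dbname=app"]

def connect(readonly: bool = False):
    dsn = random.choice(REPLICA_DSNS) if readonly else PRIMARY_DSN
    return psycopg2.connect(dsn)

# Usage: connect(readonly=True) for listing/report queries.
```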
I don't think I should dress down any further :>
The turning point might have been Heroku? Prior to Heroku, I think people just assumed you deploy to a VPS. Heroku taught people to stop thinking about the production environment so much.
I think people were so inspired by it that they wanted to mimic it for other languages. It got more people curious about AWS.
Ironically, while the point of Heroku was to make deployment easy and done with a single command, the modern deployment story on cloud infrastructure is so complicated that most teams need to hold a one-hour meeting with several developers "hands on deck", going through a very manual process.
So it might seem counterintuitive to suggest that the trend was started by Heroku, because the result is the exact opposite of the inspiration.
They're just trying to be cool, you see.
Here's the thing, though: Almost every choice that leads to scalability also leads to reliability. These two patterns are effectively interchangeable. Having your infra costs be "$100 per month" (a claim that usually comes with a massive disclaimer, as an aside) but then falling over for a day because your DB server crashed is a really, really bad place to be.
How is that supposed to happen without k8s involved somehow?
Empirically, that does not seem to be the case. Large scalable systems also go offline for hours at a time. There are so many more potential points of failure due to the complexity.
And even with a single regular server, it's very easy to keep a live replica backup of the database and point to that if the main one goes down. Which is a common practice. That's not scaling, just redundancy.
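Even naive client-side failover is only a few lines; a sketch assuming psycopg2 and hypothetical hosts (a standby must still be promoted before it can accept writes, which is usually left to tooling like repmgr or Patroni):

```python
# Sketch: point at the replica when the main server is unreachable.
import psycopg2

DSNS = ["host=db-primary dbname=app",   # try the primary first
        "host=db-standby dbname=app"]   # fall back to the live replica

def connect_with_failover():
    last_error = None
    for dsn in DSNS:
        try:
            return psycopg2.connect(dsn, connect_timeout=2)
        except psycopg2.OperationalError as err:
            last_error = err  # host down or unreachable; try the next one
    raise last_error
```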
Failures are astonishingly, vanishingly rare. Like it's amazing at this point how reliable almost every system is. There are a tiny number of failures at enormous scale operations (almost always due to network misconfigurations, FWIW), but in the grand scheme of things we've architected an outrageously reliable set of platforms.
>That's not scaling, just redundancy.
In practice it almost always is scaling. No one wants to pay for a whole extra server just to apply shipped logs to. I mean, the whole premise of this article is that you should get the most out of your spend, so in that case two hot servers are much better. And once you have two hot... why not four, distributed across data centers? And so on.
You and I must be using different sites and different clouds.
There's a reason isitdownrightnow.com exists. And why HN'ers are always complaining about service status pages being hosted on the same services.
By your logic, AWS and Azure should fail once in a millennium, yet they regularly bring down large chunks of the internet.
Literally last week: https://cyberpress.org/microsoft-azure-faces-global-outage-i...
Ah, I didn't realize it was you. If HN had a block function, I would 100% just block your argumentative nonsense.
> Please don't sneer, including at the rest of the community.
https://www.youtube.com/watch?v=b2F-DItXtZs
15 years ago people were making the same "chasing trends" complaints. In that case there absolutely were people cargo-culting, but to still be whining about this a decade and a half later, when it's quite literally just absolutely basic best practice, is something else.
Even if you do truly have a microservices architecture, you’ve also now introduced a great deal of complexity, and unless you have some extremely competent infra / SRE folk on staff, that’s going to bite you. I have seen this over and over and over again.
People make these choices because they don’t understand computing fundamentals, let alone distributed systems, but the Medium blogs and ChatGPT have assured them that they do.
- scaling vertically is cheaper to develop
- scaling horizontally gets you further.
What is correct for your situation depends on your human, financial and time resources.
I laughed. I cried. Having a back full of microservices scars, I can attest that everything said here is true. Just build an effin monolith and get it done.
Break your code into modules/components that have a defined interface between them. That interface only passes data (not code with behaviour) and signals that method calls may fail to complete (i.e. throw exceptions).
That is, the interface could be a network call in the future.
Allow easy swapping of interface implementations by passing them into constructors/ using factories or dependency injection frameworks if you must.
That's it - you can then start with everything in-process and the rapid development that allows, but if you need to you can add splitting into networked microservices - any complexity that arises from the network aspect is hidden behind the proxy, with the ultimate escape hatch of the exception.
Have I missed something?
Even so it's still very simple.
To scale your auth service you just write a proxy to a remote implementation and pass that in - any load balancing etc is hidden behind that same interface and none of the rest of the code cares.
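Sketched out (all names invented for illustration), the whole pattern is roughly:

```python
from typing import Protocol

class AuthService(Protocol):
    def verify(self, user: str, token: str) -> bool:
        ...  # implementations may fail and raise

class LocalAuth:
    """In-process implementation: fast, simple, no network."""
    def __init__(self, tokens: dict[str, str]):
        self._tokens = tokens

    def verify(self, user: str, token: str) -> bool:
        return self._tokens.get(user) == token

class RemoteAuthProxy:
    """Same interface, but backed by a network call; network failures
    surface as exceptions, the escape hatch mentioned above."""
    def __init__(self, base_url: str):
        self._base_url = base_url

    def verify(self, user: str, token: str) -> bool:
        import json
        import urllib.request
        req = urllib.request.Request(
            f"{self._base_url}/verify",
            data=json.dumps({"user": user, "token": token}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:  # may raise
            return json.load(resp)["ok"]

class App:
    def __init__(self, auth: AuthService):  # injected, hence swappable
        self._auth = auth
```

The App never knows whether verification happened in-process or over the wire; the exception is the only concession to the network.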
I like the idea of the remote implementation being proxied -- not sure I've come across that pattern before.
Also, most of these interfaces you'll likely never need. It's a cost of initial development, and the indirection is a cost on maintainability of your code. It's probably (although not certainly) cheaper to refactor to introduce interfaces as needed, rather than always anticipate a need that might never come.
Quite a while ago, before containers were a thing at all, I did systems for some very large porn companies. They were doing streaming video at scale before most, and the only other people working on video at that scale were Youtube.
The general setup for the largest players in that space was haproxy in front of nginx in front of several PHP servers in front of a MySQL database that had one primary r/w with one read only replica. Storage (at that time) was usually done with glusterfs. This was scalable enough at the time for hundreds of thousands of concurrent users, though the video quality was quite a bit lower than what people expect today.
Today at AWS, it is easily possible for people to spend a multiple of the cost of that hardware setup every month for far less compute power and storage.
The only problem is that there is a lot of video data.
I think most people don't realise that "10 million" records is small, for a computer.
(That said, I have had to deal with code that included an O(n^2) de-duplication where the test data had n ~= 20,000, causing app startup to take 20 minutes; the other developer insisted there was no possible way to speed this up, later that day I found the problem, asked the CTO if there was a business reason for that de-duplication, removed the de-duplication, and the following morning's stand-up was "you know that 20 minute startup you said couldn't possibly be sped up? Yeah, well, I sped it up and now it takes 200ms")
Also, it was overwhelmingly likely that none of the elements were duplicates in the first place, and the few exceptions were probably exactly one duplicate.
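To reconstruct the shape of it (not the actual code): the right fix in the story was deleting the de-duplication outright, but even keeping it, the quadratic version is trivially avoidable:

```python
def dedup_quadratic(items):
    out = []
    for item in items:
        if item not in out:   # list membership scan: O(n) per element
            out.append(item)
    return out                # ~20k items -> hundreds of millions of compares

def dedup_linear(items):
    seen, out = set(), []
    for item in items:
        if item not in seen:  # set membership: O(1) per element
            seen.add(item)
            out.append(item)
    return out
```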
Most engineers that I've worked with that die on a premature optimization molehill like you describe also make that molehill as complicated as possible. Replacing the inside of the nested loop with a hashtable probe certainly fits the stereotype.
Fair.
To set the scene a bit: the other developer at this point was arrogant, not at all up to date with even the developments of his preferred language, did not listen to or take advice from anyone.
I think a full quarter of my time there was just fire-fighting yet another weird thing he'd done.
> If it was absolutely necessary to get this 1MB dataset to be smaller
It was not, which is why my conversation with the CTO to check on if it was still needed was approximately one or two sentences from each of us. It's possible this might have been important on a previous pivot of the thing, at least one platform shift before I got there, but not when I got to it.
Like, I'd honestly have trouble listing many business problems/areas that would fail to scale with their expected user count, given reasonable hardware and technical competence.
Like, YouTube and Facebook are absolute outliers. Famously, Stack Overflow used to run on a single beefy machine (and the reason they changed their architecture was not scaling issues), and "your" startup ain't needing more scale than SO.
Maintaining the media lifecycle (receiving, transcoding, making it available, and removing it) is the big task, but that's not real-time; it's batch/event processing on a best-effort basis.
The biggest challenges with streaming are maintaining the content catalogue, which aren't just a few million records but rich metadata about the lifecycle and content relationships. Then user management and payments tends to also have a significant overhead, especially when you're talking about international payment processing.
I’ve seen an entire company proudly proclaim a modern multicore Xeon with 32GB RAM can do basic monitoring tasks that should have been possible with little more than an Arduino.
Except the 32GB Xeon was far too slow for their implementation...
While it's absolutely 100% possible to have a "big beefy server architecture" that's reasonably portable, reproducible, and documented, it takes discipline and policy to avoid the "there's a small issue preventing {something important}, I can fix it over SSH with this one-liner and totally document it/add it to the config management tooling later once we've finished with {something else important}" pattern, and once people have been doing that for a while it's a total nightmare to unwind down the line.
Sometimes I want to smash my face into my monitor the 37th time I push an update to some CI code and wait 5 minutes for it to error out, wishing I could just make that band-aid fix, but at the end of the day I can't forget to write down what I did, since it's in my Dockerfile or deploy.yaml or entrypoint.sh or Terraform or whatever.
We also have been contacted by AWS, asking us what the hell we are doing for a specific set of operations. We do a huge prep for some operations, and the prep feeds massive amounts of data through some AWS services, so much so that they thought we were under attack or had been compromised. Nope, just doin' data ingestion!
You'd be surprised that the most stable setups today are run this way. The problem is that this way it's hard to attract investors; they'll assume you are running on old or outdated tech. Everything should be serverless, agentic and, at least on paper, hyperscalable, because that sells further.
> Today at AWS, it is easily possible for people to spend a multiple of the cost of that hardware setup every month for far less compute power and storage.
That is actually the goal of hyperscalers: they are charging you a premium for way inferior results. Also, the article stated a very cold truth: "every engineer wants a fashionable CV that will help her get the next job", and you definitely won't get a job by saying: "I moved everything from AWS and put it behind haproxy on one bare-metal box for a $100/mo infra bill".
Investors don't give a shit about your stack
I have a friend whose startup had a super complicated architecture that was falling apart at 20 requests per second. I used to be his boss a lifetime ago and he brought me in for a meeting with his team to talk about it. I was just there flabbergasted at "Why is any of this so complicated?!" It was hundreds of microservices, many of them black boxes they'd paid for but had no access to the source. Your app is essentially an async chat app, a fancy forum. It could have been a simple CRUD app.
I basically told my friend I couldn't help, if I can't get to the source of the problematic nodes. They'll need to talk to the vendor. I explained that I'd probably rewrite it from the ground up. They ran out of runway and shut down. He's an AI influencer now...
One startup I worked at had 2 Kubernetes clusters and a rat's nest of microservices for an internal tool that, had we actually been successful at delivering sufficient value, would have been used by at most 100 employees (and those would unlikely be concurrent). And this was an extremely highly valued company at the time.
Another place I worked at, we were paying two DevOps engineers (and those guys don't come cheap) to maintain our deployment cluster for 3 apps, each of which had a single customer (with a handful of users). This whole operation had like 20 people and an engineering team of 8.
Of course they eventually got bored and quit. And then it became really annoying since no one else understood anything about it.
What if I use the cloud? I don't even know how many servers my database runs on. Nor do I care. It's liberating not having to think about it at all.
I’ve seen monoliths where, because of their sheer size and how much crap and debt is packed into them, build and deploy processes take several hours if not an entire day for some fix that could be CI/CD’d in seconds if it weren’t such a ball of mud. Then what tends to happen is that the infrastructure around it compensates heavily for it, which turns into its own ball of mud. Nothing wrong with properly scaled monoliths, but it’s a bit naive, in my personal experience, to just scoff at scale when your business succeeding relies on scale at some point. Don’t prematurely optimize, but don’t be oblivious to future scenarios, because they can happen quicker than you think.
• https://benwheatley.github.io/blog/2025/02/26-14.04.07.html
• https://benwheatley.github.io/blog/2024/04/07-21.31.19.html
No, this whole article reads like someone crying that they no longer have their AS/400. Bye. The reason people use AWS and all those third parties is so they don’t have to reinvent the wheel, which this author seems hell-bent on.
Why are we using TCP when a Unix file is fine… why are we using databases when a directory and files are fine? Why are we scaling when we aren’t Google, when my single machine can serve a webpage? Why am I getting paid to be an engineer while eschewing all the things we have advanced over the last two decades?
Yeah, these are not the right questions. The real question should be: “Now that we have scale what are we gonna do with it?”
IME at many different SaaS companies, the only one that had serious reliability was the one that had “archaic grey beard architecture restrictions.” Devs want to use New Shiny X? Put a formal request before the architectural review committee; they’ll read it, then explain how what the team wants already exists in a different form.
I don’t know why so many developers - notably, not system design experts, nor having any background in infrastructure - think that they know better than the gray beards. They’ve seen some shit.
> and your lack of understanding what a pod is or how to get your logs from your cloud.
No one said the gray beards don’t know this. At the aforementioned company, we ran hybrid on-prem and AWS, and our product was hybrid K8s and traditional Linux services.
Re: cloud logs, every time I’ve needed logs, it has consistently been faster for me to ssh onto the instance (assuming it wasn’t ephemeral) and use ripgrep. If I don’t know where the logs were emitted from, I’ll find that first, then ssh. The only LaaS I’ve used that was worth a damn was Sumologic, but I have no idea how they are now, as that was years ago.
Meanwhile if you have Splunk, you specify the logfile name and how to extract the IP and then append "| iplocation clientip | geostats count by Country" to see which countries requests are coming from, for example. Or append "| stats count by http_version" and then click pie chart and get a visualization that breaks down how much traffic is still on HTTP 1.1, who's on 1.2, whos is on 2, and who's moved to QUIC/3.
Which leads us to a huge problem I’ve seen over the past few decades.
Too many developers for the task at hand. It’s easier for large companies to hire 100 developers with a lower bar that may or may not be a great fit than it is to hire 5 experts.
Then you have 100 developers that you need to keep busy, and not all of them can be busy 100% of the time, because most people aren’t good at making their own impactful work. Then, instead of trying to actually find naturally separate projects for some of them to do, you attempt to artificially break up your existing project in a way that 100 developers can work on together (and enforce those boundaries through a network).
This artificial separation fixes some issues (merge conflicts, some deployment issues), but it causes others (everything is a distributed system now, multi stage and multi system deployments required for the smallest changes, massive infrastructure, added network latency everywhere).
That’s not to say that some problems aren’t really so big that you need a huge number of devs, but the vast majority aren’t.
> they don’t have to reinvent the wheel
Everything is a trade off, but we shouldn’t discount the cost of using generic solutions in place of bespoke ones.
Generic solutions are never going to be as good a fit as something designed to do exactly what you need. Sometimes the tradeoff is worth it. Sometimes it isn’t. Like when you need to horizontally scale just to handle the overhead. Or when you have to maintain a fork of a complex system that does way more than you need.
It’s the same problem as hiring 100 generic devs instead of 5 experts. Sometimes worth it. Sometimes not.
There’s another issue here too. If not enough people are reinventing the wheel we get stuck in local optima.
The worst part is that not enough people spend enough time even thinking about these issues to make informed decisions regarding the tradeoffs they are making.
i.e. yes Kubernetes, but the simplest vanilla version of it you can manage
I’d personally start with Linux services on some VMs, but Docker Compose is also valid. There are plenty of wrappers around Compose to add features if you’d like.
Too true. Now that I've stepped into an "engineering leadership" role and spend as much time looking at finances as I do at code, I've formed the opinion that in 99.999% of cases, engineering problems are really business problems. If you could throw infinite time and money at the technical challenges, they'd no longer be challenging. But businesses, especially startups, don't have infinite (or even "some") money and time, so the challenge is doing the best engineering work you can, given time and budget constraints.
> The downsides [of the monolith approach]
I like the article's suggestion of using explicitly defined API boundaries between modules, and that's a good approach for a monolith. However one massive downside that cannot be ignored -- by having a single monolith you now have an implicit dependency on the same runtime working on all parts of your code. What I mean by this is, all your code is going to share the same Python version and same libraries (particularly true in Python, where it's not a common/well-supported use case to have multiple versions of library dependencies). This means that if you're working on Module A, and you realize you need a new feature from Pandas 2.x, but the rest of the code is on Pandas 1.x... well, you can't upgrade unless you go and fix Modules B, C, D ... Z to work with Pandas 2.
This won't be an issue at the start, but it's worth pointing out. Being forced to upgrade a core library or language runtime and finding out it's a multi-month disruptive project can be brutal.
https://www.youtube.com/watch?v=xFFs9UgOAlE
I watched it ages ago, but I seem to remember one thing that I liked was that each time they changed the architecture, it was to solve a problem they had, or were beginning to have. They seemed to stay away from pre-optimization and instead took the approach of tackling problems as they appeared, rather than imagining problems long before they occurred, if they ever would.
It's a bit like the "perfect is the enemy of done" concept - you could spend 2-3x the time making it much more scalable, but that might have an opportunity cost which weakens you somewhere else or makes it harder/more expensive to maintain and support.
Take it with a pinch of salt, but I thought it seemed like quite a good level-headed approach to choosing how to spend time/money early on, when there's a lot of financial/time constraints.
Except that those free credits will go away and you'll find yourself not wanting to do all the work to move it over when it would've been easier to do so when you just had that first monolith server up.
I think free credits and hyped up technology is to blame. So, basically a gamed onboarding process that gets people to over-engineer and spend more.
The exceptions are usually just inexperienced people at the helm. My feeling is, hire someone with adequate experience and this is likely not an issue.
I do think architecture astronauts tend to talk a lot more about their houses of cards, which makes it seem like these set ups are more popular than they are.
- If deploying your MVP to EKS is overengineering, then signing a year-long lease for bare metal is hubris. Both think one day they will need it, but only one of them can undo that decision.
- Don't compare your JBOD to a multi-region replicated, CDN-enabled object store that can shrug off a DDoS attack. One protects you from those egress fees, and the other protects you from a disaster. They are not comparable.
- A year from now, the startup you work for may not exist. Being able to write that you have experience with that trendy technology on your resume sure sounds nice. Given the layoffs we are seeing right now, putting our interest above the company's may be a good idea.
- Yes, everyone knows modern CPUs are very fast, and paying $300/mo for an 8-core machine feels like a ripoff, but unless you are in the business of renting GPUs and selling tokens, compute was never your cost center; it was always humans. For some companies, not being able to meet your SLA due to talent attrition is scarier than the cloud bill.
I know these are one-sided arguments, and I said I would cover both sides with more nuance. I need some time to think through all the arguments, especially on the frontend side. I will soon write a blog.
radarsat1•3h ago
I am not sure this is true. Complexity is a function of architecture. Scalability can be achieved by abstraction, it doesn't necessarily imply highly coupled architecture, in fact scalability benefits from decoupling as much as possible, which effectively reduces complexity.
If you have a simple job to do that fits in an AWS Lambda, why not deploy it that way, scalability is essentially free. But the real advantage is that by writing it as a Lambda you are forced to think of it in stateless terms. On the other hand if suddenly it needs to coordinate with 50 other Lambdas or services, then you have complexity -- usually scalability will suffer in this case, as things become more and more synchronous and interdependent.
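To illustrate the "stateless terms" point, here's a toy handler in Lambda's Python convention; the bucket name and event fields are assumptions, modelled on an API Gateway event:

```python
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "example-results"  # hypothetical bucket

def handler(event, context):
    # All input arrives in the event; nothing is read from local state.
    payload = json.loads(event["body"])
    result = {"total": sum(payload["values"])}
    # All output goes to external storage; this instance may never run again.
    key = f"results/{event['requestContext']['requestId']}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(result))
    return {"statusCode": 200, "body": json.dumps(result)}
```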
> The monolith is composed of separate modules (modules which all run together in the same process).
It's of course great to have a modular architecture, but whether or not modules run in the same process should be an implementation detail. Barriers should be explicit. By writing it all to depend on local, synchronous, same-process logic, you are likely building in all sorts of implicit barriers that will become hidden dangers when suddenly you do need to scale. And by the way, that's one of the reasons to think about scaling in advance: when the need comes, it comes quickly.
It's not that you should scale early. But if you're designing a system architecture, I think it's better to think about scaling, not because you need it, but because doing so forces you to modularize, decouple, and make synchronization barriers explicit. If done correctly, this will lead to a better, more robust system even when it's small.
Just like premature optimization -- it's better not to get caught up doing it too early, but you still want to design your system so that you'll be able to do it later when needed, because that time will come, and the opportunity to start over is not going to come as easily as you might imagine.
CaptainOfCoit•3h ago
It should be, but I think "microservices" somehow screwed that up. Many developers think "modular architecture == separate services communicating via HTTP/network that can be swapped", failing to realize you can do exactly what you're talking about. It doesn't really matter what the barrier is, as long as it's clear; more often than not, the network seems to be the default barrier when it doesn't have to be.
dapperdrake•3h ago
This is the part that is about math as a language for patterns as well as research for finding counter-examples. It’s not an engineering problem yet.
Once you have product-market fit, then it becomes an engineering problem.
saidinesh5•3h ago
What you are describing is already the example of premature optimization. The moment you are thinking of a job in terms of "fits in an AWS Lambda" you are automatically stuck with "Use S3 to store the results" and "use a queue to manage the jobs" decisions.
You don't even know if that job is the bottleneck that needs to scale. For all you know, a simple monolithic script deployed onto a VM/server would be a lot simpler. Just use the RAM/filesystem as the cache. Write the results to the filesystem/database. When the time comes to scale, you know exactly which parts of your monolith are the bottleneck that need to be split. For all you know, you can simply replicate your monolith, shard the inputs, and the scaling is already done. Or just use the DB's replication functionality.
To put things into perspective, even a cheap Raspberry Pi/entry-level cloud VM gives you thousands of Postgres queries per second. Most startups I worked at NEVER hit that number. Yet their deployment stories started off with "let's use Lambdas, S3, etc.". That's just added complexity. And a lot of bills, if it weren't for the "free cloud credits".
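The boring version being described looks something like this sketch (names invented): RAM and the filesystem as caches, one local database, no queues or object stores:

```python
import functools
import pathlib
import sqlite3

DB = sqlite3.connect("app.db")   # stand-in for the local Postgres/SQLite
CACHE = pathlib.Path("cache")
CACHE.mkdir(exist_ok=True)

@functools.lru_cache(maxsize=10_000)      # RAM cache
def lookup(key: str) -> str:
    cached = CACHE / f"{key}.txt"         # filesystem cache
    if cached.exists():
        return cached.read_text()
    row = DB.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
    value = row[0] if row else ""
    cached.write_text(value)
    return value
```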
bpicolo•1h ago
I think the most important one you get is that inputs/outputs must always be < 6 MB in size. It makes sense as a limitation for Lambda's scalability, but you will definitely dread it the moment a 6.1 MB use case makes sense for your application.
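The usual workaround, sketched here with hypothetical names, is to spill oversized responses to S3 and return a pointer instead of the payload:

```python
import json

import boto3

s3 = boto3.client("s3")
LIMIT = 6 * 1024 * 1024  # approximate synchronous payload limit

def respond(request_id: str, result: dict) -> dict:
    body = json.dumps(result)
    if len(body.encode()) < LIMIT:
        return {"statusCode": 200, "body": body}
    # Too big for a Lambda response: park it in S3, hand back a pointer.
    key = f"overflow/{request_id}.json"
    s3.put_object(Bucket="example-overflow", Key=key, Body=body)
    return {"statusCode": 200, "body": json.dumps({"result_key": key})}
```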
hedora•31m ago
That's equivalent to paying attention in software engineering 101. If you can't get those things right on one machine, you're going to be in world of hurt dealing with something like lambda.