The testing pyramid is SWE kool-aid par excellence. Someone wrote a logical-sounding blog post about it many years ago, and people have been regurgitating it ever since without any empirical evidence behind it.
Many of us have realised that you need a "testing hourglass", not a "testing pyramid". Unit tests are universally considered useful, there's not much debate about it (also they're cheap). Integration tests are expensive and, in most cases, have very limited use. UI and API tests are extremely useful because they are testing whether the system behaves as we expect it to behave.
E.g. for one specific system of ours we have ~30k unit tests and ~10k UI/API tests. The UI and API tests are effectively the living, valid documentation of how the system behaves. Those tests are what prevent the system from becoming 'legacy'; they are what enable large-scale refactors without breaking stuff.
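As a sketch of what "tests as living documentation" can look like, here is a minimal API test in Python against a hypothetical `/health` endpoint. The in-process server stands in for the real system; the endpoint and payload are invented for illustration.

```python
import http.server
import json
import threading
import urllib.request

class HealthHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the real application; a real suite would point at a
    deployed environment instead."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

# The "documentation" part: an executable statement of expected behavior.
with urllib.request.urlopen(base + "/health") as resp:
    assert resp.status == 200
    payload = json.load(resp)
    assert payload == {"status": "ok"}

server.shutdown()
print("API contract holds:", payload)
```

Each such test reads as a sentence of the spec: "GET /health returns 200 with `{"status": "ok"}`". A refactor that breaks the sentence breaks the test.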
Isolated QA should not exist because anything a QA engineer can do manually can be automated.
Well, sort of maybe, but it's not always economical. For a normal web app - yeah I guess. Depends on the complexity of the software and the environment / inputs it deals with.
And then there's exploratory testing, where I've always found a good QA invaluable. Sure, you can automate that to some degree too. But someone who knows the software well and actively tries to get it to behave in unexpected ways is also valuable.
I would agree that solid development practices can handle 80% of the overall QA though, mainly regression testing. But those last 20%, well I think about those differently.
Yes, I agree. We do this too. Findings are followed by a post-mortem-like process:
- fix the problem
- produce an automated test
- evaluate why the feature wasn't autotested properly
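The "produce an automated test" step above can be sketched as a regression test that pins the exact input QA found. The `paginate` function and its off-by-one bug are invented for illustration:

```python
def paginate(items, page_size):
    """Split items into pages. Hypothetical fixed version: the imagined
    original used range(0, len(items) - 1, page_size) and silently
    dropped the last page."""
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_last_page_not_dropped():
    # Pin the exact case QA found: 5 items, page size 2.
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

test_last_page_not_dropped()
print("regression test passed")
```

The point is that the test encodes the finding itself, so the same bug cannot silently return, and the post-mortem question "why wasn't this autotested?" has a concrete artifact as its answer.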
What do you define as "normal"? I can't think of anything harder to test than a web app.
Even a seemingly trivial static HTML site with some CSS on it will already have inconsistencies across every browser and device. Even if you fix all of that (unlikely), you still haven't done your WCAG compliance, SEO, etc.
The web is probably the best example case for needing a QA team.
I’ve had quite a bit of success in helping my dev teams to own quality, devising and writing their own test cases, maintaining test pipelines, running bug hunts, etc. 90% of this can be attributed to treating developers as my customer, for whom I build software products which allow them to be more productive.
The vertical axis is not test type. It is whether you would run the test. At the bottom are deterministic, fast tests for something completely unrelated to what you are working on, but they are so easy and fast that you run them anyway, 'just in case'. As you move up, you get tests that you more and more want to avoid running: tests that take a long time, tests that randomly fail when nothing is wrong, tests that need some setup, tests that need some expensive license (I can't think of more right now, but I'm sure there are others).
You want to drive everything down as far as possible, but there is value in the tests that sit higher, so you won't get rid of them. Just remember: as soon as you cross the 'maybe I would run this test, but I'm skipping it for now because it is annoying' line, you need a separate process to ensure the test is eventually run. You are trading off speed now against the risk that the test will find something, and it is 10x harder to fix when you get there. When a test runs all the time, you know what caused the failure and can go right to it; later, you have done several things and forgotten the details. 10x is an estimate; depending on where in your process you put the test, it could be 100 or even 1000 times harder.
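A minimal sketch of that ordering, with made-up test names and costs: run the cheapest tests first and stop at the first failure, so the expensive, annoying tiers only run once everything cheap already passes.

```python
# Hypothetical test registry: (name, cost in seconds, runner).
# The names and numbers are invented to illustrate the tiers above.
tests = [
    ("needs_license", 300.0, lambda: True),
    ("fast_unit",       0.01, lambda: True),
    ("flaky_e2e",     120.0, lambda: True),
    ("needs_setup",    30.0, lambda: True),
]

def run_cheapest_first(tests):
    """Run tests in ascending cost order; bail out on the first failure
    so feedback stays fast and the expensive tiers are skipped."""
    results = []
    for name, cost, run in sorted(tests, key=lambda t: t[1]):
        ok = run()
        results.append((name, ok))
        if not ok:
            break  # a cheap failure means the expensive tests never run
    return results

print(run_cheapest_first(tests))
```

Real runners express the same idea with tagging, e.g. marking slow tests and deselecting them in the default invocation; the separate process the parent mentions is then "something, somewhere, must still run the deselected tier".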
Quality is something that takes dedicated focus and lots of work. Therefore it’s a job, not an afterthought or latest priority for someone whose primary focus is not quality.
But on the other hand, those people often can't be trusted, so you need a team that checks again. Alternatively, they might have misunderstood something and produced an incorrect system, or there is some other fault in their thought process or their model of reality, and the system behaves differently in a more realistic scenario.
The QA perspective and focus are just different from those of the team building the thing. It's precisely because of their detached perspective that they can do their work properly.
I worked with a QA team for the last fifteen years, until last year, when the company laid them all off.
QA is a discrete skill in and of itself. I have never met a dev truly qualified to do QA; if you don't think this, you have never worked with a good QA person. A good QA person's superpower is finding weird broken interactions between features and at the layers where they meet, things you would never think of in a million years. Any dingbat can test input validation, but it takes a truly talented person to ask "what if I did X in one tab, Y in another, and then Z, all with this exact timing so the events overlap?" I have been truly stunned by some of the issues QA has found in the past.
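The "X in one tab, Y in another" class of bug can be illustrated with a deterministic check-then-act sketch. The credit store and the interleaving are invented; a real instance would involve actual concurrency, which is exactly why the timing matters:

```python
# Hypothetical shared state behind two browser tabs.
store = {"credits": 1}

def read_credits():
    return store["credits"]

def write_credits(value):
    store["credits"] = value

# Tab A and tab B each intend to spend one credit, with a balance check.
# Interleave the steps the way an unlucky (or well-timed) user would:
a_seen = read_credits()       # tab A: sees 1 credit, check passes
b_seen = read_credits()       # tab B: also sees 1 credit, check also passes
write_credits(a_seen - 1)     # tab A spends: balance -> 0
write_credits(b_seen - 1)     # tab B spends: balance -> 0 again

# Two spends were authorized against a single credit.
print(store["credits"])  # 0
```

No unit test of `read_credits` or `write_credits` in isolation catches this; it only appears in the interleaving, which is the kind of thing a good QA person goes hunting for on purpose.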
As for time, they saved us so much time! Unless your goal is to not test at all and push slop, they are taking so much work off your plate!
Beyond feature testing, when a customer defect came in, they would use their expertise to validate it, reproduce it, and document the parameters and boundaries of the issue before it ever got passed on to dev. Now all that work is on us.
As a QA: this bug will get downprioritised by PM to oblivion.
If it messes up the UI until you refresh, yeah, I understand deprioritizing that.
If it causes catastrophic data corruption or leaks admin credentials, any sane PM would want that fixed ASAP.
Where I work, it is normally easier to fix things than to deprioritize them to oblivion. I can fix an issue, but prioritization puts a dozen people in a meeting.
I kid a little, I worked with some very good PMs when we did client work who made my life much easier. Working on a SaaS though, I find them generally less than useful.
Most importantly, they have the diligence and patience to methodically test subtly different cases, which I frankly don't have.
On the question of whether QA slows things down, I have to ask: slows down what? Slows down releasing something broken? Why is that something to optimize for? We should always be asking how long it takes to release the right thing (indeed I'm most productive when I can close a ticket after concluding nothing is needed).
Unit tests are very expensive and return little value. Conversely, a (manual?) 'smoke test' is very cheap and returns great value. The first thing you do when updating a server, for example, is check that it still responds (and that nothing went wrong in the deployment process). It takes two seconds and prevents highly embarrassing downtime from a misconfigured docker pull or whatever.
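A minimal smoke-check sketch along those lines, assuming a hypothetical health URL (the example below points it at a local port where nothing listens, so it reports failure):

```python
import urllib.error
import urllib.request

def smoke_check(url, timeout=5):
    """Post-deploy sanity check: does the service answer at all?
    Returns True for any 2xx/3xx response, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

# Usage right after a deploy (URL is hypothetical):
#   assert smoke_check("https://example.internal/health"), "deploy broken"

print(smoke_check("http://127.0.0.1:1/"))  # nothing listening here -> False
```

It asserts almost nothing about behavior, which is the point: it is the cheapest possible check for the most embarrassing class of failure.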
Why are unit tests very expensive? This goes against everything I know.
Not even mentioning the potential regulatory/market and legal consequences if you don't.
Of course, none of this is true in the real world.
For example, just last week a QA essentially brought down our web application on the staging environment, reliably reproducible with a sequence of four clicks. Follow the sequence with roughly the right timing and boom: exception.
Should this have been caught before a single line of code was written? Yes. However, the reality is that it was not. Should it have been caught by some unit test? An integration test? An end-to-end test? Code review? I'd argue that as we barrel into a world of AI slop, we need to slow down more. We need QA more than ever.
I don’t understand the reasoning here why QA shouldn’t be engineering.
Who watches the watcher, right?
That aside, the core idea is the same as the principles of independent audit, peer review, or even simply just specialization.
Red team / Blue team?
But there is also bad QA: the most worthless QA I was forced to work with was an external company where I, as the developer, had to write the test sheet, and they just tested that. Obviously they could not find bugs, since I had already tested everything on the sheet.
My most impressive QA experience was when I helped out a famous Japanese gaming company. They tested things like pressing multiple buttons in the same frame and watched my code crash.
More importantly, it is almost impossible for engineers to be as well incentivized to spend extra time exploring edge cases in something they already believe works as they are to ship a feature on time.
Like everything else, though, it's contextual. Complexity of domain, surface area and age of product, depth of experience on the team, and consequences of failure are all so variable that there cannot be only one answer.
I have done it both ways for many years. I have worked on teams where QA was a frustrating nuisance, and teams where they were critical to success. I have worked on teams that did pretty well without them, and those were probably the highest-throughput, most productive teams, because the engineers were forced to own all the consequences: every bug they shipped was a production issue they were immediately forced to track down and resolve.
But those were very small teams, and eventually I was the only founding engineer left on the team and far too many mistakes by other people made it to my desk because I was the only person who could find them in review or track them down quickly in production. That was when I started hiring QA people.
Enterprise software companies definitely need it. Customers ask: was this tested? Where is the test report?
If engineering owns quality, then engineering owns everything, all the way up the chain. No need for anything or anybody else.
Which is the AI pipe dream, really.
You are, in fact, by using AI for QA or coding or anything else, externalizing services in the hope that the services will improve and costs will drop.
Let me know how that goes without HITLs.
That would put the damper on the pipe dream pretty quick. Probably more healthily than any data center ban could ever do.
If engineers were licensed, bonded, and liable, things would go very differently.
* speaking as having been a practicing software “engineer” for a decade
But since CEOs, and other bosses, need to make a living, they will eat the liability in exchange for wealth and leave engineering in the dust.
The moment that happens, it will either be re-outsourced to QA anyway, or quality will become a question of the licensing and bonding of professional engineers.
1. Quality management is a continuous process that starts with product discovery and business requirements. Developers often assume that requirements are clear and move on to building the happy path. QA often explore requirements in depth and ask a lot of good questions.
2. QA usually have the best knowledge of the product and help product managers understand its current behavior when new requirements suggest changing it.
3. The same applies to product design. A good designer never leaves the team with just a few annotated screens; they support developers until the product is shipped. Design QA - the verification that the implementation conforms to the design specs - can be done with the QA team, which can also assist with automating design-specific tests.
4. Customer support - QA people are natural partners of the customer support organization, with their knowledge of the product, existing bugs, and workarounds.
And just a story: at one of my previous jobs, a recently hired QA engineer spotted a numerical error in an All Hands presentation. That earned immediate buy-in from the founders. :)
This rings so many bells that it feels like some Buddhist festival. The same applies to QA, Operations, and anything else outside actual product development: when this arrogance was shared between bosses and developers, all was good on their side. Now, with AI, the arrogance stays only on the bosses' side, and we have developers freaking out.