IMO, as a developer, I see QA's role as being "auditors" with a mandate to set the guidelines, understand the process, and assess the outcomes. I'm wary of the foxes being completely responsible for guarding the hen-house unless the processes are structured and audited in a fundamentally different way. That requires real organizational change.
Developers may understand that "XYZ is better", but if management provides enough incentives for "not XYZ", they're going to get "not XYZ".
"Rarely"? Do people send PRs with just enough mostly useless tests, just to tick the DoD boxes? All the time.
A concrete example: adding, say, SAML + SCIM to a product. You can add a library, do a happy-path test, and call it a day. Maybe add a test against a captive IdP in a container.
But testing all the supported flows against each supported vendor becomes a major project in and of itself if you want to do it properly. The number of possible edge cases is extreme and automating deployment, updates and configuration of the peer products under test is a huge drag, especially if they are hostile to automation.
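To make the combinatorial problem concrete, here is a minimal sketch of what such a vendor-by-flow test matrix might look like. The vendor list, flow names, and the provision_idp/run_sso_flow helpers are illustrative placeholders I'm assuming for the example, not anything from the original comment or a real library API.

```python
# Sketch of the vendor x flow matrix described above. Everything here is a
# placeholder: the point is the size of the matrix, not the implementation.
import pytest

VENDORS = ["keycloak", "okta", "entra_id", "onelogin"]   # hypothetical set of supported IdPs
FLOWS = [
    "sp_initiated_login",
    "idp_initiated_login",
    "single_logout",
    "scim_provision_user",
    "scim_deprovision_user",
]


def provision_idp(vendor):
    """Placeholder: deploy and configure the peer product under test.

    Automating this per vendor (containers, licenses, config APIs) is the
    expensive part, especially for vendors hostile to automation.
    """
    pytest.skip(f"no automated environment for {vendor} in this sketch")


def run_sso_flow(idp, flow):
    """Placeholder: drive one end-to-end flow against the provisioned IdP."""
    raise NotImplementedError


@pytest.mark.parametrize("flow", FLOWS)
@pytest.mark.parametrize("vendor", VENDORS)
def test_supported_flow(vendor, flow):
    idp = provision_idp(vendor)          # 4 vendors x 5 flows = 20 environments to automate
    assert run_sso_flow(idp, flow).ok    # and the edge cases only multiply from here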
The "test implementation" ended up being more performant, and eventually the two implementations switched roles.
I always understood shift-left as doing more tests earlier. That is pretty uncontroversial and where the article is still on the right track. It derails at the moment it equates shift-left with dev-owned testing - a common mistake.
You can have quality owned by QA specialists in every development cycle, and that consistently works.
You can be both but I have yet to meet someone who is equally good in both mindsets.
QA should not replace anything a developer does; it should be a supplement, because you can't think of everything.
We also use QA because we are making multi-million dollar embedded machines. One QA can put the code of 10 different developers on the machine and verify it works as well in the real world as it does in software simulation.
Adding external QA, which regularly tests the software using different approaches, finds intersections, etc., is a different topic. Both are necessary.
Why it worked: the team set the timelines for software delivery, built their acceptance and integration tests around the inputs and outputs at the edges of their systems, owned being on-call, and automated as much as possible (no repeatable manual testing aside from sanity checks on first release).
There was no QA person or team, but there was a quality focused dev on the team whose role was to ensure others kept the testing bar high. They ensured logs, metrics, and tests met the team bar. This role rotated.
There was a CI/CD team. They made sure the test system worked, but teams maintained their own CI configuration. We used Buildkite, so each project had its own buildkite.yml.
Eng leaders expected the team to set up basic testing before development started. In one case, our team had to spend several sprints setting up generators to produce the expected inputs and sinks to capture the output. This was a flagship project and lots of future development was expected. It very much paid off.
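Here is a minimal sketch of the generator/sink idea described above, assuming a toy event shape and a stand-in process_event entry point; none of these names come from the original comment, and the team's actual harness would have been far richer.

```python
# Sketch: feed synthetic inputs at one edge of the system, capture whatever
# comes out at the other edge, and assert on the captured output.
import queue
import random


def generate_orders(n, seed=42):
    """Generator: produce synthetic input events resembling production traffic."""
    rng = random.Random(seed)            # seeded so failures are reproducible
    for i in range(n):
        yield {"order_id": i, "amount_cents": rng.randint(100, 10_000)}


class CapturingSink:
    """Sink: stands in for the real downstream system and records outputs."""
    def __init__(self):
        self.received = queue.SimpleQueue()

    def publish(self, message):
        self.received.put(message)

    def drain(self):
        items = []
        while not self.received.empty():
            items.append(self.received.get())
        return items


def process_event(event, sink):
    """Placeholder for the system under test's edge-to-edge behavior."""
    sink.publish({"order_id": event["order_id"], "status": "accepted"})


def test_every_input_produces_exactly_one_output():
    sink = CapturingSink()
    inputs = list(generate_orders(1_000))
    for event in inputs:
        process_event(event, sink)
    outputs = sink.drain()
    assert len(outputs) == len(inputs)
    assert {o["order_id"] for o in outputs} == {e["order_id"] for e in inputs}
```

A harness like this keeps the tests at the edges of the system rather than tied to internals, which is what made the approach described above hold up as the code underneath changed.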
Our test approach was very much "slow is smooth and smooth is fast." We would deploy multiple times a day. Tests took 10 or so minutes and were very comprehensive. If a bug got out, the tests were updated. The tests were very reliable because the team prioritized them. Eventually people stopped even manually verifying their code, because if the tests were green, you _knew_ it worked.
Beyond our team, in the wider system, there was a lightweight acceptance test setup, and the team registered tests there, usually one per feature. This was the most brittle part, because a failed test could be caused by another team or by a system failure. But guess what? That is the same as production, if not noisier. So we had the same level of logging, metrics, and alerts (limited to business hours). Good logs would tell you immediately what was wrong. Automated alerts generally alerted the right team, and that team was responsible for a quick response.
If a team was dropping the ball on system stability, that reflected badly on the team, and they were expected to prioritize stability. It worked.
Hands down the best dev org I have been part of.
And the first part is true. We can. But that's not why we have (had) QA.
First: it's not the best use of our time. I believe dev and QA are separate skillsets. Of course there is overlap.
Second, and most important: it's a separate person, an additional person who can question the ticket, and who can question my translation of the ticket into software.
And lastly: they don't suffer from the curse of knowledge on how I implemented the ticket.
I miss my QA colleagues. When I joined my current employer there were 8 or so. Initially I was afraid to give them my work, afraid of bad feedback.
Never have I met such gracious people, who took the time to understand something and to talk to me to figure out where the mismatch was.
And then they were deemed not needed.
QA wants things to break.
What worked for me: devs write ALL the tests, and QA does selective code reviews of those tests, which pushes devs to write better tests.
I also wrote about the failure of dev-owned testing: "Tests are bad for developers" https://www.amazingcto.com/tests-are-bad-for-developers/
Then they came with dev-owned testing and fired all the QAs, and you were happy, because QA was always breaking your app and slowing you down. "I can write my own tests!"
Now they are coming with LLM agents, and you don't own the product...
That's one of the things that publication is for.
The paper is a well-supported (if not well-proofread) position paper, synthesizing the author's thoughts and others' prior work but not reporting any new experimental results or artifacts. The author isn't an academic, but someone at Amazon who has written nearly 20 articles like this, many reporting on the intersection of academic theory and the real world, all published in Software Engineering Notes.
As an academic (in systems, not software engineering) who spent 15 years in industry before grad school, I think this perspective is valuable. In addition, academics don't get much credit for this sort of article, so there are a lot fewer of them than there ought to be.