No Silver Bullet with Dr. Fred Brooks (2017) [video] - https://news.ycombinator.com/item?id=40233156 - May 2024 (1 comment)
No Silver Bullet (1986) [pdf] - https://news.ycombinator.com/item?id=32423356 - Aug 2022 (43 comments)
No Silver Bullet: Essence and Accidents of Software Engineering (1987) - https://news.ycombinator.com/item?id=25926136 - Jan 2021 (9 comments)
No Silver Bullet (1986) [pdf] - https://news.ycombinator.com/item?id=20818537 - Aug 2019 (85 comments)
No Silver Bullet: Essence and Accidents of Software Engineering (1987) - https://news.ycombinator.com/item?id=15476733 - Oct 2017 (8 comments)
No Silver Bullet (1986) [pdf] - https://news.ycombinator.com/item?id=10306335 - Sept 2015 (34 comments)
No Silver Bullet: Essence and Accidents of Software Engineering (1987) - https://news.ycombinator.com/item?id=3068513 - Oct 2011 (2 comments)
"No Silver Bullet" Revisited - https://news.ycombinator.com/item?id=239323 - July 2008 (6 comments)
--- and also ---
Fred Brooks has died - https://news.ycombinator.com/item?id=33649390 - Nov 2022 (211 comments)
On the one hand, libraries and platforms save developers from reimplementing common functionality. On the other hand, they introduce new layers of accidental complexity through dependency management, version conflicts, rapid churn, and opaque toolchains.
This means that accidental complexity has not disappeared; it has only moved. Instead of living inside the code we write, it now lives in the ecosystems and tools we must manage. The result is a fragile foundation that often feels risky to depend on.
I would argue that it is essential complexity that we reuse (what the library does), at the added cost of some accidental complexity from dependency management, etc.
Which is a fair price when the essential complexity reused outweighs the overhead (e.g. it generally makes no sense to bring in isOdd as a separate library, but for larger functionality you are likely better off doing so).
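A minimal sketch of that trade-off (the `is_odd` name mirrors the comment above; the npm `is-odd` package is the real-world instance, though the sketch here is in Python for illustration):

```python
# A one-line utility: writing it inline costs almost nothing, while taking
# it on as a dependency adds version pins, supply-chain exposure, and churn.
def is_odd(n: int) -> bool:
    return n % 2 != 0

# By contrast, reusing something with real essential complexity (a TLS
# stack, a timezone database, a SAT solver) amortizes that overhead easily.
print(is_odd(3), is_odd(10))  # True False
```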
These are fundamental CS problems; you don't solve them away.
Also, I would first wait for LLMs to have reliable reasoning capabilities on trivial logic puzzles, like Missionaries and cannibals, before claiming they can correctly "reason" about concurrency models and million LOC program behavior at runtime.
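For reference, the puzzle in question yields to a few lines of classical search. A hedged sketch of a breadth-first solver (the state encoding and move set here are my own choices, not from the thread):

```python
from collections import deque

def solve(m=3, c=3):
    # State: (missionaries on left bank, cannibals on left bank, boat on left?)
    start, goal = (m, c, True), (0, 0, False)

    def safe(ml, cl):
        # On each bank, missionaries must not be outnumbered (unless absent).
        mr, cr = m - ml, c - cl
        return (ml == 0 or ml >= cl) and (mr == 0 or mr >= cr)

    # The boat carries one or two people per crossing.
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            # Reconstruct the path back to the start.
            path = []
            while state:
                path.append(state)
                state = parent[state]
            return path[::-1]
        ml, cl, boat = state
        sign = -1 if boat else 1  # boat on left moves people left-to-right
        for dm, dc in moves:
            nml, ncl = ml + sign * dm, cl + sign * dc
            nxt = (nml, ncl, not boat)
            if 0 <= nml <= m and 0 <= ncl <= c and safe(nml, ncl) and nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

path = solve()
print(len(path) - 1)  # BFS finds the known minimum: 11 crossings
```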
I think that the willingness to rely completely on OSS libraries has fundamentally changed SWE practices since 2004, when I earned my first paycheck. We all just agreed that it's okay to use a library where you don't know who wrote it and there is no contract on it; everyone just accepts that it will probably have massive security holes, and you hope the maintainer will still be around when those are announced. This was not true in 1986, and it mostly wasn't true in 2006, but it feels like every week we get more announcements of new CVEs that, it turns out, half the internet - very much including products people paid real hard currency for - was using. And we just accepted it.
And yeah, mostly the ability to CD a new deployment immediately - plus force a download of a patched version - meant we could get away with it, but it trained us all to accept lower quality standards. And I feel it in my bones that the experience of using software is markedly worse than in 1996 or 2006, despite vastly more CPU, RAM, and disk.
Obligatory XKCD: https://xkcd.com/2347/
Brooks masterfully identified the core of the problem, including the "invisibility" of software, which deprives the mind of powerful geometric and spatial reasoning tools. For years, the industry's response has been better human processes and better tools for managing that inherent complexity.
The emerging "agentic" paradigm might offer the first fundamentally new approach to tackling the essence itself. It's not a "silver bullet" that magically eliminates complexity, but it is a new form of leverage for managing it.
The idea is to shift the role of the senior developer from a "master builder," who must hold the entire invisible, complex structure in their mind, to a *"Chief Architect" of an autonomous agent crew.*
In this model, the human architect defines the high-level system logic and the desired outcomes. The immense cognitive load of managing the intricate, interlocking details—the very "essential complexity" Brooks identified—is then delegated to a team of specialized AI agents. Each agent is responsible for its own small, manageable piece of the conceptual structure. The architect's job becomes one of orchestration and high-level design, not line-by-line implementation.
It's not that the complexity disappears—it remains essential. But the human's relationship to it changes fundamentally. It might be the most significant shift in our ability to manage essential complexity since the very ideas Brooks himself proposed, like incremental development. It's a fascinating thing to consider.
Well put.
I think you're right but people are treating it like the silver bullet. They're saying "actually the AI will just eliminate all the accidental complexity by being the entire software stack, from programming language to runtime environment."
So we use the LLM to write Python, and one day hope that it will just also eliminate all the compilers and codegen sitting between the language and the metal. That's silver bullet thinking.
What LLMs are doing is managing some accidental complexity, but it's adding more. "prompt engineering" and "context engineering" are accidental complexity. The special config files LLMs use are accidental complexity. The peculiarities of how the LLM sometimes hallucinates, can't answer basic questions, and behaves differently based on the time of day or how long you've been using it are accidental complexity. And what's worse, it's stochastic complexity, so even if you get your head around it, it's still not predictable.
So LLMs are not a silver bullet. Maybe they offer a new way of approaching the problem, but it's not clear to me we arrive at a new status quo with LLMs that does not also have more accidental complexity. It's like, we took out the spec sheet and added a bureaucracy. That's not any better.
If it could be subdivided into its own small, manageable piece, then we wouldn't really have a problem as human teams either.
But the thing is, composing functions can lead to significantly higher complexity than the individual pieces themselves have -- and in the same vein, a complex problem may not be nicely subdivisible; there is a fixed, essential complexity to it on a foundational, mathematical level.
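One concrete way to see this: the state space of a composed system is the product of its parts' state spaces, so interaction complexity grows multiplicatively even when each piece stays small (a toy illustration, not from the comment):

```python
from itertools import product

# Two tiny components, with 3 and 4 internal states each.
a_states = ["idle", "busy", "error"]
b_states = ["empty", "filling", "full", "draining"]

# The composed system: every pairing is a distinct global state that a
# tester or reviewer may have to reason about.
composed = list(product(a_states, b_states))
print(len(a_states), len(b_states), len(composed))  # 3 4 12
```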
However, things have actually changed since then. A lot.
Here is Brooks at a "20 year retrospective" panel[1] at OOPSLA '07:
"When ‘96 came and I was writing the new edition of The Mythical Man-Month and did a chapter on that, there was not yet a 10 fold improvement in productivity in software engineering. I would question whether there has been any single technique that has done so since. If so, I would guess it is object oriented programming, as a matter of fact."
Large-scale reuse, which was still a pipe dream in 1986 and only beginning in 1996, is now a reality. One which has brought its own set of problems, among them the potential for a lot of accidental complexity.
We now use 50 million lines of code to run a garage door opener[2]. I can declare with some level of confidence that the non-essential complexity in that example is more than the 90% that Brooks postulated as a safe upper limit. Five million lines of code is not the essential complexity of a garage door opener.
And while that is an extreme example, it seems closer to the normal case these days than the comparatively lean systems Brooks was thinking about.
[1] https://www.infoq.com/articles/No-Silver-Bullet-Summary/
[2] https://berthub.eu/articles/posts/a-2024-plea-for-lean-softw...
Though I recommend a full read, for those who want a gloss, I made a mind map that identifies the major questions posed by the essay, with summary answers and supporting rationale: https://app.gwriter.io/#/mindmap/view/b52b7d35-4d8d-4164-ba5...
ChrisMarshallNY•20h ago
I feel as if a lot of multipliers have happened that he didn't anticipate, but I also feel as if the culture of software engineering has kind of decomposed, since his day.
We seem to be getting a lot of fairly badly-done work out the door, very quickly, these days.
convolvatron•20h ago
ChrisMarshallNY•19h ago
I was there for almost 27 years, so had plenty of time to deal with the consequences of my decisions.
They were insane about Quality, so testing has always been a big part of my work, and still is, though I haven't been at that company for eight years.
readthenotes1•18h ago
There is a lot of irony in that, since the first plank of the agile manifesto is to put individuals and interactions first.
And I notice you put the development process/structure first, over the people - which is exactly what those who want to treat people as fungible do.
ChrisMarshallNY•16h ago
I always liked the Manifesto, but it's really rather vague, and we engineers don't do "vague" so well, which leaves a lot of room for interpretation.
And authors.
And consultants.
And conference speakers.
Those are the ones that form what is eventually implemented. I'm not really sure any of the original signatories ever rolled up their sleeves, and worked to realize their vision.
It's my experience that this is where the wheels start to come off the grand ideas.
That's one thing that I have to hand to Grady Booch. He came up with the idea, wrote it down, and then started to actually make tools to make it happen. Not sure if they really launched, but he did work to make his ideas into reality.
AnimalMuppet•17h ago
Sometimes for good reason. "Well designed bespoke solutions" often turn out to be badly designed reinventions of the wheel. Industry standard best practices sometimes prevent problems that you don't yet know you will run into.
And sometimes they just are massively overdesigned overkill. There is a real art to knowing which is which.
ChrisMarshallNY•17h ago
Absolutely, but that “art” is really important, and also, fairly rare.
Many folks just jam in any dependency that is a first hit in a search, with more than 50 GH stars, and a shiny Web site.
One “red flag” phrase that I’ve learned is “That’s a solved problem!”. When I hear that, I know I should be skeptical of the prescribed “solution.”
That said, there’s stuff that definitely should be delegated to better-qualified folks. One example, that I was just working on[0], is Webauthn stuff.
[0] https://littlegreenviper.com/addendum-a-server-setup/
gf000•10h ago
Such as? I think his essay still stands the test of time: no single multiplier has come even close to an order-of-magnitude productivity boost, with the exception of reusing already existing code.
LLMs are possibly the biggest change to how software is developed, but they are also nowhere near this magnitude - if any - in case of more complicated software.
ChrisMarshallNY•7h ago
I know that OOP was just getting its feet under it, when he wrote that. It turned out to have a huge multiplying effect on productivity, but also introduced a whole new universe of footguns.
Maybe if OOP had been introduced, along with some of the disciplines that evolved, it might have been a big multiplier, but that took time.
I guess, upon reflection, each of our big “productivity boosts” was really an evolutionary movement that took time.
He really was quite prescient.