For an interesting example of the outcome of strongly optimising for essential complexity, look at the demoscene. They have done some amazing things that were thought to be impossible.
See the eternal urge of gamedevs to build engines instead of building the game.
And if you do stick to purpose-specific design and management then asks for a seemingly minor expansion of scope, the “this is a small change, why did we make the system so rigid that it’s expensive to update?” conversation is an uncomfortable one.
I suspect that you are right on this; it often appears as a lack of depth of understanding of the problem being solved, and of whether the proposed solution actually solves that problem in the simplest way possible.
Everyone loves a bit of abstraction that encapsulates a given complexity and makes it a single simple concept. This is not "a complex bit of technology that has pros and cons, will lock you into a particular architecture and vendor, and would almost always necessitate two architectures because not all tasks/scenarios would be suitable for it". It is a "Lambda". So simple. Just a `handle(event) {}`; how much simpler can this be? No complex runtimes, VMs, environments, etc. It's just a `handle(event) {}`, no complexity to see here. Good luck.
On the other hand, sometimes people who "champion" simplicity are also the same people who don't care about the engineering part of it. The "Hey, I added full-text search. It's implemented as:
    select *
    from posts
    where column1::text like $1
       or column2::text like $1
       ...
       or columnN::text like $1
Let's push it. Oh, our database CPU is burning and search is so slow. What a surprise this is. I guess it validates the popularity of search. Can we optimize it? Maybe we cache the result? Maybe we add an LLM to normalize queries so we can cache results more effectively? I'm trying to keep things simple" crowd.

There is also the requirement or ask that tries to hide a particular complexity from one side by shoving it all under a rug somewhere. Once the complexity has been sufficiently shoved under the rug, we can think of the rug as just a rug. We can move it around and use it like a rug. Surely the shit underneath will never have any impact on anyone at any time.
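For what it's worth, a minimal sketch of what an actually engineered version of that search could look like in Postgres; the title and body columns are hypothetical stand-ins for whatever is actually being searched:

    -- Hypothetical schema: a stored tsvector column plus a GIN index,
    -- so search becomes an index lookup instead of a scan of every row.
    ALTER TABLE posts
      ADD COLUMN search_vector tsvector
      GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
      ) STORED;

    CREATE INDEX posts_search_idx ON posts USING gin (search_vector);

    -- The query itself stays one line:
    SELECT * FROM posts WHERE search_vector @@ plainto_tsquery('english', $1);

Still arguably "simple", but the simplicity lives in the query instead of pretending the indexing problem doesn't exist.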
The hard part is knowing which is which, and I don't think anyone really has a tried-and-true solution for it.
I think that's because it's really about correctly predicting the future in a continually evolving environment.
Those "simple" solutions can be piled on top of each other turn into an absolute monstrosity over time.
And an engineered solution can turn out overengineered and overly complex when the problems it was built to solve never end up happening.
On top of that people don't even think of simplicity in the same way. One person's simple is another's overly complex.
What seems to work for me is to try to give yourself outs/alternative paths.
Hedge your bets by overengineering slightly where you think there is more risk, and keep it simple where there is less risk. Then reevaluate.
If every dev took the path of least resistance and did the easiest thing that accomplished the task, we would lose 80% of all tech jobs. That would only funnel more money to billionaires and shareholders instead of the employees. Why would we want that?
I am seeing a huge uptick in this style of "accidental" complexity.
I recently had the pleasure of observing a perfectly functional T-SQL script get refactored to use Entity Framework, resulting in a ~50x decrease in throughput.
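To make the failure mode concrete, here is a hedged sketch (table and column names are hypothetical) of the kind of set-based T-SQL that tends to get lost in such rewrites; a naive ORM port typically loads entities and saves them back one round trip per row:

    -- One set-based statement: the engine touches every qualifying row
    -- in a single pass, with no per-row client round trips.
    UPDATE o
    SET    o.status = 'expired'
    FROM   dbo.Orders AS o
    WHERE  o.status = 'open'
      AND  o.created_at < DATEADD(DAY, -30, SYSUTCDATETIME());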
Ignorance is something I am willing to entertain. Arrogance is way, way more difficult to deal with. One is an educational opportunity. The other is a game of thrones.
Also, framing it as "us vs them" misses the point. For instance, in my world (Data & Analytics), there's no universe where Spark makes sense for <1TB of data, or where Kafka makes sense for what could be solved with simpler, more predictable batch processing, yet we still see them used in wild implementations that clearly don't need that scale. Those implementations can end up needing a lot of headcount and budget for support without adding any value whatsoever, sometimes even resulting in a more brittle platform. Complexity often comes from misguided choices, not just ego.
I've seen enormous data teams at startups; by that I mean bigger than WhatsApp's engineering team in its pre-Meta prime (~30 people). That's a red flag, especially when they're processing far less data.
I admit I did not read the article. But I think I have read it before; I remember the headline.
The headline is enough to provoke thought. See
https://world.hey.com/dhh/merchants-of-complexity-4851301b
And after clicking, I see this article is very recent. Strange...
That being said, I'd just add (and perhaps the article fails to acknowledge this, maybe contributing to the perceived tone of superiority) that no one is completely safe from falling into these traps. We're more human than engineer, after all. It's easy to slip into complexity, for all sorts of reasons: ego, incentives, time pressure, misguided intellectual curiosity, or simply chasing what's familiar, even when familiar isn't the right fit for the context.
In my context, one funny pattern always shows up. I think of it as a data-related 'rite of passage': junior data engineers and data scientists often dismiss SQL as an old language, assuming it can be easily replaced by something shinier. But over time, they hit a wall: they can't match the speed, performance, or clarity of more experienced peers. Eventually, they come back to SQL and recognize its strengths (especially when paired with an imperative language for tasks like ML, plotting, or report generation). I was that junior once too, stubbornly trying to do everything in Python, only to learn the hard way that joins in pandas are not only slower but also far more painful to write than in good old SQL. It turns out that writing code which outperforms a mature SQL engine is extremely difficult, and realizing that takes a bit of humility and experience (which younger engineers usually haven't developed yet, and honestly, it's not expected from them either).
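As a sketch of the clarity point (tables hypothetical): a three-way join with aggregation is one declarative statement in SQL, where the pandas version would need chained merges, a groupby, and manual column bookkeeping:

    -- Hypothetical reporting query: revenue per customer segment.
    SELECT c.segment,
           SUM(oi.quantity * oi.unit_price) AS revenue
    FROM   customers   AS c
    JOIN   orders      AS o  ON o.customer_id = c.id
    JOIN   order_items AS oi ON oi.order_id   = o.id
    GROUP  BY c.segment
    ORDER  BY revenue DESC;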
It's a general discussion, and very Yin/Yang-like (or dialectical; I'm not 100% sure these are the appropriate terms).
Second the comment by mrkeen.
With concrete examples, this could be more interesting.
And I poorly introduced the thought I wanted to communicate.
The "appeal to feelings of superiority" that I mentioned were mangled, I started to also think about said "merchants of complexity" and software design "evangelism" in general.
And I didn't read the post initially because the writing, structure, etc., combined with the vague familiarity of the title, threw me off.
Especially its starting with a bulleted list and ending with a summary, with still no interesting example at first glance.
Though only the top comment in this chain reminded me of the importance of examples / real-world anecdotes.
The whole overengineering problem does have a yin/yang quality to it, especially around themes like designing "for the future or for scale" (generalization) vs YAGNI/"premature optimization is the root of all evil", and engineering is, after all, largely about calibration. But it's not just abstract philosophy. There are plenty of concrete signals and they tend to appear in cycles, which suggests the industry collectively relearns how to navigate complexity over time: the rise of DuckDB and Iceberg over Spark or Hive, the cycle of people embracing MongoDB then returning to PostgreSQL, and even companies moving workloads off the cloud and back to colocated metal after complexity hit them hard, both in direct costs and in hidden ones like engineering headcount. These aren't just trends, they’re signs that overengineering is a concrete problem with real-world, objective consequences, and not merely a matter of opinion.
That's why I shared the link to DHH's post: it appeared alongside others of his about how, and whether, the Rails community should concern itself with JavaScript.
Hey also initially pitched itself to HN on its technical approach (simplicity!).
So for that reason the post felt derivative to me.
It's still cool to reflect on all these longstanding general SE tradeoffs and truisms.
Well, obviously, navigating complexity in the middle of a tech cycle is hard, and after the storm anyone can claim to be a captain. I'd say the best we can do is stay conscious of the pattern (and of how it affects you and others), engage in architectural discussions with critical thinking (especially around incentives and epistemology; as a concrete example, in an infra context, cloud certifications and consultants are often brought in to muddy exactly this), and bring technical humility and empathy into the room. Also, if possible, try to help younger engineers recognize these dynamics earlier.
It's a genuinely hard problem. And sometimes, there isn't much more to do than to engage honestly, because your own perspective can be just as biased or context-blind as anyone else's.
> It's still cool to reflect on all these longstanding general SE tradeoffs and truisms.
Yep, totally. Philosophy is always fun, even when the only universal truth is that the truth is elusive.
We would never use Spark or Kafka for small data, not like them. Their choices are misguided.
Their data teams are too big, ours are the right size.
Fwiw, I use Kafka regardless of scale. It's the only mainstream system out there that doesn't encourage throwing away your receipts. It's the programming equivalent of "accountants don't use erasers".
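In database terms, a rough sketch of that same "no erasers" discipline (schema hypothetical) is an insert-only ledger, where corrections are new rows rather than UPDATEs or DELETEs:

    -- Append-only ledger: history is never rewritten; reversals are new rows.
    CREATE TABLE account_entries (
        id          bigserial   PRIMARY KEY,
        account_id  bigint      NOT NULL,
        amount      numeric     NOT NULL,  -- a negative row reverses an earlier one
        reason      text        NOT NULL,
        recorded_at timestamptz NOT NULL DEFAULT now()
    );

    -- Current state is derived from the full history, not stored over it.
    SELECT account_id, SUM(amount) AS balance
    FROM account_entries
    GROUP BY account_id;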
Come on dude, tech choices we can debate at length, and there is no totally correct, objective answer; irresponsible financials are an entirely different beast. A proportionally big data team at a non-data-focused/non-tech company is a low-key red flag, and that's what I'm referring to (i.e. >10% of headcount). CFOs don't (and shouldn't) talk in relatives like "us vs them".
> Fwiw, I use Kafka regardless of scale. It's the only mainstream system out there that doesn't encourage throwing away your receipts. It's the programming equivalent of "accountants don't use erasers".
About Kafka: as far as I've experienced it and read about it, it is mostly safe if you are paying for managed solutions like Confluent or heavily investing in infra to cover ordering and other issues. But Kafka is as good a technology as it is notoriously messy, to the point where I've seen people love it and then hate it months or years down the line.
I would stick with batch processing if I didn't care about receipts. You need to take care of them? Cool, then take that on; but if not, don't tackle complexity your business doesn't and won't need. That's the point; even big banks have survived without keeping every receipt.
---
Out of curiosity, I asked ChatGPT-o3 what LLM it thought wrote the ending, and I agreed with its rationale for guessing that 4o wrote it:
"The prose is archetypal ChatGPT rhetoric—second-person apostrophe (“dear traveler”), triadic repetitions, and a moralizing close. / Vocabulary inflation (“complexity merchant”, “moat of self-importance”), syntactic parallelism, and the cadence of stacked, comma-separated clauses are hallmarks of GPT-4’s temperature-controlled completions. / No concrete anecdotes, data, or links—just abstract exhortation—exactly what an LLM defaults to"
But more interesting was the rationale for it not being Claude or Gemini-pro: "those models tend to hedge (“may”, “might”) and inject more safety language. This text is bolder and more florid—typical of GPT-4 with style-boosting prompts."
I'm not sure I would've identified that as stylistic tics of Claude/Gemini. I'll have to look closer for those in the future.
And of course, there's the full commit history of all of my writing: https://github.com/CharlieDigital/chrlschn. Even a cursory look would have led you to the repository, with its full commit history of my fixes for grammar, spelling, etc.
I've also kept a blog since long before GPT existed: https://charliedigital.com [0], so I would point you to that and to my long history of writing (e.g. https://chrlschn.medium.com/) as evidence that this is hand-rolled content. If anything, I'd say AI has copied my style of writing.
Please feel free to peruse them and use your bulletproof analysis to let me know which of these, spanning back to 2005, are AI-written. I hope your takeaway is that your AI spidey senses are completely broken, and that you see the irony of the intellectual laziness of relying on an LLM instead of just following the links in my profile or clicking the link to my GH on my blog. If you have any sense of righteousness, please retract your flag and let the mods know of your error.
[0] Examples:
https://charliedigital.com/2020/11/13/principles-of-speed/
https://charliedigital.com/2020/10/29/how-leaders-fail-their...
https://charliedigital.com/2020/09/20/in-praise-of-simplicit...
https://charliedigital.com/2016/04/11/recipe-crunch-time-exe...
Instead, please flag the post and email us so we can take a look and see if we think the content and the comment thread are HN-worthy, as that's ultimately all we can assess. The originality and quality of the writing is always an important consideration, regardless of how it was created.
(We also discourage pasting LLM-generated output into comments. In this case it doesn't really prove anything either way and takes the comment further into off-topic territory. More broadly, we want to keep HN for discussion between humans.)
Accidental complexity is compressible, but essential complexity is not. At some point, you cannot compress further without losing nuance.
In compiler design, there's a concept called the waterbed theory of complexity, which states that you can try to abstract complexity away, but it'll just show up elsewhere.
People use "complexity" to mean very different things:
• plain poor design
• complexity in operation rather than design (or technology)
• inescapable real-world complexity
• lots of moving pieces and details, straining working memory
• abstract or novel concepts that are hard to learn up-front, but easy longer-term
Some of these are practically opposites! At the Pareto frontier of good design, there is a fairly fundamental trade-off between having more abstract concepts that are harder to learn up-front and exposing more details that make systems hard to work with on an ongoing basis. People just call both of these "complexity"! These are two concepts that absolutely should not be conflated.
I've seen lots of other patterns that are misleadingly described as "complex". For example, some of the most effective software I've seen has been situated software[1]; that is, software built for, and largely in a specific social context. Think seemingly messy code that is incestuously coupled to a single person's or team's workflow. This software might actually be perfectly simple in context, but it's going to seem painfully baroque to anybody outside that context.
[1]: https://gwern.net/doc/technology/2004-03-30-shirky-situateds...
Given all I've seen, I've come to the conclusion that generic exhortations about "complexity" are actively harmful. If you're going to write a universally applicable rant, just write about how bad design is bad and good design is good! At least that's something that people will disagree with—I've met far more people who insist there is no such thing as "good" or "bad" design than people who insist that complexity is actually better than simplicity.
> By what sorcery? Fear, mostly. Vanity, sometimes. Sloth, occasionally. Pride, definitely. The more insecurities the merchants of complexity can trigger, the easier the sell.
https://world.hey.com/dhh/merchants-of-complexity-4851301b
You can also find blogs and podcasts about this subject from back when DHH coined the phrase.
Think about the topic what you will, but the submission looks like a regurgitation to me.
Thinking about it: it's an interesting bridge to utilitarianism (the other deadly sins enable the greed of the merchant).
And also, the title itself is ironically an overgeneralization and an appeal to fear/scrutiny (though certainly not a generally unjustified one).
I practice and teach computation within design - mostly architecture. When talking to my students I make sure to highlight this distinction. Creating complicated things is one way to create complexity. But complexity as an outcome of carefully chosen simple ideas, techniques, or phenomena is another, arguably more powerful way.
Brooks' ideas of essential and accidental complexity are precisely in line with this. Managing complex effects is itself a challenge (an exciting and rewarding one), and the more "complicated" a system, the more difficult this becomes, exponentially so I would argue. So both in my own work and in my teaching, I focus on helping students understand how to design for managing complexity while minimizing complicated-ness.
I would be remiss if I didn't mention Murray Gell-Mann's notion of Plectics, which combines simplicity and complexity in one[2].
This is a really awesome subject area to explore, and I look forward to more discussions around it.
[0] https://en.wikipedia.org/wiki/Double-slit_experiment
[1] https://www.youtube.com/watch?v=th3YMEamzmw
[2] https://onlinelibrary.wiley.com/doi/epdf/10.1002/cplx.613001...
It drives leadership nuts, but I find asking the "why" or "what problem does this solve" questions helps the organization remain focused on what actually delivers value versus what's just hype we can ignore or throw away.